I am working on a VB.NET project which requires the extensive use of unit tests, but I am having problems mocking one of the classes.
Here is a breakdown of the issue:
Using NUnit and Rhino Mocks 3.6
VS2010 & VB.NET
I have an interface which contains a number of methods and an Event.
The class which implements that Interface raises the event when one of the methods is called.
When I mock the object in my tests I can stub methods and create/assert expectations on the methods with no problems.
How do I configure the mock object so that when a method is called the event is raised, so that I can assert that it was raised?
I have found numerous posts using C# which suggest code like this:
mockObject.MyEvent += null...
When I try this, 'MyEvent' does not appear in IntelliSense.
I'm obviously not configuring my test/mock correctly but with so few VB.NET examples out there I'm drawing a blank.
Sorry for my lack of VB syntax; I'm a C# guy. Also, I think you should be congratulated for writing tests at all, regardless of test first or test last.
I think your code needs refactoring. It sounds like you have an interface that requires implementations to contain an event, and then another class (which you're testing) depends on this interface. The code under test then raises the event when certain things happen.
The question in my mind is, "Why is it a publicly exposed event?" Why not just a method that implementations can define? I suppose the event could have multiple delegates being added to it dynamically somewhere, but if that's something you really need, then the implementation should figure out how that works. You could replace the event with a pair of methods: HandleEvent([event parameters]) and AddEventListener(TheDelegateType listener). I think the meaning and usage of those should be obvious enough. If the implementation wants to use events internally, it can, but I feel like that's an implementation detail that users of the interface should not care about. All they should care about is adding their listener and that all the listeners get called. Then you can just assert that HandleEvent or AddEventListener were called. This is probably the simplest way to make this more testable.
If you really need to keep the event, then see here for information on mocking delegates. My advice would be to mock a delegate, add it to the event during set up, and then assert it was called. This might also be useful if you need to test that things are added to the event.
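To illustrate that listener-based approach in framework-neutral terms, here is a minimal sketch in Java with Mockito (the question itself is VB.NET with Rhino Mocks, so every name below is illustrative, not from the original code): the listener interface plays the role of the delegate; it is mocked, registered during set-up, and verified after the method under test runs.

import static org.mockito.Mockito.*;

import java.util.ArrayList;
import java.util.List;

// Hypothetical listener interface standing in for the event's delegate type.
interface AlarmListener {
    void onAlarm(String source);
}

// Hypothetical class under test that notifies its listeners when tripped.
class Monitor {
    private final List<AlarmListener> listeners = new ArrayList<>();

    void addListener(AlarmListener listener) {
        listeners.add(listener);
    }

    void trip() {
        for (AlarmListener listener : listeners) {
            listener.onAlarm("monitor"); // the "event" being raised
        }
    }
}

public class ListenerMockDemo {
    public static void main(String[] args) {
        Monitor monitor = new Monitor();
        AlarmListener listener = mock(AlarmListener.class); // the mocked "delegate"
        monitor.addListener(listener);

        monitor.trip();

        verify(listener).onAlarm("monitor"); // fails if the "event" never fired
    }
}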
Also, I wouldn't rely on IntelliSense too much. Mocking is done via some crafty IL code, I believe. I wouldn't count on IntelliSense to keep up with the members of mocked objects, especially once you get beyond normal methods.
Related
Is it good practice to write an EXPECT(something) inside a test double (e.g. spy or mock) method, to ensure the test double is used in a specific way for testing?
If not, what would be a preferred solution?
If you were to write a true Mock (as per the definition from xUnit Test Patterns), this is exactly what defines this kind of test double: it is set up with the expectations of how it will be called, and therefore also includes the assertions. That's also how mocking frameworks produce mock objects under the hood. See the definition from xUnit Test Patterns:
How do we implement Behavior Verification for indirect outputs of the SUT?
How can we verify logic independently when it depends on indirect inputs from other software components?
Replace an object the system under test (SUT) depends on with a test-specific object that verifies it is being used correctly by the SUT.
Here, indirect outputs means that you don't want to verify that the method under test returns some value, but that something happening inside the method is behaviour relevant to its callers. For instance, that while executing the method, the correct behaviour led to an expected important action, like sending an email or sending a message somewhere. The mock is the doubled dependency that also verifies itself that this really happened, i.e. that the method under test really called the method of the dependency with the expected parameter(s).
A spy, on the other hand, just records things of interest that happened to the doubled dependency. Interrogating the spy about what happened (and sometimes how often), and then judging whether that was correct by asserting on the expected events, is the responsibility of the test itself. So a mock is always also a spy, with the addition of the assertion (expectation) logic. See also Uncle Bob's blog post The Little Mocker for a great explanation of the different types of test doubles.
TL;DR
Yes, the mock includes the expectations (assertions) itself; the spy just records what happened and lets the test itself ask the spy and assert on the expected events.
Mocking frameworks also implement mocks as explained above, as they all follow the xUnit Test Patterns definitions.
mock.Verify(p => p.Send(It.IsAny<string>()));
If you look at the above Moq example (C#), you see that the mock object itself is configured to perform the expected verification in the end. The framework makes sure that the mock's verification methods are executed. A hand-written mock would be set up first, and then you would call the verification method on the mock object yourself.
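To make the mock/spy distinction concrete, here is a minimal hand-written sketch in Java (the MessageSender interface and both class names are invented for illustration):

import java.util.ArrayList;
import java.util.List;

interface MessageSender {
    void send(String message);
}

// Spy: only records what happened; the test interrogates it afterwards.
class SenderSpy implements MessageSender {
    final List<String> sentMessages = new ArrayList<>();

    public void send(String message) {
        sentMessages.add(message);
    }
}

// Mock: carries its own expectation and fails by itself when it is violated.
class SenderMock implements MessageSender {
    private final String expectedMessage;
    private boolean wasCalled;

    SenderMock(String expectedMessage) {
        this.expectedMessage = expectedMessage;
    }

    public void send(String message) {
        wasCalled = true;
        if (!expectedMessage.equals(message)) {
            throw new AssertionError("unexpected message: " + message);
        }
    }

    // Called once at the end of the test, analogous to Moq's Verify.
    void verify() {
        if (!wasCalled) {
            throw new AssertionError("send() was never called");
        }
    }
}

With the spy, the test asserts on sentMessages itself; with the mock, the test merely calls verify() and the expectations live inside the double.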
Generally, you want to put all EXPECT statements inside individual tests to make your code readable.
If you want to enforce certain things on your test stub/spy, it is probably better to use exceptions or static asserts, because your test usually treats them as a black box. If it uses them in an unintended way, your code will either not compile, or it will throw and give you the full stack trace, which also causes your test to fail (so you can catch the misuse).
For mocks, however, you have full control over the use, and you can be very specific about how they are called and used inside each test. For example, in Google Test, using GMock matchers, you can say something like:
EXPECT_CALL(turtle, Forward(Ge(100)));
which means: expect Forward to be called on the mock object turtle with a parameter greater than or equal to 100. Any other value will cause the test to fail.
See this video for more examples on GMock matchers.
It is also very common to check general things in a test fixture (e.g. in SetUp or TearDown). For example, this sample from Google Test enforces that each test finishes in a certain amount of time, and the EXPECT statement is in the teardown rather than in each individual test.
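A rough JUnit 4 analogue of that Google Test sample might look like this (the class name and the 100 ms budget are illustrative):

import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;

public abstract class TimedTestBase {
    private long startNanos;

    @Before
    public void startTimer() {
        startNanos = System.nanoTime();
    }

    // The assertion lives in the teardown, so every test in a fixture
    // derived from this class gets the timing check without repeating it.
    @After
    public void assertWithinTimeBudget() {
        long elapsedMillis = (System.nanoTime() - startNanos) / 1_000_000;
        assertTrue("test exceeded its 100 ms budget: " + elapsedMillis + " ms",
                elapsedMillis <= 100);
    }
}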
There's a solid chance I'm misusing classes here which is why I need your help.
I've started developing with Java EE and one of the problems I am facing is I have a process which I have organised in a class, call it: "SendEmail.java".
Now let's say I have two other classes called "ThunderAlert.java" and "FloodAlert.java" which will use all the methods that SendEmail.java contains.
So I want to know the best way of using the SendEmail methods from each of the other classes.
Should I be creating an instance of SendEmail and accessing each method individually, error checking along the way (what if an exception is thrown?)? Its methods are just procedural code, so it's not really an 'object' as such.
Or should I just be using the one method that runs all the other internal ones from within SendEmail?
Should SendEmail be redesigned as a helper-class-style design?
I'm still quite new to Java EE, so I'm not sure if there are options available that I am missing.
I think you should have one public method inside the SendEmail class.
Btw, I would consider changing its name. I think having a method send() when the class is called SendEmail is not the best way (not to mention names like call(), invoke(), etc.).
There is a great article about this problem in Java: The Kingdom of Nouns.
What about something like: new Email(recipient, body).send()?
Or, if you want to do it in a service style, I'd call it, for example, MailService.
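As a minimal sketch of that Email-as-object idea (the fields and the body of send() are placeholders, not a real mail implementation):

public class Email {
    private final String recipient;
    private final String body;

    public Email(String recipient, String body) {
        this.recipient = recipient;
        this.body = body;
    }

    // The single public entry point; validation, formatting and the
    // actual transport all stay hidden behind it.
    public void send() {
        // e.g. hand off to JavaMail or an injected MailService here
    }
}

ThunderAlert and FloodAlert would then just write new Email(recipient, body).send() instead of orchestrating SendEmail's internals themselves.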
I have found that when I write unit tests, especially for methods that do not return a value, I mostly write them in a white-box testing manner. I could use reflection to read private data to check whether it is in the proper state after method execution, etc.
This approach has a lot of limitations, the most important of which are:
You need to change your tests if you rework the method, even if the API stays the same.
It's wrong from an information-hiding (encapsulation) point of view: tests are good documentation for our code, so a person reading them could pick up unnecessary information about the implementation.
But if a method does not return a value and operates on private data, it becomes very hard (almost impossible) to test it with a black-box testing paradigm.
So, any ideas for a good solution to this problem?
White box testing means that you necessarily have to pull some of the wiring out on the table to hook up your instruments. Stuff I've found helpful:
1) One monolithic sequence of code that I inherited and didn't want to rewrite, I was able to instrument by adding a state class variable and setting the state as each step passed. Then I tested with different data and matched the expected state against the actual state.
2) Create mocks for any method calls of your method under test. Check to see that the mock was called as expected.
3) Make the needed properties protected instead of private, and create a subclass that I actually tested. The subclass allowed me to inspect the state.
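A minimal Java sketch of that third technique (Processor and its field are invented for illustration):

// Production class: state widened from private to protected for testability.
public class Processor {
    protected int state;

    public void step() {
        state++;
    }
}

// Lives in the test sources only; exposes the state for inspection.
class TestableProcessor extends Processor {
    int peekState() {
        return state;
    }
}

The test then drives a TestableProcessor and asserts on peekState() instead of reflecting into private fields.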
I could use reflection to read private data to check is it in the proper state after method execution
This can become a real problem for the maintenance of your test suite.
In .NET you could instead use the internal access modifier, together with the InternalsVisibleToAttribute in your class library, to make your internal types visible to your unit test project.
The internal keyword is an access modifier for types and type members. Internal types or members are accessible only within files in the same assembly
This will not resolve every testing difficulty, but it can help.
Reference
Objective-C has a unique way of overriding methods: specifically, you can override methods in OS X's own frameworks, via "categories" or "swizzling". You can even override "buried" methods that are only used internally.
Can someone provide me with an example where there was a good reason to do this? Something you would use in released commercial software and not just some hacked up tool for internal use?
For example, maybe you wanted to improve on some built in method, or maybe there was a bug in a framework method you wanted to fix.
Also, can you explain why this is best done with features of Objective-C, and not in C++, Java, and the like? I mean, I've heard of the ability to load a C library but have certain functions replaced by previously loaded functions of the same name. How is Objective-C better at modifying library behaviour than that?
If you're extending the question from mere swizzling to actual library modification then I can think of useful examples.
As of iOS 5, NSURLConnection provides sendAsynchronousRequest:queue:completionHandler:, which is a block (/closure) driven way to perform an asynchronous load from any resource identifiable with a URL (local or remote). It's a very useful way to be able to proceed as it makes your code cleaner and smaller than the classical delegate alternative and is much more likely to keep the related parts of your code close to one another.
That method isn't supplied in iOS 4. So what I've done in my project is that, when the application is launched (via a suitable + (void)load), I check whether the method is defined. If not I patch an implementation of it onto the class. Henceforth every other part of the program can be written to the iOS 5 specification without performing any sort of version or availability check exactly as if I was targeting iOS 5 only, except that it'll also run on iOS 4.
In Java or C++ I guess the same sort of thing would be achieved by creating your own class to issue URL connections that performs a runtime check each time it is called. That's a worse solution because it's more difficult to step back from. This way around if I decide one day to support iOS 5 only I simply delete the source file that adds my implementation of sendAsynchronousRequest:.... Nothing else changes.
As for method swizzling, the only times I see it suggested are where somebody wants to change the functionality of an existing class and doesn't have access to the code in which the class is created. So you're usually talking about trying to modify logically opaque code from the outside by making assumptions about its implementation. I wouldn't really support that as an idea on any language. I guess it gets recommended more in Objective-C because Apple are more prone to making things opaque (see, e.g. every app that wanted to show a customised camera view prior to iOS 3.1, every app that wanted to perform custom processing on camera input prior to iOS 4.0, etc), rather than because it's a good idea in Objective-C. It isn't.
EDIT: so, in further exposition — I can't post full code because I wrote it as part of my job, but I have a class named NSURLConnectionAsyncForiOS4 with an implementation of sendAsynchronousRequest:queue:completionHandler:. That implementation is actually quite trivial, just dispatching an operation to the nominated queue that does a synchronous load via the old sendSynchronousRequest:... interface and then posts the results from that on to the handler.
That class has a + (void)load, which is the class method you add to a class that will be issued immediately after that class has been loaded into memory, effectively as a global constructor for the metaclass and with all the usual caveats.
In my +load I use the Objective-C runtime directly via its C interface to check whether sendAsynchronousRequest:... is defined on NSURLConnection. If it isn't then I add my implementation to NSURLConnection, so from henceforth it is defined. This explicitly isn't swizzling — I'm not adjusting the existing implementation of anything, I'm just adding a user-supplied implementation of something if Apple's isn't available. Relevant runtime calls are objc_getClass, class_getClassMethod and class_addMethod.
In the rest of the code, whenever I want to perform an asynchronous URL connection I just write e.g.
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[self anyBackgroundOperationQueue]
                       completionHandler:
    ^(NSURLResponse *response, NSData *data, NSError *blockError)
    {
        if(blockError)
        {
            // oh dear; was it fatal?
        }

        if(data)
        {
            // hooray! You know, unless this was an HTTP request, in
            // which case I should check the response code, etc.
        }

        /* etc */
    }];
So the rest of my code is just written to the iOS 5 API and neither knows nor cares that I have a shim somewhere else to provide that one microscopic part of the iOS 5 changes on iOS 4. And, as I say, when I stop supporting iOS 4 I'll just delete the shim from the project and all the rest of my code will continue not to know or to care.
I had similar code to supply an alternative partial implementation of NSJSONSerialization (which dynamically created a new class in the runtime and copied methods to it); the one adjustment you need to make is that references to NSJSONSerialization elsewhere will be resolved once at load time by the linker, which you don't really want. So I added a quick #define of NSJSONSerialization to NSClassFromString(@"NSJSONSerialization") in my precompiled header. Which is less functionally neat but a similar line of action in terms of finding a way to keep iOS 4 support for the time being while just writing the rest of the project to the iOS 5 standards.
There are both good and bad cases. Since you didn't mention anything in particular these examples will be all-over-the-place.
It's perfectly normal (good idea) to override framework methods when subclassing:
When subclassing NSView (from the AppKit.framework), it's expected that you override drawRect:(NSRect). It's the mechanism used for drawing views.
When creating a custom NSMenu, you could override insertItemWithTitle:action:keyEquivalent:atIndex: and any other methods...
The main thing when subclassing is whether your behaviour completely re-defines the old behaviour, or extends it (in which case your override eventually calls [super ...];).
That said, you should always steer clear of using (and overriding) any private API methods (those normally have an underscore prefix in their names). That is a bad idea.
You also should not override existing methods via categories. That's also bad: it has undefined behaviour.
If you're talking about categories, you don't override methods with them (because there is no way to call the original method, like calling super when subclassing); you can only replace methods completely with your own, which makes the whole idea mostly pointless. Categories are only useful for safely extending functionality, and that's the only use I have ever seen (and a very good, even excellent, idea), although indeed they can be used for dangerous things.
If you mean overriding by subclassing, that is not unique. But in Objective-C you can override everything, even private undocumented methods, not just what was declared 'overridable' as in other languages. Personally, I think that's nice; I remember that in Delphi and C++ I used to "hack" access to private and protected members to work around internal bugs in a framework. This is not a good idea, but at some moments it can be a life saver.
There is also method swizzling, but that's not a standard language feature; it's a hack. Hacking undocumented internals is rarely a good idea.
And regarding "why can this best be done with features of Objective-C": the answer is simply that Objective-C is dynamic, and this freedom is common to almost all dynamic languages (JavaScript, Python, Ruby, Io, and many more). Unless it is artificially disabled, every dynamic language has it.
Refer to the Wikipedia page on dynamic languages for a longer explanation and more examples. For example, an even more miraculous thing possible in Objective-C and other dynamic languages is that an object can change its type (class) in place, without being recreated.
I was just curious, why should we use reflection in the first place?
// Without reflection
Foo foo = new Foo();
foo.hello();

// With reflection
Class<?> cls = Class.forName("Foo");
Object foo = cls.newInstance();
Method method = cls.getMethod("hello");
method.invoke(foo);
We can simply create an object and call the class's method, but why do the same using the forName, newInstance, and getMethod functions?
To make everything dynamic?
Simply put: because sometimes you don't know either the "Foo" or "hello" parts at compile time.
The vast majority of the time you do know this, so it's not worth using reflection. Just occasionally, however, you don't - and at that point, reflection is all you can turn to.
As an example, protocol buffers allows you to generate code which either contains full statically-typed code for reading and writing messages, or it generates just enough so that the rest can be done by reflection: in the reflection case, the load/save code has to get and set properties via reflection - it knows the names of the properties involved due to the message descriptor. This is much (much) slower but results in considerably less code being generated.
Another example would be dependency injection, where the names of the types used for the dependencies are often provided in configuration files: the DI framework then has to use reflection to construct all the components involved, finding constructors and/or properties along the way.
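As a hedged sketch of that construction step (the property key and class name are invented; a real DI container does much more):

import java.util.Properties;

public class TinyInjector {
    // e.g. the config file contains: emailService.impl=com.example.SmtpEmailService
    public static Object createFromConfig(Properties config, String key) throws Exception {
        String className = config.getProperty(key);

        // The concrete type is only known at runtime, so reflection is the
        // only way to locate the class and invoke its constructor.
        Class<?> cls = Class.forName(className);
        return cls.getDeclaredConstructor().newInstance();
    }
}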
It is used whenever you (i.e., your method or your class) don't know at compile time the type you should instantiate or the method you should invoke.
Also, many frameworks use reflection to analyze and use your objects. For example:
Hibernate/NHibernate (and any object-relational mapper) uses reflection to inspect all the properties of your classes, so that it can update them or use them when executing database operations
you may want to make it configurable which method of a user-defined class is executed by default by your application. The configured value is a String, and you can get the target class, get the method that has the configured name, and invoke it, without knowing any of this at compile time (see the sketch after this list)
parsing annotations is done by reflection
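A minimal sketch of the configurable-method idea from the list above (the class and method names are hypothetical values that would come from a config file):

import java.lang.reflect.Method;

public class ConfiguredInvoker {
    public static void main(String[] args) throws Exception {
        String className = "com.example.ReportJob"; // hypothetical configured value
        String methodName = "runDaily";             // hypothetical configured value

        Object target = Class.forName(className).getDeclaredConstructor().newInstance();
        Method method = target.getClass().getMethod(methodName);
        method.invoke(target); // dispatched without compile-time knowledge of the type
    }
}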
A typical usage is a plug-in mechanism, which supports classes (usually implementations of interfaces) that are unknown at compile time.
You can use reflection for automating any process that could usefully use a list of the object's methods and/or properties. If you've ever spent time writing code that does roughly the same thing on each of an object's fields in turn -- the obvious way of saving and loading data often works like that -- then that's something reflection could do for you automatically.
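For instance, here is a hedged sketch of walking every field reflectively; a serializer would write the values out somewhere rather than print them:

import java.lang.reflect.Field;

public class FieldWalker {
    // Applies the same operation to each field in turn, private ones included.
    public static void dump(Object obj) throws IllegalAccessException {
        for (Field field : obj.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            System.out.println(field.getName() + " = " + field.get(obj));
        }
    }
}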
The most common applications are probably these three:
Serialization (see, e.g., .NET's XmlSerializer)
Generation of widgets for editing objects' properties (e.g., Xcode's Interface Builder, .NET's dialog designer)
Factories that create objects with arbitrary dependencies by examining the classes for constructors and supplying suitable objects on creation (e.g., any dependency injection framework)
Using reflection, you can very easily write configurations that refer to methods and fields by name in text; the framework reading such a description can then find the real corresponding member.
e.g. JXPath allows you to navigate objects like this:
//company[@name='Sun']/address
so JXPath will look for a method getCompany() (corresponding to company), then a field on that object called name, etc.
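Underneath, resolving a path segment such as company boils down to something like this simplified sketch of the JavaBeans getter convention (not JXPath's actual code):

import java.lang.reflect.Method;

public class PropertyResolver {
    // Maps the segment "company" to a call to getCompany() on the root object.
    public static Object resolve(Object root, String property) throws Exception {
        String getterName = "get"
                + Character.toUpperCase(property.charAt(0))
                + property.substring(1);
        Method getter = root.getClass().getMethod(getterName);
        return getter.invoke(root);
    }
}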
You'll find this in lots of frameworks in Java e.g. JavaBeans, Spring etc.
It's useful for things like serialization and object-relational mapping. You can write a generic function to serialize an object by using reflection to get all of an object's properties. In C++, you'd have to write a separate function for every class.
I have used it in some validation classes before, where I passed a large, complex data structure in the constructor and then ran a zillion (a couple of hundred, really) methods to check the validity of the data. All of my validation methods were private and returned booleans, so I made one "validate" method you could call, which used reflection to invoke all the private methods in the class that returned booleans.
This made the validate method more concise (no need to enumerate each little method) and guaranteed all the methods were actually run (e.g. when someone writes a new validation rule and forgets to call it in the main method).
After changing to use reflection I didn't notice any meaningful loss in performance, and the code was easier to maintain.
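A hedged sketch of that pattern (the check method is invented; the real class had a couple of hundred of them):

import java.lang.reflect.Method;

public class Validator {
    // ...imagine a couple of hundred private checks like this one:
    private boolean headerIsWellFormed() {
        return true; // real logic would inspect the data structure
    }

    // Reflectively runs every private, parameterless boolean check,
    // so a newly written rule can never be forgotten.
    public boolean validate() throws Exception {
        for (Method method : getClass().getDeclaredMethods()) {
            boolean isCheck = method.getReturnType() == boolean.class
                    && method.getParameterCount() == 0
                    && !method.getName().equals("validate");
            if (isCheck) {
                method.setAccessible(true);
                if (!(Boolean) method.invoke(this)) {
                    return false;
                }
            }
        }
        return true;
    }
}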
In addition to Jon's answer, another use is the ability to "dip your toe in the water", to test whether a given facility is present in the JVM.
Under OS X, a Java application looks nicer if some Apple-provided classes are used. The easiest way to test whether these classes are present is with reflection.
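A minimal sketch of such a probe; com.apple.eawt.Application is the kind of Apple-provided class usually checked for:

public class AppleExtensions {
    // True when the Apple-provided desktop classes are on the classpath.
    public static boolean available() {
        try {
            Class.forName("com.apple.eawt.Application");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}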
Sometimes you need to create an object of a class on the fly, or from somewhere other than Java code (e.g. a JSP). At such times reflection is useful.