I would like to be able to do some unit testing during development in order to catch potential errors when extending/changing the way a given web service (endpoint) works.
I have been looking at EasyMock, and it seems like a viable way to go - but!... I'm using Maven (2.0.9) and would like to test with e.g. mvn test, but this requires either that my backend is running or that I use EasyMock - which in turn requires that I can connect to a database (so that needs some mocking as well). The web services I currently have all retrieve data from a backend database...
As I have 15 or so web services used by different parts of the organization in different versions, I would very much like to be able to test that changes don't break older versions.
I cannot believe that I'm the first person to have this problem, so any hints, tips, or likewise would be much appreciated.
After the comment-based talk :P, it seems that there's no problem actually. The key thing was to understand that a component's dependencies (like a database) are just part of its real implementation and are not part of its interface. Mocking is about providing an alternative implementation that simply satisfies the need for interaction.
In general, as you mentioned, everything you depend on in the backend needs to be mocked (or, more generally, replaced by a test double) when unit testing, no matter what that stuff really is. If you depend on some external endpoint, you have to mock it. If you depend on an RDBMS, you can mock it too, but a better test double here is probably a fake rather than a mock: you can use an in-memory database (like HSQL or H2), assuming you're not using vendor-specific, native SQL in your code. In effect, you're still providing your own, usually simplified, implementation of some interfaces; nowadays you just use a mocking framework for this. Some time ago, developers wrote their own, hand-crafted mock classes. Even today it's sometimes a really good idea to write your own mock without the help of a mocking framework - personally, I've run into a special situation where that approach fit pretty well.
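For the fake-database route, a minimal sketch might look like this, assuming your DAO accepts a plain javax.sql.DataSource (H2 is used here purely as an example, not as anything your setup requires):

```java
import javax.sql.DataSource;
import org.h2.jdbcx.JdbcDataSource;

// A "fake" database for tests: a real JDBC DataSource, but backed by an
// in-memory H2 instance instead of the production backend.
public class TestDatabase {

    public static DataSource inMemoryDataSource() {
        JdbcDataSource ds = new JdbcDataSource();
        // DB_CLOSE_DELAY=-1 keeps the in-memory database alive for the whole
        // JVM run instead of dropping it when the last connection closes.
        ds.setURL("jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1");
        ds.setUser("sa");
        ds.setPassword("");
        return ds;
    }
}
```

Your test setup would create the schema on this DataSource and hand it to the code under test, so mvn test never needs the real backend.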
By the way, two more things. If you're considering some integration testing as well, Spring WS since version 2.0 provides the spring-ws-test module, which supports this pretty well with a really fluent API. For more info, look at the Spring WS docs if you're interested. Second thing: if you're just starting with mocking in general, also consider Mockito. In my opinion it's really good as well. To be honest, EasyMock is my personal default choice of mocking lib, but I found Mockito similarly easy and powerful. As far as I know it's preferred by many developers too, and nowadays it's probably more sexy :P.
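To make that concrete, here is a rough sketch of what a Mockito-based test of an endpoint could look like; the class names (CustomerDao, CustomerEndpoint) are made up stand-ins for your backend DAO and the web service that uses it:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class CustomerEndpointTest {

    // Minimal stand-ins so the sketch compiles on its own.
    interface CustomerDao {
        String findNameById(long id);
    }

    static class CustomerEndpoint {
        private final CustomerDao dao;
        CustomerEndpoint(CustomerDao dao) { this.dao = dao; }
        String getCustomerName(long id) { return dao.findNameById(id); }
    }

    @Test
    public void returnsCustomerNameFromBackend() {
        // The mock replaces the real backend DAO entirely.
        CustomerDao dao = mock(CustomerDao.class);
        when(dao.findNameById(42L)).thenReturn("Alice");

        CustomerEndpoint endpoint = new CustomerEndpoint(dao);

        assertEquals("Alice", endpoint.getCustomerName(42L));
        verify(dao).findNameById(42L); // the endpoint really hit the DAO
    }
}
```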
Let's suppose we designed the data access layer as a repository pattern with NHibernate support.
What I wonder is: while testing a repository, what are we really testing?
Are we testing that the ORM does its job right, or that the database can run queries as expected?
If I have a repository like OrderRepository, why should I test the Save, Update, and FindById methods?
I think the only thing that needs to be tested is the ORM mappings; the other things don't make sense to me.
ORMs like NHibernate and Entity Framework are mature frameworks, so why would I have to worry about whether context.Add() or session.Save() works, as long as the mappings are done right?
It is indeed possible that there may be no (or very little) return on that investment. Testing is like that sometimes.
It depends on how you define the value of the testing. For some, there's inherent value in chasing that elusive 100% coverage. If there's time to build those tests, great. 100% coverage is a nice warm fuzzy to have. But, as you imply, not all of the tests which brought one there are truly meaningful. Often the primary value in that case is the bragging rights of the 100% coverage metric. (Perhaps a picky client wants to see that number on a report somewhere, for example.)
Or perhaps the integration tests stand as a valuable tool outside the context of the ORM or whatever other tools may be used to build the DAL. If you consider the DAL as a black box from the perspective of a technical-but-non-developer resource, they may want a tool which can test any implementation of the DAL interface and ensure, end to end, that the implementation works in a live environment when properly configured.
Perhaps the entire integration test is really not testing the code per se, but from their perspective is testing that the database and application are mutually properly configured and talking to each other. That person doesn't care about ORMs or mappings, they care that a couple of separate system components on a UML diagram are working together to the extent of the system's specifications. Whether that component uses an ORM, uses manual SQL, or uses magical pixie dust makes no difference to that test.
It really comes down to which role on the team wants the test and what they're actually trying to test/validate.
I would like to test some JPA code against different implementations like Hibernate, EclipseLink, and OpenJPA.
I am still looking for an elegant way of doing this but have not found one yet.
However, I assume the implementors must have some kind of tests that they need to execute in order to prove that their implementation is compliant with the standard (JPA 2.1).
Do you know if there is a test compatibility kit for JPA that I could take inspiration from?
Any suggestion on how to run the tests while easily switching between implementations is welcome.
I thought about using Maven profiles... but running main classes might not be so nice.
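To make the idea concrete, a rough sketch of the kind of switch I mean (the system property and persistence unit names are just placeholders):

```java
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class JpaProviderSwitch {

    public static EntityManagerFactory createFactory() {
        // Pick the persistence unit from a system property; each unit in
        // persistence.xml would be configured with a different provider
        // (Hibernate, EclipseLink, OpenJPA).
        String unit = System.getProperty("jpa.provider", "hibernate-pu");
        return Persistence.createEntityManagerFactory(unit);
    }
}
```

A Maven profile could then set the property for mvn test, for example via Surefire's systemPropertyVariables, so switching implementations is just a matter of activating a profile.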
Thanks
In some cases unit testing can be really difficult. Normally people say to only test your public API. But in some cases this is just not possible. If your public API depends on files or databases you can't unit test properly. So what do you do?
Because it's my first time TDD-ing, I'm trying to find "my style" of unit testing, since it seems there is no single way to do it. I found two approaches to this problem, and neither is flawless. On the one hand, you could friend your assemblies and test the features that are internal. On the other hand, you could implement interfaces (only for the purpose of unit testing) and create fake objects within your unit tests. The second approach looks quite nice at first but gets uglier as you try to transport data through these fakes.
Is there any "good" solution to this problem? Which of those is less flawed? Or is there even a third approach?
I made a couple of false starts in TDD, grappling with this exact same problem. For me the breakthrough came when I realized what my mentor meant when he said: "We don't want to test the framework." (In our case that was the .NET Framework.)
In your case it sounds as if you have some business logic that interfaces with files and databases. What I would do is abstract the file and database logic into the thinnest layers possible. You can then use mocks (or fakes or stubs) to simulate the file and database layers. This will allow you to test scenarios like if-my-database-returns-this-kind-of-information-does-my-business-logic-handle-it-correctly? Likewise, for file access you can test the code that figures out which file in which path to open, and you can test that your logic is able to pull apart the contents of any given file correctly and use them correctly.
If, for example, your file access layer consists of a single function that takes a path name and a file name and returns the contents of the file as one long string, then you don't really need to test it, because essentially you are making a single call to the framework/OS and there is not a lot that can go wrong there.
At the moment I am working on a system that wraps our database as a bunch of functions that return lists of POCOs. Easy to understand for the business layer and easy to simulate via mocks.
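As a rough sketch of that thin layer (in Java here, with invented names; the original discussion is .NET, but the shape is the same):

```java
import java.io.IOException;

// The thinnest possible file-access seam: one method, no logic worth testing.
interface FileContentsReader {
    String readAllText(String path, String fileName) throws IOException;
}

// The business logic depends only on the seam, so tests can hand it canned
// content instead of touching the real filesystem.
class ReportParser {
    private final FileContentsReader reader;

    ReportParser(FileContentsReader reader) {
        this.reader = reader;
    }

    // Pulling the contents apart is the part worth unit testing.
    int countLines(String path, String fileName) throws IOException {
        String text = reader.readAllText(path, fileName);
        return text.isEmpty() ? 0 : text.split("\r?\n").length;
    }
}
```

In a test you can replace the seam with a lambda, e.g. new ReportParser((p, f) -> "a\nb\nc"), and assert that countLines returns 3 without any file I/O.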
Working this way takes some getting used to but it is absolutely byoo-ti-full once it clicks in your mind.
Finally, from your question I guess that you are working with legacy code and trying to do TDD for a new component. This is quite a bit harder than doing TDD on a completely new development. If it is at all possible, try to do your first TDD attempts on new (or well-isolated) systems. Once you have learnt the mechanics, it will be a lot easier to introduce partially TDD'd bits into legacy systems.
If your public API depends on files or databases you can't unit test properly. So what do you do?
There is an abstraction level that can be used.
IFileSystem/ IFileStorage (for files)
IRepository/ IDataStorage (for databases)
Since this level is very thin, its integration tests will be easy to write and maintain. All other code will be unit-test friendly, because it is easy to mock interaction with the filesystem and the database.
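A rough sketch of such a seam (the interface name follows the list above; the method signatures are just an assumption for illustration):

```java
import java.util.List;
import java.util.Optional;

// Thin data-storage seam: integration-test it once against a real (or
// in-memory) database, and mock it everywhere else in the unit tests.
interface IRepository<T, ID> {
    Optional<T> findById(ID id);
    List<T> findAll();
    void save(T entity);
    void delete(ID id);
}
```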
On the one hand, you could try to friend your assemblies and test the features that are internal.
People face this problem when their classes violate the single responsibility principle (SRP) and dependency injection (DI) is not used.
There is a good rule that classes should be tested via their public methods/properties only. If internal methods are used by other classes, then it is acceptable to test them. Private or protected methods should not be made internal just for the sake of testing.
On the other hand, you could implement interfaces (only for the purpose of unit testing) and create fake objects within your unit tests.
Yes, interfaces are easy to mock because of the limitations of mocking frameworks.
If you can create an instance (fake/stub) of a type directly, then your dependency does not need to implement an interface.
Sometimes people use interfaces for their domain entities, but I do not support that approach.
To simplify working with fakes, there are two commonly used patterns:
Object Mother
Test Data Builder
When I started writing unit tests, I started with the 'Object Mother' pattern. Now I am using 'Test Data Builders'.
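For example, a minimal Test Data Builder might look like this (the Order type and its fields are invented for the sketch):

```java
// Each "with" method overrides one field; the defaults keep test code focused
// on the values that actually matter for the scenario.
public class OrderBuilder {

    private String customer = "default-customer";
    private double total = 10.0;
    private boolean paid = false;

    public OrderBuilder withCustomer(String customer) {
        this.customer = customer;
        return this;
    }

    public OrderBuilder withTotal(double total) {
        this.total = total;
        return this;
    }

    public OrderBuilder paid() {
        this.paid = true;
        return this;
    }

    public Order build() {
        return new Order(customer, total, paid);
    }

    // Minimal Order type so the sketch is self-contained.
    public static class Order {
        final String customer;
        final double total;
        final boolean paid;

        Order(String customer, double total, boolean paid) {
            this.customer = customer;
            this.total = total;
            this.paid = paid;
        }
    }
}
```

A test then reads like a sentence: new OrderBuilder().withCustomer("alice").paid().build().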
There are a lot of good ideas that can help you in the book Working Effectively with Legacy Code by Michael Feathers.
Don't let the hard stuff get in your way... If it's inherently hard to test due to database or file integration, just ignore it for the moment. Most likely you can refactor that hard-to-test stuff into easier-to-test stuff using mocks with dependency injection, etc... Until then, test the easy stuff and get a good unit test suite built up... When you do refactor the hard-to-test stuff, you will have much higher confidence that it's not breaking anything else... And refactoring to make something more easily testable IS a good reason to refactor...
I'm using a Twitter gem which basically accesses Twitter and lets me grab tweets, timelines, etc. It's really good, but I have a lot of code that uses the stuff it returns, and I need to test it. The things the gem returns aren't exactly simple strings; they're pretty complex objects (scary as well), so I'm left scratching my head.
So basically I'm looking for an answer, book, blog, open-source project that can show me the rights and wrongs of testing around external services.
Answers that are either not language-centric or are Ruby/Rails-centric would be most appreciated.
What you are really talking about are two different kinds of testing that you would want to accomplish - unit tests and integration tests.
Unit tests will test the validity of the methods, independently of any external data. You should look into some sort of mocking framework for whatever language you are using. With the tests, you are basically saying something equivalent to "if these assumptions hold, then this test should yield..." The mocking framework defines your assumptions, in the sense that certain classes/objects are set up in a particular way and can be assumed to be valid. These are the tests that will not rely on Twitter being alive or on the third-party library/API being responsive.
Integration tests will perform tests live against the data source, consuming the library/API to perform actual actions. Where it gets tricky, since you are using a third party service, is in writing out to the service (i.e. if you are creating new Tweets). If you are, you could always create an account on Twitter that could be used just for write operations. Generally, if you were testing against a local database - for example - you could then, instead, use transactions to test similar operations; rolling back the transactions instead of committing them.
Here are a couple of non-language specific, high-level definitions:
Wikipedia (Software Testing)
Wikipedia (Mock Object)
I am from a .NET stack, so I won't pretend to know much about Ruby. A quick Google search, though, did reveal the following:
Mocha (Ruby Mocking Framework)
You can easily stub at the HTTP layer using something like WireMock (http://wiremock.org/). I've used this on a few projects now and it's quite powerful and fast. It eliminates all the setup code of code-based mocking - just fire up the jar with the related mappings and Bob's your uncle.
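For example, a stub for a timeline call might look something like this (the path and JSON body are invented; map whatever your code actually requests):

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.configureFor;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.stubFor;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

public class FakeTwitterHttp {

    public static void main(String[] args) {
        // Stand up a local HTTP server that answers like the real service.
        WireMockServer server = new WireMockServer(8089);
        server.start();
        configureFor("localhost", 8089);

        // Canned response for one endpoint of the remote API.
        stubFor(get(urlEqualTo("/statuses/home_timeline.json"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("[{\"text\": \"hello from wiremock\"}]")));

        // Point the code under test at http://localhost:8089 instead of the
        // real API, run the tests, then call server.stop().
    }
}
```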
AOP is an interesting programming paradigm in my opinion. However, there haven't been discussions about it yet here on stackoverflow (at least I couldn't find them). What do you think about it in general? Do you use AOP in your projects? Or do you think it's rather a niche technology that won't be around for a long time or won't make it into the mainstream (like OOP did, at least in theory ;))?
If you do use AOP then please let us know which tools you use as well. Thanks!
Python supports AOP by letting you dynamically modify its classes at runtime (which in Python is typically called monkeypatching rather than AOP). Here are some of my AOP use cases:
I have a website in which every page is generated by a Python function. I'd like to take a class and make all of the webpages generated by that class password-protected. AOP comes to the rescue; before each function is called, I do the appropriate session checking and redirect if necessary.
I'd like to do some logging and profiling on a bunch of functions in my program during its actual usage. AOP lets me calculate timing and print data to log files without actually modifying any of these functions.
I have a module or class full of non-thread-safe functions and I find myself using it in some multi-threaded code. Some AOP adds locking around these function calls without having to go into the library and change anything.
This kind of thing doesn't come up very often, but whenever it does, monkeypatching is VERY useful. Python also has decorators which implement the Decorator design pattern (http://en.wikipedia.org/wiki/Decorator_pattern) to accomplish similar things.
Note that dynamically modifying classes can also let you work around bugs or add features to a third-party library without actually having to modify that library. I almost never need to do this, but the few times it's come up it's been incredibly useful.
Yes.
Orthogonal concerns, like security, are best done with AOP-style interception. Whether that is done automatically (through something like a dependency injection container) or manually is unimportant to the end goal.
One example: the "before/after" attributes in xUnit.net (an open source project I run) are a form of AOP-style method interception. You decorate your test methods with these attributes, and just before and after that test method runs, your code is called. It can be used for things like setting up a database and rolling back the results, changing the security context in which the test runs, etc.
Another example: the filter attributes in ASP.NET MVC also act like specialized AOP-style method interceptors. One, for instance, allows you to say how unhandled errors should be treated, if they happen in your action method.
Many dependency injection containers, including Castle Windsor and Unity, support this behavior either "in the box" or through the use of extensions.
I don't understand how one can handle cross-cutting concerns like logging, security, transaction management, exception-handling in a clean fashion without using AOP.
Anyone using the Spring framework (probably about 50% of Java enterprise developers) is using AOP whether they know it or not.
At Terracotta we use AOP and bytecode instrumentation pretty extensively to integrate with and instrument third-party software. For example, our Spring integration is accomplished in large part by using AspectWerkz. In a nutshell, we need to intercept calls to Spring beans and bean factories at various points in order to cluster them.
So AOP can be useful for integrating with third-party code that can't otherwise be modified. However, we've found there is a huge pitfall - if possible, only use the third party's public API in your join points; otherwise you risk having your code broken by a change to some private method in the next minor release, and it becomes a maintenance nightmare.
AOP and transaction demarcation are a match made in heaven. We use Spring AOP @Transactional annotations; it makes for easier and more intuitive transaction demarcation than anything else I've seen.
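A minimal sketch of what that looks like (the service and its collaborator are invented names for illustration):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PaymentService {

    // Hypothetical collaborator; in a real application this would be a
    // DAO or repository backed by the database.
    public interface PaymentStore {
        void record(String accountId, long amountCents);
    }

    private final PaymentStore store;

    public PaymentService(PaymentStore store) {
        this.store = store;
    }

    // Spring's AOP proxy opens a transaction before this method runs and
    // commits it afterwards (rolling back on a RuntimeException); the method
    // body contains no transaction plumbing at all.
    @Transactional
    public void transfer(String from, String to, long amountCents) {
        store.record(from, -amountCents);
        store.record(to, amountCents);
    }
}
```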
We used AspectJ in one of my big projects for quite some time. The project was made up of several web services, each with several functions, and served as the front end for a complicated document processing/querying system - somewhere around 75k lines of code. We used aspects for two relatively minor pieces of functionality.
First was tracing application flow. We created an aspect that ran before and after each function call to print out "entered 'function'" and "exited 'function'". With the function selector thing (pointcut maybe? I don't remember the right name) we were able to use this as a debugging tool, selecting only functions that we wanted to trace at a given time. This was a really nice use for aspects in our project.
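A trimmed-down sketch of such a tracing aspect, using AspectJ's annotation style (the package and names are invented here; ours was more elaborate):

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class TraceAspect {

    // The "function selector" is the pointcut: this one matches every public
    // method in a (hypothetical) service package.
    @Pointcut("execution(public * com.example.docservice..*.*(..))")
    public void tracedOperations() {}

    @Before("tracedOperations()")
    public void logEntry(JoinPoint jp) {
        System.out.println("entered " + jp.getSignature().toShortString());
    }

    @After("tracedOperations()")
    public void logExit(JoinPoint jp) {
        System.out.println("exited " + jp.getSignature().toShortString());
    }
}
```

Narrowing or widening the pointcut expression is how we selected which functions to trace at a given time, without touching the traced code itself.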
The second thing we did was application specific metrics. We put aspects around our web service methods to capture timing, object information, etc. and dump the results in a database. This was nice because we could capture this information, but still keep all of that capture code separate from the "real" code that did the work.
I've read about some nice solutions that aspects can bring to the table, but I'm still not convinced that they can really do anything that you couldn't do (maybe better) with "normal" technology. For example, I couldn't think of any major feature or functionality that any of our projects needed that couldn't be done just as easily without aspects - where I've found aspects useful are the kind of minor things that I've mentioned.
I use AOP heavily in my C# applications. I'm not a huge fan of having to use attributes, so I used Castle DynamicProxy and Boo to apply aspects at runtime without polluting my code.
We use AOP in our session facade to provide a consistent framework for our customers to customize our application. This allows us to expose a single point of customization without having to add manual hook support in for each method.
Additionally, AOP provides a single point of configuration for additional transaction setup and teardown, and the usual logging things. All told, much more maintainable than doing all of this by hand.
The main application I work on includes a script host. AOP allows the host to examine the properties of a script before deciding whether or not to load the script into the Application Domain. Since some of the scripts are quite cumbersome, this makes for much faster loading at run-time.
We also use and plan to use a significant number of attributes for things like compiler control, flow control and in-IDE debugging, which do not need to be part of the final distributed application.
We use PostSharp for our AOP solution. We have caching, error handling, and database retry aspects that we currently use and are in the process of making our security checks an Aspect.
It works great for us. Developers really do like the separation of concerns. The architects really like having the platform-level logic consolidated in one location.
The PostSharp library is a post-compiler that does the injection of the code. It has a library of pre-defined intercepts that are brain-dead easy to implement. It feels like wiring in event handlers.
Yes, we do use AOP in application programming. I prefer AspectJ for integrating AOP into my Spring applications. Have a look at this article to get a broader perspective on the topic.
http://codemodeweb.blogspot.in/2018/03/spring-aop-and-aspectj-framework.html