What do you experienced developers think of these lines from Michael Feathers:
A test is not a unit test if:
It talks to the database
It communicates across the network
It touches the file system
It can't run at the same time as any of your other unit tests
You have to do special things to your environment (such as editing config files) to run it.
Now I was wondering if I should unit test my DAO classes...
Will I get more advantages or disadvantages by unit testing the DAO layer? Please share your thoughts.
Feathers isn't saying don't write such tests. He's saying they're not unit tests, because he defines a unit test as "small, they test a method or the interaction of a couple of methods. ... a "binary chop" that allows you to discover whether the problem is in your logic or in the things you are interfacing with." And he's right - the Agile/XP/Scrum intention of unit testing is to provide a fast red light/green light determination of whether a small piece of code is functioning correctly.
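For what it's worth, here is a minimal sketch of that distinction (JUnit 5 plus Mockito, with a hypothetical UserDao/UserService pair): the service logic stays a true unit test because the DAO is mocked away, while the DAO itself would be covered by a separate, slower test that really talks to a database.

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    class UserServiceTest {

        // Hypothetical collaborators, defined inline for illustration only.
        interface UserDao {
            int countActiveUsers();
        }

        static class UserService {
            private final UserDao dao;
            UserService(UserDao dao) { this.dao = dao; }
            boolean hasActiveUsers() { return dao.countActiveUsers() > 0; }
        }

        @Test
        void serviceLogicIsAUnitTest_becauseTheDaoIsMocked() {
            // No database, network, or file system involved, so this still
            // qualifies as a unit test in Feathers' sense.
            UserDao dao = mock(UserDao.class);
            when(dao.countActiveUsers()).thenReturn(3);

            assertTrue(new UserService(dao).hasActiveUsers());
        }
    }

Testing the DAO implementation itself against an in-memory or throwaway database is still worth doing; it just belongs in a separate, slower suite rather than in the fast unit-test run.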
I have an existing Spring MVC webapp, built with Ant, set up in Jenkins for CI builds.
I am getting nice code coverage reports from my unit tests with Cobertura.
I recently added some functional/UI tests with Selenium. Does anyone have suggestions for how I could get a single code coverage report from both functional and unit tests? Has anyone done this successfully?
My end goal is to count code coverage holistically, so each class/method can be tested with the technique that makes the most sense and I hope to get close to 100% across all forms of testing. A specific example: it might make more sense to cover controllers through end-to-end UI testing, when they don't have any real logic of their own to test in isolation. I would then still report the code as "covered".
I am not trying to start a debate about unit tests being good/bad or TDD vs. BDD - I am asking a question about how to accomplish my goal with a given set of technologies.
I think Grails handles this nicely, but I haven't figured out how to do this with a regular webapp (Spring MVC, Java EE/JSF, etc.)
So I was looking at a book and I don't really understand its classification:
Unit tests
Integration tests
Smoke and Sanity tests
System tests
Acceptance tests
I thought smoke tests would come right after integration tests. Also, I thought sanity testing means a quick check of the application when a new part is deployed.
So the question: is this correct, or should the smoke and sanity tests be in a different order? If so, why?
Smoke tests should be performed before sanity tests - that is correct. The purpose of smoke tests is just to quickly check whether the SUT is runnable and whether its interfaces and main components respond to the user's actions. There is no deep insight into the app during these tests.
Sanity tests can be a subset of regression tests. Their main goal is to quickly test the application's logic against the requirements provided. They should be run after each major change to the way some part of the system works, and if the results are negative there is no point in going through more detailed tests. They should tell us whether the tested parts of the system match the requirements and specification.
And now the thing is that sanity tests can sit at the unit test level as well as the system test level. You can simply run a few unit tests specifically designed to check only the basics of functionality, and these can then be called sanity tests. The same applies to the system test level. So there is no strict definition of where sanity tests belong. I believe you should not take any of this for granted, because every case is different and the context of the tests and the application should be the main consideration.
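To make the "sanity tests at the unit-test level" idea concrete, one option is simply to tag a handful of deliberately shallow tests and run only those after a deployment. A minimal JUnit 5 sketch, with a hypothetical Cart class defined inline:

    import static org.junit.jupiter.api.Assertions.assertFalse;

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    class CheckoutSanityTest {

        // Hypothetical domain class, included only so the example compiles.
        static class Cart {
            private int items;
            void add() { items++; }
            boolean isEmpty() { return items == 0; }
        }

        @Tag("sanity")
        @Test
        void addingAnItemMakesTheCartNonEmpty() {
            // Deliberately shallow: just confirms the basic flow still works.
            Cart cart = new Cart();
            cart.add();
            assertFalse(cart.isEmpty());
        }
    }

The build can then run only the tagged subset (for example via Maven Surefire's -Dgroups=sanity or Gradle's includeTags) as a quick gate before the full regression suite.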
A Smoke Test is a quick-and-dirty test of the most important features, usually done by someone other than the developer after unit and integration testing, to see if there's a point in doing more specific/rigorous testing.
Basic test of key features.
Only test happy path.
Major features.
For example, if you're smoke testing an API (a minimal sketch follows the list below):
Check responses are correct.
Test login with valid details.
Test main endpoints.
Check the correct response is returned.
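Here is one way such a happy-path API smoke check might look, as a minimal sketch using Java 11's built-in HttpClient; the base URL and endpoint paths are purely hypothetical:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ApiSmokeTest {

        // Hypothetical base URL and endpoints, for illustration only.
        private static final String BASE = "https://api.example.com";

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Happy path only: hit the main endpoints and check each one responds OK.
            for (String path : new String[] {"/health", "/login", "/orders"}) {
                HttpRequest request = HttpRequest.newBuilder(URI.create(BASE + path))
                        .GET()
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.printf("%-8s -> %d%n", path, response.statusCode());
                if (response.statusCode() >= 400) {
                    throw new AssertionError("Smoke test failed for " + path);
                }
            }
            System.out.println("Smoke test passed: main endpoints respond.");
        }
    }
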
Smoke testing is the first and foremost testing done by any QA personnel. It is done once unit testing has been completed by the developer.
The main aim of smoke testing is to confirm that your application can handle the positive flow at the very least. Once this is done, QA gradually proceeds with the following:
1. Functional Testing
2. Link & Download Options
3. UI
4. System Testing
5. Regression for better results from previous builds.
Happy Testing :)
I am building my first iPhone app, and I want to get started with unit testing.
I've been reading up on it and there are two sides to it: logic tests and application tests.
Logic tests seem to me like regular unit testing.
Application testing sounds to me like GUI-interaction testing.
Is that correct? Should I do both, or are logic tests sufficient?
I am considering just testing the CRUD operations of objects in my logic tests.
I find Apple's distinction artificial and limiting. By using a different test framework (GTM in my case, or you might try GHUnit) you can just write tests without asking yourself, "Where does this test belong?" I write tests against view controllers that are not interaction tests.
I need to automate testing of my windows mobile application. My application does not have any UI. So, normal testing tools which works with random key strokes and mouse clicks will not work here. Are there any tools available for windows mobile to test only background processing?
You have a couple of options depending on what level of testing you want.
Integrated Test
An integrated test aims to test the application in the real world. Therefore you would create all of the "real" things and write code to specifically check whether your conditions are met. However, if you're trying to test GPS, I believe this would not be practical, as someone would actually have to move the device around.
Unit Test by mocking
I've done this before for GPS testing. The idea is that you SANDBOX the object being tested. You ensure that all external references (i.e. anything that isn't the object) are interfaces. You then MOCK these interfaces with "test-only" implementations.
For example, I worked on a GPS test where we used an interface called INmeaInterpreter to fire certain events which would be picked up by a class named PositioningService.
The default implementation was a 3rd-party component.
However, as INmeaInterpreter was an interface, we could create an implementation that, instead of using the REAL data, reads from (for example) an NMEA file. This enabled us to test how the PositioningService behaved in certain (and sometimes strange) scenarios.
I would then suggest mocking the other external references. The database call can just go to a dummy object with a counter that is incremented whenever it is called. You could then write a test that feeds in an NMEA file which should result in a database call, and at the end of the unit test check that dummy object to see whether the call occurred.
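A minimal Java sketch of that arrangement, with hypothetical names standing in for INmeaInterpreter, PositioningService, and the dummy database object (the original was .NET, so this illustrates the pattern rather than the actual code):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class PositioningServiceTest {

        // Hypothetical interfaces mirroring the ones described above.
        interface NmeaInterpreter {
            void setListener(PositionListener listener);
        }

        interface PositionListener {
            void onPosition(double lat, double lon);
        }

        interface PositionStore {
            void save(double lat, double lon);
        }

        // Test-only "dummy" store: it just counts how often it is called.
        static class CountingStore implements PositionStore {
            int saves;
            public void save(double lat, double lon) { saves++; }
        }

        // Test-only interpreter that replays canned fixes instead of real NMEA input.
        static class CannedInterpreter implements NmeaInterpreter {
            private PositionListener listener;
            public void setListener(PositionListener listener) { this.listener = listener; }
            void replay(double lat, double lon) { listener.onPosition(lat, lon); }
        }

        // Object under test: forwards every position event to the store.
        static class PositioningService implements PositionListener {
            private final PositionStore store;
            PositioningService(NmeaInterpreter interpreter, PositionStore store) {
                this.store = store;
                interpreter.setListener(this);
            }
            public void onPosition(double lat, double lon) { store.save(lat, lon); }
        }

        @Test
        void positionEventsEndUpInTheStore() {
            CountingStore store = new CountingStore();
            CannedInterpreter interpreter = new CannedInterpreter();
            new PositioningService(interpreter, store);

            interpreter.replay(51.5, -0.12); // canned "GPS" fix, no hardware involved
            assertEquals(1, store.saves);
        }
    }
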
We did all the above with horrible MSTest, but you could use any testing framework (I recommend NUnit). I'm not sure if there are options to test specifically on the device. We ran all of our tests on the desktop, as we'd split the code nicely so that device-specific code was isolated and could easily be replaced with desktop equivalents.
Obviously the only problem with unit tests is that they don't test the hardware.
I would recommend (depending on the size of the project and the team) doing BOTH types of testing, but placing the larger emphasis on the unit tests (as they are easier to run/manage).
Are there any books or articles that show you how to use NUnit to test entire features of a program? Is there a name for this type of testing?
This is different from the typical use of NUnit for unit testing, where you test individual classes. It is similar to acceptance testing, except that it is written by the developer to verify that the program does what the developer understands the customer to want. I don't need it to be readable by non-programmers or to produce a readable specification for non-programmers.
The problem I am having is keeping this feature testing code maintainable. I need help in organizing my feature testing code. I also need help organizing the program code to be drivable in this way. I am having a hard time being able to issue commands to the program while still having good code design.
Currently I have a class called Program with a single public method called Run. With every test I start at the beginning of the program, like the user would, and then get to the point in the program where a particular feature is available. I then use that feature in some way and verify it did what I wanted. I have a class called Commands that exposes the different features of the program as methods. An instance of the Commands object is passed to the program, and it eventually gets passed to every Form class. These subscribe to events on the Commands class that are raised by the methods of the Commands class (one matching event per method). The events are subscribed to by pointing them at the method that is called when a certain part of the user interface is used, thus allowing the entire program to be driven by my tests. If you call a method on the Commands object for an event that is currently not subscribed to, a FeatureMissingException is thrown.
All of this works but I don't like the Command class. It is getting too large with too many responsibilities (every feature of the program). The Commands class is also a dependency magnet (all the Form classes have an instance of it but only subscribe to the events that represent features that can be activated through their UI).
It's called integration testing. Integration tests are much more difficult to automate and are very often done by hand. Many simpler tests can still be done using NUnit, though - you don't have to do anything special, just don't use mocks (the way you should for unit tests), so you can test how the modules actually fit together.
Context/specification is a good way of organizing these tests.
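The question is about NUnit, but the context/specification layout carries over to most xUnit-style frameworks. A minimal JUnit 5 sketch (with an inline, hypothetical Account class) just to show the shape:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Nested;
    import org.junit.jupiter.api.Test;

    // Context/specification layout: the outer class names the feature,
    // each @Nested class names a context, and each test names one expected behaviour.
    class WithdrawingMoneySpec {

        // Hypothetical domain class, defined inline so the example is self-contained.
        static class Account {
            private int balance;
            Account(int balance) { this.balance = balance; }
            void withdraw(int amount) {
                if (amount > balance) throw new IllegalStateException("insufficient funds");
                balance -= amount;
            }
            int balance() { return balance; }
        }

        @Nested
        @DisplayName("when the account has enough money")
        class WhenTheAccountHasEnoughMoney {
            Account account;

            @BeforeEach
            void createAccount() { account = new Account(100); }

            @Test
            void theBalanceIsReduced() {
                account.withdraw(40);
                assertEquals(60, account.balance());
            }
        }

        @Nested
        @DisplayName("when the account is overdrawn")
        class WhenTheAccountIsOverdrawn {
            Account account;

            @BeforeEach
            void createAccount() { account = new Account(10); }

            @Test
            void theWithdrawalIsRejected() {
                assertThrows(IllegalStateException.class, () -> account.withdraw(40));
            }
        }
    }

Grouping tests by feature and context like this tends to keep a large feature-level suite readable as it grows, which is the maintainability problem described in the question.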
What you want to do is integration testing, as the other answer suggests. This will allow you to do functional/feature testing. The most common frameworks for this are StoryQ and SpecFlow. These allow you to develop your tests in a BDD style, and the tests can mostly be automated against the spec that you want.
Tools like Selenium allow you to do functional testing in a browser, doing what the end user would do. All of these can be driven with NUnit, since NUnit is purely a framework for running tests, be they unit tests or large functional tests.