Dependencies in Acceptance Testing - NHibernate

A co-worker and I are having a debate. We are on a craptastic legacy project and are slowly adding acceptance tests. He believes that we should be doing the work in the GUI/WatiN, then using a low-level library to query the database directly to get an 'end to end' test, as he puts it.
We are using NHibernate, and I advocate using the GUI/WatiN and then those NHibernate objects to do the assertions in the acceptance testing. He dislikes the dependency on NHibernate in the test. My assertion was that we have (or should have) integration tests against the NHibernate objects to make sure they are working with the DB the way we intend, at which point there is no downside to using them in the acceptance test to assert proper operation. I also think his low-level SQL dependence will make the tests fragile and duplicate business logic in a lot of cases.
Integration testing in our shop basically means it's a single component with a dependency, e.g. FileRepository/file system, or domain NHibernate object/database. Acceptance testing means coming in through the GUI. Unit means all dependencies have been (or can be) mocked/stubbed out and you've got a pure in-memory test with only the method under test actually doing any real work. Let me know if my definitions are off.
Anyway any articles/docs/parchment with opinions on this subject you can point me at would be appreciated.

The only reason you'd ever automate tests is to make things easier to change. If you weren't changing them, you could get away with manual testing. Tying the tests to the database will make the database much harder to change.
Tying them to the NHibernate objects won't help very much either, I'm afraid!
The users of your system won't be using the database or NHibernate. How do they get the benefit (or provide the benefit to other stakeholders)? How will they be able to tell that it's working well? If you can capture that in the Acceptance Tests, you'll be able to change the underlying code and data while still maintaining the value of your application. If someone generates reports from the data, why not generate the same reports and check that their contents are what you expect? If the data is read by another system, can you get a copy of that system and see what it outputs to its users?
Anyway, that's my opinion - keep acceptance tests as close to the business value as possible - and here's a blog post I wrote which might help. You could also try the Behavior Driven Development group on Yahoo, who have a fair bit of experience amongst them.
Oh, and doing integration tests to check that your (N)Hibernate bindings are good is an excellent idea. Saved us on a couple of projects.
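To make the trade-off concrete, here is a minimal sketch of the style of acceptance test being debated, written in Java with Selenium and a repository-style assertion (the original discussion is C#/NHibernate/WatiN, but the shape is the same; Order, OrderRepository, the URL, and the field names are all hypothetical stand-ins):

```java
import static org.junit.Assert.*;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class PlaceOrderAcceptanceTest {

    // Hypothetical domain types standing in for the application's own.
    interface Order { String getCustomerName(); }
    interface OrderRepository { Order findByCustomer(String name); }

    // In a real test this would be wired to the application's repository.
    private OrderRepository orderRepository;

    @Test
    public void orderPlacedThroughGuiIsPersisted() {
        WebDriver browser = new FirefoxDriver();
        try {
            // Drive the application the way a user would.
            browser.get("http://localhost:8080/orders/new");
            browser.findElement(By.name("customer")).sendKeys("ACME");
            browser.findElement(By.name("save")).click();

            // Assert through the same persistence layer the application
            // uses, rather than through hand-rolled SQL that would
            // duplicate the mapping and business logic in the test.
            Order saved = orderRepository.findByCustomer("ACME");
            assertNotNull("order should have been persisted", saved);
            assertEquals("ACME", saved.getCustomerName());
        } finally {
            browser.quit();
        }
    }
}
```

The raw-SQL version of that assertion would restate the table and column layout inside the test, which is exactly the duplication and fragility the question worries about.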

Related

What is the purpose of integration tests?

Let's suppose we designed our data access layer as a repository pattern with NHibernate support.
What I wonder is: while testing a repository, what are we really testing?
Are we testing that the ORM does its job right, or that the database can run queries as expected?
If I have a repository like OrderRepository, why should I test the Save, Update, and FindById methods?
I think the only thing that needs to be tested is the ORM mappings; the other things don't make sense to me.
ORMs like NHibernate and Entity Framework are mature frameworks, so why would I have to worry about whether context.Add() or session.Save() works, as long as the mappings are done right?
It is indeed possible that there may be no (or very little) return on that investment. Testing is like that sometimes.
It depends on how you define the value of the testing. For some, there's inherent value in chasing that elusive 100% coverage. If there's time to build those tests, great. 100% coverage is a nice warm fuzzy to have. But, as you imply, not all of the tests which brought one there are truly meaningful. Often the primary value in that case is the bragging rights of the 100% coverage metric. (Perhaps a picky client wants to see that number on a report somewhere, for example.)
Or perhaps the integration tests stand as a valuable tool outside the context of the ORM or whatever other tools may be used to build the DAL. If you consider the DAL as a black box from the perspective of a technical-but-non-developer resource, they may want a tool which can test any implementation of the DAL interface and ensure, end to end, that the implementation works in a live environment when properly configured.
Perhaps the entire integration test is really not testing the code per se, but from their perspective is testing that the database and application are mutually properly configured and talking to each other. That person doesn't care about ORMs or mappings, they care that a couple of separate system components on a UML diagram are working together to the extent of the system's specifications. Whether that component uses an ORM, uses manual SQL, or uses magical pixie dust makes no difference to that test.
It really comes down to which role on the team wants the test and what they're actually trying to test/validate.
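One repository test that usually does pay for itself is the mapping round trip: save an entity in one session, then reload it in a completely fresh session so nothing can be served from a cache. A minimal sketch in Java/Hibernate terms (the thread is about NHibernate, but the idea is identical; it assumes a hibernate.cfg.xml on the classpath and that Order is a mapped entity with a generated Long id and a customerName property):

```java
import static org.junit.Assert.*;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;
import org.junit.Test;

public class OrderMappingIntegrationTest {

    private final SessionFactory sessionFactory =
            new Configuration().configure().buildSessionFactory();

    @Test
    public void orderSurvivesARoundTrip() {
        // Save in one session...
        Session first = sessionFactory.openSession();
        first.beginTransaction();
        Long id = (Long) first.save(new Order("ACME"));
        first.getTransaction().commit();
        first.close();

        // ...then reload in a brand new session, so the object really
        // comes back from the database and not from a session cache.
        Session second = sessionFactory.openSession();
        Order reloaded = (Order) second.get(Order.class, id);
        second.close();

        assertNotNull(reloaded);
        assertEquals("ACME", reloaded.getCustomerName());
    }
}
```

A test like this is not checking that session.Save() works; it is checking that your mapping, schema, and configuration all agree, which is where the real breakage happens.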

Is there value in maintaining my TDD tests when requirements change?

I've been using TDD for a while now and it works well most of the time. From my understanding, I write the test (red) and work the code (green). This serves as a great tool to focus on coding just what is required.
The application I'm currently working on has fairly loose user requirements, to say the least! This can create the need to change the existing code base in trivial ways, all the way up to redesigning full sections.
When I do this, a lot of my tests fail... understandably, since the design has changed. What should I do with these old tests? I'm finding maintaining them can become an issue.
I suppose the core of my question is:
Is TDD used more to create a coding "map" to help focus you as a developer to write code, with some other testing paradigm used in conjunction to ensure that everything "works" when the code is handed off? Or do people use TDD as a one-stop shop that can both help create cleaner code AND work as a full test suite, in which case I'll need to maintain my full test suite?
Tests written while doing TDD absolutely are valuable throughout the lifetime of an application.
The purpose of TDD is to allow you to build up your code test by test, so that it always meets the requirements you've implemented thus far (this is the "works" part), is fully tested and is well factored. When you're done you'll have a full regression test suite as a bonus, which is valuable. So for both proving what requirements you've implemented and for regression, it's valuable to keep your tests running.
If requirements change so fast that you can't maintain the test suite, you have a project management problem. Explain to your customers that you don't have enough time to ensure quality, or that they need to hire a test engineer.

CRUD Web App Automated Testing best practices

G'day,
I'm working with a fairly DB heavy web app and looking at setting up automated testing for it using Selenium. However, as an automated testing newbie, I don't know where to start.
How do you categorize your tests to make sure they're logically sound as well as complete?
How do you deal with the DB when testing? Build a fresh DB before every test and drop the tables after each test? Start with a test-db?
Just looking for some pointers to what best practices are in this regard.
Thanks,
In general...
If your main goal is to test database CRUD operations I would go at least 'one level down' and write some kind of integration tests that do not use the GUI for testing. The tests become a lot more focused on the actual CRUD operations if you take the GUI out.
How to deal with the database...
No matter whether you go with Selenium or integration tests it is a good idea that the tests do not depend on each other. This means setting up the database before each test and/or tearing them down to a clean/known state after the test. Maintaining tests that are written this way is a lot easier. For example, you can run a single test by itself.
For both our integration and acceptance tests we use DbUnit to achieve this. Easily setting up and tearing down DBs is not a new thing; there should be something available for your technology stack as well. (You did not mention the technologies you are using.)
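As a rough sketch of what that setup/teardown looks like with JUnit and DbUnit (the in-memory H2 URL, the dataset file name, and the EMPLOYEE table are all made up for illustration):

```java
import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;
import org.junit.After;
import org.junit.Before;

public class EmployeeCrudTest {

    private IDatabaseConnection connection;

    @Before
    public void setUp() throws Exception {
        Connection jdbc = DriverManager.getConnection(
                "jdbc:h2:mem:testdb", "sa", "");
        connection = new DatabaseConnection(jdbc);

        // Load the smallest dataset this test needs, e.g.
        // <dataset><EMPLOYEE ID="1" NAME="Alice"/></dataset>
        IDataSet dataSet = new FlatXmlDataSetBuilder()
                .build(new FileInputStream("delete-employee.xml"));

        // CLEAN_INSERT wipes the touched tables and inserts the rows,
        // so every test starts from a known state and can run alone.
        DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
    }

    @After
    public void tearDown() throws Exception {
        connection.close();
    }

    // ...tests such as deleteEmployee() go here...
}
```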
How to categorize the tests...
For CRUD operations I would make sure I test one thing and one thing only. For example, I have an Employee table. You can have a test suite that tests everything that has to do with an Employee, but a single test should only test one thing. 'Save Employee successfully' should be a different test case from 'Attempt to save an Employee that already exists' or 'Delete Employee'.
EDIT: (Answer to the comment)
We basically kill the database and build it from scratch at the beginning of the test run. (Not sure how crucial this part is, but it makes sure that our DB is consistent with what the code expects. We are using Hibernate...)
Then for each test we have a different dataset to insert. So let's say again that we are testing Employee. If I want to test deleting an Employee, I would insert a dataset that contains the smallest amount of information needed to make this happen. Smaller datasets are easier to maintain. If you use the same dataset for all of your tests, it will become very hard to change the code and change or add new tests.
We do use the same dataset for things that seem to require the same information. For example, you want to test 'Attempt to save Employee to database' and 'Delete Employee'. You might reuse one dataset for this.
"I was wondering if building and tearing down the DB for each test would be expensive, time- and computing-wise?"
I would not worry too much about this. Yes, it might add, let's say, 3-4 seconds to every test, but in the big picture, is this really important? It is more important that you have tests built for maintainability, because your time as a developer is a lot more valuable than these tests taking 5 minutes to run instead of 3 minutes.
I don't know anything about Selenium, but I found a JavaRanch article that was really well-done. In the section titled "Unit Testing Mock Basics" the author shows a good method of creating mock objects to avoid any database calls. Obviously, you'll need some integration tests that involve DB calls, but for plain old unit tests, the outlined method works well.
Remember, running unit tests should be super fast.
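For illustration, here is roughly what that looks like: a hand-rolled in-memory fake takes the place of the database-backed DAO, so the unit test runs entirely in memory. All the types here are hypothetical stand-ins for your own code:

```java
import static org.junit.Assert.*;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class PayrollServiceTest {

    // Hypothetical collaborators, standing in for your real DAO and service.
    interface EmployeeDao {
        int findSalary(long id);
        void saveSalary(long id, int salary);
    }

    static class PayrollService {
        private final EmployeeDao dao;
        PayrollService(EmployeeDao dao) { this.dao = dao; }
        void applyRaise(long id, int percent) {
            int salary = dao.findSalary(id);
            dao.saveSalary(id, salary + salary * percent / 100);
        }
    }

    // A hand-rolled in-memory mock: no database, no network, no setup.
    static class InMemoryEmployeeDao implements EmployeeDao {
        final Map<Long, Integer> salaries = new HashMap<>();
        public int findSalary(long id) { return salaries.get(id); }
        public void saveSalary(long id, int salary) { salaries.put(id, salary); }
    }

    @Test
    public void raiseIncreasesSalary() {
        InMemoryEmployeeDao dao = new InMemoryEmployeeDao();
        dao.salaries.put(1L, 1000);

        new PayrollService(dao).applyRaise(1L, 10); // 10 percent

        assertEquals(1100, dao.findSalary(1L));
    }
}
```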

Does anyone have Parasoft .TEST or JTest experience?

First, I have no experience with Parasoft .TEST or JTest. I have read in the datasheet that the product can automatically generate unit tests.
But I am wondering how useful the auto-generated unit tests are. Does it really require no other effort from the developer?
Any shared experience is welcome.
Thanks a lot!
We used JTest for our product recently. We didn't use the standard product; we used the Eclipse plugin. The standard product is built on the OSGi framework (read: it's like Eclipse), but you have to import and create your projects. We were already using Eclipse, so it made sense for us to simply use the plugin, which has all of the same capabilities.
While there are many things that JTest can do for you, there are also many irritating things about it. For example, JTest's static analysis tool is what is really worthwhile, IMHO. It can look for lots of errors and has a pretty good reporting system. And while unit test generation is okay, I think I spent as much or more time fixing and enhancing the generated tests than I would have spent just writing them myself. Administering JTest is also somewhat complicated and involved.
The built-in mechanisms to make unit tests, stub objects, parameterized unit tests, etc. are not well documented. At least, my little brain couldn't make good use of them in the two years we used the product. However, a lot of their super awesome features (like GUI tracing, the command-line interface, the Bug Detective, the reporting system, etc.) all require extra, very expensive licenses.
Really, JTest just gives you an easy way to manage the execution of static and unit testing. But it's really expensive. I can't believe they charge thousands of dollars per license for that stuff. You'll also find that they will want to train you, which you almost need because the documentation is pretty bad. Which is odd, because the user's guide is like 900 pages long.
But here's a big hint: you can do it for free. If I had to do it over, I would have pushed hard for using these products (which, oddly enough, look and feel very similar to JTest):
http://code.google.com/javadevtools/codepro/doc/index.html
I wouldn't get JTest thinking that this will be a small something to add to your developers' routine. JTest can become a huge time and process sink.
JTest is very useful. Yes, it generates its own test cases, which require a lot more effort to fix. I use it in a different way: I delete all the unnecessary generated test cases, and I made another file which creates the database connection and sets various other parameters. After that configuration, the code will work without mocking if all of the code is ready; if it is not ready, you can stub the required methods.
The static code analyzer is good (for checking null pointer exceptions).
Checking code conventions is very good.
You can write your custom code guidelines as use cases and execute them on your code.
Code coverage.
Debugging while testing.
The auto-generated unit tests still need a developer to decide which results are correct or not, so you have to sit down and do the job. A lot of the boilerplate code is of course auto-generated, so a small time saver there. I haven't used it much, but I did evaluate JTest for an earlier employer. Seemed like a great product, if I remember correctly. :)
Alas, there will never be a silver bullet that addresses all unit testing requirements, but JTest and .TEST (and C++Test, for that matter) are about as close as you will get. Uggwar is correct that the developer will still need to verify outcomes for the basic auto-generated tests; however, there is a whole lot more to it.
These tools can be used to create basic regression tests; these are there to tell you when something has changed, not whether what it is testing is right or wrong. You can also trace a running application and then generate JUnit/NUnit/CPPUnit tests that recreate what was going on in the application. These tend to be far more useful tests, which are used as regression tests for items of functionality.
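To be clear about what such a generated regression (often called characterization) test buys you, here is a hand-written sketch of the idea; this is not JTest's actual output, and PriceCalculator is a made-up stand-in:

```java
import static org.junit.Assert.*;

import org.junit.Test;

// A characterization test pins down what the code does today so that a
// future change shows up as a failure. It does not claim today's
// behavior is correct; a developer still has to judge that.
public class PriceCalculatorRegressionTest {

    // Minimal stand-in so the sketch compiles; your real class replaces it.
    static class PriceCalculator {
        double discounted(double price, int percent) {
            return price - price * percent / 100.0;
        }
    }

    @Test
    public void recordedBehaviorOfDiscountCalculation() {
        PriceCalculator calc = new PriceCalculator();

        // These expected values were captured from a run of the current
        // code, not derived from a spec.
        assertEquals(90.0, calc.discounted(100.0, 10), 0.001);
        assertEquals(0.0, calc.discounted(0.0, 50), 0.001);
    }
}
```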
Other functionality includes the ability to generate stubs, use spreadsheets as data sources, and provide an object repository. There is a whole lot more too....
Give them a try.
http://www.parasoft.com

Automatic testing for web-based projects

Recently I've come up with the question: is it worth it at all to spend development time generating automated tests for web-based projects? It seems useless at some point, because those projects are oriented around interactions with users/clients, so you cannot anticipate the whole possible set of user actions in order to check the correctness of the content shown. Even regression testing can hardly be done. So I'm very eager to know the opinion of other experienced developers.
Selenium has a good web testing framework:
http://seleniumhq.org/
Telerik is also in the process of developing one for web app testing:
http://www.telerik.com/products/web-ui-test-studio.aspx
"You cannot anticipate the whole possible set of user actions so as to be able to check the correctness of the content shown."
You can't anticipate all the possible data your code is going to be handed, or all the possible race conditions if it's threaded, and yet you still bother unit testing. Why? Because you can narrow it down a hell of a lot. You can anticipate the sorts of pathological things that will happen. You just have to think about it a bit and get some experience.
User interaction is no different. There are certain things users are going to try and do, pathological or not, and you can anticipate them. Users are just inputting particularly imaginative data. You'll find programmers tend to miss the same sorts of conditions over and over again. I keep a checklist. For example: pump Unicode into everything; put the start date after the end date; enter gibberish data; put tags in everything; leave off the trailing newline; try to enter the same data twice; submit a form, go back and submit it again; take a text file, call it foo.jpg and try to upload it as a picture. You can even write a program to flip switches and push buttons at random, a bad monkey, that'll find all sorts of fun bugs.
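A bare-bones "bad monkey" is only a few lines with Selenium's Java bindings. This sketch just clicks random links and buttons on a hypothetical local app (the URL and iteration count are arbitrary) and relies on you to watch the server logs for explosions:

```java
import java.util.List;
import java.util.Random;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class BadMonkey {

    public static void main(String[] args) {
        WebDriver browser = new FirefoxDriver();
        Random random = new Random();
        browser.get("http://localhost:8080/"); // your app's URL

        try {
            for (int i = 0; i < 200; i++) {
                // Gather everything that looks clickable on the page.
                List<WebElement> targets = browser.findElements(
                        By.cssSelector("a, button, input[type=submit]"));
                if (targets.isEmpty()) {
                    browser.navigate().back(); // dead end, go back
                    continue;
                }
                WebElement victim = targets.get(random.nextInt(targets.size()));
                try {
                    victim.click();
                } catch (Exception e) {
                    // Element went stale or is hidden; the monkey doesn't care.
                }
                // A real monkey would also log each page visited and scan
                // the response for server error markers here.
            }
        } finally {
            browser.quit();
        }
    }
}
```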
It's often as simple as sitting someone down who's unfamiliar with the software and watching them use it. Fight the urge to correct them; just watch them flounder. It's very educational. Steve Krug refers to this as "Advanced Common Sense" and has an excellent book called "Don't Make Me Think" which covers cheap, simple user interaction testing. I highly recommend it. It's a very short and eye-opening read.
Finally, the clients themselves, if their expectations are properly prepared, can be a fantastic test suite. Be sure they understand it's a work in progress, that it will have bugs, that they're helping to make their product better, and that it absolutely should not be used for production data, and let them tinker with the pre-release versions of your product. They'll do all sorts of things you never thought of! They'll be the best and most realistic testing you ever had, FOR FREE! Give them a very simple way to report bugs, preferably just a one-button box right in the application which automatically submits their environment and history; the feedback box on Hiveminder is an excellent example. Respond to their bugs quickly and politely (even if it's just "thanks for the info") and you'll find they'll be delighted you're so responsive to their needs!
Yes, it is. I just ran into an issue this week with a web site I am working on. I recently switched out the data access layer and set up unit tests for my controllers and repositories, but not the UI interactions.
I got bit by a pretty obvious bug that would have been easily caught if I had integration tests. Only through integration tests and UI functionality tests do you find issues with the way different tiers of the application interact with one another.
It really depends on the structure and architecture of your web application. If it contains an application logic layer, then that layer should be easy to unit test with automating tools such as Visual Studio. Also, using a framework that has been designed to enable unit testing, such as ASP.NET MVC, helps a lot.
If you're writing a lot of JavaScript, there are a lot of JS testing frameworks that have come around the block recently for unit testing your JavaScript.
Other than that, testing the web tier using something like Canoo, HtmlUnit, Selenium, etc. is more a functional or integration test than a unit test. These can be hard to maintain if the UI changes a lot, but they can really come in handy. Recording Selenium tests is easy and something you could probably get other people (testers) to help you create and maintain. Just know that there is a cost associated with maintaining tests, and it needs to be balanced out.
There are other types of testing that are great for the web tier - fuzz testing especially, but a lot of the good options are commercial tools. One that is open source and plugs into Rails is called Tarantula. Having something like that at the web tier is a nice to have run in a continuous integration process and doesn't require much in the form of maintenance.
Unit tests make sense in a TDD process. They do not have much value if you don't do test-first development. However, acceptance tests are a big thing for the quality of the software. I'd say that the acceptance test is the holy grail of development. Acceptance tests show whether the application satisfies the requirements. How do I know when to stop developing a feature? Only when all my acceptance tests pass. Automation of acceptance testing is a big thing, because I do not have to do it all manually each time I make changes to the application. After months of development there can be hundreds of tests, and it becomes unfeasible (sometimes impossible) to run all the tests manually. Then how do I know if my application still works?
Automation of acceptance tests can be implemented with xUnit test frameworks, which creates some confusion here. If I create an acceptance test using phpUnit or httpUnit, is it a unit test? My answer is no. It does not matter what tool I use to create and run the test. An acceptance test is one that shows whether the feature is working in accordance with the requirements. A unit test shows whether a class (or function) satisfies the developer's implementation idea. A unit test has no value for the client (user). An acceptance test has a lot of value to the client (and thus to the developer; remember Customer Affinity).
So I strongly recommend creating automated acceptance tests for the web application.
Good frameworks for acceptance testing are:
Sahi (sahi.co.in)
Selenium
Simpletest (it's a unit-test framework for PHP, but includes a browser object that can be used for acceptance testing)
However
You have mentioned that the web site is all about user interaction, and thus test automation will not solve the whole problem of usability. For example: the testing framework shows that all tests pass, but the user cannot see the form, link, or other page element due to an accidental style="display:none" on the div. The automated tests pass because the div is present in the document and the test framework can "see" it. But the user cannot. And a manual test would fail.
Thus, all web applications need manual testing. Automated tests can reduce the test workload drastically (80%), but manual tests are still significant for the quality of the resulting software.
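That said, the specific display:none case above is one an automated test can catch, if it asserts visibility rather than mere presence. A small Selenium sketch of the distinction (the signup-form id is made up):

```java
import static org.junit.Assert.*;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class VisibilityAssertions {

    // Call from an existing test with a WebDriver already on the page.
    static void assertFormIsActuallyVisible(WebDriver browser) {
        WebElement form = browser.findElement(By.id("signup-form"));

        // findElement succeeding only proves the element is in the DOM.
        // isDisplayed() also fails on display:none / visibility:hidden,
        // which is exactly the bug described above.
        assertTrue("form present in DOM but hidden from the user",
                form.isDisplayed());
    }
}
```

It doesn't change the broader point, though: automation catches the cases you thought to assert, and manual testing catches the rest.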
As for unit testing and TDD: they improve code quality. They are beneficial to the developers and to the future of the project (i.e. for projects longer than a couple of months). However, TDD requires skill. If you have the skill, use it. If you don't, consider gaining the skill, but mind the time it will take. It usually takes about 3-6 months to start creating good unit tests and code. If your project will last more than a year, I recommend studying TDD and investing time in a proper development environment.
I've created a web test solution (Docker + Cucumber); it's very basic and simple, so it's easy to understand and modify/improve. It lives in the web directory.
My solution: https://github.com/gyulaweber/hosting_tests