Handling test data when going from running Selenium tests in series to parallel

I'd like to start running my existing Selenium tests in parallel, but I'm having trouble deciding on the best approach due to the way my current tests are written.
The first step of most of my tests is to get the DB into a clean state and then populate it with the data needed for the rest of the test. While this is great for isolating tests from each other, if I start running these same Selenium tests in parallel against the same SUT, they'll end up erasing other tests' data.
After much digging, I haven't been able to find any guidance or best-practices on how to deal with this situation. I've thought of a few ideas, but none have struck me as particularly awesome:
Rewrite the tests to not overwrite other tests' data, i.e. only add test data, never erase -- I could see this potentially leading to unexpected failures due to the variability of the database when each test is run. Anything from a different ordering of tests to an ill-placed failure could throw off the other tests. This just feels wrong.
Don't pre-populate the database -- Instead, create all needed data via Selenium itself. This would most closely replicate real-world usage, but would also take significantly longer than loading data directly into the database. It would probably negate any benefits from parallelization, depending on how much test data each test case needs.
Have each Selenium node test a different copy of the SUT -- This way, each test would be free to do as it pleases with the database, since we can assume that no other test is touching it at the same time. The downside is that I'd need to have multiple databases set up and, at the start of each test case, figure out how to coordinate which database to initialize and how to signal to the node and SUT that this particular test case should be using this particular database. Not awful, but not what I would love to do if there's a better way.
Have each Selenium node test a different copy of the SUT, but break up the tests into distinct suites, one suite per node, before run-time -- Also viable, but not as flexible, since over time you'd want to keep going back to even out the length of each suite as much as possible.
All in all, none of these seem like clear winners. Option 3 seems the most reasonable, but I also have doubts about whether that is even a feasible approach. After researching a bit, it looks like I'll need to write a custom test runner to facilitate running the tests in parallel anyway, but the parts regarding the initial test data still have me looking for a better way.
Anyone have any better ways of handling database initialization when running Selenium tests in parallel?
FWIW, the app and test suite are in PHP/PHPUnit.
Update
Since it sounds like the answer I'm looking for is very project-dependent, I'm at least going to attempt to come up with my own solution and report back with my findings.

There's no easy answer and it looks like you've thought out most of it. Also worth considering is to rewrite the tests to use separately partitioned data - this may or may not work depending on your domain (e.g. a separate bank account per node, if it's a banking app). Your pre-population of the DB could be restricted to static reference data, or you could pre-populate the data for each separate 'account'. Again, depends on how easy this is to do for your data.
I'm inclined to vote for option 3, though, because database setup is relatively easy to script these days and the hardware requirements probably aren't too high for a small test data suite.
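Since the stack here is PHP/PHPUnit, any real implementation would live there; the following is only a language-agnostic sketch, in Python, of the option 3 / partitioned-database mechanism. The WORKER_ID environment variable is hypothetical: it stands in for whatever per-node identifier your parallel runner can export.

```python
import os
import sqlite3  # stand-in for your real database driver; the mechanism is the same

# Hypothetical: the parallel runner exports a distinct WORKER_ID ("0", "1", ...) per node.
WORKER_ID = os.environ.get("WORKER_ID", "0")
DB_NAME = f"app_test_{WORKER_ID}.db"  # each node owns its own database copy

def reset_database(fixture_rows):
    """Wipe this node's database and load only the data the current test needs."""
    conn = sqlite3.connect(DB_NAME)
    conn.execute("DROP TABLE IF EXISTS accounts")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO accounts (name) VALUES (?)", fixture_rows)
    conn.commit()
    return conn

# In each test's setup: because only this node's copy of the SUT is configured to use
# DB_NAME, wiping it cannot disturb tests running on other nodes.
conn = reset_database([("alice",), ("bob",)])
```

The copy of the SUT on each node would read the same identifier, so its connection string points at the matching database.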

What tools exist for managing a large suite of test programs?

I apologize if this has been answered before, but I'm having trouble finding a tool that fits my needs.
I have a few dozen test programs, but each one can be run with a large number of parameters. I need to be able to automatically run sweeps of many of the parameters across all or some of the test programs. I have my own set of tools for running an individual test, which I can't really change, but I'm looking for a tool that would manage the entire suite.
Thus far, I've used a home-grown script for this. The main problem I run across is that an individual test program might take 5-10 parameters, each with several values. Although it would be easy to write something that would just do a nested for loop and sweep over every parameter combination, the difficulty is that not every combination of parameters makes sense, and not every parameter makes sense for every test program. There is no general way (i.e., that works for all parameters) to codify what makes sense and what doesn't, so the solutions I've tried before involve enumerating each sensible case. Although the enumeration is done with a script, it still leads to a huge cross-product of test cases which is cumbersome to maintain. We also don't want to run the giant cross-product of cases every time, so I have other mechanisms to select subsets of it, which gets even more cumbersome to deal with.
I'm sure I'm not the first person to run into a problem like this. Are there any tools out there that could help with this kind of thing? Or even ideas for writing one?
Thanks.
Adding a clarification ---
For instance, if I have parameters A, B, and C that each represent a range of values from 1 to 10, I might have a restriction like: if A=3, then only odd values of B are relevant and C must be 7. The restrictions can generally be codified, but I haven't found a tool where I could specify something like that. As for a home-grown tool, I'd either have to enumerate the tuples of parameters (which is what I'm doing) or implement something quite sophisticated to be able to specify and understand constraints like that.
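To make the restriction concrete, it can be codified as a predicate over the full cross-product instead of an enumerated list; here is a small Python sketch, purely illustrative, where the parameters and the rule are just the example above:

```python
from itertools import product

# Full ranges for each parameter (the example above: each runs over 1..10).
RANGES = {"A": range(1, 11), "B": range(1, 11), "C": range(1, 11)}

def is_sensible(combo):
    """Codify the restriction: if A=3, only odd values of B are relevant and C must be 7."""
    if combo["A"] == 3:
        return combo["B"] % 2 == 1 and combo["C"] == 7
    return True  # in this toy example, every other combination is allowed

def sweep():
    """Yield only the sensible parameter combinations instead of the full cross-product."""
    keys = list(RANGES)
    for values in product(*(RANGES[k] for k in keys)):
        combo = dict(zip(keys, values))
        if is_sensible(combo):
            yield combo

if __name__ == "__main__":
    print(sum(1 for _ in sweep()), "sensible combinations out of", 10 ** 3)
```

Combinatorial test-generation tools generally expose the same idea as declarative constraints over a parameter model, but even a plain predicate like this keeps the enumeration out of hand-maintained lists.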
We rolled our own; we have a whole test infrastructure. It manages the tests and has a number of built-in features that allow the tests to log results; the logs are fed by the test infrastructure into a searchable database for all kinds of report generation.
Each test has a class/structure containing information about the test: the name of the test, the author, and a variety of other tags. When running a test suite you can run everything, or run everything with a certain tag. So if you want to test only SRAM, you can easily run only the tests tagged sram.
Our tests are all considered either pass or fail. The pass/fail criteria are determined by the author of the individual test, but the infrastructure wants to see either pass or fail. You need to define what your possible results are: as simple as pass/fail, or you might want to add "pass and keep going", "pass but stop testing", "fail but keep going", and "fail and stop testing". "Stop testing" means that if there are 20 tests scheduled and test 5 fails, you stop; you don't go on to test 6.
You need a mechanism to order the tests. It could be alphabetical, but it might benefit from a priority scheme (you must perform the power-on test before performing a test that requires the power to be on). It may also benefit from random ordering: some tests may be passing through dumb luck because a test before them made something work; remove that prior test and this test fails. Or vice versa: this test passes until it is preceded by a specific test, and those two don't get along in that order.
To shorten my answer: I don't know of an existing infrastructure, but I have built my own and worked with home-built ones tailored to our business/lab/process. You won't hit a home run the first time; don't expect to. But try to predict a manageable set of rules for individual tests: how many types of pass/fail return values a test can return, the types of filters you want to put in place, the type of logging you may wish to do, and where you want to store that data. Then create the infrastructure and the mandatory shell/frame for each test, and individual testers have to work within that shell. Our current infrastructure is in Python, which lent itself to this nicely, and we are not restricted to Python-based tests only; we can use C or Python, and the target can run whatever languages/programs it can run. Abstraction layers are good: we use a simple read/write of an address to access the unit under test, and with that we can test against a simulation of the target or against real hardware when the hardware arrives. We can access the hardware through a serial debugger, JTAG, or PCIe, and the majority of the tests don't know or care because they are on the other side of the abstraction.
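The infrastructure described above is proprietary, so the following is only a rough Python illustration of two of its ideas (tag-based selection and an explicit result enumeration with a "stop testing" outcome), not their actual code:

```python
from enum import Enum

class Result(Enum):
    PASS = "pass"
    FAIL = "fail"
    FAIL_STOP = "fail and stop testing"  # abort the remaining scheduled tests

class TestCase:
    def __init__(self, name, author, tags, func):
        self.name, self.author, self.tags, self.func = name, author, set(tags), func

REGISTRY = []

def register(name, author, tags):
    """Decorator that records a test function along with its descriptive tags."""
    def wrap(func):
        REGISTRY.append(TestCase(name, author, tags, func))
        return func
    return wrap

@register("sram_walking_ones", author="jd", tags=["sram", "memory"])
def sram_walking_ones():
    return Result.PASS  # real test logic would go here

def run(tag=None):
    """Run every registered test, or only the ones carrying the given tag."""
    for case in REGISTRY:
        if tag and tag not in case.tags:
            continue
        result = case.func()
        print(case.name, "->", result.value)
        if result is Result.FAIL_STOP:
            break  # stop: don't go on to the remaining tests

run(tag="sram")
```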

What is a good method of doing TDD with legacy Delphi code having embedded SQL

I have to take some legacy Delphi code that points to a database and make it support a new, better database with a completely different schema. The new database holds the same data. The code uses a combination of stored procedures and embedded SQL.
Is there a good test-driven development technique that will help make sure I don't break anything? This code has almost no unit tests and I need to make changes to a lot of hard-coded SQL.
Just manually running the application after every change sounds error-prone and time-consuming. I love the idea of doing TDD or BDD, just not sure how to do it.
It's good that you want to get into unit testing, but I'd like to caution you against taking it on over-zealously.
Adding unit tests to legacy code is a major undertaking, and it's almost always totally unfeasible to halt other work just to add test cases. Also, unless you already have experience in TDD, that learning curve itself can prove a troublesome hurdle to overcome.
However, if you persevere, and take things one step at a time, your efforts will be rewarded in the end.
The problems you're likely to encounter:
Legacy applications are usually very difficult to 'retro-fit' with test cases. This is because the code wasn't written with testability in mind.
Many routines are doing too many things, so tests have to consider large numbers of side-effects.
Code is not properly self-contained, so setting up pre-conditions for a test is a lot of work.
Entry points for testing/checking behaviour are often missing, because they weren't needed for production code and therefore weren't added in the first place.
Code often relies on global state somewhere, either directly or via singletons. This global state (regardless of where it lies) plays havoc with your test cases.
Unit testing of databases is inherently more difficult than other kinds of unit testing. The reason for this is that test cases don't like global state - and databases are effectively massive containers of global state. Problems manifest themselves in many ways:
If you're using IDENTITY columns, auto-increment columns, or number generators of any form: these either produce different values on each test run, or you need a way to reset those numbers between tests.
Databases are slow. Once you've built up a large number of test cases it will be impractical to run all tests between every change. (One of my Db Test suites takes almost 10 minutes to run.)
If your database generates date/time values, these can also complicate testing. Especially if the database runs on a different machine.
Database testing is complicated by the fact that there are two aspects to the database: Its schema, and its data. So if you wish to test a new/changed stored procedure (part of the schema), it needs appropriate changes to the data and possibly to other aspects of the schema (such as tables/views).
Even without the above extra complications, there are the 'normal problems' you'll have to deal with.
Global state often crops up unexpectedly in some awkward places. Consider Now() which returns a TDateTime. It uses global state: the current date-time. If you have time/date based rules in your system, those rules may return different results depending on when your tests are run. Unless you find an effective way to deal with this challenge, you'll have a number of "erratic" test cases.
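The question is about Delphi, but the usual cure for the Now() problem is the same in any language: route "what time is it?" through a seam that tests can replace. A minimal sketch of the idea in Python (the names are illustrative only, not from the original code):

```python
from datetime import datetime

class SystemClock:
    def now(self):
        return datetime.now()          # production: the real current date-time

class FixedClock:
    def __init__(self, fixed):
        self.fixed = fixed
    def now(self):
        return self.fixed              # tests: a known, repeatable date-time

def is_overdue(invoice_due, clock):
    """A date-based rule that takes the clock as a dependency instead of calling now() directly."""
    return clock.now() > invoice_due

# In a test, the rule behaves the same no matter when the suite is run:
assert is_overdue(datetime(2010, 1, 1), FixedClock(datetime(2010, 6, 1)))
```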
Writing test cases is a fundamentally different programming paradigm from what most developers are used to. It can be extremely difficult to break old habits. The style of test case code is almost declarative: given this, when I do this, I expect this to have happened. Test cases need to be simple and clear about what they're trying to achieve.
The learning curve can be tricky. Initially you may find yourself taking 3 times as long to write code if unfamiliar with test cases. And even though it will eventually improve (possibly even to the point where you're faster than you used to be with unstructured and haphazard testing) - other people around you will likely express frustration. (Not cool if it's your boss.)
Hopefully I haven't discouraged you, I do have some practical advice:
As the saying goes "Don't bite off more than you can chew."
Be prepared to start out slow. For the time being, carry on with most of your work in a way that's familiar to you. But force yourself to write 1 or 2 test cases every day. As you get more comfortable, you can increase this number.
Try to stick to the "tried and tested" principles.
The TDD workflow is: first write the test and ensure that the test fails. I know it is difficult to stick to the habit, but the principle serves a very important purpose. It's a level of confirmation that your test case proves the bug / missing feature. Far too often I've seen test case code that would pass with or without the production change, making the test somewhat useless.
For your database tests you'll need to establish a framework that works for you.
First, you'll need a mechanism of getting your database to a 'base-state'. One from which all your tests should be able to pass - no matter what order or how many times they are run. Typically this will involve some sort of Reset between tests (but it needs to be quite quick). Second, you'll need an easy way to update the schema of your database to what is expected by production code.
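What that framework looks like depends entirely on your database and tools; purely as a sketch of the "reset to a base state before every test" idea, here is a Python/unittest version with SQLite standing in for the real database:

```python
import sqlite3
import unittest

SCHEMA = "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)"
BASE_DATA = [("ACME",), ("Globex",)]

class DbTestCase(unittest.TestCase):
    def setUp(self):
        # Reset: every test starts from the same known schema and base data,
        # so tests pass regardless of order or how many times they run.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(SCHEMA)
        self.conn.executemany("INSERT INTO customers (name) VALUES (?)", BASE_DATA)
        self.conn.commit()

    def tearDown(self):
        self.conn.close()

    def test_insert_customer(self):
        self.conn.execute("INSERT INTO customers (name) VALUES ('Initech')")
        count = self.conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
        self.assertEqual(count, 3)

if __name__ == "__main__":
    unittest.main()
```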
Initially you'll only want to test new features, or bug fixes.
Avoid the temptation to test everything. Over time, your test case coverage will increase. Once your framework and patterns have been established, then you might get a chance to start adding tests just to increase coverage.
Refactoring existing code.
As you become familiar with testing, you'll learn about the coding habits that make testing more difficult. You'll probably find many such problems in legacy code. Such code will not be testable as is. You may need to refactor your code before you can even test it. Obviously this is not ideal, because you'd rather have tests that always pass to prove that your changes haven't broken anything. A good book on refactoring will give you some techniques you can use that will change the structure of your code without changing its behaviour.
Testing existing code.
When writing a test for an existing routine, look at the code and determine each of the inputs that can cause different behaviour. E.g. when there's an if statement, something will cause the condition to evaluate to True and something else to False. At a minimum, you'll want a test for each permutation.
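A tiny illustration of "one test per permutation" (Python used just for brevity; the routine is invented):

```python
def shipping_cost(order_total):
    # One if statement: two behaviours to pin down with tests.
    if order_total >= 100:
        return 0.0      # free-shipping branch
    return 9.95         # paid-shipping branch

def test_shipping_free_when_total_at_least_100():
    assert shipping_cost(150) == 0.0

def test_shipping_charged_when_total_below_100():
    assert shipping_cost(50) == 9.95
```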
In your place I would use DUnit to create a unit test project. For each of the entities I would write test methods that run the old and the new SQL statements, and then write methods to compare the results.
I would write a TTestCase class named, let's say, TMyTestCase, add some helper methods to it, and then create my new test classes as subclasses of TMyTestCase.
The idea of the ancestor class is to provide common functionality that makes it easier to write the tests (the comparison methods, for instance) in order to enhance productivity and comfort.
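DUnit is the natural fit for Delphi; just to illustrate the shape of such a comparison helper, here is a rough language-agnostic sketch in Python (the connections and queries are placeholders, assuming both databases are reachable from the test run):

```python
def fetch_all(conn, sql, params=()):
    """Run a query and return its rows as a set, for order-independent comparison."""
    cur = conn.cursor()
    cur.execute(sql, params)
    return set(cur.fetchall())

def assert_same_result(old_conn, new_conn, old_sql, new_sql, params=()):
    """Helper the ancestor test class would provide: old and new SQL must return the same data."""
    old_rows = fetch_all(old_conn, old_sql, params)
    new_rows = fetch_all(new_conn, new_sql, params)
    assert old_rows == new_rows, (
        f"mismatch: only-old={old_rows - new_rows}, only-new={new_rows - old_rows}"
    )

# Example usage inside a test method (connections and queries are placeholders):
# assert_same_result(old_db, new_db,
#                    "SELECT id, name FROM customer",
#                    "SELECT customer_id, full_name FROM customers")
```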
You could start by building a database simulator: connect it in place of the old one and see what it needs to do. It's a lot of work, though.

Need for integration testing

We have Eclipse UI in the frontend and have a non Java based backend.
We generally write Unit tests separately for both frontend and backend.
Also, we write PDE tests which run the Eclipse UI against a dummy backend.
My question is: do we need to have integration tests which test end to end?
One reason I can see that these integration tests are useful: when I upgrade my frontend/backend, I can run end-to-end tests and find defects.
I know these kinds of questions depend on the particular scenario.
But I would like to know the general best practice followed by people here.
cheers,
Saurav
As you say, the best approach is dependent on the application. However, in general it is a good idea to have a suite of integration tests that can test your application end-to-end, to pick up any issues that may occur when you upgrade only one layer of the application without taking those changes into account in another layer. This sounds like it would definitely be worthwhile in your case, given that you have system components written in different languages, which naturally creates more chance of issues arising due to the added complexity around the component interfaces.
One thing to be aware of when writing end-to-end integration tests (which some would call system tests) is that they tend to be quite fragile when compared to unit tests, which is a combination of a number of factors, including:
They require multiple components to be available for the tests, and for the communication between these components to be configured correctly.
They exercise more code than a unit test, and therefore there are more things that can go wrong that can cause them to fail.
They often involve asynchronous communication, which is more difficult to write tests for than synchronous communication.
They often require complex backend data setup before you can drive tests through the entire application.
Because of this fragility, I would advise trying to write as few tests as possible that go through the whole stack - the focus should be on covering as much functionality as possible in the fewest tests possible, with a bias towards your most important functional use-cases. A good strategy to get started would be:
Pick one key use-case (ideally one which touches as many components in the application as possible), and work on getting an end-to-end test for it. Focus on making this test as realistic as possible (i.e. use a production-like deployment), as reliable as possible, and as automated as possible (ideally it should run as part of continuous integration). Even just having this single test brings a lot of value.
Build out tests for other use-cases one test at a time, again focusing on your most important use-cases at first.
This approach will help to ensure that your end-to-end tests are of high quality, which is vital for their long-term health and usefulness. Too many times I have seen people try to introduce a comprehensive suite of such tests to an application, but ultimately fail because the tests are fragile & unreliable, people lose faith in them, don't run or maintain them, and eventually they forget they even had the tests in the first place.
Good luck and have fun!

CRUD Web App Automated Testing best practices

G'day,
I'm working with a fairly DB heavy web app and looking at setting up automated testing for it using Selenium. However, as an automated testing newbie, I don't know where to start.
How do you categorize your tests to make sure they're logically sound as well as complete?
How do you deal with the DB when testing? Build a fresh DB before every test and drop the tables after each test? Start with a test-db?
Just looking for some pointers to what best practices are in this regard.
Thanks,
In general...
If your main goal is to test database CRUD operations I would go at least 'one level down' and write some kind of integration tests that do not use the GUI for testing. The tests become a lot more focused on the actual CRUD operations if you take the GUI out.
How to deal with the database...
No matter whether you go with Selenium or integration tests, it is a good idea that the tests do not depend on each other. This means setting up the database before each test and/or tearing it down to a clean/known state after the test. Tests written this way are a lot easier to maintain. For example, you can run a single test by itself.
For both our integration and acceptance tests we use DbUnit to achieve this. Easily setting up and tearing down DBs is not a new thing; there should be something available for your technology stack as well. (You did not mention the technologies you are using.)
How to categorize the tests...
For CRUD operations I would make sure I test one thing and one thing only. For example, I have an Employee table. You can have a test suite that tests everything that has to do with an Employee, but a single test should only test one thing. 'Save Employee successfully' should be a different test case from 'Attempt to save an Employee that already exists' or 'Delete Employee'.
EDIT: (Answer to the comment)
We basically kill the database and build it from scratch at the beginning of the testing. (Not sure how crucial this part is, but this makes sure that our db is consistent with what the code expects. We are using hibernate...)
Then for each test we have a different dataset to insert. So let's say again that we are testing Employee. If I want to test deleting an Employee, I would insert the smallest dataset needed to make this happen. Smaller datasets are easier to maintain. If you use the same dataset for all of your tests, it will become very hard to change the code and to change or add new tests.
We do use the same dataset for things that seem to require the same information. For example, you want to test 'Attempt to save Employee to database' and 'Delete Employee'. You might reuse one dataset for this.
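DbUnit is Java-specific, but the "smallest dataset per test" pattern translates to any stack; here is a hedged sketch in Python with SQLite standing in for the real database:

```python
import sqlite3
import unittest

def load_dataset(conn, rows):
    """Rebuild the employee table and load only the rows this particular test needs."""
    conn.execute("DROP TABLE IF EXISTS employee")
    conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO employee (id, name) VALUES (?, ?)", rows)
    conn.commit()

class EmployeeTests(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")

    def test_delete_employee(self):
        load_dataset(self.conn, [(1, "Alice")])   # smallest dataset that allows a delete
        self.conn.execute("DELETE FROM employee WHERE id = 1")
        count = self.conn.execute("SELECT COUNT(*) FROM employee").fetchone()[0]
        self.assertEqual(count, 0)

    def test_save_existing_employee_fails(self):
        load_dataset(self.conn, [(1, "Alice")])   # dataset reused for the 'already exists' case
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute("INSERT INTO employee (id, name) VALUES (1, 'Bob')")

if __name__ == "__main__":
    unittest.main()
```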
"I was wondering if building and tearing down the DB for each test would be expensive time and computing wise?"
I would not worry too much about this. Yes, it might add, let's say, 3-4 seconds to every test, but in the big picture, is this really important? It is more important that you have tests that aim for maintainability, because your time as a developer is a lot more valuable than these tests taking 5 minutes to run instead of 3 minutes.
I don't know anything about Selenium, but I found a JavaRanch article that was really well-done. In the section titled "Unit Testing Mock Basics" the author shows a good method of creating mock objects to avoid any database calls. Obviously, you'll need some integration tests that involve DB calls, but for plain old unit tests, the outlined method works well.
Remember, running unit tests should be super fast.
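I can't reproduce the article here, but the general shape of the mock-based approach it describes looks roughly like this (sketched in Python with unittest.mock standing in for whatever mock library your stack uses; the names are invented):

```python
from unittest.mock import Mock

class EmployeeService:
    """Code under test: depends on a repository object, not on the database directly."""
    def __init__(self, repository):
        self.repository = repository

    def promote(self, employee_id):
        employee = self.repository.find(employee_id)
        employee["grade"] += 1
        self.repository.save(employee)
        return employee

def test_promote_uses_repository_without_touching_db():
    repo = Mock()
    repo.find.return_value = {"id": 7, "grade": 2}   # canned data instead of a real query

    promoted = EmployeeService(repo).promote(7)

    assert promoted["grade"] == 3
    repo.save.assert_called_once_with({"id": 7, "grade": 3})

test_promote_uses_repository_without_touching_db()
```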

How many cycles are required to validate an automated script

I have one query. Maybe it is a silly question, but I still need an answer to clear my doubts.
Testing is evaluating the product or application. We do testing to check whether there are any show stoppers, any issues that should not be present.
We automate (I am talking about scripting) test cases from the existing manual test cases. Once a test case is automated, how many cycles do we need to run the script to check that it runs with no major errors, so that the script is reliable enough to run instead of executing the test cases manually?
Thanks in advance.
If the test script always fails when a test fails, you need to run the script only once. Running the script several times without changing the code will not give you additional safety.
You may discover that your tests depend on some external source that changes during the tests and thereby make the tests fail sometimes. Running the tests several times will not solve this issue, either. To solve it, you must make sure that the test setup really initializes all external factors in such a way that the tests always succeed. If you can't achieve this, you can't test reliably, so there is no way around this.
That said, tests can never make sure that your product is 100% correct or safe. They just make sure that your product is still as good as (or better than) it was before all the changes you made since the last test. It's kind of like having a watermark which tells you the minimum level of quality that you can depend on. Anything above the watermark is speculation, but below it (the part that your tests cover) is safe.
So by refining your tests, you can make your product better with every change. Without the automatic tests, every change has a chance to make your product worse. This means, without tests, your quality will certainly deteriorate while with tests, you can guarantee to maintain a certain amount of quality.
It's a huge field with no simple answer.
It depends on several factors, including:
The code coverage of your tests
How you define reliable