How to use automation for testing an application involving highly complex calculations?

I want to know the following things about testing an application that involves complex calculations:
How do I use test automation tools (such as QTP or open-source tools) to test calculations?
How do I decide on coverage when testing calculations, and how do I design the test cases?
Thanks in advance,
Testmann

We had to test some really complex calculations in an application we built. To do this we used a tool called FitNesse, which is a wiki test harness (and open source too). It works really well when you provide it with data in a table-style format.
We had some C# code that performed some VERY complex calculations. So we wrote a test harness in FitNesse and supplied it with a lot of test data. We worked very hard to cover all cases, so we used a sort of internal truth table to ensure we were getting every possible combination of data inputs.
The FitNesse test harness has been invaluable to us as the complexity of the calculations has changed over time due to changing requirements. We've been able to ensure the correctness of the calculations because our FitNesse tests act as a very nice regression suite.
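FitNesse fixtures are usually written in the language of the system under test (C# in our case), but the table-driven idea itself is easy to sketch. Below is a minimal, hypothetical illustration in plain Python with pytest: the rows play the role of a FitNesse decision table, and calculate_premium stands in for whatever calculation you are testing.

```python
import pytest

from myapp.calculations import calculate_premium  # hypothetical function under test

# Each row plays the role of one line in a decision table:
# inputs on the left, expected result on the right.
PREMIUM_TABLE = [
    # (age, smoker, coverage, expected_premium)
    (25, False, 100_000, 18.50),
    (25, True,  100_000, 31.20),
    (60, False, 100_000, 74.00),
    (60, True,  100_000, 122.90),
]

@pytest.mark.parametrize("age, smoker, coverage, expected", PREMIUM_TABLE)
def test_premium_matches_table(age, smoker, coverage, expected):
    # Expected values would come from a trusted source (spec, spreadsheet, domain expert).
    assert calculate_premium(age, smoker, coverage) == pytest.approx(expected, abs=0.01)
```

Growing the table is then just a matter of adding rows, which is what makes the truth-table approach manageable as the calculations change.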

Sometimes, you have to estimate the expected conclusion, and then populate the test case from a run of the program.
It's not so much of a mortal sin, as long as you're convinced it's correct. Those tests will then let you know immediately if a code change breaks the code. Also, if you're testing a subset, it's not that big of a stretch of trust.
And for coverage? Cover every branch at least once (that is, every if statement and loop). Cover every threshold, on both sides of it (for integer division that would be -1, 0, and 1 as denominators). Then add a few more for good measure.
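As a concrete (and hypothetical) sketch of what "both sides of every threshold" looks like in practice, here is how it might read with pytest; rate and discounted_total are made-up functions, and the asserted behaviour is simply the contract you decide they should have.

```python
import pytest

from myapp.calculations import rate, discounted_total  # hypothetical functions

@pytest.mark.parametrize("denominator", [-1, 1])
def test_rate_on_both_sides_of_zero(denominator):
    # Exercise the values immediately on either side of the dangerous one.
    assert rate(100, denominator) == 100 / denominator

def test_rate_at_the_threshold_itself():
    # Pin down whatever the agreed contract is for the boundary value.
    with pytest.raises(ZeroDivisionError):
        rate(100, 0)

@pytest.mark.parametrize("quantity, expect_discount", [(99, False), (100, True), (101, True)])
def test_discount_threshold_both_sides(quantity, expect_discount):
    # Hypothetical business threshold: discount kicks in at quantity >= 100.
    total = discounted_total(unit_price=10.0, quantity=quantity)
    assert (total < 10.0 * quantity) == expect_discount
```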

To test existing code, you should assume that the code is (mostly) correct. So you just give it some data, run it and record the result. Then use that recorded result in a test case.
When you make the next change, your output will probably change too and the test will fail. Compare the new result with what you expected. If there is a discrepancy, then you're missing something -> write another test to figure out what is going on.
This way, you can build expertise about an unknown system.
When you ask about coverage, I assume that you can't collect coverage data for the actual calculations. In that case, just make sure that all calculations are executed and feed them with several inputs. That should give you an idea of how to proceed.
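A small sketch of the record-the-result idea described above, using Python's unittest. The module, function, and numbers are all hypothetical; the point is that the expected value is captured from a trusted run of the existing code rather than derived by hand.

```python
import unittest

from legacy.calculations import compute_quarterly_interest  # hypothetical existing code

class CharacterizationTests(unittest.TestCase):
    def test_known_input_produces_recorded_output(self):
        # 125.00 was recorded from one run of the current implementation and
        # sanity-checked by hand (10,000 * 5% / 4 quarters).
        result = compute_quarterly_interest(principal=10_000, annual_rate=0.05)
        self.assertAlmostEqual(result, 125.00, places=2)

if __name__ == "__main__":
    unittest.main()
```

Tests like this are often called characterization tests: they document what the code currently does, so any future change in behaviour is flagged immediately.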

Related

What tools exist for managing a large suite of test programs?

I apologize if this has been answered before, but I'm having trouble finding a tool that fits my needs.
I have a few dozen test programs, but each one can be run with a large number of parameters. I need to be able to automatically run sweeps of many of the parameters across all or some of the test programs. I have my own set of tools for running an individual test, which I can't really change, but I'm looking for a tool that would manage the entire suite.
Thus far, I've used a home-grown script for this. The main problem I run across is that an individual test program might take 5-10 parameters, each with several values. Although it would be easy to write something that would just do a nested for loop and sweep over every parameter combination, the difficulty is that not every combination of parameters makes sense, and not every parameter makes sense for every test program. There is no general way (i.e., that works for all parameters) to codify what makes sense and what doesn't, so the solutions I've tried before involve enumerating each sensible case. Although the enumeration is done with a script, it still leads to a huge cross-product of test cases which is cumbersome to maintain. We also don't want to run the giant cross-product of cases every time, so I have other mechanisms to select subsets of it, which gets even more cumbersome to deal with.
I'm sure I'm not the first person to run into a problem like this. Are there any tools out there that could help with this kind of thing? Or even ideas for writing one?
Thanks.
Adding a clarification ---
For instance, if I have parameters A, B, and C that each represent a range of values from 1 to 10, I might have a restriction like: if A=3, then only odd values of B are relevant and C must be 7. The restrictions can generally be codified, but I haven't found a tool where I could specify something like that. As for a home-grown tool, I'd either have to enumerate the tuples of parameters (which is what I'm doing) or implement something quite sophisticated to be able to specify and understand constraints like that.
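One way a home-grown tool can avoid enumerating the tuples by hand is to generate the full cross-product and filter it through constraint predicates. The sketch below uses the parameter names and the example rule from the question; everything else (and whether this scales to your real parameter space) is an assumption. Constraint-aware combinatorial tools such as Microsoft's PICT are also worth a look, since they let you express restrictions like this and generate reduced (e.g. pairwise) test sets.

```python
from itertools import product

# Parameters A, B, C each range over 1..10, as in the question.
PARAMS = {"A": range(1, 11), "B": range(1, 11), "C": range(1, 11)}

def constraints_ok(case):
    # Example rule from the question: if A=3, only odd B is relevant and C must be 7.
    if case["A"] == 3 and (case["B"] % 2 == 0 or case["C"] != 7):
        return False
    return True

def generate_cases(params, predicate):
    names = list(params)
    for values in product(*(params[name] for name in names)):
        case = dict(zip(names, values))
        if predicate(case):
            yield case

if __name__ == "__main__":
    cases = list(generate_cases(PARAMS, constraints_ok))
    print(f"{len(cases)} sensible combinations out of {10 ** 3}")
```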
We rolled our own; we have a whole test infrastructure. It manages the tests and has a number of built-in features that let the tests log results, and the infrastructure collects those logs into a searchable database for all kinds of report generation.
Each test has a class/structure that holds information about the test: the test name, the author, and a variety of other tags. When running a test suite you can run everything, or run only the tests with a certain tag. So if you want to test only SRAM, you can easily run just the tests tagged sram.
Our tests are all considered either pass or fail. The pass/fail criteria are determined by the author of the individual test, but the infrastructure wants to see either pass or fail. You need to define what your possible results are: as simple as pass/fail, or you might want to add pass and keep going, pass but stop testing, fail but keep going, and fail and stop testing. "Stop testing" means that if there are 20 tests scheduled and test 5 fails, you stop; you don't go on to test 6.
You need a mechanism to order the tests. It could be alphabetical, but it might benefit from a priority scheme (you must perform the power-on test before performing a test that requires the power to be on). It may also benefit from random ordering: some tests may be passing by dumb luck because a test before them made something work; remove that prior test and this test fails. Or vice versa: this test passes until it is preceded by a specific test, and those two don't get along in that order.
To shorten my answer: I don't know of an existing infrastructure, but I have built my own and worked with home-built ones that were tailored to our business/lab/process. You won't hit a home run the first time, so don't expect to. But try to predict a manageable set of rules for individual tests: how many types of pass/fail return values a test can return, the types of filters you want to put in place, the type of logging you may wish to do and where you want to store that data. Then create the infrastructure and the mandatory shell/frame for each test, and individual testers have to work within that shell. Our current infrastructure is in Python, which lent itself to this nicely, and we are not restricted to Python-based tests; we can use C or Python, and the target can run whatever languages/programs it can run. Abstraction layers are good: we use a simple read/write of an address to access the unit under test, and with that we can test against a simulation of the target or against real hardware when the hardware arrives. We can access the hardware through a serial debugger, JTAG, or PCIe, and the majority of the tests don't know or care, because they are on the other side of the abstraction.
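A stripped-down sketch of the kind of shell this answer describes, in Python: each test declares its metadata (name, author, tags) and returns one of a fixed set of results that the runner understands. All of the names and the exact result set are hypothetical; the real infrastructure would add logging, filtering, and reporting on top.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable

class Result(Enum):
    PASS = "pass"
    PASS_STOP = "pass, stop testing"
    FAIL = "fail, keep going"
    FAIL_STOP = "fail, stop testing"

@dataclass
class TestCase:
    name: str
    author: str
    run: Callable[[], Result]          # the test body, written by the test author
    tags: set = field(default_factory=set)
    priority: int = 100                # lower runs earlier (e.g. the power-on test)

def run_suite(tests, tag=None):
    """Run everything, or only the tests carrying a given tag, in priority order."""
    selected = [t for t in tests if tag is None or tag in t.tags]
    for test in sorted(selected, key=lambda t: t.priority):
        result = test.run()
        print(f"{test.name}: {result.value}")
        if result in (Result.PASS_STOP, Result.FAIL_STOP):
            break
```

A call like run_suite(all_tests, tag="sram") then gives the "only test SRAM" behaviour described above.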

What is a good method of doing TDD with legacy Delphi code having embedded SQL

I have to take some legacy Delphi code pointing to a database and make it support a new, better database with a completely different schema. The new database holds the same data. The code uses a combination of stored procedures and embedded SQL.
Is there a good test-driven development technique that will help make sure I don't break anything? This code has almost no unit tests and I need to make changes to a lot of hard-coded SQL.
Just running the application by hand after every change sounds error-prone and time-consuming. I love the idea of doing TDD or BDD; I'm just not sure how to do it.
It's good that you want to get into unit testing, but I'd like to caution you against taking it on over-zealously.
Adding unit tests to legacy code is a major undertaking, and it's almost always totally unfeasible to halt other work just to add test cases. Also, unless you already have experience in TDD, that learning curve itself can prove a troublesome hurdle to overcome.
However, if you persevere, and take things one step at a time, your efforts will be rewarded in the end.
The problems you're likely to encounter:
Legacy applications are usually very difficult to 'retro-fit' with test cases. This is because the code wasn't written with testability in mind.
Many routines are doing too many things, so tests have to consider large numbers of side-effects.
Code is not properly self-contained, so setting up pre-conditions for a test is a lot of work.
Entry points for testing/checking behaviour are often missing, because they weren't needed for production code and therefore were never added in the first place.
Code often relies on global state somewhere, either directly or via singletons. This global state (regardless of where it lives) plays havoc with your test cases.
Unit testing of databases is inherently more difficult than other kinds of unit testing. The reason for this is that test cases don't like global state - and databases are effectively massive containers of global state. Problems manifest themselves in many ways:
If you're using IDENTITY columns, auto-increment columns, or number generators of any form: these either produce values that differ between test runs, or you need a way to reset those numbers between tests.
Databases are slow. Once you've built up a large number of test cases it will be impractical to run all tests between every change. (One of my Db Test suites takes almost 10 minutes to run.)
If your database generates date/time values, these can also complicate testing. Especially if the database runs on a different machine.
Database testing is complicated by the fact that there are two aspects to the database: Its schema, and its data. So if you wish to test a new/changed stored procedure (part of the schema), it needs appropriate changes to the data and possibly to other aspects of the schema (such as tables/views).
Even without the above extra complications, there are the 'normal problems' you'll have to deal with.
Global state often crops up unexpectedly in some awkward places. Consider Now() which returns a TDateTime. It uses global state: the current date-time. If you have time/date based rules in your system, those rules may return different results depending on when your tests are run. Unless you find an effective way to deal with this challenge, you'll have a number of "erratic" test cases.
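The usual way out of the Now() problem is to route every "what time is it?" question through an injectable clock, so tests can pin the date. Sketched below in Python for brevity (the same pattern works in Delphi with an interface or a function reference); the rule and all names are hypothetical.

```python
from datetime import datetime

class SystemClock:
    """Production clock: delegates to the real current date-time."""
    def now(self) -> datetime:
        return datetime.now()

class FixedClock:
    """Test clock: always returns the same instant, so tests are deterministic."""
    def __init__(self, fixed: datetime):
        self._fixed = fixed

    def now(self) -> datetime:
        return self._fixed

def is_invoice_overdue(due_date: datetime, clock) -> bool:
    # A hypothetical time-based rule; production code passes SystemClock().
    return clock.now() > due_date

def test_overdue_rule_is_deterministic():
    clock = FixedClock(datetime(2020, 6, 1))
    assert is_invoice_overdue(datetime(2020, 5, 31), clock)
    assert not is_invoice_overdue(datetime(2020, 6, 2), clock)
```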
Writing test cases is a fundamentally different programming paradigm to what most developers are used to. It can be extremely difficult to break old habits. The style of test case code is almost declarative: Given this, When I do This, I expect this to have happened. Test cases need to be simple and clear about what they're trying to achieve.
The learning curve can be tricky. Initially you may find yourself taking 3 times as long to write code if unfamiliar with test cases. And even though it will eventually improve (possibly even to the point where you're faster than you used to be with unstructured and haphazard testing) - other people around you will likely express frustration. (Not cool if it's your boss.)
Hopefully I haven't discouraged you; I do have some practical advice:
As the saying goes "Don't bite off more than you can chew."
Be prepared to start out slow. For the time being, carry on with most of your work in a way that's familiar to you. But force yourself to write 1 or 2 test cases every day. As you get more comfortable, you can increase this number.
Try to stick to the tried-and-tested principles.
The TDD workflow is: first write the test and ensure that the test fails. I know it is difficult to stick to the habit, but the principle serves a very important purpose. It's a level of confirmation that your test case proves the bug / missing feature. Far too often I've seen test case code that would pass with or without the production change - making the test somewhat useless.
For your database tests you'll need to establish a framework that works for you.
First, you'll need a mechanism of getting your database to a 'base-state'. One from which all your tests should be able to pass - no matter what order or how many times they are run. Typically this will involve some sort of Reset between tests (but it needs to be quite quick). Second, you'll need an easy way to update the schema of your database to what is expected by production code.
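One cheap way to get that quick reset to a base state is to run each test inside a transaction and roll it back afterwards. The sketch below assumes a DB-API style connection from a hypothetical open_connection helper and uses pytest; for schema-changing tests you would still need a fuller rebuild script.

```python
import pytest

from myapp.db import open_connection  # hypothetical: returns a DB-API connection

@pytest.fixture
def db():
    conn = open_connection()   # transaction starts implicitly (autocommit off)
    try:
        yield conn             # the test runs against the known base-state data
    finally:
        conn.rollback()        # undo everything the test wrote
        conn.close()

def test_can_insert_into_clean_customers_table(db):
    cur = db.cursor()
    cur.execute("INSERT INTO customers (name) VALUES ('test-customer')")
    cur.execute("SELECT COUNT(*) FROM customers WHERE name = 'test-customer'")
    assert cur.fetchone()[0] == 1
```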
Initially you'll only want to test new features, or bug fixes.
Avoid the temptation to test everything. Over time, your test case coverage will increase. Once your framework and patterns have been established, then you might get a chance to start adding tests just to increase coverage.
Refactoring existing code.
As you become familiar with testing, you'll learn about the coding habits that make testing more difficult. You'll probably find many such problems in legacy code. Such code will not be testable as is. You may need to refactor your code before you can even test it. Obviously this is not ideal, because you'd rather have tests that always pass to prove that your changes haven't broken anything. A good book on refactoring will give you some techniques you can use that will change the structure of your code without changing its behaviour.
Testing existing code.
When writing a test for an existing routine, look at the code and determine each of the inputs that can cause different behaviour. For example, when there's an if statement, something will cause the condition to evaluate to True, and something else to False. At a minimum, you'll want a test for each permutation.
In your place I would use DUnit to create a unit test project. For each of the entities I would write test methods that run the old and the new SQL statements, and then write methods to compare the results.
I would write a TTestCase class named, let's say, TMyTestCase, add some helper methods to it, and then create my new test classes as subclasses of TMyTestCase.
The idea of the ancestor class is to provide common functionality that makes it easier to write the tests (the comparison methods, for instance), in order to enhance productivity and comfort.
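The answer above is DUnit/Delphi-specific, but the ancestor-class idea is easy to picture; here is the same shape sketched with Python's unittest, purely as an illustration. The connection factories, schemas, and queries are all hypothetical.

```python
import unittest

from myapp.db import open_old_db, open_new_db  # hypothetical connection factories

class MyTestCase(unittest.TestCase):
    """Ancestor class holding the shared comparison helpers."""

    def _rows(self, conn, sql):
        cur = conn.cursor()
        cur.execute(sql)
        return sorted(cur.fetchall())

    def assertSameResults(self, old_sql, new_sql):
        self.assertEqual(self._rows(open_old_db(), old_sql),
                         self._rows(open_new_db(), new_sql))

class CustomerQueriesTest(MyTestCase):
    def test_active_customers_match(self):
        self.assertSameResults(
            "SELECT id, name FROM customer WHERE active = 1",                    # old schema
            "SELECT customer_id, full_name FROM customers WHERE is_active = 1",  # new schema
        )
```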
You could start by building a database simulator. Connect it instead of the old one and see what it needs to do. That is a lot of work, though.

How to develop regression tests for a calculation engine

I'm on a team developing a financial information web app. We haven't written many automated tests for it yet, so we've decided to add regression tests to the most critical parts of our program. I'm very new to automated testing, though, so I'm not entirely sure how I should go about writing the tests.
This post is long, so here's the tl;dr question: How can I write a regression test that checks whether a certain calculation is working? I don't just want to test the calculation, though - I also want to know if any of the components the calculation depends on for its inputs break. I don't need to know which component broke in particular, just that something's not working. What approach should I use?
This is our situation: We developed the app using a layered architecture, like this:
Views
|
V
Logic Managers <--> Financial Calculation Engines
|
V
Data Accessors
|
V
Database
We've determined that the calculation engines are the parts of our program that most need to have a regression test suite. These components contain the calculations and algorithms that we use to process raw financial data into useful results. Their corresponding Managers use them by calling their public methods, which accept raw financial data as parameters. When the engine methods return, they send back an object that contains processed financial results. The managers, meanwhile, get the raw financial data from the data accessors, which in turn get data from the database.
We decided we want to know as soon as a financial calculation "breaks" so that we know the bug is somewhere in whichever pieces of the program have been touched since the last run of the tests. This would let us use continuous testing to protect us from having the engines producing wrong results and having no idea where to look.
When we thought about what this means, we realized that adding a unit test to each of the engines isn't enough. Let's say, for example, that an erroneous change to the data accessors means that they start pulling the wrong data. This data would then be sent up through the manager to the engine, which would produce the wrong results. However, the engine's algorithms themselves would still be working perfectly, so the unit test would still pass. This means that when we noticed the wrong numbers being generated, we would have no way of knowing when the bug was introduced, making it more difficult to track down and fix.
Instead, we would like to make regression tests that are able to pick up as soon as a bug appears anywhere that would cause the final results the engines output to be incorrect, even if the problem is that the wrong data is sent to the engines and not that the engines themselves have issues. When these tests fail, they wouldn't tell us where the problem is, but if we're continuously testing, we'll know as soon as a bug is checked in and have a small set of changes to look through to fix it.
So that's what we want to do. Unfortunately, we don't know how to create these tests. What approaches or patterns are useful for writing these types of regression tests?
Just a hint: you should check every part of the Financial Calculations Engine with the same inputs every time, and the objects returned should be identical every time.
Test the Data Accessors separately, with the same logic: same input, same output.
To do so, you need to mock some parts of the system (e.g. mock the data accessors so they always return the same set of data).
Having separate unit tests for each part also locates the bug with more precision.
A couple of links to get into the idea:
http://www.ibm.com/developerworks/library/j-mocktest/index.html
http://www.slideshare.net/joewilson123/unit-testing-and-mocking
There are a lot of mocking frameworks around that can help you code the tests, like Mockito for Java projects.
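For illustration, here is what pinning the data accessor looks like with Python's unittest.mock (Mockito plays the same role on a Java project). The manager, engine, and method names are hypothetical stand-ins for the layers described in the question.

```python
from unittest.mock import Mock

from myapp.managers import PortfolioManager   # hypothetical manager layer
from myapp.engines import PortfolioEngine     # hypothetical calculation engine

def test_manager_and_engine_with_pinned_data():
    # The accessor is mocked so the same raw data goes in on every run;
    # the test then fails only when the calculation path itself changes.
    accessor = Mock()
    accessor.load_positions.return_value = [
        {"symbol": "ABC", "quantity": 10, "price": 101.5},
        {"symbol": "XYZ", "quantity": 5, "price": 40.0},
    ]

    manager = PortfolioManager(data_accessor=accessor, engine=PortfolioEngine())
    total = manager.total_market_value()

    assert total == 10 * 101.5 + 5 * 40.0
```

An unmocked, end-to-end variant of the same test (real accessors against a known test database) gives the "fail as soon as anything in the chain breaks" behaviour the question asks for, at the cost of less precise diagnostics.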

Handling test data when going from running Selenium tests in series to parallel

I'd like to start running my existing Selenium tests in parallel, but I'm having trouble deciding on the best approach due to the way my current tests are written.
The first step in most of my tests is to get the DB into a clean state and then populate it with the data needed for the rest of the test. While this is great for isolating tests from each other, if I start running these same Selenium tests in parallel on the same SUT, they'll end up erasing other tests' data.
After much digging, I haven't been able to find any guidance or best-practices on how to deal with this situation. I've thought of a few ideas, but none have struck me as particularly awesome:
Rewrite the tests to not overwrite other tests' data, i.e. only add test data, never erase -- I could see this potentially leading to unexpected failures due to the variability of the database when each test is run. Anything from a different ordering of tests to an ill-placed failure could throw off the other tests. This just feels wrong.
Don't pre-populate the database -- Instead, create all needed data via Selenium itself. This would most replicate real-world usage, but would also take significantly longer than loading data directly into the database. This would probably negate any benefits from parallelization depending on how much test data each test case needs.
Have each Selenium node test a different copy of the SUT -- This way, each test would be free to do as it pleases with the database, since we can assume that no other test is touching it at the same time. The downside is that I'd need to have multiple databases set up and, at the start of each test case, figure out how to coordinate which database to initialize and how to signal to the node and SUT that this particular test case should be using this particular database. Not awful, but not what I would love to do if there's a better way.
Have each Selenium node test a different copy of the SUT, but break up the tests into distinct suites, one suite per node, before run-time -- Also viable, but not as flexible, since over time you'd want to keep going back and evening out the length of each suite as much as possible.
All in all, none of these seem like clear winners. Option 3 seems the most reasonable, but I also have doubts about whether that is even a feasible approach. After researching a bit, it looks like I'll need to write a custom test runner to facilitate running the tests in parallel anyways, but the parts regarding the initial test data still have me looking for a better way.
Anyone have any better ways of handling database initialization when running Selenium tests in parallel?
FWIW, the app and tests suite is in PHP/PHPUnit.
Update
Since it sounds like the answer I'm looking for is very project-dependent, I'm at least going to attempt to come up with my own solution and report back with my findings.
There's no easy answer and it looks like you've thought out most of it. Also worth considering is to rewrite the tests to use separately partitioned data - this may or may not work depending on your domain (e.g. a separate bank account per node, if it's a banking app). Your pre-population of the DB could be restricted to static reference data, or you could pre-populate the data for each separate 'account'. Again, depends on how easy this is to do for your data.
I'm inclined to vote for 3, though, because database setup is relatively easy to script these days and the hardware requirements probably aren't too high for a small test data suite.
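To make option 3 concrete, here is a rough sketch of the per-node database setup; the app in the question is PHP/PHPUnit, so treat this Python script as pseudocode for the idea. The environment variable names, the MySQL client invocation, and the seed file are all assumptions.

```python
import os
import subprocess

NODE_ID = os.environ.get("SELENIUM_NODE_ID", "1")
DB_NAME = f"app_test_node_{NODE_ID}"

def provision_database():
    # Recreate this node's private database from the same seed data every run.
    subprocess.run(["mysql", "-e", f"DROP DATABASE IF EXISTS {DB_NAME}"], check=True)
    subprocess.run(["mysql", "-e", f"CREATE DATABASE {DB_NAME}"], check=True)
    with open("seed.sql", "rb") as seed:
        subprocess.run(["mysql", DB_NAME], stdin=seed, check=True)

if __name__ == "__main__":
    provision_database()
    # The SUT instance paired with this node would be started (or configured)
    # with the same name, e.g. APP_TEST_DB=app_test_node_1.
    print(f"node {NODE_ID} will use {DB_NAME}")
```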

Best practices for TDD and reporting

I am trying to become more familiar with test-driven approaches. One drawback for me is that a major part of my code is generated content for reporting (PDF documents, chart images). There is always a complex designer involved and there is no easy test of correctness. No chance to test just fragments!
Do you know TDD practices for this situation?
Some applications or frameworks are just inherently unit-test-unfriendly, and there's really not a lot you can do about it.
I prefer to avoid such frameworks altogether, but if absolutely forced to deal with such issues, it can be helpful to extract all logic into a testable library, leaving only declarative code behind in the framework.
The question I ask myself in these situations is "how do I know I got it right"?
I've written a lot of code in my career, and almost all of it didn't work the first time. Almost every time I've gone back and changed code for a refactoring, feature change, performance, or bug fix, I've broken it again. TDD protects me from myself (thank goodness!).
In the case of generated code, I don't feel compelled to test the code. That is, I trust the code generator. However, I do want to test my inputs to the code generators. Exactly how to do that depends on the situation, but the general approach is to ask myself how I might be getting it wrong, and then to figure out how to verify that I got it right.
Maybe I write an automated test. Maybe I inspect something manually, but that's pretty risky. Maybe something else. It depends on the situation.
To put a slightly different spin on answers from Mark Seemann and Jay Bazuzi:
Your problem is that the reporting front-end produces a data format whose output you cannot easily inspect in the "verify" part of your tests.
The way to deal with this kind of problem is to:
Have some very high-level integration tests that superficially verify that your back-end code hooks correctly into your front-end code. I usually call those tests "smoke tests", as in "if I turn on the power and it smokes, it's bad".
Find a different way to test your back-end reporting code. Either test an intermediate output data structure, or implement an alternate output front-end that is more test-friendly: HTML, plain text, whatever.
This is similar to the common problem of testing web apps: it is not possible to automatically test that "the page looks right". But it is sufficient to test that the words and numbers in the page data are correct (using a programmatic browser such as mechanize and a page scraper), and to have a few superficial functional tests (with Selenium or Windmill) if the page is critically dependent on Javascript.
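A small sketch of the "test the intermediate data structure" option: the report builder returns plain values that are easy to assert on, and only a thin, smoke-tested layer turns them into PDF or chart output. The module, function, and field names below are hypothetical.

```python
from myapp.reporting import build_quarterly_report_data  # hypothetical: returns a dict

def test_quarterly_report_data_is_consistent():
    data = build_quarterly_report_data(year=2020, quarter=2)

    assert data["title"] == "Quarterly Report Q2 2020"
    assert len(data["lines"]) > 0
    # The headline figure must agree with the detail lines it is supposed to summarise.
    assert data["total_revenue"] == sum(line["revenue"] for line in data["lines"])
```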
You could try using a web service for your reporting data source and test that, but you are not going to have unit tests for the rendering. This is the exact same problem you have when testing views. Sure, you can use a web testing framework like Selenium, but you probably won't be practicing true TDD. You'll be creating tests after your code is done.
In short, use common sense. It probably does not make sense to attempt to test the rendering of a report. You can have manual test cases that a tester will have to go through by hand or simply check the reports yourself.
You might also want to check out "How Much Unit Test Coverage Do You Need? - The Testivus Answer"
You could use Acceptance Test Driven Development in place of the unit tests, and keep validated reports for well-known reference data.
However, this kind of test does not give the fine-grained diagnostics that unit tests do; they usually provide only a PASS/FAIL result, and, should the reports change often, the reference outputs need to be regenerated and re-validated as well.
Consider extracting the text from the PDF and checking it. This won't give you formatting, however. Some pdf extraction programs can pull out the images if the charts are in the pdf.
Faced with this situation, I try two approaches.
The Golden Master approach. Generate the report once, check it yourself, then save it as the "golden master". Write an automated test to compare its output with the golden master, and fail when they differ.
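A minimal sketch of that golden-master comparison, assuming a hypothetical render_report function and a golden file kept under version control. In practice PDF output often embeds timestamps or IDs, so you may need to normalise the output (or compare extracted text) instead of raw bytes.

```python
from pathlib import Path

from myapp.reporting import render_report  # hypothetical: returns the report as bytes

GOLDEN = Path("tests/golden/quarterly_report.pdf")  # checked once by hand, then committed

def test_report_matches_golden_master():
    current = render_report(year=2020, quarter=2)
    assert current == GOLDEN.read_bytes(), (
        "Report output differs from the golden master; inspect the new output and "
        "update the golden file only if the change is intended."
    )
```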
Automate the tests for the data, but check the format manually. I automate checks for the module that generates the report data, but to check the report format, I generate a report with hardcoded values and check the report by hand.
I strongly encourage you not to generate the full report just to check the correctness of the data on the report. When you want to check the report (not the data), then generate the report; when you want to check the data (not the format), then only generate the data.