How many cycles are required to validate an automated script - testing

I have one query. Maybe it is a silly question, but I still need an answer to clear my doubts.
Testing is evaluating the product or application. We do testing to check whether there are any show stoppers, or any issues that should not be present.
We automate test cases (I am talking about scripts) from the existing test cases. Once a test case is automated, how many cycles do we need to run to check that the script executes with no major errors, and is therefore reliable enough to run instead of executing the test cases manually?
Thanks in advance.

If the test script always fails when a test fails, you need to run the script only once. Running the script several times without changing the code will not give you additional safety.
You may discover that your tests depend on some external source that changes during the tests and thereby makes the tests fail sometimes. Running the tests several times will not solve this issue, either. To solve it, you must make sure that the test setup really initializes all external factors in such a way that the tests always succeed. If you can't achieve this, you can't test reliably, so there is no way around it.
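To make that concrete, here is a minimal, hypothetical unittest sketch (the function and names are invented) of pinning down two common external factors, the current date and the random number generator, so the test gives the same answer on every run:

    import datetime
    import random
    import unittest


    def pick_daily_special(menu, today, rng):
        # Toy function under test: the choice depends on the date and on an RNG.
        rng.seed(today.toordinal())
        return menu[rng.randrange(len(menu))]


    class TestDailySpecial(unittest.TestCase):
        def setUp(self):
            # Control both external factors instead of letting the wall clock
            # and the global random generator vary between runs.
            self.today = datetime.date(2024, 1, 15)
            self.rng = random.Random()

        def test_special_is_deterministic(self):
            menu = ["soup", "salad", "pasta"]
            first = pick_daily_special(menu, self.today, self.rng)
            second = pick_daily_special(menu, self.today, self.rng)
            self.assertEqual(first, second)  # same inputs, same answer, every run


    if __name__ == "__main__":
        unittest.main()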
That said, tests can never make sure that your product is 100% correct or safe. They just make sure that your product is still as good as (or better than) it was before all the changes you made since the last test. It's kind of like having a watermark which tells you the least amount of quality that you can depend on. Anything above the watermark is speculation, but everything below it (the part that your tests cover) is safe.
So by refining your tests, you can make your product better with every change. Without the automatic tests, every change has a chance to make your product worse. This means that without tests, your quality will certainly deteriorate, while with tests, you can guarantee that a certain level of quality is maintained.

It's a huge field with no simple answer.
It depends on several factors, including:
The code coverage of your tests
How you define reliable

Related

Can it be considered bad practice to rely on automated tests running in order?

In my acceptance test suites specifically I see a lot of tests designed to run in a particular order (top to bottom) which in some ways makes sense for testing a particular flow, but I've also heard this is bad practice. Can anyone shed some light on the advantages and drawbacks here?
In the majority of situations, if you rely on the order, something is wrong. It's better to fix this because:
Tests should be independent so that you can run them separately (you should be able to run just one test).
Test-running tools often don't guarantee the order. Even if today it's a particular sequence, tomorrow you could add some configuration to the runner and the order will change.
It's hard to determine what's wrong from the test reports, since you see a lot of failures when only one test actually failed.
Again - from the test report tools it won't be easy to track the steps of a scenario, because those steps are assigned to different tests.
You won't be able to run them in parallel if you'd need to (hopefully you don't).
If you want to share logic, create reusable classes or methods (see this) - a small sketch of that idea follows below.
PS: I'd call these System Tests, not Acceptance Tests - you can write acceptance tests on unit or component levels too.
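To illustrate the reusable-logic point, here is a small hypothetical Python/unittest sketch in which each test builds its own preconditions through a shared helper instead of depending on an earlier test having run first (the helper and session shape are made up):

    import unittest


    def create_logged_in_session(username="demo_user"):
        # Shared helper (hypothetical): every test that needs a logged-in
        # session calls this instead of relying on a previous "login test".
        return {"user": username, "authenticated": True, "cart": []}


    class TestCheckout(unittest.TestCase):
        def test_empty_cart_cannot_check_out(self):
            session = create_logged_in_session()          # independent setup
            self.assertEqual(len(session["cart"]), 0)

        def test_item_can_be_added_to_cart(self):
            session = create_logged_in_session()          # no ordering assumed
            session["cart"].append("book")
            self.assertEqual(session["cart"], ["book"])


    if __name__ == "__main__":
        unittest.main()

Either test can be run on its own, in any order, or in parallel, because neither depends on state left behind by the other.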

What tools exist for managing a large suite of test programs?

I apologize if this has been answered before, but I'm having trouble finding a tool that fits my needs.
I have a few dozen test programs, but each one can be run with a large number of parameters. I need to be able to automatically run sweeps of many of the parameters across all or some of the test programs. I have my own set of tools for running an individual test, which I can't really change, but I'm looking for a tool that would manage the entire suite.
Thus far, I've used a home-grown script for this. The main problem I run across is that an individual test program might take 5-10 parameters, each with several values. Although it would be easy to write something that would just do a nested for loop and sweep over every parameter combination, the difficulty is that not every combination of parameters makes sense, and not every parameter makes sense for every test program. There is no general way (i.e., that works for all parameters) to codify what makes sense and what doesn't, so the solutions I've tried before involve enumerating each sensible case. Although the enumeration is done with a script, it still leads to a huge cross-product of test cases which is cumbersome to maintain. We also don't want to run the giant cross-product of cases every time, so I have other mechanisms to select subsets of it, which gets even more cumbersome to deal with.
I'm sure I'm not the first person to run into a problem like this. Are there any tools out there that could help with this kind of thing? Or even ideas for writing one?
Thanks.
Adding a clarification ---
For instance, if I have parameters A, B, and C that each represent a range of values from 1 to 10, I might have a restriction like: if A=3, then only odd values of B are relevant and C must be 7. The restrictions can generally be codified, but I haven't found a tool where I could specify something like that. As for a home-grown tool, I'd either have to enumerate the tuples of parameters (which is what I'm doing) or implement something quite sophisticated to be able to specify and understand constraints like that.
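To make the kind of constraint I mean concrete, here is a rough Python sketch (purely illustrative) of generating the full cross-product and filtering it through small constraint functions like the A=3 rule above:

    from itertools import product

    # Parameter domains from the example: A, B, C each range over 1..10.
    DOMAINS = {"A": range(1, 11), "B": range(1, 11), "C": range(1, 11)}


    def constraint_a3(combo):
        # "If A=3, then only odd values of B are relevant and C must be 7."
        if combo["A"] == 3:
            return combo["B"] % 2 == 1 and combo["C"] == 7
        return True

    CONSTRAINTS = [constraint_a3]


    def sensible_combinations(domains, constraints):
        names = list(domains)
        for values in product(*(domains[n] for n in names)):
            combo = dict(zip(names, values))
            if all(check(combo) for check in constraints):
                yield combo


    if __name__ == "__main__":
        combos = list(sensible_combinations(DOMAINS, CONSTRAINTS))
        print(f"{len(combos)} sensible combinations out of {10 ** 3}")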
We rolled our own; we have a whole test infrastructure. It manages the tests, has a number of built-in features that allow the tests to log results, and the logs are fed into a searchable database for all kinds of report generation.
Each test has a class/structure that holds information about the test: the name of the test, the author, and a variety of other tags. When running a test suite you can run everything, or run everything with a certain tag. So if you want to test only SRAM, you can easily run only the tests tagged sram.
Our tests are all considered either pass or fail. The pass/fail criteria are determined by the author of the individual test, but the infrastructure wants to see either pass or fail. You need to define what your possible results are: as simple as pass/fail, or you might want to add pass and keep going, pass but stop testing, fail but keep going, and fail and stop testing. "Stop testing" means that if there are 20 tests scheduled and test 5 fails, you stop; you don't go on to test 6.
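A rough Python sketch (names invented, not our actual infrastructure) of what that per-test metadata and a richer pass/fail vocabulary can look like:

    from dataclasses import dataclass, field
    from enum import Enum


    class Result(Enum):
        PASS = "pass"
        PASS_STOP = "pass but stop testing"
        FAIL_CONTINUE = "fail but keep going"
        FAIL_STOP = "fail and stop testing"


    @dataclass
    class TestInfo:
        name: str
        author: str
        tags: set = field(default_factory=set)

        def matches(self, wanted_tag):
            return wanted_tag in self.tags


    def run_suite(tests, wanted_tag=None):
        # Run everything, or only tests carrying a certain tag; honour "stop" results.
        for info, test_fn in tests:
            if wanted_tag and not info.matches(wanted_tag):
                continue
            result = test_fn()
            print(f"{info.name}: {result.value}")
            if result in (Result.FAIL_STOP, Result.PASS_STOP):
                break  # e.g. test 5 fails with "stop", so don't go on to test 6


    if __name__ == "__main__":
        tests = [
            (TestInfo("sram_walking_ones", "alice", {"sram"}), lambda: Result.PASS),
            (TestInfo("sram_address_bus", "bob", {"sram"}), lambda: Result.FAIL_STOP),
            (TestInfo("uart_loopback", "carol", {"uart"}), lambda: Result.PASS),
        ]
        run_suite(tests, wanted_tag="sram")   # only tests tagged "sram"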
You need a mechanism to order the tests, which could be alphabetical, but it might benefit from a priority scheme (you must perform the power-on test before performing a test that requires the power to be on). It may also benefit from random ordering: some tests may be passing due to dumb luck because a test before them made something work; remove that prior test and this test fails. Or vice versa: this test passes until it is preceded by a specific test, and those two don't get along in that order.
To shorten my answer: I don't know of an existing infrastructure, but I have built my own and worked with home-built ones that were tailored to our business/lab/process. You won't hit a home run the first time; don't expect to. But try to predict a manageable set of rules for individual tests: how many types of pass/fail return values a test can return, the types of filters you want to put in place, the type of logging you may wish to do and where you want to store that data. Then create the infrastructure and the mandatory shell/frame for each test, and individual testers have to work within that shell.
Our current infrastructure is in Python, which lent itself to this nicely, and we are not restricted to only Python-based tests; we can use C or Python, and the target can run whatever languages/programs it can run. Abstraction layers are good: we use a simple read/write of an address to access the unit under test, and with that we can test against a simulation of the target or against the real hardware when it arrives. We can access the hardware through a serial debugger, JTAG, or PCIe, and the majority of the tests don't know or care because they are on the other side of the abstraction.
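And a stripped-down, hypothetical sketch of the abstraction-layer idea: tests only see a read/write of an address, and the backend can be a simulation today or real hardware (serial debugger, JTAG, PCIe) later:

    class SimulatedTarget:
        # Stand-in for the unit under test before the hardware arrives.
        def __init__(self):
            self.mem = {}

        def write(self, addr, value):
            self.mem[addr] = value & 0xFFFFFFFF

        def read(self, addr):
            return self.mem.get(addr, 0)


    # A JTAG/serial/PCIe-backed class would expose the same read/write interface,
    # so the test below doesn't know or care which backend it is talking to.

    def test_scratch_register(target):
        target.write(0x1000, 0x12345678)
        assert target.read(0x1000) == 0x12345678
        return "pass"


    if __name__ == "__main__":
        print(test_scratch_register(SimulatedTarget()))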

What is a good method of doing TDD with legacy Delphi code having embedded SQL

I have to take some legacy Delphi code pointing to a database and make it support a new, better database with a completely different schema. The updated database has the same data. The code uses a combination of stored procedures and embedded SQL.
Is there a good test-driven development technique that will help make sure I don't break anything? This code has almost no unit tests and I need to make changes to a lot of hard-coded SQL.
Just running the application after every change sounds error-prone and time-consuming. I love the idea of doing TDD or BDD, I'm just not sure how to do it.
It's good that you want to get into unit testing, but I'd like to caution you against taking it on over-zealously.
Adding unit tests to legacy code is a major undertaking, and it's almost always totally unfeasible to halt other work just to add test cases. Also, unless you already have experience in TDD, that learning curve itself can prove a troublesome hurdle to overcome.
However, if you persevere, and take things one step at a time, your efforts will be rewarded in the end.
The problems you're likely to encounter:
Legacy applications are usually very difficult to 'retro-fit' with test cases. This is because the code wasn't written with testability in mind.
Many routines are doing too many things, so tests have to consider large numbers of side-effects.
Code is not properly self-contained, so setting up pre-conditions for a test is a lot of work.
Entry points for testing/checking behaviour are often missing because they weren't needed for production code; and therefore weren't added in the first place.
Code often relies on global state somewhere, either directly or via Singletons. This global state (regardless of where it lies) plays havoc with your test cases.
Unit testing of databases is inherently more difficult than other kinds of unit testing. The reason for this is that test cases don't like global state - and databases are effectively massive containers of global state. Problems manifest themselves in many ways:
If you're using IDENTITY columns, Auto Inc or number generators of any form: These either result in a specific difference between each test run, or you need a way to reset those numbers between tests.
Databases are slow. Once you've built up a large number of test cases it will be impractical to run all tests between every change. (One of my Db Test suites takes almost 10 minutes to run.)
If your database generates date/time values, these can also complicate testing. Especially if the database runs on a different machine.
Database testing is complicated by the fact that there are two aspects to the database: Its schema, and its data. So if you wish to test a new/changed stored procedure (part of the schema), it needs appropriate changes to the data and possibly to other aspects of the schema (such as tables/views).
Even without the above extra complications, there are the 'normal problems' you'll have to deal with.
Global state often crops up unexpectedly in some awkward places. Consider Now() which returns a TDateTime. It uses global state: the current date-time. If you have time/date based rules in your system, those rules may return different results depending on when your tests are run. Unless you find an effective way to deal with this challenge, you'll have a number of "erratic" test cases.
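The usual way out is to put the clock behind something you can substitute in a test. Shown here as a Python sketch purely to illustrate the shape of the fix; in Delphi you would wrap Now() behind your own function or interface and inject a fixed value in the same way:

    import datetime


    def is_weekend_rate(now_fn=datetime.datetime.now):
        # Business rule that depends on "now". The clock is a parameter, so
        # production code uses the real clock and tests pass a fixed one.
        return now_fn().weekday() >= 5   # Saturday=5, Sunday=6


    def test_weekend_rate_on_a_saturday():
        fixed = lambda: datetime.datetime(2024, 1, 13, 10, 0)   # a Saturday
        assert is_weekend_rate(now_fn=fixed) is True


    def test_weekend_rate_on_a_monday():
        fixed = lambda: datetime.datetime(2024, 1, 15, 10, 0)   # a Monday
        assert is_weekend_rate(now_fn=fixed) is False


    if __name__ == "__main__":
        test_weekend_rate_on_a_saturday()
        test_weekend_rate_on_a_monday()
        print("ok")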
Writing test cases is a fundamentally different programming paradigm to what most developers are used to. It can be extremely difficult to break old habits. The style of test case code is almost declarative: Given this, When I do This, I expect this to have happened. Test cases need to be simple and clear about what they're trying to achieve.
The learning curve can be tricky. Initially you may find yourself taking 3 times as long to write code if unfamiliar with test cases. And even though it will eventually improve (possibly even to the point where you're faster than you used to be with unstructured and haphazard testing) - other people around you will likely express frustration. (Not cool if it's your boss.)
Hopefully I haven't discouraged you, I do have some practical advice:
As the saying goes "Don't bite off more than you can chew."
Be prepared to start out slow. For the time being, carry on with most of your work in a way that's familiar to you. But force yourself to write 1 or 2 test cases every day. As you get more comfortable, you can increase this number.
Try to stick to the "tried and tested" principles.
The TDD workflow is: first write the test and ensure the test fails. I know it is difficult to stick to the habit, but the principle serves a very important purpose. It's a level of confirmation that your test case proves the bug / missing feature. Far too often I've seen test case code that would pass with or without the production change - making the test somewhat useless.
For your database tests you'll need to establish a framework that works for you.
First, you'll need a mechanism of getting your database to a 'base-state'. One from which all your tests should be able to pass - no matter what order or how many times they are run. Typically this will involve some sort of Reset between tests (but it needs to be quite quick). Second, you'll need an easy way to update the schema of your database to what is expected by production code.
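As a hypothetical sketch of that 'base state' idea (using Python's unittest and an in-memory SQLite database; your framework would target your own database and schema), every test inherits a setUp that rebuilds the same known starting point, so order and repetition stop mattering:

    import sqlite3
    import unittest


    class DbTestCase(unittest.TestCase):
        # Base class: every test starts from the same known base state.

        def setUp(self):
            # A fresh in-memory database per test keeps the reset fast and total.
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
            self.db.execute("INSERT INTO customers (name) VALUES ('base customer')")
            self.db.commit()

        def tearDown(self):
            self.db.close()


    class TestCustomerQueries(DbTestCase):
        def test_base_state_has_one_customer(self):
            count, = self.db.execute("SELECT COUNT(*) FROM customers").fetchone()
            self.assertEqual(count, 1)

        def test_insert_does_not_leak_into_other_tests(self):
            self.db.execute("INSERT INTO customers (name) VALUES ('extra')")
            count, = self.db.execute("SELECT COUNT(*) FROM customers").fetchone()
            self.assertEqual(count, 2)   # the other test still starts from 1 row


    if __name__ == "__main__":
        unittest.main()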
Initially you'll only want to test new features, or bug fixes.
Avoid the temptation to test everything. Over time, your test case coverage will increase. Once your framework and patterns have been established, then you might get a chance to start adding tests just to increase coverage.
Refactoring existing code.
As you become familiar with testing, you'll learn about the coding habits that make testing more difficult. You'll probably find many such problems in legacy code. Such code will not be testable as is. You may need to refactor your code before you can even test it. Obviously this is not ideal, because you'd rather have tests that always pass to prove that your changes haven't broken anything. A good book on refactoring will give you some techniques you can use that will change the structure of your code without changing its behaviour.
Testing existing code.
When writing a test for an existing routine, look at the code and determine each of the inputs that can cause different behaviour. E.g. when there's an if statement, something will cause the condition to evaluate to True, and something else to False. At a minimum, you'll want a test for each permutation.
In your place I would use DUnit to create a unit test project. For each of the entities I would write testing methods that would run the old and new sentences and then write methods to compare the results.
I would write a TTestCase class named, let's say, TMyTestCase, and add some helper methods to it, then create my new test classes as subclasses of TMyTestCase.
The idea of the ancestor class is to provide common functionality that makes it easier to write the tests (the comparison methods, for instance) in order to enhance productivity and comfort.
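A rough unittest analogue of that ancestor-class idea (the suggestion above is DUnit/Delphi; the names here are invented):

    import unittest


    class MyBaseTestCase(unittest.TestCase):
        # Ancestor class holding the shared comparison helpers.

        def assertSameRows(self, old_rows, new_rows):
            # Helper comparing results of the old and new SQL, ignoring row order.
            self.assertEqual(sorted(old_rows), sorted(new_rows))


    class TestCustomerEntity(MyBaseTestCase):
        def test_old_and_new_queries_agree(self):
            old_rows = [(1, "Ann"), (2, "Bob")]       # would come from the old SQL
            new_rows = [(2, "Bob"), (1, "Ann")]       # would come from the new SQL
            self.assertSameRows(old_rows, new_rows)


    if __name__ == "__main__":
        unittest.main()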
You can start by building a database simulator. Connect it instead of the old one and see what it needs to do. It's a lot of work, though.

Handling test data when going from running Selenium tests in series to parallel

I'd like to start running my existing Selenium tests in parallel, but I'm having trouble deciding on the best approach due to the way my current tests are written.
The first step in most of my tests is to get the DB into a clean state and then populate it with the data needed for the rest of the test. While this is great for isolating tests from each other, if I start running these same Selenium tests in parallel on the same SUT, they'll end up erasing other tests' data.
After much digging, I haven't been able to find any guidance or best-practices on how to deal with this situation. I've thought of a few ideas, but none have struck me as particularly awesome:
Rewrite the tests to not overwrite other tests' data, i.e. only add test data, never erase -- I could see this potentially leading to unexpected failures due to the variability of the database when each test is run. Anything from a different ordering of tests to an ill-placed failure could throw off the other tests. This just feels wrong.
Don't pre-populate the database -- Instead, create all needed data via Selenium itself. This would most replicate real-world usage, but would also take significantly longer than loading data directly into the database. This would probably negate any benefits from parallelization depending on how much test data each test case needs.
Have each Selenium node test a different copy of the SUT -- This way, each test would be free to do as it pleases with the database, since we assume that no other test is touching it at the same time. The downside is that I'd need to have multiple databases set up and, at the start of each test case, figure out how to coordinate which database to initialize and how to signal to the node and SUT that this particular test case should be using this particular database. Not awful, but not what I would love to do if there's a better way.
Have each Selenium node test a different copy of the SUT, but break up the tests into distinct suites, one suite per node, before run-time -- Also viable, but not as flexible, since over time you'd want to keep going back and evening out the length of each suite as much as possible.
All in all, none of these seem like clear winners. Option 3 seems the most reasonable, but I also have doubts about whether that is even a feasible approach. After researching a bit, it looks like I'll need to write a custom test runner to facilitate running the tests in parallel anyways, but the parts regarding the initial test data still have me looking for a better way.
Anyone have any better ways of handling database initialization when running Selenium tests in parallel?
FWIW, the app and tests suite is in PHP/PHPUnit.
Update
Since it sounds like the answer I'm looking for is very project-dependent, I'm at least going to attempt to come up with my own solution and report back with my findings.
There's no easy answer and it looks like you've thought out most of it. Also worth considering is to rewrite the tests to use separately partitioned data - this may or may not work depending on your domain (e.g. a separate bank account per node, if it's a banking app). Your pre-population of the DB could be restricted to static reference data, or you could pre-populate the data for each separate 'account'. Again, depends on how easy this is to do for your data.
I'm inclined to vote for 3, though, because database setup is relatively easy to script these days and the hardware requirements probably aren't too high for a small test data suite.
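As a concrete illustration of the partitioning idea (sketched in Python rather than PHP/PHPUnit, and the environment variable is hypothetical), each parallel worker derives its own database name, or a data prefix, from a worker id, so its setup step never touches another worker's rows:

    import os


    def database_for_worker():
        # Each parallel node exports e.g. TEST_WORKER_ID=0..N-1 (hypothetical
        # variable) and gets its own copy of the SUT's database to reset at will.
        worker = os.environ.get("TEST_WORKER_ID", "0")
        return f"app_test_{worker}"


    def unique_prefix():
        # Alternative: keep one shared database but namespace all created data.
        worker = os.environ.get("TEST_WORKER_ID", "0")
        return f"w{worker}_"


    if __name__ == "__main__":
        print(database_for_worker())         # e.g. app_test_3 on worker 3
        print(unique_prefix() + "customer")  # e.g. w3_customer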

When/how frequently should I test?

As a novice developer who is getting into the rhythm of my first professional project, I'm trying to develop good habits as soon as possible. However, I've found that I often forget to test, put it off, or do a whole bunch of tests at the end of a build instead of one at a time.
My question is: what rhythm do you like to get into when working on large projects, and where does testing fit into it?
Well, if you want to follow the TDD guys, before you start to code ;)
I am very much in the same position as you. I want to get more into testing, but I am currently in a position where we are working to "get the code out" rather than "get the code out right" which scares the crap out of me. So I am slowly trying to integrate testing processes in my development cycle.
Currently, I test as I code, trying to bust the code as I write it. I do find it hard to get into the TDD mindset.. It's taking time, but that is the way I would want to work..
EDIT:
I thought I should probably expand on this, this is my basic "working process"...
1. Plan what I want from the code, possible object design, whatever.
2. Create my first class, add a huge comment to the top outlining what my "vision" for the class is.
3. Outline the basic test scenarios.. These will basically become the unit tests.
4. Create my first method.. Also writing a short comment explaining how it is expected to work.
5. Write an automated test to see if it does what I expect.
6. Repeat steps 4-6 for each method (note the automated tests are in a huge list that runs on F5).
7. I then create some beefy tests to emulate the class in the working environment, obviously fixing any issues.
8. If any new bugs come to light following this, I then go back and write the new test in, make sure it fails (this also serves as proof-of-concept for the bug) then fix it..
I hope that helps.. Open to comments on how to improve this, as I said it is a concern of mine..
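For what it's worth, a tiny hypothetical example of what step 5's "automated test to see if it does what I expect" can look like:

    import unittest


    def apply_discount(price, percent):
        # First method of the new class: my comment says it should reduce
        # the price by the given percentage and never go below zero.
        return max(0.0, price * (1 - percent / 100.0))


    class TestApplyDiscount(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertAlmostEqual(apply_discount(100.0, 10), 90.0)

        def test_never_negative(self):
            self.assertEqual(apply_discount(50.0, 200), 0.0)


    if __name__ == "__main__":
        unittest.main()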
Before you check the code in.
First and often.
If I'm creating some new functionality for the system I'll be looking to initially define the interfaces and then write unit tests for those interfaces. To work out what tests to write, consider the API of the interface and the functionality it provides; get out a pen and paper and think for a while about potential error conditions or ways to prove that it is doing the correct job. If this is too difficult then it's likely that your API isn't good enough.
In regards to the tests, see if you can avoid writing "integration" tests that test more than one specific object, and keep them as "unit" tests.
Then create a default implementation of your interface (that does nothing, returns rubbish values but doesn't throw exceptions), plug it into the tests to make sure that the tests fail (this tests that your tests work! :) ). Then write in the functionality and re-run the tests.
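A small hypothetical sketch of that step: define the interface, plug in a deliberately useless default implementation, and watch the test fail before writing the real one:

    import unittest
    from abc import ABC, abstractmethod


    class RateCalculator(ABC):
        # Hypothetical interface, defined before the real implementation exists.

        @abstractmethod
        def rate_for(self, customer_type: str) -> float:
            ...


    class DefaultRateCalculator(RateCalculator):
        # Does nothing useful: returns a rubbish value, never raises.

        def rate_for(self, customer_type: str) -> float:
            return -1.0


    class TestRateCalculator(unittest.TestCase):
        def setUp(self):
            # Swap in the real implementation later; for now the test should fail.
            self.calc = DefaultRateCalculator()

        def test_standard_customer_pays_full_rate(self):
            self.assertEqual(self.calc.rate_for("standard"), 1.0)   # fails, as intended


    if __name__ == "__main__":
        unittest.main()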
This mechanism isn't perfect but will cover a lot of simple coding mistakes and provide you with an opportunity to run your new feature without having to plug it into the entire application.
Following this you then need to test it in the main application with the combination of existing features.
This is where testing is more difficult and, if possible, should be partially outsourced to a good QA tester, as they'll have the knack of breaking things. Although it helps if you have these skills too.
Getting testing right is a knack that you have to pick up to be honest. My own experience comes from my own naive deployments and the subsequent bugs that were reported by the users when they used it in anger.
At first when this happened to me I found it irritating that the user was intentionally trying to break my software, and I wanted to mark all the "bugs" down as "training issues". However, after reflecting on it, I realised that it is our role (as developers) to make the application as simple and reliable to use as possible, even by idiots. It is our role to empower idiots, and that's why we get paid the dollar. Idiot handling.
To effectively test like this you have to get into the mindset of trying to break everything. Assume the mantle of a user that bashes the buttons and generally attempts to destroy your application in weird and wonderful ways.
Assume that if you don't find flaws then they will be discovered in production, to your company's serious loss of face. Take full responsibility for all of these issues and curse yourself when a bug you are responsible (or even partly responsible) for is discovered in production.
If you do most of the above then you should start to produce much more robust code, however it is a bit of an art form and requires a lot of experience to be good at.
A good key to remember is
"Test early, test often and test again, when you think you are done"
When to test? When it's important that the code works correctly!
When hacking something together for myself, I test at the end. Bad practice, but these are usually small things that I'll use a few times and that's it.
On a larger project, I write tests before I write a class and I run the tests after every change to that class.
I test constantly. After I finish even a loop inside of a function, I run the program and hit a breakpoint at the top of the loop, then run through it. This is all just to make sure that the process is doing exactly what I want it to.
Then, once a function is finished, you test it in its entirety. You probably want to set a breakpoint just before the function is called, and step through in your debugger to make sure that it works perfectly.
I guess I would say: "Test often."
I've only recently added unit testing to my regular work flow but I write unit tests:
to express the requirements for each new code module (right after I write the interface but before writing the implementation)
every time I think "it had better ... by the time I'm done"
when something breaks, to quantify the bug and prove that I've fixed it
when I write code which explicitly allocates or deallocates memory---I loathe hunting for memory leaks...
I run the tests on most builds, and always before running the code.
Start with unit testing. Specifically, check out TDD, Test Driven Development. The concept behind TDD is you write the unit tests first, then write your code. If the test fails, you go back and re-work your code. If it passes, you move on to the next one.
I take a hybrid approach to TDD. I don't like to write tests against nothing, so I usually write some of the code first, then put the unit tests in. It's an iterative process, one which you're never really done with. You change the code, you run your tests. If there's any failures, fix and repeat.
The other sort of testing is integration testing, which comes along later in the process, and might typically be done by a QA testing team. In any case, integration testing addresses the need to test the pieces as a whole. It's the working product you're concerned with testing. This one is more difficult to deal with b/c it usually involves having automated testing tools (like Robot, for ex.).
Also, take a look at a product like CruiseControl.NET to do continuous builds. CC.NET is nice b/c it will run your unit tests with each build, notifying you immediately of any failures.
We don't do TDD here (though some have advocated it), but our rule is that you're supposed to check your unit tests in with your changes. It doesn't always happen, but it's easy to go back and look at a specific changeset and see whether or not tests were written.
I find that if I wait until the end of writing some new feature to test, I forget many of the edge cases that I thought might break the feature. This is ok if you are doing things to learn for yourself, but in a professional environment, I find my flow to be the classic form of: Red, Green, Refactor.
Red: Write your test so that it fails. That way you know the test is asserting against the correct variable.
Green: Make your new test pass in the easiest way possible. If that means hard-coding it, that's ok. This is great for those that just want something to work right away.
Refactor: Now that your test passes, you can go back and change your code with confidence. Your new change broke your test? Great, your change had an implication you didn't realize, now your test is telling you.
This rhythm has sped up my development over time, because I basically have a compiled history of all the things I thought needed to be checked in order for a feature to work! This, in turn, leads to many other benefits that I won't get into here...
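A minimal hypothetical illustration of one turn of that Red/Green/Refactor cycle, using the usual FizzBuzz example:

    import unittest


    # Red: write the test first; before fizzbuzz() handles 3, the test fails.
    # Green: the easiest thing that passes, even a hard-coded branch, is fine.
    # Refactor: generalise with confidence, because the test stays green.

    def fizzbuzz(n):
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)


    class TestFizzBuzz(unittest.TestCase):
        def test_multiples_of_three(self):
            self.assertEqual(fizzbuzz(3), "Fizz")     # the test written in the Red step

        def test_plain_numbers(self):
            self.assertEqual(fizzbuzz(2), "2")


    if __name__ == "__main__":
        unittest.main()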
Lots of great answers here!
I try to test at the lowest level that makes sense:
If a single computation or conditional is difficult or complex, add test code while you're writing it and ensure each piece works. Comment out the test code when you're done, but leave it there to document how you tested the algorithm.
Test each function.
Exercise each branch at least once.
Exercise the boundary conditions -- input values at which the code changes its behavior -- to catch "off by one" errors (see the sketch after this list).
Test various combinations of valid and invalid inputs.
Look for situations that might break the code, and test them.
Test each module with the same strategy as above.
Test the body of code as a whole, to ensure the components interact properly. If you've been diligent about lower-level testing, this is essentially a "confidence test" to ensure nothing broke during assembly.
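Here is the sketch promised above: a small hypothetical example of exercising each branch and the boundary values around the points where the behaviour changes:

    import unittest


    def shipping_cost(weight_kg):
        # Toy function: free under 1 kg, flat fee up to 10 kg, per-kg above that.
        if weight_kg < 1:
            return 0.0
        if weight_kg <= 10:
            return 5.0
        return 5.0 + (weight_kg - 10) * 1.5


    class TestShippingCost(unittest.TestCase):
        def test_each_branch_once(self):
            self.assertEqual(shipping_cost(0.5), 0.0)    # branch 1
            self.assertEqual(shipping_cost(5), 5.0)      # branch 2
            self.assertEqual(shipping_cost(12), 8.0)     # branch 3

        def test_boundaries_where_behaviour_changes(self):
            self.assertEqual(shipping_cost(0.999), 0.0)        # just below 1 kg
            self.assertEqual(shipping_cost(1), 5.0)            # exactly 1 kg
            self.assertEqual(shipping_cost(10), 5.0)           # exactly 10 kg
            self.assertAlmostEqual(shipping_cost(10.1), 5.15)  # just above 10 kg


    if __name__ == "__main__":
        unittest.main()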
Since most of my code is for embedded devices, I pay particular attention to robustness, interaction between various threads, tasks, and components, and unexpected use of resources: memory, CPU, filesystem space, etc.
In general, the earlier you encounter an error, the easier it is to isolate, identify, and fix it--and the more time you get to spend creating, rather than chasing your tail.*
*I know, -1 for the gratuitous buffer-pointer reference!