Ordering tests in TFS 2012

There are a few tests in my testing solution that must be run first or else later tests will fail. I want a way to ensure these are run first and in a specific order. Is there any way of doing this other than using a .orderedtest file?
Some problems with the .orderedtest:
Certain tests should be run in a random order after the "set up" tests are finished
Ordered test does not seem to call the ClassInitialize method
Isn't an ordered test a form of test list, which is deprecated in VS/TFS 2012?

My advice would be to fix your tests to remove the dependencies (i.e. make them proper "unit" tests) - otherwise they are bound to cause problems later, e.g.:
causing a simple failure to cascade so that hundreds of tests fail and make it hard to find the root cause
failing unexpectedly because someone has inadvertently modified the execution order
reporting passes when in fact they should be failing, just because the initial state is not as they required
You could try approaches like:
keep the tests separate, but make each of them set up and tear down the test environment that it requires (a shared class to provide the initial state would be helpful here; see the sketch after this list)
merge the related tests into a single one, so that you can control the setup, execution, and close-down in a robust way.
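The question is about MSTest, but the per-test set-up/tear-down pattern from the first bullet looks the same in any xUnit-style framework. A minimal Python unittest sketch, where OrderFlowFixture and its methods are hypothetical stand-ins for whatever shared state your current "set up" tests create:

    import unittest

    class OrderFlowFixture:
        """Shared helper that knows how to build and destroy the initial state."""
        def create_initial_state(self):
            self.order_id = 42      # e.g. insert baseline records, open connections
        def destroy_state(self):
            pass                    # e.g. delete records, close connections

    class PaymentTests(unittest.TestCase):
        def setUp(self):
            # each test builds the state it needs, so execution order no longer matters
            self.fixture = OrderFlowFixture()
            self.fixture.create_initial_state()

        def tearDown(self):
            self.fixture.destroy_state()

        def test_payment_uses_existing_order(self):
            self.assertEqual(self.fixture.order_id, 42)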

Related

How to compare a value from a previous test run with the current one?

I'm using Robot Framework and Selenium via Selenium2Library.
I would like to test whether a value extracted from a DOM element has changed and is different from the one checked in the previous test run.
I'm thinking about using Robotframework-MongoDB-Library or another database. The next step would be adding a custom mini-library for saving and retrieving the extracted values for the test cases.
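For illustration, such a mini-library can be a plain Python class whose methods Robot Framework exposes as keywords. A minimal sketch using a local JSON file in place of MongoDB (all names hypothetical):

    # ValueStore.py - hypothetical keyword library that persists values between runs
    import json
    import os

    class ValueStore:
        ROBOT_LIBRARY_SCOPE = "GLOBAL"

        def __init__(self, path="previous_values.json"):
            self._path = path

        def _load(self):
            if os.path.exists(self._path):
                with open(self._path) as f:
                    return json.load(f)
            return {}

        def save_value(self, key, value):
            """Store a value so the next test run can compare against it."""
            data = self._load()
            data[key] = value
            with open(self._path, "w") as f:
                json.dump(data, f)

        def get_previous_value(self, key):
            """Return the value saved in an earlier run, or None on the first run."""
            return self._load().get(key)

A suite would then import it with "Library    ValueStore.py" and call "Save Value" / "Get Previous Value" as keywords.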
On the first test run, all tests of this kind will be marked as failed, but subsequent runs should, in theory, work correctly.
I'm not experienced in the testing field; is this the right approach? If not, how can I run this kind of test?
This is bad practice: on the second run (which will pass) you don't really know whether that DOM value is actually correct, as the difference might be a persistent issue.
The idea is that tests are reproducible, so when something fails you can reproduce the reason why it failed.
Also, this approach might cause an interesting behavioural change in your team: when the tests fail, people will re-run them until they pass and won't bother looking at why they failed (I would bet good money on this :)).
Something you might want to do is refine your test so that you only check the bits that are important, rather than the whole DOM (or a big chunk of it).

How does a test framework behave when we add a new test case in the middle of the existing test cases?

I have a question about automation frameworks. Suppose I have 1000 test cases and I am adding a new test case in the middle (as the 501st). What are some of the issues I may face in the framework?
I expect it may break the execution order if the 1000 test cases have dependencies among themselves. Apart from this, I am not able to figure out any other possible issues; please help me identify the issues that could cause problems in the execution of all the test cases here.
You should never rely on the execution order of test cases.
Note that JUnit does not execute the test cases in the declared order unless you use the annotation @FixMethodOrder(MethodSorters.NAME_ASCENDING). Neither does TestNG by default. Consequently, it does not really matter at which position you add the new test case.
Besides the changed execution order, you might encounter side effects if you
change static variables which are used by other test cases as well (see the sketch after this list)
change data in the database
create, change or delete files
close connections which are also used by other test cases
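The first point is the easiest to overlook. A small Python illustration (the answer's context is JUnit/TestNG, but the effect is identical) of how shared mutable state, the analogue of a static field, couples two otherwise independent tests:

    import unittest

    # module-level state shared by the whole test run (analogous to a static field)
    CACHE = {}

    class TestA(unittest.TestCase):
        def test_populates_cache(self):
            CACHE["user"] = "alice"              # side effect leaks out of this test
            self.assertEqual(CACHE["user"], "alice")

    class TestB(unittest.TestCase):
        def test_assumes_empty_cache(self):
            # passes or fails depending on whether TestA happened to run first
            self.assertNotIn("user", CACHE)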

Setup of common objects when trial starts that are maintained across all tests

I'm currently writing Twisted trial tests for a multi-component order-flow system; they are run together in a single package.
Each test involves calls to external OS proxy objects that are used to regulate traffic. These are common across all tests being run in a package, but across different environments and executions, different ports/IP addresses may be assigned.
Using the test setUp and tearDown methods works, but it requires setting up connections/port assignments again for every test, with uncertain wait times for ports to clear.
Is there a way to set up these objects when trial starts up before running the first test, maintain these objects and allow inspection of those object variables, and then allow a teardown on completion of the trial package containing the tests?
You probably don't need to do the set-up when trial starts up; rather, you need to do it when trial runs the first test that depends on the given fixture. Since trial runs a global reactor, you can use that for your final tear-down before trial is done.
There's an example of this in the way Calendar Server sets up a Postgres database for testing.
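A minimal sketch of that idea, assuming a hypothetical shared_fixtures module: the first test that needs the proxy builds it, and the clean-up is registered once with trial's global reactor.

    # shared_fixtures.py - lazily created, torn down when trial's reactor shuts down
    from twisted.internet import reactor

    class _ProxyStub:
        """Placeholder for the real OS proxy client; replace with your own set-up code."""
        def close(self):
            pass

    _proxy = None

    def get_proxy():
        # create the shared fixture the first time any test asks for it,
        # and register a single final tear-down with trial's global reactor
        global _proxy
        if _proxy is None:
            _proxy = _ProxyStub()
            reactor.addSystemEventTrigger('before', 'shutdown', _proxy.close)
        return _proxy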
Use testresources:
testresources is attempting to extend unittest with a clean and simple api to provide test optimisation where expensive common resources are needed for test cases
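A minimal sketch of that approach, based on the make/clean API described in the testresources documentation; ProxyConnection is a hypothetical stand-in for the real expensive fixture:

    import unittest
    import testresources

    class ProxyConnection:
        def close(self):
            pass

    class ProxyResourceManager(testresources.TestResourceManager):
        def make(self, dependency_resources):
            return ProxyConnection()     # how to build the expensive fixture
        def clean(self, resource):
            resource.close()             # how to tear it down again

    class OrderFlowTests(testresources.ResourcedTestCase):
        resources = [('proxy', ProxyResourceManager())]

        def test_traffic_is_regulated(self):
            self.assertIsNotNone(self.proxy)

    if __name__ == '__main__':
        # OptimisingTestSuite orders tests so the shared resource is built
        # and cleaned as few times as possible across the whole run
        suite = testresources.OptimisingTestSuite()
        suite.addTests(unittest.TestLoader().loadTestsFromTestCase(OrderFlowTests))
        unittest.TextTestRunner().run(suite)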

Should I commit when tests have still to pass (are failing)?

Our Rails development team tries to follow continuous integration. We have decided to adopt a policy of only committing features whose tests pass. Is that a good way to go? Should I delay integrating with other people's features until my tests pass (even if the finished part of the feature works fine)? Thanks in advance.
The tests should pass--if you're running a CI server it'll just spam people with emails until they do. Without a CI server everyone else will have to figure out if those tests are "supposed" to fail. Boo.
Another option is to only check in tests for actually-written features; if you're using tests as an executable specification they wouldn't all pass until the entire app was done and nobody would be able to check anything in ever.
You may also be able to mark tests as "pending" or indicate they should be skipped, but remembering to un-pend/un-skip them is often problematic.
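The question is about Rails, but for illustration the same idea in Python's unittest looks like this; the markers keep unfinished or known-broken tests visible without failing the build (test names are hypothetical):

    import unittest

    class CheckoutTests(unittest.TestCase):
        @unittest.skip("feature not implemented yet - remember to un-skip")
        def test_discount_codes(self):
            ...

        @unittest.expectedFailure
        def test_known_bug_in_rounding(self):
            # fails because 2.675 is not exactly representable as a binary float
            self.assertEqual(round(2.675, 2), 2.68)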
The tests SHOULD PASS; that's the reason you are writing them in the first place. If for some reason one or more tests do not pass, it indicates that something went wrong (obviously) and you and your team should be working on the solution.
If code is committed with test failures, send spam mails blaming the programmer who did it; that way, next time they will pay more attention before committing code.
I have heard of one way to avoid committing code with test failures, though I have not tried it personally. It involves having two repositories (it could also just be a branch); the theory behind it is:
The developers' commits target a branch whose sole purpose is to guarantee that all tests pass; you configure your CI server to build and run the tests from this branch
When all the tests pass on the branch, a merge is done to the trunk; since everyone should be working on this branch, the merge should be transparent and automatic
I repeat that I have not tried this approach, and in my opinion it involves more problems than it solves.
Another alternative could be to add a hook on the commit event in your VCS that forces all tests to run, but this could make even a single commit time-consuming.
For additional information, you could check this answer: https://stackoverflow.com/a/7110774/1268570
Personally, I would wait until the tests pass before integrating other features.

Grails integration tests failing in a (seemingly) random and non-repeatable way

We are writing integration tests for our Grails 2.0.0 application with the help of the Fixtures and Build-Test-Data plugins.
During testing, it was discovered that the integration tests fail at certain times and pass at others. Running 'test-app' sometimes results in all tests passing, and sometimes results in some of our tests failing.
When the tests fail, the failures are caused by a unique constraint being violated during the insert of a domain class instance, which would indicate that there are still records in the test DB. I am running the H2 database, and definitely have dbCreate = "create-drop" in my DataSource.groovy.
"Grails 2.0 integration test pollution?" seems to indicate there is a significant test-pollution problem in Grails. Are there any solutions to this? Have I hit GRAILS-8530?
[Edit] The test pollution seems to be caused by the unit tests. We have sort of proved this by deleting the unit tests and successfully running 'test-app' repeatedly.
When I run into errors like this I like to try to find the unit test(s) that are causing the problem. This might be kinda tricky since yours seem to fail only on occasion.
1) I'd look at unit tests that were recently added. If this problem just started happening then that's a good place to look.
2) Metaclassing seems to be good at causing this type of error, so I'd look for metaclassing that isn't set up / torn down properly. It's not as much of an issue with 2.0 as with <= 1.3.7, but it could be the problem.
3) I wrote a plugin that executes your tests in a random order (http://grails.org/plugin/random-test-order), which might not help you solve your current problem. But what might help is that it prints out all of your tests, so you can take what it gives you, run grails test-app <pasted list of unit tests> IntegrationTestThatIsFailing, and then start removing unit tests to find the culprit(s). I found a bug in this with 2.0 that I haven't had time to fix yet (integration tests fail when asserting on the rendered view name), but it should still print out your test names for you (which is better than doing it yourself :).
The fact that the integration tests fail with a constraint violation due to existing records reminds me of a situation I once encountered with functional tests (Selenium) executing in unpredictable order, some of them not cleaning up the database properly. Sure, the situation with functional tests is different, since it is more difficult to restore the database state (the test case cannot roll back a transaction in another JVM).
Although integration tests usually roll back transactions, it is still possible to break this behavior if your code controls transactions (commits) explicitly.
First, I would try forcing the execution order as mentioned by Jarred in 3). Assuming you can then reproduce the behaviour, I would check the transactional behaviour next. Setting the logging level of org.hibernate.transaction to debug should show you where the transaction boundaries are.
Sorry, I don't yet have a good explanation for why wiping out the unit tests helps get rid of the symptoms, besides a general "possibly metaclassing issues". :)