Apache POI - Read from Excel Exceptions in Selenium - selenium

If an exception occurs while fetching data from the Excel file, will the execution stop? Only the current test case, or all the test cases?

TestNG behaves differently for exceptions raised at different stages, so it depends.
Basically, no matter which exception is thrown (unless it is TestNG's SkipException, but that's an edge case, so I'll skip it here), you can expect the following:
Before configurations
In this case, all dependent test and configuration methods will be skipped (unless some of them have the alwaysRun = true annotation attribute).
Test method
The test will be marked as failed. All tests that depend on this method will be skipped.
After configurations
Usually this does not affect your test results, but it may fail the build even if all tests passed. Failures in after-configurations may also affect subsequent tests if they rely on cleanup having happened (though this is not specific to TestNG).
DataProvider
All the related tests will be skipped; everything else will not be affected.
Test Class constructor
This will break your run; no tests will be executed.
Factory method (needs rechecking)
I don't remember the exact behaviour. This might fail the whole launch or just some test classes. Either way, an exception here is a serious issue, so try to avoid it.
TestNG Listeners
This will break your whole test launch. Try to implement listeners error-free, wrapping risky code in try/catch blocks.
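The skip/failure semantics above can be sketched with a minimal TestNG class (class and method names are invented for illustration; this is a non-runnable sketch that assumes the TestNG library on the classpath):

```java
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class ExcelReadTest {

    @BeforeClass
    public void openWorkbook() {
        // If this throws, login() and readRow() are skipped,
        // but closeWorkbook() still runs because of alwaysRun = true.
        throw new RuntimeException("could not open workbook");
    }

    @Test
    public void login() { }

    @Test(dependsOnMethods = "login")
    public void readRow() {
        // If login() fails, this test is skipped, not failed.
    }

    @AfterClass(alwaysRun = true)
    public void closeWorkbook() { }
}
```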

Related

How does a test framework behave when we add a new test case in the middle of the existing test cases?

I have a question about an automation framework. Suppose I have 1000 test cases and I am adding a new one in the middle.
e.g. I have 1000 test cases and I add a test case in the middle (at position 501). What are some of the issues I may face in the framework?
-- I expect it may break the execution order if all 1000 test cases have dependencies among themselves. Apart from this, I am not able to figure out any other possible issues; please help me identify the issues that could cause problems in the execution of all the test cases here.
You should never rely on the execution order of test cases.
Note that JUnit does not execute the test cases in the declared order - unless you use the annotation @FixMethodOrder(MethodSorters.NAME_ASCENDING). Neither does TestNG by default. Consequently, it does not really matter at which position you add the new test case.
Besides the changed execution order, you might encounter side effects if you
change static variables which are used by other test cases as well
change data in the database
create, change or delete files
close connections which are also used by other test cases
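The first pitfall - shared static state - can be demonstrated without any test framework. Here two hypothetical "tests" share a static counter, so their outcome depends on the order in which they run (a minimal sketch with invented names):

```java
public class SharedStateDemo {
    // Shared mutable state: the classic source of order-dependent tests.
    static int counter = 0;

    // Passes only if it runs before any other test touches the counter.
    static boolean testIncrement() {
        counter++;
        return counter == 1;
    }

    // Passes only if it runs first, before testIncrement().
    static boolean testStartsAtZero() {
        return counter == 0;
    }

    public static void main(String[] args) {
        // Order A: increment first -> the second "test" fails.
        System.out.println(testIncrement());    // true
        System.out.println(testStartsAtZero()); // false
    }
}
```

Swapping the two calls in main makes testStartsAtZero pass and testIncrement fail, which is exactly why relying on execution order is fragile.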

Ordering tests in TFS 2012

There are a few tests in my testing solution that must be run first or else later tests will fail. I want a way to ensure these are run first and in a specific order. Is there any way of doing this other than using a .orderedtest file?
Some problems with the .orderedtest:
Certain tests should be run in a random order after the "set up" tests are finished
Ordered test does not seem to call the ClassInitialize method
Isn't an orderedtest a form of test list that is deprecated in VS/TFS 2012?
My advice would be to fix your tests to remove the dependencies (i.e. make them proper "unit" tests) - otherwise they are bound to cause problems later, e.g.:
causing a simple failure to cascade so that hundreds of tests fail and make it hard to find the root cause
failing unexpectedly because someone has inadvertently modified the execution order
reporting passes when in fact they should be failing, just because the initial state is not as they required
You could try approaches like:
keep the tests separate but make each of them set up and tear down the test environment that they require. (A shared class to provide the initial state would be helpful here)
merge the related tests into a single one, so that you can control the setup, execution, and close-down in a robust way.
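The first approach - each test owning its setup and teardown via a shared helper class - can be sketched in plain Java (the class and method names here are invented for illustration):

```java
public class TestEnvironment {
    private boolean ready = false;

    // Bring the environment to the known initial state a test requires.
    public void setUp() {
        ready = true;
    }

    // Restore a clean state so the next test is unaffected.
    public void tearDown() {
        ready = false;
    }

    public boolean isReady() {
        return ready;
    }

    // Run a test body bracketed by setUp/tearDown,
    // tearing down even when the body throws.
    public boolean runIsolated(java.util.function.BooleanSupplier testBody) {
        setUp();
        try {
            return testBody.getAsBoolean();
        } finally {
            tearDown();
        }
    }
}
```

Because teardown is in a finally block, one failing test cannot leak state into the next - which removes the need for any particular execution order.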

Grails integration tests failing in a (seemingly) random and non-repeatable way

We are writing integration tests for our Grails 2.0.0 application with the help of the Fixtures and Build-Test-Data plugins.
During testing, it was discovered that the integration tests fail at certain times and pass at others. Running 'test-app' sometimes results in all tests passing, and sometimes results in some of our tests failing.
When the tests fail, they are caused by a unique constraint being violated during the insert of an instance of a domain class. This would indicate that there are still records in the test DB. I am running the H2 db, and have definitely got 'dbCreate = "create-drop"' in my DataSource.groovy.
Grails 2.0 integration test pollution? seems to indicate there is a significant test-pollution problem in Grails. Are there any solutions to this? Have I hit Grails-8530?
[Edit] the test-pollution seems to be caused by the unit tests. We have sort-of proved this by deleting the unit tests and successfully running 'test-app' repeatedly.
When I run into errors like this I like to try and find the unit test(s) that is causing the problem. This might be kinda tricky since yours seem to only be failing on occasion.
1) I'd look at unit tests that were recently added. If this problem just started happening then that's a good place to look.
2) Metaclassing seems to be good at causing these types of errors, so I'd look for metaclassing that isn't set up/torn down properly. Not as much of an issue with 2.0 as with <= 1.3.7, but it could be the problem.
3) I wrote a plugin that executes your tests in a random order, which might not directly solve your current problem. But what might help you is that it prints out all of your tests, so you can take that list, run grails test-app <pasted list of unit tests> IntegrationTestThatIsFailing, then start removing unit tests to find the culprit(s) ( http://grails.org/plugin/random-test-order ). I found a bug in it with 2.0 that I haven't had time to fix yet (integration tests fail when asserting on a rendered view name), but it should still print out your test names for you (which is better than doing it yourself :)
The fact integration tests fail with a constraint violation due to existing records reminds me of a situation I once encountered with functional tests (selenium) executing in unpredictable order, some of them not cleaning up the database properly. Sure, the situation with functional tests is different, since it is more difficult to restore the database state (The testcase cannot rollback a transaction in another jvm).
Although integration tests usually roll back transactions, it is still possible to break this behavior if your code controls transactions (commits) explicitly.
First, I would try forcing execution order as mentioned by Jarred in 3). Assuming you can then reproduce the behavior, I would then check transactional behaviour next. Setting the logging level of org.hibernate.transaction to debug should show you where transaction boundaries are.
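In a Grails 2.x app, the transaction logging mentioned above can be enabled in Config.groovy (a sketch; merge this into your existing log4j closure):

```groovy
// grails-app/conf/Config.groovy
log4j = {
    // Show where Hibernate begins, commits and rolls back transactions,
    // so unexpected explicit commits in integration tests become visible.
    debug 'org.hibernate.transaction'
}
```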
Sorry, I don't yet have a good explanation for why wiping out the unit tests gets rid of the symptoms, beyond a general "possibly metaclassing issues". :)

Running GWT 2.4 Tests With JUnitCore

I have several GWTTestCases in my test suite, and I'm currently using a homegrown testing script which is written in Java that runs tests as follows:
for (Class<?> testClass : allTestClasses) {
    final JUnitCore core = new JUnitCore();
    final Result result = core.run(testClass);
}
Now, the first GWT test will pass and all subsequent tests will fail. It doesn't matter which test runs first, and I can run the tests successfully from the command line.
Looking through the logs, the specific error is typically like:
java.lang.RuntimeException: deepthought.test.JUnit:package.GwtTestCaseClass.testMethod: could not instantiate the requested class
I think it has something to do with GWTTestCase static state, but am unsure. If I do one run where I pass all the testClasses to the core, they all pass, and then subsequently any individual test will pass.
My guess is that GWT compiles and caches the tests you are running, storing them keyed by module. In this case the compiler misses my other test cases because it doesn't see a dependency on them. Then, for the next test, it comes back to the cache, gets a hit, and fails to find the test I want.
Any thoughts on a workaround, other than just passing all the tests in at once?
The workaround I discovered is to first add all the GWTTestCase classes to a GWTTestSuite, which you can then throw away. You don't incur the cost of compilation at this point, but it somehow makes GWT aware of all the test cases, and so when you compile the first one...they all get compiled.
If you ask me, this is a GWT bug.
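The workaround described above might look like this (a non-runnable sketch assuming the GWT and JUnit 3 libraries; allGwtTestClasses is an invented placeholder for your own list of GWTTestCase subclasses):

```java
import com.google.gwt.junit.tools.GWTTestSuite;
import junit.framework.TestCase;

// Registering every GWTTestCase in a GWTTestSuite up front makes GWT
// aware of all of them, so compiling the first one compiles them all.
// The suite object itself is then simply discarded.
GWTTestSuite suite = new GWTTestSuite("all GWT tests (thrown away)");
for (Class<? extends TestCase> testClass : allGwtTestClasses) {
    suite.addTestSuite(testClass);
}
// ...then run each class individually with JUnitCore as before.
```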

How to stop further execution of Tests within a TestFixture if one of them fails in NUnit?

I want to stop further execution of Tests within a TestFixture if one of them fails in NUnit.
Of course the common and advised practice is to make tests independent of each other. However, the case I would like to use NUnit for, requires that all tests and test fixtures following one that failed, are not executed. In other words, test failure causes the whole NUnit execution to stop (or proceeds with the next [TestFixture] but both scenarios should be configurable).
The simple, yet not acceptable solution, would be to force NUnit termination by sending a signal of some kind to the NUnit process.
Is there a way to do this in an elegant way?
I believe you can use NAnt to do this. Specifically, the nunit or nunit2 tasks have a haltonfailure parameter that allows the test run to stop if a test fails.
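A minimal NAnt target using the nunit2 task might look like this (the assembly path is invented for illustration; check the NAnt documentation for the attributes your version supports):

```xml
<!-- Stops the test run as soon as one test fails -->
<target name="test">
  <nunit2 haltonfailure="true">
    <formatter type="Plain" />
    <test assemblyname="build/MyProject.Tests.dll" />
  </nunit2>
</target>
```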