I am running some UI tests using WebDriver and MSpec. I added a check in Cleanup that no JavaScript errors were raised, but throwing an exception there doesn't fail the tests. How can I get this to work? I need this to fail whichever test triggered the errors, and I don't really want to add the check to each test separately.
If I remember correctly, there isn't really a way to do this in a cleanup. Cleanups happen after the tests, so it would be too late to fail them. As a matter of principle, it may be better to write the assertion that no JavaScript errors were raised as its own spec at the end of each context.
Even if it could be done from the Cleanup code, it should not be done that way.
Reason: how would you know which of your many tests had actually failed?
If an exception occurs while fetching the data from the Excel file, will the execution stop? Does it stop only the current test case, or all the test cases?
TestNG behaves differently for exceptions raised at different stages, so it depends.
Basically, no matter which exception is thrown (except TestNG's SkipException, which is an edge case I'll skip here), you can expect the following:
Before configurations
In this case, all dependent test and configuration methods will be skipped (unless some of them have the alwaysRun=true annotation attribute).
Test method
This test will be marked as failed. Also, all tests that depend on this method will be skipped.
After configurations
Usually this does not affect your test results, but it may fail the build even if all the tests passed. After-configuration failures may also affect subsequent tests if they rely on the cleanup having happened (but this is not specific to TestNG functionality).
DataProvider
All the related tests will be skipped; everything else will not be affected.
Test Class constructor
This will break your run; no tests will be executed.
Factory method (need to recheck)
I don't remember the exact behaviour. This might fail the whole launch or just some test classes, but an exception here is a serious issue, so try to avoid it.
TestNG Listeners
This will break your whole test launch. Try to implement listeners error-free, surrounding any risky code with try/catch blocks; a sketch follows.
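A minimal sketch of what I mean (not from any particular project; the class name SafeReportingListener and the publishFailure method are made up for illustration, only the TestNG types are real), assuming TestNG's TestListenerAdapter is on the classpath:

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// Defensive listener: an exception escaping a listener would break the whole launch,
// so everything the listener does is wrapped in try/catch.
public class SafeReportingListener extends TestListenerAdapter {

    @Override
    public void onTestFailure(ITestResult result) {
        super.onTestFailure(result); // keep the adapter's own bookkeeping
        try {
            publishFailure(result.getName(), result.getThrowable());
        } catch (Exception e) {
            // Swallow and log instead of letting the error propagate into TestNG.
            System.err.println("Listener error (ignored): " + e.getMessage());
        }
    }

    // Placeholder for whatever reporting the listener actually does
    // (e.g. pushing the failure to a dashboard).
    private void publishFailure(String testName, Throwable cause) {
    }
}

You register such a listener as usual, via the listeners section of testng.xml or the @Listeners annotation.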
I have some tests in Capybara.
Specifically, I have two 'describe' blocks.
These two tests sometimes run fine, but sometimes they fail, and I don't understand why, since I don't change them.
This makes my testing environment completely unreliable.
Can somebody suggest what the reason might be?
I suspect that sometimes queries like expect(page).to have_css() run before the page is completely loaded. Is that possible?
Luca
What's your timeout set to? You can override it with:
using_wait_time(30) do
expect(page).to have_css('selector')
end
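For what it's worth, using_wait_time takes seconds and only raises the implicit wait inside that block; the global default (Capybara.default_wait_time in older versions, Capybara.default_max_wait_time in newer ones) is only 2 seconds out of the box, which is often too short for slow or Ajax-heavy pages. If bumping the wait makes the failures go away, a page-load race like you describe is almost certainly the cause.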
We are writing integration tests for our Grails 2.0.0 application with the help of the Fixtures and Build-Test-Data plugins.
During testing, it was discovered that the integration tests fail at certain times and pass at other times. Running 'test-app' sometimes results in all tests passing, and sometimes in some of our tests failing.
When the tests fail, they are caused by a unique constraint being violated during the insert of an instance of a domain class. This would indicate that there are still records in the test DB. I am running the H2 db, and have definitely got 'dbCreate = "create-drop"' in my DataSource.groovy.
Grails 2.0 integration test pollution? seems to indicate there is a significant test-pollution problem in Grails. Are there any solutions to this? Have I hit Grails-8530?
[Edit] The test pollution seems to be caused by the unit tests. We have sort of proved this by deleting the unit tests and successfully running 'test-app' repeatedly.
When I run into errors like this I like to try to find the unit test(s) causing the problem. This might be tricky since yours only seem to fail on occasion.
1) I'd look at unit tests that were recently added. If this problem just started happening then that's a good place to look.
2) Metaclassing seems to be good at causing these types of errors, so I'd look for metaclassing that isn't set up/torn down properly. Not as much of an issue with 2.0 as with <= 1.3.7, but it could be the problem.
3) I wrote a plugin that executes your tests in a random order, which might not solve your current problem by itself. What might help, though, is that it prints out all of your tests, so you can take that list, run grails test-app <pasted list of unit tests> IntegrationTestThatIsFailing, and then start removing unit tests to find the culprit(s) (http://grails.org/plugin/random-test-order). I found a bug in this with 2.0 that I haven't had time to fix yet (integration tests fail when asserting on the rendered view name), but it should still print out your test names for you (which is better than doing it yourself :)
The fact that integration tests fail with a constraint violation due to existing records reminds me of a situation I once encountered with functional (Selenium) tests executing in unpredictable order, some of them not cleaning up the database properly. Granted, the situation with functional tests is different, since it is harder to restore the database state (the test case cannot roll back a transaction in another JVM).
Although integration tests usually roll back transactions, it is still possible to break this behavior if your code controls transactions (commits) explicitly; a sketch of what that can look like follows.
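As an illustration only (an invented sketch, not from your code: Grails sits on Spring, so plain Spring transaction APIs are used here, and AuditWriter/record are made-up names), anything committed in its own REQUIRES_NEW transaction survives the rollback that the integration test performs at the end:

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

// Whatever runs inside record(...) commits immediately in a separate transaction,
// so the surrounding test transaction's rollback cannot undo it.
public class AuditWriter {

    private final TransactionTemplate requiresNew;

    public AuditWriter(PlatformTransactionManager txManager) {
        this.requiresNew = new TransactionTemplate(txManager);
        this.requiresNew.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    }

    public void record(Runnable work) {
        requiresNew.execute(status -> {
            work.run();
            return null;
        });
    }
}

If the logging described below shows commits like this happening during a test, records will persist beyond that test's rollback and can collide with later inserts.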
First, I would try forcing the execution order as Jarred mentions in 3). Assuming you can then reproduce the behavior, I would check transactional behaviour next. Setting the logging level of org.hibernate.transaction to debug should show you where the transaction boundaries are.
Sorry, I don't yet have a good explanation for why wiping out the unit tests gets rid of the symptoms, beyond a general "possibly metaclassing issues". :)
The only solution I seem to be able to find for testing for exceptions is using SenTestingKit's STAssertThrows and STAssertThrowsSpecific; however, in both cases, when the exception is thrown the application under test hangs until I manually ask it to continue. Surely the exceptions should be swallowed by the testing framework? And if not, does anyone have a better suggestion for testing exceptions?
I was going to delete this question, but here is the solution for anyone else who finds themselves in the same situation:
The reason the application was breaking was that I had an Exception Breakpoint set up. That breaks as soon as an exception is raised, not when it bubbles up, so the app was actually being halted before execution had even reached my assertion. I just need to toggle off breakpoints (or just the exception breakpoint) when I am running tests.
I'm just getting started with Pex and running into an issue as described in the title. It seems that any parameterized tests generated by Pex or added by hand will create failing test cases for any inputs that cause an exception to be raised. Is there a way for me to indicate that certain inputs should raise exceptions, and therefore not cause a specific test to fail?
If you select the failing test, there is an 'Allow Exception' option which tells Pex that the exception is correct behaviour.