We are working on migrating a Rails application from 2.3 to 3. The tests are written using minitest. There are many errors in the tests themselves, so as we fix errors, the number of assertions reported keeps growing. We would only see the correct number of assertions once all the errors are fixed.
But is there any way to get the total number of assertions written, even while there are errors in the tests?
Each test will stop at the first assertion that fails, so if you have more than one assertion in a test, the number will grow as the earlier assertions start to pass.
The number of assertions shown is the number of assertions that were executed, not the number written in the test code. There is no way that I am aware of to get the number of assertions in the test code, passing or not. Why do you want this? What are you hoping to learn from it?
In Selenium I often find myself making tests like ...
// Test #1
login();
// Test #2
login();
goToPageFoo();
// Test #3
login();
goToPageFoo();
doSomethingOnPageFoo();
// ...
In a unit testing environment, you'd want separate tests for each piece (i.e. one for login, one for goToPageFoo, etc.) so that when a test fails you know exactly what went wrong. However, I'm not sure this is a good practice in Selenium.
It seems to result in a lot of redundant tests, and the "know what went wrong" problem doesn't seem so bad, since it's usually clear what went wrong by looking at which step the test was on. And it certainly takes longer to run a bunch of "build up" tests than it takes to run just the last ("built up") test.
Am I missing anything, or should I just have a single long test and skip all the shorter ones building up to it?
I have built a large test suite in Selenium using a lot of smaller tests (like in your code example). I did it for exactly the same reason you did: to know "what went wrong" on a test failure.
This is a common best practice for standard unit tests, but if I had to do it over again, I would mostly go with the second approach: larger built-up tests, with some smaller tests where needed.
The reason is that Selenium tests take an order of magnitude longer than standard unit tests to run, particularly on longer scenarios. This makes the whole test suite unbearably long with most of the time being spent on running the same redundant code over and over again.
When you do get an error, say in a step that is repeated at the beginning of 20+ different tests, it does not really help to know you got the same error 20+ times. My test runner runs my tests out of order, so my first error isn't even on the first incremental test of the "build-up" series. I end up looking at the first test failure and its error message to see where the failure came from - the same thing I would do if I had used larger "built-up" tests.
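For what it's worth, a single "built-up" test along those lines might look roughly like this with JUnit 4 and the Selenium WebDriver Java bindings. The URL, element IDs and helper methods below are made-up placeholders, not taken from the question; the point is only that the steps become private helpers inside one test instead of separate incremental tests.
import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class PageFooTest {

    private WebDriver driver;

    @Before
    public void setUp() {
        driver = new FirefoxDriver();
    }

    @After
    public void tearDown() {
        driver.quit();
    }

    // One larger test instead of three incremental ones; if a step fails,
    // the stack trace still points at the helper method that broke.
    @Test
    public void loginGoToFooAndDoSomethingOnFoo() {
        login();
        goToPageFoo();
        doSomethingOnPageFoo();
    }

    private void login() {
        driver.get("http://example.com/login");
        driver.findElement(By.id("username")).sendKeys("user");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("loginButton")).click();
    }

    private void goToPageFoo() {
        driver.findElement(By.linkText("Foo")).click();
    }

    private void doSomethingOnPageFoo() {
        driver.findElement(By.id("fooAction")).click();
        assertEquals("Done", driver.findElement(By.id("result")).getText());
    }
}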
I'm generating unit tests with EvoSuite and would like to get as close to 100% code coverage from the resulting unit tests as possible. What are the best command line options/parameters to set to accomplish this?
EvoSuite comes with its parameters already tuned. If you want to improve coverage further, you'd need to increase the allotted time for test generation (e.g. by using the -Dsearch_budget parameter), although that cannot guarantee 100% coverage. For more info, see http://www.evosuite.org/documentation/commandline/
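For example, a hypothetical invocation (the class name and classpath are placeholders; check the documentation linked above for the options supported by your EvoSuite version) could look like:
java -jar evosuite.jar -class com.example.Foo -projectCP target/classes -Dsearch_budget=600
Here -Dsearch_budget raises the search time per class to 600 seconds; more budget generally means more coverage, but still no guarantee of reaching 100%.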
I am struggling a bit with how to write tests that reproduce an issue that has not yet been fixed.
Should one write the test with wrong expectations, so that once the bug is fixed the developer sees the failure and adjusts the expectations? Or should one write the test with the correct expectations and disable it, then re-enable it once the bug is fixed?
I would prefer to define the wrong expectations and add the correct ones in comments; once I fix the issue, I will immediately be notified that the test fails. If I disable the test, I won't see it failing, and it will probably stay disabled until someone rediscovers it.
Are there any other ways of doing this?
Thanks for your comments.
Martin
Ideally you would write a test that reproduces the bug and then fix said bug.
If for whatever reason that is not currently an option, I would say that your approach of having the wrong expectations is better than having an ignored test, assuming you use clear variable names / method names / comments to make it obvious that the test is a placeholder and not the desired outcome.
One thing that I've done is write a test that is a "time bomb" reminder. I pick a date a few weeks or months out, by which I expect to be able to get back to it or have it fixed. If I end up having to push the date out 2 or 3 times, I delete the test, because it must not be that important.
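For illustration, such a "time bomb" could be written like this in JUnit 4 (the date and the issue number are placeholders):
import static org.junit.Assert.assertTrue;

import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

import org.junit.Test;

public class Issue123ReminderTest {

    // "Time bomb" reminder: passes quietly until the chosen date,
    // then starts failing to remind us to fix (or re-triage) the bug.
    @Test
    public void remindMeToFixIssue123() {
        Date deadline = new GregorianCalendar(2012, Calendar.JUNE, 1).getTime();
        assertTrue("Issue #123 is still open - fix it or push the date out",
                new Date().before(deadline));
    }
}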
As Jarred said, the best way is to write a test that expresses the correct expectations, check that it fails, then fix the production code and see the test pass.
If that's not an option, then remember that tests are not only there to test but also to document. So write a test that documents how your program actually works; if necessary, add a comment to the test. And don't write tests that are ignored - it's pointless. You may refactor your code many times in the future, and you could accidentally fix this test or introduce even more errors in this area; writing tests that are intended to stay ignored long-term is just a waste of time.
Don't be afraid that you will forget about that particular bug/test; just create a ticket in your issue tracking system - that's what it's made for.
If you use a testing framework that supports groups, you can add all those tests to a group, so you can instantly exclude them when needed.
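With JUnit 4, for example, such a group can be expressed as a category. A sketch (KnownBug is a marker interface you define yourself; the test name is a placeholder):
import org.junit.Test;
import org.junit.experimental.categories.Category;

public class SomethingTest {

    // Marker interface used as a JUnit category for tests that
    // currently document known, unfixed bugs.
    public interface KnownBug {}

    @Category(KnownBug.class)
    @Test
    public void documentsCurrentBuggyBehaviour() {
        // ... expectations matching today's (wrong) behaviour ...
    }
}
The build can then exclude the KnownBug category from the normal run (Maven Surefire, for instance, has groups/excludedGroups settings for this) while a separate job still executes those tests.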
Also, I really don't like the concept of "time bomb tests". Your build MUST be reproducible - that's a fundamental assumption of release management, continuous integration, the ability to hand your code to another team, etc. Tests are not meant to track and remind you about issues; that's the job of the issue tracking system. Seriously, don't do it.
Actually, I thought about this again. We are using JUnit, and it supports defining expectations on exceptions via @Test(expected=Exception.class).
So what one can do is write the test with the desired expectations and annotate it with @Test(expected=AssertionError.class). Once the bug is fixed, the test starts failing and the developer has to remove the expectation.
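A minimal sketch of that idea with JUnit 4 (the production class, method and expected value are hypothetical placeholders):
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class KnownBugTest {

    // The assertion below describes the DESIRED behaviour, which is
    // currently broken. While the bug exists, the assertion throws an
    // AssertionError, which the annotation treats as the expected outcome,
    // so the test passes. Once the bug is fixed, no AssertionError is
    // thrown, the test starts failing, and the expected=... clause
    // (plus this comment) must be removed.
    @Test(expected = AssertionError.class)
    public void calculatesTotalCorrectly() {
        assertEquals(42, new Calculator().total()); // hypothetical production code
    }
}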
We are writing integration tests for our Grails 2.0.0 application with the help of the Fixtures and Build-Test-Data plugins.
During testing, it was discovered that the integration tests fail at certain times and pass at others. Running 'test-app' sometimes results in all tests passing, and sometimes results in some of our tests failing.
When the tests fail, they are caused by a unique constraint being violated during the insert of an instance of a domain class. This would indicate that there are still records in the test DB. I am running the H2 db, and have definitely got 'dbCreate = "create-drop"' in my DataSource.groovy.
Grails 2.0 integration test pollution? seems to indicate there is a significant test-pollution problem in Grails. Are there any solutions to this? Have I hit Grails-8530?
[Edit] The test pollution seems to be caused by the unit tests. We have sort of proved this by deleting the unit tests and successfully running 'test-app' repeatedly.
When I run into errors like this, I like to try to find the unit test(s) causing the problem. This might be kinda tricky since yours seem to fail only on occasion.
1) I'd look at unit tests that were recently added. If this problem just started happening then that's a good place to look.
2) Metaclassing seems to be good at causing these types of errors, so I'd look for metaclassing that isn't set up / torn down properly. Not as much of an issue with 2.0 as with <= 1.3.7, but it could be the problem.
3) I wrote a plugin that executes your tests in a random order (http://grails.org/plugin/random-test-order), which might not help you solve your current problem by itself. But what might help is that it prints out all of your tests, so you can take that list, run grails test-app <pasted list of unit tests> IntegrationTestThatIsFailing, and then start removing unit tests to find the culprit(s). I found a bug in this with 2.0 that I haven't had time to fix yet (integration tests fail when asserting on the rendered view name), but it should still print out your test names for you (which is better than doing it yourself :)
The fact that the integration tests fail with a constraint violation due to existing records reminds me of a situation I once encountered with functional tests (Selenium) executing in unpredictable order, some of them not cleaning up the database properly. Sure, the situation with functional tests is different, since it is more difficult to restore the database state (the test case cannot roll back a transaction in another JVM).
Although integration tests usually roll back transactions, it is still possible to break this behavior if your code controls transactions (commits) explicitly.
First, I would try forcing the execution order, as mentioned by Jarred in 3). Assuming you can then reproduce the behavior, I would check the transactional behavior next. Setting the logging level of org.hibernate.transaction to debug should show you where the transaction boundaries are.
Sorry, I don't yet have a good explanation for why wiping out the unit tests gets rid of the symptoms, beyond a general "possibly metaclassing issues". :)
We currently have a situation where several tests are failing. Someone is working on this, but it is not me; I have been tasked with other work. So I plan on running the tests in NUnit before I begin my work, so I have a baseline of failing tests and their failure messages. I would like to use this result to verify that those tests fail with the exact same result while testing my own code. Are there any resources that would allow me to do this?
Update
I'm aware of the ExpectedException attribute. However, that will not work for the tests that are failing on their assertions. Also, there are thousands of tests, of which only about 100 are failing. I was hoping for something that would compare the two test runs and show me the differences.
I'd throw an
[Ignore("SomeCustomStringICanFindLater")]
attribute on the failing tests until they are fixed.
See IgnoreAttribute.
And try to convince your manager that a broken build should be everyone's top priority.
After doing some research while waiting on an answer, I found that the console runner produces XML output. I can use a diff tool to compare the two test runs and see which tests failed differently from the baseline run.
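For reference, with the NUnit 2.x console runner that would be something along these lines (assembly and file names are placeholders; check your runner's help for the exact switch):
nunit-console.exe /xml:baseline.xml MyProject.Tests.dll
rem ... work on your changes, then produce a second result file ...
nunit-console.exe /xml:current.xml MyProject.Tests.dll
Then compare baseline.xml and current.xml with whatever diff tool you prefer.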