Occasionally, I'll find tests written by others that rely on specific text being on a page (e.g. a success message, an empty warning, etc.).
I find these distasteful and will usually replace them with either a check for a specific selector (e.g. #success-message or .error) or for an I18n value (e.g. I18n.t('foobar.success') or I18n.t('form.error.missing_error')).
The latter seems more future-proof, since my tests won't fail if the copy changes. However, some have argued that if you accidentally change the message, that won't be caught as a failure.
Is there a standard practice for this sort of thing that I'm not aware of?
I do both in my tests: check selectors and check copy. It's good if tests fail due to copy changes. It's a bit more maintenance, but the people who change copy should also be running the tests, or you should have a continuous integration setup that notifies you immediately when tests fail.
It gets a bit more complicated if you start running A/B or multivariate tests, but it isn't insurmountable... Like I said, just more maintenance.
IMO, the tradeoff of maintenance vs confidence in code coverage is well worth it.
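To make that concrete, here's a minimal JUnit/Selenium sketch of doing both checks in one test. The URL, the page structure, and the exact copy are assumptions made up for illustration; the question itself is about Rails/I18n, so treat this as the same idea expressed in Java terms:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class SignupSuccessTest {

    @Test
    public void showsTheSuccessMessageAfterSignup() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.test/signup"); // made-up URL
            // ... fill in and submit the form ...

            // 1) structural check: the element exists under the agreed selector
            WebElement message = driver.findElement(By.cssSelector("#success-message"));

            // 2) copy check: the wording is what was signed off on
            assertEquals("Your account was created.", message.getText());
        } finally {
            driver.quit();
        }
    }
}
```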
I'm using Robot Framework and Selenium via Selenium2Library.
I would like to test whether a value extracted from a DOM element has changed and is different from the one checked in the previous test run.
I'm thinking about using Robotframework-MongoDB-Library or another database. The next step would be adding a custom mini-library for saving and retrieving the extracted value for each test case.
In the first test run, all tests of this kind would be marked as failed, but subsequent runs should theoretically work correctly.
I'm not experienced in the testing field. Is this the right approach? If not, how can I execute this kind of test?
This is bad practice: on the second run (which will pass) you don't really know whether the DOM is actually correct, as the problem might be a persistent issue.
The idea is that tests are reproducible, so when something fails, you can reproduce the reason why they failed.
Also, this approach might cause an interesting behaviour change in your team: when the tests fail, people will just re-run them until they pass and won't bother looking at why they failed (I would bet good money on this :)).
Something you might want to do is refine your test so that you only check the bits that are important, rather than the whole DOM (or a big chunk of it).
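For example, instead of persisting a value between runs, assert the one value you care about against data the test itself controls. A rough Java/Selenium equivalent of that idea (the question uses Robot Framework; the URL, selector, and expected value here are invented for illustration):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class PriceLabelTest {

    @Test
    public void priceLabelShowsTheCataloguePrice() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.test/products/42"); // made-up page

            // Check only the one value that matters, against expected data the
            // test controls - not against whatever the previous run happened to see.
            String price = driver.findElement(By.cssSelector("#price")).getText();
            assertEquals("19.99", price);
        } finally {
            driver.quit();
        }
    }
}
```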
I am struggling a bit with how to write tests that reproduce an issue that has not yet been fixed.
Should one write the test with the wrong expectations, so that once the bug is fixed the developer sees the failure and adjusts the expectations? Or should one write the test with the correct expectations and disable it, re-enabling it once the bug is fixed?
I would prefer to define the wrong expectations and add the correct ones in comments; then, once I fix the issue, I will immediately be notified by the failing test. If I disable the test, I won't see it failing, and it will probably stay disabled until someone rediscovers it.
Are there any other ways of doing this?
Thanks for your comments.
Martin
Ideally you would write a test that reproduces the bug and then fix said bug.
If for whatever reason that is not currently an option, I would say that your approach of having the wrong expectations is better than having an ignored test, assuming you use clear variable names / method names / comments to make it obvious that the test is a placeholder and not the desired outcome.
One thing that I've done is write a test that is a "time bomb" reminder. I pick a date a few weeks or months out by which I expect to be able to get back to the issue or have it fixed. If I end up having to push the date out two or three times, I delete the test, because the issue must not be that important.
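For what it's worth, such a "time bomb" reminder in JUnit 4 can be as small as the sketch below; the date and the bug it refers to are placeholders made up for the example:

```java
import static org.junit.Assert.assertTrue;

import java.time.LocalDate;

import org.junit.Test;

public class FixmeReminderTest {

    // Placeholder deadline - push it out or delete the test when it expires.
    private static final LocalDate REVISIT_BY = LocalDate.of(2025, 1, 31);

    @Test
    public void reminderToRevisitKnownUploadBug() {
        assertTrue("Time bomb expired: revisit the upload bug tracked in the issue tracker",
                LocalDate.now().isBefore(REVISIT_BY));
    }
}
```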
As @Jarred said, the best way is to write a test that expresses the correct expectations, check that it fails, then fix the production code and see the test pass.
If that's not an option, then remember that tests are not only there to test but also to document. So write a test that documents how your program actually behaves today, and if necessary add a comment to it. Don't write tests that are ignored - it's pointless. In the future you may refactor your code many times; you could accidentally fix this test or introduce even more errors in this area. Writing tests that are intended to be ignored long-term is just a waste of time.
Don't be afraid that you will forget about that particular bug/test; just create a ticket in your issue tracking system - that's what it's made for.
If you use a testing framework that supports groups, you can put all those tests into one group so that you can instantly exclude them if needed (see the sketch after this answer for one way to do it with JUnit categories).
Also, I really don't like the concept of 'time bomb tests'. Your build MUST be reproducible - that's a fundamental assumption of release management, continuous integration, the ability to hand your code to another team, and so on. Tests are not meant to track and remind you about issues; that's the job of the issue tracking system. Seriously, don't do it.
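To illustrate the groups suggestion above, here is one way it might look with JUnit 4 categories; the category name and the test class are invented for the example:

```java
import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Suite that skips the known-bug group, e.g. for a release build.
@RunWith(Categories.class)
@Categories.ExcludeCategory(StableSuite.KnownBug.class)
@Suite.SuiteClasses(StableSuite.ParserTest.class)
public class StableSuite {

    // Marker interface used as a JUnit 4 category for tests documenting open bugs.
    public interface KnownBug {}

    public static class ParserTest {
        @Test
        @Category(KnownBug.class)
        public void documentsCurrentBrokenRounding() {
            // assert what the code actually does today; the ticket describes the desired behaviour
        }
    }
}
```

Most build tools and IDEs can include or exclude categories as well, so the known-bug group stays visible in everyday runs without blocking a clean build when you need one.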
Actually, I thought about this again. We are using JUnit, and it supports defining expectations on exceptions via @Test(expected=Exception.class).
So what one can do is write the test with the desired expectations and annotate it with @Test(expected=AssertionError.class). Once the bug is fixed, the test starts failing and the developer has to remove the expectation.
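A sketch of that pattern, assuming a made-up DiscountCalculator as the class under test:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountCalculatorTest {

    // The assertion states the *desired* behaviour, but while the bug is open it
    // throws an AssertionError - which this test declares as expected, so the
    // suite stays green. Once the bug is fixed, no AssertionError is thrown,
    // this test fails, and the developer removes the 'expected' attribute.
    @Test(expected = AssertionError.class)
    public void appliesTenPercentDiscount_knownBug() {
        DiscountCalculator calculator = new DiscountCalculator(); // hypothetical class under test
        assertEquals(90, calculator.apply(100, 10));
    }
}
```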
Where I am working we have the following issue:
Our current test procedure is that our business analysts test the release based on their specifications/tests. If it passes these tests, it is given to the quality department, where they test the new release and the entire system to check whether something else was broken.
Just to mention that we outsource our development. Unfortunately, the release given to us is rarely tested by the developers, and that's "the relationship" we have had with them for the last 7 years...
As a result, if a patch/release fails the tests at the functional testing level or at the quality level, then with each patch we are given we need to test the whole thing again, not just the release.
Is there a way we can prevent this from happening?
You have two options:
Separate the code into independent modules so that a patch/change in one module only means you have to re-test that one module. However, due to dependencies this is effective only to a very limited degree.
Introduce automated tests so that re-testing is not as expensive. It takes some more work at first, but will definitely pay off in your scenario. You don't have to do unit testing or TDD - integration tests based on capture-replay tools are often easier to introduce into an established project with a manual testing process.
Implement a continuous testing framework that you and the developers can access - something like CruiseControl.NET and NUnit to automate the functional tests.
Given access, they'll be able to see the nightly test results on the build. Heck, they don't even need to test it themselves: your tests will be run every night (or regularly), and they'll know straight away what faults they've caused, or fixed, if any.
Define a 'Quality SLA' - namely that all unit tests must pass, all new code must have a certain level of coverage, all new code must have a certain score in some static analysis checker.
Of course, anything like this can be gamed, so hold regular post-release debriefs where you discuss areas of concern and put contingencies in place to avoid them in future.
Implement a Go server with its dashboard, and work with the Go agent GUI at your end.
http://www.thoughtworks-studios.com/forms/form/go/download
Are there specific techniques to consider when refactoring non-regression tests? The code is usually pretty simple, but it's obviously not covered by the safety net of a test suite...
When building a non-regression test, I first ensure that it really exhibits the issue that I want to correct, of course. But if I come back to this test later because I want to refactor it (e.g. I just added another very similar test), I usually can't put the code under test back into a state where it exhibits the original issue. So I can't be sure that the test, once refactored, is still exercising the same paths in the code.
Are there specific techniques to deal with this issue, except being extra careful?
It's not a big problem. The tests test the code, and the code tests the tests. Although it's possible to make a clumsy mistake that causes a test to start passing under all circumstances, it's not likely. You'll be running the tests again and again, so the tests and the code they test get a lot of exercise, and when things change for the worse, tests generally start failing.
Of course, be careful; of course, run the tests immediately before and after refactoring. If you're uncomfortable about your refactoring, do it in a way that allows you to see the test working (passing and failing). Find a reliable way to make each test fail before the refactoring, and write it down. Get to green - all tests passing - then refactor the test. Run the tests; still green? Good. (If not, get back to green, perhaps by starting over.) Now perform the change that made the original, unrefactored test fail. Red? Same failure as before? Then reinstate the working code, check for green again, check it in, and move on to your next task.
Try to include not only positive cases in your automated tests, but also negative cases (and proper handling for them).
Also, you can run your refactored automated test with breakpoints and verify in the debugger that it still exercises all the paths you intended it to.
I'm writing a program that will continuously process files placed into a hot folder.
This program should have 100% uptime with no admin intervention. In other words, it should not fail on "stupid" errors - e.g. if someone deletes the output directory, it should simply recreate it and move on.
What I'm thinking of doing is coding the entire program, then going through it looking for "error points" and adding code to handle each error.
What I'm trying to avoid is adding spurious or unnecessary error handling, or building error handling into the control flow of the program (i.e. the error handling controls the flow of the program). Well, perhaps it could control the flow to a certain extent, but that would constitute bad design (subjective).
What are some methodologies for "error proofing" a "critical" process?
If your process must be error-proof and run with no admin intervention, you must handle all possible errors. If you leave any chance of the program stopping, it will happen (Murphy's Law) and you will not know about it.
Even if you handle all possible errors, I think you'll need some logging, and even a monitor with (email?) alerts, to be sure your process is always running fine.
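As a rough sketch of that shape of program in Java (the folder names and the per-file work are placeholders, not the asker's actual design), the key points are a catch-all around each polling cycle, per-file error handling so one bad file can't stop the service, and SEVERE log entries that a monitor can alert on:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.logging.Level;
import java.util.logging.Logger;

public class HotFolderProcessor {

    private static final Logger LOG = Logger.getLogger(HotFolderProcessor.class.getName());
    private static final Path INPUT = Paths.get("hotfolder");   // placeholder paths
    private static final Path OUTPUT = Paths.get("output");

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            try {
                ensureOutputDirectory(OUTPUT); // recreate it if someone deleted it
                try (DirectoryStream<Path> files = Files.newDirectoryStream(INPUT)) {
                    for (Path file : files) {
                        try {
                            processFile(file, OUTPUT); // placeholder for the real work
                        } catch (Exception e) {
                            // one bad file must not take the service down: log and move on,
                            // and let a monitor alert on SEVERE entries
                            LOG.log(Level.SEVERE, "Failed to process " + file, e);
                        }
                    }
                }
            } catch (Exception e) {
                // catch-all so the polling loop itself never dies on "stupid" errors
                LOG.log(Level.SEVERE, "Polling cycle failed", e);
            }
            Thread.sleep(5_000); // simple polling interval
        }
    }

    static void ensureOutputDirectory(Path dir) throws IOException {
        Files.createDirectories(dir); // no-op if it already exists
    }

    private static void processFile(Path file, Path outputDir) throws IOException {
        Files.move(file, outputDir.resolve(file.getFileName()), StandardCopyOption.REPLACE_EXISTING);
    }
}
```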
The most important thing to do is to document your assumptions in the form of unit tests. You should write a test that violates each assumption, and then prove that your program successfully recovers or takes action to restore the expected state.
To use your example, if someone could delete the critical folder, make a test that simulates this and then show that your program handles this case without crashing.
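A minimal JUnit sketch of such a test, assuming the processor exposes some way to guarantee the output directory exists (shown here as a hypothetical ensureOutputDirectory(Path) helper):

```java
import static org.junit.Assert.assertTrue;

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

public class OutputFolderRecoveryTest {

    @Rule
    public TemporaryFolder tmp = new TemporaryFolder();

    @Test
    public void recreatesOutputDirectoryWhenSomeoneDeletesIt() throws Exception {
        Path output = tmp.newFolder("output").toPath();
        Files.delete(output); // violate the assumption the program relies on

        // Hypothetical hook: whatever the processor uses to guarantee the
        // output directory exists before it writes results.
        HotFolderProcessor.ensureOutputDirectory(output);

        assertTrue("processor should recreate a deleted output directory",
                Files.isDirectory(output));
    }
}
```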
Unit testing.
One technique for thorough analysis is a HAZOP study, where for each part of the process you consider keywords for that part. For a chemical in a process plant, these might be 'more', 'less', 'missing', 'hotter', 'colder', 'leak', 'pressure' and so on.
When applying HAZOP to software, you would consider keywords appropriate to the objects in your software.
For example, for reading a file you might consider 'more' to be a buffer overrun, 'less' to be missing data, 'missing' to be a non-existent file, 'leak' to be a lack of file handles, and so on.
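As a loose illustration in JUnit terms (RecordReader and its behaviour are invented for the example), each keyword can be turned into a concrete test case for the file-reading step:

```java
import static org.junit.Assert.assertEquals;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

// RecordReader is a hypothetical class under test; each test pins one HAZOP
// keyword ('missing', 'less', 'more') to a concrete failure mode of reading a file.
public class RecordReaderHazopTest {

    @Rule
    public TemporaryFolder tmp = new TemporaryFolder();

    @Test(expected = NoSuchFileException.class) // keyword: 'missing' - the file does not exist
    public void missingFileIsReportedWithAClearError() throws Exception {
        new RecordReader().read(Paths.get("no-such-file.csv"));
    }

    @Test // keyword: 'less' - truncated data should be skipped, not crash the reader
    public void truncatedRecordIsSkipped() throws Exception {
        Path file = tmp.newFile("truncated.csv").toPath();
        Files.write(file, "id,name\n42,".getBytes(StandardCharsets.UTF_8));
        assertEquals(0, new RecordReader().read(file).size());
    }

    @Test // keyword: 'more' - an oversized line must not overrun any buffer
    public void oversizedLineIsStillParsed() throws Exception {
        Path file = tmp.newFile("huge.csv").toPath();
        String line = "id,name\n1," + "x".repeat(1_000_000);
        Files.write(file, line.getBytes(StandardCharsets.UTF_8));
        assertEquals(1, new RecordReader().read(file).size());
    }
}
```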