The checks Selenium performs usually come in two flavours: assertFoo and verifyFoo. I understand that assertFoo fails the whole testcase whereas verifyFoo just notes the failure of that check and lets the testcase carry on.
So with verifyFoo I can get test results for multiple conditions even if one of them fails. On the other hand, a single failing check is enough to tell me that my edits broke the code and that I have to correct them anyway.
In which concrete situations do you prefer one of the two ways of checking over the other? What are your experiences that motivate your view?
I would use an assert() as an entry point (a "gateway") into the test. Only if the assertion passes will the verify() checks be executed. For instance, if I'm checking the contents of a window resulting from a series of actions, I would assert() the presence of the window, and then verify() the contents.
An example I use often - checking the estimates in a jqGrid: assert() the presence of the grid, and verify() the estimates.
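A minimal sketch of that gateway pattern with the Java client, assuming a SeleneseTestCase subclass whose setUp has already started the session; the grid id and cell locators are made up:

public void testEstimates() {
    selenium.open("/estimates");

    // Gateway: abort immediately if the grid never rendered.
    assertTrue(selenium.isElementPresent("id=estimates-grid"));

    // Soft checks: record every wrong estimate instead of stopping at the first.
    verifyEquals("42", selenium.getText("xpath=//table[@id='estimates-grid']//tr[1]/td[2]"));
    verifyEquals("17", selenium.getText("xpath=//table[@id='estimates-grid']//tr[2]/td[2]"));
}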
I've come across a few problems which were overcome by using assert*() instead of verify*().
For example, in form validations, if you want to check a form element, using verifyTrue(...) will just pass the test even if the expected string is not present in the form.
If you replace verify with assert, then it works as expected.
I strongly recommend going with assert*().
If you are running Selenium tests on a production system and want to make sure you are logged in as a test user (rather than, say, your personal account), it is a good idea to first assert that the right user is logged in before triggering any actions that would have unintended effects if used by accident.
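For example, a two-line sketch with the Java client; the locators and the account name are assumptions about the application under test:

// Gateway: bail out unless the dedicated test account is active.
assertEquals("selenium-test-user", selenium.getText("id=current-user"));
// Only reached when the right user is logged in:
selenium.click("id=delete-all-drafts");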
Usually you should stick to one assertion per test case, in which case the difference boils down to any tear-down code which must be run. But you should probably put that in an @After method anyway.
I've had quite a few problems with the verify*() methods in SeleneseTestBase (e.g. they use System.out.println(), and com.thoughtworks.selenium.SeleneseTestBase.assertEquals(Object, Object) just doesn't do what you expect) so I've stopped using them.
Is it possible for a program to have a fault that dynamic testing cannot find? Any simple example?
Yes. Testing can only demonstrate the absence of bugs for the cases you actually tested. Dynamic testing cannot cover all possible inputs and outputs in all environments with all dependencies.
The first is to simply not test the code in question. This can be detected by checking the coverage of your tests, though even 100% coverage leaves room for flaws.
The next is to not check all possible types and ranges of inputs. For example, if you have a function that scans for a word in a string, you need to check for the cases below (a test sketch follows the boundary-condition list):
The word at the start of the string.
The word at the end of the string.
The word in the middle of the string.
A string without the word.
The empty string.
These are known as boundary conditions and include things like:
0
Negative numbers
Empty strings
Null
Extremely large values
Decimals
Unicode
Empty files
Extremely large files
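To make the word-scanning example above concrete, here is a minimal JUnit sketch. containsWord() is hypothetical; a naive implementation is included only so the sketch is self-contained:

import static org.junit.Assert.*;
import org.junit.Test;

public class ContainsWordTest {
    @Test public void wordAtStart()  { assertTrue(containsWord("fox jumps high", "fox")); }
    @Test public void wordAtEnd()    { assertTrue(containsWord("high jumps fox", "fox")); }
    @Test public void wordInMiddle() { assertTrue(containsWord("the fox jumps", "fox")); }
    @Test public void wordAbsent()   { assertFalse(containsWord("the dog jumps", "fox")); }
    @Test public void emptyString()  { assertFalse(containsWord("", "fox")); }
    @Test public void nullString()   { assertFalse(containsWord(null, "fox")); }

    // Hypothetical function under test; the real code would live elsewhere.
    private static boolean containsWord(String s, String word) {
        if (s == null) return false;
        for (String token : s.split("\\s+")) {
            if (token.equals(word)) return true;
        }
        return false;
    }
}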
If the code in question keeps state, maybe in an object, maybe in global variables, you have to test that state does not become corrupted or interfere with subsequent runs.
If you're doing parallel processing you must test any number of possibilities for deadlocks or corruption resulting from trying to do the same thing at the same time. For example, two processes trying to write to the same file. Or two processes both waiting for a lock on the same resource. Do they lock only what they need? Do they give up their locks ASAP?
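As an illustration, a minimal Java sketch of the lock-ordering hazard (the resource names are hypothetical; running it will usually hang, which is exactly the bug a test has to hunt for):

import java.util.concurrent.locks.ReentrantLock;

public class DeadlockSketch {
    static final ReentrantLock lockA = new ReentrantLock();
    static final ReentrantLock lockB = new ReentrantLock();

    public static void main(String[] args) {
        // Thread 1 takes A then B; thread 2 takes B then A. If each grabs
        // its first lock before the other's second, both wait forever.
        new Thread(() -> withBoth(lockA, lockB)).start();
        new Thread(() -> withBoth(lockB, lockA)).start();
    }

    static void withBoth(ReentrantLock first, ReentrantLock second) {
        first.lock();
        try {
            pause(100); // widen the race window for demonstration
            second.lock();
            try { /* touch the shared resources here */ } finally { second.unlock(); }
        } finally {
            first.unlock();
        }
    }

    static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }
}

The usual fix is to always acquire locks in one global order, and a test suite has to hammer such code concurrently to have any chance of exposing the race.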
Once you have tested all the ways the code is supposed to work, you have to test all the ways it can fail: whether it fails gracefully with an exception (instead of producing garbage), whether an error leaves it in a corrupted state, and so on. How does it handle resource failure, like failing to connect to a database? This becomes particularly important when working with databases and files, to ensure a failure doesn't leave things partially altered.
For example, if you're transferring money from one account to another you might write:
my $from_balance = get_balance($from);
my $to_balance   = get_balance($to);
# Crash window: if the program dies between these two writes,
# $from has been debited but $to was never credited.
set_balance($from, $from_balance - $amount);
set_balance($to,   $to_balance + $amount);
What happens if the program crashes after the first set_balance? What happens if another process changes either balance between get_balance and set_balance? These sorts of concurrency issues must be thought of and tested.
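A sketch of the usual fix, assuming a SQL database reachable over JDBC (the table and column names are made up): wrap both writes in one transaction and lock the rows up front, so a crash rolls back cleanly and concurrent transfers serialize instead of clobbering each other.

import java.sql.Connection;
import java.sql.PreparedStatement;

class MoneyTransfer {
    static void transfer(Connection conn, long from, long to, long amount) throws Exception {
        conn.setAutoCommit(false); // both writes commit together or not at all
        try (PreparedStatement lock = conn.prepareStatement(
                 "SELECT balance FROM accounts WHERE id IN (?, ?) FOR UPDATE");
             PreparedStatement debit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance - ? WHERE id = ?");
             PreparedStatement credit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            lock.setLong(1, from); lock.setLong(2, to);
            lock.executeQuery(); // row locks block concurrent transfers until commit
            debit.setLong(1, amount); debit.setLong(2, from);
            debit.executeUpdate();
            credit.setLong(1, amount); credit.setLong(2, to);
            credit.executeUpdate();
            conn.commit();
        } catch (Exception e) {
            conn.rollback(); // a failure leaves neither account altered
            throw e;
        }
    }
}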
Then there are all the different environments the code could run in. Different operating systems. Different compilers. Different dependencies. Different databases. And all in different versions. All of these have to be tested.
The test can simply be wrong. It can be a mistake in the test. It can be a mistake in the spec. Generally one tests the same code in different ways to avoid this problem.
The test can be right, the spec can be right, but the feature is wrong. It could be a bad design. It could be a bad idea. You can argue this isn't a "bug", but if the users don't like it, it needs to be fixed.
If your testing makes use of a lot of mocking, your mocks may not reflect how the thing being mocked actually behaves.
And so on.
For all these flaws, dynamic testing remains the best we've got for testing more than a few dozen lines of code.
I am writing some integration tests for some legacy code. To ensure the functions behave as expected, I need to setup the fake data, invoke the testing APIs, then clean up the data.
Due to policy reasons, we can only access the database via tools like Hibernate and MyBatis, never a direct connection. However, our delete() method on the DAOs is always soft-deletion style (i.e., it turns on the is_delete flag). So the clean-up actually just turns on the is_delete flag, and the fake data is still there!
So, should I add a "real-delete" method on the DAOs for the integration tests, or is there a better way to deal with this problem?
There is nothing wrong with adding a real delete method - after all, the point of an integration test is to test all the components together in an effort to emulate the way they will actually be used.
I would just make sure that if you do this, you first add records that you know will not be duplicates. Then you can assert that those records are present in the database, delete them, and assert that they are no longer present. That way you ensure that your test never deletes real data.
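For instance, a hedged sketch of such a test; UserDao, User, and hardDelete() are hypothetical names, with hardDelete() being the test-only physical delete discussed above:

import static org.junit.Assert.*;
import org.junit.Test;
import java.util.UUID;

public class UserDaoIT {
    private final UserDao dao = new UserDao(); // hypothetical DAO under test

    @Test
    public void fakeUserIsFullyRemoved() {
        // A random email cannot collide with real production records.
        String email = "it-" + UUID.randomUUID() + "@example.test";
        dao.save(new User(email));

        assertNotNull(dao.findByEmail(email)); // the fixture really exists
        dao.hardDelete(email);                 // physical DELETE, not an is_delete flag
        assertNull(dao.findByEmail(email));    // and it is really gone
    }
}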
I have several tests to run and all of them share a certain number x of initial actions (say login, fill form fields, click buttons, etc.), then they diverge.
Is it possible to let the browser execute the first x actions just once, save the current state, and then execute all the tests separately (in parallel if possible), each one with a separate browser instance?
You should try to avoid duplicating effort in your tests. However, you must aim for consistency above all, and maintainability is probably just as important.
What that means is that using the browser in a way a real user wouldn't (I think your state-saving idea counts) is very risky for consistency, and may fail to give you the meaningful results you need.
Another alternative - a 'monolithic' test that attempts to cover multiple scenarios within one user session - is also problematic, because it's slower to run and slower to write and debug.
To be honest I think the idea of "browser state" is one that isn't a good fit for the real web.
My suggestion is to run dedicated, self-contained, clean tests - even if they do duplicate things like login/registration forms. However, if it is important to minimise the length of your test runs, try running them in parallel: ideally on multiple VMs, or via Selenium Grid.
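If you go the dedicated-test route, the shared steps can still live in one place. A rough sketch with the Java client (the URL and locators are made up); every test class inherits the login but still gets its own independent browser session, so the tests stay parallelisable:

public abstract class LoggedInTestBase extends SeleneseTestCase {
    @Override
    public void setUp() throws Exception {
        setUp("http://app.example.test/", "*firefox"); // fresh session per test
        selenium.open("/login");
        selenium.type("id=username", "test-user");
        selenium.type("id=password", "secret");
        selenium.click("id=submit");
        selenium.waitForPageToLoad("30000");
    }
}

Subclasses then contain only the steps where the scenarios diverge.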
I read Bob Martin's brilliant article on how "Given-When-Then" can actually be compared to an FSM. It got me thinking: is it OK for a BDD test to have multiple "When"s?
For example:
GIVEN my system is in a defined state
WHEN an event A occurs
AND an event B occurs
AND an event C occurs
THEN my system should behave in this manner
I personally think these should be 3 different tests for good separation of intent. But other than that, are there any compelling reasons for or against this approach?
When multiple steps (WHEN) are needed before you do your actual assertion (THEN), I prefer to group them in the initial-condition part (GIVEN) and keep only one in the WHEN section. This kind of shows that the event that really triggers the "action" of my SUT is this one, and that the previous ones are merely steps to get there.
Your test would become:
GIVEN my system is in a defined state
AND an event A occurs
AND an event B occurs
WHEN an event C occurs
THEN my system should behave in this manner
but this is more of a personal preference I guess.
If you truly need to test that a system behaves in a particular manner under those specific conditions, it's a perfectly acceptable way to write a test.
I found that another limiting factor can be an E2E testing scenario in which you would like to reuse a statement multiple times. The BDD framework of my choice (pytest_bdd) is implemented in a way that a given statement can have only a single return value, which it maps to the then step's input parameters automagically, by the name of the function bound to the given step. This design prevents reusability, which in my case is exactly what I wanted. In short, I needed to create objects and add them to a sequence object provided by another given statement. I worked around this limitation by using a test fixture (which I named test_context), a Python dictionary (a hashmap), together with when statements, which don't have the same single-return-value requirement. The '(when) add object to sequence' step looked up the sequence in the context and appended the object in question to it, so I could reuse the 'add object to sequence' action multiple times.
This requirement was tricky because BDD aims to be descriptive. I could have used a single given statement with the pickled memory map of the sequence object I wanted to perform the test action on. But would that have been useful? I think not. I needed to get the sequence constructed first, and that required reusable statements. Although this is not in the BDD bible, I think it is, in the end, a practical and pragmatic solution to a very real E2E descriptive-testing problem.
Is there a way to modularize JMeter tests?
I have recorded several use cases for our application. Each of them is in a separate thread group in the same test plan. To control the workflow I wrote some primitives (e.g. postprocessor elements) that are used in many of these thread groups.
Is there a way not to copy these elements into each thread group but to use some kind of referencing within the same test plan? What would also be helpful is a way to reference elements from a different file.
Does anybody have any solutions or workarounds? I guess I am not the only one trying to follow the DRY principle...
I think this post from Atlassian describes what you're after using Module controllers. I've not tried it myself yet, but have it on my list of things to do :)
http://blogs.atlassian.com/developer/2008/10/performance_testing_with_jmete.html
You can't do this with JMeter. The UI doesn't support it. The Workbench would be a perfect place to store those common elements but it's not saved in JMX.
However, you can parameterize just about anything so you can achieve similar effects. For example, we use the same regex post processor in several thread groups. Even though we can't share the processor, the whole expression is a parameter defined in the test plan, which is shared. We only need to change one place when the regex changes.
They are talking about saving the Workbench in a future version of JMeter. Once that's done, it would be trivial to add some UI to refer to elements stored in the Workbench.
Module controllers are useful for executing the same samples in different thread groups.
It is possible to use the same assertions in multiple thread groups very easily.
At your Test Plan level, create a set of User Defined Variables with names like "Expected_Result_x". Then, in your Response Assertion, simply reference the variable name ${Expected_Result_x}. You still need to add the assertion manually to every page you want a particular assertion on, but now you only have to change it in one place if the assertion changes.
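As a rough illustration (the element names are made up), the resulting test plan tree would look like this; editing the one variable updates the assertion everywhere it is referenced:

Test Plan
  User Defined Variables
    Expected_Result_login = Welcome back, test user
  Thread Group A
    HTTP Request /login
      Response Assertion: response contains ${Expected_Result_login}
  Thread Group B
    HTTP Request /login
      Response Assertion: response contains ${Expected_Result_login}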