What is the practical benefit to putting t.Parallel() at the top of my tests?

The go testing package defines a Parallel() function:
Parallel signals that this test is to be run in parallel with (and only with) other parallel tests.
However, when I searched the tests written for the standard library, I found only a few uses of this function.
My tests are pretty fast, and generally don't rely on mutating shared state, so I've been adding this, figuring it would lead to a speedup. But the fact that it's not used much in the standard library gives me pause. What is the practical benefit of adding t.Parallel() to your tests?

This thread (in which t.Parallel was conceived and discussed) indicates that t.Parallel() is intended to be used for slow tests only; average tests are so fast that any gain from parallel execution would be negligible.
Here are some quotes (only from Russ, but there wasn't much opposition):
Russ Cox [link]:
There is some question about what the right default is.
Most of our tests run so fast that parallelizing them is
unnecessary. I think that's probably the right model,
which would suggest parallelizing is the exception,
not the rule.
As an exception, this can be accommodated having a
t.Parallel()
method that a test can call to declare that it is okay
to run in parallel with other tests.
Russ Cox again [link]:
the non-parallel tests should be fast. they're inconsequential.
if two tests are slow, then you can t.Parallel them if it bothers you.
if one test is slow, well, one test is slow.

This seems to have been brought up first on the golang-dev group.
The initial request states:
"I'd like to add an option to run tests in parallel with gotest.
My motivation comes from running selenium tests where each test is pretty much independent from each other but they take time."
The thread contains the discussion of the practical benefits.
It's really just for allowing you to run unrelated, long-running tests at the same time. It's not really used in the standard library because almost all of that functionality needs to be as fast as possible (with some crypto exceptions, etc.).
There was further discussion here, and the commit is here.

Related

Executing Feature in isolation, but contained Scenarios in parallel

I have a large-ish and rapidly growing body of karate tests and use the parallel execution to speed up processing, which basically works great in different configurations:
Normal parallel execution (vast majority of tests)
Sequential execution of Scenarios within a Feature (parallel=false) for very few special cases
Completely sequential execution (via a separate single-threaded runner, triggered by a custom #sequential tag) for things that modify configuration settings, global lookups, etc.
However, there's also a parameterized (Scenario Outline) feature for the basic functionality of many types of global lookups. Currently it runs in the "completely sequential" mode because it affects other tests. But the scenarios inside that feature could actually be executed in parallel (they don't affect each other) as long as the Feature as a whole is executed in isolation (because its tests do affect other Features).
So - is there a way to implement "sequential Features with parallel Scenarios" execution? I admit that this is likely a niche case, but it would speed up test execution quite a bit in my case.
... and posting this question already got the ideas flowing and pointed me to a possible way to implement this:
private static void runLocalParallel(Builder<?> builder) {
    final List<Feature> features = builder.tags("#local_parallel").resolveAll();
    for (Feature feature : features) {
        builder.features(feature).parallel(8);
    }
}
This identifies all features tagged with #local_parallel, iterates over them, and executes a parallel runner for each one individually. Result handling, report output, etc. still need to be implemented in an elegant manner, but that's doable as well.
Yes, indeed an edge case - but it has come up a few times. We've wondered about a way to "bucketize" threads, meaning we could say that certain tags have to run only on a particular thread. Come to think of it, that's a good feature request, so I opened one - feel free to comment: https://github.com/karatelabs/karate/issues/2235
In theory, if you write some Java glue code that holds a lock, you can call that code before entering any "critical" feature. I haven't tried it, but it may be worth experimenting.
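A minimal sketch of what that glue could look like (the class name and wiring below are hypothetical, not from the answer): a static lock shared by every Karate thread in the same JVM, acquired at the start of a "critical" feature and released when it finishes.

// GlobalLock.java - hypothetical glue class; the lock is static,
// so it is shared by all Karate threads running in one JVM.
package util;

import java.util.concurrent.locks.ReentrantLock;

public class GlobalLock {

    private static final ReentrantLock LOCK = new ReentrantLock();

    // Call from a Background step of every "critical" feature.
    public static void acquire() {
        LOCK.lock();
    }

    // Call from an afterFeature hook so the lock is always released.
    public static void release() {
        LOCK.unlock();
    }
}

In the feature itself this could be wired up with something like * def GlobalLock = Java.type('util.GlobalLock') and * eval GlobalLock.acquire() in the Background, plus a configure afterFeature hook that calls GlobalLock.release(); other features keep running in parallel and simply queue up when they reach the lock.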

What's wrong with using Hard Waits in UI Automation scripts?

I keep getting questions where people complain that explicit or fluent waits don't work all the time, so they look for some other solution. In most of these cases, adding a hard wait of 2 or 3 seconds resolves the problem.
My view is that using hard waits of a few seconds, where they are really required, makes the scripts a little slower but does not make them less reliable.
I know the answer to my question is not code, but I really want to understand: are there any serious hazards to using hard waits in automation scripts?
To me they are just as important as explicit and fluent waits.
Hard or implicit waits are undesirable because they add unnecessary execution time to an automation suite. If your test suite is small, is not run regularly, or you are not under any time constraints, then maybe implicit waits are acceptable.
In most real-world scenarios, quick test-execution turnaround is highly desirable, and a few 2-3 second waits add up pretty quickly if you are executing tests on every commit to a branch or have a large test suite.
I would argue that if explicit waits are not working properly then they are either:
poorly written
not given enough time
The former is more likely.
The hazard of using a "hard wait", or Thread.sleep() more precisely, is that you can never be sure your condition has been met. In UI testing you mostly add sleeps so that the UI will fulfill some condition, such as a page loading or an element appearing or becoming enabled.
When you use Thread.sleep() you need to set it to the maximum time so you can be reasonably sure the condition is met (and even then you can't be sure under "stress" situations - network load etc.). So you end up wasting time and resources, which makes your automation less scalable and less agile (another hazard for some...).
So I would say that if you can't express your wait as a condition, Thread.sleep() is fine (though less readable); otherwise it's simply bad practice.
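To make the trade-off concrete, here is a minimal sketch (assuming Selenium 4; the element id is hypothetical) contrasting a hard wait with an explicit wait for the same element:

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExamples {

    // Hard wait: always blocks for the full 3 seconds, whether the element
    // appeared after 200 ms or never appears at all.
    static WebElement withHardWait(WebDriver driver) throws InterruptedException {
        Thread.sleep(3000);
        return driver.findElement(By.id("submit"));
    }

    // Explicit wait: polls the condition and returns as soon as it is met,
    // spending the full 10 seconds only in the failure case.
    static WebElement withExplicitWait(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(ExpectedConditions.elementToBeClickable(By.id("submit")));
    }
}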

Making Selenium Tests Stable

There is an issue I have with automation. Not that I disagree that it is absolutely the best way to test an application, but achieving stability is extremely hard to do. How do you make your tests stable? Of course I add explicit waits and the occasional Thread.sleep(), but they are never 100% stable. The point of this thread is to collect tips and tricks that you have discovered that have made an impact on your automated tests.
You should try to avoid using Thread.sleep(); once you get to the point of having a big test suite, you will waste a lot of time just waiting. Instead, learn how to use explicit and implicit waits.
Community experts recommend using explicit waits more often; they allow you to wait for a specific condition to be met, and once it is, WebDriver continues without wasting any more time.
There are also some more advanced tips and tricks written by Mark Collin in the book Mastering Selenium WebDriver.
Sometimes you can fail a step on purpose, catch the exception, and then make a decision based on it; learn how to use try/catch. I don't think it is a good practice, but I have seen test engineers (including myself) use it a lot.
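As a rough illustration of that pattern (a hypothetical helper, not code from the original post), an expected failure can be caught on purpose and turned into a yes/no decision:

import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;

public class ElementChecks {

    // Returns true if the element exists right now, false otherwise; the
    // NoSuchElementException is caught deliberately and becomes a decision
    // instead of a test failure.
    static boolean isPresent(WebDriver driver, By locator) {
        try {
            driver.findElement(locator);
            return true;
        } catch (NoSuchElementException e) {
            return false;
        }
    }
}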
I would recommend looking at Selenide: if you don't want to go through the effort of figuring out how to make your own framework stable, you can just use the Selenide framework to get yourself going, and then you won't need to worry about waits any more.
Of course, there is some value in creating your own framework, especially if you're doing test-driven development and want to unit test your framework for Sonar code coverage. But if you're not at that level, using Selenide is what I would recommend for the biggest impact on your success.
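For a flavour of what that looks like, here is a minimal sketch (the URL and selectors are hypothetical); Selenide's element calls poll for their condition automatically, so no explicit waits are needed:

import static com.codeborne.selenide.Condition.visible;
import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.open;

public class SelenideSketch {

    static void logIn() {
        // open() starts and manages the browser for you.
        open("https://example.com/login");
        // shouldBe(visible) keeps polling until the condition holds
        // (4 seconds by default), replacing sleeps and WebDriverWait.
        $("#username").shouldBe(visible).setValue("demo");
        $("#password").setValue("secret");
        $("#login-button").click();
    }
}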

Need of Integration testing

We have an Eclipse UI on the frontend and a non-Java-based backend.
We generally write Unit tests separately for both frontend and backend.
Also we write PDE tests which runs Eclipse UI against a dummy backend.
My question is: do we need integration tests which test end to end?
One reason I can see these integration tests being useful is that when I upgrade my frontend/backend, I can run the end-to-end tests and find defects.
I know these kinds of questions depend on the particular scenario, but I would like to know the general best practice followed here.
cheers,
Saurav
As you say, the best approach is dependent on the application. However, in general it is a good idea to have a suite of integration tests that can test your application end to end, to pick up any issues that may occur when you upgrade only one layer of the application without taking those changes into account in another layer. This sounds like it would definitely be worthwhile in your case, given that you have system components written in different languages, which naturally creates more chance of issues arising due to the added complexity around the component interfaces.
One thing to be aware of when writing end-to-end integration tests (which some would call system tests) is that they tend to be quite fragile when compared to unit tests, which is a combination of a number of factors, including:
They require multiple components to be available for the tests, and for the communication between these components to be configured correctly.
They exercise more code than a unit test, and therefore there are more things that can go wrong that can cause them to fail.
They often involve asynchronous communication, which is more difficult to write tests for than synchronous communication.
They often require complex backend data setup before you can drive tests through the entire application.
Because of this fragility, I would advise trying to write as few tests as possible that go through the whole stack - the focus should be on covering as much functionality as possible in the fewest tests possible, with a bias towards your most important functional use-cases. A good strategy to get started would be:
Pick one key use-case (ideally one that touches as many components in the application as possible), and work on getting an end-to-end test for it. Focus on making this test as realistic as possible (i.e. use a production-like deployment), as reliable as possible, and as automated as possible (ideally it should run as part of continuous integration). Even just having this single test brings a lot of value.
Build out tests for other use-cases one test at a time, again focusing on your most important use-cases at first.
This approach will help to ensure that your end-to-end tests are of high quality, which is vital for their long-term health and usefulness. Too many times I have seen people try to introduce a comprehensive suite of such tests to an application, but ultimately fail because the tests are fragile & unreliable, people lose faith in them, don't run or maintain them, and eventually they forget they even had the tests in the first place.
Good luck and have fun!

New WebDriver instance per test method?

What's the best practice for creating WebDriver instances in selenium-webdriver? Once per test method, per test class, or per test run?
They seem to be rather (very!) expensive to spin up, but keeping one open between tests risks leaking information between test methods.
Or is there an alternative - is a single webdriver instance a single browser window (excluding popups), or is there a method for starting a new window/session from a given driver instance?
Thanks
Matt
I've found that reusing browser instances between test methods has been a huge time saver when using real browsers, e.g. Firefox. When running tests with HtmlUnitDriver, there is very little benefit.
Regarding the danger of indeterministic tests, it's a trade-off between totally deterministic tests and your time. Integration tests often involve trade-offs like these. If you want totally deterministic integration tests you should also be worrying about clearing the database/server state in between test runs.
One thing that you definitely should do if you are going to reuse browser instances is to clear/store the cookies between runs.
driver.manage().deleteAllCookies();
I do that in a tearDown() method. Also, if your application stores any data on the client side, you'd need to clear that too (maybe via JavascriptExecutor). To the application under test, it should look like a completely unrelated request after doing this, which really minimizes the risk of indeterministic behaviour.
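Pulled together, a per-class lifecycle along those lines could look like this (a sketch assuming JUnit 5, Selenium 4 and Firefox; clearing web storage via JavascriptExecutor only works once a page from your application has been loaded):

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class ReusedBrowserTest {

    // One browser for the whole class: spun up once, quit once.
    private static WebDriver driver;

    @BeforeAll
    static void startBrowser() {
        driver = new FirefoxDriver();
    }

    @AfterEach
    void resetClientState() {
        // Wipe cookies so the next test looks like a fresh session ...
        driver.manage().deleteAllCookies();
        // ... and clear any client-side storage the application uses.
        ((JavascriptExecutor) driver)
                .executeScript("window.localStorage.clear(); window.sessionStorage.clear();");
    }

    @AfterAll
    static void stopBrowser() {
        driver.quit();
    }
}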
If your goal of automated integration testing is to have reproducible tests, then I would recommend a new webdriver instance for every test execution.
Each test should stand alone, independent from any other test or side-effects.
Personally, the only thing I find more frustrating than a hard-to-reproduce bug is a non-deterministic test that you don't trust.
(This becomes even more crucial for managing the test data itself, particularly when you look at tests which can modify persistent application state, like CRUD operations.)
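In practice that just means moving the driver lifecycle down to the method level, e.g. (a sketch assuming JUnit 5 and Firefox):

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class IsolatedBrowserTest {

    private WebDriver driver;

    @BeforeEach
    void startBrowser() {
        // A brand-new browser for every test method: nothing can leak
        // from one test to the next.
        driver = new FirefoxDriver();
    }

    @AfterEach
    void stopBrowser() {
        driver.quit();
    }
}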
Yes, the additional test execution time is costly, but it is better than spending that time debugging your tests.
One possible way to help offset this penalty is to roll your testing directly into your build process, going beyond a continuous build to a Continuous Integration approach.
Also try to limit the scope of your integration tests. If you have a lot of heavy integration tests, eating up execution time, try to refactor. Instead, increase the coverage of your more lightweight unit tests of the underlying service calls (where your business logic is).