Is there a way to control how tests are distributed with pytest --dist loadfile and loadscope - selenium

I am trying to figure out whether there is a way to control/understand how tests are distributed between the different workers when I use the --dist=loadfile or --dist=loadscope feature.
The structure of my project is
tests/
    tests_a.py
    ...
    tests_h.py
Each test module has a varying number of tests defined in it. Let's say I have a particular test in tests_b.py that I want to run at the end of the test suite, and I have marked it with @pytest.mark.last.
I have also implemented a mechanism in that particular test so that, if it is picked up by a worker while other workers are still running tests, it waits until they are all done (via a --dashboard function that tracks how many tests are still untested).
The problem I am facing is that the tests are sometimes assigned to workers in such a way that all the other workers finish as expected, but the worker handling tests_b.py (which contains the tearDown() I want to run at the very end) runs that test even though it still has other modules to finish (say tests_f.py is assigned to the same worker). All the other workers complete and shut down, but this worker runs the tearDown(), which waits for all the other tests to complete, and it gets stuck in a loop because the remaining untested tests are assigned to this same worker and never get executed. Is there a way to make this particular test run after all of the other tests assigned to its worker?
Thank you

Related

How to run TestCafe tests in parallel in CI by specifying the metadata

As far as I know, TestCafe's default behaviour is to run tests in parallel.
Indeed, the browsers function accepts an array of browsers (which is cool).
What I would like to do, however, is quite different. I have fixtures based on areas of my portal (search, payment, etc.), so I'd like to know whether it's possible to run these tests from the CLI in parallel, since they are orthogonal.
The goal, of course, is to improve execution time as the number of test cases grows.
On the other hand, I'd also like to catch failures, meaning that if a test run in parallel under a specific metadata filter fails, we would possibly like to stop the others too.
I am not using TestCafe's Docker image but a custom one with just Firefox and Chrome installed, and we launch the tests in headless mode.
Finally, it would be great if we could run these scenarios/metadata filters in parallel but somehow gather the reports together at the end of the test suite.
I understand the question is not easy, especially because it involves both TestCafe and GitLab CI, but probably someone else has faced this problem too.
Thank you
If I understand you correctly, the behavior you described can be achieved by dividing the test execution among multiple CI jobs. For example, each CI job can test a particular area of your portal. To do that, run TestCafe with the metadata of your fixture/test specified. Also, most CI systems allow you to cancel all other jobs in a pipeline if one of the jobs fails (unfortunately, GitLab hasn't released this feature yet).
On the other hand, you can use TestCafe's programmatic API: create multiple TestCafe runners, each running the desired subset of tests. However, at the end of the test execution, you'll need to merge generated reports into one report manually. Check this answer to get an idea of how to create multiple runners.

Forcing integration tests to run one at a time in a jenkins pipeline

I have a small collection of integration tests in a class that use Selenium. The idea is that these tests run every time there is a merge to the codebase, with the merge proceeding through the pipeline and a series of tests running against the new code.
The thing is, these Selenium tests have to run one at a time. They use the browser to log into a website, and if more than one test tries to log into the account at once, the account just logs out and the test obviously fails, so I need these tests to run one at a time. I've tried using the @NotThreadSafe annotation, which doesn't seem to have changed anything, and I've searched for some sort of switch or parameter that defines how many tests run at once, with no luck. These tests are using JUnit 4.12.
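This isn't from the thread, but one pattern that forces such test classes to run strictly one at a time when they are executed on multiple threads within the same JVM is to guard each class with a shared lock via a JUnit 4 @ClassRule. A minimal sketch with hypothetical names (it only helps if the classes share a JVM, not across forked processes):

import org.junit.ClassRule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

import java.util.concurrent.locks.ReentrantLock;

public class SerialLoginTest {

    // One lock shared by every test class that logs into the shared account.
    // In practice the lock (or the whole rule) would live in a shared helper
    // class referenced by all affected test classes.
    private static final ReentrantLock ACCOUNT_LOCK = new ReentrantLock();

    // Acquired before the first test in this class runs, released after the last one,
    // so two guarded classes never use the account at the same time.
    @ClassRule
    public static final ExternalResource SERIAL_GUARD = new ExternalResource() {
        @Override
        protected void before() {
            ACCOUNT_LOCK.lock();
        }

        @Override
        protected void after() {
            ACCOUNT_LOCK.unlock();
        }
    };

    @Test
    public void logsInWithoutBeingKickedOut() {
        // Selenium steps that log into the website go here.
    }
}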

Selenium grid runs out of free slots

I have a large suite of SpecFlow tests executing against a Selenium grid running locally. The grid has a single host configured for a maximum of 10 Firefox instances. The tests are run from NUnit serially, so I would only expect to require a single session at a time.
However, when approximately half of the test cases have been run, the console window reporting output from the hub starts reporting
INFO: Node host [url] has no free slots
Why?
All the test cases are associated with a TearDown method that closes and disposes the WebDriver, although I haven't verified that absolutely every test gets to this method without failing. I would expect a maximum of one session to be active at once. How can I find out what is stopping the host from recycling those sessions?
edit #1:
I think I've narrowed down the cause of the issue - it is indeed to do with not closing the WebDriver. There are [AfterScenario] attributes on the teardown methods that are meant to do this, but they only match a subset of scenarios as they have parameters on them. Removing the parameter so that the teardown associates with every scenario fixes the session exhaustion (or seems to) but there are some tests that expect to reacquire an existing session, so I'll have to fix them separately.
A bit of background: This test suite was inherited as part of a 'complete' solution and it's been left untouched and never run since delivery. I'm putting it back into service and have had to discover its quirks as I go - I didn't write any of this. I've had brief encounters with both Selenium and SpecFlow but never used the two together.
The issue turned out to be a facepalm-level fail - mostly in the sense that I didn't spot it. Some logging code was trying to write to a file that wasn't there, the thrown exception bypassed the call to Dispose() on the WebDriver, and was then swallowed with no error reporting. Therefore the sessions were hanging around. Removing the logging code fixed the session exhaustion.
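For what it's worth, the underlying pattern is simply to make the driver cleanup unconditional. The thread's actual code is C#/SpecFlow and isn't shown, so this is only a language-agnostic sketch in Java of putting the quit/dispose call in a finally block so a logging failure can't skip it:

import org.openqa.selenium.WebDriver;

public class ScenarioTeardownSketch {

    // Driver creation is omitted here; against a grid it would be a RemoteWebDriver
    // pointed at the hub, which is what occupies a slot on the node.
    static void runScenario(WebDriver driver) {
        try {
            // ... test steps, logging, screenshots - anything that might throw ...
        } finally {
            driver.quit(); // always runs, so the session (and grid slot) is released
        }
    }
}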
Look on the node (remote desktop) and see what is happening on the box. It does sound like your test isn't closing out its session properly.

Fitnesse: How to stop one test run when executing a suite

I am using Fitnesse 20130530 to execute a test suite that contains multiple tests. Most of my tests use script tables with SLIM to drive Selenium. I use a Stop Test Exception to stop the execution of a test when one of the method calls raises an exception. Unfortunately, this also stops the execution of the whole suite. Is there a way to just stop the current test and then continue execution with the next test in the suite?
Not in FitNesse itself, but you can build it into your fixtures.
When I had a similar problem I was able to solve it using what we called "fail fast" mode. This was a static variable that could be set to true under certain conditions (typically by an element not found exception or similar).
Our main driver was structured such that every call passed through one spot that could check that flag before calling the browserDriver. This would then skip the browserDriver calls until the test ended.
The next test would clear the flag and start up again.
You would need to manage the whole process, but it can work.
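A minimal Java sketch of that fail-fast idea, with made-up fixture and driver names (not the original code):

// Sketch only: BrowserFixture and BrowserDriver are hypothetical stand-ins.
public class BrowserFixture {

    // Shared flag: once true, the remaining browser calls in the current test are skipped.
    private static boolean failFast = false;

    private final BrowserDriver browserDriver = new BrowserDriver();

    // Called at the start of each test (e.g. from the fixture constructor) to reset the flag.
    public void startTest() {
        failFast = false;
    }

    public boolean click(String locator) {
        if (failFast) {
            return false; // skip the browser call instead of throwing a stop-test exception
        }
        try {
            browserDriver.click(locator);
            return true;
        } catch (RuntimeException e) { // e.g. element not found
            failFast = true;
            return false;
        }
    }
}

// Stand-in for the real Selenium wrapper, shown only so the sketch compiles.
class BrowserDriver {
    void click(String locator) {
        // Real Selenium interaction would go here.
    }
}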

Running Selenium Tests in Parallel

I have a Selenium test that takes 1 minute to complete. If I want to run it 1000 times, I have to wait about 16 hours. Is there any way I can run 5 tests in parallel so that it can be done in around 3 hours? I have generated a JUnit test script and tried to run it with multiple threads, but they end up using the same Firefox window. I don't want to run this on a grid because running 5 Firefox windows is not that resource intensive.
Thanks
Using the logic below, you can run your JUnit test classes in parallel.
// Run the listed test classes in parallel (classes in parallel, methods sequential).
Class<?>[] cls = { test1.class, test2.class, test3.class, test4.class };
JUnitCore.runClasses(new ParallelComputer(true, false), cls);
In the above call, the first parameter of ParallelComputer() controls whether test classes run in parallel and the second controls whether methods do. Here I'm running classes in parallel but not methods.
ParallelComputer Class documentation
http://junit-team.github.io/junit/javadoc/4.10/org/junit/experimental/ParallelComputer.html
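A self-contained version of that snippet (the nested test classes are trivial placeholders for your real Selenium tests) could look like this:

import org.junit.Test;
import org.junit.experimental.ParallelComputer;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;

public class ParallelRunner {

    // Placeholder test classes; substitute your own Selenium test classes here.
    public static class FirstTest {
        @Test
        public void scenarioA() { /* Selenium steps for one browser instance */ }
    }

    public static class SecondTest {
        @Test
        public void scenarioB() { /* Selenium steps for another browser instance */ }
    }

    public static void main(String[] args) {
        Class<?>[] cls = { FirstTest.class, SecondTest.class };
        // true = run classes in parallel, false = keep methods within a class sequential
        Result result = JUnitCore.runClasses(new ParallelComputer(true, false), cls);
        System.out.println("Failures: " + result.getFailureCount());
    }
}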
Try this example:
http://mycila.googlecode.com/svn/sandbox/src/main/java/com/mycila/sandbox/junit/runner/
The file to launch is MySuite.java. It works well for me.