IntelliJ/ScalaTest failing to start tests

I am running my PlaySpec tests, but I keep getting an error that 60 of the tests failed to start. What is surprising is the nesting of test cases.
Sorry, I don't have more information at the moment. What does the nested test case list (lower left corner) in the picture mean? Are these tests running in parallel?

The issue was that each test was starting its own instance of the database. After a point, the system gets overwhelmed.
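One common way to avoid this with ScalaTest + Play is to share a single application (and therefore a single database) across the whole suite, for example with GuiceOneAppPerSuite instead of GuiceOneAppPerTest. A minimal sketch, assuming a Guice-wired Play app; the spec name and the H2 configuration values are only illustrative:

```scala
import org.scalatestplus.play.PlaySpec
import org.scalatestplus.play.guice.GuiceOneAppPerSuite
import play.api.Application
import play.api.inject.guice.GuiceApplicationBuilder

// One Play application (and one database) shared by every test in the suite,
// instead of a fresh instance per test.
class UserRepositorySpec extends PlaySpec with GuiceOneAppPerSuite {

  // Illustrative in-memory configuration; replace with your own test settings.
  override def fakeApplication(): Application =
    new GuiceApplicationBuilder()
      .configure(
        "db.default.driver" -> "org.h2.Driver",
        "db.default.url"    -> "jdbc:h2:mem:test;MODE=PostgreSQL"
      )
      .build()

  "UserRepository" should {
    "see the suite-wide application" in {
      app.configuration.get[String]("db.default.driver") mustBe "org.h2.Driver"
    }
  }
}
```

If the suites themselves also run in parallel, each suite still gets its own application, so capping build-level parallelism (or pointing all suites at one external test database) may also be needed.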

Related

JUnit 5, parallel tests: logs getting mixed up

For my project I am using JUnit 5 for testing. As we know, when tests run in parallel the log output gets interleaved in an unpredictable order. Since my project is mostly backend work, it relies heavily on debugging through logs.
So, is there a way to reorder or group the logs by the test method that produced them? Has anyone here faced and resolved this issue, or does anyone have thoughts on it? It would be a great help. Thank you.
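One approach that does not depend on JUnit internals is to tag every log line with the test that produced it via SLF4J's MDC, and then group or filter on that tag afterwards. A sketch as a JUnit 5 extension, written in Scala to keep all the examples on this page in one language; the class name and the MDC key "test" are made up:

```scala
import org.junit.jupiter.api.extension.{AfterEachCallback, BeforeEachCallback, ExtensionContext}
import org.slf4j.MDC

// Stores the current test's display name in the logging MDC before each test
// and clears it afterwards, so every log line can carry the test's name.
class TestNameInMdc extends BeforeEachCallback with AfterEachCallback {
  override def beforeEach(context: ExtensionContext): Unit =
    MDC.put("test", context.getDisplayName)

  override def afterEach(context: ExtensionContext): Unit =
    MDC.remove("test")
}
```

Register it on the test classes with @ExtendWith(classOf[TestNameInMdc]) and add %X{test} to the Logback/Log4j pattern. Because the MDC is thread-local, this tags lines correctly as long as each test logs on its own thread, which is the usual case with JUnit 5's parallel execution.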

TFS Automated tests with multiple iterations show as passed even when the second iteration fails

I am working with a TFS 2017 environment with test agent 2015. Before this we had a TFS 2013 environment with test agent 2013 and MTM (that worked fine).
At the moment we have the following problem:
We run a set of around 40 tests, all of which have multiple iterations. If the first iteration fails, we see this in TFS: the test status is set to failed, which is perfect. However, if the first iteration succeeds and the second fails, the test case is set to passed in TFS. But if the second iteration fails, we want the whole test to be set to failed. The way it is now, it looks like almost all our tests pass, while in reality a lot of later iterations fail, which means we get false reporting.
When I open the .TRX file belonging to one machine, I can see which iterations failed and which succeeded.
So, the problem in a nutshell:
If the first iteration of a test passes and the second one fails, the whole test is set to Passed instead of Failed, which gives us false reporting.
I have absolutely no idea what we are doing wrong, but right now this gives us false information about our runs.
Is there anyone here who has experienced the same problem?
Any help would be really appreciated, as I have not been able to find any information about this subject on Google.
I have posted this on the Microsoft forum. They have answered that they can reproduce it, which means it is probably an issue in TFS/the test agent. More information can be found here:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/4a384376-feae-46a9-a3da-e4445bc905d8/tfs-automated-tests-with-multiple-iterations-show-as-passed-even-when-the-second-iteration-fails?forum=tfsgeneral
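Until the underlying TFS/test agent issue is fixed, one possible workaround is to post-process the .TRX files and treat a test as failed whenever any of its iterations failed, regardless of the outer outcome. A rough sketch in Scala using the scala-xml module; the element and attribute names (Results, UnitTestResult, InnerResults, testName, outcome) reflect how data-driven iterations usually appear in TRX files, so verify them against one of your own files first:

```scala
import scala.xml.XML

// Scan a .trx file and report every test whose iterations contain a failure,
// even if the top-level result is recorded as Passed.
object TrxIterationCheck extends App {
  val trx = XML.loadFile(args(0))

  for {
    result  <- trx \\ "Results" \ "UnitTestResult"            // top-level results
    name     = (result \ "@testName").text
    outer    = (result \ "@outcome").text
    failures = (result \ "InnerResults" \ "UnitTestResult")   // per-iteration results
                 .filter(r => (r \ "@outcome").text == "Failed")
    if failures.nonEmpty
  } println(s"$name: reported '$outer', but ${failures.size} iteration(s) failed")
}
```

A step like this can run right after the test task and fail the build whenever the report is non-empty.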

Rerun Failed Behat Tests Only A Set Number of Times

I have a Behat testing suite running through Travis CI on pull requests. I know that you can add a "--rerun" option to the command line to rerun failed tests, but for me Behat just keeps trying to rerun the failed tests, which eventually times out the test run session.
Is there a way to limit the number of times that failed tests are re-run? Something like "behat --rerun=3" for trying to run a failed scenario up to three times?
The only other way I can think of to accomplish this is to use the database I'm testing Behat against, or to write to a file, and store the test failures and the number of times they have been run.
EDIT:
Locally, running the following commands ends up re-running only the one test I purposely made to fail, and it does so in a loop until something happens. Sometimes it was 11 times and sometimes 100+ times.
behat --tags #some_tag
behat --rerun
So that doesn't match what the Behat command line guide states. In my terminal, the help option gives me "--rerun Re-run scenarios that failed during last execution." without any mention of a failed-scenario file. I'm using a 3.0 version of Behat, though.
Packages used:
"behat/behat": "~3.0",
"behat/mink": "~1.5",
"behat/mink-extension": "~2.0",
"behat/mink-goutte-driver": "~1.0",
"behat/mink-selenium2-driver": "~1.1",
"behat/mink-browserkit-driver": "~1.1",
"drupal/drupal-extension": "~3.0"
Problem:
Tests fail at random, mainly due to Guzzle timeout errors going past 30 seconds while trying to GET a URL. Sure, you could try bumping up the max execution time, but since the other tests have no issues and 30 seconds is already a long time to wait for a request, I don't think that would fix the issue, and it would make my test runs much longer for no good reason.
Is it possible that you don't have an efficient CI setup?
I think this option should run the failed tests only once, so maybe your setup is entering a loop.
If you need to run the failed scenarios a set number of times, maybe you should write a script that checks some condition and runs with --rerun (see the sketch after this answer).
Be aware that, as far as I can see in the Behat CLI guide, if no scenario is found in the file where the failed scenarios are saved, then all scenarios will be executed.
I don't think that using '--rerun' in CI is good practice. You should do a high-level review of the results before deciding to do a rerun.
I checked the rerun on Behat 3 today and it seems there might be a bug related to the rerun option; I saw some pull requests on GitHub.
One of them is https://github.com/Behat/Behat/pull/857
Related to the timeout, you can check whether Travis has a timeout setting and whether it has enough resources, and you can run the same steps from your desktop against the same test environment to see the difference.
Also, set CURLOPT_TIMEOUT for Guzzle to a value high enough for the requests to pass, as long as it is not too exaggerated; otherwise you will need to find another solution to improve execution speed.
A higher value should not be much of an issue because this is a conditional wait: if the request is faster it will not affect the execution time, and only the problematic scenarios will wait longer.
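To make the "write a script" suggestion above concrete: the retry cap can live in a small wrapper that re-invokes --rerun until the run is clean or the attempt budget is used up. A sketch in Scala (kept in the same language as the other examples on this page; a few lines of shell in .travis.yml would work just as well). The tag and the retry count are placeholders:

```scala
import scala.sys.process._

// Run the full suite once, then re-run only the failed scenarios at most
// maxRetries times, stopping as soon as a run exits cleanly.
object BehatRetry extends App {
  val maxRetries = 3

  var exitCode = "behat --tags @some_tag".!   // full run; non-zero exit if anything failed
  var attempt  = 0

  while (exitCode != 0 && attempt < maxRetries) {
    attempt += 1
    exitCode = "behat --rerun".!              // only the previously failed scenarios
  }

  sys.exit(exitCode)                          // this is the status Travis will see
}
```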

Selenium grid runs out of free slots

I have a large suite of SpecFlow tests executing against a Selenium grid running locally. The grid has a single host configured for a maximum of 10 Firefox instances. The tests are run from NUnit serially, so I would only expect to require a single session at a time.
However, when approximately half of the test cases have been run, the console window reporting output from the hub starts reporting
INFO: Node host [url] has no free slots
Why?
All the test cases are associated with a TearDown method that closes and disposes the WebDriver, although I haven't verified that absolutely every test gets to this method without failing. I would expect a maximum of one session to be active at once. How can I find out what is stopping the host from recycling those sessions?
edit #1:
I think I've narrowed down the cause of the issue - it is indeed to do with not closing the WebDriver. There are [AfterScenario] attributes on the teardown methods that are meant to do this, but they only match a subset of scenarios because they have parameters on them. Removing the parameters so that the teardown applies to every scenario fixes the session exhaustion (or seems to), but there are some tests that expect to reacquire an existing session, so I'll have to fix those separately.
A bit of background: This test suite was inherited as part of a 'complete' solution and it's been left untouched and never run since delivery. I'm putting it back into service and have had to discover its quirks as I go - I didn't write any of this. I've had brief encounters with both Selenium and SpecFlow but never used the two together.
The issue turned out to be a facepalm-level fail, mostly in the sense that I didn't spot it. Some logging code was trying to write to a file that wasn't there; the thrown exception bypassed the call to Dispose() on the WebDriver and was then swallowed with no error reporting, so the sessions were left hanging around. Removing the logging code fixed the session exhaustion.
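The general shape of that fix is worth spelling out: nothing the teardown does besides closing the browser should be able to skip the driver shutdown, so the quit/Dispose call belongs in a finally block. A minimal sketch of the pattern, written here in Scala against the Selenium Java bindings rather than the poster's SpecFlow/C# code:

```scala
import org.openqa.selenium.WebDriver

object ScenarioTearDown {
  // writeLog stands in for anything else the teardown does that might throw,
  // such as logging to a file that does not exist.
  def tearDown(driver: WebDriver)(writeLog: () => Unit): Unit =
    try writeLog()
    finally driver.quit()   // always release the grid slot, even if logging fails
}
```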
Look on the node (via remote desktop) and see what is happening on the box. It does sound like your test isn't closing out its session properly.

Cucumber + Capybara: when running a scenario in my feature file, only the background steps run while the scenario steps are ignored

After quite a few hours of searching for answers to no avail, along with trying to track down the issue myself within RubyMine, I am now resigning myself to asking a question about it...
When I run one of the scenarios in my feature file, or all of the scenarios, it only processes the background steps and then ignores all the steps within my scenario.
The stats at the end then report:
1 Scenario (1 Failed)
4 Steps (3 Skipped, 1 Passed)
So no steps failed! I have verified that the scenario works on another machine and passes successfully. Does anyone have an idea why it would just be ignoring my scenario steps?
Thank you in advance
I have actually managed to fix this problem myself!!! :)
In the javascript_emulation.rb file there is a known issue around Capybara and RackTest; the workaround and easy fix is to remove ::Driver after Capybara in the JavaScript emulation bits.
If none of the ::Driver entries are removed, the following error is returned:
undefined method 'click' for class 'Capybara::Driver::RackTest::Node' (NameError)
followed by a list of the problem areas in different files.
If ::Driver is removed only from the class Capybara::Driver::RackTest::Node,
then the tests will run but only the background steps are processed.
All instances of ::Driver must be removed in this file. For me there were 4 in total.
Hope this helps others :)