Rerun Failed Behat Tests Only a Set Number of Times

I have a Behat test suite running through Travis CI on pull requests. I know that you can add a --rerun option to the command line to rerun failed tests, but for me Behat just keeps trying to rerun the failed tests, which eventually times out the test session.
Is there a way to limit the number of times that failed tests are re-ran? Something like: "behat --rerun=3" for trying to run a failed scenario up to three times?
The only other way I can think of to accomplish this is to store the test failures and the number of times they have been run, either in the database I'm testing Behat against or in a file.
EDIT:
Locally, running the following commands ends up re-running only the one test I purposely made to fail, but it does so in a loop until something happens. Sometimes it ran 11 times and sometimes 100+ times.
behat --tags @some_tag
behat --rerun
So that doesn't match what the Behat command line guide states. In my terminal, the help option gives me "--rerun Re-run scenarios that failed during last execution." without any mention of the failed scenario file. I'm using a 3.0 version of Behat, though.
Packages used:
"behat/behat": "~3.0",
"behat/mink": "~1.5",
"behat/mink-extension": "~2.0",
"behat/mink-goutte-driver": "~1.0",
"behat/mink-selenium2-driver": "~1.1",
"behat/mink-browserkit-driver": "~1.1",
"drupal/drupal-extension": "~3.0"
Problem:
Tests fail at random, mainly due to Guzzle timeout errors when a GET request takes longer than 30 seconds. Sure, I could try bumping up the max execution time, but other tests have no issues, 30 seconds is already a long time to wait for a request, so I don't think that will fix the issue, and it would make my test runs much longer for no good reason.

Is it possible that you don't have an efficient CI setup?
I think that this option should run the failed tests only once, so maybe your setup is entering a loop.
If you need to re-run the failed scenarios a set number of times, you could write a wrapper script that checks the result and calls --rerun, capped at a maximum number of attempts.
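A minimal sketch of such a wrapper, assuming a Composer install with the binary at vendor/bin/behat; the tag name and retry limit are placeholders:

#!/usr/bin/env bash
# Run the suite once, then retry only the recorded failures a bounded number of times.
MAX_RETRIES=3

vendor/bin/behat --tags "@some_tag"
status=$?

for (( i = 1; i <= MAX_RETRIES && status != 0; i++ )); do
    echo "Re-running failed scenarios (attempt ${i}/${MAX_RETRIES})"
    vendor/bin/behat --rerun
    status=$?
done

exit $status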
Be aware that, as I see in the Behat CLI guide, if no scenario is found in the file where the failed scenarios are saved, then all scenarios will be executed.
I don't think that using --rerun in CI is good practice; you should do a high-level review of the results before deciding to do a rerun.
I checked the rerun on Behat 3 today and it seems there might be a bug related to the rerun option; I saw some related pull requests on GitHub.
One of them is https://github.com/Behat/Behat/pull/857
Regarding the timeout, you can check whether Travis has a timeout you can configure and whether it has enough resources, and you can run the same steps from your desktop against the same test environment to see the difference.
Also, set CURLOPT_TIMEOUT for Guzzle to a value high enough to pass, as long as it isn't exaggerated; otherwise you will need to find another solution to improve the execution speed.
A higher value should not be much of an issue, because this is a conditional wait: if the request is faster it will not affect the execution time, and only the problematic scenarios will wait longer.
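If the slow requests come from the Goutte (Guzzle-backed) Mink session, the timeout can usually be raised in behat.yml rather than in code. The exact keys depend on which Guzzle version your mink-goutte-driver pulls in, so treat the guzzle_parameters layout below as an unverified sketch for the Guzzle 3 era driver and check it against your MinkExtension version:

# behat.yml (sketch; verify the parameter names for your driver version)
default:
  extensions:
    Behat\MinkExtension:
      goutte:
        guzzle_parameters:
          curl.options:
            CURLOPT_TIMEOUT: 60   # allow slow GETs more than the default 30s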

Related

Is it possible to reserve a GitLab runner for all jobs/stages of a pipeline?

I'm using GitLab pipelines to run e2e tests on various physical machines (these machines are connected to the test hardware in a 1 to 1 relation). On each machine, a GitLab runner is installed. The pipeline consists of three major parts:
prepare the test hardware (deploy, configure)
execute the e2e tests (on the test hardware)
clean up the test hardware
Currently I'm doing all of this in one job, by using the before_script, script and after_script keywords. But I would like to use multiple jobs (or even stages) for this.
The problem I'm facing is that I can't be sure that all jobs/stages are executed on the same runner. So it might happen that the prepare step is executed on runner1 and the execute step on runner2 (possibly even in parallel), which is obviously not what I want. The preparation is more than just creating artifacts, so I can't simply hand it over to the next job.
Tags also don't seem to solve this, because a tag can only be specified per job, not for multiple jobs or for a complete stage.
I understand that this is not the way how runners are used normally, but I still wonder if there is a way to achieve this.
Or can someone point out another approach to solve this?
I'm using GitLab Community Edition 14.3.2.
I think you have two options for how you can split this up:
As sytech mentioned, you can tag each machine with machine-1, machine-2, etc., which allows you to make your jobs sticky to a particular runner. Since you can use variables in runner tags, you could have a job at the start that checks which runner is not currently running tests and sets RUNNER_TAG (or something similar) to that runner, so you don't have to hardcode your jobs to a single box. (See the sketch after these options.)
Alternatively, don't have the test boxes run the jobs directly (presumably you're using a shell runner to do this today); instead, use SSH or WinRM to access each box and modify it from there. Then the state of your runner doesn't matter at all. This is likely the "cleaner" way to do it, since your test boxes don't have to share resources or state with the runner.
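A rough .gitlab-ci.yml sketch of the first option; it assumes runners tagged machine-1, machine-2, and so on, and the stage names, RUNNER_TAG value and helper scripts are placeholders:

stages:
  - prepare
  - test
  - cleanup

variables:
  RUNNER_TAG: "machine-1"        # could be set dynamically by an earlier job

prepare_hardware:
  stage: prepare
  tags:
    - $RUNNER_TAG
  script:
    - ./deploy_and_configure.sh  # hypothetical helper script

e2e_tests:
  stage: test
  tags:
    - $RUNNER_TAG
  script:
    - ./run_e2e_tests.sh         # hypothetical helper script

cleanup_hardware:
  stage: cleanup
  tags:
    - $RUNNER_TAG
  when: always                   # clean up even if the tests failed
  script:
    - ./cleanup.sh               # hypothetical helper script

Because every job carries the same tag, all three stages land on the same machine's runner.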

How to run TestCafe tests in parallel in CI by specifying the metadata

As far as I know, TestCafe's default behaviour is to run tests in parallel.
Indeed, the browsers function accepts an array of browsers (which is cool).
What I would like to do, however, is quite different. I have fixtures based on areas of my portal (search, payment, etc.), and I'd like to know whether it's possible to run these tests in parallel from the CLI, since they are orthogonal.
The goal, of course, is to improve the execution time as the number of test cases grows.
On the other hand, I'd also like to catch failures, meaning that if a test run in parallel under a specific metadata filter fails, we would possibly like to stop the others too.
I am not using TestCafe's Docker image but our own custom one with just Firefox and Chrome installed, and we launch the tests in headless mode.
As a last point, it would be great if we could run these scenarios/metadata filters in parallel but somehow gather the reports together at the end of the test suite.
I understand the question is not easy, especially because it involves both TestCafe and GitLab CI, but probably someone else has faced this problem too.
Thank you
If I understand you correctly, the behaviour you describe can be achieved by dividing the test execution among multiple CI jobs. For example, each CI job can test a particular area of your portal: run TestCafe with the metadata of the relevant fixtures/tests specified. Also, most CI systems allow you to cancel all other jobs in a pipeline if one of the jobs fails (unfortunately, GitLab hasn't released this feature yet).
On the other hand, you can use TestCafe's programmatic API: create multiple TestCafe runners, each running the desired subset of tests. However, at the end of the test execution, you'll need to merge the generated reports into one report manually. Check this answer to get an idea of how to create multiple runners.
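A rough JavaScript sketch of that programmatic approach, assuming fixtures carry metadata like { area: 'search' }; the test path, browser list and area names are placeholders:

const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe('localhost');

    // One runner per portal area; the filter keeps only fixtures whose
    // metadata matches the given area.
    const runArea = area =>
        testcafe
            .createRunner()
            .src('tests/')
            .filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) =>
                fixtureMeta.area === area)
            .browsers(['chrome:headless', 'firefox:headless'])
            .run();

    try {
        // Run the orthogonal areas concurrently; run() resolves to the
        // number of failed tests for that runner.
        const failures = await Promise.all(['search', 'payment'].map(runArea));
        process.exitCode = failures.some(count => count > 0) ? 1 : 0;
    } finally {
        await testcafe.close();
    }
})();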

How to wait till all components are loaded in Cypress?

My Cypress test cases work fine when I run them from my own system pointing to QA. But the scheduled builds in CI fail randomly because sometimes the page takes more time to load.
I've tried cy.wait(1500); it sometimes works and sometimes fails. So I was wondering: is there a command in Cypress that waits until all components on the page are loaded, instead of me trying different values inside cy.wait() that will eventually fail anyway?
By default, Cypress has smart waits for all elements to load and the page to render. This section confirms that: https://docs.cypress.io/guides/core-concepts/introduction-to-cypress.html#Cypress-is-Not-Like-jQuery
Instead of inserting cy.wait() at each and every step (best practice is to minimise its use), you can set maximum timeouts in your cypress.json file:
"defaultCommandTimeout": 30000,
"pageLoadTimeout": 60000,
"requestTimeout": 60000,
If it still fails, then something is wrong with your test environment, and it might be a good time for a dev to check its performance; you don't want an environment that takes that long to load, especially if it is a replica of the production/live site.
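As an alternative to a fixed cy.wait(), you can assert on something that only appears once the page has finished rendering; the route and selector below are hypothetical examples:

// Wait up to 30s for a marker element instead of sleeping for a fixed time.
cy.visit('/dashboard');
cy.get('[data-cy="dashboard-loaded"]', { timeout: 30000 }).should('be.visible');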

How to Skip the passed test cases in Cucumber-QAF setup

I have a project where I run 100 scenarios every day. After the run completes, I update the pass/fail results in an Excel sheet through listeners. I'm looking for a solution where, if I run the test suite again, the passed test cases are skipped and only the failed ones run. I don't want to use retry. I tried using SkipException in the beforeInvocation listener method, but the passed test cases still execute. How can I skip the passed test cases and execute only the failed ones through listeners?
Before the start of each scenario, it should go to the listener and check the Excel sheet to see whether the scenario passed or failed. If it passed, the scenario should be skipped.
Any help will be greatly appreciated.
Update: I am able to do it through listeners with SkipException, but in my report the test shows as failed rather than skipped.
When you run BDD tests, QAF generates a configuration file named testng-failed-qas.xml under the reports dir. You should use that config file to run only the failed scenarios.
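For example, with a Maven build you can point Surefire at that generated file; the path below is a placeholder, so use wherever your reports dir actually lives:

mvn test -Dsurefire.suiteXmlFiles=test-results/testng-failed-qas.xml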

Forcing integration tests to run one at a time in a jenkins pipeline

I have a small collection of integration tests in a class that use Selenium. The idea is that these tests run every time there is a merge to the codebase, with the merge proceeding through the pipeline and a series of tests running against the new code.
The thing is, these Selenium tests have to run one at a time. They use the browser to log into a website, and the account simply logs out if more than one session tries to log in at once, so the test will obviously fail; that's why these tests need to run one at a time. I've tried the @NotThreadSafe annotation, which doesn't seem to have changed anything, and I've searched for some sort of switch or parameter that defines how many tests run at once, with no luck. These tests use JUnit 4.12.
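If the parallelism comes from the build tool rather than JUnit itself (JUnit 4.12 runs tests sequentially by default), one common approach, assuming a Maven build with the Failsafe plugin, is to pin the integration tests to a single reused fork so the Selenium classes never overlap; treat this as a sketch to adapt to your build:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <!-- one JVM, reused for every test class; with no parallel setting the
         classes and methods run strictly one after another -->
    <forkCount>1</forkCount>
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>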