I run my tests on Bamboo with Selenium, and the Tests tab does not show the test failure. How can I view the test failure?
To see test results (including which test caused the failure), you need to add the appropriate parser task to your job. Several parser tasks are available, e.g. JUnit Parser, NUnit Parser, and TestNG Parser.
The important thing is to move the parser task under the Final tasks bar, so that the parser executes even if a previous task fails.
You don't have any test failures. In fact, no tests were run in this build at all, as you can see in the line
0 test in total
Your job may have failed for one of two reasons:
Actual failure of the job. Check the detailed log on the Logs tab to see if this is the case.
It may also have failed because you configured it to fail when there are no tests (and in this case, as the line above shows, there were indeed no tests).
As far as I know, TestCafe's default behaviour is to run tests in parallel.
Indeed, the browsers function accepts an array of browsers (which is cool).
What I would like to do, however, is quite different. I have fixtures based on areas of my portal (search, payment, etc.), and I'd like to know whether it's possible to run these tests from the CLI in parallel, since they are orthogonal.
The goal, of course, is to improve the execution time as the number of test cases grows.
On the other hand, I'd also like to catch failures, meaning that if a test run in parallel under a specific metadata filter fails, we would probably like to stop the others too.
I am not using TestCafe's Docker image but a custom one with just Firefox and Chrome installed, and we launch the tests in headless mode.
As a last point, it would be great if we could run these scenarios/metadata filters in parallel but somehow gather the reports together at the end of the test suite.
I understand the question is not easy, especially because it involves both TestCafe and GitLab CI, but perhaps someone else has faced this problem too.
Thank you
If I understand you correctly, the behavior you described can be achieved by dividing the test execution among multiple CI jobs. For example, each CI job can test a particular area of your portal. To do that, run TestCafe with the metadata of your fixture/test specified. Also, most CI systems allow you to cancel all other jobs in a pipeline if one of the jobs fails (unfortunately, GitLab hasn't released this feature yet).
On the other hand, you can use TestCafe's programmatic API: create multiple TestCafe runners, each running the desired subset of tests. However, at the end of the test execution, you'll need to merge the generated reports into one report manually. Check this answer to get an idea of how to create multiple runners; a rough sketch follows below.
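For illustration, here is a minimal TypeScript sketch of that multiple-runner idea, not an official recipe: the host/ports, the tests/**/*.ts glob, the list of portal areas, the per-area JSON report file names, and the "area" fixture metadata key are all placeholders you would adapt to your project.

import createTestCafe from 'testcafe';
import { createWriteStream } from 'fs';

async function run(): Promise<void> {
    // One TestCafe server instance, several runners started in parallel.
    const testcafe = await createTestCafe('localhost', 1337, 1338);
    const areas = ['search', 'payment'];    // hypothetical portal areas

    try {
        const failedCounts = await Promise.all(areas.map(area => {
            // One JSON report per area; merging them is a separate, manual step.
            const reportStream = createWriteStream(`report-${area}.json`);

            return testcafe
                .createRunner()
                .src(['tests/**/*.ts'])
                .browsers(['chrome:headless', 'firefox:headless'])
                // Keep only fixtures tagged with .meta({ area: '...' }).
                .filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) =>
                    fixtureMeta.area === area)
                .reporter('json', reportStream)
                .run();
        }));

        // Fail the CI job if any area reported failing tests.
        process.exitCode = failedCounts.some(count => count > 0) ? 1 : 0;
    }
    finally {
        await testcafe.close();
    }
}

run();

Note that stopping the remaining runners as soon as one of them fails is not something this sketch provides; you would need to react to the first non-zero result yourself (for example by closing the TestCafe instance), or rely on the CI-level job cancellation mentioned above.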
I want to exit a test inside a test case, and I do not want the report to count the test case (TC003) that exits. In the example below:
*** Test Cases ***
TC001
    Run Keyword If    '1'=='1'    Log To Console    xx

TC002
    Run Keyword If    '2'=='2'    Log To Console    xx

TC003
    Run Keyword If    '3'!='3'    Exit Test
How can I do this? Can you guide me?
I don't think there's anything you can do. Once the test starts running, there's no way to remove it from the reports and logs.
It sounds like you're trying to skip a test under certain circumstances. If so, you will be able to mark a test as skipped (versus pass/fail) in Robot Framework 4.0, though it will still show up in the logs and reports.
If you really don't want it in the reports, you can write a script that removes the tests from the output.xml file and then regenerates the html logs and reports using rebot.
I have a project where I run 100 scenarios every day. After the run completes, I update the pass/fail results in an Excel sheet through listeners. I'm looking for a solution where, if I run the test suite again, the passed test cases are skipped and only the failed test cases run. I don't want to use retry. I tried throwing SkipException in the beforeInvocation listener method, but the passed test cases still execute. How can I skip the passed test cases and execute only the failed ones through listeners?
Every time, before a scenario starts, it should go to the listener and check the Excel sheet to see whether the scenario passed or failed. If it passed, the scenario should be skipped.
Any help will be greatly appreciated.
Update: I am able to do it through listeners with SkipException, but in my report the test shows as failed rather than skipped.
When you run BDD tests, QAF generates a configuration file named testng-failed-qas.xml under the reports directory. You should use that config file to run only the failed scenarios.
I'm running 11 test scenarios on 3 different systems in parallel:
S1: Win7, Firefox 46.0
S2: Win10, Chrome 58.0
S3: Mac, Safari 9.0
After completion, I can see the test failures in the TestNG report, but I can't track on which system the scenario failed.
Is there any way I can track on which system or environment a test failed?
How do you execute the test cases? Do you run them in your build with a CI system, or from an IDE?
The Selenium wiki (https://github.com/SeleniumHQ/selenium/wiki/Grid2) describes how to pass capabilities to the grid. You could keep them as String variables and look up their values when a test fails.
Maybe this could help you?
Using TestNG, this can be very easy: just put the browser name as a parameter into a data provider and print it in your stack trace. It can be shortened, e.g. "ch" for Chrome or "ff" for Firefox.
A control variable like this can be useful if you decide to run a test case in another browser tomorrow.
So my typical workflow is
I write a data driven test using TestNG in IntelliJ.
I supply hundreds of data items
Run the test and one or two of them fail
I see the list of passed/failed tests in the "Run" pane.
I would like the ability to just right-click that "instance" of the test and run that test alone (with breakpoints). Currently, IntelliJ does not seem to have that feature; I would have to right-click the test, and when I run it, it runs the whole set of tests with hundreds of data points.
Is this possible?
TestNG supports this at the testng.xml level, where you can specify which indices of your data provider should be used. It's called "invocation-numbers", and you can see what it looks like by running a test with a data provider, failing some of its invocations, and looking at the testng-failed.xml that gets generated.
Back to your question: your IDE needs to support this feature in order to make it available in the UI, so I suggest you ask on the IDEA forums.
The feature has been added as of IntelliJ 142.1217: https://youtrack.jetbrains.com/issue/IDEA-57906