So my typical workflow is:
I write a data-driven test using TestNG in IntelliJ.
I supply hundreds of data items.
I run the test, and one or two of them fail.
I see the list of passed/failed tests in the "Run" pane.
I would like the ability to just right-click that "instance" of the test and run it alone (with breakpoints). Currently IntelliJ does not seem to have that feature: I have to right-click the test, and when I run it, the whole set of tests with hundreds of data points runs.
Is this possible?
TestNG supports this at the testng.xml level, where you can specify which indices of your data provider should be used. The feature is called "invocation-numbers", and you can see what it looks like by running a test with a data provider, failing some of its invocations, and looking at the testng-failed.xml that gets generated.
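For illustration, here is roughly what such a generated testng-failed.xml looks like (class and method names below are placeholders); the invocation-numbers attribute on an include element lists which data provider indices to run:

    <suite name="Failed suite [Suite]">
      <test name="Suite(failed)">
        <classes>
          <class name="com.example.MyDataDrivenTest">
            <methods>
              <!-- re-run only invocations 3 and 7 of the data provider -->
              <include name="testWithData" invocation-numbers="3 7"/>
            </methods>
          </class>
        </classes>
      </test>
    </suite>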
Back to your question: your IDE needs to support this feature in order to make it available in the UI, so I suggest you ask on the IDEA forums.
The feature has been added as of IntelliJ 142.1217: https://youtrack.jetbrains.com/issue/IDEA-57906
As far as I know, TestCafe's default behaviour is to run tests in parallel.
Indeed, the browsers function accepts an array of browsers (which is cool).
What I would like to do, however, is quite different. I have fixtures based on areas of my portal (search, payment, etc.), and I'd like to know if it's possible to run these tests in parallel from the CLI, since they are orthogonal.
The goal, of course, is to improve execution time as the number of test cases grows.
On the other hand, I'd also like to catch failures, meaning that if a test run in parallel under a specific metadata filter fails, we would possibly like to stop the others too.
I am not using TestCafe's Docker image but our own custom one, with just Firefox and Chrome installed, and we launch the tests in headless mode.
As a last point, a great thing would be if we could run these scenarios/metadata filters in parallel but somehow gather the reports together at the end of the test suite.
I understand the question is not easy, especially because it involves both TestCafe and GitLab CI, but probably someone else has faced this problem too.
Thank you
If I understand you correctly, the behavior you described can be achieved by dividing the test execution among multiple CI jobs. For example, each CI job can test a particular area of your portal. For that, run TestCafe with the metadata of your fixtures/tests specified. Also, most CI systems allow you to cancel all other jobs in a pipeline if one of the jobs fails (unfortunately, GitLab hasn't released this feature yet).
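As a rough sketch of that first approach, a .gitlab-ci.yml along these lines would split the run by fixture metadata. The job names, paths, and meta values here are assumptions; it presumes a Node project where npx can resolve testcafe, and fixtures tagged with .meta({ area: '...' }):

    # one CI job per portal area, filtered via TestCafe's --fixture-meta option
    stages:
      - test

    search-tests:
      stage: test
      script:
        - npx testcafe chrome:headless tests/ --fixture-meta area=search

    payment-tests:
      stage: test
      script:
        - npx testcafe chrome:headless tests/ --fixture-meta area=payment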
On the other hand, you can use TestCafe's programmatic API: create multiple TestCafe runners, each running the desired subset of tests. However, at the end of the test execution, you'll need to merge generated reports into one report manually. Check this answer to get an idea of how to create multiple runners.
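A minimal sketch of that programmatic approach, again assuming fixtures tagged with .meta({ area: '...' }) and placeholder paths; each runner writes its own JSON report, which you would then merge afterwards:

    const fs = require('fs');
    const createTestCafe = require('testcafe');

    (async () => {
        const testcafe = await createTestCafe('localhost');
        try {
            // one runner per portal area, each writing its own report
            const runArea = (area) =>
                testcafe.createRunner()
                    .src('tests/')
                    .filter((name, fixture, path, testMeta, fixtureMeta) => fixtureMeta.area === area)
                    .browsers('chrome:headless')
                    .reporter('json', fs.createWriteStream(`reports/${area}.json`))
                    .run();

            // run the areas concurrently and collect the failed-test counts
            const failed = await Promise.all([runArea('search'), runArea('payment')]);
            process.exitCode = failed.some((count) => count > 0) ? 1 : 0;
        }
        finally {
            await testcafe.close();
        }
    })();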
We have QA and DEV environments in our automation repo. We are using Karate as our framework. We have a TestParallel class and an integrated Allure report.
How could we run all tests first in QA and then in DEV, back to back, using the TestParallel class, and see the results in the same report?
Thanks for such a great tool btw.
We are going to try and make this easier in the next version.
For now, you have to aggregate the reports yourself. Can you try the following and let us know how it goes?
Use the Runner class twice to run your tests with different settings, with karate.env set to QA and then to DEV.
The important part is using a different value for the workingDir, e.g. target/reports/qa and then target/reports/dev - else the second run will overwrite the first.
Now, when generating the HTML report, you can provide target/reports as the source folder. This should work for the Maven Cucumber Reports; for Allure, please figure this out on your own.
If the above approach does not work well enough for your needs, please figure out a way to manually aggregate the Results object you get from each instance of the Runner - this should not be too complicated as Java code. A rough sketch follows below.
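Here is roughly what those two runs could look like with Karate's Runner builder (method names may vary across Karate versions; the class name, feature path, and thread count are placeholders):

    import com.intuit.karate.Results;
    import com.intuit.karate.Runner;

    import java.io.File;

    public class QaThenDevRunner {

        public static void main(String[] args) {
            // first run: QA, reports written under target/reports/qa
            Results qa = Runner.builder()
                    .path("classpath:features")
                    .karateEnv("qa")
                    .workingDir(new File("target/reports/qa"))
                    .parallel(5);

            // second run: DEV, with a different workingDir so the
            // QA output is not overwritten
            Results dev = Runner.builder()
                    .path("classpath:features")
                    .karateEnv("dev")
                    .workingDir(new File("target/reports/dev"))
                    .parallel(5);

            // crude manual aggregation of the two Results objects
            System.out.println("total failures: "
                    + (qa.getFailCount() + dev.getFailCount()));
        }
    }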
I'm using Selenium IDE 2.3.0 to record actions in my web application and create tests.
Before every test I have to clear all cookies, load the main page, log in with a specific user, and submit the login form. These ~10 commands are fixed, and every test case needs them, but I don't want to record them or copy them from other tests every time.
Is there a way to configure how "empty" test cases are created?
I know I could create a prepare.html file or something and prepend it to a test suite. But I need to be able to run either a single test or all tests at once, so every test case must include the commands.
OK, I finally came up with a solution that suits me. I wrote custom commands setUpTest and tearDownTest, so I only have to add those two manually to each test.
I used this post to get started:
Adding custom commands to Selenium IDE
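For reference, custom commands are registered through a user-extensions.js file; a bare-bones sketch (the command body here is illustrative only):

    // Defines a "setUpTest" command: a method named doSetUpTest on
    // Selenium.prototype appears in the IDE as the command "setUpTest".
    Selenium.prototype.doSetUpTest = function(target, value) {
        // built-in Selenium Core action: clear cookies before each test
        this.doDeleteAllVisibleCookies();
        // the open/type/submit log-in steps would follow here; commands
        // that trigger page loads need the IDE's waiting mechanics, so
        // keep those as explicit steps if chaining them proves brittle
    };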
Selenium supports object-oriented design: you can create a class that encapsulates those commands and always executes them. In each of the tests you are executing, you can then call that class's supporting method and run it.
Currently our Jenkins server only displays a history/graph for the overall number of passed/skipped/failed tests - I'm assuming that's the behavior out of the box.
If you select a single test, you'll get information for how long the test was failing (assuming it did fail).
However, what we'd like to see is a history for that single test across the different builds, to identify whether the test has been failing in the past (and when), even though it just passed. If you find a build where it failed, you could click on it and investigate what might have caused the failure; if it passes again, you could check whether something actually fixed the test, or whether it was failing randomly all along.
Is this something that can be done through the config, or do we need an additional plugin for this? If so, which one?
Not sure if this makes much difference, but we're using Java (Maven) & TestNG (Surefire).
Both the TestNG plugin and the JUnit plugin will actually display history of the test results.
You just need to pick a given result and then:
For JUnit, click on "History" on the left side.
For TestNG, you will see the history in the graph above the result. You can click on the bars to see the older results, and if you click closer to the edge of the graph, the scope of the displayed results will adjust.
The Test Results Analyzer plugin does the job for me. There appear to be other suitable plugins out there as well.
https://wiki.jenkins-ci.org/display/JENKINS/Test+Results+Analyzer+Plugin
Does the Static Code Analysis plugin help?
I'm looking for some sort of management/reporting tool that collects test results, stores them, and then lets users generate reports based on those tests.
We have numerous test-running tools on a variety of platforms, but all of them output test results in the JUnit format. The tests are not specific to hardware or platform, but rather generic. What we would like is for an automated (or manual) test run to be able to submit its results to a central location, along with additional information like platform, OS, hardware configuration, and maybe user-defined data. The management/reporting tool would store this data.
Then, a manager would be able to go to the tool and request an update on the current status (or, more likely, access a dashboard that developers have set up). This could be a list of test results that were run in a particular configuration, a hardware status, or just the results of specific tests.
Any suggestions?
We built a test management tool, Enterprise Tester (www.enterprisetester.com), that allows users to pull in automated test results in JUnit, NUnit, XSLT, Selenium, and other formats and report off the results.
In addition to pulling in the results and reporting, you are able to trace these tests back to requirements that have been created, giving you the ability to see test coverage and the status of that coverage on dashboards. If you are using JIRA (or Google) or anything that uses OpenSocial gadgets, you can pass these gadgets to other tools as well.
Feel free to contact me directly if you would like to talk further about it
Regards
Bryce
You can also try an open source project called Allure Framework. It was created specifically for showing test execution results in a nice form. A set of popular test frameworks such as JUnit, PHPUnit, TestNG, py.test, RSpec, and Scalatest is already supported. Others, such as NUnit, will be supported soon.
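For example, with the Allure command line tool you can turn the raw results of a run into an HTML report (the paths here are placeholders):

    # generate an HTML report from collected results, then open it locally
    allure generate target/allure-results --clean -o target/allure-report
    allure open target/allure-report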
Check out Hudson. It's very useful and configurable.