Is there a way to re-run successful tests from a previous run with Google Test? - googletest

Is there a gtest flag, or any other way, to re-run tests with Google Test that were previously successful (with no changes to any code)?

It doesn’t seem that there is a specific option for rerunning tests.
You could run the test collection and generate a report in XML or JSON format (--gtest_output=xml:report.xml or --gtest_output=json:report.json). Then you could parse the report and create a --gtest_filter=... argument by joining the names of the successful tests into a colon-separated string.
You could also imagine writing a custom test listener. This would need to write out successful test names to a file, and once again, you could write some simple code to generate the corresponding --gtest_filter argument.
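For the first approach, here is a minimal sketch in Java (the idea is language-agnostic; file and class names are illustrative). It assumes the suite was run with --gtest_output=xml:report.xml and treats a test case as successful when its <testcase> element contains no <failure> children:

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class GtestFilterBuilder {
    public static void main(String[] args) throws Exception {
        // parse the XML report produced by --gtest_output=xml:report.xml
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("report.xml"));
        List<String> passed = new ArrayList<>();
        NodeList suites = doc.getElementsByTagName("testsuite");
        for (int i = 0; i < suites.getLength(); i++) {
            Element suite = (Element) suites.item(i);
            NodeList cases = suite.getElementsByTagName("testcase");
            for (int j = 0; j < cases.getLength(); j++) {
                Element tc = (Element) cases.item(j);
                // no <failure> child means the test case passed
                if (tc.getElementsByTagName("failure").getLength() == 0) {
                    passed.add(suite.getAttribute("name") + "." + tc.getAttribute("name"));
                }
            }
        }
        // --gtest_filter expects a colon-separated list of SuiteName.TestName entries
        System.out.println("--gtest_filter=" + String.join(":", passed));
    }
}

You could then pass the printed argument to the next test run.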

Related

Test Results Repository Solutions?

I have been searching for a while now and am surprised that I can't find any solutions out there for test result storage with grouping and searching capabilities.
I'd like a service or self-hosted solution that supports:
storing test results in xUnit/JUnit format, organized by keyword. In other words, I want to keep all my "test process A" test results together and all my "test process B" results together. I want to store failure traces and overall pass/fail at a minimum
getting the last run's results for a keyword: e.g. the last "auth" test results with failure details
getting run history by keyword in some format
some sort of search on test results
I happen to have:
Cypress tests
typescript/mocha tests without cypress
custom test framework tests that will need custom reporters
but I am fine with any test results solution that supports a generic input like xUnit.
I am definitely open to suggestions that use any other storage system that can accomplish this even if it isn't strictly a test results tool.

Execute one feature at a time during application execution

I'm using Karate in this way: during application execution, I get the test files from another source and create feature files based on what I get.
Then I iterate over the list of tests and execute them.
My problem is that by using
CucumberRunner.parallel(getClass(), 5, resultDirectory);
I execute all the tests at every iteration, which causes tests to be executed multiple times.
Is there a way to execute one test at a time during application execution? (I'm fully aware of the empty test class with an annotation to specify one class, but that doesn't seem to serve me here.)
I thought about creating every feature file in a new folder so that I can specify the path of a folder that contains only one feature at a time, but CucumberRunner.parallel() accepts a Class and not a path.
Do you have any suggestions please?
You can explicitly set a single file (or even directory path) to run via the annotation:
@CucumberOptions(features = "classpath:animals/cats/cats-post.feature")
I think you are already aware of the Java API, which can take one file at a time, but you won't get reports.
Well, you can try this: set a system property cucumber.options with the value classpath:animals/cats/cats-post.feature and see if that works. If you add tags (search the docs), each iteration can use a different tag, which would give you the behavior you need.
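A hedged sketch of that suggestion (the feature path is illustrative, and whether the property is picked up may depend on your Karate version):

// per iteration: point the runner at just the newly generated feature
System.setProperty("cucumber.options", "classpath:animals/cats/cats-post.feature");
CucumberRunner.parallel(getClass(), 1, resultDirectory);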
I just got an interesting idea: why don't you generate a single feature, and in that feature make calls to all the generated feature files?
Also, how about programmatically deleting (or moving) the files after you are done with each iteration? See the sketch below.
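As a sketch of the move idea, using standard java.nio (class and folder names are illustrative):

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class FeatureArchiver {
    // move a consumed feature out of the scanned folder so the next
    // CucumberRunner.parallel() run no longer picks it up
    public static void archiveFeature(File feature) throws IOException {
        Path processed = Paths.get("src/test/resources/processed");
        Files.createDirectories(processed);
        Files.move(feature.toPath(), processed.resolve(feature.getName()),
                StandardCopyOption.REPLACE_EXISTING);
    }
}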
If all the above fails, I would try to replicate some of this code: https://github.com/intuit/karate/blob/master/karate-junit4/src/main/java/com/intuit/karate/junit4/Karate.java

I am using msbuild to call my integration tests using OpenCover. I want to append all the results into one XML file. Is this possible?

Currently I run OpenCover against each individual DLL we have. This produces an XML file for each DLL. Is there a way of having all the results appended into one file when running from OpenCover?
I would like to get all the results appended into the default test results .xml file.
You can use the -mergeoutput switch, which allows the output of one run to be loaded and updated with the data from the next run.
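For example (runner and assembly names are illustrative), the second invocation loads coverage.xml and merges the new data into it:

OpenCover.Console.exe -target:nunit3-console.exe -targetargs:TestsA.dll -register:user -output:coverage.xml
OpenCover.Console.exe -target:nunit3-console.exe -targetargs:TestsB.dll -register:user -mergeoutput -output:coverage.xml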
I used ReportGenerator to amalgamate all the OpenCover results, and the existing TeamCity NUnit reporter to report on the NUnit results generated. This gives me two reports from all runs: one with coverage and a second with just the NUnit results.

How to modify a single test case in manual recording using MTM

I have created a test case and recorded it. Now I need to modify it.
If I modify even a single step, then the next time I want to execute it, MTM asks me to record the complete test case again. Is it possible to modify a single step?
If you are using the Fast Forward feature, then this is the expected behavior. Any changes to the test will require you to record the steps again.

Prefill new test cases in Selenium IDE

I'm using Selenium IDE 2.3.0 to record actions in my web application and create tests.
Before every test I have to clear all cookies, load the main page, log in with a specific user and submit the login form. These ~10 commands are fixed, and every test case needs them, but I don't want to record or copy them from other tests every time.
Is there a way to configure how "empty" test cases are created?
I know I could create a prepare.html file or something and prepend it to a test suite. But I need to be able to run either a single test or all tests at once, so every test case must include the commands.
OK, I finally came up with a solution that suits me: I wrote custom commands setUpTest and tearDownTest, so I only have to add those two manually to each test.
I used this post to get started:
Adding custom commands to Selenium IDE
Selenium supports object-oriented design. You should create a class that takes the commands you are referring to and always executes them; in each of the tests you are executing, you could then make a call to that class and its supporting method.
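As a sketch of that idea with Selenium WebDriver in Java (class name, URL and locators are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// encapsulates the ~10 fixed setup commands every test needs
public class LoginHelper {
    public static void setUpTest(WebDriver driver, String baseUrl, String user, String password) {
        driver.manage().deleteAllCookies();               // clear all cookies
        driver.get(baseUrl);                              // load the main page
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-form")).submit(); // submit the login form
    }
}

Each test then starts with a single call to LoginHelper.setUpTest(...).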