How to get NUnit Console 3 to output failed and ignored tests (only) to text files? - nunit-3.0

I understand that the NUnit 3 console runner can write a TestResult.xml after running the tests, where TestResult.xml is located inside the directory specified by the --work parameter.
But from what I can tell, TestResult.xml contains too many irrelevant details that I don't need in order to fix my unit test errors. All I need is the failed or ignored test cases, just like what is displayed in the command prompt when I run the console runner.
How do I configure the parameters for NUnit Console 3 so that it only gives me failed or ignored test cases?

Sorry, I'm a few months late (unfortunately I came across this problem only now). But as far as I have searched, the only possibility is to use an XSL transformation. There are some applications that can convert the XML report file; I tested a few, but unfortunately the output was not what I needed.
Therefore I created a simple NUnit3summary application that can transform the standard XML output file to a text file. I was surprised that nobody had made something like this until now (or at least had not published it). It was only two hours of work (for the first working version) and a few more to get it into a state ready for publishing.
It is just a simple application aimed at my needs. For now, you can only filter with another application, e.g.:
NUnit3summary.exe TestResult.xml | grep Failed >FailedTests.txt
You can see a practical application here (this is also the project where the application was needed, because of too many errors in unit tests).
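If you prefer not to depend on an extra tool, a small XSL transformation over TestResult.xml can do the same filtering. Below is a minimal sketch (the file names are placeholders); it assumes the NUnit 3 result schema, where failed test cases carry result="Failed" and ignored ones appear as result="Skipped" with label="Ignored":

    <?xml version="1.0" encoding="utf-8"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <!-- emit one line per failed or skipped/ignored test case -->
      <xsl:template match="/">
        <xsl:for-each select="//test-case[@result='Failed' or @result='Skipped']">
          <xsl:value-of select="@result"/>
          <xsl:text>: </xsl:text>
          <xsl:value-of select="@fullname"/>
          <xsl:text>&#10;</xsl:text>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>

Any XSLT 1.0 processor can apply it, e.g. msxsl.exe TestResult.xml failed-only.xslt -o FailedTests.txt.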

Related

How to use environment specific test data in Karate

I would like to know how it is possible to use different data sets at runtime when executing tests in various environments. I have read the documentation but I am unable to find the best solution for this scenario.
Requirement: execute a test in the QA environment and then execute the same test in SIT. However, use different data in the request, e.g. customer IDs. The reason for this is that the data setup in each environment is very different.
Would appreciate it if you could propose the best solution for this scenario.
Here in the documentation you can find an explanation of how to do this: https://github.com/intuit/karate#environment-specific-config
Then you can simply specify the environment when launching Karate:
mvn test -DargLine="-Dkarate.env=e2e"
And all your tests will be able to use the variables you've defined for the specified environment.
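For reference, the karate-config.js described at that link looks roughly like the sketch below; only karate.env and karate.log come from Karate itself, while the variable names and values are made-up examples:

    function fn() {
      var env = karate.env; // picked up from the -Dkarate.env system property
      karate.log('karate.env system property was:', env);
      if (!env) {
        env = 'qa'; // fall back to a default environment
      }
      var config = {
        env: env,
        // hypothetical environment-specific values:
        customerId: env === 'qa' ? 'QA-12345' : 'SIT-67890',
        dataFolder: 'classpath:data/' + env + '/'
      };
      return config;
    }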
Edit: another hint: in your config file, specify the path of a data file. Then, depending on your env, you'll be able to read a different file containing all your data.
Edit after your comment:
Let's say you defined two environments, "qa" and "prod".
For every piece of data that differs between the two, simply create two files: myFile-qa.json and myFile-prod.json.
Now, in your tests, when you want to read a file, just call read('myFile-' + env + '.json'). And just like that, you read the correct file depending on your defined environment.
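In a feature file that could look something like the sketch below; env comes from the config sketch above, while serviceUrl and the JSON file name are placeholders you would define yourself:

    Background:
      * def customer = read('classpath:customer-' + env + '.json')

    Scenario: use environment-specific request data
      Given url serviceUrl
      And request customer
      When method post
      Then status 201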

Running the exploration result file of the TestSuite machine (Spec Explorer) in a Console Application

I'm having a problem while trying to follow an example for Spec Explorer, using Visual Studio 2012.
I've been following this link, but I get stuck on running the exploration result file with a Console Application.
My problem starts at this sentence:
"Running this part of the program on the exploration result file of the TestSuite machine in the SMB2 project results in output as below:"
I don't know how to do this, and they don't elaborate on it. Does anyone know how I should do this?
I assume you are using Visual Studio (VS) 2012.
Then I guess you have already tried the "RequirementReport" example for Spec Explorer.
That should give you the same possibilities and a running example (using VS 2010).
I assume you tried this example, but it was not working (due to VS 2012).
Then you tried it with the article in your link.
You're interested in just a report of an exploration result; you're no longer playing with the idea of creating your own full-blown path coverage strategy. Right?
You created a new C# console application project and copied the program code from the end of the article into it.
You are able to compile, but you forgot to replace "args[0]" with the fully qualified path to a .seexpl file. Right?
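For example (the executable name and path below are just placeholders, not from the article), you can leave args[0] in the code and instead pass the exploration result file on the command line:

    ExplorationReport.exe "C:\Projects\SMB2\TestSuite.seexpl"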
A lot of guessing, but I need 4 more points until I can ask questions in a comment ...

Can you run two test cases simultaneously in a Test Suite in Microsoft Test Manager 2010?

I am trying to create a unit test to run on two machines in Microsoft Test Manager 2010. In this test I want some client-side and server-side test code to run simultaneously, with the client-side test being dependent on the server-side test working successfully.
When putting together a Test Suite in Test Manager, I want to be able to set both tests to have the same order value (so they run at the same time), but the validation prevents this when setting the order as shown below:
Is there any way I can achieve the simultaneous test execution I am after?
Sorry for the late answer... I missed the notification about your answers to my question :-( Sorry for that!
In case you are still looking for a solution, here is my suggestion.
I suppose you have a test environment consisting of two machines (one for the server and one for the client).
If so, you will not be able to run tests on both of them; or, to put it better, you will not have enough control over running the tests. Check How to Run automated tests on multiple computers at the same time.
Actually, I posted a related question to the Visual Studio Development Forum; you can check the answers I got here: Is it possible to run tests on several virtual machines, which belong to the same environment, using the build-deploy-test workflow
All that means you will end up creating two environments, each consisting of one machine (one for the server and one for the client).
But then you will not be able to reference both environments in your build definition, since you can only select one environment in the DefaultLabTemplate.
That leads to the solution I can suggest:
Create two lab environments
Create three build definitions
the first one will only build your test code
the second one will deploy the last successful build from the first one and start the tests on the server environment
the third one will deploy the last successful build from the first one and start the tests on the client environment.
Run the first build definition automatically at night
Trigger the latter two simultaneously later.
It's not really nice, I know...
You will have to synchronize the build definition building the test code with the two build definitions running the tests.
I was thinking about setting up similar tests some months ago and it was the best solution I came up with...
Another option I have not tried yet could be:
Use a single test environment consisting of two machines and use different roles for them (server and client respectively).
In MTM create two Test Settings (one for the server role and one for the client role).
Create a bat file that starts the tests using the tcm.exe tool (see How to: Run Automated Tests from the Command Line Using Tcm for more details); a sketch of such a script is shown after this list.
You will need two tcm.exe calls, one for each of the Test Settings you have created.
Since a tcm.exe call just queues a test run and returns (more or less) immediately, this batch file will start the tests (more or less) simultaneously.
Create a build definition using DefaultLabTemplate.
This definition will:
build test code
deploy them to both machines in your environment
run your batch script as the last deployment step
(you will have to make sure this script is located on the build machine or deploy it there or make it accessible from the build machine)
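The batch file itself could look roughly like the sketch below; every id, title, URL and settings name is a placeholder, so check the tcm.exe documentation linked above for the parameters that match your plan, suites and environment:

    rem queue the server-side and the client-side run back to back, so they start (more or less) together
    tcm.exe run /create /title:"Server side tests" /planid:7 /suiteid:12 /configid:3 /settingsname:"ServerRoleSettings" /collection:http://tfs:8080/tfs/DefaultCollection /teamproject:MyProject
    tcm.exe run /create /title:"Client side tests" /planid:7 /suiteid:13 /configid:3 /settingsname:"ClientRoleSettings" /collection:http://tfs:8080/tfs/DefaultCollection /teamproject:MyProject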
As I've said, I have not tried it yet.
The disadvantage of this approach is that you will not see the test part in the build log, since the tests will not be started by the means provided by the DefaultLabTemplate. So the build will not fail when tests fail.
But you will still be able to see test outcomes in MTM and will have test results for each machine.
But depending on what is more important to you (having test results, having a build definition that fails if tests fail, or having both), it could be a solution for you.
Yes, you can, with a modified TestSettings file:
http://blogs.msdn.com/b/vstsqualitytools/archive/2009/12/01/executing-unit-tests-in-parallel-on-a-multi-cpu-core-machine.aspx
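The relevant part of the .testsettings file from that post is roughly the sketch below (trimmed down, and the name is a placeholder); as the post's title says, this concerns running unit tests in parallel on a multi-core machine, with parallelTestCount="0" meaning "one parallel test execution per CPU core" (a fixed number also works):

    <?xml version="1.0" encoding="UTF-8"?>
    <TestSettings name="ParallelSettings" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
      <!-- 0 = run as many tests in parallel as there are CPU cores -->
      <Execution parallelTestCount="0" />
    </TestSettings>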

FxCop - TFS integration: Need to create a TFS bug for the last checked-in person if FxCop fails

We have a specific requirement regarding FxCop integration with TFS 2010. The requirement is as follows.
- Execute the build in specific intervals (there is already a method for this)
- Run FxCop after each build (this is simple and well known)
- If anything fails, create a TFS bug item and assign it to the person who last checked in the file.
We know that a 'gated check-in' is the best way, but due to some reasons we cannot adopt that. The challenge we are facing is the creation of the bugs against the last checked-in person of each file.
Has anybody done this type of solution before? Is there any code available publicly which does this?
Thanks in advance.
It was completed by coding the whole thing ourselves. The basic idea is as follows:
Get the latest sources and run the existing build script, which produces PDBs as well
At the end of the build script, start FxCop using FxCopCmd and write the output to an XML file
Parse the XML and find the message nodes which contain failed reviews
Extract the source file path from the above XML node
Map the file path to the TFS path (i.e. c:\code to the TFS server path starting with $/code)
Find the details of the person who last checked the file in
Create and assign a bug to that person.
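As a rough illustration of the FxCopCmd step and the "find the last check-in" step (the assembly name, server path and output file below are placeholders, and the FxCopCmd location depends on your installation), the commands involved look something like this; the bug itself can then be created programmatically, for example through the TFS work item tracking API:

    rem run FxCop at the end of the build script and capture the findings as XML
    FxCopCmd.exe /file:MyProduct.dll /out:FxCopResults.xml
    rem ask TFS for the most recent changeset of an offending file (run from a mapped workspace)
    tf.exe history "$/Code/MyProduct/SomeClass.cs" /stopafter:1 /noprompt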
This was specific to our project, where we could not implement gated check-ins due to the large code base and the high frequency of check-ins, but we had to implement automated reviews.
This can be closed

Intellij running one test in TestNG

So my typical workflow is
I write a data driven test using TestNG in IntelliJ.
I supply hundreds of data items
Run the test and one or two of them fail
I see the list of passed/failed tests in the "Run" pane.
I would like the ability to just right-click that "instance" of the test and run that test alone (with breakpoints). Currently IntelliJ does not seem to have that feature; I would have to right-click the test, and when I run it, it runs the whole set of tests with hundreds of data points.
Is this possible?
TestNG supports this at the testng.xml level, where you can specify which indices of your data provider should be used. It's called "invocation-numbers", and you can see what it looks like by running a test with a data provider, letting some of its invocations fail, and looking at the testng-failed.xml that gets generated.
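For illustration, a testng-failed.xml produced this way looks roughly like the sketch below (suite, class and method names are placeholders; the numbers are the indices of the data-provider invocations to re-run):

    <!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
    <suite name="Failed suite [Suite]">
      <test name="My tests(failed)">
        <classes>
          <class name="com.example.MyDataDrivenTest">
            <methods>
              <!-- only these data-provider invocations of this method are run -->
              <include name="shouldHandleItem" invocation-numbers="2 16"/>
            </methods>
          </class>
        </classes>
      </test>
    </suite>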
Back to your question: your IDE needs to support this feature in order to make it available in the UI, so I suggest you ask on the IDEA forums
The feature has been added as of IntelliJ 142.1217: https://youtrack.jetbrains.com/issue/IDEA-57906