How to find or log the currently running test in MSTest.exe - crash

We have a build process that includes unit tests launched via mstest.exe. Sometimes a unit test gets stuck: a message box or a Send Error dialog blocks it, or the entire process crashes. I don't know how to find out which test is the bad one.
Is there any way to find the name of the currently running test when a unit test is stuck?
Is there a way to find the name of the unit test that ran last?
I don't want to set a timeout for every single test, because I am not sure what a suitable timeout would be.
A nice solution for me would be to log when each unit test starts and finishes; then I could find the last test that was logged. Is there a way to log that?

You can use TestContext.TestName for it. Note that MSTest only populates the context if your test class exposes a public TestContext property:

// MSTest sets this property automatically before each test runs.
public TestContext TestContext { get; set; }

/// Use this method to run code before running each test.
[TestInitialize()]
public void TestInitialize()
{
    YourLogger.LogFormat("Run test {0}", this.TestContext.TestName);
}

/// Use TestCleanup to run code after each test has run.
[TestCleanup()]
public void MyTestCleanup()
{
    YourLogger.LogFormat("Test {0} is done", this.TestContext.TestName);
}

Related

Repast - call simulation from Java program without GUI

I am following the instructions to test calling my simulation model from another Java program.
package test;

//import repast.simphony.runtime.RepastMain;

public class UserMain {

    public UserMain() {}

    public void start() {
        String[] args = new String[]{"D:\\user\\Repast_java\\IntraCity_Simulator\\IntraCity_Simulator.rs"};
        repast.simphony.runtime.RepastMain.main(args);
        // repast.simphony.runtime.RepastBatchMain.main(args);
    }

    public static void main(String[] args) {
        UserMain um = new UserMain();
        um.start();
    }
}
The Java program launches the GUI with the RepastMain configuration:
repast.simphony.runtime.RepastMain.main(args);
The Java program terminates almost immediately, running nothing and returning nothing, if I apply the non-GUI configuration:
repast.simphony.runtime.RepastBatchMain.main(args);
How can I enable running the simulation in headless mode?
Secondly, I need to deploy my simulation model on a remote server (Linux). What is the best way for the server to call my simulation model? If HTTP, how should the configuration be done? The model should preferably be run via the batch-run mechanism (either a single run or multiple runs, depending on the user's choice), and the batch-run output needs to be transformed into JSON to feed back to the server.
Parts of the batch run mechanism for Simphony can probably be used for this. For some context on headless command-line batch runs, see:
https://repast.github.io/docs/RepastBatchRunsGettingStarted.pdf
That doesn't align exactly with what you are trying to do, given that you are embedding the simulation run in other Java code, but it should help as background.
Ultimately, though, the batch run code calls an InstanceRunner:
https://github.com/Repast/repast.simphony/blob/master/repast.simphony.distributed.batch/src/repast/simphony/batch/InstanceRunner.java
The InstanceRunner either iterates over a list of parameter sets in a file, or over parameter-set strings passed to it directly, and then performs a simulation run for each of those parameter sets. If you passed it a single parameter set, it would run once, which I think is what you want to do. So I would suggest looking at the InstanceRunner code to get a sense of how it works, and mimicking InstanceRunner.main() in the code from which you call the simulation.
As for remote execution, Simphony can copy a simulation to a remote resource, run it, and copy the results back. That is integrated with the Simphony GUI, and so is not callable from other code without some work on your part. All the relevant code is in:
https://github.com/Repast/repast.simphony/tree/master/repast.simphony.distributed.batch/src/repast/simphony/batch
The SSHSession class has code for executing commands on a remote resource over SSH, methods for copying files, and so on. Perhaps that might be useful to you.

How to restrict a test-data method call to its respective test method when using the TestCaseSource attribute in NUnit

I am using NUnit for a Selenium C# project, in which I have many test methods. To get data (from Excel) I am using a public static method that returns IEnumerable<TestCaseData>, which I reference at the test-method level as a TestCaseSource. I am facing a challenge now: as soon as I start executing one test method, it invokes all such static methods in the project.
Code looks like this:
public static IEnumerable<TestCaseData> BasicSearch()
{
    BaseEntity.TestDataPath = PMTestConstants.PMTestDataFolder + ConfigurationManager.AppSettings.Get("Environment").ToString() + PMTestConstants.PMTestDataBook;
    return ExcelTestDataHelper.ReadFromExcel(BaseEntity.TestDataPath, ExcelQueryCreator.GetCommand(PMTestConstants.QueryCommand, PMTestConstants.PMPolicySheet, "999580"));
}

[Test, TestCaseSource("BasicSearch"), Category("Smoke")]
public void SampleCase(Dictionary<string, string> data)
{
    // do something with the data
}
Can someone tell me how I can restrict each data method call to its respective test method?
Your TestCaseSource is not actually called by the test method when you run it, but as part of test discovery. While it's possible to select a single test to execute, it's not possible to discover tests selectively: NUnit must examine the assembly and find all the tests before it can run any of them.
To make matters worse, if you are running under Visual Studio, the discovery process takes place multiple times: first before the tests are initially displayed, and then again each time the tests are run. This is made necessary by the architecture of the VS Test window, which uses separate processes for the initial discovery and for the execution of the tests.
That makes it particularly important to minimize the amount of work done during test discovery, especially when running under Visual Studio. Ideally, you should structure the code so that only the variable parameters are recorded during discovery; the actual data access should take place at execution time, in a OneTimeSetUp method, a SetUp method, or at the start of the test itself.
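A minimal sketch of that restructuring, reusing names from the question (PMTestConstants, ExcelTestDataHelper and friends are the asker's own helpers; the fixture name, LoadRow and ReadSingleRow are hypothetical):

using System.Collections.Generic;
using System.Configuration;
using NUnit.Framework;

public class PolicySearchTests
{
    // Discovery now only records a cheap key; no Excel I/O happens here.
    public static IEnumerable<TestCaseData> BasicSearchKeys()
    {
        yield return new TestCaseData("999580");
    }

    [Test, TestCaseSource(nameof(BasicSearchKeys)), Category("Smoke")]
    public void SampleCase(string policyKey)
    {
        // The expensive Excel access happens at execution time instead.
        Dictionary<string, string> data = LoadRow(policyKey);
        // do something with the data
    }

    // Hypothetical helper: wraps the asker's existing Excel-reading code so
    // that it returns the single row matching the given key.
    private static Dictionary<string, string> LoadRow(string policyKey)
    {
        string path = PMTestConstants.PMTestDataFolder
                      + ConfigurationManager.AppSettings.Get("Environment")
                      + PMTestConstants.PMTestDataBook;
        return ExcelTestDataHelper.ReadSingleRow(path, policyKey);
    }
}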
Finally, I'd say that your instinct is correct: it should be possible to set up a TestCaseSource that runs only when the test it feeds is about to be executed. Unfortunately, that's a feature NUnit doesn't yet have.

In NUnit [TearDown], how to find out whether the test was run solo?

So, in TearDown I have the info about the test outcome and the test result message, but I would like to handle things differently depending on whether the test was run solo (a single test in the test session) or as part of a whole set of tests (e.g. "Run all tests"/"All tests from Solution").
The goal is to detect whether the developer started the test individually (manually, from within Visual Studio) or whether it was started by a Continuous Integration system.
This is what I have so far:
/// <summary>
/// A helper function for resolving problems when string comparison fails.
/// </summary>
/// <remarks>
/// Intended to be used to analyze the detected differences.
/// </remarks>
[TearDown]
public void CompareNonMatchingStringsOnFailure() {
    if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed) {
        string outputMessage = TestContext.CurrentContext.Result.Message;
        if (outputMessage.StartsWith("Expected string to be ")) {
            // do extended comparison
            // This should only run on single runs, which were initiated manually from Visual Studio
            //...
        }
    }
}
How can I get info about the test run/session in the TearDown method?
You can't do this in the code of a teardown, because (1) TearDown is still part of a test, and (2) tests are not supposed to know anything about who ran them, why they are running, and so on. The execution environment knows about the test, but the test does not know about the execution environment. In fact, NUnit goes to a lot of trouble to make sure things work the same in every environment. While there are ways to trick NUnit, they are generally bad ideas and version-dependent.
Here's what you can do:
1. Create a fixture that inherits from your fixture.
2. Put the logic you want in the new fixture's TearDown method.
3. Mark the new fixture as [Explicit].
4. Do not add any categories to the new fixture.
Because of (3), the new fixture will not run as part of CI, or even from the IDE when you run all tests. It can only be run explicitly, and since it has no categories, that means it can only be run by name, i.e. by selecting the entire fixture or a single test.
That isn't quite what you asked for: if you run the entire fixture, you will get the full comparison for all the inherited test methods. However, it may be sufficient for what you are trying to accomplish.
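A minimal sketch of that layout, assuming the original tests live in a fixture called MyStringTests (a hypothetical name):

using NUnit.Framework;
using NUnit.Framework.Interfaces;

public class MyStringTests
{
    [Test]
    public void SomeStringComparison()
    {
        // ordinary assertions live in the base fixture
    }
}

// Inherits (and therefore re-runs) all tests from MyStringTests, but because of
// [Explicit] it is skipped by "Run all" and CI; it only runs when selected by name.
[Explicit]
public class MyStringTestsWithDiagnostics : MyStringTests
{
    [TearDown]
    public void CompareNonMatchingStringsOnFailure()
    {
        if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed)
        {
            // the extended comparison from the question goes here
        }
    }
}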

JUnit - Postponed assert failure

I'm automating functional tests using JUnit, and I ran into a problem: if I follow the rule "one (significant) assert per test method", then I end up with a bunch of 6-line test methods per test case (17 is the biggest count so far). If I put them all into one test method, I have to comment out failing asserts or leave half of the test never executed.
I don't like the first approach because it launches the browser too many times, and it turns out that the browser launch plus login/logout are more "expensive" and time-consuming than the test run itself.
The second approach is no better, because it introduces a lot of manual work however it is managed.
So, my questions are:
1. What are the best practices for such cases?
2. Is there some way to postpone a test failure until the end of the test? I mean a less important assert that doesn't stop the test run, but still causes it to fail at the end.
UPD: Yes, I'm using Selenium, and I have a parent class for all test classes to unify their settings.
You can use @BeforeClass and @AfterClass to launch and shut down the browser once per test class, or you can create a Rule that launches your browser and use it with @ClassRule.
For postponing assertion failures until the end of a test, the ErrorCollector rule may be helpful.
@BeforeClass
public static void beforeClass() {
    launchBrowser();
}

@Before
public void before() {
    login();
}

@AfterClass
public static void afterClass() {
    killBrowser();
}
That could be the answer to your problem.

Re-run the failed Selenium NUnit tests

In our test environment, there are some tests that fail intermittently under certain circumstances.
So my question is: what can be done to rerun only the failed NUnit tests?
My idea is to implement some steps in the NUnit TearDown to re-run the failed test, as below:
[TearDown]
public void TearDownTest()
{
    // NUnit 3 exposes the outcome via TestContext.CurrentContext.Result
    TestStatus state = TestContext.CurrentContext.Result.Outcome.Status;
    if (state == TestStatus.Failed)
    {
        // if so, is it possible to rerun the test ??
    }
}
My requirement is: I want to run a failed test up to three times, i.e. retry it if it fails the first and second time.
Can anybody suggest thoughts on this?
Thanks in advance,
Anil
Instead of using the teardown, I'd rather use the XML report: use some XSLT to figure out the failing fixtures, and feed them back to a build step that runs the tests again.
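As a rough illustration of that approach (using LINQ to XML rather than XSLT, and assuming the NUnit 3 result format, where each test-case element carries fullname and result attributes), a small tool could extract the failed tests and print a --where filter for a second nunit3-console run:

using System;
using System.Linq;
using System.Xml.Linq;

public static class FailedTestExtractor
{
    public static void Main(string[] args)
    {
        // Result file produced by the first nunit3-console run.
        var doc = XDocument.Load(args.Length > 0 ? args[0] : "TestResult.xml");

        var failed = doc.Descendants("test-case")
            .Where(tc => (string)tc.Attribute("result") == "Failed")
            .Select(tc => (string)tc.Attribute("fullname"))
            .ToList();

        if (failed.Count == 0)
        {
            Console.WriteLine("No failed tests.");
            return;
        }

        // Selects exactly the failed tests, e.g.
        // test == 'My.Tests.Search' or test == 'My.Tests.Login'
        string filter = string.Join(" or ", failed.Select(n => "test == '" + n + "'"));
        Console.WriteLine("--where \"" + filter + "\"");
    }
}

The build script can pass the printed filter to a second nunit3-console invocation; looping this up to two more times gives the "retry three times" behaviour asked about above.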