In NUnit [TearDown], how to find out whether the test was run solo? - testing

So, in TearDown I have the info about the test outcome and the test result message, but I would like to handle things differently depending on whether the test was run solo (a single test in the test session) or as part of a whole set of tests (e.g. "Run all tests"/"All tests from Solution").
The goal is to detect whether the developer started the test individually (manually, from within Visual Studio) or whether it was started by a Continuous Integration system.
This is what I have so far:
/// <summary>
/// A helper function for resolving problems when string comparison fails.
/// </summary>
/// <remarks>
/// Intended to be used to analyze the detected differences.
/// </remarks>
[TearDown]
public void CompareNonMatchingStringsOnFailure() {
    if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed) {
        string outputMessage = TestContext.CurrentContext.Result.Message;
        if (outputMessage.StartsWith("Expected string to be ")) {
            // do extended comparison
            // This should only run on single runs, which were initiated manually from Visual Studio
            //...
        }
    }
}
How to get info about the test run/session in the TearDown method?

You can't do this in the code of a teardown because (1) TearDown is still part of a test and (2) tests are not supposed to know anything about who ran them, why they are running, and so on. The execution environment knows about the test, but the test does not know about the execution environment. In fact, NUnit goes to a lot of trouble to make sure things work the same in each environment. While there are ways to trick NUnit, they are generally bad ideas and version-dependent.
Here's what you can do...
1. Create a fixture that inherits from your fixture.
2. Put the logic you want in the new fixture's TearDown method.
3. Mark the new fixture as [Explicit].
4. Do not add any categories to the new fixture.
Because of (3) the new fixture will not run as part of CI or even from the IDE when you run all tests.
It can only be run explicitly. Since it has no categories, that means it can only be run by name... i.e. by selecting the entire fixture or a single test.
That isn't quite what you asked for. If you run the entire fixture, you will get the full comparison for all the inherited test methods. However, it may be sufficient for what you are trying to accomplish.
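As a rough sketch of that layout (the fixture names and the extended-comparison body are made up for illustration; only the [Explicit] mechanics come from NUnit itself):
[TestFixture]
public class StringComparisonTests
{
    // Normal tests live here and run everywhere (CI, "Run all tests", etc.).
    [Test]
    public void SomeStringTest()
    {
        Assert.AreEqual("expected", BuildActualString());
    }

    protected static string BuildActualString()
    {
        return "expected";
    }
}

// Runs only when selected explicitly by name from the IDE,
// never as part of "Run all tests" or a CI run.
[TestFixture, Explicit]
public class StringComparisonTestsWithDiff : StringComparisonTests
{
    [TearDown]
    public void CompareNonMatchingStringsOnFailure()
    {
        if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed)
        {
            // placeholder for the extended string comparison
        }
    }
}
Selecting StringComparisonTestsWithDiff (or one of its inherited tests) runs the same test methods, but with the extended-comparison teardown applied.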

Related

NUnit Selenium tests inside the same TestFixture cannot run in parallel

I have a question about NUnit test setup (.NET Core 3.1 and NUnit 3) for Selenium tests in Visual Studio 2019.
In AssemblyInfo.cs, I added two lines:
[assembly: Parallelizable(ParallelScope.Children)]
[assembly: LevelOfParallelism(4)]
The code is simple: the driver is initialized in SetUp(). However, when I use the Test Explorer to run 2 tests, 2 Chrome windows open, but they do not run in parallel (it still does not work whether I use the SetUp or OneTimeSetUp attribute).
If I initialize the driver in each test method directly, it works fine, but that duplicates code.
Does this mean NUnit Selenium tests inside the same TestFixture cannot run in parallel?
Thanks,
Ray
[TestFixture]
public class Account : BaseTest
{
    [SetUp]
    public void Setup()
    {
        _driver = new ChromeDriver();
        _driver.Manage().Window.Maximize();
    }

    [Test]
    [Category("UAT")]
    [Order(1)]
    public void Test1()
    {
        _driver.Navigate().GoToUrl("https://www.msn.com");
        Assert.AreEqual("https://www.msn.com/", _driver.Url);
    }

    [Test]
    [Category("UAT")]
    [Order(0)]
    public void Test2()
    {
        _driver.Navigate().GoToUrl("https://www.google.com");
        Assert.AreEqual("https://www.google.com/", _driver.Url);
    }
}
I had the same question, but I've managed to put together an example where VS2019 + Selenium + parallelism does work, though based on your example I believe there is a limitation that is likely what you are encountering (I'll speak to it at the end).
To make it work, I added an AssemblyInfo.cs file with the two attributes you noted:
[assembly: Parallelizable(ParallelScope.Children)]
[assembly: LevelOfParallelism(6)]
I tested having the [Parallelizable] attribute on the class and in the AssemblyInfo.cs file, and both worked. Including [LevelOfParallelism] was required here -- although in my tests it was not required for parallel execution of non-Selenium unit tests.
My model involves executing my tests against multiple WebDrivers, which I am currently doing by passing an IEnumerable collection to each test using the [TestCaseSource] or [ValueSource] attributes. This allows me to reuse a WebDriver instance with each test to reduce the overall execution time (it is so expensive to spin up/down instances) and ensure correct clean-up, but I observed that although the tests are run in parallel, each WebDriver instance can only execute one test at a time. If you were using a different WebDriver instance with the second test, they would be executed in parallel.
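As a rough sketch of that last point (the fixture name is made up, and it assumes the assembly-level attributes from your question plus the usual ChromeDriver package): give each test its own driver instance and mark the fixture so its test cases may run in parallel.
[TestFixture]
[Parallelizable(ParallelScope.All)] // the fixture and its test cases may run in parallel
public class ParallelAccountTests
{
    [Test]
    public void Test1()
    {
        // Each test owns its own driver, so no single WebDriver instance
        // is ever asked to execute two tests at the same time.
        var driver = new ChromeDriver();
        try
        {
            driver.Navigate().GoToUrl("https://www.msn.com");
            Assert.AreEqual("https://www.msn.com/", driver.Url);
        }
        finally
        {
            driver.Quit();
        }
    }

    [Test]
    public void Test2()
    {
        var driver = new ChromeDriver();
        try
        {
            driver.Navigate().GoToUrl("https://www.google.com");
            Assert.AreEqual("https://www.google.com/", driver.Url);
        }
        finally
        {
            driver.Quit();
        }
    }
}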

Repast - call simulation from java program without GUI

I am following the instructions to test calling my simulation model from another Java program.
package test;

//import repast.simphony.runtime.RepastMain;

public class UserMain {

    public UserMain() {}

    public void start() {
        String[] args = new String[]{"D:\\user\\Repast_java\\IntraCity_Simulator\\IntraCity_Simulator.rs"};
        repast.simphony.runtime.RepastMain.main(args);
        // repast.simphony.runtime.RepastBatchMain.main(args);
    }

    public static void main(String[] args) {
        UserMain um = new UserMain();
        um.start();
    }
}
The Java program launches the GUI when I use the RepastMain entry point:
repast.simphony.runtime.RepastMain.main(args);
The Java program terminates almost immediately, without running anything and without returning anything, if I use the non-GUI entry point:
repast.simphony.runtime.RepastBatchMain.main(args);
How can I enable running the simulation in headless mode?
Secondly, I need to deploy my simulation model on a remote (Linux) server. What is the best way for the server to call my simulation model? If HTTP, how should the configuration be done? The model should preferably be run as a batch (either a single run or multiple runs, depending on the user's choice), and the batch run output needs to be transformed into JSON to feed back to the server.
Parts of the batch run mechanism for Simphony can probably be used for this. For some context on headless command line batch runs, see:
https://repast.github.io/docs/RepastBatchRunsGettingStarted.pdf
That doesn't align exactly with what you are trying to do, given that you are embedding the simulation run in other Java code, but it should help as background.
Ultimately, though, the batch run code calls an InstanceRunner:
https://github.com/Repast/repast.simphony/blob/master/repast.simphony.distributed.batch/src/repast/simphony/batch/InstanceRunner.java
The InstanceRunner either iterates over a list of parameter sets in a file or over parameter set strings passed to it directly, and then performs a simulation run for each of those parameter sets. If you passed it a single parameter set, it would run once, which I think is what you want to do. So, I would suggest looking at the InstanceRunner code to get a sense of how it works, and mimicking InstanceRunner.main() in your code that calls the simulation.
As for the remote execution, Simphony can copy a simulation to a remote resource, run it, and copy the results back. That's integrated with the Simphony GUI and so is not callable from other code without some work on your part. All the relevant code is in:
https://github.com/Repast/repast.simphony/tree/master/repast.simphony.distributed.batch/src/repast/simphony/batch
The SSHSession class has code for executing commands on a remote resource over SSH, methods for copying files and so on. So, perhaps that might be useful to you.

How to restrict a test data method call to its respective test method when using the TestCaseSource attribute in NUnit

I am using NUnit for a Selenium C# project in which I have many test methods. To get data (from Excel) I use a public static method that returns IEnumerable<TestCaseData>, which I reference at the test method level with TestCaseSource. I am facing a challenge now: as soon as I start executing one test method, all of the static source methods in the project are invoked.
Code looks like this:
public static IEnumerable<TestCaseData> BasicSearch()
{
    BaseEntity.TestDataPath = PMTestConstants.PMTestDataFolder + ConfigurationManager.AppSettings.Get("Environment").ToString() + PMTestConstants.PMTestDataBook;
    return ExcelTestDataHelper.ReadFromExcel(BaseEntity.TestDataPath, ExcelQueryCreator.GetCommand(PMTestConstants.QueryCommand, PMTestConstants.PMPolicySheet, "999580"));
}

[Test, TestCaseSource("BasicSearch"), Category("Smoke")]
public void SampleCase(Dictionary<string, string> data)
{
    // do something
}
Can someone help me how can I restrict my data call method to the respective test method?
Your TestCaseSource is not actually called by the test method when you run it, but as part of test discovery. While it's possible to select a single test to execute, it's not possible to discover tests selectively. NUnit must examine the assembly and find all the tests before it's possible to run any of them.
To make matters worse, if you are running under Visual Studio, the discovery process takes place multiple times, first before the tests are initially displayed and then again each time the tests are run. This is made necessary by the architecture of the VS Test Window, which runs separate processes for the initial discovery and the execution of the tests.
That makes it particularly important to minimize the amount of work done in test discovery, especially when running under Visual Studio. Ideally, you should structure the code so that the variable parameters are recorded during discovery. The actual data access should take place at execution time. This can be done in a OneTimeSetUp method, a SetUp method or at the start of the test itself.
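For illustration, here is a minimal sketch of that structure (ReadRow is a hypothetical helper standing in for your ExcelTestDataHelper call, not an NUnit API, and the row key comes from your example): the source method only yields lightweight keys during discovery, and the Excel access happens inside the test at execution time.
// Discovery stays cheap: only row keys are produced here.
public static IEnumerable<TestCaseData> BasicSearchKeys()
{
    yield return new TestCaseData("999580");
}

[Test, TestCaseSource(nameof(BasicSearchKeys)), Category("Smoke")]
public void SampleCase(string rowKey)
{
    // The expensive Excel read runs at execution time, and only for the
    // test case that is actually executing.
    string path = PMTestConstants.PMTestDataFolder
        + ConfigurationManager.AppSettings.Get("Environment")
        + PMTestConstants.PMTestDataBook;
    Dictionary<string, string> data = ExcelTestDataHelper.ReadRow(path, rowKey);
    // do something with data
}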
Finally, I'd say that your instinct is correct: it should be possible to set up a TestCaseSource that only runs if the test you select is about to be executed. Unfortunately, that's a feature NUnit doesn't yet have.

How to find or log currently running test in MSTEST.exe

We have a build process that includes unit tests launched via mstest.exe. Sometimes a unit test gets stuck (a message box or an error-reporting dialog is blocking it) or the entire process crashes, and I don't know how to find which test is the bad one.
Is there any way to find the name of the currently running test when a unit test is stuck?
Is there a way to find the name of the unit test that ran last?
I don't want to set up a timeout for every single test, because I am not sure what a suitable timeout would be.
A nice solution for me would be to log when each unit test starts and when it finishes; then I could find the last logged unit test. Is there a way to log that?
You can use TestContext.TestName for it:
/// The TestContext property is populated by the MSTest runner for each test.
public TestContext TestContext { get; set; }

/// Use TestInitialize to run code before running each test.
[TestInitialize()]
public void TestInitialize()
{
    YourLogger.LogFormat("Run test {0}", this.TestContext.TestName);
}

/// Use TestCleanup to run code after each test has run.
[TestCleanup()]
public void MyTestCleanup()
{
    YourLogger.LogFormat("Test {0} is done", this.TestContext.TestName);
}

SpecFlow - How to use data driven tests like NUnits TestCaseSource property?

I'm a QA who decided to use SpecFlow for my test automation after some consideration. I think it's brilliant, but it is missing one feature I used often with other test runners such as NUnit: something similar to NUnit's TestCaseSource attribute, which specifies a potentially dynamic set of data for tests to be run against at run time.
I often have different data in each environment the tests run in, so I cannot hardcode values for test parameters. A trivial example is checking that each type of user account is able to log in; the user account credentials can be retrieved with a DB query to populate each test case dynamically in NUnit:
public List<User> GetTestData()
{
    List<User> testData = new List<User>();
    testData = MyDatabase.GetAllUsersInfo().ToList();
    return testData;
}

[Test, TestCaseSource("GetTestData")]
public void CallLoginService(User user)
{
    var response = LoginController.TryLogin(user.UserName, user.Password);
    if (response.Error != null)
    {
        Assert.Fail("Failed to Login: {0}", response.Error);
    }
    Assert.AreEqual("Logged in ok", response.Message, "Login message not as expected");
}
Obviously this is a simple example of that feature, but I think it describes it well enough. I know we have the ability in SpecFlow to use a Scenario Outline and table of test run input data, but that is still static, so doesn't fit the bill.
I've been looking for a while and have not found anything like this in SpecFlow yet. Does anybody know of anything similar that can be used (or that is planned, if anyone who works on the project reads this)?
Thanks :)
I have no idea if anything like this is planned, but for now the problem is that there is a background code-generation step when you edit your feature file in Visual Studio.
When it is saved in Visual Studio, it is parsed and converted into the feature.cs file, and that is the file that is compiled and used for testing.
So your process would become:
edit your data source
export to feature file
get SpecFlow's VS plugin to convert it to feature.cs
run MSBuild
run tests via NUnit or similar
I wouldn't do this. Instead, I'd focus on making my tests better examples. It sounds like you are trying to exhaustively cover every possibility. Don't come up with examples to cover every possible case; instead, cover as much logic as possible with fewer tests.