Running Test Cases Separately with the NUnit Console Runner - nunit-3.0

I am developing tests using NUnit 3 and have several test cases for one method. I want to run the test cases separately using the NUnit console runner. How can I achieve this?
[TestCase(12,3,4)]
[TestCase(12,2,6)]
[TestCase(12,4,3)]
public void DivideTest(int n, int d, int q)
{
Assert.AreEqual( q, n / d );
}
Something like this ends up running all the test cases:
nunit3-console.exe --test=DivideTest(12,3,4) path/to/your/test.dll

Naming the test case provided what I was looking for as a workaround:
[TestCase(12,3,4, TestName = "foo")]
public void DivideTest(int n, int d, int q)
{
Assert.AreEqual( q, n / d );
}
nunit3-console.exe --test=foo path/to/your/test.dll
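Taking the workaround one step further, each case can be given its own name so that any single case can be targeted from the console runner (a sketch; the names below are arbitrary):
[TestCase(12,3,4, TestName = "Divide_12_by_3")]
[TestCase(12,2,6, TestName = "Divide_12_by_2")]
[TestCase(12,4,3, TestName = "Divide_12_by_4")]
public void DivideTest(int n, int d, int q)
{
Assert.AreEqual( q, n / d );
}
nunit3-console.exe --test=Divide_12_by_2 path/to/your/test.dll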

Related

Elrond mandos test: elrond_wasm_debug::mandos_rs passes, however erdpy contract test fails

I'm writing test cases for my NFT smart contract (SC). When I check the state of the SC after creating my NFT, I expect to see a variable (next_index_to_mint: u64, which I increase by 1 for every new NFT) being updated.
So I'm running the test using the command:
$ erdpy contract test
INFO:projects.core:run_tests.project: /Users/<user>/sc_nft
INFO:myprocess:run_process: ['/Users/<user>/elrondsdk/vmtools/mandos-test', '/Users/<user>/sc_nft/mandos'], in folder: None
CRITICAL:cli:External process error:
Command line: ['/Users/<user>/elrondsdk/vmtools/mandos-test', '/Users/<user>/sc_nft/mandos']
Output: Scenario: buy_nft.scen.json ... FAIL: wrong account storage for account "sc:nft-minter":
for key 0x6e657874496e646578546f4d696e74 (str:nextIndexToMint): Want: "0x02". Have: ""
Scenario: create_nft.scen.json ... FAIL: wrong account storage for account "sc:nft-minter":
for key 0x6e657874496e646578546f4d696e74 (str:nextIndexToMint): Want: "0x02". Have: ""
Scenario: init.scen.json ... ok
Done. Passed: 1. Failed: 2. Skipped: 0.
ERROR: some tests failed
However, when I run the test using the elrond_wasm_debug::mandos_rs function with the create_nft.scen.json file, it passes.
use elrond_wasm_debug::*;
fn world() -> BlockchainMock {
let mut blockchain = BlockchainMock::new();
blockchain.set_current_dir_from_workspace("");
blockchain.register_contract_builder("file:output/test.wasm", nft_auth_card::ContractBuilder);
blockchain
}
#[test]
fn create_nft() {
elrond_wasm_debug::mandos_rs("mandos/create_nft.scen.json", world());
}
BTW, if you want to add this to the tests/ folder of the NFT SC example, that would be great.
I tried putting an incorrect value, and it failed as expected. So my question is: how is it possible that it works using elrond_wasm_debug's mandos but not erdpy?
running 1 test
thread 'create_nft' panicked at 'bad storage value. Address: sc:nft-minter. Key: str:nextIndexToMint. Want: "0x04". Have: 0x02', /Users/<user>/elrondsdk/vendor-rust/registry/src/github.com-1ecc6299db9ec823/elrond-wasm-debug-0.28.0/src/mandos_step/check_state.rs:56:21
Here is the code (I use the default NFT SC example):
const NFT_INDEX: u64 = 0;
fn create_nft_with_attributes<T: TopEncode>(...) -> u64 {
...
self.next_index_to_mint().set_if_empty(&NFT_INDEX);
let next_index_to_mint = self.next_index_to_mint().get();
self.next_index_to_mint().set(next_index_to_mint+1);
...
}
#[storage_mapper("nextIndexToMint")]
fn next_index_to_mint(&self) -> SingleValueMapper<u64>;
Short answer: most likely you haven't re-built your contract before testing it with erdpy.
Long answer: currently there are two ways mandos tests are executed, as you've exemplified in your case:
Run tests directly from rust through mandos_rs
Run tests through erdpy (which in turn uses mandos_go)
These two frameworks (mandos_rs and mandos_go) work in different ways:
mandos_rs: this framework runs on your Rust code directly and tests it against a mocked VM and mocked blockchain in the background. Therefore, it's not necessary to build your contract when using mandos_rs.
mandos_go: this framework tests your compiled contract against a REAL VM with a mocked blockchain in the background, so it's necessary to build your latest changes into .wasm bytecode (e.g. erdpy contract build) before running the tests via mandos_go, as this compiled file will be loaded by the VM like in a real usage scenario.
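In practice this means rebuilding the contract before every erdpy test run, e.g. (run from the contract project folder, as in the question):
erdpy contract build
erdpy contract test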

Spock 2.0 is reporting an extra test for data-driven tests

I'm upgrading a project from Spock 1.3 to 2.0, and I've noticed that data-driven tests seem to have an extra test that the IDE is reporting somewhere. For example, the "maximum of two numbers" data-driven example from the documentation shows 4 tests pass when there are only 3 rows:
class MathSpec extends Specification {
def "maximum of two numbers"() {
expect:
Math.max(a, b) == c
where:
a | b | c
1 | 3 | 3
7 | 4 | 7
0 | 0 | 0
}
}
What is going on here?
Firstly, your question is an IntelliJ IDEA question as much as it is a Spock question, because you want to know why parametrised Spock 2 tests look like that in IDEA.
Secondly, the code you posted is different from the code you ran in IntelliJ IDEA. Probably your feature method starts more like this in order to achieve the test iteration naming we see in your screenshot:
def "maximum of #a and #b is #c"() {
// ...
}
Having established that, next let me remind you of the very first sentence of the Spock 2.0 release notes:
Spock is now a test engine based on the JUnit Platform
This means that in contrast to Spock 1.x which was based on a JUnit 4 runner, Spock 2.0 sports its own JUnit test engine, i.e. the Spock engine is on the same level as the Jupiter engine, both running on the JUnit platform.
The way parametrised tests are reported in IDEA is the same for JUnit 5 tests as for Spock 2 tests:
Test class A
- Test method x
  - parametrised method name 0
  - ...
  - parametrised method name n
- Test method y
  - parametrised method name 0
  - ...
  - parametrised method name n
Test class B
- Test method z
  - parametrised method name 0
  - ...
  - parametrised method name n
...
IDEA is not "reporting an extra test"; it simply adds a level of grouping by method name to the test report.
If for example you run this parametrised JUnit 5 test
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;
import static org.junit.jupiter.api.Assertions.assertTrue;
public class NumbersTest {
@ParameterizedTest(name = "{0} is an odd number")
@ValueSource(ints = {1, 3, 5, -3, 15, Integer.MAX_VALUE}) // six numbers
void isOdd_ShouldReturnTrueForOddNumbers(int number) {
assertTrue(Numbers.isOdd(number));
}
public static class Numbers {
public static boolean isOdd(int number) {
return number % 2 != 0;
}
}
}
in IDEA it is reported with the same extra level of grouping: one node for the test method, with one child node per parametrised iteration underneath it. I.e., what you see is to be expected for JUnit Platform tests.

How to make coroutines run in sequence when called from outside

I'm really a newbie with coroutines and how they work. I've read a lot about them, but I can't seem to understand how, or if, I can achieve my final goal.
I will try to explain with as much detail as I can. Anyway, here is my goal:
Ensure that coroutines run sequentially when a method that has said coroutine is called.
I've created a test that matches what I would like to happen:
class TestCoroutines {
@Test
fun test() {
println("Starting...")
runSequentially("A")
runSequentially("B")
Thread.sleep(1000)
}
fun runSequentially(testCase: String) {
GlobalScope.launch {
println("Running test $testCase")
println("Test $testCase ended")
}
}
}
Important Note: I have no control over how many times someone will call the runSequentially function. But I want to guarantee that the calls are processed in order.
This test produces the following outputs (two different runs; the order varies):
Starting...
Running test B
Running test A
Test A ended
Test B ended
Starting...
Running test A
Running test B
Test B ended
Test A ended
This is the output I want to achieve:
Starting...
Running test A
Test A ended
Running test B
Test B ended
I think I understand why this is happening: every time I call runSequentially I'm creating a new Job, and those jobs run asynchronously with respect to each other.
Is it possible, with coroutines, to guarantee that each one will only run after the previous one (if it's running) finishes, when we have no control over how many times said coroutine is called?
What you're looking for is a combination of a queue that orders the requests and a worker that serves them. In short, you need an actor:
import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.channels.actor
import kotlinx.coroutines.channels.sendBlocking

// A single actor drains the channel in FIFO order, so queued test cases run one at a time.
private val testCaseChannel = GlobalScope.actor<String>(
    capacity = Channel.UNLIMITED
) {
    for (testCase in channel) {
        println("Running test $testCase")
        println("Test $testCase ended")
    }
}
fun runSequentially(testCase: String) = testCaseChannel.sendBlocking(testCase)
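A quick way to check the ordering is to reuse the test from the question (a sketch; it assumes the two declarations above are in scope):
@Test
fun test() {
    println("Starting...")
    runSequentially("A")
    runSequentially("B")
    Thread.sleep(1000) // give the actor time to drain the queue
}
Expected output:
Starting...
Running test A
Test A ended
Running test B
Test B ended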

SpecRun.exe hangs for 60 seconds after test execution

I posted this to the SpecFlow Google group but there is little or no activity there, so here we go.
I have a SpecFlow/Selenium/MSBuild project and I am running one simple scenario through
the command line, something like this:
SpecRun.exe run Default.srprofile "/filter:#%filter%"
The browser instance fires up, the assert is done, and the browser instance closes. This
takes about 5-10 seconds.
However: after this, I have to wait for 60 seconds until the SpecRun process closes and gives me the result like:
Discovered 1 tests
Thread#0:
0% completed
Thread#0: S
100% completed
Done.
Result: all tests passed
Total: 1
Succeeded: 1
Ignored: 0
Pending: 0
Skipped: 0
Failed: 0
Execution Time: 00:01:01.1724989
I am currently assuming this is because it is writing the test execution report to disk, but I cannot figure out how to turn this off... http://www.specflow.org/documentation/Reporting/
And I cannot figure out why this would take 60 seconds, or how to further debug this.
I have removed the AfterScenario, checked the Selenium driver quit/close, and verified that is not what is causing the problem.
Can anyone shed some light on this?
Thank you
Jesus. There was something seriously wrong with the BaseStepDefinitions. I did some more debugging and found that BeforeScenario was hit 25 times for one single test: 25 instances were launched and closed for a single scenario. Fixed it by starting all over again with a clean file like:
[Binding]
public class BaseStepDefinitions
{
public static IWebDriver Driver;
private static void Setup()
{
Driver = new ChromeDriver();
}
[BeforeFeature]
public static void BeforeFeature()
{
Setup();
}
[AfterFeature]
public static void AfterFeature()
{
Driver.Dispose();
}
}
I will not post my original file because it is embarrassing.
Here is a similar problem that helped me: https://groups.google.com/forum/#!topic/specflow/LSt0PGv2DeY

How can I get the uptime of an IBM AIX box in seconds?

I'm writing a Perl script for which I need the uptime in seconds to do some calculations, on all the machines in the shop (i.e. Linux, SunOS, and AIX). I have a way to get the uptime for Linux (/proc/uptime) and SunOS (kstat -p unix:0:system_misc:boot_time), thanks to another posting on this site, but I can't find a good way of getting it for AIX. I don't really like the idea of parsing the output of uptime with regexes, since its format changes depending on whether the machine has been up just seconds, minutes, days, or over a year.
This snippet in C works under AIX 6.1.
I can't give you the source article as I only have source code left.
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <utmpx.h>

int main ( )
{
    int nBootTime = 0;
    int nCurrentTime = time ( NULL );
    struct utmpx * ent;

    while ( ( ent = getutxent ( ) ) ) {
        if ( !strcmp ( "system boot", ent->ut_line ) ) {
            nBootTime = ent->ut_tv.tv_sec;
        }
    }
    printf ( "System was booted %d seconds ago\n", nCurrentTime - nBootTime );
    return 0;
}
Parse the output of last(1)?
Find a file/directory that is only created/refreshed at boot time and stat it?
Frankly, using different regexes to handle the different possible outputs from uptime doesn't sound so bad.
Answering an old thread for new interested stumblers.
We're going to make a lightweight C program called getProcStartTime that you'll have plenty of other uses for. It tells you when a process was started, given the PID. I believe you will still get a time stamp down to the second even if the process was started months or years ago. Save this source code as a file called getProcStartTime.c:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <time.h>
#include <procinfo.h>

int main(int argc, char *argv[])
{
    struct procentry64 psinfo;
    pid_t pid;
    if (argc > 1) {
        pid = atoi(argv[1]);
    }
    else {
        printf("Usage : getProcStartTime pid\n");
        return 1;
    }
    if (getprocs64(&psinfo, sizeof(struct procentry64), NULL, sizeof(struct fdsinfo64), &pid, 1) > 0) {
        time_t result;
        result = psinfo.pi_start;
        struct tm *brokentime = localtime(&result);
        printf("%s", asctime(brokentime));
        return 0;
    } else {
        perror("getprocs64");
        return 1;
    }
}
Then compile it:
gcc getProcStartTime.c -o getProcStartTime
Here's the magic logic: AIX, just like Linux, has a process called init with PID 1. It can't be killed or restarted. So the start time of PID 1 is the boot time of your server.
./getProcStartTime 1
On my server, this returns Wed Apr 23 10:33:30 2014; yours will be different.
Note, I originally made getProcStartTime specifically for this purpose, but now I use it in all kinds of other scripts. Want to know how long an Oracle database has been up? Find the PID of Oracle's PMON and pass that PID as the argument to getProcStartTime.
If you really want the output as an integer number of seconds, it would be an easy programming exercise to modify the code above. The name getProcUptime is just a suggestion. Then you could just call:
./getProcUptime 1
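For reference, a minimal sketch of such a getProcUptime variant: the getprocs64 lookup is identical to getProcStartTime above, only the output line changes to print whole seconds.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <time.h>
#include <procinfo.h>

int main(int argc, char *argv[])
{
    struct procentry64 psinfo;
    pid_t pid;
    if (argc > 1) {
        pid = atoi(argv[1]);
    } else {
        printf("Usage : getProcUptime pid\n");
        return 1;
    }
    if (getprocs64(&psinfo, sizeof(struct procentry64), NULL, sizeof(struct fdsinfo64), &pid, 1) > 0) {
        /* uptime in whole seconds: current time minus the process start time */
        printf("%ld\n", (long)(time(NULL) - psinfo.pi_start));
        return 0;
    } else {
        perror("getprocs64");
        return 1;
    }
}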
UPDATE: Source code and a precompiled binary for AIX 6.1/7.1 have been put on my GitHub repo here: https://github.com/Josholith/getProcStartTime