We use NUnit for implementing GUI tests. We have multiple TestFixtures (test suites), each focused on a set of application functionality. The test suites have different execution priorities (e.g., Set A needs to be verified before running Set B, because Set B uses functionality from Set A).
My question is: is there any way to run test suites in a given order using NUnit-Console?
I've tried passing the /test parameter for every test suite, with the parameters listed in execution-priority order, but it didn't work as I expected: the test suites weren't executed in the required order.
The command line was something like this: "[nunit-console runner path]" /test Tests.TestSuiteWithPriority01 /test Tests.TestSuiteWithPriority02 tests.dll
The --test command-line option is used to construct a filter, which determines which tests are run. It doesn't affect order - no command-line option does. NUnit applies the created filter to the tests as it examines them, deciding one test at a time whether it should be executed.
Neither the order of the options nor the order in which NUnit examines the tests has any connection to the order in which they are executed. The execution order is determined by:
Any OrderAttributes you use in your tests.
If no such attributes are used, the order is unspecified. (*)
You can specify [Order(n)] on any fixture or method. Items with an OrderAttribute execute first, starting with the lowest value of n. If you are running tests in parallel, ordering doesn't guarantee that a later test won't start while an earlier one is still running; it's up to you to ensure you don't run such tests in parallel.
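For illustration, here is a minimal sketch of how that might look for the fixtures in the question (the fixture names come from the question; the test method names and bodies are placeholders):

```csharp
using NUnit.Framework;

namespace Tests
{
    // Lowest Order value runs first; fixtures without [Order] run after ordered ones.
    [TestFixture, Order(1)]
    public class TestSuiteWithPriority01
    {
        [Test, Order(1)]
        public void SetA_Step1() { /* arrange / act / assert */ }

        [Test, Order(2)]
        public void SetA_Step2() { }
    }

    [TestFixture, Order(2)] // runs after TestSuiteWithPriority01
    public class TestSuiteWithPriority02
    {
        [Test]
        public void SetB_UsesFunctionalityFromSetA() { }
    }
}
```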
See the docs as well: https://github.com/nunit/docs/wiki/Order-Attribute
(*) Note: some people rely on the alphabetical order of tests. Some versions of NUnit, in some environments, use that ordering, but it isn't guaranteed by NUnit, so it's not a good idea to rely on it.
Related
We have some generic test cases in Azure DevOps that are included in multiple test suites. When you run a test case for a web app, the test runner window displays the test case ID and name but not the test suite name. Our client finds that this leads to some confusion about what is actually being tested. Is there a way to display the test suite name as well as the test case name? I've just discovered that you can use parameters in a test case, so I'm about to investigate that, but I suspect parameters apply only to steps and not to the title.
For the record, I have decided to use configurations for this. It's more of a workaround than a solution, because it means creating lots of configurations and assigning them to test suites, and it also means those configurations can't be used for anything else.
Is there any way in Playwright to run spec files (not individual tests within a file) in a specific order? For example, I want the specs to run in this order:
Login.spec.ts
profile.spec.ts
It depends on whether you need them to run serially, with each test starting only after the previous one finishes, or whether you just want the files to be started in a specific order but still run in parallel, not caring about anything inside them: since each file takes a different amount of time to execute, individual tests within a file could run at various times, intermingled with tests from other files.
If you need the serial option, you'll have to disable parallelism by limiting workers to 1, and then either name your files so they sort alphabetically into the desired order or create a test list file that runs them in the order you specify, as described in Playwright's docs on controlling test order.
If you want the parallel option, just starting in a certain order, I imagine the same techniques from the serial approach would produce that behavior when workers is not limited to 1. But again, that would only control file start order, not individual tests - unless you have the fullyParallel option on, in which case I believe tests within a file would also be started in order before moving on. Individual test start order could, in theory, be controlled the same way if you keep one test per file.
So if you need each test to finish before the next one starts, use the serial approach as described in that doc. If you only care about start order, not interleaved execution or finishing order, theoretically you can use one of those approaches with the worker limit above 1, with fullyParallel on for individual-test start order or off for ordering at just the file level. A minimal config sketch of the serial option follows.
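Here's that sketch, assuming you rename the spec files with sortable prefixes such as 01-login.spec.ts and 02-profile.spec.ts (only the relevant options are shown):

```ts
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // A single worker makes spec files run one after another,
  // in alphabetical file-name order.
  workers: 1,
  // Keep tests inside a file in declaration order rather than parallel.
  fullyParallel: false,
});
```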
Hope that helps!
Follow an ordered naming convention for test files.
Example:
module_A_01
module_A_02
module_B
module_C
Note: Keep in mind that '11' sorts before '2' alphabetically, so zero-pad the numbers: use '02', not '2'.
As far as I know, TestCafe's default behaviour is to run tests in parallel.
Indeed, the browsers function accepts an array of browsers (which is cool).
What I would like to do, however, is quite different. I have fixtures based on areas of my portal (search, payment, etc.), and I'd like to know whether it's possible to run these tests from the CLI in parallel, since they are orthogonal.
The goal, of course, is to improve execution time as the number of test cases grows.
On the other hand, I'd also like to catch failures: if a test run in parallel under a specific metadata filter fails, we would ideally like to stop the others too.
I am not using TestCafe's Docker image but our own custom one, with just Firefox and Chrome installed, and we launch the tests in headless mode.
As a last point, it would be great if we could run these scenarios/metadata filters in parallel but still gather the reports together at the end of the test suite.
I understand the question is not easy, especially because it involves either TestCafe or GitLab CI, but probably someone else has faced this problem too.
Thank you
If I understand you correctly, the behavior you described can be achieved by dividing the test execution among multiple CI jobs. For example, each CI job can test a particular area of your portal; to do that, run TestCafe with the metadata of your fixtures/tests specified. Also, most CI systems allow you to cancel all other jobs in a pipeline if one of the jobs fails (unfortunately, GitLab hasn't released this feature yet).
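As a rough sketch of what that split might look like (the job names, stage, and the 'area' metadata key are assumptions, not something from your setup):

```yaml
# .gitlab-ci.yml excerpt - one job per orthogonal portal area
test-search:
  stage: test
  script:
    - npx testcafe chrome:headless,firefox:headless tests/ --fixture-meta area=search

test-payment:
  stage: test
  script:
    - npx testcafe chrome:headless,firefox:headless tests/ --fixture-meta area=payment
```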
On the other hand, you can use TestCafe's programmatic API: create multiple TestCafe runners, each running the desired subset of tests. However, at the end of the test run you'll need to merge the generated reports into one report manually. Check this answer to get an idea of how to create multiple runners.
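A sketch of that multiple-runner approach, under the same assumptions (the 'area' metadata key, folder, and report paths are illustrative):

```ts
// run-areas.ts - one runner per orthogonal area, launched concurrently
import createTestCafe from 'testcafe';

async function main(): Promise<void> {
  const testcafe = await createTestCafe('localhost');

  // Each runner filters fixtures by metadata and writes its own JSON
  // report; merge reports/*.json into one report afterwards.
  const runArea = (area: string) =>
    testcafe
      .createRunner()
      .src('tests/')
      .filter((testName, fixtureName, fixturePath, testMeta, fixtureMeta) =>
        fixtureMeta.area === area)
      .browsers(['chrome:headless', 'firefox:headless'])
      .reporter('json', `reports/${area}.json`)
      // stopOnFirstFail stops this runner's own tests on the first failure;
      // stopping the *other* runners would need extra coordination.
      .run({ stopOnFirstFail: true });

  const failedCounts = await Promise.all(['search', 'payment'].map(runArea));

  await testcafe.close();
  process.exit(failedCounts.some(failed => failed > 0) ? 1 : 0);
}

main();
```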
I came across this question:
How can I run multiple tests in parallel with JS/nightwatchjs?
But I want to execute multiple tests in parallel in the Chrome browser only, across multiple ChromeDriver sessions.
I am used to Java/TestNG/Selenium-based test suites, where I can specify in the testng.xml file that I want to run multiple test classes or test methods in parallel, and the framework does exactly that. If I specify in testng.xml that I want test methods to execute in parallel on 4 threads, 4 Chrome browser sessions pop up and 4 test methods execute in parallel.
Here's an example with a thread-count=2: https://github.com/adityai/testng-parallelsample/blob/master/methods-test-testng.xml
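For reference, a testng.xml of that shape generally looks like this (the suite, test, and class names here are placeholders, not the contents of the linked file):

```xml
<!-- testng.xml: run test methods in parallel on 2 threads -->
<suite name="ParallelSuite" parallel="methods" thread-count="2">
  <test name="ParallelTest">
    <classes>
      <class name="com.example.SomeTestClass"/>
    </classes>
  </test>
</suite>
```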
How can I do the same with nightwatch.js?
You might want to try the test_workers configuration; although I haven't tried it myself, it should do exactly what you are looking for.
And a nice article that demonstrates it in action:
https://markus.oberlehner.net/blog/speeding-up-nightwatch-powered-acceptance-tests/
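Untested on my side, but a minimal config sketch might look like this (the folder name and worker count are assumptions; note that test_workers parallelizes per test file, not per test method as TestNG does):

```js
// nightwatch.conf.js (webdriver/selenium settings omitted for brevity)
module.exports = {
  src_folders: ['tests'],

  // Run test files in parallel: up to 4 child processes, each driving
  // its own Chrome session - roughly analogous to thread-count="4"
  // in testng.xml, except the unit of parallelism is the file.
  test_workers: {
    enabled: true,
    workers: 4,
  },

  test_settings: {
    default: {
      desiredCapabilities: {
        browserName: 'chrome',
      },
    },
  },
};
```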
I have a small collection of integration tests in a single class that use Selenium. The idea is that these tests run every time there is a merge to the codebase, with the merge proceeding through the pipeline and a series of tests running against the new code.
The thing is, these Selenium tests have to run one at a time. They use the browser to log into a website, and if more than one session logs into the account at once, the account just logs out and the test obviously fails, so I need the tests to run sequentially. I've tried using the @NotThreadSafe annotation, which doesn't seem to have changed anything, and I've searched for some sort of switch or parameter that defines how many tests run at once, with no luck. These tests use JUnit 4.12.
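One way to enforce this inside a single JVM is a JVM-wide lock wrapped in a JUnit rule; this is only a sketch (the class name is illustrative), and it cannot help if your runner forks multiple JVMs, in which case the fork/thread count has to be limited in the build tool's configuration instead (e.g. Surefire's forkCount):

```java
// SerialRule.java - serialize test methods across classes via a shared lock.
import java.util.concurrent.locks.ReentrantLock;
import org.junit.rules.ExternalResource;

public class SerialRule extends ExternalResource {
    // JVM-wide lock shared by every test that declares this rule
    private static final ReentrantLock LOCK = new ReentrantLock();

    @Override
    protected void before() {
        LOCK.lock(); // wait until no other serialized test is running
    }

    @Override
    protected void after() {
        LOCK.unlock();
    }
}
```

Each test class would then declare `@Rule public SerialRule serial = new SerialRule();` so that every test method acquires the lock before logging into the shared account.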