Two tests dependent on same test - automation

I have two tests that depend on the same test (using dependsOnMethods). Will these dependent test cases be executed one after the other?
I am using TestNG version 6.10.

Whether the dependent test cases run one after the other or together depends on what your parallel execution strategy is.
If you have set parallel='methods' in your suite file, then both the dependent methods will run together.
If you have disabled parallel execution (by setting parallel='false' in your suite file), then the dependent methods will run one after the other. Which runs first and which runs next is not deterministic, since TestNG relies on reflection to query the methods from a class, and reflection does not guarantee any particular order.
But in either case, they will be executed only after the master test (the one on which both of your tests depend) runs successfully.
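A minimal sketch of this setup (the class and method names are illustrative, not from the question):

    import org.testng.annotations.Test;

    public class DependentTests {

        @Test
        public void masterTest() {
            // If this fails, both dependent tests are skipped.
        }

        @Test(dependsOnMethods = "masterTest")
        public void dependentTestA() {
            // Runs only after masterTest passes.
        }

        @Test(dependsOnMethods = "masterTest")
        public void dependentTestB() {
            // Also runs only after masterTest passes. Whether it runs
            // before, after, or alongside dependentTestA depends on the
            // parallel setting in the suite file.
        }
    }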

Related

Spring Boot test files execution ordering

I have a project with several hundred test files. Some of the test files use the DataJpaTest annotation, some are MockMvc-based controller tests, and some use mocked objects without a database dependency. Depending on the test execution order, I see that the application context needs to be re-initialized for the different flavors of test files. Is there a way to control the execution order of the test files so that context reloads can be avoided? Say, all mock tests first, followed by the controller tests, and then the DataJpaTest tests?
Right now the test execution takes about 30 minutes, and I am looking for a way to speed it up.
JUnit Jupiter provides options to control Test Execution Order.
However, you should also look into your test setup and verify whether your tests create too many application contexts.
The Spring Test framework can cache application contexts and reuse them across different test classes. See the Spring Test documentation for more info.
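If the project is on JUnit Jupiter 5.8 or newer, one option for the ordering part is the test-class orderer. A hedged sketch (the class names and the chosen order are illustrative): enable the orderer in src/test/resources/junit-platform.properties,

    junit.jupiter.testclass.order.default = org.junit.jupiter.api.ClassOrderer$OrderAnnotation

then annotate the test classes:

    import org.junit.jupiter.api.Order;

    @Order(1) // plain mock tests first
    class UserServiceMockTest { /* ... */ }

    @Order(2) // MockMvc controller tests next
    class UserControllerMvcTest { /* ... */ }

    @Order(3) // @DataJpaTest slices last
    class UserRepositoryJpaTest { /* ... */ }

Note that ordering alone does not avoid reloads: Spring only serves a cached context when test classes share exactly the same context configuration.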

Why do my selenium xunit tests in Visual Studio run in parallel?

I'm writing tests using Selenium and xUnit, and on their own they run great. But if I select all of the tests (multiple classes) in Test Explorer (they appear to be grouped by class; this is not intentional), they run in parallel, even though Run Tests in Parallel is not selected. Each one of my tests creates and then deletes test data, so they obviously can't run in parallel: one test might delete data right after another test created that data, and so the test would fail. So how can I run all of my tests and not have them run in parallel? I guess I could make them all use one partial class that spans multiple files, but that's not my first choice.
I found a solution (although not an explanation). Just put [Collection("Sequential")] as the first line under each namespace. This forces everything to run sequentially.
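A minimal sketch (the namespace and class names are illustrative). The likely explanation: by default xUnit puts each test class in its own collection and runs different collections in parallel, so naming the same collection on every class serializes them.

    using Xunit;

    namespace MyApp.SeleniumTests
    {
        [Collection("Sequential")]
        public class CreateDataTests
        {
            [Fact]
            public void CreatesTestData() { /* ... */ }
        }

        [Collection("Sequential")]
        public class DeleteDataTests
        {
            // Same collection name, so this class never runs at the
            // same time as CreateDataTests.
            [Fact]
            public void DeletesTestData() { /* ... */ }
        }
    }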

Data driven testing using Selenium Grid

I have to execute a large number of test cases in parallel using TestNG and Selenium. Each test case will be executed with a different data set using data-driven testing. How do I run these test cases in parallel on different machines? We can use the parallel attribute in TestNG, but that is restricted to a single machine.
Can Selenium Grid be tweaked and used for this purpose? If yes, how, or is there any other suggestion?
I want an example of the use case described at https://www.seleniumhq.org/docs/07_selenium_grid.jsp#when-to-use-it:
To reduce the time it takes for the test suite to complete a test pass.
Basically it's quite complicated and needs a lot of understanding. I haven't done it myself, but I know that you need to create one root (hub) machine, and the rest of the machines register as children (nodes) of that parent machine; then you can run the test scripts in parallel. But you need to make sure that those scripts aren't dependent on each other, otherwise there will be a lot of issues.
I have shared a link with you so you can check how to set it up:
https://medium.com/@appening/how-to-run-your-test-on-multiple-machines-using-selenium-grid-3aa37d5d2b63
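For the data-driven part, a minimal sketch of what each Grid-backed test could look like (the hub URL and the search data are placeholders, not from the question):

    import java.net.URL;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class GridSearchTest {

        // parallel = true lets TestNG run the data rows concurrently;
        // the thread count is controlled by -dataproviderthreadcount.
        @DataProvider(name = "terms", parallel = true)
        public Object[][] terms() {
            return new Object[][] {{"selenium"}, {"testng"}, {"grid"}};
        }

        @Test(dataProvider = "terms")
        public void search(String term) throws Exception {
            // Point at the Grid hub instead of a local browser; the hub
            // forwards each session to whichever node is free.
            RemoteWebDriver driver = new RemoteWebDriver(
                    new URL("http://hub-host:4444/wd/hub"),
                    DesiredCapabilities.chrome());
            try {
                driver.get("https://example.com/search?q=" + term);
                // assertions go here
            } finally {
                driver.quit();
            }
        }
    }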

Serial execution of package tests

I have implemented several packages for a web API, each with its own test cases. When each package is tested using go test ./api/pkgname, the tests pass. If I run all tests at once with go test ./api/..., the test cases always fail.
In each test case, I recreate the entire schema using DROP SCHEMA public CASCADE followed by CREATE SCHEMA public, and then apply all migrations. The test suite reports errors at random, saying a relation/table does not exist, so I guess the test suites (one per package) are run in parallel somehow, thus messing up the DB state.
I tried to pass some test flags, like go test -cpu 1 -parallel 0 ./src/api/..., with no success.
Could the problem here be tests running in parallel, and if yes, how can I force serial execution?
Update:
Currently I use this workaround to run the tests, but I still wonder if there's a better solution
find <dir> -type d -exec go test {} \;
As others have pointed out, -parallel doesn't do the job (it only controls parallelism within a single package). However, you can use the flag -p=1 to run the package tests in series. This is documented here:
http://golang.org/src/cmd/go/testflag.go
but (as far as I can tell) not on the command line, in go help, etc. I'm not sure it is meant to stick around (although I'd argue that if it is removed, -parallel should be fixed).
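With the layout from the question, that would be:

    go test -p=1 ./api/...

which builds and tests one package at a time.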
The go tool is provided to make running unit tests easier, using the convention that *_test.go files contain the unit tests. Because it assumes they are unit tests, it also assumes they are hermetic. It sounds like your tests either aren't unit tests, or they are but violate the assumptions that a unit test should fulfill.
If you mean for these tests to be unit tests, then you probably need a mock database. A mock of your database, preferably in memory, will ensure that each unit test is hermetic and can't be interfered with by other unit tests.
If you mean for these tests to be integration tests, you are probably better off not using the go tool for them. What you probably want is to create a separate test binary whose execution you can control, and write your integration test scripts in there.
The good news is that creating a mock in Go is insanely easy: change your code to take an interface with the database methods you care about, then write an in-memory implementation of that interface for testing purposes and pass it into the application code you want to test.
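A minimal sketch of that pattern, with illustrative names (not from the original project):

    package store

    // UserStore is the narrow interface the application code depends on;
    // the real implementation wraps the database.
    type UserStore interface {
        Save(name string) error
        Exists(name string) (bool, error)
    }

    // MemStore is an in-memory implementation used only by unit tests,
    // so no test ever touches the real database.
    type MemStore struct {
        users map[string]bool
    }

    func NewMemStore() *MemStore {
        return &MemStore{users: make(map[string]bool)}
    }

    func (m *MemStore) Save(name string) error {
        m.users[name] = true
        return nil
    }

    func (m *MemStore) Exists(name string) (bool, error) {
        return m.users[name], nil
    }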
Just to clarify, @Jeremy's answer is still the accepted one:
Since my integration tests were only run in one package (api), I removed the separate test binary in the end and created a naming pattern to separate the test types:
Unit tests use the normal TestX name.
Integration tests use Test_X.
I created shell scripts (utest.sh/itest.sh) to run either of those:
For unit tests: go test -run="^(Test|Benchmark)[^_](.*)"
For integration tests: go test -run="^(Test|Benchmark)_(.*)"
Run both using the normal go test.

How to stop further execution of Tests within a TestFixture if one of them fails in NUnit?

I want to stop further execution of Tests within a TestFixture if one of them fails in NUnit.
Of course, the common and advised practice is to make tests independent of each other. However, the case I would like to use NUnit for requires that all tests and test fixtures following the one that failed are not executed. In other words, a test failure causes the whole NUnit execution to stop (or to proceed with the next [TestFixture], but both scenarios should be configurable).
The simple, yet not acceptable, solution would be to force NUnit to terminate by sending a signal of some kind to the NUnit process.
Is there a way to do this in an elegant way?
I believe you can use NAnt to do this. Specifically, the nunit or nunit2 tasks have a haltonfailure parameter that allows the test run to stop if a test fails.
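A hedged sketch of what that could look like in a build file (the target name and assembly path are placeholders, and the exact attribute set may vary by NAnt version):

    <target name="test">
        <nunit2 haltonfailure="true">
            <formatter type="Plain" />
            <test assemblyname="build/MyApp.Tests.dll" />
        </nunit2>
    </target>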