In googletest, is there a way to make tests depend on each other? I have one test (a database connection test) which, if it fails, makes it pointless to run certain other tests (the ones that use the DB). I'd like those dependent tests to fail fast without executing.
I could put the assertion of the DB connection test into a test fixture, but since a new fixture object is constructed for every test, that check would run many times unnecessarily. Is there an elegant way to make all the DB-using tests fail together?
You could use a googletest Environment to create the DB connection.
Or, I guess, you could set a global boolean when the test successfully makes the DB connection and check it at the start of every other test.
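Roughly, the two ideas combine like this. This is a minimal sketch: ConnectToDb() is just a placeholder for your real connection code, and GTEST_SKIP() only exists in newer googletest releases (use FAIL() instead if you'd rather have the dependent tests fail than be skipped):

#include <gtest/gtest.h>

static bool g_db_ok = false;  // set once by the environment below

// Placeholder: replace with your real connection code.
static bool ConnectToDb() { return true; }

class DbEnvironment : public ::testing::Environment {
 public:
  // Runs once before any test in the whole test program.
  void SetUp() override { g_db_ok = ConnectToDb(); }
};

// Register the environment before RUN_ALL_TESTS(); googletest takes ownership.
static ::testing::Environment* const db_env =
    ::testing::AddGlobalTestEnvironment(new DbEnvironment);

TEST(OrderQueries, ReadsRowsFromTheDb) {
  if (!g_db_ok) GTEST_SKIP() << "DB connection failed";  // or FAIL()
  // ... the actual DB-dependent test body ...
}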
Is there a way to tell one browser instance from another when running concurrent tests in TestCafe?
Say we have two tests.
One creates some entity and then changes it and verifies that change is applied correctly.
Another deletes all the entities and verifies that everything is deleted.
If we run these tests in parallel, they will interfere with each other. So there must be either a way to embrace this concurrency and synchronize the tests with some primitive, or a way to run them in parallel in isolated sandboxes.
I would prefer the second option.
It could be something like
test('Some test', async t => {
    await useSandbox(t.browser.alias, t.browser.os.name, t.browser.instanceId);
    // ... rest of the test
});
But AFAIK there is no way to tell one browser instance from another inside the test code. Or is there?
TestCafe does not have a mechanism for affecting test execution from another test. When TestCafe starts tests in parallel, it does not expect one test to interfere with another.
TestCafe starts every test with clean cookies, clean storage, and a fresh user profile. So, if your data is kept in localStorage, every test will run independently. However, if your data is kept on the server side (e.g. in a database), then TestCafe cannot sandbox it, since all tests interact with the DB through the same website.
In this case, it's better to run these two tests one by one, not simultaneously.
I'm new to automation testing. I currently have a project in which I would like to apply Cucumber to test a REST API. But I assert the output of the API's endpoints based on the current data, so I wonder what will happen if I change environments or the test database changes in the future; my test cases could then fail.
What is the best practice for writing tests that are independent of the database?
Or do I need to run my tests against a separate, empty DB and execute a script to initialize it before the tests run?
In order for your tests to be trustworthy, they should not depend on whether the test data happens to be in the database or not. You should be in control of that data. So, to make the test independent of the current state of the database: insert the expected data as a precondition (setup) of your test, and delete it again at the end of the test. If the database connection is not actually part of what you are testing, you could also stub or mock the result from the database (this will make your tests faster, as you're not using DB connectivity).
If you are going to assert on the response value that comes back (e.g. the number of cars), it is actually impossible to make the test completely database-independent, because the expected value depends on what is already in the database. What I would do in a similar situation is something like this.
Use the API to get the number of cars in the database (e.g. 544) and assign it to a variable.
Using the API, add another car to the database.
Then check the total number of cars again and assert that it is 544 + 1, failing otherwise (see the sketch below).
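The pattern itself is framework-agnostic; as a rough sketch (shown here with C++/googletest, where GetCarCount() and AddCar() are hypothetical helpers that would wrap the real GET/POST calls against your API) it boils down to:

#include <gtest/gtest.h>
#include <string>

// Hypothetical helpers: in a real suite these would issue the HTTP requests
// against the API under test and parse the responses.
int GetCarCount();
void AddCar(const std::string& name);

TEST(CarsApi, AddingACarIncrementsTheCount) {
  const int before = GetCarCount();      // e.g. 544
  AddCar("test-car");                    // add one entity through the API
  EXPECT_EQ(GetCarCount(), before + 1);  // delta assertion: old count + 1
}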
Hope this helps.
Is there a way to specify code to be run before all the tests in the current test run? Even when running tests across deeply nested directories? e.g.
a/
a/a_test.go
b/c/
b/c/c_test.go
d_test.go
I want to write some code that runs once before and once after all the tests in files a_test.go, c_test.go, d_test.go have run.
I know about TestMain, which sorta does what I want if I needed to do this at the package level, but this doesn't run before/after all the tests in subdirectories/subpackages. I want something that's one level above TestMain.
I'm not limited to go test, so if there's a third-party test runner that would do this for go, that would be alright as well.
I'm looking for something akin to nosetests's SetUpPackage or pytest's session scoped fixtures.
Is there a mechanism in the googletest framework that allows a test to clean up its data even after the test fails? (When a test fails, the code throws an exception and stops further execution, so the data-clearing code never runs.)
Thanks!
Run the tests on a temporary, in-memory database.
Since SQLite operates from a single file, you can use SetUp() in a test fixture to copy a pre-configured database file to where your program expects the database to be, overwriting the "runtime" database file with the pre-configured one before every test.
That way every test gets a completely fresh database, initialized with all tables and possibly base data of your choice without running any database creation scripts. That should keep test runs speedy.
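A minimal sketch of that fixture, assuming C++17 for std::filesystem and placeholder file names template.db (the pre-configured copy) and runtime.db (wherever your program looks for its database):

#include <gtest/gtest.h>
#include <filesystem>

class DatabaseTest : public ::testing::Test {
 protected:
  void SetUp() override {
    // Overwrite the runtime database with the pristine, pre-configured copy
    // so every test starts from exactly the same state.
    std::filesystem::copy_file(
        "template.db", "runtime.db",
        std::filesystem::copy_options::overwrite_existing);
  }
};

TEST_F(DatabaseTest, InsertsARow) {
  // ... exercise the code that opens runtime.db ...
}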
My integration tests use a live DB that's generated using the EF initializers. When I run the tests individually they run as expected. However, when I run them all at once, I get a lot of failed tests.
I appear to have some overlap going on. For example, I have two tests that use the same setup method. This setup method builds and populates the DB. Both tests perform the same "act" step, which adds a handful of items (the same items) to the DB, but what's unique is that each test checks different calculations (instead of one big test that does a lot of things).
One way I could solve this is to do some trickery in the setup that creates a unique DB for each test that's run; that way everything stays isolated. However, the EF initialization isn't working when I do that, because it creates a brand-new DB rather than dropping and replacing an existing one (only the latter triggers the seeding).
Any ideas on how to address this? It seems like a matter of how I organize my tests; I'm just not sure how best to go about it and was looking for input. I really don't want to have to run each test manually.
Use the test setup and teardown methods provided by your test framework: start a transaction in the test setup and roll it back in the test teardown (example for NUnit). You can even put the setup and teardown methods in a base class for all tests; each test will then run in its own transaction, which is rolled back at the end of the test, returning the database to its initial state.
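The linked example is for NUnit; the same begin-in-setup / rollback-in-teardown pattern, sketched here with googletest and SQLite (the test.db path and the orders table are hypothetical), looks roughly like this:

#include <gtest/gtest.h>
#include <sqlite3.h>

class TransactionalTest : public ::testing::Test {
 protected:
  void SetUp() override {
    ASSERT_EQ(sqlite3_open("test.db", &db_), SQLITE_OK);
    // Begin a transaction so nothing the test changes is ever committed.
    ASSERT_EQ(sqlite3_exec(db_, "BEGIN", nullptr, nullptr, nullptr), SQLITE_OK);
  }
  void TearDown() override {
    // TearDown runs even when the test body fails, so the rollback always
    // returns the database to its initial state.
    sqlite3_exec(db_, "ROLLBACK", nullptr, nullptr, nullptr);
    sqlite3_close(db_);
  }
  sqlite3* db_ = nullptr;
};

TEST_F(TransactionalTest, InsertIsDiscardedAfterTheTest) {
  // "orders" is a hypothetical table in the test database.
  ASSERT_EQ(sqlite3_exec(db_, "INSERT INTO orders DEFAULT VALUES",
                         nullptr, nullptr, nullptr), SQLITE_OK);
  // ... assertions against the uncommitted data ...
}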
In addition to what Ladislav mentioned, you can also use what's called a Delta Assertion.
For example, suppose you test adding a new Order to the SUT.
You could create a test that asserts that there is exactly 1 Order in the database at the end of the test.
But you can also create a Delta Assertion by first checking how many Orders there are in the database at the start of the test method. Then, after adding an Order to the SUT, you assert that there are NumberOfOrdersAtStart + 1 Orders in the database.