Entity Framework Code First - Tests Overlapping Each Other

My integration tests use a live DB that's generated using the EF initializers. When I run the tests individually, they run as expected. However, when I run them all at once, I get a lot of failed tests.
I appear to have some overlap going on. For example, I have two tests that use the same setup method. This setup method builds and populates the DB. Both tests perform the same ACT step, which adds a handful of items to the DB (the same items), but what's unique is that each test checks different calculations (instead of one big test that does a lot of things).
One way I could solve this is to do some trickery in the setup that creates a unique DB for each test that's run, so that everything stays isolated. However, the EF initialization isn't working when I do that, because it creates a brand-new DB rather than dropping and replacing an existing one (only the latter triggers the seeding).
Any ideas on how to address this? It seems like a matter of how I organize my tests... I'm just not sure how best to go about it and was looking for input. I really don't want to have to run each test manually.

Use the test setup and teardown methods provided by your test framework: start a transaction in the test setup and roll it back in the test teardown (example for NUnit). You can even put the setup and teardown methods in a base class for all tests; each test will then run in its own transaction, which is rolled back at the end of the test and puts the database back into its initial state.
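A minimal sketch of such a base class, assuming NUnit and System.Transactions (the class and method names are illustrative); note that the EF context must open its connection inside the scope so it enlists in the ambient transaction:

```csharp
using System.Transactions;
using NUnit.Framework;

// Base class that wraps every test in its own transaction.
public abstract class TransactionalTestBase
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        // Everything the test does against the DB enlists in this ambient transaction.
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollbackTransaction()
    {
        // Disposing without calling Complete() rolls everything back,
        // so the database returns to its initial, seeded state.
        _scope.Dispose();
    }
}
```

Each test fixture then simply derives from TransactionalTestBase and does its normal EF work.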

In addition to what Ladislav mentioned, you can also use what's called a Delta Assertion.
For example, suppose you test adding a new Order to the SUT.
You could create a test that Asserts that there is exactly 1 Order in the database at the end of the test.
But you can also create a Delta Assertion by first checking how many Orders there are in the database at the start of the test method. Then, after adding an Order to the SUT, you test that there are NumberOfOrdersAtStart + 1 Orders in the database.
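A rough sketch of that idea, assuming an EF context named ShopContext and an OrderService as the SUT (both names are made up for illustration):

```csharp
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class OrderDeltaTests
{
    [Test]
    public void AddingAnOrder_IncreasesOrderCountByOne()
    {
        int ordersAtStart;
        using (var db = new ShopContext())
            ordersAtStart = db.Orders.Count();          // measure the starting state

        new OrderService().PlaceOrder("example item");  // exercise the SUT

        using (var db = new ShopContext())
            Assert.AreEqual(ordersAtStart + 1, db.Orders.Count());
    }
}
```

The test no longer cares how many Orders the seed data contains; it only cares about the delta.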

Related

How to build automation tests for a web service API that are independent of the database

I'm new to automation testing. I currently have a project where I'd like to apply Cucumber to test a REST API. But when I try to assert the output of this API's endpoints based on the current data, I wonder what happens if I change environments or if the test database changes in the future; my test cases could then fail.
What is the best practice for writing tests that are independent of the database?
Or do I need to run my tests against a separate, empty DB and execute a script to initialize it before running the tests?
In order for your tests to be trustworthy, they should not depend on whether the test data happens to be in the database or not. You should be in control of that data. So to make the test independent of the current state of the database: insert the expected data as a precondition (setup) of your test, and delete it again at the end of the test. If the database connection is not actually part of what you're testing, you could also stub or mock the result from the database (this will also make your tests faster, since you're not using DB connectivity).
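For the stub/mock route, a minimal sketch using Moq in C# (ICarRepository, Car and CarService are made-up names; in a Java/Cucumber stack the equivalent would be a mocking library such as Mockito):

```csharp
using System.Collections.Generic;
using Moq;
using NUnit.Framework;

[TestFixture]
public class CarQueryTests
{
    [Test]
    public void CountsWhateverTheRepositoryReturns()
    {
        // Stub the data access layer so the test never touches a real database
        // and is therefore independent of its current contents.
        var repository = new Mock<ICarRepository>();
        repository.Setup(r => r.GetAll())
                  .Returns(new List<Car> { new Car("Volvo"), new Car("Saab") });

        var service = new CarService(repository.Object);

        Assert.AreEqual(2, service.CountCars());
    }
}
```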
If you are going to assert on a response value that comes back (e.g. the number of cars), it is actually impossible to make the test database-independent. I guess you can understand why. What I would do in a similar situation is something like this:
Use the API to get the number of cars in the database (e.g. 544) and assign it to a variable.
Using the API, add another car to the database.
Then check the total number of cars again and assert that it is 544 + 1; otherwise fail.
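A rough sketch of that flow (shown here in C# with HttpClient rather than Cucumber; the /cars endpoint, payload and base URL are assumptions):

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;
using NUnit.Framework;

[TestFixture]
public class CarsApiTests
{
    private static readonly HttpClient Client = new HttpClient();
    private const string BaseUrl = "http://localhost:5000";   // assumed test host

    [Test]
    public async Task AddingACar_IncreasesTheTotalByOne()
    {
        // 1. Ask the API how many cars there are right now.
        int before = JArray.Parse(await Client.GetStringAsync(BaseUrl + "/cars")).Count;

        // 2. Add one car through the same API.
        var body = new StringContent("{\"name\":\"Volvo\"}", Encoding.UTF8, "application/json");
        (await Client.PostAsync(BaseUrl + "/cars", body)).EnsureSuccessStatusCode();

        // 3. Whatever the starting total was, it should now be exactly one higher.
        int after = JArray.Parse(await Client.GetStringAsync(BaseUrl + "/cars")).Count;
        Assert.AreEqual(before + 1, after);
    }
}
```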
Hope this helps.

Separating building and testing jobs in Jenkins

I have a build job which takes a parameter (say, which branch to build) and which, when it completes, triggers a testing job (actually several jobs) that does things like download a bunch of test data and check that the new version works with that test data.
My problem is that I can't seem to figure out a way to show the test results in a sensible way. If I just use one testing job, then the test results for "stable" and "dodgy-future-branch" get mixed up, which isn't what I want. And if I create a separate testing job for each branch that the build job understands, it quickly becomes unmanageable because of combinatorial explosion (say 6 branches and 6 different types of testing mean I need 36 testing jobs, and then when I want to make a change, say to save more builds, I need to update all 36 by hand).
I've been looking at the Job Generator Plugin and ez-templates in the hope that I might be able to create and manage just the templates for the testing jobs and have the actual jobs created and updated on the fly. I can't shake the feeling that this is so hard because my basic model is wrong. Is separating the building and testing jobs like this simply not recommended, or is there some other method for filtering a job's test results based on build parameters that I haven't found yet?
I would define a set of simple use cases:
Check in on development branch triggers build
Successful build triggers UpdateBuildPage
Successful build of development triggers IntegrationTest
Successful IntegrationTest triggers LoadTest
Successful IntegrationTest triggers UpdateTestPage
Successful LoadTest triggers UpdateTestPage
etc.
In particular, I wouldn't dig through all the Jenkins job results for an overview, but would create a web page or something like that instead.
I wouldn't expect the full matrix of builds and tests; the combinations that are actually used will become clear from the use cases.

Prefill new test cases in Selenium IDE

I'm using Selenium IDE 2.3.0 to record actions in my web application and create tests.
Before every test I have to clear all cookies, load the main page, log in with a specific user and submit the login form. These ~10 commands are fixed and every test case needs them, but I don't want to record them or copy them from other tests every time.
Is there a way to configure how "empty" test cases are created?
I know I could create a prepare.html file or something and prepend it to a test suite. But I need to be able to run either a single test or all tests at once, so every test case must include the commands.
OK, I finally came up with a solution that suits me. I wrote custom commands setUpTest and tearDownTest, so I only have to add those two manually to each test.
I used this post to get started:
Adding custom commands to Selenium IDE
Selenium supports object-oriented design. You should create a class that contains the commands you are referring to and always executes them; in each of your tests you can then call that class and the supporting method and execute it.
A great resource for doing this is here.
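If you end up driving the same steps from code rather than the IDE (for example via the Selenium WebDriver C# bindings), the class-based approach could look roughly like this sketch; the URL and element ids are assumptions:

```csharp
using OpenQA.Selenium;

// Helper that bundles the fixed "clear cookies, open main page, log in"
// commands so each test calls it once instead of repeating ~10 recorded steps.
public class LoginHelper
{
    private readonly IWebDriver _driver;

    public LoginHelper(IWebDriver driver)
    {
        _driver = driver;
    }

    public void SetUpTest(string user, string password)
    {
        _driver.Manage().Cookies.DeleteAllCookies();
        _driver.Navigate().GoToUrl("http://localhost/myapp");   // assumed main page
        _driver.FindElement(By.Id("username")).SendKeys(user);
        _driver.FindElement(By.Id("password")).SendKeys(password);
        _driver.FindElement(By.Id("login-submit")).Click();
    }
}
```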

Designing a CRUD test suite

I'm writing a suite of black-box automated tests for our application. I keep bumping into the same design problem, so I was wondering what people here think about it.
Basically, it's a simple CRUD system. For argument's sake, let's say you're testing the screens to create, view, edit and delete user accounts. What I would like to do is write one test which checks that user creation works correctly, another test that checks that viewing a user shows you the same data you originally typed in, another test that checks that editing a user works, and finally a test that checks that deleting a user is OK.
The trouble is, if I do that, then the tests must be run in a certain order, or they won't work. (E.g., you can't delete a user that hasn't been created yet.) Now some say that the test setup should create everything that the test needs, and the teardown should put the system back into a consistent state. But think about it... the Create User test is going to need to delete that user afterwards, and the Delete User test will have to create a user first... so the two tests now have identical code, and the only difference is whether that code is in the setup / body / teardown. That just seems wrong.
In short, I seem to be faced with several alternatives, all of which seem broken:
Use setup to create users and teardown to delete them. This duplicates all of the Create User and Delete User test code as setup / teardown code.
Force the tests to run in a specific order. This violates the principle that tests should work in isolation and be runnable in any order.
Write one giant test which creates a user, views the user, edits the user, and then deletes the user, all as one huge monolithic block.
Note that creating a user is not a trivial matter; there's quite a lot of steps involved. Similarly, when deleting a user you have to specify what to do with their assigned projects, etc. It's not a trivial operation by any means.
Now, if this were a white-box test, I could mock the user account objects, or mock the database that holds them, or even prod the real database on disk. But these are black box tests, which test only the external, user-visible interface. (I.e., clicking buttons on a screen.) The idea is to test the whole system from end to end, without modifying it [except through GUI commands, obviously].
We have the same issue. We've taken two paths. In one style of test, we use the setup and teardown as you suggest to create the data (users, tickets, whatever) that the test needs. In the other style, we use pre-existing test data in the database. So, for example, if the test is AdminShouldBeAbleToCreateUser, we don't do either of those, because that's the test itself. But if the test is ExistingUserShouldBeAbleToCreateTicket, we use a pre-defined user in the test data, and if the test is UserShouldBeAbleToDeleteOwnTicket, we use a pre-defined user and create the ticket in the setup.
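A sketch of that last case (UserShouldBeAbleToDeleteOwnTicket), with the user coming from pre-loaded test data and the ticket created in setup; TestData, Ui, Ticket and the helper methods are illustrative names, not part of the original answer:

```csharp
using NUnit.Framework;

[TestFixture]
public class DeleteOwnTicketTests
{
    private Ticket _ticket;

    [SetUp]
    public void CreateTicketForKnownUser()
    {
        // "alice" is assumed to exist in the pre-defined test data set.
        _ticket = TestData.CreateTicket(owner: "alice", title: "temporary ticket");
    }

    [TearDown]
    public void RemoveLeftovers()
    {
        // If the test failed before the delete happened, clean up here so the
        // next test still starts from a known state.
        TestData.DeleteTicketIfItStillExists(_ticket);
    }

    [Test]
    public void UserShouldBeAbleToDeleteOwnTicket()
    {
        var session = Ui.LogInAs("alice");
        session.DeleteTicket(_ticket);
        Assert.IsFalse(session.TicketExists(_ticket));
    }
}
```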

Make tests dependent and fail together in googletest?

In googletest, is there a way to make tests dependent on each other? I have one test (a database connection) for which if it fails, it doesn't make sense to run certain other tests (that use the DB). I'd like to make those dependent tests fail fast without executing.
I could put the assertion of the DB connection test into a test fixture, but since a new fixture object is constructed for every test, it will run a lot of times unnecessarily. Is there an elegant way to make all the DB using tests fail together?
You could use a googletest Environment to create the DB connection.
Or I guess you could set a global boolean when the test successfully makes the DB connection, and check it at the start of every other test.