How to build automated tests for a web service API that are independent of the database - testing

I'm new to automated testing. I currently have a project in which I would like to apply Cucumber to test a REST API. But when I assert the output of this API's endpoints based on the current data, I wonder what happens if I change environments or the test database changes in the future: my test cases would then be likely to fail.
What is the best practice for writing tests that are independent of the database?
Or do I need to run my tests against an empty, separate database and execute a script to initialize it before the tests run?

In order for your tests to be trustworthy, they should not depend on whether the test data happens to be in the database or not. You should be in control of that data. So, to make the test independent of the current state of the database: insert the expected data as a precondition (setup) of your test, and delete it again at the end of the test. If the database connection is not actually part of what you're testing, you could also stub or mock the result from the database (this will make your tests faster, as you're not using db connectivity).
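As a minimal sketch of that setup/teardown idea in Java with JUnit and plain JDBC (the car table, the JDBC URL and the endpoint mentioned in the comments are placeholders for whatever your project actually uses):

import java.sql.Connection;
import java.sql.DriverManager;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class CarEndpointTest {

    private Connection connection;

    @Before
    public void insertKnownData() throws Exception {
        // Placeholder JDBC URL; point this at your test database.
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "");
        // Assumes a car table already exists in the test schema.
        connection.createStatement()
                  .executeUpdate("INSERT INTO car (id, model) VALUES (1, 'TestModel')");
    }

    @Test
    public void carEndpointReturnsInsertedCar() throws Exception {
        // Call the endpoint under test here (e.g. GET /cars/1) and assert
        // against the row inserted above, not against pre-existing data.
    }

    @After
    public void deleteKnownData() throws Exception {
        connection.createStatement().executeUpdate("DELETE FROM car WHERE id = 1");
        connection.close();
    }
}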

If you are going to assert an absolute response value (e.g. the number of cars), it is effectively impossible to make the test database independent; I guess you can understand why. What I would do in a similar situation is something like this:
Use the API to get the number of cars in the database (e.g. 544) and assign it to a variable.
Using the API, add another car to the database.
Then check the total number of cars again and assert that it is 544 + 1, otherwise fail.
Hope this helps.
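A rough sketch of that delta approach in Java, assuming JUnit and the JDK's built-in HttpClient; the /cars and /cars/count endpoints, the base URL and the payload are made up for illustration:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.Assert;
import org.junit.Test;

public class CarCountDeltaTest {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();
    private static final String BASE_URL = "http://localhost:8080/api"; // placeholder base URL

    @Test
    public void addingACarIncrementsTheTotal() throws Exception {
        int before = countCars(); // e.g. 544, whatever happens to be there

        // Add one car through the API (endpoint and payload are made up for this sketch).
        HttpRequest add = HttpRequest.newBuilder(URI.create(BASE_URL + "/cars"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"model\":\"TestModel\"}"))
                .build();
        CLIENT.send(add, HttpResponse.BodyHandlers.discarding());

        // Assert the delta rather than an absolute number, so the test
        // does not depend on the current content of the database.
        Assert.assertEquals(before + 1, countCars());
    }

    private int countCars() throws Exception {
        HttpRequest count = HttpRequest.newBuilder(URI.create(BASE_URL + "/cars/count"))
                .GET()
                .build();
        HttpResponse<String> response = CLIENT.send(count, HttpResponse.BodyHandlers.ofString());
        return Integer.parseInt(response.body().trim());
    }
}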

Related

Is it possible to automate comparisons between database results with a Groovy script?

I am in the process of migrating an application which currently performs all of its SQL calls internally (as part of the code) to RESTful services, so that the SQL calls are handled externally. I need a smart way to test that these changes have no effect on the actual data that is retrieved from the database.
I was thinking it might be possible to write some automated tests against the APIs and then use a Groovy script to compare the results of both.
Using SoapUI:
1: Old SQL call - returns XML
2: New RESTful call - returns JSON
3: Compare both results.
The issue I have is that the direct call returns XML whereas the new call returns JSON.
I just need to know whether this is a waste of time or worth pursuing.
The only other option I can think of currently is to manually run two tests on both versions at the same time and observe any differences if they occur.
Thanks for taking the time to read.
If you need any more detail please let me know but tried to keep it brief!
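A rough sketch of such a comparison in Java (rather than the Groovy mentioned above): parse both payloads into the same tree model with Jackson and compare the values you care about. This assumes the jackson-dataformat-xml module is on the classpath; customerId is just an illustrative field name:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;

public class ResponseComparison {

    // Parse the XML and the JSON into the same tree model and compare the values
    // you care about; XML values come back as text, so compare field by field
    // rather than expecting the two trees to be identical.
    public static boolean sameCustomer(String xmlFromSql, String jsonFromRest) throws Exception {
        JsonNode fromXml = new XmlMapper().readTree(xmlFromSql);
        JsonNode fromJson = new ObjectMapper().readTree(jsonFromRest);
        return fromXml.path("customerId").asText()
                .equals(fromJson.path("customerId").asText());
    }
}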

How to test a Mule application flow?

I have been assigned the following task regarding a Mule flow application currently in production:
Store the client IP that is using the web service
Implement a control that limits each IP to ten requests to the website per day
I have a background in core Java and SQL but none in Mule. All the people I can ask are in the same situation.
Once I got the app package (the one currently in production) up and running, I stopped it and added the following elements to the flow:
In a subflow with some initial tasks, I added a database element to store the IP of the computer that is using the web service (user_request is a table I have just created in the DB; it stores the IP and date of connection):
insert into user_request values
(#[MULE_REMOTE_CLIENT_ADDRESS], #[function:datestamp:dd-MM-yy])
To ask the website, a database element performs a select query that provides a choice with some inputs. Depending on the value of those inputs, the request is either made to the website or not:
Database (Select) --> Choice --> Ask or not to the website depending on the select output
So I have added, to the database element that performs the select, an additional output that counts the user_request rows for the current IP and current day, so that it feeds the choice with the original inputs as usual plus this extra one (I am copying only the subquery I added):
SELECT COUNT(*) as TRIES FROM USER_REQUEST
WHERE IP_ADDRESS=#[MULE_REMOTE_CLIENT_ADDRESS]
AND REQUEST_DATE=#[function:datestamp:dd-MM-yy]
In the choice, I have added this condition to the path that finally ask the website:
#[payload.get(0).TRIES < 10]
At this point the app runs and gives no errors, but I don't know how to test it. Where does the flow start? How can I test it as if I were the user?
Additionally, if you see anything wrong in the syntax I used above, I would appreciate it if you pointed it out.
Thanks in advance!!!
MUnit will require you to learn the basics of this process first, but it is the primary testing tool for Mule. With it, you create a test suite that executes the various flows and verifies that, given known inputs, the correct processing occurs in a repeatable manner. In the test you can mock critical calls, such as the write to your DB, so that the processor is exercised without actually modifying your DB table. Likewise, for reads from the DB you can either make a real call to fetch known data, or return mocked test data to exercise all paths in the flow.

Selenium setup/teardown best practices - returning data to original state

{
Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("Illinois"));
_editUserPage.State.SelectedText = "New York";
Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("New York"));
_editUserPage.SaveChanges();
Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("New York"));
}
In my example above, I am changing the user's state from Illinois to New York; my question is: should I change the state back to the original value of Illinois at the end of the test?
I have roughly 20 other independent tests in the same file and I wanted to know what the best practice is for returning data to the original state. We are using setup/teardown for the entire test suite, just not within each individual test.
The best practice I have seen so far was this (sketched in code after the list):
The test had one test-data input (an Excel sheet)
Each run would add a prefix to the data (e.g. name Pavel => Test01_Pavel)
The test verified that such data did not already exist in the system
The test created the test data according to the input and verified that it was present
The test deleted all the test data and verified that it was gone
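A skeleton of that prefix-per-run flow in Java/JUnit; the three helper methods are stand-ins for whatever UI or API calls your suite actually makes:

import org.junit.Assert;
import org.junit.Test;

public class PrefixedTestDataTest {

    // A prefix unique to this run, so the data created here never collides with
    // anything else in the system and can be cleaned up by prefix.
    private final String runPrefix = "Test" + System.currentTimeMillis() + "_";

    @Test
    public void createAndCleanUpPrefixedUser() {
        String userName = runPrefix + "Pavel";

        Assert.assertFalse(userExists(userName));  // such data must not exist yet
        createUser(userName);                      // create the test data from the input
        Assert.assertTrue(userExists(userName));   // verify it is present
        deleteUsersWithPrefix(runPrefix);          // delete all the test data
        Assert.assertFalse(userExists(userName));  // verify it is gone
    }

    // Stand-ins for the real UI or API calls of your suite.
    private boolean userExists(String name) { return false; }
    private void createUser(String name) { }
    private void deleteUsersWithPrefix(String prefix) { }
}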
But really, the best answer is "it depends." I personally am not deleting any test data from the system, because:
The test environment is strictly separated from the production one
Test data can be useful later on, e.g. during performance testing (downloading a list of users from the system)
So the real questions you should ask yourself are:
Does deleting the test data at the end bring you anything?
And vice versa: what happens if the test data remain in the system?
BTW, if you feel that "the application will definitely break if there is too much nonsense/dummy data in it", you should definitely test that scenario. Imagine that your service becomes popular overnight (Charlie Sheen tweeting about using your page :) ) and millions of users want to register.
The approach taken in the company I work for is:
Spin up a dedicated test environment in cloud (AWS)
Kick off suite of tests
Each test would insert the data it requires
Once the test suite has completed, tear down the servers, including the DB
This way you have a fresh database each time the tests run, and therefore the only danger of bad data breaking a test is two tests in the suite generating conflicting data.

Entity Framework Code First - Tests Overlapping Each Other

My integration tests use a live DB that's generated using the EF initializers. When I run the tests individually they pass as expected. However, when I run them all at once, I get a lot of failed tests.
I appear to have some overlap going on. For example, I have two tests that use the same setup method. This setup method builds and populates the DB. Both tests perform the same act step, which adds a handful of items (the same items) to the DB, but each test checks different calculations (instead of one big test that does a lot of things).
One way I could solve this is to do some trickery in the setup that creates a unique DB for each test that's run, so that everything stays isolated. However, the EF initialization isn't working when I do that, because it creates a new DB rather than dropping and replacing an existing one (only the latter triggers the seeding).
Any ideas on how to address this? It seems like a matter of organizing my tests; I'm just not sure how best to go about it and was looking for input. I really don't want to have to run each test manually.
Use the test setup and teardown methods provided by your test framework: start a transaction in the setup and roll it back in the teardown (example for NUnit). You can even put the setup and teardown methods in a base class for all tests; each test will then run in its own transaction, which rolls back at the end of the test and returns the database to its initial state.
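The linked example is for NUnit; the same transaction-per-test pattern in Java with JUnit and plain JDBC looks roughly like this (the JDBC URL is a placeholder):

import java.sql.Connection;
import java.sql.DriverManager;
import org.junit.After;
import org.junit.Before;

// Every test in a subclass runs inside a transaction that is rolled back afterwards,
// so the database is returned to its initial state no matter what the test wrote.
public abstract class TransactionalTestBase {

    protected Connection connection;

    @Before
    public void beginTransaction() throws Exception {
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", ""); // placeholder URL
        connection.setAutoCommit(false); // nothing the test writes is committed
    }

    @After
    public void rollbackTransaction() throws Exception {
        connection.rollback(); // undo everything the test did
        connection.close();
    }
}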
In addition to what Ladislav mentioned, you can also use what's called a Delta Assertion.
For example, suppose you test adding a new Order to the SUT.
You could write a test that asserts there is exactly one Order in the database at the end of the test.
But you can also write a Delta Assertion by first checking how many Orders there are in the database at the start of the test method. Then, after adding an Order to the SUT, you assert that there are NumberOfOrdersAtStart + 1 Orders in the database.
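In test code, a Delta Assertion of that kind might look roughly like this; countOrders and addOrderThroughSut stand in for your real count query and SUT call:

import org.junit.Assert;
import org.junit.Test;

public class OrderDeltaAssertionTest {

    @Test
    public void addingAnOrderAddsExactlyOneRow() {
        int ordersAtStart = countOrders();   // measure first instead of assuming a known state

        addOrderThroughSut();                // exercise the system under test

        Assert.assertEquals(ordersAtStart + 1, countOrders());
    }

    // Stand-ins for a SELECT COUNT(*) helper and for the real SUT call.
    private int countOrders() { return 0; }
    private void addOrderThroughSut() { }
}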

Unit Testing the Data Access Layer - Testing Update Methods?

I'm looking into adding some unit tests for some classes in my data access layer, and I'm looking at an update routine that has no return value. It simply updates a row, identified by the id you provide, at whichever column name you provide.
Inside of this method, we collect the parameters and pass them to a helper routine which calls the stored procedure to update the table.
Is there a recommended approach for how to do unit testing in such a scenario? I'm having a hard time thinking of a test that wouldn't depend on other methods.
Test the method that reads the data from the database first.
Then you can call the update function and use the read method you tested above to verify that the updated value is correct.
I tend to use other methods in my unit tests as long as I also have tests that cover those methods.
If your helper functions live in the database (stored procedures or functions), then test those with a database unit test first, then test the Visual Basic code.
I would just use a lookup method to validate that the data was properly updated.
Yes, technically this relies on the lookup method working properly, but I don't think you necessarily have to avoid that dependency. Just make sure the lookup method is tested as well.
I would use the method that retrieves the data, compare the returned value to what you updated, and assert the expected value. This does assume the method used to retrieve the data has been tested and works correctly.
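As a sketch of that read-back approach in Java/JUnit; CustomerDao, Customer, the column name and the id are hypothetical names used only for illustration:

import org.junit.Assert;
import org.junit.Test;

public class UpdateCustomerTest {

    @Test
    public void updateChangesTheStoredValue() {
        CustomerDao dao = new CustomerDao();   // hypothetical data access class under test
        int id = 42;                           // a row known to exist in the test database

        dao.update(id, "Email", "new@example.com");   // the void update routine

        Customer customer = dao.findById(id);         // read method, tested separately
        Assert.assertEquals("new@example.com", customer.getEmail());
    }
}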
I use NHibernate and transactions, and for unit tests I don't commit to the database but flush the session, which raises the same errors if there are any but doesn't write the data.
Of course, if you have a build server, you can just run the unit tests against a database that is freshly created on each build. Try using a file-based database like Firebird or something similar.
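The flush-without-commit approach mentioned above, written with Hibernate in Java rather than NHibernate, could look roughly like this; the Car entity and the sessionFactory wiring are assumed to come from your own mapping and configuration:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.junit.Test;

public class FlushWithoutCommitTest {

    private SessionFactory sessionFactory; // assumed to come from your Hibernate configuration

    @Test
    public void savingAnEntityHitsTheDatabaseWithoutCommitting() {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            session.persist(new Car("TestModel")); // Car is a placeholder mapped entity
            session.flush();  // forces the INSERT, so mapping/constraint errors surface here
        } finally {
            tx.rollback();    // nothing is committed; the table is left untouched
            session.close();
        }
    }
}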