Selenium setup/teardown best practices - returning data to original state

[Test]
public void ChangingUserStateIsSaved()
{
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("Illinois"));
    _editUserPage.State.SelectedText = "New York";
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("New York"));
    _editUserPage.SaveChanges();
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("New York"));
}
In my example above, I am changing the user's state from Illinois to New York; my question is: should I change the state back to the original value of Illinois at the end of the test?
I have roughly 20 other independent tests in the same file and I wanted to know what the best practice is for returning data to the original state. We are using setup/teardown for the entire test suite, just not within each individual test.
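If I did restore it, I imagine capturing the original value in a per-test setup and writing it back in teardown; a minimal sketch, assuming NUnit and my existing _editUserPage page object:

private string _originalState;

[SetUp]
public void RememberOriginalState()
{
    // Capture the value this test is about to change.
    _originalState = _editUserPage.State.SelectedText;
}

[TearDown]
public void RestoreOriginalState()
{
    // Put the record back the way the test found it, even if an assertion failed.
    _editUserPage.State.SelectedText = _originalState;
    _editUserPage.SaveChanges();
}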

The best practice I have seen so far was this (a rough sketch follows the list):
The test had one test data input (an Excel sheet)
Each run would add a prefix to the data (e.g. the name Pavel => Test01_Pavel)
The test verified that such data did not already exist in the system
The test created the test data according to the input and verified that the data were present
The test deleted all test data and verified that the data were gone
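A minimal sketch of that prefixing idea, assuming a C#/NUnit suite and a hypothetical UserApi helper (the helper and its methods are illustrative, not from the original answer):

[Test]
public void CreateAndCleanUpPrefixedUser()
{
    // Unique prefix per run so repeated or parallel runs never collide.
    var runPrefix = $"Test{DateTime.UtcNow:yyyyMMddHHmmss}_";
    var userName = runPrefix + "Pavel";

    Assert.That(UserApi.Exists(userName), Is.False, "test data must not pre-exist");

    UserApi.Create(userName);
    Assert.That(UserApi.Exists(userName), Is.True);

    UserApi.Delete(userName);
    Assert.That(UserApi.Exists(userName), Is.False);
}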
But the real answer is "it depends." I, personally, do not delete any test data from the system because:
The test environment is strictly separated from the production one
Test data can be useful later on during performance testing (e.g. downloading the list of users from the system)
So the real questions you should ask yourself are:
Does deleting the test data at the end actually gain you anything?
And vice versa: what happens if the test data remain in the system?
By the way, if you feel that "the application will definitely break if there is too much nonsense/dummy data in it", you should definitely test that scenario. Imagine that your service becomes popular overnight (Charlie Sheen tweeting about using your page :) ) and millions of users want to register.

The approach taken in the company I work for is:
Spin up a dedicated test environment in the cloud (AWS)
Kick off suite of tests
Each test would insert the data it requires
Once the test suite has completed, tear down the servers, including the database
This way you have a fresh database each time the tests run, so the only danger of bad data breaking a test is two tests in the suite generating conflicting data.
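A rough sketch of the "each test inserts the data it requires" step, assuming NUnit and a hypothetical TestDb helper that talks to the freshly provisioned database (both names are illustrative, not from the original answer):

using NUnit.Framework;

[TestFixture]
public class EditUserTests
{
    [SetUp]
    public void InsertRequiredData()
    {
        // The environment is brand new, so every test seeds exactly the rows it needs.
        TestDb.InsertUser(name: "Pavel", state: "Illinois");
    }

    [Test]
    public void CanChangeUserState()
    {
        // ... exercise the application against the seeded user ...
    }

    // No per-test cleanup is needed: the whole environment, including the database,
    // is destroyed after the suite finishes.
}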

Workflow from development to testing and merge

I am trying to formalize the development workflow and here's the first draft. Suggestions on the process and any tweaks for optimization are welcome. I am pretty new to setting up processes, so it would be great to have feedback on it. P.S.: We are working on an AWS Serverless application.
Create an issue link in JIRA - is tested by. The link 'is tested by' has no relevance apart from correctly displaying the relation while viewing the story.
Create a new issue type in JIRA - Testcase. This issue type should have some custom fields to fully describe the test case.
For every user story, there will be a set of test cases that are linked to the user story using the Jira linking function. The test cases will be defined by the QA.
The integration test cases will be written in the same branch as the developer's work. E2E test cases will be written in a separate branch, as they live in a separate repository (open for discussion).
The Test case issue type should also be associated with a workflow that moves through the states New => Under Testing => Success/Failure.
Additionally, we could consider adding the capability in the CI system to automatically move the Test case to Success when it passes in CI (this should be possible using the JIRA REST API; a rough sketch follows this list). This is completely optional and we will most probably do it manually.
When all the Test cases related to a user story have moved to Success, the user story can then be moved to Done.
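A rough sketch of that optional CI step, assuming a Jira Cloud instance, basic auth with an API token, and a transition ID looked up from your own workflow (the URL, credentials, issue key and transition ID below are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class JiraTransitions
{
    public static async Task MoveToSuccessAsync(string issueKey)
    {
        using var client = new HttpClient { BaseAddress = new Uri("https://your-domain.atlassian.net") };

        // Basic auth: "email:api-token", base64 encoded (placeholder credentials).
        var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("me@example.com:API_TOKEN"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // "31" is a placeholder; real transition IDs come from GET /rest/api/2/issue/{key}/transitions.
        var body = new StringContent("{\"transition\": {\"id\": \"31\"}}", Encoding.UTF8, "application/json");
        var response = await client.PostAsync($"/rest/api/2/issue/{issueKey}/transitions", body);
        response.EnsureSuccessStatusCode();
    }
}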
A few points to note:
We will also be using https://marketplace.atlassian.com/apps/1222843/aio-tests-test-management-for-jira for test management and linking.
The QA should be working on the feature branch from day 1 to add the test cases. Working in the same branch will keep the QA and developer always in sync. This should ensure that the developer is not blocked waiting for the test cases to be completed before the branch can be merged into development.
The feature branch will be reviewed when the pull request is created by the developer. This is to ensure that the review is not pending until the test cases have been developed/passed. This should help with quick feedback.
The focus here is on the "feature-oriented QA" process to ensure the develop branch is always release-ready and that only well-tested code is merged into the develop branch.
A couple of suggestions:
For your final status, consider using Closed rather than Success/Failure. Success/Failure are outcomes rather than states; you may have other outcomes like Cancelled or Duplicate. You can use the Resolved field for the outcomes. You could also create a custom field for Success/Failure and decouple it from both the outcome and the status. You ideally do not want your issue jumping back and forth in your workflow; if Failure is a status, you set yourself up for a lot of back and forth.
You may also want to consider a status after New, such as Test Creation, for the writing of the test case, and a status after that such as Ready for Testing. This would allow you to see more specifically where the work is in the flow, and also capture how much time is spent writing tests, how long test cases wait, and how much time is spent actually executing tests and on defect remediation.
Consider adding a verification rule to your Story workflow that prevents a story from being closed until all the linked test cases are closed.
AIO Tests for Jira, unlike other test management systems, does not clutter Jira by creating tests as issues, so you need not create an issue type at all.
With its zero setup time, you can simply start creating tests against stories. It has a workflow from Draft to Published (essentially equalling Ready for Testing).
The AIO Tests Jira panel shows the cases associated with a story and their last execution status, so everyone from Product to the Developer can get a glimpse of the story's testing progress.
You can also create testing tasks and view the entire execution cycle in the AIO Tests panel.
It also has a Jenkins plugin + REST APIs to make it part of your CI/CD process.

After Test Case/Suite/Plan Migration all Test Cases appear as "Active" - does not match source state

After performing a migration for a Team Area (Work Items, then Test configs), the feedback I have from my tester colleague is that every Test that has migrated is sitting in an "Active" state, when a good number of these are either "Passed", "Failed" or "In Progress" on the source platform.
Is this a limitation of the WorkItemMigrationConfig or of any of the Test configuration processors? Is this expected behaviour?
It is quite pertinent to retain the outcome states of Tests that have been run historically.
This is the expected behaviour.
There is no way to migrate Test Runs (which is where that data comes from) to another environment. Test results data is lost during a migration.

How to build automated tests for a web service API that are independent of the database

I'm new to test automation. I currently have a project in which I would like to use Cucumber to test a REST API. But when I assert the output of this API's endpoints based on the current data, I wonder what happens if I change environments or if the test database changes in the future; my test cases could then fail.
What is the best practice for writing tests that are independent of the database?
Or do I need to run my tests against a separate, empty database and execute some script to initialize it before running the tests?
In order for your tests to be trustworthy, they should not depend on whether the test data happens to be in the database or not. You should be in control of that data. So, to make the test independent of the current state of the database: insert the expected data as a precondition (setup) of your test, and delete it again at the end of the test. If the database connection is not actually part of what you are testing, you could also stub or mock the result from the database (this will make your tests faster, as you're not using db connectivity).
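A minimal sketch of that precondition/cleanup idea, assuming NUnit and a hypothetical CarsApi test client (the class and its methods are illustrative only):

using NUnit.Framework;

[TestFixture]
public class GetCarTests
{
    private int _carId;

    [SetUp]
    public void InsertKnownCar()
    {
        // The test owns its data: create exactly the record it will assert on.
        _carId = CarsApi.CreateCar(make: "Skoda", model: "Octavia");
    }

    [Test]
    public void GetCar_ReturnsTheCarWeInserted()
    {
        var car = CarsApi.GetCar(_carId);
        Assert.That(car.Make, Is.EqualTo("Skoda"));
    }

    [TearDown]
    public void DeleteKnownCar()
    {
        // Leave the database as we found it.
        CarsApi.DeleteCar(_carId);
    }
}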
If you are going to assert on the response value that comes back (e.g. the number of cars), it is actually impossible to make the test fully database independent; I guess you can see why. What I would do in a similar situation is something like this (sketched below):
Use the API to get the number of cars in the database (e.g. 544) and assign it to a variable.
Using the API, add another car to the database.
Then check the total number of cars in the database again and assert that it is the original count + 1 (544 + 1); otherwise fail.
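A rough sketch of those three steps, again assuming a hypothetical CarsApi client and NUnit (the names are illustrative):

[Test]
public void AddingACarIncrementsTheTotal()
{
    // 1. Read the current count through the API, whatever it happens to be.
    int before = CarsApi.GetCarCount();

    // 2. Add one car through the API.
    CarsApi.CreateCar(make: "Skoda", model: "Fabia");

    // 3. The count should have grown by exactly one.
    Assert.That(CarsApi.GetCarCount(), Is.EqualTo(before + 1));
}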
Hope this helps.

Cross-browser testing - how to ensure uniqueness of test data?

My team is new to automation and plans to automate cross-browser testing.
One thing we are not sure about is how to make sure the test data is unique for each browser's test run. The test data needs to be unique due to some business rules.
I have a few options in mind:
Run the tests sequentially. Restore the database after each test completes.
The testing report for each test will be kept individually. If any error occurs, we have to reproduce the error ourselves (the data has been reset).
Run the tests concurrently/sequentially. Add a prefix to each piece of test data to uniquely identify it for each browser's testing, e.g. FF_User1, IE_User1.
Run the tests concurrently/sequentially. Several test nodes will be set up, each connected to a different database. Each test node will run the tests using a different browser, and the test data will be stored in a different database.
Can anyone enlighten me as to which is the best approach, or offer any other suggestions?
Do you need to run every test in all browsers? Otherwise, mix and match - pick which tests you want to run in which browser. You can organize your test data like in option 2 above.
Depending on which automation tool you're using, the data used during execution can be organized as iterations:
Browser | Username | VerifyText (example)
FF      | FF_User1 | User FF_User1 successfully logged in
IE      | IE_User1 | User IE_User1 successfully logged in
If you want to randomly pick any data that works for a test and only want to ensure that the browsers use their own data set, then separate the tables/data sources by browser type. The automation tool should have an if clause you can use to then select which data set gets picked for that test.
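If the suite happens to be C#/NUnit with Selenium, those per-browser iterations can be expressed as parameterized test cases; a small sketch, assuming a hypothetical LoginPage page object:

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using OpenQA.Selenium.IE;

[TestFixture]
public class CrossBrowserLoginTests
{
    [TestCase("firefox", "FF_User1", "User FF_User1 successfully logged in")]
    [TestCase("ie", "IE_User1", "User IE_User1 successfully logged in")]
    public void LoginShowsConfirmation(string browser, string username, string expectedText)
    {
        // Each browser gets its own row of test data, so runs never collide.
        using var driver = CreateDriver(browser);
        var loginPage = new LoginPage(driver);   // hypothetical page object

        loginPage.LogInAs(username);

        Assert.That(loginPage.ConfirmationText, Is.EqualTo(expectedText));
    }

    private static IWebDriver CreateDriver(string browser) =>
        browser == "firefox" ? new FirefoxDriver() : (IWebDriver)new InternetExplorerDriver();
}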

DB (SQL) automated stress/load tools?

I want to measure the performance and scalability of my DB application. I am looking for a tool that would allow me to run many SQL statements against my DB, taking the DB and script (SQL) file as arguments (+necessary details, e.g. host name, port, login...).
Ideally it should let me control parameters such as the number of simulated clients and the duration of the test, and let me randomize variables or select them from a list (e.g. SELECT FROM ... WHERE value = #var, where var is read from the command line or randomized per execution). I would like the test results to be saved as a CSV or XML file so that I can analyze and plot them. And of course in terms of pricing I prefer "free" or "demo" :-)
Surprisingly (for me at least) while there are dozens of such tools for web application load testing, I couldn't find any for DB testing!? The ones I did see, such as pgbench, use a built-in DB based on some TPC scenario, so they help test the DBMS configuration and H/W but I cannot test MY DB! Any suggestions?
Specifically I use Postgres 8.3 on Linux, though I could use any DB-generic tool that meets these requirements. The H/W has 32GB of RAM while the size of the main tables and indexes is ~120GB. Hence there can be a 1:10 response time ratio between cold vs warm cache runs (I/O vs RAM). Realistically I expect requests to be spread evenly, so it's important for me to test queries against different pieces of the DB.
JMeter from Apache can handle different server types. I use it for load tests against web applications; others in the team use it for DB calls. It can be configured in many ways to get the load you need, can be run in console mode, and can even be clustered across different client machines to minimize client overhead (which would otherwise falsify the results).
It's a Java application and a bit complex at first sight, but we still love it. :-)
k6.io can stress test a few relational databases with the xk6-sql extension.
For reference, a test script could be something like:
import sql from 'k6/x/sql';

// Requires a k6 binary built with the xk6-sql extension.
const db = sql.open("sqlite3", "./test.db");

export function setup() {
  // Create the table once, before the load test starts.
  db.exec(`CREATE TABLE IF NOT EXISTS keyvalues (
    id integer PRIMARY KEY AUTOINCREMENT,
    key varchar NOT NULL,
    value varchar);`);
}

export function teardown() {
  db.close();
}

export default function () {
  // Each virtual-user iteration inserts a row and reads the table back.
  db.exec("INSERT INTO keyvalues (key, value) VALUES('plugin-name', 'k6-plugin-sql');");
  let results = sql.query(db, "SELECT * FROM keyvalues;");
  for (const row of results) {
    console.log(`key: ${row.key}, value: ${row.value}`);
  }
}
Read more in this short tutorial.
The SQL Load Generator is another such tool:
http://sqlloadgenerator.codeplex.com/
I like it, but it doesn't yet have the option to save test setup.
We never really found an adequate solution for stress testing our mainframe DB2 database so we ended up rolling our own. It actually just consists of a bank of 30 PCs running Linux with DB2 Connect installed.
29 of the boxes run a script which simply waits for a starter file to appear on an NFS mount and then starts executing fixed queries based on the data. The fact that these queries (and the data in the database) are fixed means we can easily compare against previous successful runs.
The 30th box runs two scripts in succession (the second is the same one all the other boxes run). The first empties and then populates the database tables with our known data, and then creates the starter file to allow all the other machines (and itself) to continue.
This is all done with bash and DB2 Connect so is fairly easily maintainable (and free).
We also have another variant to do random queries based on analysis of production information collected over many months. It's harder to check the output against a known successful baseline but, in that circumstance, we're only looking for functional and performance problems (so we check for errors and queries that take too long).
We're currently examining whether we can consolidate all those physical servers into virtual machines, on both the mainframe running zLinux (which will use the shared-memory HyperSockets for TCP/IP, basically removing the network delays) and Intel platforms with VMWare, to free up some of that hardware.
It's an option you should examine if you don't mind a little bit of work up front since it gives you a great deal of control down the track.
Did you check Bristlecone, an open source tool from Continuent? I don't use it, but it works for Postgres and seems to be able to do the things that you request. (Sorry, as a new user I cannot give you the direct link to the tool page, but Google will get you there ;o])