Designing a CRUD test suite

I'm writing a suite of black-box automated tests for our application. I keep bumping into the same design problem, so I was wondering what people here think about it.
Basically, it's a simple CRUD system. For argument's sake, let's say you're testing the screens to create, view, edit and delete user accounts. What I would like to do is write one test which checks that user creation works correctly, another test that checks that viewing a user shows you the same data you originally typed in, another test that checks that editing a user works, and finally a test that checks that deleting a user works.
The trouble is, if I do that, then the tests must be run in a certain order, or they won't work. (E.g., you can't delete a user that hasn't been created yet.) Now some say that the test setup should create everything that the test needs, and the teardown should put the system back into a consistent state. But think about it... the Create User test is going to need to delete that user afterwards, and the Delete User test will have to create a user first... so the two tests now have identical code, and the only difference is whether that code is in the setup / body / teardown. That just seems wrong.
In short, I seem to be faced with several alternatives, all of which seem broken:
Use setup to create users and teardown to delete them. This duplicates all of the Create User and Delete User test code as setup / teardown code.
Force the tests to run in a specific order. This violates the principle that tests should work in isolation and be runnable in any order.
Write one giant test which creates a user, views the user, edits the user, and then deletes the user, all as one huge monolithic block.
Note that creating a user is not a trivial matter; there's quite a lot of steps involved. Similarly, when deleting a user you have to specify what to do with their assigned projects, etc. It's not a trivial operation by any means.
Now, if this were a white-box test, I could mock the user account objects, or mock the database that holds them, or even prod the real database on disk. But these are black box tests, which test only the external, user-visible interface. (I.e., clicking buttons on a screen.) The idea is to test the whole system from end to end, without modifying it [except through GUI commands, obviously].

We have the same issue. We've taken two paths. In one style of test, we use the setup and teardown as you suggest to create the data (users, tickets, whatever) that the test needs. In the other style, we use pre-existing test data in the database. So, for example, if the test is AdminShouldBeAbleToCreateUser, we don't do either of those, because that's the test itself. But if the test is ExistingUserShouldBeAbleToCreateTicket, we use a pre-defined user in the test data, and if the test is UserShouldBeAbleToDeleteOwnTicket, we use a pre-defined user and create the ticket in the setup.
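To make that concrete, here is a rough sketch of the two styles, assuming Jest-style globals; the ui, harness and seedData helpers are hypothetical stand-ins for whatever your GUI driver and test data setup actually look like:

declare const ui: any;        // hypothetical GUI driver / page objects
declare const harness: any;   // hypothetical back-door helper for test data
declare const seedData: { existingUser: string };  // pre-defined user in the seeded test data

// Style A: pre-existing test data -- the user already lives in the seeded database.
test('ExistingUserShouldBeAbleToCreateTicket', async () => {
    await ui.loginAs(seedData.existingUser);          // pre-defined user from the test data
    const ticketId = await ui.createTicket('demo ticket');
    expect(await ui.ticketExists(ticketId)).toBe(true);
});

// Style B: setup creates only what this test needs, because creation is not what is under test.
let ticketId: string;
beforeEach(async () => {
    ticketId = await harness.createTicket(seedData.existingUser);  // created in setup, not in the body
});
test('UserShouldBeAbleToDeleteOwnTicket', async () => {
    await ui.loginAs(seedData.existingUser);
    await ui.deleteTicket(ticketId);                  // the behaviour actually under test
    expect(await ui.ticketExists(ticketId)).toBe(false);
});

// And the creation flow itself stays a plain test body, with no setup at all.
test('AdminShouldBeAbleToCreateUser', async () => {
    await ui.loginAsAdmin();
    await ui.createUser({ name: 'new-user' });
    expect(await ui.userExists('new-user')).toBe(true);
});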


Workflow from development to testing and merge

I am trying to formalize our development workflow, and here's the first draft. Suggestions on the process and any tweaks for optimization are welcome. I am pretty new to setting up processes, so it would be great to have feedback on it. P.S.: We are working on an AWS Serverless application.
Create an issue link type in JIRA: 'is tested by'. The link 'is tested by' has no relevance apart from correctly displaying the relation while viewing the story.
Create a new issue type in JIRA - Testcase. This issue type should have some custom fields to fully describe the test case.
For every user story, there will be a set of test cases that are linked to the user story using the Jira linking function. The test cases will be defined by the QA.
The integration test cases will be written in the same branch as the developer's code. E2E test cases will be written in a separate branch, as they live in a separate repository (open for discussion).
The Test case issue type should also be associated with a workflow that moves through the states New => Under Testing => Success/Failure.
Additionally, we could consider adding a capability in the CI system to automatically move the Test case to Success when the test case passes in CI (this should be possible using the JIRA API; see the sketch after this list). This is completely optional and we will most probably be doing it manually.
When all the Test cases related to a user story have moved to Success, the user story can then be moved to Done.
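For the optional CI hook mentioned above, here is a minimal sketch of what the call might look like, using Jira's issue transitions REST endpoint. The function name, domain, issue key, transition id and credentials are all placeholders you would look up in your own instance:

// Hedged sketch: transition a linked Testcase issue to Success after its automated test passes in CI.
async function markTestCasePassed(issueKey: string, transitionId: string): Promise<void> {
    const auth = Buffer.from('ci-bot@example.com:API_TOKEN').toString('base64');  // placeholder credentials
    await fetch(`https://your-domain.atlassian.net/rest/api/2/issue/${issueKey}/transitions`, {
        method: 'POST',
        headers: { 'Authorization': `Basic ${auth}`, 'Content-Type': 'application/json' },
        // GET the same endpoint first to discover which transition ids the issue currently allows.
        body: JSON.stringify({ transition: { id: transitionId } }),
    });
}

// e.g. called from a post-test CI step: await markTestCasePassed('PROJ-123', '31');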
A few points to note:
We will also be using https://marketplace.atlassian.com/apps/1222843/aio-tests-test-management-for-jira for test management and linking.
The QA should be working on the feature branch from day 1 to add the test cases. Working in the same branch will enable the QA and the developer to always be in sync. This should ensure that the developer is not blocked waiting for the test cases to be completed before the branch can be merged into development.
The feature branch will be reviewed when the pull request is created by the developer. This is to ensure that the review is not pending until the test cases have been developed/passed. This should help with quick feedback.
The focus here is on the "feature-oriented QA" process to ensure the develop branch is always release-ready and that only well-tested code is merged into the develop branch.
A few suggestions:
For your final status, consider using Closed rather than Success/Failure. Success and Failure are outcomes rather than states; you may have other outcomes such as Cancelled or Duplicate. You can use the Resolution field for the outcomes, or create a custom field for Success/Failure and decouple it from both the outcome and the status. You ideally do not want your issue jumping back and forth in your workflow; if Failure is a status, you set yourself up for a lot of back and forth.
You may also want to consider a status after New, such as Test Creation, for the writing of the test case, and a status after that such as Ready for Testing. This would let you see more specifically where the work is in the flow, and also capture how much time is spent writing tests, how long test cases wait, and how much time is spent actually executing tests and remediating defects.
Consider adding a verification rule to your Story workflow that prevents a story from being closed until all the linked test cases are closed
AIO Tests for Jira, unlike other test management systems, does not clutter Jira by creating tests as issues, so you need not create an issue type at all.
With its zero setup time, you can simply start creating tests against stories. It has a workflow from Draft to Published (essentially equivalent to Ready for Testing).
The AIO Tests Jira panel shows the cases associated with a story and their last execution status, giving a glimpse of the story's testing progress. It allows everyone, from Product to the Developer, to see the testing status at a glance.
You can also create testing tasks and view the entire execution cycle in the AIO Tests panel.
It also has a Jenkins plugin + REST APIs to make it part of your CI/CD process.

Is there a way to tell one browser instance from another when running concurrent tests in Testcafe

Is there a way to tell one browser instance from another when running concurrent tests in Testcafe?
Say we have two tests.
One creates some entity and then changes it and verifies that change is applied correctly.
Another deletes all the entities and verifies that everything is deleted.
If we run these tests in parallel, they will interfere with each other. So there must be either a way to embrace this concurrency and synchronize these tests with some primitive, or a way to keep them parallel but run them in isolated sandboxes.
I would prefer the second option.
It could be something like
test('Some test', async t => {
    await useSandbox(t.browser.alias, t.browser.os.name, t.browser.instanceId);
    // ... rest of the test
});
But AFAIK there is no way to tell one browser instance from another inside the test code. Or is there?
TestCafe does not have a mechanism for one test to affect the execution of another test. When TestCafe starts tests in parallel, it does not expect one test to interfere with another.
TestCafe starts every test with clean cookies, clean storage and a fresh user profile. So, if your data is kept in localStorage, every test will run independently. However, if your data is kept on the server side (i.e. in a database), then TestCafe cannot sandbox it, since all tests interact with the database through the same website.
In this case, it's better to run these two tests one by one, not simultaneously.
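One way to act on that without giving up concurrency for the whole suite is to move the two conflicting tests into their own fixture file and run that file in a separate, non-concurrent TestCafe invocation after the concurrent run (e.g. testcafe chrome shared-data-tests.ts without the -c flag). A sketch only; the fixture name, page URL and selectors are placeholders:

import { Selector } from 'testcafe';

fixture('Tests that mutate shared server-side data')
    .page('https://example.com');  // placeholder URL

test('Create an entity and verify the change is applied', async t => {
    // ...create the entity, change it, then verify
    await t.expect(Selector('#entity-status').innerText).contains('updated');
});

test('Delete all entities and verify everything is gone', async t => {
    // ...delete everything, then verify the list is empty
    await t.expect(Selector('.entity-row').count).eql(0);
});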

Testing multiple User Roles

My question may sound a little bit stupid.
My team has to test a Web application that is used by 3 different User Roles. So, we start by writing our Test Cases based on the User Stories. My problem is that I don't want to create 3 different Test Cases for each User Role. I think this takes a lot of time when writing the Test Cases and later when testing them, because:
Total Test Cases Number = User Stories x Test Cases Per User Story x User Roles Number.
Moreover, I don't want to have to create new Test Cases if a new User Role is created some time in the future, because they would just be duplicates with some small differences.
Is there a better way to manage this situation?
Thanks in advance.
Single Responsibility Principle?
Code and test the user access separately from the user story, unless you really do get a completely different story based on your role, in which case it's a distinct spec and warrants its own test.
Not sure on the coding front (depends on what the situation is and how the code is implemented), but I can answer from a testing perspective (2 yrs so far, over half of it in a traditional waterfall system migrating to Agile).
The web application I test is similar in that we have three user types (global) and three user roles (tied to "projects", which are buckets of sites; sites in turn are buckets of imagery; look up EyeQ if curious). So, 9 possible combos, 8 of which can make a site. The current regression procedure doc has over 100 test cases, 20 or so of which are edit/create/delete site. Total overall: 500+ test cases, the majority run manually (there are ongoing efforts to automate them, but it takes time as we've gone through a UI reboot).
Anyway, I've had to rewrite some of our manual procedures as a result of the sweeping UI changes, and I am trying to avoid the mistakes the authors before me made, such as the one you describe (excessive repetition, i.e. reusing the same test three times with slight variations).
Rather than stick to their strategy of writing cases, I use looping (the same term as in coding): that is, test cases that use one role-type combo per pass. Instead of having the same test case written 3+ times and executed separately for each role/type, use the procedure once but add a few steps at the end.
example test case:
user can create a site (8/9 of the type-role combos can do this in my app)
what they did before I came in:
test case 1- sys admin not tied to project can make site (10 steps);
test case 2- sys admin with project role can make site (same 10 steps);
test case 3- account admin not tied to proj can make site (same 10 steps as 1st case);
test case 4- account admin with proj role can make site (ditto);
test case 5... and so on
what I do:
test case 1: Do 10 steps as user with combo 1,
step 11- log out as that combo, log in as user with combo 2 and repeat 1-10,
step 12- log out as user from step 11 back in as user with combo 3 and repeat 1-10,
...
The difference:
3+ test cases or 30+ steps executed (in this case, about 100)
vs
1 test case: under 20 steps
Take this with a grain of salt though, it depends on your type of problem.
If it really is repetitive (as with the example) try to loop as much as possible.
The advantage is that once you get an automated test framework up, this becomes a simple for-loop within the test case, with an array or struct of combos as input (see the sketch below).
The disadvantage is that it isn't as modular (it takes an extra 30 seconds to find the cause if something breaks, but that's my opinion).
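As a rough illustration of that for-loop idea (Jest-style globals assumed; the combo list and the ui page-object helper are hypothetical stand-ins for your own app):

declare const ui: any;  // hypothetical page-object wrapper around the 10 manual steps

const combos = [
    { type: 'sys admin',     role: 'none' },
    { type: 'sys admin',     role: 'project' },
    { type: 'account admin', role: 'none' },
    { type: 'account admin', role: 'project' },
    // ...the remaining combos that are allowed to create a site
];

for (const combo of combos) {
    test(`a ${combo.type} with role "${combo.role}" can create a site`, async () => {
        await ui.loginAs(combo);                                // step 1: log in as this combo
        const siteId = await ui.createSite('smoke-test site');  // steps 2-9: create the site
        expect(await ui.siteExists(siteId)).toBe(true);         // step 10: verify
        await ui.logout();                                      // end of this loop pass
    });
}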
No need to get confused. You just need to make a matrix of access rights vs. user roles.
For example: rows are the user modules (user rights), columns are the user roles.
Then just mark in an Excel sheet which user role has which type of permission or access.
You can also download tools that can generate these kinds of permutations and combinations.
You can download one from here:
https://testcasegenerator.codeplex.com/ (Test Case Generator)
It is a great tool for measuring permutations and combinations accurately.
We tested role-based test cases like these for a huge enterprise application, with close to 38 roles and hundreds of fields that were editable or not editable across 15-20 web pages, using mind maps.
Since there were a lot of workflow statuses linked with each role, thorough testing was needed.
Add a generic test case covering functionality and permissions, and mention in the test notes that the test case should be executed for each role as per the mind map. Attach the mind map to the test case.
We converted the test cases into a mind map:
[Sample mind map]
Mind maps help consolidate a large chunk of data into a pictorial form, which makes the test cases easier to understand and speeds up execution.
Simply create a table with the user roles listed vertically and the functions mentioned in the step listed horizontally, then mark yes or no in each cell for that role. Repeat for each step. You can word your step description as verifying the user's authorization to perform the action based on the table. If you have a test data column, you can put the table there.
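The same matrix can also drive an automated check once you have a framework. A hedged sketch (Jest-style globals, a hypothetical ui helper, and made-up roles and actions):

declare const ui: any;  // hypothetical helper that attempts each action through the GUI

// Roles down the side, functions across the top, true/false in each cell.
const permissionMatrix: Record<string, Record<string, boolean>> = {
    admin:  { createUser: true,  editUser: true,  deleteUser: true  },
    editor: { createUser: false, editUser: true,  deleteUser: false },
    viewer: { createUser: false, editUser: false, deleteUser: false },
};

for (const [role, actions] of Object.entries(permissionMatrix)) {
    for (const [action, allowed] of Object.entries(actions)) {
        test(`${role} ${allowed ? 'can' : 'cannot'} ${action}`, async () => {
            await ui.loginAs(role);
            expect(await ui.canPerform(action)).toBe(allowed);
        });
    }
}

Adding a new user role then becomes one more row in the matrix rather than a new set of test cases.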

Entity Framework Code First - Tests Overlapping Each Other

My integration tests use a live DB that's generated using the EF initializers. When I run the tests individually they pass as expected. However, when I run them all at once, I get a lot of failed tests.
I appear to have some overlap going on. For example, I have two tests that use the same setup method. This setup method builds and populates the DB. Both tests perform the same ACT step, which adds a handful of items (the same items) to the DB, but each test checks different calculations (instead of one big test that does a lot of things).
One way I could solve this is to do some trickery in the setup that creates a unique DB for each test that's run, so that everything stays isolated. However, the EF initialization isn't working when I do that, because it creates a brand-new DB rather than dropping an existing one and replacing it (only the latter triggers the seeding).
Any ideas on how to address this? It seems like a matter of how my tests are organized... I'm just not sure how best to go about it and was looking for input. I really don't want to have to run each test manually.
Use the test setup and teardown methods provided by your test framework: start a transaction in the test setup and roll it back in the test teardown (for example, with NUnit). You can even put the setup and teardown methods in a base class for all tests; each test will then run in its own transaction, which rolls back at the end of the test and puts the database back into its initial state.
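As a language-agnostic sketch of that pattern (shown here with Jest-style hooks and a hypothetical db client rather than NUnit/EF specifically):

declare const db: {
    beginTransaction(): Promise<void>;
    rollback(): Promise<void>;
};  // hypothetical handle to the test database connection

beforeEach(async () => {
    await db.beginTransaction();  // everything the test writes stays inside this transaction
});

afterEach(async () => {
    await db.rollback();          // undo the test's writes; the seeded data is left untouched
});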
In addition to what Ladislav mentioned, you can also use what's called a Delta Assertion.
For example, suppose you test adding a new Order to the SUT.
You could create a test that asserts that there is exactly 1 Order in the database at the end of the test.
But you can also create a Delta Assertion by first checking how many Orders there are in the database at the start of the test method. Then, after adding an Order to the SUT, you assert that there are NumberOfOrdersAtStart + 1 Orders in the database.
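A minimal sketch of such a delta assertion (Jest-style globals; db and sut are hypothetical handles to the database and the system under test):

declare const db: { countOrders(): Promise<number> };
declare const sut: { addOrder(order: { product: string }): Promise<void> };

test('adding an order increases the order count by one', async () => {
    const ordersAtStart = await db.countOrders();  // snapshot before acting
    await sut.addOrder({ product: 'widget' });
    // Assert on the delta rather than an absolute count, so data left behind
    // by other tests cannot break this test.
    expect(await db.countOrders()).toBe(ordersAtStart + 1);
});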

How to verify lots of events in a reasonable way

I am new to software testing. Currently I need to test a mid-sized web application. We have just refactored our codebase and added a lot of event logging logic to the existing code. The event logging code writes to both the Windows Event Log and a SQL database table.
There are about 200 events. What approach should I take to test/verify this refactoring effectively and efficiently?
Thanks.
I would be tempted to implement unit tests for each of the events to make sure when an event occurs the correct information is passed into your event logging logic.
This would mean that you can trigger one event on the deployed site and verify the data is written to the database and the event log, and then have an acceptable level of confidence that the remaining events will be recorded correctly.
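A hedged sketch of what such unit tests could look like, one table-driven case per event, with a fake logger recording what the application passes in. All names here are hypothetical placeholders, and it assumes the logging logic accepts an injected logger:

declare function raiseEvent(name: string, context: object): Promise<void>;  // hypothetical entry point that raises events

// Fake logger that records what the application hands to the logging logic.
const recorded: Array<{ name: string; payload: object }> = [];
const fakeLogger = {
    log: (name: string, payload: object) => { recorded.push({ name, payload }); },
};

const events = [
    { name: 'UserLoggedIn', context: { userId: 42 }, expected: { userId: 42 } },
    { name: 'OrderFailed',  context: { orderId: 7 }, expected: { orderId: 7 } },
    // ...one row per event, or generate the table from a spec
];

for (const evt of events) {
    test(`${evt.name} passes the correct data to the logging logic`, async () => {
        recorded.length = 0;                      // reset the fake between cases
        await raiseEvent(evt.name, evt.context);  // assumes the SUT has been given fakeLogger
        expect(recorded).toContainEqual({ name: evt.name, payload: evt.expected });
    });
}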
If unit testing isn't an option, then you will need to verify each event manually. I would alternate between checking the database and the event log, as there should be little risk of this area failing; that way you would have 200 tests rather than 400 tests.
You could also partition the application into sensible sections and trigger a few events for each section to give you a reasonable level of confidence in the application.
The approach you take will really be determined by how long you have to test, what the cost would be if an event didn't get logged, and how well developed the logging logic is.
Hope this helps
I would have added tests before you did the refactoring; you don't know where you have already broken it. :)
You say it logs to the Event Viewer and the DB. I hope you have exposed the logging feature as an interface, so that you can:
Extend it to log to some other device if needed
Also make mocking a lot easier
If you have 200 events to test, that's not going to be easy, to be honest. I don't think you can escape creating an equal number of tests for your 200 events.
I would do it this way:
I would search for all the places where my logging interface is used and note all the classes involved, and
start with the critical paths/ones first (that way you at least cover the critical ones).
Or you could start from the end, i.e. note down all the possible combinations of logs you are getting, perhaps pointing at stale data so that if the input is the same, the output should be the same too. Then every time, regression test your new binaries against this data, and you should get a similar number/level of logs.
This shouldn't be too difficult.
Pick a free automated web test tool like Watir (Ruby) or WatiN (.NET), or VS UI Test if you have it.
Create tests that cover the areas of the web application you expect/need to fire events. Examine the SQL DB after each test to see which events did fire.
If those event streams are correct for the test, add a step to the test to verify that exactly that event stream was created in the DB.
This will give you a set of tests that will validate the eventing from any portion of your web site in a repeatable fashion.
The efficient and effective part of this approach is that it allows you to create only as many tests as you need to verify the app. You also do not need to recreate a unit test approach with one test per event.
Automating the tests will allow you to re-execute them without additional effort, and this will really add up over the long haul.
This approach can also be taken with manual testing, but it will be tricky to get consistent and repeatable results, and re-testing will take nearly as long each time, as the testing uncovers defects that need to be fixed.
Note: while this will be the most effective and efficient way, it will not be exhaustive. There will likely be edge cases that get missed, but that can be said of nearly any test approach. Just add test cases until you get the coverage you need.
Hope this helps,
Chris