By many I mean hundreds or thousands. I need to test features that many users will need to see/hear. Obviously these users have different permission levels, and some are in different programs. Can a test case be written to pull user ids and passwords from the DB to test this way efficiently? Or is this something that is best tested manually by spot-checking different logons?
Call the DB before you run the test to get your users/passwords/whatever else.
Are you using NUnit? If so, you could use the NUnit ValueSourceAttribute to get the data into your test and use a variable for the credentials during your login step.
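A minimal sketch of that approach, assuming NUnit 3 and a SQL Server credentials table; the connection string, table, and column names are placeholders for your own schema. Since a login needs both a user id and a password, this uses TestCaseSource, the multi-argument sibling of ValueSource, to feed whole credential pairs:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class LoginTests
{
    // Hypothetical credentials table; match the query and connection
    // string to your own database.
    private static IEnumerable<TestCaseData> UsersFromDb()
    {
        using (var conn = new SqlConnection("Server=...;Database=TestDb;Integrated Security=true"))
        {
            conn.Open();
            using (var cmd = new SqlCommand("SELECT UserId, Password FROM TestUsers", conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    yield return new TestCaseData(reader.GetString(0), reader.GetString(1));
            }
        }
    }

    [Test, TestCaseSource(nameof(UsersFromDb))]
    public void UserCanSeeTheirContent(string userId, string password)
    {
        // Drive the login step with the credentials pulled above,
        // e.g. via a Selenium page object: loginPage.LogIn(userId, password);
        Assert.That(userId, Is.Not.Empty);
    }
}
```

NUnit then reports one result per user, so a failure for one permission level doesn't hide the others.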
I have been searching for a while now and am surprised that I can't find any solutions out there for test result storage with grouping and searching capabilities.
I'd like a service or self hosted solution that supports:
storing test results in xUnit/JUnit format, organized by keyword. In other words, I want to keep all my "test process A" results together and all my "test process B" results together. I want to store failure traces and overall pass/fail at a minimum
get last run results for keyword: get the last "auth" test results with failure details
get run history results by keyword in some format
search of some sort on test results
I happen to have:
Cypress tests
TypeScript/Mocha tests without Cypress
custom test framework tests that will need custom reporters
but I am fine with any test results solution that supports a generic input like xUnit.
I am definitely open to suggestions that use any other storage system that can accomplish this even if it isn't strictly a test results tool.
I am developing a solution for validating exams built on top of a web application. This implies that:
Multiple users, each with separate logins and tenants, will implement an application to match exam standards
The exam proctor will have to run a validator that checks the implemented application against the definition of what is correct for each step (i.e. in a given step, the unit price times the ordered quantity is the dollar amount to be ordered).
The validator should give exact reports of what occurred so the exam can be rated.
For this, we decided to implement a stack using Selenium for browser automation, and SpecFlow/Gherkin/Cucumber to interact with Selenium.
Right now the main issue I'm having is how to have the person who administers the exam easily and reliably validate, for 20 students, that their exam is correct. My current setup has a PowerShell script invoking the NUnit console runner, with SpecFlow then producing a detailed execution report.
Should my PowerShell script edit the feature files to insert tables containing the logins for each student, obtained from a .csv or something? Or is there a way I can pass the CSV file to NUnit so it can be used in the tests?
Thanks,
JM
I would put the login information into app.config or another file. Before you start the test run, change the values for that run; in the step definitions you then read the values from it.
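As a rough sketch of that app.config approach (the key names here are made up; use whatever your steps expect):

```csharp
using System.Configuration;

// App.config, swapped out per run (e.g. by your PowerShell script):
// <appSettings>
//   <add key="TestUser" value="student01" />
//   <add key="TestPassword" value="secret" />
// </appSettings>

public static class TestCredentials
{
    // Requires a reference to System.Configuration.
    public static string User => ConfigurationManager.AppSettings["TestUser"];
    public static string Password => ConfigurationManager.AppSettings["TestPassword"];
}
```

Your SpecFlow step definitions then read TestCredentials.User instead of a hard-coded login.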
I agree with all the responses provided earlier. However, if you don't want to do any of those, you can set an environment variable with the login key (or even the credentials) and save the login and password in a file, database, or even a CSV. At runtime you just need to read this key and apply whatever logic you want. This works well even on non-Windows machines, build agents, etc.
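A small sketch of that environment-variable idea, assuming a made-up variable name TEST_CREDENTIALS_FILE that points at a CSV of userId,password rows:

```csharp
using System;
using System.IO;
using System.Linq;

public static class CredentialStore
{
    // TEST_CREDENTIALS_FILE is an assumed variable name; set it before
    // the run (works on non-Windows machines and build agents too).
    public static (string User, string Password)[] Load()
    {
        var path = Environment.GetEnvironmentVariable("TEST_CREDENTIALS_FILE")
                   ?? "credentials.csv";
        return File.ReadAllLines(path)
                   .Select(line => line.Split(','))
                   .Select(parts => (parts[0].Trim(), parts[1].Trim()))
                   .ToArray();
    }
}
```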
I'm writing a suite of black-box automated tests for our application. I keep bumping into the same design problem, so I was wondering what people here think about it.
Basically, it's a simple CRUD system. For argument's sake, let's say you're testing the screens to create, view, edit and delete user accounts. What I would like to do is write one test which checks that user creation works correctly, another which checks that viewing a user shows you the same data you originally typed in, another which checks that editing a user works, and finally one which checks that deleting a user is OK.
The trouble is, if I do that, then the tests must be run in a certain order, or they won't work. (E.g., you can't delete a user that hasn't been created yet.) Now some say that the test setup should create everything that the test needs, and the teardown should put the system back into a consistent state. But think about it... the Create User test is going to need to delete that user afterwards, and the Delete User test will have to create a user first... so the two tests now have identical code, and the only difference is whether that code is in the setup / body / teardown. That just seems wrong.
In short, I seem to be faced with several alternatives, all of which seem broken:
Use setup to create users and teardown to delete them. This duplicates all of the Create User and Delete User test code as setup / teardown code.
Force the tests to run in a specific order. This violates the principle that tests should work in isolation and be runnable in any order.
Write one giant test which creates a user, views the user, edits the user, and then deletes the user, all as one huge monolithic block.
Note that creating a user is not a trivial matter; there's quite a lot of steps involved. Similarly, when deleting a user you have to specify what to do with their assigned projects, etc. It's not a trivial operation by any means.
Now, if this were a white-box test, I could mock the user account objects, or mock the database that holds them, or even prod the real database on disk. But these are black box tests, which test only the external, user-visible interface. (I.e., clicking buttons on a screen.) The idea is to test the whole system from end to end, without modifying it [except through GUI commands, obviously].
We have the same issue. We've taken two paths. In one style of test, we use the setup and teardown as you suggest to create the data (users, tickets, whatever) that the test needs. In the other style, we use pre-existing test data in the database. So, for example, if the test is AdminShouldBeAbleToCreateUser, we don't do either of those, because that's the test itself. But if the test is ExistingUserShouldBeAbleToCreateTicket, we use a pre-defined user in the test data, and if the test is UserShouldBeAbleToDeleteOwnTicket, we use a pre-defined user and create the ticket in the setup.
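To make the two styles concrete, here is a hedged NUnit sketch; the GUI helpers are placeholders for whatever page objects or step libraries you drive the screens with:

```csharp
using NUnit.Framework;

[TestFixture]
public class TicketTests
{
    private string _userId;

    // Style 1: the fixture creates its own data through the GUI
    // and removes it afterwards.
    [SetUp]
    public void CreateTestUser() => _userId = CreateUserViaGui("ticket-test-user");

    [TearDown]
    public void RemoveTestUser() => DeleteUserViaGui(_userId);

    [Test]
    public void ExistingUserShouldBeAbleToCreateTicket()
    {
        // Style 2 alternative: skip the SetUp entirely and log in as a
        // user pre-seeded in the test database.
        LogInAs(_userId);
        // ... create a ticket through the GUI and assert on the result ...
    }

    // Placeholder GUI helpers.
    private string CreateUserViaGui(string name) { /* GUI steps */ return name; }
    private void DeleteUserViaGui(string id) { /* GUI steps */ }
    private void LogInAs(string id) { /* GUI steps */ }
}
```

Note that AdminShouldBeAbleToCreateUser would use neither helper, because creating the user is the test body itself.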
I am new to automation testing and have started working with Selenium WebDriver and the NUnit framework.
I have some queries related to test data management, and am looking for the best approach.
I have to design some test cases where a user registers for an event, but can only register once. If I want to run the test multiple times or run the test on multiple browsers in parallel, what would be the best approach?
I also need to search for an event and perform some actions on it; these events will no longer be available if I run the test case a few days later.
You can clear the logical flag that makes the users registered and then re-use them. Just avoid re-using users across more than one browser.
If you are using automation and don't need to explicitly test the negative conditions of failing to re-register, then you build the registration clearing into the script.
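For example, a hedged sketch of that cleanup step, assuming direct database access and made-up table and column names; give each parallel browser its own user id so runs don't collide:

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class EventRegistrationTests
{
    [SetUp]
    public void ClearRegistrationFlag()
    {
        // Reset the "registered" flag so the same test user can
        // register again on every run.
        using (var conn = new SqlConnection("Server=...;Database=TestDb;Integrated Security=true"))
        {
            conn.Open();
            using (var cmd = new SqlCommand(
                "UPDATE EventRegistrations SET IsRegistered = 0 WHERE UserId = @user", conn))
            {
                cmd.Parameters.AddWithValue("@user", "event-test-user-1");
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```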
My question may sound a little bit stupid.
My team has to test a Web application that is used by 3 different User Roles. So, we start by writing our Test Cases based on the User Stories. My problem is that I don't want to create 3 different Test Cases for each User Role. This takes a lot of time, both when writing the Test Cases and later when executing them, because:
Total Test Cases = User Stories × Test Cases per User Story × User Roles.
Moreover, I don't want to have to create new Test Cases if a new User Role is added some time in the future, since they would just be duplicates with small differences.
Is there a better way to manage this situation?
Thanks in advance.
Single Responsibility Principle?
Code and test the user access separately from the user story, unless you really do get a completely different story based on your role, in which case it's a distinct spec and warrants its own test.
Not sure on the coding front (depends on what the situation is and how the code is implemented), but I can answer from a testing perspective (2 yrs so far, over half of it in a traditional waterfall system migrating to Agile).
The web application I test is similar in that we have three user types (global) and three user roles (tied to "projects", which are buckets of sites, sites in turn being buckets of imagery; look up EyeQ if curious). So, 9 possible combos, 8 of which can make a site. The current regression procedure doc has over 100 test cases, 20 or so of which are edit/create/delete site; 500+ test cases overall, the majority run manually (there are ongoing efforts to automate them, but that takes time as we've gone through a UI reboot).
Anyway, I've had to rewrite some of our manual procedures as a result of the sweeping UI changes, and I'm trying to avoid the mistakes authors before me made, such as the one you describe (excessive repetition, i.e. reusing the same test three times with slight variations).
Rather than stick to their strategy of writing cases, I use looping (the same term applies in coding): test cases that use one role/type combo per pass. Instead of having the same test case written 3+ times and executed separately for each role/type, use the procedure once but add a few steps at the end.
example test case:
user can create a site (8/9 of the type-role combos can do this in my app)
what they did before I came in:
test case 1- sys admin not tied to project can make site (10 steps);
test case 2- sys admin with project role can make site (same 10 steps);
test case 3- account admin not tied to proj can make site (same 10 steps as 1st case);
test case 4- account admin with proj role can make site (ditto);
test case 5... and so on
what I do:
test case 1: Do 10 steps as user with combo 1,
step 11- log out as that combo, log in as user with combo 2 and repeat 1-10,
step 12- log out as user from step 11 back in as user with combo 3 and repeat 1-10,
...
The difference:
3+ test cases and 30+ steps executed (in this case, about 100)
vs.
1 test case with under 20 written steps
Take this with a grain of salt though, it depends on your type of problem.
If it really is repetitive (as with the example) try to loop as much as possible.
The advantage is that, once you get an auto-test framework up, this becomes a simple for-loop within the test case, with an array or struct for input (see the sketch below).
The disadvantage is that it isn't as modular (it takes an extra 30 seconds to find the cause if something breaks, but that's my opinion).
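For what it's worth, a minimal sketch of that for-loop, with made-up combos and placeholder GUI helpers:

```csharp
using NUnit.Framework;

[TestFixture]
public class CreateSiteTests
{
    // One record per type/role combo; extend this array when a new
    // role appears instead of writing another test case.
    private static readonly (string User, string Password, string Combo)[] Combos =
    {
        ("sysadmin1",  "pw1", "sys admin, no project"),
        ("sysadmin2",  "pw2", "sys admin, project role"),
        ("acctadmin1", "pw3", "account admin, no project"),
        // ... remaining combos ...
    };

    [Test]
    public void EachComboCanCreateSite()
    {
        foreach (var c in Combos)
        {
            LogInAs(c.User, c.Password); // log out/in as the next combo
            CreateSite();                // the shared 10 steps
            LogOut();
        }
    }

    // Placeholder GUI helpers.
    private void LogInAs(string user, string password) { /* GUI steps */ }
    private void CreateSite() { /* GUI steps */ }
    private void LogOut() { /* GUI steps */ }
}
```

If you want one reported result per combo rather than one for the whole loop, feed the same array through TestCaseSource instead; that recovers the modularity at no extra writing cost.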
No need to overcomplicate this. Just build a matrix of access rights vs. user roles.
For example:
Rows: user modules (user rights)
Columns: user roles
Then mark in an Excel sheet which user role has which type of permission or access.
You can also download tools that can generate these kinds of permutations and combinations, e.g. Test Case Generator:
https://testcasegenerator.codeplex.com/
It is a great tool for working out permutations and combinations accurately.
We have tested role-based test cases for a huge enterprise application, with close to 38 roles and hundreds of fields (editable or not) across 15-20 web pages, using mind maps.
Since there were a lot of workflow statuses linked to each role, thorough testing was needed.
Add a generic test case covering functionality and permissions, and note in the test notes that it should be executed for each role as designed in the mind map. Attach the mind map to the test case.
We converted the test cases into a mind map:
Sample MindMap
Mind maps help consolidate large chunks of data into a pictorial form, which makes the test cases easier to understand and speeds up execution.
Simply create a table with user roles listed vertically and the functions mentioned in the step listed horizontally, then mark yes or no in each cell for that role. Repeat for each step. Your step description can verify the user's authorization to perform the action based on the table; if you have a test data column, you can put the table there.
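For illustration, such a table for a single step might look like this (the roles and functions are made up):

| Function      | Admin | Editor | Viewer |
|---------------|-------|--------|--------|
| Create record | yes   | yes    | no     |
| Edit record   | yes   | yes    | no     |
| Delete record | yes   | no     | no     |
| View record   | yes   | yes    | yes    |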