Testing multiple User Roles

My question may sound a little bit stupid.
My team has to test a web application that is used by 3 different User Roles. We start by writing our Test Cases based on the User Stories. My problem is that I don't want to create 3 different Test Cases, one per User Role, because writing and later executing them takes a lot of time:
Total Test Cases = User Stories × Test Cases per User Story × Number of User Roles.
Moreover, I don't want to have to create new Test Cases if a new User Role is added at some point in the future, because they would be just duplicates with small differences.
Is there a better way to manage this situation?
Thanks in advance.

Single Responsibility Principle?
Code and test the user access separately from the user story, unless you really do get a completely different story based on the role, in which case it's a distinct spec and warrants its own test.

Not sure on the coding front (depends on the situation and how the code is implemented), but I can answer from a testing perspective (2 years so far, over half of it in a traditional waterfall system migrating to Agile).
The web application I test is similar in that we have three user types (global) and three user roles (tied to "projects", which are buckets of sites, sites in turn being buckets of imagery; look up EyeQ if curious). So, 9 possible combos, 8 of which can make a site. The current regression procedure doc has over 100 test cases, 20 or so of which are edit/create/delete site; 500+ test cases overall, the majority run manually (there are ongoing efforts to automate them, but that takes time as we've gone through a UI reboot).
Anyway, I've had to rewrite some of our manual procedures as a result of the sweeping UI changes, and I am trying to avoid the mistakes authors before me made, such as the one you describe (excessive repetition, i.e. reusing the same test three times with slight variations).
Rather than stick to their strategy of writing cases, I use looping (the same term applies in coding): that is, test cases that use one role-type combo per pass. Instead of having the same test case written 3+ times and each executed separately for each role/type, use the procedure once but add a few steps at the end.
example test case:
user can create a site (8/9 of the type-role combos can do this in my app)
what they did before I came in:
test case 1- sys admin not tied to project can make site (10 steps);
test case 2- sys admin with project role can make site (same 10 steps);
test case 3- account admin not tied to proj can make site (same 10 steps as 1st case);
test case 4- account admin with proj role can make site (ditto);
test case 5... and so on
what I do:
test case 1: Do 10 steps as user with combo 1,
step 11- log out as that combo, log in as user with combo 2 and repeat 1-10,
step 12- log out as user from step 11 back in as user with combo 3 and repeat 1-10,
...
The difference:
3+ test cases, i.e. 30+ steps executed (in this case, about 100)
vs
1 test case: under 20 steps
Take this with a grain of salt though; it depends on your type of problem.
If it really is repetitive (as with the example), try to loop as much as possible.
The advantage is that when you get an auto-test framework up, a simple for-loop within the test case, fed by an array or struct of inputs, covers every combo (see the sketch below).
The disadvantage is that it isn't as modular (it takes an extra 30 seconds to find the cause when something breaks, but that's my opinion).
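As a rough sketch of what that loop might look like once automated, assuming NUnit and hypothetical App.LoginAs/CreateSite/Logout helpers wrapping the 10 manual steps:

using NUnit.Framework;

[TestFixture]
public class CreateSiteTests
{
    // Hypothetical role/type combos; in practice these could come from config or a DB.
    private static readonly (string User, string Password)[] Combos =
    {
        ("sysadmin_no_project", "pw1"),
        ("sysadmin_with_project_role", "pw2"),
        ("account_admin_no_project", "pw3"),
        // ... one entry per combo that may create a site
    };

    [Test]
    public void EveryAllowedComboCanCreateSite()
    {
        foreach (var combo in Combos)
        {
            App.LoginAs(combo.User, combo.Password);   // hypothetical page-object helper
            Assert.That(App.CreateSite("smoke-site"), Is.True,
                $"Site creation failed for {combo.User}");
            App.Logout();
        }
    }
}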

No need to get confused. You just need to make a matrix of access rights vs. User Roles.
For example:
Rows: user modules (rights of users)
Columns: user roles
Just mark in an Excel sheet which user role has what type of permission or access.
You can also download tools that can generate these kinds of permutations and combinations, for example Test Case Generator:
https://testcasegenerator.codeplex.com/
It is a great tool for working out permutations and combinations accurately.
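A minimal example of such a matrix (the modules and permissions here are hypothetical):

                Admin   Editor   Viewer
Create Site     Yes     Yes      No
Edit Site       Yes     Yes      No
Delete Site     Yes     No       No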

We tested role-based test cases like these for a huge enterprise application, with close to 38 roles and hundreds of fields editable or not editable across more than 15-20 web pages, using mind maps.
Since there were a lot of workflow statuses linked with each role, thorough testing was needed.
Add a generic test case covering functionality and permissions, and mention in the test notes to execute the test case for each role as per the mind map designed. Attach the mind map to the test case.
We converted the test cases into a mind map:
[Sample mind map image]
Mind maps help consolidate large chunks of data into a pictorial form, which makes the test cases easier to understand and speeds up execution.

Simply create a table with the user roles listed vertically and the functions mentioned in the step listed horizontally. Then mark yes or no in each cell for that role, and repeat for each step. Your step description can then be to verify the user's authorization to perform the action based on the table. If you have a test data column, you can put the table there.

Related

How do I design a Gherkin/SpecFlow/Selenium solution to have easily parametrizable logins

I am developing a solution for validating exams built on top of a piece of web software. This implies that:
Multiple users, each with separate logins and tenants, will implement an application to match the exam standards
The exam proctor will have to run a validator that checks the implemented application against the definition of what is correct for each step (e.g. in a given step, the unit price times the ordered quantity is the dollar amount to be ordered)
The validator should give exact reports of what occurred so the exam can be rated.
For this, we decided to implement a stack using Selenium for browser automation, and SpecFlow/Gherkin/Cucumber to drive Selenium.
Right now the main issue I'm having is how to let the person who administers the exam easily validate, for 20 students, that their exams are correct. My current way of running things is an NUnit console runner invoked by a PowerShell script, which then uses SpecFlow to create a detailed execution report.
Should my PowerShell script edit the feature files with tables containing the logins for each student, obtained from a .csv or something? Is there any way I can pass the CSV file to NUnit so it can be used in the tests?
Thanks,
JM
I would put the login information into the app.config or another file. Before you start the test run, change the values for that run. In the steps, you then read the values from it.
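A minimal sketch of that idea, assuming SpecFlow bindings and hypothetical StudentLogin/StudentPassword keys that your PowerShell script rewrites before each run:

using System.Configuration;   // reference System.Configuration.dll
using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    [Given(@"I am logged in as the configured student")]
    public void GivenIAmLoggedInAsTheConfiguredStudent()
    {
        // Hypothetical key names; set per run in App.config.
        var login = ConfigurationManager.AppSettings["StudentLogin"];
        var password = ConfigurationManager.AppSettings["StudentPassword"];
        // Drive the Selenium login page with these values...
    }
}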
I agree with all the responses provided earlier. However, if you don't want to do any of those, you can set an environment variable with the student login key (or even the credentials) and save login+password in a file, database, or even a CSV. At runtime, you just need to read this key and insert whatever logic you want. This will work well even on non-Windows machines, build machines, etc.
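A sketch of that environment-variable variant, assuming a hypothetical credentials.csv laid out as key,login,password and a hypothetical EXAM_LOGIN_KEY variable:

using System;
using System.IO;
using System.Linq;

static class Credentials
{
    public static (string Login, string Password) ForCurrentRun(string csvPath)
    {
        // The key for this run is supplied via an environment variable.
        var key = Environment.GetEnvironmentVariable("EXAM_LOGIN_KEY");
        var fields = File.ReadLines(csvPath)
                         .Select(line => line.Split(','))
                         .First(f => f[0] == key);
        return (fields[1], fields[2]);
    }
}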

How to test a Mule application flow?

I have been assigned the following task regarding a Mule flow application currently in production:
Store the IP of each client using the web service
Implement a control that limits each IP to ten requests to the website per day
I have knowledge of core Java and SQL but no background in Mule. All the people I can ask are in the same situation.
Once I got the app package (the one currently in production) up and running, I stopped it and added the following elements to the flow:
In a subflow with some initial tasks, I added a database element to store the IP of the computer using the web service (user_request is a table I have just created in the DB; it stores the IP and date of connection):
insert into user_request values
(#[MULE_REMOTE_CLIENT_ADDRESS], #[function:datestamp:dd-MM-yy])
To query the website, a database element performs a select query that feeds a choice with some inputs. Depending on the values of those inputs, the request is made to the website or not:
Database (Select) --> Choice --> Ask the website or not, depending on the select output
So, there, to the database element that performs the select I have added an additional output: a count of the user_request table for the current IP and current day. It can then provide the choice with the original inputs as usual plus this extra one (I am copying only the subquery I added):
SELECT COUNT(*) as TRIES FROM USER_REQUEST
WHERE IP_ADDRESS=#[MULE_REMOTE_CLIENT_ADDRESS]
AND REQUEST_DATE=#[function:datestamp:dd-MM-yy]
In the choice, I have added this condition to the path that finally asks the website:
#[payload.get(0).TRIES < 10]
Having reached this point, the app runs and gives no errors, but I don't know how to test it. Where does the flow start? How can I test it as if I were the user?
Additionally, if you see anything wrong in the syntax I used above, I would appreciate it if you told me.
Thanks in advance!
MUnit will require you to learn the basics of this process first, but it is the primary testing tool for Mule. With it, you create a test suite that executes the various flows and verifies that, given known inputs, the correct processing occurs in a repeatable manner. In the tests, you can mock critical calls, such as the write to your DB, so that the processor is invoked but the write is not actually performed and your DB table is not modified. Likewise, on reads from the DB you can either make a real call to get known data, or return mocked test data to exercise all paths in the flow.

Can Selenium be efficiently used to test many different user log ons?

By many I mean hundreds/thousands. I need to test features that many users will need to see/hear. Obviously these users have different permission levels, and some are in different programs. Can a test case be written to pull user IDs and passwords from the DB to test this way efficiently? Or is this something best tested manually by spot-checking different logons?
Call the DB before you run the test to get your users/passwords/whatever else.
Are you using NUnit? If so, you could use the NUnit ValueSourceAttribute to get the data into your test and use a variable for the credentials during your login step.
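A minimal sketch of the ValueSource approach, with a hypothetical LoadCredentialsFromDb standing in for the real DB call:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class LogonTests
{
    // Runs once when NUnit builds the test cases; replace the body with a real DB query.
    private static IEnumerable<string[]> LoadCredentialsFromDb()
    {
        yield return new[] { "user1", "password1" };
        yield return new[] { "user2", "password2" };
        // ... hundreds more rows, pulled from the DB
    }

    [Test]
    public void UserCanLogOn(
        [ValueSource(nameof(LoadCredentialsFromDb))] string[] credentials)
    {
        // Drive the Selenium login step with credentials[0] / credentials[1],
        // then assert on what this user should see/hear.
    }
}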

Designing a CRUD test suite

I'm writing a suite of black-box automated tests for our application. I keep bumping into the same design problem, so I was wondering what people here think about it.
Basically, it's a simple CRUD system. For argument's sake, let's say you're testing the screens to create, view, edit and delete user accounts. What I would like to do is write one test that checks that user creation works correctly, another that checks that viewing a user shows you the same data you originally typed in, another that checks that editing a user works, and finally a test that checks that deleting a user is OK.
The trouble is, if I do that, then the tests must run in a certain order, or they won't work. (E.g., you can't delete a user that hasn't been created yet.) Now, some say that the test setup should create everything the test needs, and the teardown should put the system back into a consistent state. But think about it: the Create User test is going to need to delete that user afterwards, and the Delete User test will have to create a user first, so the two tests now have identical code, and the only difference is whether that code is in the setup / body / teardown. That just seems wrong.
In short, I seem to be faced with several alternatives, all of which seem broken:
Use setup to create users and teardown to delete them. This duplicates all of the Create User and Delete User test code as setup / teardown code.
Force the tests to run in a specific order. This violates the principle that tests should work in isolation and be runnable in any order.
Write one giant test which creates a user, views the user, edits the user, and then deletes the user, all as one huge monolithic block.
Note that creating a user is not a trivial matter; there's quite a lot of steps involved. Similarly, when deleting a user you have to specify what to do with their assigned projects, etc. It's not a trivial operation by any means.
Now, if this were a white-box test, I could mock the user account objects, or mock the database that holds them, or even prod the real database on disk. But these are black box tests, which test only the external, user-visible interface. (I.e., clicking buttons on a screen.) The idea is to test the whole system from end to end, without modifying it [except through GUI commands, obviously].
We have the same issue. We've taken two paths. In one style of test, we use the setup and teardown, as you suggest, to create the data (users, tickets, whatever) the test needs. In the other style, we use pre-existing test data in the database. So, for example, if the test is AdminShouldBeAbleToCreateUser, we do neither of those, because that's the test itself. But if the test is ExistingUserShouldBeAbleToCreateTicket, we use a pre-defined user from the test data, and if the test is UserShouldBeAbleToDeleteOwnTicket, we use a pre-defined user and create the ticket in the setup.
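A minimal sketch of that second style, assuming NUnit and hypothetical User/Ticket types plus TestData/Tickets helpers that drive the GUI:

using NUnit.Framework;

[TestFixture]
public class TicketTests
{
    private User _user;
    private Ticket _ticket;

    [SetUp]
    public void Arrange()
    {
        // Pre-defined user from the seeded test data; the ticket is plumbing,
        // not the thing under test, so it is created here rather than in the test body.
        _user = TestData.PredefinedUser("existing_user");
        _ticket = Tickets.CreateFor(_user);
    }

    [Test]
    public void UserShouldBeAbleToDeleteOwnTicket()
    {
        Tickets.DeleteAs(_user, _ticket);
        Assert.That(Tickets.Exists(_ticket.Id), Is.False);
    }
}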

Selenium setup/teardown best practices- returning data to orginal state

[Test]
public void ChangingStateIsPersistedAfterSave()
{
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("Illinois"));
    _editUserPage.State.SelectedText = "New York";
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("New York"));
    _editUserPage.SaveChanges();
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("New York"));
}
In my example above, I am changing the user's state from Illinois to New York; my question is: should I change the state back to the original value of Illinois at the end of the test?
I have roughly 20 other independent tests in the same file, and I wanted to know what the best practice is for returning data to its original state. We are using setup/teardown for the entire test suite, just not within each individual test.
The best practice I have seen so far was this:
The test had one test data input (an Excel sheet)
Each run would add a prefix to the data (e.g. name Pavel => Test01_Pavel), as sketched below
The test verified that such data did not already exist in the system
The test created the testing data according to the input and verified that the data were present
The test deleted all the testing data and verified that the data were deleted.
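A minimal sketch of that prefixing idea (the Test01_ format comes from the example above; the timestamp variant is an assumption):

using System;

static class TestRun
{
    // One prefix per run makes the run's data easy to find and to delete.
    public static readonly string Prefix = $"Test{DateTime.UtcNow:yyyyMMddHHmmss}_";

    public static string Tag(string name) => Prefix + name;   // "Pavel" -> "Test20240101120000_Pavel"
}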
But the really best answer is "it depends." I, personally, am not deleting any test data from the system because:
The test environment is strictly divided from the prod one
Test data can be useful later on during performance testing (e.g. downloading the list of users from the system)
So the real question you should ask yourself is:
Does deleting test data at the end bring you anything good?
And vice versa: what happens if the test data remain in the system?
BTW, if you feel that "the application will definitely break if there is too much nonsense/dummy data in it", you should definitely test that scenario. Imagine that your service becomes popular overnight (Charlie Sheen tweeting about using your page :) ) and millions of users want to register.
The approach taken in the company I work for is:
Spin up a dedicated test environment in the cloud (AWS)
Kick off the suite of tests
Each test inserts the data it requires
Once the test suite has completed, tear down the servers, including the DB
This way you have a fresh database each time the tests run, and therefore the only danger of bad data breaking a test is if 2 tests in the suite generate conflicting data.