Functional tests philosophy: test features or requirements?

I'm currently writing some functional tests, and I started wondering which of these two philosophies is best.
Situation
My application has some secured pages that require the user's group to have the right credentials for access. Users are split into two groups: the 'collaborator' group and the 'accountable' group. Credentials are granted to the groups.
Possible philosophies
Solution 1: Test the credentials, a.k.a. test the features.
For each secured page, I test access with two users: one with the right credential (and only that one), and one without the right credential.
Pros: Tests only the fact that the page is secured against a specific credential.
Cons: Doesn't test the "final" application behavior, as wanted (and used) by the client.
Solution 2: Test the groups, a.k.a. test the requirements.
For each secured page, I test access with a user from each group, and check that only the allowed groups gain access to the secured page.
Pros: Tests the "final" application behavior, as wanted (and used) by the client.
Cons:
Tests are tied to the test fixtures.
Tests will have to change if the business rules change or if more groups are created.
Thank you.

I think the second solution is the right one. The credentials will still be tested insofar as they are associated with a group.
Pros: Tests the "final" application behavior, as wanted (and user) by the client.
This is the most important part. Functional tests aim to exercise the final application in every possible case. If you want to verify that your credentials behave the same whether attached to a user or a group, you'd better use unit tests.
Cons: Tests will have to change if the business rules change or if more groups are created.
Your test cases will always have to be updated when the business rules of your application change, just as with your unit tests: if you modify the code of a function, you check whether your unit tests still cover each case. It's the same with functional tests.
Maintaining your tests (and the fixtures they need to run) is a very tedious task, but it's the only way to ensure your code is robust.
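As a minimal sketch of that second solution, assuming a pytest harness: the page list, the allowed groups, and the get_client_for_group() login helper below are all hypothetical.

```python
import pytest

# Minimal sketch of solution 2. The page list, the allowed groups, and
# get_client_for_group() are hypothetical stand-ins for your fixtures.
PAGES = {
    "/reports": {"accountable"},
    "/tasks": {"collaborator", "accountable"},
}

@pytest.mark.parametrize("group", ["collaborator", "accountable"])
@pytest.mark.parametrize("page,allowed", PAGES.items())
def test_page_access_by_group(page, allowed, group):
    client = get_client_for_group(group)  # hypothetical: logs in as a member of `group`
    expected = 200 if group in allowed else 403
    assert client.get(page).status_code == expected
```

Adding a new group then means extending the fixture table, not writing new tests.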
Hope it helped.

I would do both tests. The first one, as you pointed out, does not need updating, but it tests the crucially important fact that users without entitlements do not have access. The second is the more comprehensive test and, as @TimotheeMartin pointed out, tests will always need to be updated when the code changes.


How to determine if an automated functional test was successful

Goal:
Determine if a functional test was successful.
Scenario:
We have a functional requirement: "A user should be able to sign up with a username and password. The username has to be a valid email address. The password has to be at least 8 characters long."
We have a method "SignupResult UserManager.Signup(string username, string password)".
We want a happy-path test with valid inputs and a sad-path test with invalid inputs.
Sub-systems of the UserManager (e.g. the database) can be either mocked or real systems.
Question:
What would be the best way to determine whether the user was successfully signed up? I can imagine the following options:
If any of the sub-systems is mocked, one could check whether a specific function like "DB.SaveUser(...)" was called. This would destroy the idea of a functional test being a black-box test, and it requires that the test writer know the implementation.
If we use real sub-systems, one could for example check whether the row in the DB exists. That would be inadequate for the same reason as the attempt above.
One could use another function like "UserManager.CheckUser(...)" to check whether the user was created. This would introduce another method to be tested; also, there may be operations with no "test counterpart", or one would have to implement them just for testing, which seems less than ideal.
We could check the result "SignupResult" and/or check for exceptions thrown. This would require defining the interface of the method, and it would require all methods to return a sensible value; I guess this will be a good approach anyway.
To me the last method seems to be the way to go. Am I correct? Are there other approaches? And how would we check side effects like "an email was sent to the new user"?
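A minimal sketch of that last option, asserting only on the returned result (the SignupResult shape and UserManager below are illustrative stand-ins, not the question's real interface):

```python
import re
from dataclasses import dataclass
from typing import Optional

# Illustrative stand-ins for the question's SignupResult/UserManager;
# the real interface is whatever your application defines.
@dataclass
class SignupResult:
    success: bool
    error: Optional[str] = None

class UserManager:
    def __init__(self):
        self._users = {}

    def signup(self, username: str, password: str) -> SignupResult:
        if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", username):
            return SignupResult(False, "INVALID_EMAIL")
        if len(password) < 8:
            return SignupResult(False, "PASSWORD_TOO_SHORT")
        self._users[username] = password
        return SignupResult(True)

# Both tests assert only on the returned result, never on rows or calls.
def test_signup_happy_path():
    assert UserManager().signup("alice@example.com", "longenough").success

def test_signup_rejects_short_password():
    result = UserManager().signup("alice@example.com", "short")
    assert not result.success and result.error == "PASSWORD_TOO_SHORT"
```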
You may want to acquaint yourself with the concept of the Test Pyramid.
There's no single correct way to design and implement automated tests - only trade-offs.
If you absolutely must avoid any sort of knowledge of implementation details, there's really only one way to go about it: test the actual system.
The problem with that is that automated tests tend to leave behind a trail of persistent state changes. For example, I once did something like what you're asking about and wrote a series of automated tests that used the actual system (a REST API) to sign up new users.
The operations people soon asked me to turn that system off, even though it only generated a small fraction of actual users.
You might think that the next-best thing would be a full systems test against some staging or test environment. Yes, but then you have to take it on faith that this environment sufficiently mirrors the actual production environment. How can you know that? By knowing something about implementation details. I don't see how you can avoid that.
If you accept that it's okay to know a little about implementation details, then it quickly becomes a question of how much knowledge is acceptable.
The experience behind the test pyramid is that unit tests are much easier to write and maintain than integration tests, which are again easier to write and maintain than systems tests.
I usually find that the sweet spot for these kinds of tests is self-hosted, state-based tests where only the actual system dependencies, such as databases or email servers, are replaced with Fakes (not Mocks).
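For example, a minimal sketch of such a Fake (names are illustrative; note how a side effect like "an email was sent" becomes observable state rather than a verified call):

```python
# A Fake is a small working implementation whose observable state the test
# inspects, unlike a Mock, which merely records calls. SignupService is a
# hypothetical system under test.
class FakeEmailServer:
    """In-memory replacement for the real SMTP dependency."""
    def __init__(self):
        self.outbox = []  # observable state instead of call verification

    def send(self, to, subject, body):
        self.outbox.append((to, subject, body))

class SignupService:
    def __init__(self, email_server):
        self.email_server = email_server

    def signup(self, username):
        self.email_server.send(username, "Welcome", "Your account is ready.")

def test_signup_sends_welcome_mail():
    mail = FakeEmailServer()
    SignupService(mail).signup("alice@example.com")
    assert mail.outbox and mail.outbox[0][0] == "alice@example.com"
```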
Perhaps it is the requirement that needs further refinement.
For instance, what precisely would your user do to verify that she has signed up correctly? How would she know? I imagine she'd look at the response from the system: "account successfully created". Then she'd only know that the system posts a message in response to a valid creation attempt.
Testing for the posted message is actionable; merely having a created account is not. This is acceptable as a more specific test, at a lower test level.
So think about why exactly users register. Just to see a response? How about this requirement:
When a user signs up with a valid username and a valid password, then she should be able to successfully log into the system using the combination of that username and password.
Then one can add a definition of a successful login, just like the definitions of validity of the username and password.
This is actionable, without knowing specifics about internals. It should be acceptable as far as system integration tests go.
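A sketch of that refined requirement as a round-trip test, where the only oracle is the system's own public interface (AccountSystem is a hypothetical stand-in):

```python
# Round-trip sketch of the refined requirement: sign up, then verify the same
# credentials can log in, using only the public interface as the oracle.
class AccountSystem:
    def __init__(self):
        self._users = {}

    def signup(self, username, password):
        self._users[username] = password

    def login(self, username, password):
        return self._users.get(username) == password

def test_signup_then_login_round_trip():
    system = AccountSystem()
    system.signup("bob@example.com", "longenough")
    assert system.login("bob@example.com", "longenough")
```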

Data Handling in a Cucumber Framework with 5000 scenarios. QAF implementation is Out of Scope

I am going to work on a new project where we have 5000 test cases/scenarios. Each scenario has common functions like login, amount transfer, etc., so each scenario needs certain data. With 5000 scenarios, I feel it will be very difficult to handle that data: if even the password of the login user changes, I need to update it 5000 times across all the scenarios in different feature files. This goes against the idea of automation, where we are targeting a reduction of manual effort. So I am asking here whether anyone has ideas or workarounds to handle such situations; I hope there are. Thanks.
Typically speaking you would create a new user for each test and discard it at the end of the test.
You should also not describe all the details of the users that are created; these are often not relevant to the test and merely incidental. Rather, use a template and describe only the changes made to this template in the feature file. If for auditing reasons you have to describe the template, you can write a scenario that tests the template itself.
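A sketch of that idea as a step definition, using Python's behave syntax (Cucumber-JVM is analogous; create_user() and delete_user() are hypothetical helpers):

```python
import uuid
from behave import given

# Templated, per-scenario user: the feature files never mention concrete
# credentials, only deviations from the template.
USER_TEMPLATE = {"role": "customer", "balance": 0}

@given("a logged-in user")
def step_logged_in_user(context):
    # Fresh credentials per scenario: a password change never has to be
    # propagated through 5000 feature files.
    user = dict(USER_TEMPLATE,
                name=f"user-{uuid.uuid4().hex[:8]}",
                password=uuid.uuid4().hex)
    context.user = create_user(user)                # hypothetical helper
    context.add_cleanup(delete_user, context.user)  # discard after the test
```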

API Testing - Best Strategy to adopt

I have a few questions for you. Let me give some details before I get to the questions.
Recently I have been involved in Rest api test automation.
We have a single REST API to fetch resources (this API is basically used in our web app UI to get data for different business workflows).
Though it is the same resource API, the actual endpoint varies for different situations, i.e. the query parameters passed in the API URL differ based on the business workflow.
For example, something like:
`../getListOfUsers?criteria=agedLessThanTen`
`../getListOfUsers?criteria=agedBetweenTenAndTwenty`
`../getListOfUsers?criteria=agedBetweenTwentyAndThirty`
As there is only one API, and as the business workflows do not demand it, we don't have chained requests between APIs.
So the test just hits the individual endpoints and validates the response content.
The responses are validated against supplied test data: the test data file holds the list of users expected when hitting each particular endpoint.
I.e. the test file is static content used to check the response every time we hit an endpoint; if the actual response retrieved from the server deviates from our supplied test data, it is a failure.
(There are also tests for no-content responses, requests without auth, etc.)
This test is OK to confirm that the endpoints are working and the response content is good.
My actual questions are about the test strategy, or the business coverage, here:
Is a single hit on each API endpoint sufficient, or should the same endpoint be hit again for other scenarios, especially when, as in the example above, the endpoints complement each other? Could a regression issue be caught by any of them?
If the API endpoints complement each other, will adding more tests just produce duplicated tests, more maintenance, and other problems later on, and should we avoid that if it's not giving value?
What's the general trend in API automation regarding coverage? I believe it should be used to test business integration flows and scenarios when they demand it, but for situations like this, is it really required?
Also, should we keep in mind that automation is not meant to replace manual testing but only to complement it, and that attempting to automate every possible scenario is not going to give value and will only cause maintenance trouble later?
Thanks
Is a single hit on each API endpoint sufficient?
Probably not. For each one you would want to verify various edge cases (e.g. lowest and highest values, longest string), negative tests (e.g. negative numbers where only positive ones are allowed), and other tests according to the business and implementation logic.
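For example, a hedged sketch with pytest and requests (the base URL, token, and expected statuses are assumptions about your system):

```python
import pytest
import requests

BASE = "https://api.example.test"  # hypothetical base URL

# Several scenarios per endpoint instead of a single hit: boundaries,
# an empty result, and a request without auth.
@pytest.mark.parametrize("criteria", [
    "agedLessThanTen",
    "agedBetweenTenAndTwenty",
    "agedOver200",  # edge case: expect 200 with an empty list
])
def test_get_list_of_users(criteria):
    resp = requests.get(f"{BASE}/getListOfUsers",
                        params={"criteria": criteria},
                        headers={"Authorization": "Bearer test-token"})
    assert resp.status_code == 200
    assert isinstance(resp.json(), list)

def test_get_list_of_users_without_auth():
    resp = requests.get(f"{BASE}/getListOfUsers",
                        params={"criteria": "agedLessThanTen"})
    assert resp.status_code == 401
```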
What's the general trend in API automation regarding coverage?
...
automation is not meant to replace manual testing but only to complement it... and attempting to automate every possible scenario is not going to give value and will only cause maintenance trouble later?
If you build your tests in a modular way, maintenance becomes less of an issue: you need to implement each API anyway, and the logic and test data above that should be the less complicated part of the test system.
Indeed, you usually want a test pyramid with many unit tests, some integration tests, and fewer end-to-end tests. But in this case, since there is no UI involved, the end user is just another software module; and since execution time for REST APIs is relatively short and stability is relatively good, it is probably acceptable to have a wider end-to-end test layer.
I used a lot of conditionals above, since only you can evaluate the situation in light of the real system.
If possible, consider generating test data on the fly instead of using hard-coded values from a file. This requires parallel logic implemented in your tests, but it will make maintenance and coverage easier; see the sketch below.
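A sketch of that parallel logic (seeded_users stands in for whatever seeding fixture your harness provides, hypothetical here):

```python
import requests

# Seed known users, then compute the expected subset in the test itself
# instead of reading a static expectations file.
def expected_names(users, low, high):
    return {u["name"] for u in users if low <= u["age"] < high}

def test_aged_between_ten_and_twenty(seeded_users):
    resp = requests.get("https://api.example.test/getListOfUsers",
                        params={"criteria": "agedBetweenTenAndTwenty"})
    assert {u["name"] for u in resp.json()} == expected_names(seeded_users, 10, 20)
```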

How to end to end test a web application?

As far as I know, end-to-end tests are black-box tests, so I should not know anything about the application, just use its interface and check the result... But how should I check a read-only API if I cannot touch the database to set up the data I intend to read with it?
Is it really crucial to write the fixtures with an e2e approach? In my current case, I have a www and a members subdomain. The content of the www subdomain is managed from the members subdomain with proper authorization. I want to develop with TDD, beginning with the e2e tests and ending with the unit tests... If I want to develop the www subdomain first, I cannot make an e2e fixture, because the data writing is not implemented yet. Should I implement the members subdomain first if I want to develop this way? I intend to use event sourcing, so it would be very easy to make a DB fixture: just insert some domain events, and that would be all...
I don't understand the logic of this kind of testing: if I cannot insert anything into the database, it will be empty at the beginning. But then I won't have a user with the proper authorization, so I cannot write into the database, and so I cannot test anything. This is a catch-22... At the very least I need a test user with the proper authorization, but creating a test user requires writing to the database... I think my brain burned out :D
All tests need data to test against, even black-box tests. Typically you will have a setup function that populates your database with data before the test runs, and you will likely have a teardown step as well, which restores the database to its pre-test state. This in no way violates any rules of testing.
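For example, a minimal pytest sketch of such a setup/teardown, with event sourcing in mind (the schema and event names are hypothetical):

```python
import pytest
import sqlite3

# Seed a throwaway database before the test and discard it afterwards.
# With event sourcing, inserting domain events *is* the setup.
@pytest.fixture
def seeded_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (stream TEXT, type TEXT, payload TEXT)")
    conn.execute(
        "INSERT INTO events VALUES ('user-1', 'UserRegistered', "
        "'{\"name\": \"admin\", \"role\": \"editor\"}')"
    )
    conn.commit()
    yield conn    # the test runs here
    conn.close()  # teardown: the in-memory DB vanishes with the connection

def test_admin_user_exists(seeded_db):
    rows = seeded_db.execute("SELECT type FROM events").fetchall()
    assert ("UserRegistered",) in rows
```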

Selenium Test Case vs. Test Suite vs. general usage

How do I know what should be a test case and what should be a test suite in Selenium? Is there any general rule for it? I've read the seleniumhq site and several others, but they only have some basic examples, while I want to test a whole website.
My questions are, for example:
1. Say I'm testing a multi-step web form. Should I make it one test suite where each step of the form is a single test case, or should all the steps be one test case?
2. Say I've built a web forum and I want to test several features in it. Do I make one test suite where each test case tests one feature (or several cases per feature), or do I make many test suites where each suite tests one feature with a few test cases?
3. What do I do if I have a form which contains 5 checkboxes, each of which can obviously be checked or not? This may have consequences when I submit the form, so theoretically there are 2^5 = 32 possible execution flows. Should I test all 32, or should I just test each checkbox separately to simplify things? When can/should I simplify, and when not (assuming the checkboxes MAY be somehow related)?
4. Should each feature have test cases which test both positive and negative results? For example, should I just focus on correct workflows, i.e. submit a valid form and see if the website did what I asked for, or also submit an empty form and check whether an error message appeared?
Can you answer these with some practical examples if needed, maybe using some site (StackOverflow?) as an example?
Answer to 1 and 2:
I think this is more an issue of test design than of Selenium. Consider Selenium a tool which controls the browser/website like a user would: it simulates a user clicking through the page. To know what a test case is and what a test suite is, you should think of the functionalities of your web application you want to test. Let's say you have a web shop; then one test case could cover the following use case:
user puts articles in cart
user enters his data (name etc)
user gets a summary of his order
user confirms the order
It depends on your application which workflows or functionality you want to test.
I would use one test suite per project, so one suite for one web application, and this application has a lot of test cases. Every test case is a use case.
When building a test suite, consider design patterns like UI mapping and the Page Object pattern, and consider the advantages of a test framework (like TestNG in Java).
Here are some links on that:
http://www.shino.de/2011/07/26/on-the-pageobject-pattern/
http://www.theautomatedtester.co.uk/tutorials/selenium/page-object-pattern.htm
http://www.cheezyworld.com/2010/11/09/ui-tests-not-brittle/
http://hedleyproctor.com/2011/07/automating-selenium-testing-with-testng-ant-and-cruisecontrol/
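A minimal Page Object sketch in the spirit of those links, using the Python Selenium bindings (the shop URL and locators are hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Page Object pattern: tests talk to page classes, never to raw locators.
class CartPage:
    def __init__(self, driver):
        self.driver = driver

    def add_article(self, name):
        self.driver.find_element(
            By.CSS_SELECTOR, f"[data-article='{name}'] .add-to-cart").click()
        return self

    def checkout(self):
        self.driver.find_element(By.ID, "checkout").click()
        return SummaryPage(self.driver)

class SummaryPage:
    def __init__(self, driver):
        self.driver = driver

    def text(self):
        return self.driver.find_element(By.ID, "order-summary").text

def test_order_use_case():  # one test case == one use case
    driver = webdriver.Chrome()
    try:
        driver.get("https://shop.example.test")  # hypothetical shop
        summary = CartPage(driver).add_article("book").checkout().text()
        assert "book" in summary
    finally:
        driver.quit()
```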
Answer to 3 and 4:
This is similar to 1 and 2: it is always a question of WHAT you want to test, or what your project leader (or customer) wants you to test. Every functionality which is important and should work should be tested.
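For question 3 specifically, a reduced combination set often gives most of the value. A hedged sketch (submit_form() is a hypothetical helper that drives the real form):

```python
import pytest

# Instead of all 2^5 = 32 combinations, cover each checkbox alone plus the
# all-off and all-on cases (7 runs), and add pairs only where checkboxes
# are known to interact.
CHECKBOXES = ["a", "b", "c", "d", "e"]
COMBOS = [()] + [(box,) for box in CHECKBOXES] + [tuple(CHECKBOXES)]

@pytest.mark.parametrize("checked", COMBOS)
def test_form_submission(checked):
    assert submit_form(checked).ok  # hypothetical: submits and returns a result
```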