How to end-to-end test a web application?

As far as I know, end-to-end tests are black-box tests, so I should not know anything about the application, just use its interface and check the result... But how should I test a read-only API if I cannot touch the database to set up the data I intend to read with it?
Is it really crucial to write the fixtures with an e2e approach? In my current case, I have a www and a members subdomain. The content of the www subdomain is managed from the members subdomain with proper authorization. I want to develop with TDD, beginning with the e2e tests and ending with the unit tests... If I want to develop the www subdomain first, I cannot make an e2e fixture, because the data writing is not implemented yet. Should I implement the members subdomain first if I want to develop this way? I intend to use event sourcing, so it would be very easy to make a DB fixture: just insert some domain events, and that would be all...
I don't understand the logic of this kind of testing: if I cannot insert anything into the database, it will be empty at the beginning. But then I won't have a user with proper authorization, so I cannot write into the database, and so I cannot test anything. So this is a catch-22... At the very least I need a test user with the proper authorization, but creating a test user requires writing to the database... I think my brain burned out :D

All tests need data to test against, even black-box tests. Typically you will have a setup function that populates your database with data before the test runs. You will likely have a teardown step as well, which restores the database to its pre-test state. This in no way violates any rules of testing.
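A minimal sketch of that setup/teardown pattern, using an in-memory SQLite database as a stand-in for the real one; the `users` table and the role data are illustrative assumptions, not part of the original question.

```python
import sqlite3

def setup_db():
    """Setup: create a fresh in-memory database and seed it with fixture data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")
    conn.commit()
    return conn

def teardown_db(conn):
    """Teardown: discard the database so the next test starts clean."""
    conn.close()

def test_read_only_endpoint():
    conn = setup_db()
    try:
        # Black-box style: the test only reads through the query interface
        # and checks the result against the data the fixture put in place.
        rows = conn.execute("SELECT name FROM users WHERE role = 'admin'").fetchall()
        assert rows == [("alice",)]
    finally:
        teardown_db(conn)
```

The seeded "test user with proper authorization" from the question is exactly this kind of fixture data: inserted by the test harness before the test, removed after it.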

Related

How to determine if an automated functional test was successful

Goal:
Determine if a functional test was successful.
Scenario:
We have a functional requirement: "A user should be able to sign up with a username and password. The username has to be a valid email address. The password has to be at least 8 characters long."
We have a method "SignupResult UserManager.Signup(string username, string password)".
We want a happy test with valid inputs, and a sad test with invalid inputs.
Sub-systems of the UserManager (e.g. the database) can be either mocked or real systems.
Question:
What would be the best way to determine whether the user was successfully signed up? I can imagine the following options:
If any of the sub-systems were mocked, one could check whether a specific function like "DB.SaveUser(...)" was called. This would destroy the idea of a functional test being a black-box test, and it requires that the test writer has knowledge of the implementation.
If we use real sub-systems, one could, for example, check whether the row in the DB exists. That would be just as inadequate as the approach above.
One could use another function like "UserManager.CheckUser(...)" to check whether the user was created. This would introduce another method that needs testing; also, there may be operations that have no "test counterpart", or one would have to implement them just for testing, which seems less than ideal.
We could check the result "SignupResult" and/or check for exceptions thrown. This would require defining the interface of the method. It would also require all methods to return a sensible value; I guess this will be a good approach anyway.
To me the last method seems the way to go. Am I correct? Are there other approaches? And how would we check side effects like "an email was sent to the new user"?
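A sketch of the last option, checking signup only through the returned result object. The `SignupResult` and `UserManager` here are hypothetical stand-ins built from the requirement in the question; the real interfaces may differ.

```python
from dataclasses import dataclass

@dataclass
class SignupResult:
    success: bool
    error: str = ""

class UserManager:
    def __init__(self):
        self._users = {}

    def signup(self, username: str, password: str) -> SignupResult:
        # Validation rules taken from the stated requirement.
        if "@" not in username:
            return SignupResult(False, "username must be a valid email address")
        if len(password) < 8:
            return SignupResult(False, "password must be at least 8 characters")
        self._users[username] = password
        return SignupResult(True)

def test_signup():
    manager = UserManager()
    # Happy path: valid inputs yield a successful result.
    assert manager.signup("a@example.com", "longenough").success
    # Sad paths: the result itself carries the failure, so the test
    # never has to peek into the database or a mock's call log.
    assert not manager.signup("not-an-email", "longenough").success
    assert not manager.signup("a@example.com", "short").success
```

The test stays a black box: it only depends on the method's declared interface, not on which sub-system call stores the user.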
You may want to acquaint yourself with the concept of the Test Pyramid.
There's no single correct way to design and implement automated tests - only trade-offs.
If you absolutely must avoid any sort of knowledge of implementation details, there's really only one way to go about it: test the actual system.
The problem with that is that automated tests tend to leave behind a trail of persistent state changes. For example, I once did something like what you're asking about and wrote a series of automated tests that used the actual system (a REST API) to sign up new users.
The operations people soon asked me to turn that system off, even though it only generated a small fraction of actual users.
You might think that the next-best thing would be a full systems test against some staging or test environment. Yes, but then you have to take it on faith that this environment sufficiently mirrors the actual production environment. How can you know that? By knowing something about implementation details. I don't see how you can avoid that.
If you accept that it's okay to know a little about implementation details, then it quickly becomes a question of how much knowledge is acceptable.
The experience behind the test pyramid is that unit tests are much easier to write and maintain than integration tests, which are again easier to write and maintain than systems tests.
I usually find that the sweet spot for these kinds of tests are self-hosted state-based tests where only the actual system dependencies such as databases or email servers are replaced with Fakes (not Mocks).
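To illustrate the Fake-versus-Mock distinction, here is a sketch of a state-based test using a fake email server; the `FakeEmailServer` and `signup_and_notify` names are invented for the example, and this also answers the "an email was sent to the new user" side-effect question.

```python
class FakeEmailServer:
    """A Fake: a working in-memory stand-in that records messages,
    rather than a Mock that records which methods were called."""
    def __init__(self):
        self.outbox = []

    def send(self, to: str, subject: str) -> None:
        self.outbox.append((to, subject))

def signup_and_notify(email: str, mailer) -> None:
    # ... create the account, then notify the user ...
    mailer.send(email, "account successfully created")

def test_welcome_email_is_sent():
    mailer = FakeEmailServer()
    signup_and_notify("a@example.com", mailer)
    # State-based check: inspect what state the fake accumulated,
    # not which calls the system under test happened to make.
    assert ("a@example.com", "account successfully created") in mailer.outbox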
Perhaps it is the requirement that needs further refinement.
For instance, what precisely would your user do to verify if she has signed up correctly? How would she know? I imagine she'd look at the response from the system: "account successfully created". Then she'd only know that the system posts a message in response to that valid creation attempt.
Testing for the posted message is actionable, just having a created account is not. This is acceptable as a more specific test, at a lower test level.
So think about why exactly users should register. Just to see a response? How about this requirement:
When a user signs up with a valid username and a valid password, then she should be able to successfully log into the system using the combination of that username and password.
Then one can add a definition of a successful login, just like the definitions of validity of the username and password.
This is actionable, without knowing specifics about internals. It should be acceptable as far as system integration tests go.
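The refined requirement can be sketched as a round-trip test; `UserAccounts` below is a hypothetical stand-in for the real system, used only to show the shape of the test.

```python
class UserAccounts:
    def __init__(self):
        self._users = {}

    def signup(self, username: str, password: str) -> bool:
        if "@" not in username or len(password) < 8:
            return False
        self._users[username] = password
        return True

    def login(self, username: str, password: str) -> bool:
        return self._users.get(username) == password

def test_signup_then_login():
    system = UserAccounts()
    assert system.signup("a@example.com", "longenough")
    # Success is defined purely by observable behavior:
    # the newly registered credentials work...
    assert system.login("a@example.com", "longenough")
    # ...and wrong credentials do not.
    assert not system.login("a@example.com", "wrong-password")
```

No assertion touches storage or internals; signing up is "successful" exactly when the subsequent login succeeds.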

API Testing - Best Strategy to adopt

I have a few questions for you. Let me give some details before I throw out the questions.
Recently I have been involved in REST API test automation.
We have a single REST API to fetch resources (this API is basically used in our web app UI for different business workflows to get data).
Though it's the same resource API, the actual endpoint varies for different situations; i.e. the query parameters passed in the API URL differ based on the business workflow.
For example, something like:
`./getListOfUsers?criteria=agedLessThanTen`
`./getListOfUsers?criteria=agedBetweenTenAndTwenty`
`./getListOfUsers?criteria=agedBetweenTwentyAndThirty` etc.
As there is only one API, and as the business workflow does not demand it, we don't have chained requests between APIs.
So the test just hits individual endpoints and validates the response content.
The responses are validated against supplied test data: the test data file contains the list of users expected when hitting each particular endpoint.
In other words, the test file is static content used to check the response every time we hit the endpoint; if the actual response retrieved from the server deviates from our supplied test data, it's a failure.
(There are also tests for no-content responses, requests without auth, etc.)
This test is fine for confirming that the endpoints work and that the response content is good.
My actual questions are about the test strategy, or business coverage, here:
Is a single hit on each API endpoint sufficient, or should the same endpoint be hit again for other scenarios, especially when, as in the example above, the endpoints complement each other? Could a regression issue be caught by any of them?
If the API endpoints complement each other, will adding more tests just produce duplicate tests, more maintenance, and other problems later on? Should we avoid it if it's not giving value?
What's the general trend in API automation regarding coverage? I believe it should be used to test the business integration flows and scenarios where they demand it, but for situations like this, is it really required?
Also, should we keep in mind here that automation is not meant to replace manual testing, but only to complement it, and that attempting to automate every possible scenario is not going to give value and will only cause maintenance trouble later?
Thanks
Is a single hit on each API endpoint sufficient here?
Probably not; for each one you would want to verify various edge cases (e.g. lowest and highest values, longest string), negative tests (e.g. negative numbers where only positive are allowed) and other tests according to the business and implementation logic.
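A sketch of what that looks like for one endpoint family. The criteria names come from the question; the local `get_list_of_users` function, its age ranges, and the sample users are illustrative stand-ins for the real HTTP calls.

```python
def get_list_of_users(users, criteria):
    """Stand-in for ./getListOfUsers?criteria=... with assumed age ranges."""
    ranges = {
        "agedLessThanTen": (0, 9),
        "agedBetweenTenAndTwenty": (10, 20),
        "agedBetweenTwentyAndThirty": (21, 30),
    }
    if criteria not in ranges:
        raise ValueError(f"unknown criteria: {criteria}")
    lo, hi = ranges[criteria]
    return [name for name, age in users if lo <= age <= hi]

# Users chosen to sit exactly on the boundaries between criteria.
USERS = [("ann", 9), ("bob", 10), ("cat", 20), ("dan", 21), ("eve", 30)]

def test_boundaries():
    # Edge cases: each boundary age must land in exactly one bucket,
    # which also checks that the complementary endpoints don't overlap.
    assert get_list_of_users(USERS, "agedLessThanTen") == ["ann"]
    assert get_list_of_users(USERS, "agedBetweenTenAndTwenty") == ["bob", "cat"]
    assert get_list_of_users(USERS, "agedBetweenTwentyAndThirty") == ["dan", "eve"]

def test_negative():
    # Negative test: an unknown criteria value should be rejected.
    try:
        get_list_of_users(USERS, "agedNegatively")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

The boundary users are the interesting part: a regression that shifts a range by one is invisible to a single mid-range hit but fails these tests immediately.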
What's the general trend in API automation regarding coverage?
...
automation is not meant to replace manual testing, but only to complement it... attempting to automate every possible scenario is not going to give value and will only cause maintenance trouble later?
If you build your tests in a modular way, then maintenance becomes less of an issue; you need to implement each API anyway, and the logic and test data above that should be the less complicated part of the test system.
Indeed, you usually want a test pyramid: many unit tests, some integration tests, and fewer end-to-end tests. But in this case, since there is no UI involved, the end user is just another software module, and execution time for REST APIs is relatively short and stability relatively good, it is probably acceptable to have a wider end-to-end test layer.
I used a lot of conditionals above, since only you can evaluate the situation in light of the real system.
If possible, consider generating test data on the fly instead of using hard-coded values from a file. This will require parallel logic implemented in your tests, but it will make maintenance and coverage an easier job.
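A sketch of that "parallel logic" idea: the test generates random users, derives the expected result itself, and compares it with the endpoint's answer. `fake_endpoint` stands in for the real HTTP call, and the `agedLessThanTen` filter logic is an assumption carried over from the example criteria.

```python
import random

def expected_aged_less_than_ten(users):
    # Parallel logic kept inside the test: the oracle for the endpoint.
    return sorted(name for name, age in users if age < 10)

def fake_endpoint(users):
    # Stand-in for hitting ./getListOfUsers?criteria=agedLessThanTen
    # against a server seeded with `users`.
    return sorted(name for name, age in users if age < 10)

def test_with_generated_data():
    rng = random.Random(42)  # seeded, so a failure is reproducible
    users = [(f"user{i}", rng.randint(0, 40)) for i in range(50)]
    assert fake_endpoint(users) == expected_aged_less_than_ten(users)
```

Compared with a static file, new cases cost nothing: change the generator, and both the input and the expectation update together.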

If the client keeps on changing the requirements every now and then, then what testing method should be followed?

I always perform regression testing as soon as the changes come up. The problem is that the client comes up with changes or additional requirements every now and then, and that makes things messy. I test something, and then the whole thing gets changed. Then I have to test the changed module again and perform integration testing with the other modules linked to it.
How to deal with such cases?
1) First, ask for the complete client requirements and note every small point in a document.
2) Understand the total functionality.
3) Use your default testing method.
4) You didn't mention which type you are testing (app or portal).
5) Continue with whichever testing approach is comfortable and feels easy to you.
6) If you want automation testing, use Appium for apps or Selenium for the web.
I hope this is helpful for you.
I would suggest the following things:
-> Initially, gather all the requirements and check with the client by email if you have any queries.
-> Document everything in the minutes of meeting (MoM) whenever you have a client call, and share it with everyone who attended the call (dev team, client, business, QA).
-> Prepare a test plan/strategy document and test cases, share them with the client, and request sign-off.
-> Once you are all set, start with smoke testing, then check the major functionalities in that release, and then proceed further.
-> You could automate the regression test cases, as you are going to execute them for every release (I would suggest Selenium, or UFT if it's a desktop application).
Kindly let me know if you have any queries.

What should I test in views?

Testing and RSpec are new to me. Currently I'm using RSpec with Shoulda and Capybara to test my application. It's all fine to test models, controllers, helpers, routing and requests. But what exactly should I test in views? Actually I want to test everything in views, including the DOM, but I also don't want to overdo things.
These three things would be a good starting point:
Use Capybara to start at the root of your site and have it click on links until it gets to the view you want tested.
Make sure whatever content is supposed to be on the page is actually showing up on the page. So, if the 'user' went to the Product 1 page, make sure all the Product 1 content is actually there.
If different users see different content, make sure that's working. So, if Admin users see Admin-type buttons, make sure the buttons are there when the user is an Admin, and aren't when the user isn't.
Those three things are a pretty good base. Even the first one is a big win: it will catch any weird view syntax errors you may have accidentally introduced, as the syntax error will fail the test.
At my work we use RSpec only for unit testing.
For business or behavior testing we use Cucumber, which is much more readable for business and IT people.
It's like a contract that you sign with your business, or documentation that you can execute.
Have a look at Cucumber: http://cukes.info/
I use view specs to verify that the view uses the IDs and classes I depend on in my jQuery code.
And to test different versions of the same page. E.g.:
I would not want to create two full request or feature specs to check that a new user sees welcome message A and a returning user sees welcome message B. Instead I would pick one of the cases, write a request or feature spec for it, and then an additional view spec that tests both cases.
Rails Test Prescriptions might be interesting for you, since it has a chapter dedicated to view testing.

Functional tests philosophy : test features or requirement?

I'm currently writing some functional tests, and I started wondering which of these two philosophies is best.
Situation
My application has some secured pages that require the user's group to have the right credentials for access. These users are split into two groups, the 'collaborator' group and the 'accountable' group, and credentials are given to the groups.
Possible philosophies
Solution 1: Test the credentials, a.k.a. test the features.
For each secured page, I test access with two users: one with the right credential (and only that one), and one without the right credential.
Pros: Tests only the fact that the page is secured against a specific credential
Cons: Doesn't test the "final" application behavior, as wanted (and used) by the client.
Solution 2: Test the groups, a.k.a. test the requirements.
For each secured page, I test access with a user from each group and check that only the allowed groups gain access to the secured page.
Pros: Tests the "final" application behavior, as wanted (and used) by the client.
Cons:
Tests are tied to the test fixtures.
Tests will have to change if the business rules change or if more groups are created.
Thank you.
I think the second solution is the right one. The credentials will be tested insofar as they are associated with a group.
Pros: Tests the "final" application behavior, as wanted (and user) by the client.
This is the most important part. Functional tests aim to test the final application in every possible case. If you want to test that your credentials produce the same behavior for a user or a group, you'd better use unit tests.
Cons: Tests will have to change if the business rules changes or if more groups are created.
Your test cases will always have to be updated if the business of your application changes, just as with your unit tests. If you modify the code of a function, you check whether your unit tests still cover each case. It's the same with functional tests.
Maintaining your tests (and the fixtures they need to run) is a very tedious task, but it's the only way to ensure your code is solid.
Hope it helped.
I would do both tests. The first one, as you pointed out, doesn't need updating, but it tests the crucially important fact that users without entitlements do not have access. The second is the more comprehensive test and, as @TimotheeMartin pointed out, tests will always need to be updated when the business rules change.
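Both philosophies can live in one table-driven test: each row of the table is a requirement (which groups may reach which page), and iterating over all groups also covers the feature-style negative cases. The page names, groups beyond those in the question, and `can_access` are illustrative stand-ins for the real secured-page check.

```python
# Requirement table: page -> set of groups allowed to access it.
ACCESS_RULES = {
    "reports_page": {"accountable"},
    "editor_page": {"collaborator", "accountable"},
}

def can_access(page, group):
    # Stand-in for the real "log in as a member of `group`,
    # request `page`, observe allow/deny" round trip.
    return group in ACCESS_RULES.get(page, set())

def test_group_access_matrix():
    for page, allowed in ACCESS_RULES.items():
        for group in ("collaborator", "accountable", "anonymous"):
            # Requirement-style check: exactly the allowed groups get in;
            # every other group is the feature-style negative case.
            assert can_access(page, group) == (group in allowed)
```

When the business rules change or a group is added, only the table changes; the loop already tests the new rows in both directions.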