Is it good practice to create a separate database with seeded data for Cypress tests?

Suppose I have to test a feedback feature where User A can give feedback to User B. In that case, I need to log in to the app, make sure User A is a friend of User B, and do whatever additional setup is needed for the feedback feature to be available in the UI. Only then can I write a Cypress test to verify that feedback can be added/edited/deleted and so on.
So my question is: what is a good approach for cases like this? Should I have a separate server and database where User A is already friends with User B and every other precondition (feature flags and so on) is already in place? That way I could focus only on the actual flow.
Or should I set up these preconditions in a before hook and clean up after my tests succeed?
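For concreteness, here is a rough sketch of the before-hook approach in Cypress. The /api/test/seed and /api/test/cleanup endpoints, the payload shape, and the cy.login() custom command are made up for illustration; the idea is to seed the friendship through an API or task instead of clicking through the UI:

describe('feedback', () => {
  before(() => {
    // seed both users and the friendship through a test-only endpoint
    cy.request('POST', '/api/test/seed', {
      users: ['userA', 'userB'],
      friendships: [['userA', 'userB']],
    });
  });

  beforeEach(() => {
    cy.login('userA'); // hypothetical custom command, e.g. programmatic login via the API
  });

  it('lets User A leave feedback for User B', () => {
    cy.visit('/users/userB');
    cy.get('[data-test=feedback-input]').type('Great collaborator');
    cy.get('[data-test=feedback-submit]').click();
    cy.contains('Great collaborator').should('be.visible');
  });

  after(() => {
    cy.request('POST', '/api/test/cleanup'); // discard the seeded data
  });
});

Whether the seeding runs against a dedicated test database or the one the rest of the suite uses then becomes an operational choice rather than something baked into the test itself.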

How to determine if an automated functional test was successful

Goal:
Determine if a functional test was successful.
Scenario:
We have a functional requirement: "A user should be able to sign up with a username and password. The username has to be a valid email address. The password has to be at least 8 characters long."
We have a method "SignupResult UserManager.Signup(string username, string password)".
We want a happy test with valid inputs and a sad test with invalid inputs.
Sub-Systems of the UserManager (e.g. Database) can be either mocked or real systems.
Question:
What would be the best way to determine whether the user was successfully signed up? I can imagine the following options:
If any of the sub-systems were mocked, one could check whether a specific function like "DB.SaveUser(...)" was called. This would destroy the idea of a functional test being a black-box test, and it requires that the test writer have knowledge of the implementation.
If we use real sub-systems, one could for example check whether the row exists in the DB. That would be inadequate for the same reason as the attempt above.
One could use another function like "UserManager.CheckUser(...)" to check whether the user was created. This would introduce another method under test; also, there may be operations that have no "test counterpart", or one would have to implement them just for testing - that seems less than ideal.
We could check the result "SignupResult" and/or check for exceptions thrown. This would require defining the interface of the method. This also would require all methods to return a sensible value - I guess this will be a good approach anyway.
To me the last method seems to be the way to go. Am I correct? Are there other approaches? And how would we check side effects like "an email was sent to the new user"?
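For concreteness, a minimal sketch of that last option, assuming the pseudo-signature from the question (SignupResult's fields and the UserManager constructor are invented here, and the sketch is in TypeScript rather than the original language):

import assert from 'node:assert';

// assumed shape, only so the sketch is self-contained
declare class UserManager {
  constructor(/* real or faked sub-systems */);
  Signup(username: string, password: string): { succeeded: boolean };
}

const manager = new UserManager();

// happy path: valid email address, password of at least 8 characters
const ok = manager.Signup('alice@example.com', 'longenough1');
assert.strictEqual(ok.succeeded, true);

// sad path: invalid email address
const bad = manager.Signup('not-an-email', 'longenough1');
assert.strictEqual(bad.succeeded, false);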
You may want to acquaint yourself with the concept of the Test Pyramid.
There's no single correct way to design and implement automated tests - only trade-offs.
If you absolutely must avoid any sort of knowledge of implementation details, there's really only one way to go about it: test the actual system.
The problem with that is that automated tests tend to leave behind a trail of persistent state changes. For example, I once did something like what you're asking about and wrote a series of automated tests that used the actual system (a REST API) to sign up new users.
The operations people soon asked me to turn that system off, even though it only generated a small fraction of actual users.
You might think that the next-best thing would be a full systems test against some staging or test environment. Yes, but then you have to take it on faith that this environment sufficiently mirrors the actual production environment. How can you know that? By knowing something about implementation details. I don't see how you can avoid that.
If you accept that it's okay to know a little about implementation details, then it quickly becomes a question of how much knowledge is acceptable.
The experience behind the test pyramid is that unit tests are much easier to write and maintain than integration tests, which are again easier to write and maintain than systems tests.
I usually find that the sweet spot for these kinds of tests are self-hosted state-based tests where only the actual system dependencies such as databases or email servers are replaced with Fakes (not Mocks).
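To illustrate the Fake-versus-Mock point, and the "was an email sent" side effect from the question: a fake records state, and the test asserts on that state afterwards, rather than verifying which methods were called. All names below are assumptions for illustration, not an API from the question:

// assumed interface, only for illustration
interface EmailSender {
  send(to: string, subject: string, body: string): void;
}

class FakeEmailSender implements EmailSender {
  sent: Array<{ to: string; subject: string; body: string }> = [];
  send(to: string, subject: string, body: string): void {
    this.sent.push({ to, subject, body });
  }
}

// assumed constructor taking its dependencies, so the fake can be injected
declare class UserManager {
  constructor(emails: EmailSender);
  Signup(username: string, password: string): { succeeded: boolean };
}

const emails = new FakeEmailSender();
const manager = new UserManager(emails);
manager.Signup('alice@example.com', 'longenough1');

// state-based check: exactly one welcome mail ended up in the fake outbox
console.assert(emails.sent.length === 1 && emails.sent[0].to === 'alice@example.com');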
Perhaps it is the requirement that needs further refinement.
For instance, what precisely would your user do to verify that she has signed up correctly? How would she know? I imagine she'd look at the response from the system: "account successfully created". Then she'd only know that the system posts a message in response to that valid creation attempt.
Testing for the posted message is actionable, just having a created account is not. This is acceptable as a more specific test, at a lower test level.
So think about why exactly users should register. Just to see a response? How about this requirement:
When a user signs up with a valid username and a valid password, then she should be able to successfully log into the system using the combination of that username and password.
Then one can add a definition of a successful login, just like the definitions of validity of the username and password.
This is actionable, without knowing specifics about internals. It should be acceptable as far as system integration tests go.
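Under that refined requirement, a system-level check might look roughly like this. The /signup and /login endpoints, payloads, and status codes are assumptions; any HTTP client and assertion library would do:

import assert from 'node:assert';

// uses the global fetch available in Node 18+
async function signupThenLogin(baseUrl: string): Promise<void> {
  const username = `user-${Date.now()}@example.com`; // unique per run
  const password = 'at-least-8-chars';

  const signup = await fetch(`${baseUrl}/signup`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password }),
  });
  assert.strictEqual(signup.status, 201, 'signup should report success');

  const login = await fetch(`${baseUrl}/login`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password }),
  });
  assert.strictEqual(login.status, 200, 'the new user should be able to log in');
}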

Data Handling in a Cucumber Framework with 5000 scenarios. QAF implementation is Out of Scope

I am going to work on a new project where we have 5000 test cases/scenarios. Each scenario uses common functions like login, Amount Transfer, etc., so each scenario needs certain data. With 5000 scenarios, I feel it will be very difficult to manage that data: if even the password of the login user changes, I would need to update it 5000 times across different feature files. This goes against the idea of automation, where the aim is to reduce manual effort. So I am asking whether anyone has ideas or workarounds to handle such situations - I hope there are some. Thanks.
Typically speaking you would create a new user for each test and discard it at the end of the test.
You should also not describe all the details of the users that are created; these are often irrelevant to the test and merely incidental. Instead, use a template and describe only the changes made to this template in the feature file. If, for auditing reasons, you have to describe the template, you can write a scenario that tests it.
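For example, a rough cucumber-js sketch of that idea: the template lives in one place in the support code, so a password change is a one-line edit rather than 5000. The createUser() helper and the template values are hypothetical:

import { Before, Given } from '@cucumber/cucumber';

// one template holds the incidental details, including the password
const userTemplate = { role: 'customer', password: 'generated-once-here', balance: 0 };

// hypothetical helper that creates the user through your application's API
declare function createUser(attrs: typeof userTemplate): Promise<{ id: string }>;

let currentUser: { id: string };

Before(async () => {
  // a fresh user per scenario, discarded afterwards
  currentUser = await createUser({ ...userTemplate });
});

Given('the user has a balance of {int}', async (balance: number) => {
  // feature files only state the details the scenario actually cares about
  currentUser = await createUser({ ...userTemplate, balance });
});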

If the client keeps on changing the requirements every now and then, then what testing method should be followed?

I always perform regression testing as soon as changes come up. The problem is that the client comes up with changes or additional requirements every now and then, which makes things messier. I test something, and then the whole thing gets changed. Then I have to test the changed module again and perform integration testing with the other modules linked to it.
How to deal with such cases?
1) First, ask for the complete client requirements and note every small point in a document.
2) Understand the overall functionality.
3) Use your default testing method.
4) You did not mention which type of testing you are doing (app or portal).
5) Continue with whatever approach is comfortable and feels easy to you.
6) If you want automation testing, use Appium for apps or Selenium for the web.
I hope this is helpful for you.
I would suggest the following:
-> Initially gather all the requirements and check with the client by email if you have any queries.
-> Document everything in minutes of meeting (MoM) whenever you have a client call, and share them with everyone who attended the call (dev team, client, business, QA).
-> Prepare a test plan/strategy document and test cases, share them with the client, and request sign-off.
-> Once you are all set, start with smoke testing, then check the major functionalities in that release, and then proceed further.
-> You could automate the regression test cases, as you are going to execute them for every release (I would suggest Selenium, or UFT if it's a desktop application).
Kindly let me know if you have any queries.

Cucumber - Javascript Invoke Login Step Definitions Before Other Step Definitions

Using Chimp.js, Cucumberjs and WebdriverIO, I'm trying to run login step definitions in a browser instance before other step definitions that depend on a user being logged in, ideally without adding them to the Background over and over again in every feature file.
Is this possible? I'm quite new to WebdriverIO and Cucumber, and any advice would be a great help. Please let me know if more info is needed.
Personally I don't think this is a good idea. To log someone in you have to specify 'who' the user is. Later when your application becomes more complex you might have interactions between different users. Hiding any of this from the scenario is not good.
What you can do is combine user specification and login in single steps e.g.
Given I am logged in as an admin
Given Fred is logged in as a sales executive
etc.
If you are clever about how you implement these steps, you can keep things fairly DRY by extracting helper methods from the step definitions and using global variables to store people, e.g.
Given 'I am logged in as an admin' do
  # create_user and login are the shared helper methods mentioned above
  @i = create_user role: :admin
  login as: :admin, user: @i
end
and reuse these methods in other login steps.
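Since the question is about Cucumberjs with WebdriverIO rather than Ruby, roughly the same idea looks like this there. The createUser() and loginAs() functions are hypothetical shared helpers, e.g. hitting your API and driving the login page:

import { Given } from '@cucumber/cucumber';

// hypothetical shared helpers
declare function createUser(attrs: { name?: string; role: string }): Promise<{ name?: string; role: string }>;
declare function loginAs(user: { name?: string; role: string }): Promise<void>;

Given('I am logged in as an admin', async () => {
  const admin = await createUser({ role: 'admin' });
  await loginAs(admin);
});

Given('{word} is logged in as a sales executive', async (name: string) => {
  const user = await createUser({ name, role: 'sales executive' });
  await loginAs(user);
});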
If you organise your features well, you can background a lot of these calls, e.g.
Feature: Basic admin ops

  Background:
    Given I am logged in as an admin

  Scenario: I can foo
    When I foo

  Scenario: I can bar
    When I bar
Some final thoughts ...
Each scenario is there to drive a particular piece of development. Compared to the work of doing the development writing "Given I am logged in" is trivial.
When something goes wrong knowing that you were supposed to be logged in is an essential piece of information.

Functional tests philosophy: test features or requirements?

I'm currently writing some functional tests, and I started wondering which of these two philosophies is better.
Situation
My application has some secured pages that require the user's group to have the right credentials for access. Users are split into two groups: the 'collaborator' group and the 'accountable' group. Credentials are granted to groups.
Possible philosophies
Solution 1: Test the credentials, a.k.a. test the features.
For each secured page, I test access with two users: one with the right credential (and only that one), and one without it.
Pros: Tests only the fact that the page is secured against a specific credential
Cons: Doesn't test the "final" application behavior, as wanted (and used) by the client.
Solution 2: Test the groups, a.k.a. test the requirements.
For each secured page, I test access with a user from each group and check that only the allowed groups gain access to the secured page.
Pros: Tests the "final" application behavior, as wanted (and used) by the client.
Cons:
Tests are tied to the test fixtures.
Tests will have to change if the business rules change or if more groups are created.
Thank you.
I think the second solution is the right one. The credentials will still be tested, insofar as they are associated with a group.
Pros: Tests the "final" application behavior, as wanted (and used) by the client.
This is the most important part. Functional tests aim to exercise the final application in every possible case. If you want to test that your credentials have the same behavior with a user or with a group, you'd better use unit tests.
Cons: Tests will have to change if the business rules change or if more groups are created.
Your test cases will always have to be updated when the business rules of your application change, just as with your unit tests. If you modify the code of a function, you check whether your unit tests still cover each case. It's the same with functional tests.
Maintaining your tests (and the fixtures they need to run) is a very tedious task, but it's the only way to ensure your code is robust.
Hope it helped.
I would do both tests. The first one, as you pointed out, does not need updating, but it tests the crucially important fact that users without entitlements do not have access. The second is the more comprehensive test and, as @TimotheeMartin pointed out, tests will always need to be updated when the code changes.
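If you do both, a table-driven layout keeps the duplication down. A rough Cypress-flavoured sketch; the pages, the expected outcomes, the "Access denied" text, and the cy.loginAsGroup() custom command are all made up for illustration:

const cases = [
  { group: 'collaborator', page: '/reports', allowed: true },
  { group: 'accountable', page: '/reports', allowed: true },
  { group: 'collaborator', page: '/billing', allowed: false },
  { group: 'accountable', page: '/billing', allowed: true },
];

cases.forEach(({ group, page, allowed }) => {
  it(`${group} ${allowed ? 'can' : 'cannot'} open ${page}`, () => {
    cy.loginAsGroup(group);                      // hypothetical custom command
    cy.visit(page, { failOnStatusCode: false }); // do not fail outright on a 403
    if (allowed) {
      cy.url().should('include', page);
    } else {
      cy.contains('Access denied').should('be.visible'); // assumed error page text
    }
  });
});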