If the client keeps changing the requirements every now and then, what testing method should be followed? - testing

I always perform regression testing as soon as changes come up. The problem is that the client comes up with changes or additional requirements every now and then, which makes things messy. I test something and then the whole thing gets changed. Then I have to retest the changed module and perform integration testing with the other modules linked to it.
How to deal with such cases?

1) First, ask for the complete client requirements and note every small point in a document.
2) Understand the total functionality.
3) Use your default testing method.
4) You haven't mentioned which type of application you are testing (app or portal).
5) Continue with whichever testing approach you are comfortable with.
6) If you want automation testing, use Appium for mobile apps or Selenium for the web.
I hope this is helpful for you.

I would suggest the following:
->Initially gather all the requirements and check with the client over email if you have any queries.
->Document everything in the minutes of meeting (MoM) whenever you have a client call, and share it with everyone who attended the call (dev team, client, business, QA).
->Prepare a test plan/strategy document and test cases, share them with the client, and request sign-off.
->Once you are all set, start with smoke testing, then check the major functionalities in that release, and then proceed further.
->You could automate the regression test cases, since you are going to execute them for every release (I would suggest Selenium, or UFT if it is a desktop application).
Kindly let me know if you have any queries.


How to determine if an automated functional test was successful

Goal:
Determine if a functional test was successful.
Scenario:
We have a functional requirement: "A user should be able to sign up with a username and password. The username has to be a valid email address. The password has to be at least 8 characters long."
We have a method "SignupResult UserManager.Signup(string username, string password)".
We want a happy-path test with valid inputs, and a sad-path test with invalid inputs.
Sub-systems of the UserManager (e.g. the database) can be either mocked or real systems.
Question:
What would be the best way to determine whether the user was successfully signed up? I can imagine the following options:
If any of the sub-systems were mocked, one could check whether a specific function like "DB.SaveUser(...)" was called. This would destroy the idea of a functional test being a black-box test, and it requires that the test writer has knowledge of the implementation.
If we use real sub-systems, one could for example check whether the row exists in the DB. That would be inadequate for the same reason as the attempt above.
One could use another function like "UserManager.CheckUser(...)" to check whether the user was created. This would introduce another method that itself has to be tested; also, there may be operations with no "test counterpart", or one would have to implement them just for testing, which seems less than ideal.
We could check the result "SignupResult" and/or check for exceptions thrown. This requires defining the interface of the method, and it requires all methods to return a sensible value. I guess this will be a good approach anyway.
To me the last option seems to be the way to go. Am I correct? Are there other approaches? And how would we check side effects like "an email was sent to the new user"?
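For illustration, here is a minimal sketch of that last option in Java. The UserManager below is a hypothetical in-memory stand-in (the real system would persist to a database); the point is only that the test touches nothing but the public method and the result it returns.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class SignupResultSketch {

    // Result object returned by the public API; the test inspects only this.
    record SignupResult(boolean success, String error) {}

    // Hypothetical in-memory UserManager standing in for the real system.
    static class UserManager {
        private static final Pattern EMAIL =
                Pattern.compile("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");
        private final Map<String, String> users = new HashMap<>();

        SignupResult signup(String username, String password) {
            if (!EMAIL.matcher(username).matches())
                return new SignupResult(false, "username must be a valid email address");
            if (password == null || password.length() < 8)
                return new SignupResult(false, "password must be at least 8 characters");
            users.put(username, password);
            return new SignupResult(true, null);
        }
    }

    public static void main(String[] args) {
        UserManager manager = new UserManager();

        // Happy path: valid email, 8+ character password.
        if (!manager.signup("alice@example.com", "s3cretpw").success())
            throw new AssertionError("expected signup to succeed");

        // Sad paths: invalid email, short password. Only the result is checked.
        if (manager.signup("not-an-email", "s3cretpw").success())
            throw new AssertionError("invalid email must be rejected");
        if (manager.signup("bob@example.com", "short").success())
            throw new AssertionError("short password must be rejected");
    }
}
```

The test neither mocks sub-systems nor peeks into the database; it only asserts on the contract of the method under test.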
You may want to acquaint yourself with the concept of the Test Pyramid.
There's no single correct way to design and implement automated tests - only trade-offs.
If you absolutely must avoid any sort of knowledge of implementation details, there's really only one way to go about it: test the actual system.
The problem with that is that automated tests tend to leave behind a trail of persistent state changes. For example, I once did something like what you're asking about and wrote a series of automated tests that used the actual system (a REST API) to sign up new users.
The operations people soon asked me to turn that system off, even though it only generated a small fraction of actual users.
You might think that the next-best thing would be a full systems test against some staging or test environment. Yes, but then you have to take it on faith that this environment sufficiently mirrors the actual production environment. How can you know that? By knowing something about implementation details. I don't see how you can avoid that.
If you accept that it's okay to know a little about implementation details, then it quickly becomes a question of how much knowledge is acceptable.
The experience behind the test pyramid is that unit tests are much easier to write and maintain than integration tests, which are again easier to write and maintain than systems tests.
I usually find that the sweet spot for these kinds of tests are self-hosted state-based tests where only the actual system dependencies such as databases or email servers are replaced with Fakes (not Mocks).
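To make the Fake-versus-Mock distinction concrete (and to answer the "an email was sent" question), here is a hedged sketch; the EmailService and FakeEmailServer names are made up. A Fake is a small working implementation whose observable state the test inspects, rather than a recorder of which internal calls happened.

```java
import java.util.ArrayList;
import java.util.List;

public class FakeEmailSketch {

    // The dependency the system under test needs; a real implementation would use SMTP.
    interface EmailService {
        void send(String to, String subject);
    }

    // A Fake: a working in-memory implementation whose state the test can inspect.
    static class FakeEmailServer implements EmailService {
        final List<String> outbox = new ArrayList<>();
        public void send(String to, String subject) {
            outbox.add(to + ": " + subject);
        }
    }

    // Simplified system under test: signup sends a welcome mail via the dependency.
    static class UserManager {
        private final EmailService email;
        UserManager(EmailService email) { this.email = email; }
        void signup(String username, String password) {
            // ... validation and persistence elided in this sketch ...
            email.send(username, "Welcome!");
        }
    }

    public static void main(String[] args) {
        FakeEmailServer mail = new FakeEmailServer();
        new UserManager(mail).signup("alice@example.com", "s3cretpw");

        // State-based check: a mail to the new user exists in the fake's outbox.
        if (!mail.outbox.contains("alice@example.com: Welcome!"))
            throw new AssertionError("expected a welcome mail to the new user");
    }
}
```

The assertion is on state ("a mail to this user exists"), not on interactions ("send was called once"), which keeps the test closer to the observable requirement.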
Perhaps it is the requirement that needs further refinement.
For instance, what precisely would your user do to verify if she has signed up correctly? How would she know? I imagine she'd look at the response from the system: "account successfully created". Then she'd only know that the system posts a message in response to that valid creation attempt.
Testing for the posted message is actionable, just having a created account is not. This is acceptable as a more specific test, at a lower test level.
So think about why exactly users register. Just to see a response? How about this requirement:
When a user signs up with a valid username and a valid password, then she should be able to successfully log into the system using the combination of that username and password.
Then one can add a definition of a successful login, just like the definitions of validity of the username and password.
This is actionable, without knowing specifics about internals. It should be acceptable as far as system integration tests go.
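A sketch of such a test, assuming a hypothetical system that exposes only signup and login (the AccountSystem below is a toy stand-in): the test never inspects internals, it just verifies that a fresh signup makes a subsequent login with the same credentials succeed.

```java
import java.util.HashMap;
import java.util.Map;

public class SignupLoginSketch {

    // Hypothetical black box: only signup and login are visible to the test.
    static class AccountSystem {
        private final Map<String, String> users = new HashMap<>();

        boolean signup(String user, String pass) {
            if (pass.length() < 8 || users.containsKey(user)) return false;
            users.put(user, pass);
            return true;
        }

        boolean login(String user, String pass) {
            return pass.equals(users.get(user));
        }
    }

    public static void main(String[] args) {
        AccountSystem sut = new AccountSystem();

        // Refined requirement: a valid signup must enable login
        // with exactly that username/password combination.
        if (!sut.signup("alice@example.com", "s3cretpw"))
            throw new AssertionError("signup with valid credentials should succeed");
        if (!sut.login("alice@example.com", "s3cretpw"))
            throw new AssertionError("login must succeed after a successful signup");
        if (sut.login("alice@example.com", "wrongpass"))
            throw new AssertionError("login with a wrong password must fail");
    }
}
```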

Data Handling in a Cucumber Framework with 5000 scenarios. QAF implementation is Out of Scope

I am going to work on a new project where we have 5000 test cases/scenarios. Each scenario contains common functions like login, amount transfer, etc., and therefore each scenario has certain data. With 5000 scenarios, I feel it will be very difficult to handle that data: even if only the password of the login user changes, I need to update the password in all 5000 scenarios across different feature files. This goes against the idea of automation, where we aim to reduce manual effort. So I am asking whether anyone has ideas or workarounds to handle such situations; I hope there are some. Thanks.
Typically speaking you would create a new user for each test and discard it at the end of the test.
You should also not describe all the details of the users that are created; these are often not relevant to the test and merely incidental. Rather, use a template and describe only the changes made to this template in the feature file. If for auditing reasons you have to describe the template itself, you can write a scenario that tests your template.
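One way to implement such a template in the step-definition layer, sketched in Java (the names here are illustrative, not part of Cucumber itself): the defaults live in exactly one place, and each scenario states only its deviations, so a changed default password is updated once rather than 5000 times.

```java
public class UserTemplateSketch {

    // A test user; in a Cucumber framework this would back the feature files.
    record User(String name, String password, int age) {}

    // The single place that knows the defaults. If the default password
    // changes, only this builder is updated, not every scenario.
    static class UserBuilder {
        private String name = "default-user";
        private String password = "Default-Passw0rd";
        private int age = 30;

        UserBuilder name(String n) { this.name = n; return this; }
        UserBuilder password(String p) { this.password = p; return this; }
        UserBuilder age(int a) { this.age = a; return this; }
        User build() { return new User(name, password, age); }
    }

    public static void main(String[] args) {
        // A scenario that only cares about age describes just that deviation;
        // the template fills in everything else.
        User minor = new UserBuilder().age(9).build();
        if (minor.age() != 9 || !minor.password().equals("Default-Passw0rd"))
            throw new AssertionError("template defaults should fill in the rest");
    }
}
```

Combined with creating a fresh user per test, the feature files then mention only the data that is actually relevant to each scenario.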

API Testing - Best Strategy to adopt

I have a few questions for you. Let me give some details before I ask them.
Recently I have been involved in REST API test automation.
We have a single REST API to fetch resources (this API is used in our web app UI by different business workflows to get data).
Though it is the same resource API, the actual endpoint varies for different situations, i.e. the query parameters passed in the API URL differ based on the business workflow.
For example:
../getListOfUsers?criteria=agedLessThanTen
../getListOfUsers?criteria=agedBetweenTenAndTwenty
../getListOfUsers?criteria=agedBetweenTwentyAndThirty
As there is only one API and the business workflows do not demand it, we don't have chained requests between APIs.
So the test just hits individual endpoints and validates the response content.
The responses are validated against supplied test data: the test data file lists the users expected when hitting each particular endpoint.
In other words, the test file is static content used to check the response every time we hit the endpoint; if the actual response retrieved from the server deviates from the supplied test data, it is a failure.
(There are also tests for the no-content response, requests without auth, etc.)
This test is OK to confirm that the endpoints are working and that the response content is good.
My actual questions are about the test strategy and business coverage here:
Is a single hit on each API endpoint sufficient, or should the same endpoint be hit for other scenarios as well, especially when the example endpoints above complement each other? Could a regression issue be captured by any of them?
If the API endpoints complement each other, will adding more tests just mean duplicated tests, more maintenance, and other problems later on, and should we avoid it if it adds no value?
What is the general trend in API automation regarding coverage? I believe it should be used to test business integration flows and scenarios when they demand it, but for situations like this, is it really required?
Also, should we keep in mind that automation is not meant to replace manual testing but only to complement it, and that attempting to automate every possible scenario will not add value and will only cause maintenance trouble later?
Thanks
Is a single hit on each API endpoint sufficient?
Probably not; for each one you would want to verify various edge cases (e.g. lowest and highest values, longest string), negative tests (e.g. negative numbers where only positive values are allowed) and other tests according to the business and implementation logic.
What is the general trend in API automation regarding coverage?
...
Automation is not meant to replace manual testing but only to complement it, and attempting to automate every possible scenario will not add value and will only cause maintenance trouble later?
If you build your tests in a modular way, maintenance becomes less of an issue; you need to implement each API anyway, and the logic and test data above that should be the less complicated part of the test system.
Indeed, you usually want a test pyramid with many unit tests, some integration tests and fewer end-to-end tests. But in this case, since there is no UI involved, the end user is just another software module, and execution time for REST APIs is relatively short and stability relatively good, it is probably acceptable to have a wider end-to-end test layer.
I used a lot of conditionals above since it's only you that can evaluate the situation in light of the real system.
If possible, consider generating test data on the fly instead of using hard-coded values from a file. This requires parallel logic implemented in your tests, but it will make maintenance and coverage easier.
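A sketch of that idea for the example endpoints above: generate the users on the fly and compute the expected list for each criteria with a predicate, instead of keeping static files per endpoint. The criteria names come from the question; the boundary handling and everything else here are assumptions for illustration.

```java
import java.util.List;
import java.util.function.IntPredicate;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class CriteriaDataSketch {

    record User(String name, int age) {}

    // Parallel logic in the test: the predicate that defines each endpoint's
    // criteria (boundaries assumed here) computes the expected response list.
    static IntPredicate criteria(String name) {
        return switch (name) {
            case "agedLessThanTen" -> age -> age < 10;
            case "agedBetweenTenAndTwenty" -> age -> age >= 10 && age < 20;
            case "agedBetweenTwentyAndThirty" -> age -> age >= 20 && age < 30;
            default -> throw new IllegalArgumentException(name);
        };
    }

    static List<User> expected(List<User> all, String criteriaName) {
        IntPredicate p = criteria(criteriaName);
        return all.stream().filter(u -> p.test(u.age())).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Generated on the fly instead of read from a static file.
        List<User> all = IntStream.range(0, 30)
                .mapToObj(i -> new User("user" + i, i))
                .collect(Collectors.toList());

        // The three criteria should partition the generated users:
        // no overlap, no gap. A test like this catches endpoints that
        // stop "complementing" each other after a regression.
        int total = expected(all, "agedLessThanTen").size()
                + expected(all, "agedBetweenTenAndTwenty").size()
                + expected(all, "agedBetweenTwentyAndThirty").size();
        if (total != all.size())
            throw new AssertionError("criteria should partition the data set");
    }
}
```

In a real test, each expected list would then be compared against the response of the corresponding endpoint.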

How to get the data from a captcha in Selenium WebDriver

I'm using Selenium webdriver (Java).
I need to test a registration form, but before submitting, an image box (captcha) appears, and it changes on every execution. I want to know how to get the data from the captcha image.
Can anyone help me?
If the captcha is coming from an environment under your control, you will likely need to implement some sort of method indicating you are in a test environment and have the captcha system return a known value or some indicator of what the expected value is.
If, on the other hand, the captcha is coming from another source out of your control, you are probably out of luck. At that point, you are essentially in the same boat as the spammers, who are in a constant arms race to write software that can visually parse a captcha.
UPDATE
I feel the need to add some clarification to the ideas put forth in the question, answer and comments. Essentially you are dealing with one of the following situations (note that when I say 'your', I am referring to you, your company, client, etc):
1) Your form, your captcha system: If this is the case, your best solution is to work with your developers to add a 'test' mode to your captchas, returning either a known value or additional information in the page that indicates what the expected value should be. If you are able to make use of a tool, written either by you or by someone else, that can successfully 'read' the captcha image, your system is broken: if you can do it, what is to stop anyone else (spammer, hacker, etc.) from bypassing your captcha in exactly the same manner?
2) Your form, 3rd-party captcha system: If this is the case, your best solution is again to see if the system has some 'test' mode that you can make use of. I have no experience with these systems myself, but in general I would guess that test methods exist for the major systems out there. A Google search for "{Captcha System Name} automated testing" should return some good hints on how to go about testing with the system. If nothing good comes from that, your next-best bet would be to implement your own internal, test-only dummy captcha system that works with some known value, and to make your captcha provider configurable so that you can point to your test system in test/dev/etc. and to the real system in production.
3) Another form, unknown captcha system: I am going to make a leap of faith here and assume this is not your case, but I will include it for completeness. If this is your case, you're not testing anything at all; you are simply asking for help bypassing someone else's security mechanisms for your own reasons. If that is the case, please seek assistance on less scrupulous sites.
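A hedged sketch of the configurable-provider idea from case 2 (all names here are made up, not a real captcha library): the production provider keeps its answer unreadable, while the test provider returns a known value, and configuration decides which one the form uses.

```java
public class CaptchaProviderSketch {

    interface CaptchaProvider {
        String expectedAnswer(); // what the user must type to pass the captcha
    }

    // Stand-in for the real provider; a real one would render an image and
    // keep the answer server-side, unreadable by automation, by design.
    static class RealCaptchaProvider implements CaptchaProvider {
        public String expectedAnswer() {
            throw new UnsupportedOperationException("not readable by automation");
        }
    }

    // Test-only dummy provider with a fixed, known answer.
    static class TestCaptchaProvider implements CaptchaProvider {
        public String expectedAnswer() { return "TEST-1234"; }
    }

    // Configuration decides which provider the form is wired to.
    static CaptchaProvider fromConfig(String env) {
        return "test".equals(env) ? new TestCaptchaProvider() : new RealCaptchaProvider();
    }

    public static void main(String[] args) {
        CaptchaProvider captcha = fromConfig("test");
        // An automated test can now fill the captcha field deterministically.
        if (!"TEST-1234".equals(captcha.expectedAnswer()))
            throw new AssertionError("test provider must return the known value");
    }
}
```

In a Selenium test, the known value would simply be typed into the captcha field, while production keeps the real provider.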
Captcha was introduced in order to prevent robots and automation code, so there is no supported way to automate reading a captcha.
1. You can add a wait time in the automation so that a user can enter the captcha code manually.
2. If the project is on a testing URL, you can ask your system admin and developers to disable the captcha validation.
Maybe this can also help you, though I haven't tried it myself:
Developers generate a random value for the captcha, convert the value into an image, and also store the value in the session to compare whether the entered input matches the captcha code.
So, if possible, you can take that session value and supply it as the input.

How to categorize or group Selenium WebDriver tests

Suppose I have a web page with several links on it. It also has a few buttons which execute some JavaScript.
Should I create one Java class for testing each of these links and elements, or should I test all the links in one test method and the other elements in another (ending up with two scripts)?
Is there another way of grouping these tests?
Thank you.
I have found that writing test cases based on actions is much more useful than writing based on pages.
Obviously, we would love to have everything automated, but realistically this isn't possible. So we test what is most important, which happens to be: 1. the primary purposes of the product you are testing, and 2. the security of the product.
To make this easier to understand, let's say I have a checkout page.
If I were to test based on the page, I would make sure every link on the page works, verify that I can type a new number into the quantity field, and make sure the page validates the credit card number I type in.
This method would somewhat test security, but beyond that it would only test the UI. What would happen if I clicked the Checkout button and was sent to the right page, but the item I was trying to check out had disappeared? That is a huge problem, yet the page-based test would still pass.
However, if I test based on actions (go to this page, add to cart, type in personal information, and then check out), I have now made sure that the most important part of the program works, checked security, and even done a little UI testing.
All in all, write your tests to do what the average user would do. No normal person is going to sit on the same page trying out every little feature on it.
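One way to organize that in Java, as a structural sketch only (no real Selenium calls; with WebDriver each step would drive the browser): group tests by user action/flow rather than by page, with each flow a sequence of steps.

```java
import java.util.ArrayList;
import java.util.List;

public class ActionFlowSketch {

    // A step in a user flow; in a real suite each step would use WebDriver.
    interface Step { void run(List<String> log); }

    // A test grouped by action: one flow crossing several pages.
    static void checkoutFlow(List<String> log) {
        List<Step> steps = List.of(
                l -> l.add("open product page"),
                l -> l.add("add to cart"),
                l -> l.add("enter personal information"),
                l -> l.add("checkout")
        );
        for (Step s : steps) s.run(log);
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        checkoutFlow(log);
        // The flow exercises what a real user does, in order.
        if (!log.equals(List.of("open product page", "add to cart",
                "enter personal information", "checkout")))
            throw new AssertionError("flow steps should run in user order");
    }
}
```

Each such flow naturally maps to one test class (or one test method), which answers the grouping question by behavior rather than by page layout.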
It depends on whether you like to see 1/1 tests passed or 2/2 tests passed.