How do you ensure consistency in automation testing?

Let's assume the following simple test case, which tests the functionality of a banking system maintaining the balance of bank accounts:
Check Account #1234's balance, which becomes the reference point (e.g. $1000)
Perform a deposit of $600
Perform a withdrawal of $400
Check Account #1234's balance again, expecting it to be $200 over the reference point (e.g. $1200)
Given project pressures, you and your colleague are asked to run the test suite concurrently (possibly using different browser versions). Since both of you are manipulating the same account, your tests fail sporadically.
In the IP sprint you are tasked with coming up with a solution that brings consistency to the test results regardless of the number of members executing the suite concurrently. What options would you consider?

There are different ways to approach your case; I would like to list some:
1 - If concurrency is a must, and if your "check account" step changes something in a database, then it would be necessary to use different accounts, one per thread of execution. This way each test can run with no concerns about what the other tests are doing (a sketch follows at the end of this answer).
2 - If you can push for a non-concurrent solution, then you only need to run your tests serially and, at the end of each test, revert the account to the reference point.
3 - Another way to solve this problem is to use mock data. This solution can be a little more complex and may require more work, but if you still want to know more about it, contact your development team and tell them about your problem so that you can find a solution together.
You can read more about mocking data here:
Cypress Interceptor
Mockserver
Wiremock
Mockoon
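As a rough illustration of option 1, here is a minimal Java sketch, assuming a suite whose tests run in parallel threads; the account naming scheme and the in-memory banking-client stubs are invented for the example:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

class AccountPool {
    private static final AtomicInteger NEXT = new AtomicInteger(1);
    // one dedicated test account per executing thread, so concurrent runs never collide
    private static final ThreadLocal<String> ACCOUNT =
            ThreadLocal.withInitial(() -> "TEST-ACCT-" + NEXT.getAndIncrement());

    static String current() {
        return ACCOUNT.get();
    }
}

class BalanceTest {
    private final Map<String, Long> balances = new HashMap<>();

    void depositAndWithdrawKeepsBalanceConsistent() {
        String account = AccountPool.current();       // unique to this thread
        long reference = balanceOf(account);          // the reference point, e.g. 1000
        deposit(account, 600);
        withdraw(account, 400);
        assert balanceOf(account) == reference + 200; // 1200 when starting from 1000
    }

    // in-memory stand-ins for the real banking client (hypothetical)
    long balanceOf(String account) { return balances.getOrDefault(account, 1000L); }
    void deposit(String account, long amount) { balances.put(account, balanceOf(account) + amount); }
    void withdraw(String account, long amount) { balances.put(account, balanceOf(account) - amount); }
}

Because every thread sees its own account, the deposit/withdraw pair from one run can never skew another run's reference point.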
Hope it helps!

Related

Test with same action but different contexts

I'm working in a company as a QA intern; we do end-to-end testing with browser automation.
We have lots of test scenarios that look like this (we use Cucumber):
Given I am logged in
When I go to my address book
Then I can see my contacts
but recently we had a few bugs that depended on what kind of account we were logged in with, so in order to test these edge cases I'm thinking of doing something like this:
Given I am logged in as a project manager
When I go to my address book
Then I can see my contacts
And do that for every kind of account (project manager, commercial, etc.).
But I'm questioning the value of doing that. For every user the outcome should be the same: they should all see their contacts (even if, due to a bug, that was not the case).
If I start doing it that way, it would be just as legitimate to have tests like
Given I am logged in with a German account
and the same with French, English, etc...
but it would lead to an explosion in the number of tests.
So do you test those edge cases?
If yes, how do you do it efficiently?
You can use Cucumber scenario outlines, like below:
Scenario Outline:
Given I am logged in as "<userAccount>"
When I go to my address book
Then I can see my contacts
Examples:
|userAccount|
|ABC|
|XYZ|
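A matching step definition could look like the Cucumber-JVM sketch below; the credential map and the login mechanics are invented for illustration:

import io.cucumber.java.en.Given;
import java.util.Map;

public class LoginSteps {
    // hypothetical mapping from account type to test credentials
    private static final Map<String, String> TEST_USERS = Map.of(
            "ABC", "abc.user@example.com",
            "XYZ", "xyz.user@example.com");

    @Given("I am logged in as {string}")
    public void iAmLoggedInAs(String userAccount) {
        String email = TEST_USERS.get(userAccount);
        // drive the real login UI here, e.g. through a page object (omitted)
        System.out.println("Logging in as " + email);
    }
}

Each row in the Examples table re-runs the same scenario with a different account type, so covering a new user type costs one table row rather than a whole new scenario.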
Usually you don't test for this in an integration test, exactly because it leads to an explosion of test cases. Instead you re-engineer the system in such a way that you can test this at a lower level and still have confidence that the edge case is covered.
One way to do this would be to create "fake" countries, each with either a single interesting attribute or a combination of interesting attributes (a rough sketch follows).
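For instance, the fixtures might look like this Java sketch (the attributes and country codes are invented):

// each fake country isolates one interesting attribute, so one lower-level
// test per attribute covers the behavior without an E2E test per real country
record Country(String code, boolean requiresVat, boolean rightToLeft) { }

class CountryFixtures {
    static final Country VAT_ONLY = new Country("V1", true, false);
    static final Country RTL_ONLY = new Country("R1", false, true);
    static final Country BASELINE = new Country("B1", false, false);
}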

What to do when an acceptance test has varying user choices and you want to test each of them

I'm writing some acceptance tests for a donation form. I'm using Codeception. For the sake of this example, let's say that the donation form has 3 parts:
Enter your personal information
Enter either credit card or direct transfer details
Submit and receive e-mail confirmation
For the acceptance test I'd like to test the whole process, for both credit card AND direct transfer. Steps 1 and 3 are essentially the same between the two donation processes, but obviously you can't run the second step by itself (the donation form wouldn't submit without step 1).
So I'm wondering, would it be "normal" in this case to write two tests (e.g. canDonateWithCreditCard() and canDonateWithDirectTransfer()) that both test all three parts of the process? Even though that's partly testing the same thing twice?
If not, what would be the preferred way to do it?
This is perfectly acceptable. At my work we have a sizable automation suite where the same pages get exercised multiple times because of scenarios similar to what you outlined above.
The only caveat I would mention: when building your tests (I don't know how Codeception works), look to build them using something along the lines of the Page Object model (http://martinfowler.com/bliki/PageObject.html). That way, even though you have multiple tests that implement the same scenarios, each test doesn't have its own implementation of those steps.
This depends on your approach.
1. You can create two different test cases performing the action.
2. You can have logic in your test that takes the mode of transfer as an argument to the method and performs activities accordingly.
It's always ideal to use the Page Object model to encapsulate all actions in each page class and to avoid redundancy.
If both the credit card and direct transfer actions navigate to a new page, create the page object according to the argument passed, and call the method that performs the transfer action.
A simple page object class can be created like this:
http://testautomationlove.blogspot.in/2016/02/page-object-design-pattern.html
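For a concrete picture, here is a minimal page-object sketch in Java/Selenium (not Codeception-specific; the page names and locators are invented):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

class DonationPage {
    private final WebDriver driver;
    DonationPage(WebDriver driver) { this.driver = driver; }

    DonationPage enterPersonalInfo(String name, String email) {
        driver.findElement(By.id("name")).sendKeys(name);
        driver.findElement(By.id("email")).sendKeys(email);
        return this;
    }

    // one parameterized step instead of two near-identical implementations
    DonationPage choosePaymentMethod(String method) { // "credit-card" or "direct-transfer"
        driver.findElement(By.id(method)).click();
        return this;
    }

    ConfirmationPage submit() {
        driver.findElement(By.id("submit")).click();
        return new ConfirmationPage(driver);
    }
}

class ConfirmationPage {
    private final WebDriver driver;
    ConfirmationPage(WebDriver driver) { this.driver = driver; }

    boolean emailConfirmationShown() {
        return driver.findElement(By.id("email-confirmation")).isDisplayed();
    }
}

With this in place, canDonateWithCreditCard() and canDonateWithDirectTransfer() reuse the same three steps and differ only in the argument to choosePaymentMethod(), so steps 1 and 3 are implemented exactly once.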

How to test "Payment Gateway" without making real payments?

I want to perform rigorous testing on a payment gateway (2Checkout) and PayPal. For testing, I need to simulate a large number of successful, failed, and halted transactions (transactions stopped due to a system crash or reboot), but I don't want to make actual payments.
1. Is there any way I can make a test transaction on the payment gateway, using fake card numbers or something else?
2. What are the possible advanced testing scenarios for payment gateway testing?
For example:
Changing the amount, or unmasking the CVV or card number via Inspect Element.
There are two options:
Using the PayPal Sandbox (application testing), or
Using dependency injection (unit testing).
Both would work, but I would suggest a dependency injection approach. Assuming you have a separate object that only interacts with PayPal, and other objects that contain your actual application logic (error handling, etc.), you can just create a dummy version of the PayPal interaction object (one that always returns true, or conditionally returns false, or whatever) and then test your various application classes in detail (a sketch follows).
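A minimal sketch of that shape in Java, assuming your application talks to the gateway only through an interface; every name below is invented and none of it is a real SDK:

// the application depends on this interface, never on PayPal directly
interface PaymentGateway {
    ChargeResult charge(String cardNumber, long amountCents);
}

record ChargeResult(boolean success, String message) { }

// test double: approve, decline, or "crash" on demand
class FakeGateway implements PaymentGateway {
    enum Mode { APPROVE, DECLINE, CRASH }
    private final Mode mode;
    FakeGateway(Mode mode) { this.mode = mode; }

    @Override
    public ChargeResult charge(String cardNumber, long amountCents) {
        switch (mode) {
            case APPROVE: return new ChargeResult(true, "approved");
            case DECLINE: return new ChargeResult(false, "card declined");
            default:      throw new IllegalStateException("gateway crashed mid-transaction");
        }
    }
}

Injecting new FakeGateway(Mode.CRASH) into your checkout logic lets you rehearse the halted-transaction scenarios from the question without making a single real payment.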
I would suggest one solution: look at the PayPal-Android SDK on Git and go through its README.md file. The last link there tells you how to create a sandbox PayPal account for making dummy transactions against your sandboxed developer account.
If you have doubts, you can refer to Part 1 and Part 2 of the AndroidHive tutorial on this.

Should Gherkin describe the test or the functionality?

This is an interesting topic I came across, and my co-workers and I have different opinions on the matter: should your Gherkin describe exactly what the test is doing, or ONLY the business logic you are trying to verify?
The biggest example I run into all the time at work is that if you have access to item A, then you should be able to access A. We can have 20 different types of users with access to A, so we only choose one (to keep our test suite from taking 40 hours to run). So which is "better"?
A
Scenario: A user with access to item A can access A
Given I am a type 4 user with access to item A
When I try to access A
Then I am granted access to A
or B
Scenario: A user with access to item A can access A
Given I am a user with access to item A
When I try to access A
Then I am granted access to A
Notice the difference in the Given statements (the "type 4 user").
Granted, in the step definition we are going to use a type 4 user for our test, but the test is not specific to a type 4 user. Any user with access to item A will work for this test; we're just using a type 4 user because we need a user type to log in with.
So A describes what the test is doing (logging in with a type 4 user with access to item A),
and B describes the functionality needed to access item A (just a user with access to item A).
Before you ask: how we determine who has access to item A is a SQL call to the database looking for a specific item linked to a user.
For a Cucumber test you are testing the business logic, as an acceptance test, not the specific implementation details. So you SHOULD do the second, not the first. Your request specs or integration tests can be more tied to specifics if you want to run tests for type X, type Y, and edge cases.
I think one can think of this (and it's not a hard-and-fast rule) as something like:
Unit test to isolate methods and test one thing at a time. Mock & stub everything else to isolate what is being tested.
Integration tests to test how things interact together to test a larger part of your stack, including the interaction of multiple objects. Some would argue that you test everything soup to nuts here, but I think there's a place in a large complex app to test lots of pieces integrated while still not testing the full request cycle.
Request specs - sometimes in simple apps these are pretty much the same as integration tests, in other cases I will do integration tests for everything except the request stack and specifically separate out my request specs. Opinions will vary.
Acceptance tests - this is where you're sitting with your question - where the tests are written in plain business language and avoid technical implementation details within the feature definitions.
Anyway, even if you ignore the thoughts on the rest of the test stack, for the specific question you're asking, go for B.
I would say option B is better. The "type 4 user" sounds like an implementation detail.
However, if it is a requirement that all user types have access, then that should form part of the specification too. In that case the test should specify and test all user types.
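If all types do need coverage, one cheap option is to push that below Gherkin into a parameterized test. A JUnit 5 sketch, where AccessService is a hypothetical stand-in for the SQL lookup mentioned in the question:

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class ItemAccessTest {
    @ParameterizedTest
    @ValueSource(ints = {1, 2, 3, 4, 5}) // extend up to all 20 user types
    void everyEntitledUserTypeCanAccessItemA(int userType) {
        AccessService service = new AccessService();
        assert service.hasAccess(userType, "item-A");
    }
}

class AccessService {
    boolean hasAccess(int userType, String item) {
        // stand-in for the real SQL call linking users to items
        return true;
    }
}

The single Gherkin scenario (option B) stays as the business-facing acceptance test, while this cheap lower-level loop covers all the user types.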
I would say B is better. For the "type 4 user" you could make it part of a Background:
Background: User is logged in
Given "Type 4 user" is logged in
Use a placeholder for the type 4 user by placing it in quotes, so you can reuse the logged-in step definition for other users that have access to item A.

Acceptable Failure Rate of E-Commerce?

I am one of the web developers for a small-but-growing e-commerce site. It is now getting about 150 orders per day, and a lot more on Cyber Monday. This is enough volume that the small fraction of users who have hard-to-reproduce problems are causing significant headache. My theory is that one or more of the following are true:
The customer is on an unusual browser / OS
The customer experiences a network glitch
The payment gateway takes too long to return a response
The customer somehow hits escape or the back button during a critical moment in the ordering process
The customer closes their browser
The customer's browser just refuses to navigate to the next page
The end result of these problems is usually that a customer unknowingly gets their credit card charged, and often attempts to place a second order. In that case a refund has to be issued on one of these duplicated transactions.
Although I would like to convince my client that there will always be a "normal" percentage of orders that have "weird" glitches, I don't know what "normal" is.
My question is therefore:
In your experience as an e-commerce developer,
what is your observed rate of these glitches?
Alternatively, if you can point me towards statistics, that'd be helpful, too! I haven't been able to find any.
Thanks!
ps. I know that it would be ideal to fix the root cause of such problems, but I simply have not been able to reproduce the problem, even after submitting hundreds of test orders.
You know the old saying: "If you have to ask, you can't afford it"?
It applies here.
It's very likely that your problems are caused by the reasons you listed above, apart from any bugs in your code, of course.
But is that a good enough explanation for your client? As the application traffic increases, these problems are likely to increase as well.
You may need to implement a more robust process that can handle unexpected problems, so that customers are not charged unless you have captured their order, and so they are notified by email whether their order completed, what went wrong, and what action they should take (a sketch of one such safeguard follows).
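One common safeguard, sketched here in Java (my illustration, not a specific gateway's API), is to make the charge idempotent per order, so a retry after a glitch cannot charge the card twice:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class Checkout {
    // remembers the charge made for each order id
    private final Map<String, String> chargesByOrder = new ConcurrentHashMap<>();

    // called with the SAME orderId on every retry of the same checkout
    // (back button, refresh, double submit), so the charge happens only once
    String chargeOnce(String orderId, long amountCents) {
        return chargesByOrder.computeIfAbsent(
                orderId, id -> callGateway(id, amountCents));
    }

    private String callGateway(String idempotencyKey, long amountCents) {
        // many real gateways accept an idempotency key so that a repeated
        // request returns the original charge instead of creating a new one
        return "charge:" + idempotencyKey;
    }
}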
edit:
Your question is really about when to stop improving the website. I think this depends on the level of service (read: time) you want to give to your client versus their expectations of what they have paid for.
How you deal with it forms part of your business strategy, but my approach would be to very honestly show them a list like this with time estimates to fix each item. Ensure they understand the diminishing returns that each of these fixes achieves. Give them something for free, and charge them for anything else. Negotiate with them; give them a KPI or performance target that you guarantee to meet. It's important that they understand the costs involved in designing a near-perfect transactional system.
Rather than guessing, I'd try to ascertain how the errors arise: build a simple form where users can leave you the browser they used, their system specs (maybe), and the steps to reproduce the defect they hit.
Then, with that information, you can debug your app, write unit tests, and fix these bugs, or at least reduce them to a form where they won't stop your users from buying things at your site.
Usually it is just one or two people with weird cards or a weird browser/OS combo who cause all the headache, while all the "normal" people proceed fine.
You'd be better off switching to a gateway that supports background processing (the customer always stays on your checkout page while the order info is packed into XML and posted to the gateway, which instantly responds with any errors); this will at least eliminate navigation problems for dummies. A sketch of that flow follows.
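To make the background flow concrete, a small Java sketch (the endpoint, the XML payload, and the header values are all invented; real gateways each define their own API):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class BackgroundGatewayClient {
    private final HttpClient http = HttpClient.newHttpClient();

    // the browser never leaves the checkout page; the server posts the order
    // to the gateway and relays the result back to that same page
    String submitOrder(String orderXml) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://gateway.example.com/charge")) // hypothetical endpoint
                .header("Content-Type", "application/xml")
                .POST(HttpRequest.BodyPublishers.ofString(orderXml))
                .build();
        HttpResponse<String> response =
                http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // approval or an error to show inline on the page
    }
}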