Auto Populate Test Data for Moodle 3.* - testing

Is there any way I can populate test data for users, courses, course categories, cohorts, groups, submissions, attempts, etc. for testing purposes?

Behat is the official framework to test Moodle.
https://docs.moodle.org/dev/Behat_integration
Using "Given..." in so-called "scenarios" you can create temporary data which will be erased after testing.
https://docs.moodle.org/dev/Writing_acceptance_tests#Create_your_own_tests
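For example, a minimal feature file can seed users, a course, and enrolments with the data-generator steps; the step wording and column names below follow Moodle's behat_data_generators as I recall them, so check the documentation above for your Moodle version:

Feature: Seed test data with generators
  Scenario: Course with enrolled users
    Given the following "users" exist:
      | username | firstname | lastname | email          |
      | teacher1 | Terry     | Teacher  | t1@example.com |
      | student1 | Sam       | Student  | s1@example.com |
    And the following "courses" exist:
      | fullname | shortname | category |
      | Course 1 | C1        | 0        |
    And the following "course enrolments" exist:
      | user     | course | role           |
      | teacher1 | C1     | editingteacher |
      | student1 | C1     | student        |
    When I log in as "teacher1"
    Then I should see "Course 1"

Generator steps along the same lines exist for course categories, cohorts, groups and activities, and the seeded data lives in the dedicated Behat test site rather than in your real site.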

Related

Custom field in new ticket creation by agent: a dropdown list whose values are live-updated

I want to create a custom field in the agent new ticket screen.
I want the field to be shown as a dropdown list where the values in the dropdown list will be values taken from an API of other service.
I know Zendesk custom fields have an option to upload a CSV file with options, but since I want the values to be live-updated, that won't work for me.
I'm wondering whether I need to create a plugin that will do that job, or whether Zendesk has a solution for this need.
I am not sure this is a suitable Stack Overflow question, but if you have control over the backend of the other service, it is better to automate a call from it to the Zendesk custom fields API whenever the drop-down options change.
However, this is a bad idea in the first place because drop-down options are associated with tags. There might be conflicts with other field options or other tags used by agents. This will be a nightmare for the data analysts who rely on tags for Zendesk Explore reports, especially if you delete an option and its associated tag.
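If you do go the automated-update route anyway, a rough sketch in Java might look like the following; the endpoint, payload shape, field id, and credentials are assumptions based on my recollection of the Zendesk Ticket Fields API, so verify them against the current API reference:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class TicketFieldSync {
    public static void main(String[] args) throws Exception {
        // Placeholders: your subdomain, field id, agent e-mail and API token.
        String subdomain = "yourcompany";
        long fieldId = 123456L;
        String auth = Base64.getEncoder()
                .encodeToString("agent@yourcompany.com/token:YOUR_API_TOKEN".getBytes());

        // The options would normally be fetched from the other service's API.
        String body = """
                {"ticket_field": {"custom_field_options": [
                  {"name": "Option A", "value": "option_a"},
                  {"name": "Option B", "value": "option_b"}
                ]}}""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://" + subdomain + ".zendesk.com/api/v2/ticket_fields/" + fieldId + ".json"))
                .header("Content-Type", "application/json")
                .header("Authorization", "Basic " + auth)
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}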

Gherkin: can 2 scenarios be dependent on each other?

Scenario1
When a new user clicks on sign up page
And provides login ID
Then user is signed up and can view profile page.
Scenario2
When user clicks on the edit profile page
And updates his address
Then updated profile should be visible to user
The scenarios are written in a feature file in that sequence. When writing the Cucumber step definitions, I create a user in scenario 1, and in scenario 2 the same user is updated. In a way, scenario 2 depends on scenario 1, as it updates the same user that was created in scenario 1.
My question is: should the scenarios be written so that they depend on other scenarios?
Or should they be independent of each other's execution, in which case I should create a new user in scenario 2, then perform an update on it and assert the result?
Cucumber explicitly recommends that you do not make your Scenarios depend on each other. From the FAQ:
"Each scenario should be independent; you should be able to run them in any order or in parallel without one scenario interfering with another.
Each scenario should test exactly one thing so that when it fails, it fails for a clear reason. This means you wouldn’t reuse one scenario inside another scenario.
If your scenarios use the same or similar steps, or perform similar actions on your system, you can extract helper methods to do those things."
(Sidenote: From personal experience, I can tell you that tests that depend on each other / the state of the system, will very quickly become very hard to maintain. I'd highly recommend you to make your tests independent!)
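Applied to the example above, one way to make the second scenario self-contained is to give it its own setup step; the wording below is only illustrative, and the step definition behind the new Given can create the user directly (for example through an API or a shared helper method) instead of re-driving the sign-up UI:

Scenario: Update profile address
  Given a signed-up user "jane" exists
  And "jane" is logged in
  When "jane" edits the profile and updates the address
  Then the updated profile is visible to "jane"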

Automated testing - Best practice - getting test data before test

I have a question about getting test data for automated testing; here is my problem:
I need to prepare automated testing scripts for a clothes shop.
And I need to know what is the best practice for getting test data needed for the following scenario:
Scenario description: Checking if a user can correctly add a product to the basket.
Given I am on the "WOMEN'S DRESSES" page
When I add "XXX" product to the bag
Then I can see "XXX" product displayed in the basket
My question is: how to ensure that the "XXX" product is always available and what is the best practice for this?
Do I always have to connect to the environment's database, check whether the "XXX" product is available, and insert it into the DB if it is not?
Or maybe I should modify the BDD scenario a little: get the list of currently available products on the "WOMEN'S DRESSES" page, choose any product, add it to the bag, and check that it is correctly added to the basket? (But in that case, what do I do if there are no products available in the testing environment?)
I want efficient and robust automated tests.
There are a few options for this. Some developers and I looked into a similar problem we had for a car-marketplace. We actually found out that the "dirty" way was the best way for us.
We simply ran a BeforeFeature database query and picked up all the brands that were available in the test environment. We added these brands to FeatureContext or ScenarioContext, depending on the required context; thanks to the feature/scenario context, you can use these values during the run of that particular feature or scenario. In the "When" step, the code then had access to the list of all brands available in the database.
Then we passed every brand through SpecFlow to the code and checked whether the list contained that brand. If it did, we clicked on that brand, and in the last step we checked that we had landed on that brand's page.
You could do something similar with the dresses.
So our SpecFlow feature looked somewhat like this:
Scenario Outline: Check available brands till vehicle detail page
Given I navigate to Vehicle Search Page
When I search '<Brand>', if available in test environment
Then I am navigated to the vehicle detail page
Examples:
|Brand|
|Audi |
|BMW |
|etc |
Just an addition:
This way of writing tests keeps them very BDD: the business can read what you are testing. Also, as long as the data model remains as-is, your tests are very maintainable. You can add an unlimited number of entries to your table, and you will only get test results for the ones that are available. So if the database is wiped clean, sure, not a single test will be executed, but you won't get any false positives or unnecessary negatives.
Also, you can choose to add brands to the database if they weren't found in the first place. You could do this within the "When" step. If the dress is not found in the list, you can add it with a query in the database at that moment.
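The answer above uses SpecFlow (.NET); roughly the same idea with Cucumber-JVM could look like the sketch below, where the JDBC URL, table name, and column name are placeholders for your test environment:

import io.cucumber.java.Before;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

import static org.junit.jupiter.api.Assumptions.assumeTrue;

public class BrandSteps {

    private final List<String> availableBrands = new ArrayList<>();

    @Before
    public void loadAvailableBrands() throws Exception {
        // Counterpart of the BeforeFeature hook: read what actually exists in the test environment.
        // Connection string and table/column names are placeholders.
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://test-db/shop", "test", "test");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT name FROM brands")) {
            while (rs.next()) {
                availableBrands.add(rs.getString("name"));
            }
        }
    }

    @When("I search {string}, if available in test environment")
    public void searchBrand(String brand) {
        // Skip (rather than fail) this example row when the brand is missing from the test data.
        assumeTrue(availableBrands.contains(brand), brand + " not present in test environment");
        // ... drive the UI search for the brand here, e.g. via a page object ...
    }

    @Then("I am navigated to the vehicle detail page")
    public void verifyDetailPage() {
        // ... assert on the detail page here ...
    }
}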
In your case you have to add one more precondition:
GIVEN: XXX product is available on the "WOMEN'S DRESSES" page.
If the product is not available, that's a different test case.
Create the XXX product just before the test in whatever way is most suitable (a minimal sketch of the API-call option follows this list):
API call
web UI
DB update (not recommended)
or just mock the API response for the XXX product
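A minimal sketch of the API-call option (Java with Cucumber-JVM); the endpoint, payload, and step wording are entirely hypothetical:

import io.cucumber.java.en.Given;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProductSetupSteps {

    private final HttpClient client = HttpClient.newHttpClient();

    @Given("{string} product is available on the {string} page")
    public void productIsAvailable(String product, String category) throws Exception {
        // Hypothetical admin endpoint: create (or upsert) the product so the scenario
        // never depends on whatever happens to be in the test database.
        String body = "{\"name\": \"" + product + "\", \"category\": \"" + category + "\", \"stock\": 1}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test-shop.example.com/api/products"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() >= 300) {
            throw new IllegalStateException("Could not create test product: " + response.statusCode());
        }
    }
}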

What to do when an acceptance test has varying user choices and you want to test each of them

I'm writing some acceptance tests for a donation form. I'm using Codeception. For the sake of this example, let's say that the donation form has 3 parts:
Enter your personal information
Enter either Credit Card or Direct Transfer
Submit and receive e-mail confirmation
For the acceptance test I'd like to test the whole process--for both credit card AND direct transfer. Steps 1 and 3 are essentially the same between the two donation processes, but--obviously--you can't run the second step by itself (the donation form wouldn't submit without step 1).
So I'm wondering, would it be "normal" in this case to write two tests (e.g. canDonateWithCreditCard() and canDonateWithDirectTransfer()) that both test all three parts of the process? Even though that's partly testing the same thing twice?
If not, what would be the preferred way to do it?
This is perfectly acceptable. At my work we have a sizable automation suite where the same pages get exercised multiple times because of scenarios similar to what you outlined above.
The only caveat I would mention is that when building your tests (I don't know how Codeception works), look to structure them along the lines of the Page Object model (http://martinfowler.com/bliki/PageObject.html). This means that even though you have multiple tests that exercise the same scenarios, each test doesn't have its own implementation of those steps.
This depends on your approach.
1. You can create two different test cases performing the action.
2. You can have logic in your test that passes the mode of transfer as an argument to the method and performs the activities accordingly.
It's always ideal to use the Page Object model to encapsulate all the actions in each page class and to avoid redundancy.
If both the credit card and direct transfer actions navigate to a new page, create the page object that matches the argument passed, and call its method to perform the transfer action.
A simple page object class can be created like this:
http://testautomationlove.blogspot.in/2016/02/page-object-design-pattern.html
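The question is about Codeception (PHP), but the pattern is the same in any stack; here is a rough Java/Selenium sketch with made-up class name and locators, so that both the credit card and direct transfer tests reuse the same payment-page methods:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page object for the payment step of the donation form. Both tests
// (canDonateWithCreditCard and canDonateWithDirectTransfer) reuse these
// methods instead of duplicating locators and click logic.
public class PaymentPage {

    private final WebDriver driver;

    public PaymentPage(WebDriver driver) {
        this.driver = driver;
    }

    public PaymentPage chooseCreditCard() {
        driver.findElement(By.id("payment-credit-card")).click();
        return this;
    }

    public PaymentPage chooseDirectTransfer() {
        driver.findElement(By.id("payment-direct-transfer")).click();
        return this;
    }

    public void submitDonation() {
        driver.findElement(By.id("submit-donation")).click();
    }
}

Each test then walks through the personal details page, this payment page, and the confirmation check, and only the line that chooses the payment method differs between the two tests.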

How to test "Payment Gateway" without making real payments?

I want to perform rigorous testing on a payment gateway (2Checkout) and PayPal. For testing, I need to simulate a large number of successful, failed, and halted transactions (transactions stopped due to a system crash/reboot), but I don't want to make actual payments.
1. Is there any way I can make a test transaction on the payment gateway, using fake card numbers or something else?
2. What are the possible advanced testing scenarios for payment gateway testing?
For example: changing the amount, or unmasking the CVV or card number via the browser's Inspect Element.
There are two options:
Using the PayPal Sandbox (application testing), or
Using Dependency Injection (unit testing).
Both would work, but I would suggest a Dependency Injection approach. Assuming you have a separate object that only interacts with PayPal, and other objects that do your actual application logic (error handling, etc.), you can just create a dummy version of the PayPal interaction object (one that always returns true, or conditionally returns false, whatever) and then test your various application classes in detail.
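A minimal sketch of that dependency-injection idea; every class, method, and value here is invented for the example:

// The application talks to this interface, never to PayPal directly.
interface PaymentGateway {
    boolean charge(String cardNumber, double amount);
}

// The production implementation would call the real PayPal / 2Checkout API.
// For tests we inject a fake that can be scripted for success or failure.
class FakePaymentGateway implements PaymentGateway {
    private final boolean succeed;

    FakePaymentGateway(boolean succeed) {
        this.succeed = succeed;
    }

    @Override
    public boolean charge(String cardNumber, double amount) {
        return succeed; // no real money moves
    }
}

// Application logic under test only sees the interface.
class CheckoutService {
    private final PaymentGateway gateway;

    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    String placeOrder(String cardNumber, double amount) {
        return gateway.charge(cardNumber, amount) ? "CONFIRMED" : "PAYMENT_FAILED";
    }
}

public class CheckoutServiceTest {
    public static void main(String[] args) {
        // Successful and failed transactions without touching a real gateway.
        System.out.println(new CheckoutService(new FakePaymentGateway(true)).placeOrder("4111111111111111", 10.0));
        System.out.println(new CheckoutService(new FakePaymentGateway(false)).placeOrder("4111111111111111", 10.0));
    }
}

The sandbox option is still worth keeping for a handful of end-to-end checks against PayPal's test environment, but the bulk of the success, failure, and crash scenarios can run against the fake.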
I would suggest just one solution: look at the PayPal-Android SDK Git repository and go through its README.md file. The last link there tells you how to create a sandbox PayPal account for making dummy transactions against your sandboxed developer account.
If you have doubts, you can refer to Part 1 and Part 2 of the AndroidHive tutorial on this.