How to handle waiting times between test executions - Selenium

In Jenkins, I have a job created for executing automated tests from Xray (the test management tool for Jira). My test execution contains three test cases, and each test case needs to wait for one hour or more before the next one can run. How can I handle these wait times automatically, without triggering each run manually from Jira?

Related

How can I run fixtures in concurrent mode but the tests inside each fixture in sequential mode in TestCafe?

I have 5 fixtures with 10 tests in each fixture. How can I run the 5 fixtures in concurrent mode in 5 different browser instances, while the tests inside each fixture run sequentially? I also need the results of the test execution in a single report.
Test parallelization works in the following way:
calculates the full test count
runs the defined count of browser instances
calculates the portion of tests for each browser instance and runs it
So, it's impossible to guarantee that the test portion for each browser will contain tests from only one fixture.
See also: Run Tests Concurrently.

Running JMeter tests cyclically

I want to run cyclical tests in JMeter. I want them to run every day, each time for 10 minutes (every day for 10 minutes). How do I do that?
For running the test for 10 minutes there are the following options:
In the Thread Group, tick "Specify Thread lifetime" and put 600 into the "Duration" field.
Or use the Runtime Controller, which allows setting how long its children are allowed to run.
With regards to running the test every day, you can go for:
Windows Task Scheduler
Linux/Unix crontab
MacOS launchd
or you can put the JMeter job under the orchestration of a Continuous Integration server; any of them can run a specified job on a schedule or based on various triggers, track the job status (successful or failed), and some of them provide performance trends.
There are many ways to answer your (a bit too broad) question.
Here are some insights that could help:
to launch a JMeter test that lasts 10 minutes, you have to configure the test plan with that duration. Then you have to learn how to launch it via the command line instead of via the graphical interface (see this answer for an example)
to launch your JMeter test every day, you can use a Continuous Integration tool like Jenkins. In this tool, you will be able to create jobs with a specific schedule (every day in your case) and a specific task (launch the JMeter test via the command line)
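To illustrate the scheduling plus command-line approach, here is a minimal sketch using a crontab entry and JMeter's non-GUI mode; the JMeter install path, test plan name and results location are assumptions, and the 10-minute duration is expected to be configured inside the test plan itself (Thread Group duration or Runtime Controller):

```
# Crontab entry (crontab -e): run the plan every day at 09:00
# -n = non-GUI mode, -t = test plan, -l = results file
0 9 * * * /opt/jmeter/bin/jmeter -n -t /opt/tests/daily_plan.jmx -l /opt/tests/results/daily.jtl
```

The same command can be placed in a scheduled Jenkins job instead of cron if you also want build history and trend reports.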

How to strategize your test automation?

I would like to get input on how to run automated smoke tests based on what developers check in. Currently, when there is a commit by the devs, a Jenkins job gets triggered to build the code, and the smoke tests run to test the app. But the smoke suite contains more than 50 tests. How would you design your automation so that, when there is a check-in by the devs, the automation only runs against the app features that could be affected by the new check-in? Here is our flow: a dev checks in to the git repo, a Jenkins job gets triggered through a webhook and builds the app, and once the build is done a downstream job runs the smoke tests. I would like to limit the smoke tests to only test the features that are affected by the new check-in.
You can determine which areas of your product might be affected, but you can not be 100% sure. Don't rely on that. You don't want to have regressions with an unknown source; they are extremely hard to triage, and one of the best things about continuous integration is that each change (or small batch of changes) is tested separately, so you know at each moment what is wrong with your app without spending much time on investigation. 10 minutes for a set of 50 tests is actually very good. Why not think about making them parallel instead of reducing the test suite, if the only problem with running the tests is the time consumed? I would prefer to speed up the test execution phase instead of decreasing the test suite.
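If you go the parallelization route with TestNG, a minimal sketch could look like the following; the suite name, thread count and test class are assumptions, and the same configuration can equally be expressed in a testng.xml file via the parallel and thread-count attributes:

```java
import java.util.ArrayList;
import java.util.List;

import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class ParallelSmokeRunner {
    public static void main(String[] args) {
        // Build a suite that runs test methods in parallel on 5 threads
        XmlSuite suite = new XmlSuite();
        suite.setName("Smoke");
        suite.setParallel(XmlSuite.ParallelMode.METHODS);
        suite.setThreadCount(5);

        // "com.example.smoke.SmokeTests" is a placeholder for your own test classes
        XmlTest test = new XmlTest(suite);
        test.setName("Smoke tests");
        List<XmlClass> classes = new ArrayList<>();
        classes.add(new XmlClass("com.example.smoke.SmokeTests"));
        test.setXmlClasses(classes);

        // Run the suite programmatically, e.g. from the downstream Jenkins job
        List<XmlSuite> suites = new ArrayList<>();
        suites.add(suite);
        TestNG testng = new TestNG();
        testng.setXmlSuites(suites);
        testng.run();
    }
}
```

Whether you parallelize by methods, classes or tests depends on how independent your smoke tests are from each other.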

How to run my Selenium script on multiple servers at the same time so that I save one-by-one execution time?

How can I run my Selenium script on multiple servers at the same time so that I save the time of one-by-one execution?
Scenario: I have 1 hour of total testing time and 20 servers to test, and 1 test script takes about 30 minutes to execute, so I want to run my test script simultaneously on all 20 servers in order to save time.
The answer to your question is parallel execution.
There are multiple ways to achieve this, for example: 1. creating a Jenkins job, registering all the server machines as slaves and executing the job on those servers
Or
2. a comparatively simpler and more widely used approach: using Selenium Grid
To implement Selenium Grid (for instance with Java as the programming language and the TestNG framework) we need to take care that:
A. we have implemented threading for our WebDriver, so that each execution works on its own driver instance (see the sketch below);
B. we have our testng.xml with the attribute set as parallel="tests".
You can easily find a step-by-step guide to setting up Selenium Grid. To summarize it: we have one machine as the hub (master) and multiple machines, or nodes (slaves), registered to the hub. Your automation code is supplied to the hub; the hub routes tests to the multiple nodes, keeps track of each individual execution and gets you the results on the hub machine itself.
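As an illustration of point A, here is a minimal sketch of a ThreadLocal WebDriver pointed at a Grid hub; the hub URL, browser choice and test body are assumptions:

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class GridTest {

    // One driver instance per thread, so parallel tests never share a browser session
    private static final ThreadLocal<WebDriver> DRIVER = new ThreadLocal<>();

    @BeforeMethod
    public void startDriver() throws Exception {
        // Hub address is an assumption; replace with your Grid hub URL
        DRIVER.set(new RemoteWebDriver(
                new URL("http://grid-hub:4444/wd/hub"), new ChromeOptions()));
    }

    @Test
    public void openHomePage() {
        // Placeholder step; real navigation and assertions go here
        DRIVER.get().get("https://example.com");
    }

    @AfterMethod
    public void stopDriver() {
        WebDriver driver = DRIVER.get();
        if (driver != null) {
            driver.quit();
            DRIVER.remove();
        }
    }
}
```

Combined with parallel="tests" in testng.xml, each test then runs on its own thread and therefore gets its own browser session, which the hub routes to one of the registered nodes.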

Best way to handle batch jobs in selenium automation

I am implementing a Cucumber-JVM based Selenium automation framework.
One of the workflows in the web apps I test requires a long wait so that a batch job, which is scheduled as frequently as every 3 minutes, runs and creates a login id that the user can utilize to continue with the workflow.
I am currently handling it in such a way that I execute the initial part of the test case first and continue with the other test cases, so that the framework gets ample time to wait for the user id to be created.
After all the other test cases are run, the second part of the test case is run. But before running the second part, I query the database and verify whether the id has been created. If the id has been created, the execution continues; otherwise it fails, saying that the user id was not created.
Although this works for now, I am sure there are better ways to handle such scenarios. Has anyone of you come across such a scenario? How did you handle it?
I think I understand your problem. You actually would like to have an execution sequence like this probably:
Test 1
Test 2
Test 3
But if you implement Test 1 "correctly" it will take very long because it has to wait for the system under test to do several long running things, right?
Therefore you split Test 1 into two parts and run the tests like this:
Test 1 (Part A)
Test 2
Test 3
Test 1 (Part B)
So that your system under test has time to finish the tasks triggered by Test 1 (Part A).
As you acknowledged before, this is considered bad test design, as your test cases are no longer independent from each other. (Generally speaking no test case should rely on side effects created by another test case beforehand.)
My recommendation for that scenario would be to leave Test 1 atomic, i.e. avoid splitting it into two parts, but still run the rest of the tests in parallel. Of course, whether or not this is feasible depends on your environment and on how you trigger the tests, but that would allow you to have a better structure of your tests plus the benefit of fast execution. So you would end up with this:
Test 1 starts    | Test 2 starts
Test 1 waits     | Test 2 finishes
Test 1 waits     | Test 3 starts
Test 1 runs      | Test 3 finishes
Test 1 finishes  |
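For the waiting part of Test 1 itself, a minimal sketch of a polling wait is shown below; the condition check (isLoginIdCreated) is a placeholder for the database query already mentioned in the question, and the timeout and poll interval are assumptions based on the 3-minute batch schedule:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

public final class BatchWait {

    private BatchWait() { }

    /**
     * Polls the given condition until it returns true or the timeout expires.
     * Returns true if the condition was met, false on timeout.
     */
    public static boolean waitUntil(BooleanSupplier condition,
                                    Duration timeout,
                                    Duration pollInterval) throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (condition.getAsBoolean()) {
                return true;
            }
            // Sleep between database checks instead of busy-waiting
            Thread.sleep(pollInterval.toMillis());
        }
        // One last check at the deadline before giving up
        return condition.getAsBoolean();
    }
}
```

Test 1 would then call something like waitUntil(() -> isLoginIdCreated(userName), Duration.ofMinutes(10), Duration.ofSeconds(30)) right after triggering the batch-dependent step, and fail the scenario if it returns false.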
I am not sure about the start -> wait -> wait -> run approach. It might work for a few tests but may not work well for hundreds of tests, as the wait time would be longer. Even if we run it in parallel mode, it would consume some time. What if we wait for more such time-taking components in the same flow? The more components, the more wait time, I guess. You may also need to consider timeouts of the system if you wait for a longer time...
I feel even the first approach should be fine. There is no need to create multiple files for a test case. You can structure it in the same file in such a way that you run the first part and end it. And, after ensuring the batch processing has completed, you can start with the second part of your test case (file). The first part can be run in parallel mode, and after the waiting time part 2 can also be executed in parallel.