I am implementing a Cucumber-JVM based Selenium automation framework.
One of the workflows in the web apps I test requires a long wait: a batch job, scheduled to run as frequently as every 3 minutes, has to run and create a login id, which the user can then utilize to continue with the workflow.
I am currently handling it by executing the initial part of the test case first and then continuing with the other test cases, so that the framework gets ample time to wait for the user id to be created.
After all the other test cases have run, the second part of the test case runs. But before running the second part, I query the database and verify whether the id has been created. If it has, execution continues; otherwise the test fails, reporting that the user id was not created.
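For illustration, the pre-check before the second part looks roughly like this simplified JDBC sketch (the table and column names here are placeholders, not my real schema):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class UserIdCheck {
        // Returns true once the batch job has created the login id.
        public boolean isUserIdCreated(String jdbcUrl, String userName) throws Exception {
            try (Connection conn = DriverManager.getConnection(jdbcUrl);
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT login_id FROM users WHERE user_name = ?")) {
                ps.setString(1, userName);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next(); // a row means the id exists, so part 2 may run
                }
            }
        }
    }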
Although this works for now, I am sure there are better ways to handle such scenarios. Has anyone come across such a scenario? How did you handle it?
I think I understand your problem. You would probably like to have an execution sequence like this:
Test 1
Test 2
Test 3
But if you implement Test 1 "correctly" it will take very long, because it has to wait for the system under test to do several long-running things, right?
Therefore you split Test 1 into two parts and run the tests like this:
Test 1 (Part A)
Test 2
Test 3
Test 1 (Part B)
So that your system under test has time to finish the tasks triggered by Test 1 (Part A).
As you acknowledged before, this is considered bad test design, as your test cases are no longer independent of each other. (Generally speaking, no test case should rely on side effects created beforehand by another test case.)
My recommendation for that scenario would be to keep Test 1 atomic, i.e. avoid splitting it into two parts, but still run the rest of the tests in parallel. Of course, whether or not this is feasible depends on your environment and on how you trigger the tests, but it would give you a better test structure plus the benefit of fast execution. So you would end up with this:
Test 1 starts       Test 2 starts
Test 1 waits        Test 2 finishes
Test 1 waits        Test 3 starts
Test 1 runs         Test 3 finishes
Test 1 finishes
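As an illustration, the waiting part of an atomic Test 1 could look something like this sketch using the Awaitility library (my assumption; a plain polling loop works just as well, and userIdExists() stands in for your DB check):

    import static org.awaitility.Awaitility.await;
    import java.time.Duration;

    public class BatchJobWait {
        public void waitForLoginId(String userName) {
            await()
                .atMost(Duration.ofMinutes(10))       // hard upper bound, comfortably above the 3-minute batch interval
                .pollInterval(Duration.ofSeconds(30)) // re-check twice a minute
                .until(() -> userIdExists(userName)); // e.g. the database query you already have
        }

        private boolean userIdExists(String userName) {
            // query the application database for the generated login id
            return false; // placeholder
        }
    }

While this test sits in its polling loop, a parallel-capable runner (JUnit, TestNG, or parallel Cucumber runners) can execute Tests 2 and 3 on other threads.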
I am not sure about the start -> wait -> wait -> run approach. It might work for a few tests, but it may not work well for hundreds of tests, as the total wait time would grow. Even if we run them in parallel mode, it would consume some time. What if we have to wait for more such time-consuming components in the same flow? The more such components, the more wait time, I guess. You may also need to consider system timeouts if you wait that long.
I feel even the first approach should be fine. There is no need to create multiple files for one test case. You can structure it in the same file in such a way that you run the first part and end it; then, after ensuring the batch has processed, you start the second part of the test case (file). The first part can run in parallel mode, and after the waiting time, part 2 can also be executed in parallel.
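A rough sketch of that same-file structure, assuming JUnit 5 (the class and method names are just illustrative):

    import org.junit.jupiter.api.MethodOrderer.OrderAnnotation;
    import org.junit.jupiter.api.Order;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.TestMethodOrder;

    @TestMethodOrder(OrderAnnotation.class)
    class UserWorkflowTest {

        @Test @Order(1)
        void partOneTriggerBatchJob() {
            // drive the UI up to the point where the batch job takes over
        }

        @Test @Order(2)
        void partTwoContinueWorkflow() {
            // first verify the id exists (e.g. via the DB check shown earlier),
            // then continue the workflow with the created login id
        }
    }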
Related
In Jenkins, I have a job created for executing automated tests from Xray (the test management tool for Jira). In my test execution there are three test cases, and each test case needs to wait for one hour or more before the next one can run. How can I handle the wait times automatically, without triggering each test manually from Jira?
I would like to get input on how to run automated smoke tests based on what developers check in. Currently, when there is a commit by devs, the Jenkins job gets triggered to build the code, and smoke tests run to test the app. But the smoke suite contains more than 50 tests. How would you design your automation so that when there is a check-in by devs, the automation only runs against the app features that could be affected by the new check-in? Here is our flow: a dev checks in to the Git repo, the Jenkins job gets triggered through a webhook and builds the app; once the build is done, a downstream job runs the smoke tests. I would like to limit the smoke tests to only test the features that are affected by the new check-in.
You can determine which areas of your product might be affected, but you cannot be 100% sure, so don't rely on that. You don't want regressions with an unknown source; they are extremely hard to triage, and one of the best things about continuous integration is that each change (or small batch of changes) is tested separately, so at any moment you know what is wrong with your app without spending much time on investigation. 10 minutes for a set of 50 tests is actually very good. If the only problem with running the tests is the time consumed, why not think about making them parallel instead of reducing the test suite? I would prefer to speed up the test execution phase rather than shrink the test suite.
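For example, if your smoke tests run on JUnit 5 (an assumption about your stack; TestNG has an equivalent parallel setting), enabling parallel execution is mostly an annotation plus one platform property:

    // Also requires junit.jupiter.execution.parallel.enabled=true in
    // src/test/resources/junit-platform.properties.
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.parallel.Execution;
    import org.junit.jupiter.api.parallel.ExecutionMode;

    @Execution(ExecutionMode.CONCURRENT)
    class SmokeTests {
        @Test void loginSmoke()  { /* each test should use its own WebDriver instance */ }
        @Test void searchSmoke() { /* ... */ }
    }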
I have created a performance test suite using JMeter 4.0, and I have multiple test cases which are divided into 2 fragments that I am calling from a single thread. The following are the types of test cases in the 2 fragments:
Test Fragment 1: CRUD operations on User
Test Fragment 2: Getting user counts from MongoDB and from the APIs, and comparing them
The test cases from Test Fragment 1 run first, multiple times based on the thread count, and then the test cases from the second fragment run.
In Test Fragment 2 I have these two test cases:
TC1: Fetching the user count from MongoDB (using a JSR223 Sampler)
TC2: Fetching the user count using the API
When the 2nd Test Fragment runs, the test case that fetches the user count from MongoDB gives a different count than the test case that fetches the count via the API directly. The APIs are taking time to update the data in MongoDB, as there may be some layers that take time to write to the database (I am not sure which layers exist or why exactly they take time). The scripts work fine when I run them for a single user, so there is no doubt that something is wrong with the script itself.
Could someone please suggest what approach we can use here to get the same count?
a. Is it a good approach to add timers/delays, or should something else be used?
b. If we use timers/delays, will they affect the performance test report as well? Are those delays going to add up in our performance test reports?
It might be the case that you're facing a race condition, i.e. while you're performing a read operation on the database with one thread, the number has already been updated by another thread.
The options are:
Amend your queries so your DB checks are user-specific (see the sketch after this list).
Use the Critical Section Controller to ensure that your DB check is executed by only 1 thread at a time.
Use the Inter-Thread Communication plugin in order to implement synchronisation across threads based on certain conditions. The latter can be installed using the JMeter Plugins Manager.
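As a rough illustration of the first option, the gist of a user-specific check looks like this (Groovy in a JSR223 sampler accepts Java-style syntax, so you would inline the method body there; the connection string, database, collection, and field names are assumptions):

    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import org.bson.Document;

    public class UserSpecificCheck {
        public static boolean userPersisted(String userId) {
            MongoCollection<Document> users = MongoClients
                    .create("mongodb://localhost:27017")
                    .getDatabase("app")
                    .getCollection("users");
            // Look only for the user this thread just created, so concurrent
            // threads no longer race on a shared global count.
            return users.find(new Document("_id", userId)).first() != null;
        }
    }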
I'm using Bamboo to automate performance tests that should run every night. I implemented two tests: the first runs big queries, and the second checks the performance results.
The first test (running queries) should be executed, and two hours later the second one (checking performance results) should run. Obviously I don't want to combine these into one test that runs the queries, waits 2 hours, and checks the results.
My solution is to have two Bamboo plans: the first plan, with the query-running test, scheduled for 1:00 AM, and the second plan, with the results-checking test, scheduled for 3:00 AM. That works.
Is it possible to execute those tests within one Bamboo plan (for example, by setting up two stages, with one test each, and setting a delay between stage executions)?
Edit:
I have a working solution that doesn't block an agent for the delay time (two scheduled plans), and it works. I'm just wondering if it's possible to achieve the same effect within one plan; it sounds like functionality that could be available in Bamboo.
If blocking the build agent for 2 hours is not an issue, you may add a script task at the end of the first stage, so that it waits for 2 hours until the next stage starts:
sleep 2h
You may also define your results plan as a child plan (in the Dependencies tab) and then introduce a sleep using a script task at the end of the first plan.
This way, your first plan will finish executing after 2 hours, followed by the child plan.
Update: If your plan A is connected to a repository and is triggered when there is a new commit, you may connect the same repository in plan B and introduce a quiet period that waits for 2 hours before it executes. This way, your agent is not blocked for 2 hours.
I execute automated Selenium scripts on a remote machine that is set to perform 5 jobs at a time (5 scripts can be executed at once by instantiating 5 browser instances). But every time, at least 1 or 2 random scripts fail with random errors such as NullPointerException, element not visible, or not being able to click the element. This doesn't happen if only 3 jobs run at a time. What's the best way to prevent the scripts from failing?
Very vague question. I can't really comment on why they are failing without seeing the implementation of the classes.
It can happen for a lot of reasons, including network latency when you are running 5 instances. If that is the case, you might consider using Selenium Grid and distributing your tests across 2 or more nodes, on each of which you can run 3 instances.
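As a rough sketch, pointing your tests at a Grid hub instead of a local driver looks like this (the hub URL is an assumption; adjust it to your setup):

    import java.net.URL;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeOptions;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class GridDriverFactory {
        public static WebDriver create() throws Exception {
            ChromeOptions options = new ChromeOptions();
            // The hub distributes sessions across the registered nodes,
            // so each node stays at a load it can handle reliably.
            return new RemoteWebDriver(new URL("http://grid-hub:4444/wd/hub"), options);
        }
    }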