Cucumber java - How to handle timeout of any step definition - cucumber-jvm

I am using an old version of Cucumber-JVM (not the latest 5.x). I am using the approach below to handle a timeout in a step definition, but currently it does not fail or stop execution if my step definition takes more than 5 seconds to execute.
Any suggestions on how to handle timeouts in Cucumber Java?
@Then(value = "^verify (\\d+) events sent$", timeout = 5000)

From the release-notes:
It was possible to provide a timeout to step definitions.
Unfortunately the semantics are complicated. Cucumber's implementation
would attempt to interrupt the long-running step but would not stop if
the step was stuck indefinitely.
Additionally Cucumber would not consider a step failed if it did not
terminate within the given timeout. To remove the confusion and
complexity we removed timeout from Cucumber.
Consider replacing this functionality with the features provided by
one of these libraries instead:
JUnit 5 Assertions.assertTimeout*
Awaitility
Guava TimeLimiter
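For example, on the polling-style step from the question, Awaitility can restore the fail-fast behaviour that timeout = 5000 was meant to provide (for a single blocking call, JUnit 5's Assertions.assertTimeoutPreemptively is the closer fit). A minimal sketch, assuming Awaitility is on the classpath; the countSentEvents() helper is a placeholder for your own lookup logic:

import static org.awaitility.Awaitility.await;

import java.util.concurrent.TimeUnit;

import cucumber.api.java.en.Then; // io.cucumber.java.en.Then on Cucumber 5.x+

public class EventStepDefinitions {

    @Then("^verify (\\d+) events sent$")
    public void verifyEventsSent(int expectedCount) {
        // Fails the step with a ConditionTimeoutException if the expected
        // number of events has not been observed within 5 seconds.
        await().atMost(5, TimeUnit.SECONDS)
               .until(() -> countSentEvents() >= expectedCount);
    }

    // Placeholder: query your event store / message broker here.
    private int countSentEvents() {
        return 0;
    }
}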

Related

Bazel test size: Local dev box vs CI?

I have some tests that give me the following Bazel WARNING on my local workstation:
WARNING: //okapi/rendering/sensor:film_test: Test execution time
(25.1s excluding execution overhead) outside of range for MODERATE
tests. Consider setting timeout="short" or size="small".
When I change the size of the test to small, e.g. buildozer 'set size small' //okapi/rendering/sensor:film_test
My CI job fails with a timeout:
//okapi/rendering/sensor:film_test
TIMEOUT in 60.1s
/home/runner/.cache/bazel/_bazel_runner/7ace7ed78227d4f92a8a4efef1d8aa9b/execroot/de_vertexwahn/bazel-out/k8-fastbuild/testlogs/okapi/rendering/sensor/film_test/test.log
My CI Job is running on GitHub via GitHub-hosted runners - those runners are slower than my local dev box.
What is the best practice here? Always choose the test size according to CI and ignore the Bazel warnings on the local machine? Get a better CI?
Get a better CI?
One of the main purposes of software testing is to simulate the software's behavior in an environment that reliably represents the production environment. The better the representation, the easier it is to spot possible issues and fix them before the software is deployed. My opinion is that you are better qualified than any of us to say whether the CI you are currently using is a reliable representation of the production environment of your software.
//okapi/rendering/sensor:film_test: Test execution time (25.1s excluding execution overhead)
You can always recheck whether your test target is packed correctly, i.e. ask yourself whether all of these tests really belong to a single test target. What would be gained or lost if those tests were divided into several test targets?
My CI Job is running on GitHub via GitHub-hosted runners - those runners are slower than my local dev box.
Have you tried employing test sharding?
Size vs timeout
When it comes to test targets and their execution, for Bazel it is a question of the underlying resources (CPU, RAM, ...) and their utilization.
For that purpose, Bazel exposes two main test attributes: size and timeout.
The size attribute is mainly used to define how many resources are needed for the test to be executed, but Bazel also uses the size attribute to determine the default timeout of a test. The timeout value can be overridden by specifying the timeout attribute.
When using the timeout attribute you are specifying both the minimal and maximal execution time of a test. In Bazel 6.0.0 those values, in seconds, are:
0 <= short <= 60
30 <= moderate <= 300
300 <= long <= 900
900 <= eternal <= 3600
Since at the time of writing this answer the BUILD file is not shown, I'm guessing that your test target has at least one of these (note that size = "medium" is the default setting if the attribute is not specified):
size = "medium" or timeout = "moderate"
"All combinations of size and timeout labels are legal, so an "enormous" test may be declared to have a timeout of "short". Presumably it would do some really horrible things very quickly."
There is another option that I don't see used very often but which might be helpful in your case: specifying a --test_timeout value, as documented here:
"The test timeout can be overridden with the --test_timeout bazel flag when manually running under conditions that are known to be slow. The --test_timeout values are in seconds. For example, --test_timeout=120 sets the test timeout to two minutes."
One last option is, just as you wrote, ignoring the Bazel test timeout warnings on the local machine; these warnings are controlled by the --test_verbose_timeout_warnings flag.

How to trigger the same request more than once parallel at same time in karate DSL [duplicate]

I am using Karate for automating things in my project and I am excited about the way Karate provides solutions for API testing. I have a requirement in my project where I need to check the effect on the system when multiple users perform the same task at the same time (exactly the same time, including fractions of a second). I want to identify issues like deadlocks, increased response times, application crashes, etc. using this testing. Can you give me a hint on how I can get a concurrent-testing solution in Karate?
There is something called karate-gatling; please read: https://github.com/intuit/karate/tree/master/karate-gatling

How to use Jmeter with timer

I am having a problem with JMeter: using it with a Timer causes JMeter to crash.
The case is: I want to create a load of requests to be executed every half hour.
Is that something you can do with JMeter?
Every time I try it, JMeter keeps loading, hangs, and requires a shutdown.
If you want to leave JMeter up and running forever, make sure to follow JMeter Best Practices, as certain test elements might cause memory leaks.
If you need to create "spikes" of load every 30 minutes, it might be a better idea to use your operating system's scheduling mechanisms to execute "short" tests every half hour, like:
Windows Task Scheduler
Unix cron
MacOS launchd
Or, even better, go for a Continuous Integration server like Jenkins: it has the most powerful trigger mechanism, allowing you to define flexible criteria for when to start the job, and you can also benefit from the Performance Plugin, which can automatically mark a build as unstable or failed depending on test metrics and build performance trend charts.

Jmeter test always freeze when tested server gives up

When trying to run a load test in JMeter 5.1.1, the test always freezes at the end if the server gives up. The test completes correctly if the server does not give up. This is terrible, because the point of the test is to see at what point the server gives up, but as mentioned the test never ends and it is necessary to kill it by hand.
Example:
A test running 500 threads against a local server goes smoothly and finishes with the tidying-up message.
Exactly the same test running 500 threads against a cloud-based server at some point results in errors; the test goes to about 99% and then freezes on the summary, as in the example below:
summary + 99 in 00:10:11 = 8.7/s Avg: 872 Min: 235 Max:
5265 Err: 23633 (100.00%) Active: 500 Started: 500 Finished: 480
And that's it: you can wait forever and it will just stay stuck at this point.
I tried using different thread types without success. The next step was to change the sampler error behavior, and yes, changing it from Continue to Start Next Thread Loop or Stop Thread helps and the test ends, but then the results in the HTML report look bizarre and inaccurate. I even tried setting the timeout to 60000 ms in HTTP Request Defaults, but this also gave strange results.
That said, can someone tell me how to successfully run a load test for a server so that it always completes regardless of issues and is accurate? Also, I did see a few old questions about the same issue and they did not have any answer that would be helpful. Or is there any other more reliable open-source testing app that also has a GUI to create tests?
You're getting 100% errors, which looks "strange" to me in any case.
If setting the connect and response timeouts in the HTTP Request Defaults doesn't help, most probably the reason for the "hanging" lives somewhere else, and the only way to determine it is to take a thread dump and analyze the state of the threads, paying attention to the ones which are BLOCKED and/or WAITING. Then you should be able to trace this down to the JMeter test element which is causing the problem and look closely into what could go wrong.
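As a sketch of what to look for, the JVM's standard ThreadMXBean API exposes the same information as a jstack thread dump; the hypothetical helper below prints only the BLOCKED and WAITING threads of the JVM it runs in. To inspect JMeter itself you would either run jstack against the JMeter process id or embed equivalent logic inside the JMeter JVM (e.g. via a JSR223 element):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BlockedThreadReport {

    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // dumpAllThreads(true, true) also records locked monitors and
        // synchronizers, i.e. the same detail a full thread dump shows.
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            Thread.State state = info.getThreadState();
            if (state == Thread.State.BLOCKED || state == Thread.State.WAITING) {
                System.out.println(info.getThreadName() + " -> " + state);
                for (StackTraceElement frame : info.getStackTrace()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }
}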
Other hints include:
look for suspicious entries in the jmeter.log file
make sure JMeter has enough headroom to operate in terms of CPU, RAM, network sockets, etc. This can be done using e.g. the JMeter PerfMon Plugin
make sure to follow recommendations from 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure

Best way to handle batch jobs in selenium automation

I am implementing a Cucumber-JVM based Selenium automation framework.
One of the workflows in the web apps I test requires a long wait so that a batch job, scheduled as frequently as every 3 minutes, runs and creates a login id, which the user can then use to continue with the workflow.
I currently handle it by executing the initial part of the test case first and continuing with the other test cases, so that the framework gets ample time to wait for the user id to be created.
After all the other test cases have run, the second part of the test case is run. But before running the second part, I query the database and verify whether the id has been created. If it has, execution continues; otherwise, it fails, saying that the user id was not created.
Although this works for now, I am sure there are better ways to handle such scenarios. Has anyone come across such a scenario? How did you handle it?
I think I understand your problem. You would probably like to have an execution sequence like this:
Test 1
Test 2
Test 3
But if you implement Test 1 "correctly" it will take very long, because it has to wait for the system under test to do several long-running things, right?
Therefore you split Test 1 into two parts and run the tests like this:
Test 1 (Part A)
Test 2
Test 3
Test 1 (Part B)
So that your system under test has time to finish the tasks triggered by Test 1 (Part A).
As you acknowledged before, this is considered bad test design, as your test cases are no longer independent of each other. (Generally speaking, no test case should rely on side effects created by another test case beforehand.)
My recommendation for that scenario would be to leave Test 1 atomic, i.e. avoid splitting it into two parts, but still run the rest of the tests in parallel. Of course, whether or not this is feasible depends on your environment and on how you trigger the tests, but it would give you a better test structure plus the benefit of fast execution (a sketch of the waiting part follows the diagram below). So you would end up with this:
Test 1 starts   | Test 2 starts
Test 1 waits    | Test 2 finishes
Test 1 waits    | Test 3 starts
Test 1 runs     | Test 3 finishes
Test 1 finishes |
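In code, the "Test 1 waits" phase can be a bounded poll instead of a hard-coded sleep or a split test. A minimal sketch, assuming Awaitility is on the classpath; the step text and the userIdExistsInDatabase() lookup are illustrative placeholders:

import static org.awaitility.Awaitility.await;

import java.util.concurrent.TimeUnit;

import cucumber.api.java.en.Then; // io.cucumber.java.en.Then on Cucumber 5.x+

public class LoginIdSteps {

    @Then("^the login id has been created$")
    public void theLoginIdHasBeenCreated() {
        // The batch job runs roughly every 3 minutes, so poll the database
        // every 15 seconds and fail the scenario with a
        // ConditionTimeoutException if the id has not shown up after 10 minutes.
        await().atMost(10, TimeUnit.MINUTES)
               .pollInterval(15, TimeUnit.SECONDS)
               .until(this::userIdExistsInDatabase);
    }

    // Placeholder: replace with your own JDBC/DAO lookup.
    private boolean userIdExistsInDatabase() {
        return false;
    }
}

This keeps Test 1 self-contained while the rest of the suite can still run in parallel, if your runner supports it.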
I am not sure about the start -> wait -> wait -> run approach. It might work for a few tests but may not work well for hundreds of tests, as the wait time would grow. Even if we run it in parallel mode, it would consume some time. What if we have to wait for more such time-consuming components in the same flow? The more components, the more wait time, I guess. You may also need to consider the system's timeouts if you wait for a long time...
I feel even the first approach should be fine. There is no need to create multiple files for a test case. You can structure it in the same file so that you run the first part and end it. Then, after making sure the batch processing has happened, you can start the second part of your test case (file). The first part can be run in parallel mode, and after the waiting time, part 2 can also be executed in parallel.