Karate execution getting stuck in the report generation step

I am executing my Karate suite from TeamCity. I started facing an issue when I had to add some data CSV files with 1700 rows and around 10 columns.
I got an OutOfMemoryError during local execution. I added argLine parameters and increased the heap size to 6 GB, which solved the error locally.
When I moved this to the continuous integration environment, it gets stuck even with the argLine parameters and the 6 GB heap size. The interesting fact is that even if I exclude the tests that use these large files via tags, it still gets stuck.
I am using the parallel runner with 2 threads (I also tried with 1 thread), and I use Cucumber reports.
From my analysis, what I understand is that Karate completes the test execution and gets stuck just before generating the JSON results and Cucumber reports.
I have tried removing those huge CSV files and putting the data directly into Examples tables inside my feature file. It still gets stuck.
I have managed to fix this locally, but it seems to be a potential issue. Any suggestions?
The total number of tests I am running is 4500.

I am no expert on this, but I would suggest breaking your tests down into multiple runner classes (you could start with 2 runners instead of 1) and having each class call only a portion of your .feature files. Splitting your test cases across several runners might relieve the memory problem.
For example:
https://github.com/intuit/karate/blob/master/karate-demo/src/test/java/demo/greeting/GreetingRunner.java
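A second runner covering only part of the suite might look like this (a minimal sketch assuming Karate 1.x with JUnit 5; the package path and tag are placeholders):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class GreetingRunner {

    @Test
    void testGreetingFeatures() {
        // run only the features under this package, on 2 threads
        Results results = Runner.path("classpath:demo/greeting")
                .tags("~@ignore")
                .outputCucumberJson(true) // JSON for the cucumber-reporting plugin
                .parallel(2);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}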

Related

Issue with multithreading using Karate version higher than v0.9.6: http call failed after 55 milliseconds for url:

I'm running a big campaign with about 500 tests targeting the API of an orchestration tool, using both a parallel runner and a sequential runner for the tests which need to be executed in sequence.
This works fine with Karate version 0.9.6.
As soon as I upgrade Karate to anything higher than v0.9.6, the results of the tests in the parallel runner always include a significant portion of failures. The sequential runner does not have any issues. The parallel runner also only seems to work fine if I set its threadCount = 1.
If the threadCount is higher than 1, the runner always seems to start well, but after some time quite a few transactions fail without any further details beyond this: http call failed after 55 milliseconds for url: http://...
This is all the error log I have, as there is no more than this entry for each failed test (around 40% of the full campaign).
As soon as one thread fails like that, basically all other threads follow and their tests fail for the same reason (only with a slightly different number of milliseconds). While investigating the problem, I was not able to identify a common pattern (such as always starting with the same failing test).
Did anybody else face similar issues with multithreading on a version higher than v0.9.6?
Is there a way to get more detailed logs? I use the value DEBUG in logback-test.xml.
Is there any recommendation of what to try to make it work?
Please don't hesitate to ask in case you need more information.
I would be happy for any kind of help, as I would like to benefit from the new karate-gatling facilities for performance testing, which are only available in the latest versions of Karate.
Many thanks!
We run 8 threads on Karate 1.1.0 with no issues at all. Do you have any extra logs? And do you confirm you don't get the same timeout error (http call failed after 55 milliseconds for url: http://...) when running multithreaded on the lower version?
Shot in the dark here, but do some of your scenarios have the same title? If so, try making them all unique and see if that solves the issue.
More logs would be useful; there is very little information here.
I am not sure about the exact error, but I faced similar issues some time ago that were caused by race conditions, especially when using variables set by Java methods. Are you using Java interop in your tests? If so, it is worth checking how these are used. A test may be trying to execute with a variable/parameter that is still in use by other tests when running in parallel. This wouldn't be an issue sequentially.
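To illustrate the kind of race meant here (class and field names are hypothetical): a Java helper that keeps state in a static field is shared by every Karate thread, whereas a ThreadLocal confines each thread to its own value.

// TokenHolder.java -- unsafe: the static field is shared across all runner
// threads, so parallel scenarios can overwrite each other's value mid-test
public class TokenHolder {
    private static String token;
    public static void set(String t) { token = t; }
    public static String get() { return token; }
}

// SafeTokenHolder.java -- thread-confined: each thread sees only its own value
public class SafeTokenHolder {
    private static final ThreadLocal<String> token = new ThreadLocal<>();
    public static void set(String t) { token.set(t); }
    public static String get() { return token.get(); }
}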

'No data to report' when I execute coverage report

I use django.test to do unit testing.
At first I run
coverage run ./manage.py test audit.lib.tests.test_prune
and it works well:
----------------------------------------------------------------------
Ran 1 test in 1.493s
OK
But when I run coverage report, the unexpected happens: it should show a report, but instead I get No data to report:
root@0553f9cad609:/opt/buildaudit# coverage report
No data to report.
I have no idea; it has confused me the whole day. Thank you all!
Some of my tests executed helper programs, and I wanted to gather coverage results for those programs too. That meant coverage had to gather and store metrics in multiple processes at the same time. Normally, it stores them in a single file named .coverage, which doesn't work when gathering metrics in parallel. Instead, coverage needs to be told to store results in separate files, one per process, giving them unique file names. Per the docs, that can be done by adding this to .coveragerc.
[run]
parallel = True
The report generators, like coverage html, expect those results to be combined into a single file. That can be done by running this after the tests have finished, and before trying to create a report from them.
% coverage combine
Not doing so produces the No data to report error in the question. Credit goes to @PengQunZhong for first suggesting this.
Going beyond the question a bit, this actually wasn't enough for me to get measurements from all sub-processes. The docs have a good description of the subtleties and solutions, but I'll summarize what I chose. I use the multiprocessing module to start some of the sub-processes, so I had to add the following in the [run] section of .coveragerc.
concurrency = multiprocessing
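Combined with the earlier setting, the [run] section of my .coveragerc therefore ended up as:
[run]
parallel = True
concurrency = multiprocessing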
Also, sub-processes needed to tell coverage to gather metrics since, unlike top-level tests, sub-processes are not run by coverage. I did this by adding the following at the top of the code for each sub-process. See the reference for other options.
import os

# start coverage in this sub-process only when the parent asked for it
if "COVERAGE_PROCESS_START" in os.environ:
    import coverage
    coverage.process_startup()
The environment variable used here is recognized by coverage; don't rename it. Also, I ran my tests with the following. I use pytest, but other test frameworks would be done similarly. There's also a pytest plug-in that can help.
% COVERAGE_PROCESS_START=.coveragerc coverage run -m pytest
Finally, some tests and their sub-processes needed small changes to ensure coverage was allowed to save its results when the process was terminated. An ungraceful exit, SIGKILL, etc. prevent this. coverage writes its results in an atexit hook, and if you have coverage 6.3 or newer, also in a signal handler for SIGTERM. If your sub-processes are terminated any other way, coverage will not be able to save its results. In my case, I usually sent a SIGTERM to the sub-process from its parent. A parent that used subprocess.Popen objects, for example, did this.
kid.terminate()  # SIGTERM: coverage 6.3+ saves its data in a SIGTERM handler
kid.wait()       # reap the child so its coverage data file is completely written

Running Google Test cases non parallel

Because of resource exhaustion, there is a need to run test cases serially, without threads (integration tests for CUDA code). I went through the source code (e.g. tweaking GetThreadCount()) and tried to find other ways to influence the gmock/gtest framework to run tests serially, but found no way out.
At first I did not find any command line arguments that could influence it, either. It feels like the only way out is to create many binaries or to create a script that utilizes --gtest_filter. I would not like to mess with hidden synchronization primitives between test cases.
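The script I have in mind would just run disjoint filters one invocation at a time, e.g. (the binary and suite names are only illustrative):
% ./cuda_integration_tests --gtest_filter=MemorySuite.*
% ./cuda_integration_tests --gtest_filter=KernelSuite.*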

Iteration over the same automated test case

I am using Selenium WebDriver and TestNG to create an automated test case. I am running the same test case multiple times for different sets of data. The execution slows down after each iteration, and at some point it becomes very slow and the process stops.
The code is very straightforward: it iterates over the same TestNG method containing Selenese-style scripts (example: driver.findElement(By.id(target)).click();).
Any idea why the execution gets slower and stops after multiple iterations?
@Anna Clearing temp files solved a similar issue for me. My test was generating a lot of log files, screenshots, and Windows temp files, among others. Now I make the automation clear my temp files, and my results have been much better.
If that does not solve your issue, please share more information on how your automation is set up (TestNG, Jenkins, Maven, etc.) and the code that initiates the runs.
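The cleanup itself can be as simple as wiping a known artifact directory before each run; a rough Java sketch (the class and the path are hypothetical, adapt them to wherever your logs and screenshots land):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// hypothetical helper: empties the artifact directory so logs and screenshots
// from earlier iterations don't pile up
public class TempCleaner {

    public static void clear(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return;
        }
        try (Stream<Path> paths = Files.walk(dir)) {
            paths.sorted(Comparator.reverseOrder()) // delete children before parents
                 .filter(p -> !p.equals(dir))       // keep the root directory itself
                 .forEach(p -> p.toFile().delete());
        }
    }

    public static void main(String[] args) throws IOException {
        clear(Path.of("target/test-artifacts")); // hypothetical artifact location
    }
}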

Ordering tests in TFS 2012

There are a few tests in my test solution that must be run first, or else later tests will fail. I want a way to ensure these are run first and in a specific order. Is there any way of doing this other than using an .orderedtest file?
Some problems with the .orderedtest:
Certain tests should be run in a random order after the "set up" tests are finished
Ordered test does not seem to call the ClassInitialize method
Isn't an ordered test a form of test list, which is deprecated in VS/TFS 2012?
My advice would be to fix your tests to remove the dependencies (i.e. make them proper "unit" tests); otherwise they are bound to cause problems later, e.g.:
causing a simple failure to cascade so that hundreds of tests fail, making it hard to find the root cause
failing unexpectedly because someone has inadvertently modified the execution order
reporting passes when in fact they should be failing, just because the initial state is not as they required
You could try approaches like:
keep the tests separate but make each of them set up and tear down the test environment that it requires (a shared class to provide the initial state would be helpful here; see the sketch after this list)
merge the related tests into a single one, so that you can control the setup, execution, and close-down in a robust way.
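A minimal sketch of the first approach, shown in JUnit-style Java since the pattern is framework-agnostic (MSTest's TestInitialize/TestCleanup play the same roles; all names here are hypothetical):

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderIndependentTest {

    // hypothetical stand-in for whatever shared state the real tests depend on
    static class TestEnvironment {
        java.util.List<String> records = new java.util.ArrayList<>();
        void seed() { records.add("baseline"); }
        void cleanUp() { records.clear(); }
    }

    private TestEnvironment env;

    @Before
    public void setUp() {
        env = new TestEnvironment();
        env.seed(); // every test starts from the same known state
    }

    @After
    public void tearDown() {
        env.cleanUp(); // leave nothing behind for the next test
    }

    @Test
    public void worksRegardlessOfOrder() {
        assertEquals(1, env.records.size());
    }
}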