'No data to report' when I execute coverage report - django-testing

I use django.test to write unit tests.
First I run:
coverage run ./manage.py test audit.lib.tests.test_prune
and it works fine:
----------------------------------------------------------------------
Ran 1 test in 1.493s
OK
But when I run coverage report, something unexpected happens. It should show a report, but instead I get 'No data to report':
root@0553f9cad609:/opt/buildaudit# coverage report
No data to report.
I have no idea what's wrong; it has confused me all day. Thank you all!

Some of my tests executed helper programs, and I wanted to gather coverage results for those programs too. That meant coverage had to gather and store metrics in multiple processes at the same time. Normally, it stores them in a single file named .coverage, which doesn't work when gathering metrics in parallel. Instead, coverage needs to be told to store results in separate files, one per process, giving them unique file names. Per the docs, that can be done by adding this to .coveragerc.
[run]
parallel = True
The report generators, like coverage html, expect those results to be combined into a single file. That can be done by running this after the tests have finished, and before trying to create a report from them.
% coverage combine
Not doing so produces the 'No data to report' error in the question. Credit goes to @PengQunZhong for first suggesting this.
Going beyond the question a bit, this actually wasn't enough for me to get measurements from all sub-processes. The docs have a good description of the subtleties and solutions, but I'll summarize what I chose. I use the multiprocessing module to start some of the sub-processes, so I had to add the following in the [run] section of .coveragerc.
concurrency = multiprocessing
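For reference, the combined [run] section of my .coveragerc therefore ended up looking roughly like this (the comments just restate the points above):
[run]
# one data file per process; combined later with coverage combine
parallel = True
# some sub-processes are started via the multiprocessing module
concurrency = multiprocessing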
Also, sub-processes needed to tell coverage to gather metrics since, unlike top-level tests, sub-processes are not run by coverage. I did this by adding the following at the top of the code for each sub-process. See the reference for other options.
import os
if "COVERAGE_PROCESS_START" in os.environ:
    import coverage
    coverage.process_startup()
The environment variable used here is recognized by coverage; don't rename it. Also, I ran my tests with the following. I use pytest, but other test frameworks would work similarly. There's also a pytest plug-in that can help.
% COVERAGE_PROCESS_START=.coveragerc coverage run -m pytest
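Putting the pieces together, the full sequence, including the combine step from above, was roughly:
% COVERAGE_PROCESS_START=.coveragerc coverage run -m pytest
% coverage combine
% coverage report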
Finally, some tests and their sub-processes needed small changes to ensure coverage was allowed to save its results when the process was terminated. An ungraceful exit, SIGKILL, etc. prevent this. coverage writes its results in an atexit hook, and if you have coverage 6.3 or newer, also in a signal handler for SIGTERM. If your sub-processes are terminated any other way, coverage will not be able to save its results. In my case, I usually sent a SIGTERM to the sub-process from its parent. A parent that used subprocess.Popen objects, for example, did this.
kid.terminate()
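More concretely, the parent side of one of my tests looked roughly like this (the helper script name and the timeout are invented for illustration):
import subprocess

# Start the helper; it calls coverage.process_startup() at the top of its code,
# so with parallel = True it writes its own coverage data file.
kid = subprocess.Popen(["python", "helper.py"])

# ... exercise the helper from the test ...

# terminate() sends SIGTERM, which coverage 6.3+ catches so the data file
# still gets written; SIGKILL would lose the child's coverage data.
kid.terminate()
kid.wait(timeout=10)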

Related

Karate Execution getting stuck in the report generation step

I am executing my Karate suite from TeamCity. I started facing an issue when I had to add some data CSV files with 1700 rows and around 10 columns.
I got an out-of-memory error during local execution. I added argLine params and increased the heap size to 6G, which solved the error locally.
When I moved this to the continuous integration environment, it gets stuck even with the argLine params and the 6G heap size. Interestingly, even if I exclude the tests that use these large files via tags, it still gets stuck.
I am using the parallel executor with 2 threads (I also tried 1 thread). I also use Cucumber reports.
From my analysis, Karate completes the test execution and gets stuck just before generating the report JSON and the Cucumber reports.
I have tried removing those huge CSV files and putting the data directly into Examples inside my feature file. It still gets stuck.
I have managed to fix this locally, but it seems to be a potential issue. Any suggestions?
The total number of tests I am running is 4500.
I am no expert on this, but I would say break your tests down into many classes (you could start with 2 runners instead of just 1) and have each class call only a portion of the .feature files you have. It is possible that breaking your tests into multiple classes, each running part of your test cases, might relieve the memory problem.
For example:
https://github.com/intuit/karate/blob/master/karate-demo/src/test/java/demo/greeting/GreetingRunner.java

Handling Expected Changes in Regression Tests

I am working on continuous deployment for my service, which generates XML files as output. To support this, we are planning to add regression tests to our deployment flow, where we compare the XML file generated with a code change against the one generated without it.
The problem is that some code changes legitimately change the output, which would cause the test to fail.
One approach could be to allow the tests to fail and generate a diff report which would then be manually approved.
How are such cases handled generally in continuous deployment?
You could use something like this xmldiff tool, which creates human-readable diffs between XML files. If a code change was made that causes a test failure, the diff report would already be generated for you.
I've used similar utilities for screenshot comparison, and although they still require manual review in the end when there are unexpected changes, it speeds up the process quite a bit.
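To make the fail-and-review approach concrete, here is a minimal sketch (file names are invented) of a regression check that compares the newly generated XML against a stored baseline and writes a unified diff report for manual approval when they differ:
import difflib
from pathlib import Path

def check_against_baseline(generated: Path, baseline: Path, report: Path) -> bool:
    """Return True if the generated XML matches the baseline; otherwise
    write a human-readable diff report for manual review and return False."""
    old = baseline.read_text().splitlines(keepends=True)
    new = generated.read_text().splitlines(keepends=True)
    diff = list(difflib.unified_diff(old, new,
                                     fromfile=str(baseline),
                                     tofile=str(generated)))
    if not diff:
        return True
    report.write_text("".join(diff))
    return False
If the change turns out to be expected, approving it is just a matter of copying the generated file over the baseline; an XML-aware differ like the one mentioned above gives friendlier output than a plain text diff.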

Aggregating code coverage from different testing frameworks

In a modern programming workflow, several testing frameworks are often used at once. For example, in the PHP world it is the de facto standard to have unit tests, integration tests, and functional/acceptance tests side by side, usually with a different framework for each test type. I am using a combination of PHPSpec for unit tests, PHPUnit for integration tests, and Codeception for functional tests.
Is it possible to aggregate the code coverage results that each of these frameworks returns? Is there any tool that aggregates code coverage reports from different frameworks?
Or is it only possible to view individual results for each framework, which are misleading because each coverage report doesn't take the other test suites into account?
It is actually quite simple to perform this task. All your frameworks rely on the same library to generate the code coverage.
As you can see, the generator in sebastianbergmann/php-code-coverage already supports a merge function (line 335) to merge different aggregates. Since you are part of a team that uses tests, I assume it will be easy for you to change the test execution layer slightly to gather the code coverage in a single PHP process and just merge them.
There is a tool for this: phpcov. It allows merging many coverage files with its merge option:
$ parallel --gnu ::: \
'phpunit --coverage-php /tmp/coverage/FooTest.cov tests/FooTest' \
'phpunit --coverage-php /tmp/coverage/BarTest.cov tests/BarTest'
$ phpcov merge /tmp/coverage --clover /tmp/clover.xml
phpcov 2.0.0 by Sebastian Bergmann.
Generating code coverage report in Clover XML format ... done
I think we are in the same boat: how can we tell how much coverage we have when using all these different testing tools? We discussed it with the team and decided to go for SonarSource, which has a PHP plugin and a live demo of its PHP report style.
I advise you to visit the live demo; it will help a lot.
It is a very robust tool and gives us good insight into the code.
The PHP Test Coverage Tool from Semantic Designs (my company) collects and combines test coverage from any framework, test set, or individual test, and even from ad hoc manual tests.
After running some set of tests, our tool can easily be triggered to dump test coverage vectors to a file; you need to modify the framework slightly to invoke
TCVDump();
when the framework completes, or you can invoke TCVDump() by touching an easily found, special web page added by the test coverage tool. Each such call produces a time-stamped or user-named file (e.g., named after the framework or test set) so the files are easily distinguished.
The graphical test coverage display included as part of the tool will interactively select and merge small or large sets of such files to produce a coherent whole, both as a display and as a summary. It will also compare test coverage vectors so you can decide whether the coverage from one test set includes/intersects another, etc.
The test coverage display component will also export text or XML/HTML summaries of the coverage results.
You can even run tests on different subsystems and combine them. This test coverage tool is part of a larger family of tools for many languages other than PHP; tests run on a multilingual application system can also be combined to provide an overview of coverage for the whole multilingual application.

Ordering tests in TFS 2012

There are a few tests in my testing solution that must be run first or else later tests will fail. I want a way to ensure these are run first and in a specific order. Is there any way of doing this other than using a .orderedtest file?
Some problems with the .orderedtest:
Certain tests should be run in a random order after the "set up" tests are finished
Ordered test does not seem to call the ClassInitialize method
Isn't an orderedtest a form of test list, which is deprecated in VS/TFS 2012?
My advice would be to fix your tests to remove the dependencies (i.e. make them proper "unit" tests) - otherwise they are bound to cause problems later, e.g.:
causing a simple failure to cascade so that hundreds of tests fail, making it hard to find the root cause
failing unexpectedly because someone has inadvertently modified the execution order
reporting passes when in fact they should be failing, just because the initial state is not as they required
You could try approaches like:
keep the tests separate but make each of them set up and tear down the test environment that they require (a shared class to provide the initial state would be helpful here; see the sketch after this list)
merge the related tests into a single one, so that you can control the setup, execution, and close-down in a robust way.
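To illustrate the first approach, here is a minimal sketch in Python's unittest (the names are invented, and the same pattern maps onto MSTest's TestInitialize/TestCleanup attributes): each test builds and discards its own state through a shared helper, so execution order no longer matters.
import unittest

class TestEnvironment:
    """Hypothetical shared helper that builds the initial state every test needs."""
    def setup(self):
        self.db = {"seeded": True}  # stand-in for real fixture creation

    def teardown(self):
        self.db = None

class OrderIndependentTests(unittest.TestCase):
    def setUp(self):
        self.env = TestEnvironment()
        self.env.setup()       # every test starts from a known state

    def tearDown(self):
        self.env.teardown()    # and leaves nothing behind for the next test

    def test_reads_seeded_data(self):
        self.assertTrue(self.env.db["seeded"])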

Grails, Hudson, and Cobertura, which tests are covering my code?

I just started working on an existing Grails project where a lot of code has been written and not much of it is covered by tests. The project is using Hudson with the Cobertura plugin, which is nice. As I'm going through things, I'm noticing that even though there are no specific test classes written for some code, it is still being covered. Is there any easy way to see which tests are covering the code? It would save me a bit of time if I knew that information.
Thanks
What you want to do is collect test coverage data per test. Then, when some block of code is exercised, you can trace it back to the test that covered it.
You need a test coverage tool which will do that; AFAIK, this is straightforward to organize. Just run one test and collect test coverage data.
However, most folks also want to know: what is the coverage of the application given all the tests? You could run the tests twice, once to get what-does-this-test-cover information, and then the whole batch to get what-does-the-batch-cover. Some tools (ours included) will let you combine the coverage from the individual tests to produce coverage for the set, so you don't have to run them twice.
Our tools have one nice extra: if you collect test-specific coverage, when you modify the code, the tool can tell which individual tests need to be re-run. You need a bit of straightforward scripting for this, to compare the results of the instrumentation data for the changed sources to the results for each test.