Post-process xcodebuild test output in Bamboo

I am looking for a way to export the test results from a test run with xcodebuild test ... to the Atlassian Bamboo CI server. This is what I have found so far:
ocunit2junit: Consumes the raw output and produces a set of JUnit *.xml files that can be read by the Bamboo JUnit reporter. Unfortunately, it doesn't work well with Xcode 11 (it doesn't pick up all the test results). It hasn't been updated in the past eight years, which makes it likely that the xcodebuild output has changed enough to render the parser fragile.
trainer: This appears to be a smaller project that uses the xcresult file. The last update was 13 months ago. My concern is that this might end up being even more fragile in case Apple decides to change some internals.
xcpretty: The top dog that is widely recommended and referenced. Unfortunately, it hasn't been updated in more than two years, and this issue suggests that won't change in the future. I have also had trouble exporting the test results in JUnit format, and error reporting isn't working properly.
All of these export to JUnit format, which is then picked up by Bamboo; maybe that's not the best choice? Apart from these options, are there any alternatives I have missed that export xcodebuild test results to Bamboo?
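For illustration, a minimal sketch of the trainer route (the scheme name and paths are placeholders, and the flags may differ between trainer versions, so check trainer --help):
$ xcodebuild test -scheme MyApp -resultBundlePath TestResults.xcresult
$ gem install trainer
$ trainer --path TestResults.xcresult --output_directory test-reports
The resulting test-reports/*.xml files could then be fed to Bamboo's JUnit parser.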

Related

Aggregating code coverage from different testing frameworks

In a modern programming workflow, numerous testing frameworks are used at once. For example, in the PHP world it is the de facto standard to use unit tests, integration tests, and functional/acceptance tests together. Most of the time, different frameworks are used for the different test types. I am using a combination of PHPSpec for unit tests, PHPUnit for integration tests, and Codeception for functional tests.
Is it possible to aggregate the code coverage results that each of these frameworks returns? Is there any tool that aggregates code coverage reports from different frameworks?
Or is it only possible to view individual results for each framework, which are inaccurate because each code coverage report doesn't take the other tests into account?
It is actually quite simple to perform this task. All your frameworks rely on the same library to generate the code coverage.
As you can see, the generator in sebastianbergmann/php-code-coverage already supports a merge function (line 335) to merge different aggregates. Since you are part of a team using tests, I assume it will be easy for you to change the test execution layer slightly to gather the code coverage in a single PHP process and just merge them.
There is a tool for this: phpcov. It allows you to merge many coverage files with its merge option:
$ parallel --gnu ::: \
'phpunit --coverage-php /tmp/coverage/FooTest.cov tests/FooTest' \
'phpunit --coverage-php /tmp/coverage/BarTest.cov tests/BarTest'
$ phpcov merge /tmp/coverage --clover /tmp/clover.xml
phpcov 2.0.0 by Sebastian Bergmann.
Generating code coverage report in Clover XML format ... done
I think we are in the same boat: how can we tell how much coverage we have across all these different testing tools? We discussed it with the team and decided to go for
SonarSource, which provides a PHP plugin and a live demo.
I advise you to visit the live demo of the PHP report style; it will help more.
It is a very robust tool and gives us full insight into the code.
The PHP Test Coverage Tool from Semantic Designs (my company) collects and combines test coverage from any
framework
test set
individual test
even ad hoc manual tests.
After running some set of tests, our tool can easily be triggered to dump test coverage vectors to a file; you need to modify the framework slightly to invoke
TCVDump();
when the framework completes, or you can invoke TCVDump() by touching an easily found, special web page added by the test coverage tool. Each such call produces a time-stamped or user-named file (e.g., named after the framework or test set) so they are easily distinguished.
The graphical test coverage display included as part of the tool will interactively select and merge small or large sets of such files to produce a coherent whole, both as a display and as a summary. It will also compare test coverage vectors, enabling one to decide whether coverage from one test set includes/intersects another, etc.
The test coverage display component will also export text or XML/HTML summaries of the coverage results.
You can even run tests on different subsystems and combine them. This test coverage tool is part of a larger family of tools for many languages other than PHP; tests run on a multilingual application system can also be combined to provide an overview of coverage for the whole multilingual application.

How to display a short test report/counters in travis-ci?

I mean, it would be very useful if I could see how many tests passed/failed in a single line, without reading the build logs.
I use Karma as the test runner. It has a lot of reporters, but which one should I use?
Example from TeamCity:
This seems like a useful feature, but the current user interface doesn't seem to support it.
You can file it as a feature request on Travis CI's GitHub page using the link below:
https://github.com/travis-ci/travis-ci/issues
Although Travis CI doesn't have its own interface for counting the number of tests passed, it does work with CodeClimate, which has its own interface and metrics for test coverage. It shows overall test coverage for the whole project and coverage for each file. There's some more info on that here, though it looks like their free version allows local testing only.
There are other tools out there for tracking and analyzing coverage as well, including Coveralls, which is also quite good. They have a free version for open source, like Travis CI, so that can be a plus. They also show coverage as a percentage and file by file.
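As a rough sketch of that route (assuming the karma-coverage lcov reporter is configured and the node-coveralls package is used; the coverage path is a common default, not a guarantee):
$ npm install --save-dev coveralls
$ karma start karma.conf.js --single-run
$ cat ./coverage/lcov.info | ./node_modules/.bin/coveralls
Run inside a Travis build, coveralls detects the CI environment and posts the report, and the percentage then shows up on the Coveralls project page.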

BDD framework to work with that has good reports

My project requirements are:
1. The framework must produce detailed step reports, which can be sent to the client through email.
2. The execution time must be low.
3. It must be easy to write tests.
I know Behat and Cucumber.
Please suggest which framework is good.
I would say the Behat+Mink+Selenium combination. I've been using it for a very long time.
Behat will give you the report you want. We always send clients reports where every single line is printed and marked as either success or failure. At the end, you get a full result where you can see the overall report.
e.g. bin/behat #YourBundleName -f pretty,html --out ,report-path/behat.html. You can even get screenshots of failed steps.
Every program can be fast or slow; the result depends on how you do things. You have a lot of options to make Behat tests run fast, e.g. using PhantomJS to run the tests and symfony2 as the default session.
Behat uses the Gherkin language, which is easy to understand and write. You don't have to be a programmer at all.
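To make the reporting requirement concrete, here is a sketch of producing the HTML report and mailing it (mutt is just one possible mailer; the report path and address are placeholders):
$ bin/behat -f pretty,html --out ,reports/behat.html
$ mutt -s "Nightly Behat report" -a reports/behat.html -- client@example.com < /dev/null
As far as I understand it, --out maps outputs to formats in order, so the leading comma sends the pretty formatter to the console while the html formatter writes to the file.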
One framework known for its pretty reports is Concordion. Please, have a look at the example to view one such report: http://concordion.org/Example.html
The Java version of Concordion utilizes JUnit to execute its tests, so you get good integration with your development environment. Concordion supports multiple technologies such as .NET, Ruby, Python, etc.: http://concordion.org/Ports.html
Which technology are you using?
Concordion, based on specification by example, has been designed with a short learning curve as a top priority. The purposely small command set is simple to learn: http://concordion.org/Tutorial.html

Running test coverage separately from test execution

We're configuring build steps in TeamCity. Since we have huge problems with test coverage reports (they were there and then inexplicably vanished), we're trying to find a workaround (asking, and putting a bounty on, a question directly related to our issue yielded a very cold response).
Please note that I'm not looking for an opinion but rather a technical knowledge base to support (or kill) this choice of ours. And yes, I've checked the build logs; those are posted in the other thread. This question is about the (in?)sanity of trying an alternative approach. :)
Is it recommended to run a build step for tests and then another build step for test coverage?
Does it even make sense to run these in separate build steps?!
What gains and disadvantages are there to running coverage bundled with/separately from the tests themselves?
Test coverage reports are generated during unit test runs. Unless your problem is with reading the generated reports, it doesn't make sense to run them in separate build steps. Test coverage tells you what parts of your code were run WHILE the tests were running; I don't see how the two could be independent.
It may make more sense to ask for help with the test coverage reports no longer being generated...
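To make that concrete: the coverage tool instruments the very process that executes the tests, so collection and execution are one step. A PHPUnit example (your stack may differ, but the pattern is the same):
$ phpunit --coverage-clover build/coverage/clover.xml tests/
One command, one build step: the tests run and the Clover report comes out of the same process. A separate "coverage" build step would have to re-run the tests anyway.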

Is there a tool for creating historical reports out of J/NUnit results

Looking for a way to get a visual report about:
overall test success percentage over time (information about whether and how quickly tests are going greener)
visualised single test results over time (to easily notice a test gone red that has been green for a long time, or, vice versa, to pay attention to a test that has just gone green)
any other visual statistics that would benefit testers and the project as a whole
Basically, a tool that would generate results from the whole test results directory, not just from the single (daily) run.
Generally it seems this could be done using XSLT, but XSLT doesn't seem to have much flexibility for working with multiple files at the same time.
Does such a tool exist already?
I feel fairly safe claiming that most continuous integration engines, such as Hudson (for Java), provide such a capability either natively or through plugins. In Hudson's case there are a few code coverage plugins available already, and I think it does basic graphs from unit tests automatically by itself.
Oh, and remember to configure the CI properly; for example, our Hudson polls CVS every 10 minutes, and if it sees any changes, it does all the associated tricks (get updated .java files, compile, run tests, verify dependencies, etc.) to see if the build is still OK or not.
Hudson will do this, and it will work with NUnit (here), JUnit (natively), and MSTest.exe tests using the steps I outline here. It does all that you require and more. Even if you want it to ONLY run tests and give you feedback on those, it can.
There's a new report supporting NUnit/JUnit results called Allure. To retrieve information from NUnit you need to use the NUnit adapter; for JUnit, read the following wiki page. You can use it with Jenkins via the respective plugin.
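For a rough idea of the flow (directory names are Allure's defaults, and the standalone CLI is assumed to be installed): the adapters write result files into allure-results while the tests run, and the report is rendered afterwards:
$ allure generate allure-results -o allure-report
$ allure open allure-report
The generated report includes per-test history and trend graphs of the kind asked about above, provided results from previous runs are retained.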