Does Codeception have an equivalent to PHPUnit's "strict coverage"?

When using PHPUnit, you can annotate a test case with @covers SomeClass::someMethod to ensure that only code inside that method is recorded as covered when running the test. I like to use this feature because it helps me separate code that was incidentally executed during a test from code that was actually tested.
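For illustration, a minimal PHPUnit test using the annotation might look like this (Calculator and its add() method are hypothetical stand-ins for the class under test):

```php
<?php
use PHPUnit\Framework\TestCase;

class CalculatorTest extends TestCase
{
    /**
     * Only lines inside Calculator::add are recorded as covered by this
     * test; any other code it happens to execute is left out of the report.
     *
     * @covers Calculator::add
     */
    public function testAddReturnsSum()
    {
        $calculator = new Calculator(); // hypothetical class under test
        $this->assertSame(4, $calculator->add(2, 2));
    }
}
```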
After using Codeception to implement some acceptance tests for my project, I decided I would rather use it than PHPUnit to run my unit tests. I would like to remove PHPUnit from the project if possible.
I am using Codeception's Cest format for my unit tests, and the @covers and @codeCoverageIgnore annotations no longer work. Code coverage reports show executed code outside of the methods specified with @covers as covered. Is there any way to mimic that "strict coverage" functionality using Codeception?
Edit: I have submitted an enhancement request on the Codeception project's GitHub.

It turns out that strict coverage was not possible using Cest-format tests when I asked the question. I have implemented it and the pull request has been merged.
For anyone migrating tests from PHPUnit and looking for this feature as I was, this means that a later release of Codeception should provide support for @covers, @uses, @codeCoverageIgnore, and other related test annotations.
The current version (2.2.4 at the time of writing) doesn't support it, but 2.2.x-dev should.
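With that change merged, a Cest method annotated for strict coverage should look roughly like this (again with a hypothetical Calculator, and assuming the Asserts module is enabled for the suite):

```php
<?php
class CalculatorCest
{
    /**
     * As in PHPUnit, only code inside Calculator::add should count
     * as covered by this test once annotation support is in place.
     *
     * @covers Calculator::add
     */
    public function addReturnsSum(UnitTester $I)
    {
        $calculator = new Calculator(); // hypothetical class under test
        $I->assertSame(4, $calculator->add(2, 2));
    }
}
```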


How do I call a function when all tests are finished running? [duplicate]

In Rust, is there any way to execute a teardown function after all tests have been run (i.e. at the end of cargo test) using the standard testing library?
I'm not looking to run a teardown function after each test; that case has already been discussed in these related posts:
How to run setup code before any tests run in Rust?
How to initialize the logger for integration tests?
These discuss ideas for running (a combined sketch follows the list):
setup before each test
teardown after each test (using std::panic::catch_unwind)
setup before all tests (using std::sync::Once)
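A minimal sketch combining the two patterns from those posts (the setup and teardown bodies are placeholders for whatever your suite actually needs):

```rust
use std::panic;
use std::sync::Once;

static INIT: Once = Once::new();

fn run_test<T>(test: T)
where
    T: FnOnce() + panic::UnwindSafe,
{
    // Runs exactly once, no matter how many tests call run_test.
    INIT.call_once(|| {
        println!("global setup");
    });

    // catch_unwind keeps a panicking test from skipping the teardown.
    let result = panic::catch_unwind(test);

    println!("per-test teardown");

    // Re-surface the test's failure after teardown has run.
    assert!(result.is_ok());
}

#[test]
fn addition_works() {
    run_test(|| {
        assert_eq!(2 + 2, 4);
    });
}
```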
One workaround is a shell script that wraps around the cargo test call, but I'm still curious if the above is possible.
I'm not sure there's a way to have a global ("session") teardown with Rust's built-in testing features. Previous inquiries seem to have yielded little, aside from "maybe a build script". Third-party testing systems (e.g. shiny or stainless) might have that option, though; it might be worth looking into their exact capabilities.
Alternatively, if nightly is suitable, there's a custom test frameworks feature being implemented, which you might be able to use for that purpose.
That aside, you may want to look at macro_rules! to clean up some boilerplate; that's what folks like burntsushi do, e.g. in the regex package.
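For example, a rough sketch of such a macro; the wrapper logic and test bodies here are placeholders:

```rust
// Stamps out #[test] functions that all share the same wrapper logic,
// so the catch_unwind/teardown boilerplate is written only once.
macro_rules! test_case {
    ($name:ident, $body:expr) => {
        #[test]
        fn $name() {
            let result = std::panic::catch_unwind(|| $body);
            // shared per-test cleanup goes here
            assert!(result.is_ok());
        }
    };
}

test_case!(adds, { assert_eq!(1 + 1, 2); });
test_case!(subtracts, { assert_eq!(3 - 1, 2); });
```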

How to display a short test report/counters in travis-ci?

I mean, it would be very useful if I could see how many tests passed or failed in a single line, without reading the build logs.
I use Karma as my test runner. It has a lot of reporters, but which one should I use?
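For reference, Karma's built-in progress and dots reporters both end with a one-line summary ("Executed N of M ..."). A minimal sketch of the relevant karma.conf.js, assuming the karma-jasmine and karma-phantomjs-launcher plugins are installed and a made-up test file layout:

```js
// karma.conf.js
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: ['test/**/*.spec.js'],
    // 'dots' keeps CI logs short and still prints a final summary line
    reporters: ['dots'],
    browsers: ['PhantomJS'],
    // exit after one run, as on CI
    singleRun: true,
  });
};
```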
Example from TeamCity: a one-line summary of passed/failed counts (screenshot omitted).
This seems like a useful feature but the current user interface doesn't seem to support it.
You can file it as a feature request on Travis CI's GitHub page using the link below:
https://github.com/travis-ci/travis-ci/issues
Although Travis CI doesn't have its own interface for counting the number of tests passed, it does work with CodeClimate, which has its own interface and metrics for test coverage. It shows overall test coverage for the whole project and coverage for each file. There's some more info on that here, though it looks like their free version allows local testing only.
There are other tools out there for tracking and analyzing coverage as well, including Coveralls, which is also pretty good. It has a free version for open source, like Travis CI, so that can be a plus. It also shows coverage as a percentage and file by file.

BDD framework with good reports

My project requirements are:
1. The framework must produce detailed step reports, which can be sent to the client through email.
2. Execution time must be low.
3. Tests must be easy to write.
I know Behat and Cucumber.
Please suggest which framework is good.
I would suggest the Behat + Mink + Selenium combination. I've been using it for a very long time.
Behat will give you the reports you want. We always send clients reports in which every single step is printed and marked as either success or failure. At the end, you get a full summary showing the overall result.
e.g. bin/behat @YourBundleName -f pretty,html --out ,report-path/behat.html. You can even get screenshots of failed steps.
Any setup can be fast or slow; the result depends on how you do things. You have a lot of options for making Behat tests run fast, e.g. using PhantomJS to run the tests and Symfony2 as the default session.
Behat uses the Gherkin language, which is easy to understand and write. You don't have to be a programmer at all.
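For a taste, a minimal scenario using MinkExtension's built-in steps (the page path, field names, and texts are made up):

```gherkin
Feature: Login
  Scenario: Successful login
    Given I am on "/login"
    When I fill in "username" with "demo"
    And I fill in "password" with "secret"
    And I press "Log in"
    Then I should see "Welcome back"
```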
One framework known for its pretty reports is Concordion. Please have a look at the example to view one such report: http://concordion.org/Example.html
The Java version of Concordion uses JUnit to execute its tests, so you get good integration in your development environment. Concordion supports multiple technologies, such as .NET, Ruby, Python, etc.: http://concordion.org/Ports.html
Which technology are you using?
Concordion, based on specification by example, has been designed with a short learning curve as a top priority. The purposely small command set is simple to learn: http://concordion.org/Tutorial.html

ScalaTest and Maven: getting started

I have a Maven/Java project I've been working on for years, and I wanted to take JavaPosse's advice and start writing my tests in Scala. I've written a few tests following ScalaTest's JUnit 4 quick start, and now I want these tests to be executed while running "mvn test". How should I do this? What should I put into my pom.xml to allow the tests in src/test/scala to run side by side with my old JUnit 4 tests?
Cheers
Nik
PS, yes, I've been Googling, but all I could find on the topic were some pre-v1.0 suggestions that I didn't get working
PPS, bonus question: how can I run these tests one-at-a-time by rightclicking them in Eclipse/STS and say "Debug As... ScalaTest" or something similar where I've so far said "Debug As... JUnit Test"?
PPPS, I expect the answer has changed since July '09?
The second answer in one of the questions you linked to SHOULD work:
Is there a Scala unit test tool that integrates well with Maven?
You annotate your tests with a JUnit @RunWith annotation and give it the ScalaTest JUnitRunner: http://www.artima.com/docs-scalatest-2.0.RC3/#org.scalatest.junit.JUnitRunner
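Concretely, a minimal sketch (the suite name and assertion are placeholders, and FunSuite could be any ScalaTest suite type):

```scala
import org.junit.runner.RunWith
import org.scalatest.FunSuite
import org.scalatest.junit.JUnitRunner

// JUnitRunner lets JUnit (and hence Surefire and the Eclipse JUnit
// plugin) execute this ScalaTest suite.
@RunWith(classOf[JUnitRunner])
class ExampleTest extends FunSuite {
  test("addition works") {
    assert(1 + 1 === 2)
  }
}
```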
If your tests also adhere to any naming conventions enforced by Maven (Surefire's default includes, e.g. *Test), this should work fine.
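The test sources under src/test/scala also need to be compiled during the build. A sketch of the relevant pom.xml fragment, assuming the scala-maven-plugin (the version number is illustrative):

```xml
<build>
  <plugins>
    <!-- Compiles src/main/scala and src/test/scala alongside Java sources -->
    <plugin>
      <groupId>net.alchim31.maven</groupId>
      <artifactId>scala-maven-plugin</artifactId>
      <version>3.2.2</version>
      <executions>
        <execution>
          <goals>
            <goal>compile</goal>
            <goal>testCompile</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```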
Note: It doesn't matter what kind of ScalaTest trait you use; all of them should work. If they don't and Bill Venners doesn't answer this question, contact him on the ScalaTest mailing list.
Other note: you can run such test suites in Eclipse using the normal JUnit plugin. But you can't run single tests, since the plugin expects to deduce a method name from the test name, which doesn't work with all types of ScalaTest tests.

Is there a tool for creating historical reports out of JUnit/NUnit results?

Looking for a way to get a visual report about:
overall test success percentage over time (information about whether and how quickly tests are going greener)
visualised single test results over time (to easily notice a test that has gone red after being green for a long time, or vice versa, to pay attention to a test that has just gone green)
any other visual statistics that would benefit testers and the project as a whole
Basically, a tool that would generate results from the whole test results directory, not just from a single (daily) run.
Generally it seems this could be done using XSLT, but XSLT doesn't seem to have much flexibility for working with multiple files at the same time.
Does such a tool exist already?
I feel fairly safe claiming that most continuous integration engines, such as Hudson (for Java), provide this capability either natively or through plugins. In Hudson's case there are already a few code coverage plugins available, and I think it draws basic graphs from unit test results automatically by itself.
Oh, and remember to configure the CI properly; for example, our Hudson polls CVS every 10 minutes, and if it sees any changes it does all the associated tricks (getting the updated .java files, compiling, running tests, verifying dependencies, etc.) to see whether the build is still OK.
Hudson will do this, and it will work with NUnit (here), JUnit (natively), and MSTest.exe tests using the steps I outline here. It does all that you require and more. Even if you want it to only run tests and give you feedback on those, it can.
There's a new report framework supporting NUnit/JUnit called Allure. To retrieve information from NUnit you need to use the NUnit adapter; for JUnit, read the following wiki page. You can use it with Jenkins via the respective plugin.