I just started a new Haskell project and wanted to set up a good testing workflow from the beginning. It seems like Haskell has a lot of excellent and unique testing tools and many different ways to integrate them.
I have looked into:
HUnit
QuickCheck
benchpress
HPC
complexity
All of these seem to work very well in their domains, but I'm looking for a comprehensive approach to testing and was wondering what has worked well for other people.
Getting unit testing, code coverage, and benchmarks right is mostly about picking the right tools.
test-framework provides a one-stop shop: it runs all your HUnit test cases and QuickCheck properties from a single harness.
Code coverage is built into GHC in the form of the HPC tool.
Criterion provides some pretty great benchmarking machinery.
I'll use as a running example a package that I just started enabling with unit testing, code coverage, and benchmarks:
http://github.com/ekmett/speculation
You can integrate your tests and benchmarks directly into your cabal file by adding sections for them and masking them behind flags, so that every user of your library isn't forced to have (and want to use) the exact versions of the testing tools you've chosen.
http://github.com/ekmett/speculation/blob/master/speculation.cabal
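As a rough sketch, a flag-gated test section in a cabal file could look something like this (the executable name and dependency list here are illustrative, not copied from speculation.cabal):

```cabal
flag tests
  description: Build the test executable
  default:     False

executable test-speculation
  main-is: Test.hs
  if !flag(tests)
    buildable: False
  else
    build-depends: base, HUnit, QuickCheck, test-framework,
                   test-framework-hunit, test-framework-quickcheck2
    -- enable HPC instrumentation so coverage data is collected
    ghc-options: -fhpc
```

With the flag off by default, ordinary users of the library never need the testing dependencies installed.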
Then, you can tell cabal how to run your test suite. As cabal test doesn't yet exist -- we have a student working on it for this year's Summer of Code! -- the best mechanism we have is cabal's user hook mechanism. This means switching to a 'Custom' build with cabal and setting up a testHook. An example of a testHook that runs a test program written with test-framework, and then applies hpc to profile it, can be found here:
http://github.com/ekmett/speculation/blob/master/Setup.lhs
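For illustration only (the actual Setup.lhs in the repo is the authoritative version), a testHook along these lines shells out to the compiled test binary and then to hpc; the binary path and hpc invocation below are assumptions:

```haskell
import Distribution.Simple
import System.Process (system)

main :: IO ()
main = defaultMainWithHooks simpleUserHooks { runTests = testHook }
  where
    -- The four arguments (args, verbosity flag, package description,
    -- local build info) are unused in this sketch.
    testHook _ _ _ _ = do
      -- run the hpc-instrumented test binary built from Test.hs
      _ <- system "./dist/build/test-speculation/test-speculation"
      -- then produce a coverage report from the .tix file it wrote
      _ <- system "hpc report test-speculation"
      return ()
```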
And then you can use test-framework to bundle up QuickCheck and HUnit tests into one program:
http://github.com/ekmett/speculation/blob/master/Test.hs
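In miniature, such a program looks roughly like this (a toy property and unit test, not the actual contents of the repo's Test.hs):

```haskell
module Main (main) where

import Test.Framework (defaultMain, testGroup)
import Test.Framework.Providers.HUnit (testCase)
import Test.Framework.Providers.QuickCheck2 (testProperty)
import Test.HUnit ((@?=))

-- A QuickCheck property: reversing twice is the identity.
prop_reverseInvolutive :: [Int] -> Bool
prop_reverseInvolutive xs = reverse (reverse xs) == xs

-- An HUnit test case.
case_emptySum :: IO ()
case_emptySum = sum ([] :: [Int]) @?= 0

main :: IO ()
main = defaultMain
  [ testGroup "properties"
      [ testProperty "reverse . reverse == id" prop_reverseInvolutive ]
  , testGroup "units"
      [ testCase "sum of empty list" case_emptySum ]
  ]
```

The resulting executable exits nonzero on failure, which is exactly what the testHook (and later cabal test) wants.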
The cabal file there is careful to turn on -fhpc to enable code coverage testing, and then the testHook in Setup.lhs manually runs hpc and writes its output into your dist dir.
For benchmarking, the story is a little more manual: there is no 'cabal benchmark' option. You could wire your benchmarks into your test hook, but I like to run them by hand, since Criterion has so many graphical reporting options. You can add your benchmarks to the cabal file as shown above, give them separate compilation flags, hide them behind a cabal flag, and then let Criterion do all the heavy lifting:
http://github.com/ekmett/speculation/blob/master/Benchmark.hs
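A Criterion driver is mostly boilerplate around defaultMain; here is a sketch (the benchmarked function is a placeholder, not what Benchmark.hs actually measures):

```haskell
module Main (main) where

import Criterion.Main (defaultMain, bench, bgroup, nf)

-- A stand-in function to benchmark.
fib :: Int -> Integer
fib n = fibs !! n
  where fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = defaultMain
  [ bgroup "fib"
      [ bench "fib 20" (nf fib 20)  -- nf forces the result to normal form
      , bench "fib 30" (nf fib 30)
      ]
  ]
```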
You can then run your benchmarks from the command line and get pop-up KDE windows with benchmark results, etc.
Since in practice you're living in cabal anyways while developing Haskell code, it makes a lot of sense to integrate your toolchain with it.
Edit: Cabal test support now does exist. See http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/developing-packages.html#test-suites
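With modern Cabal you declare the suite declaratively instead of via hooks; a minimal (illustrative) stanza looks like:

```cabal
test-suite properties
  type:           exitcode-stdio-1.0
  main-is:        Test.hs
  hs-source-dirs: tests
  build-depends:  base, HUnit, QuickCheck, test-framework,
                  test-framework-hunit, test-framework-quickcheck2
```

After cabal configure --enable-tests, running cabal test builds and runs the suite.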
The approach advocated in RWH ch. 11 and in XMonad is approximately:
State all properties of the system in QuickCheck
Show test coverage with HPC.
Confirm space behavior with heap profiling.
Confirm thread/parallel behavior with ThreadScope.
Confirm microbenchmark behavior with Criterion.
Once your major invariants are established via QuickCheck, you can start refactoring, moving those tests into type invariants.
Practices to support your efforts:
Run a simplified QuickCheck regression on every commit.
Publish HPC coverage details.
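Step 1 above, stating invariants as QuickCheck properties, looks roughly like this (a toy invariant, not one of XMonad's):

```haskell
import Data.List (sort)
import Test.QuickCheck (quickCheck)

-- Invariant: sorting is idempotent.
prop_sortIdempotent :: [Int] -> Bool
prop_sortIdempotent xs = sort (sort xs) == sort xs

main :: IO ()
main = quickCheck prop_sortIdempotent
```

A stripped-down driver like this, run on every commit, is the "simplified QuickCheck regression" mentioned above.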
The test-framework package is really awesome. You can easily integrate HUnit and QuickCheck tests, and get executables that run specified suites only, based on command-line flags, with multiple output targets.
Testing and profiling are different beasts, though. For profiling, I'd set up a separate executable that stresses just the section you want to profile, and then look carefully at the results of profiling builds and runs (compiling with -prof -auto-all and running with +RTS -p).
For testing, I rely on HUnit and QuickCheck properties and use the Haskell Test Framework to collect all unit tests and all QuickCheck properties automatically.
Disclaimer: I'm the main developer of the Haskell Test Framework.
Related
I think the best practice would be to maintain the test suite in the same repo as the source code, to keep the tests in sync with the code changes. But what if the infrastructure or the coding policy doesn't allow adding irrelevant files to the source code? Is there a better way to keep code and tests in sync while keeping a separate repo for the test suite? Thanks in advance
I think it depends on your goal/team and project. I have worked with both models and found advantages and disadvantages to each.
Automation in the same repository:
Advantages:
You can share the code and element locators (for example, with Espresso), so it is easy to maintain
It is easy for the developers to help with the maintenance (in case they want/have to)
It is more visible for developers to do the code review and check the PRs
Shared knowledge about the code and tests between devs and QAs
Test automation will be in the same language as the development code
Disadvantages:
You share code, so if a function has a bug and your tests use that same function, your tests will have the bug as well
Devs can accidentally break automation code (but if QAs are doing reviews this should be rare)
QAs can accidentally break development code (but if devs are doing reviews this should be rare)
E2E tests spanning multiple projects won't fit naturally, since the tests live in one project's repo while exercising integrations with others
E2E tests spanning multiple projects will need mocks for the scenarios on the other products/projects; otherwise, as said above, it doesn't make sense to keep the test project in a single project's repo
You can't share code/functions between test projects, as it would be confusing for a test project to share functions with test projects in other repos (unless you create a dedicated repo for these shared functions). Also, your web project's test automation may be written in JavaScript, while the mobile project's tests will use the same language as its development team, such as Kotlin or Swift, which could differ from the web
Automation in separated repository:
Advantages:
You can share the code between test projects
You will reduce the maintenance cost as the project can be shared between different platform projects
Test automation can use the language best known by the team writing it, or the one with the most advantages for maintaining shared code across projects on different platforms
You can properly create E2E tests involving all the projects on different platforms without mocking them
Disadvantages:
Less visible to the developers, who might not follow the test repo
Developers have little motivation to follow and maintain the test automation
Anyway, I may be missing something, but I've tried to recall all the key points. In the end, the team should decide this together, since it again depends on whether the developers will also maintain the tests and whether you will run E2E tests with or without mocking the other projects
In my current test project, we are using TestNG as the testing framework. We keep the test suites in a separate folder structure, but they are still part of the project.
what if the infrastructure or the coding policy doesn't allow adding irrelevant files to source code
The suites are also organized in test_suite.xml files (each suite is represented by one XML file, since the suite is the unit of execution) for different scenarios, because by default suites cannot be defined in the test source code itself.
The main advantage of this is the flexible configuration of which tests to run. The suites can also be maintained by a tester with very little domain knowledge of the test project.
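A suite file along these lines (package and class names invented for illustration) is what makes that flexible selection possible:

```xml
<!-- test_suite.xml: groups test classes into a named, runnable suite -->
<suite name="SmokeSuite">
  <test name="LoginTests">
    <classes>
      <class name="com.example.tests.LoginTest"/>
      <class name="com.example.tests.LogoutTest"/>
    </classes>
  </test>
</suite>
```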
I only recently started working on my latest Haskell project, and would really like to test it. I was wondering where testing currently stands, with regards to the cutting edge frameworks, test running procedures and test code organization. It seems that previously tests were just a separate binary that returned a different exit code if tests passed or failed - is this still the currently adopted setup, or are there other ways to integrate with cabal now?
QuickCheck may not be cutting edge anymore (at least for Haskell practitioners).
But in combination with HUnit it's quite easy to get almost 100% coverage (I use HPC for coverage analysis).
I'm in charge of the automation of our builds, tests, etc. in my company. We are very much a multi-platform shop: we compile .NET code, Java for Android, and Xcode projects for iPhone applications. We run a build on every check-in. All of our automation is done with a combination of Jenkins, NAnt, and Ant. We have a project coming up to enforce our code standards so that variable naming, indentation, etc. are all consistent within each code base.
To this end, I'm looking to add a code standard enforcement into the check-in policy. I would like either a pre-commit hook in SVN or a tool that runs during the check-in build that fails the build on violation. The problem I am finding is that every tool, CheckStyle, StyleCop, etc are really designed for one language. I'd prefer not to have to maintain three separate tools. Is there good multi-language tool that I can use for this purpose?
There's at least one such tool: Coverity. It is extremely powerful, expensive and slow.
That said, I personally would pick tools for each language separately. You're running automated tests to discover errors. You may find that tools which focus on a single language uncover more errors, faster and cheaper.
Also, you can significantly reduce some costs by using, in the headless build, the same tool that developers can run rapidly or continuously in their IDE.
I’m starting a new firmware project in C++ for Texas Instruments C283xx and C6xxx targets.
The unit tests will not run on the target, but will be compiled with gcc/gcov on a PC with windows (and run as well on PC) with simple metrics for tested code coverage.
The whole project will be part of Cruise Control.NET for continuous integrations.
My question is: what are the consistent IDE / framework / tools to work together?
A/ One of the developers says CodeComposerStudio V3.1 for application and CodeBlocks + CxxUnit for the Unit tests.
B/ I’m more attracted with CodeComposerStudio V4 for application, Eclipse CDT (well, as CCS V4) and CppUnit for unit test + MockCpp for mocks.
I don’t want the best in class tools for each process, but a global, consistent and easy solution (or group of tools if you prefer).
In my opinion, the Google C++ Testing Framework and Google C++ Mocking Framework might be a better option. They work with Eclipse CDT, and the output can be generated in XML format for CI servers.
I understand the Unit tests not running on the target. But you might want test coverage collected in the target anyway.
See SD C++ Test Coverage for a tool that operates with minimal footprint in a practical way inside most targets. You have to customize a provided small data collection procedure for this to work; normally an afternoon's straightforward exercise.
I'm looking for a regression test framework I can add tests to. Tests could be any sort of binaries that poke an application.
This really depends on what you're trying to do, but one of the features of the new Test::Harness (disclaimer: I'm the original author and still a core developer) is that if your tests output TAP (the Test Anything Protocol), you can use Test::Harness to run test suites written in multiple languages. As a result, you don't have to worry about getting "locked in" to a particular language because that's all your testing software supports. In one of my talks on the subject, I even give an example of a test suite written in Perl, C, Ruby, and HTML (yes, HTML -- you'd have to see it).
Just thought I would tell you guys what I ended up using..
QMTest: http://mentorembedded.github.io/qmtest/
I found QMTest to fulfill my needs. Its extensible framework allows you to write very flexible test classes, which can then be instantiated into large test suites for regression testing.
QMTest is also very forward thinking: it allows for weak test dependencies and the creation of test resources. After a while of using QMTest, I started writing better quality tests. However, like any other piece of complex software, it requires some time to learn and understand the concepts; the API is documented and the User Manual gives a good introduction. With some time in hand, I think QMTest is well worth it.
You did not indicate what language you are working in, but the xUnit family is available for a lot of different languages.
/Allan
It also depends heavily on what kind of application you're working on. For a command-line app, for example, it's probably easy enough to just create a shell script that calls it with a whole bunch of different options and compares its output to a previously known stable version, warning you if any of the output differs so that you can check whether the change is intentional or not.
If you want something more fancy, of course, you'll probably want some sort of dedicated testing framework.
I assume you are regression-testing a web application?
There are some tools in this kb article from Microsoft
And if I remember correctly, certain editions of Visual Studio also offer their own flavor of regression testing tools.
But if you just want a unit testing framework, the xUnit family does it pretty well.
Here's JUnit and NUnit.