I'm using cabal to build and test my projects with the commands:
cabal configure --enable-tests
cabal build
cabal test
As a framework I use test-framework (https://batterseapower.github.io/test-framework/).
Everything works; however, the number of QuickCheck tests defaults to 50, which in my use case is very few because I have to filter the generated data to fit certain properties.
Is there any possibility to pass something like
--maximum-generated-tests=5000
to the test-executable via cabal? I tried things like
cabal test --test-options='maximum-generated-tests=5000'
but no luck so far. Is there any possibility to achieve this?
Many thanks in advance!
jules
You missed the dashes:
cabal test --test-options=--maximum-generated-tests=5000
Also, if too few generated tests satisfy your property, you may have better luck with SmallCheck. It's not random and thus will find all inputs satisfying the condition in the given search space. (Disclosure: I'm the maintainer of SmallCheck.)
Related
I have a pretty nice Gulp-based Karma-Jasmine unit test workflow going. My question is about how to avoid source conflicts in one's tests. Without having to do anything, Karma-Jasmine has auto-magically exposed my src files to Jasmine and detected my tests. I can't see how this would be useful in a real codebase where things don't fit the happy path.
Example: Create two files you would like to test that both implement a function with the same name, Test(). One returns true, the other false. Which one is your test actually testing? Do I have any control over this? I want to be able to test both (forget telling me about better JS design, that is obvious).
I mean, it would be very useful if I could see how many tests passed or failed in a single line, without reading through the build logs.
I use Karma as a test runner. It has a lot of reporters, but which one should I use?
Example from TeamCity:
This seems like a useful feature but the current user interface doesn't seem to support it.
You can file it as a feature request on Travis CI's GitHub page using the link below:
https://github.com/travis-ci/travis-ci/issues
Although Travis CI doesn't have its own interface for counting the number of tests passed, they do work with CodeClimate, which has its own interface and metrics for test coverage. It shows overall test coverage for the whole project and coverage for each file. There's some more info on that here, though it looks like their free version allows local testing only.
There are other tools out there for tracking and analyzing coverage as well, including Coveralls, which is also pretty good. Like Travis CI, they have a free version for open source, which can be a plus. They also show coverage as a percentage and file by file.
I have implemented several packages for a web API, each with their own test cases. When each package is tested using go test ./api/pkgname the tests pass. If I want to run all tests at once with go test ./api/... test cases always fail.
In each test case, I recreate the entire schema using DROP SCHEMA public CASCADE followed by CREATE SCHEMA public and apply all migrations. The test suite reports errors back at random, saying a relation/table does not exist, so I guess each test suite (per package) is run in parallel somehow, thus messing up the DB state.
I tried to pass along some test flags like go test -cpu 1 -parallel 0 ./src/api/... with no success.
Could the problem here be tests running in parallel, and if yes, how can I force serial execution?
Update:
Currently I use this workaround to run the tests, but I still wonder if there's a better solution
find <dir> -type d -exec go test {} \;
As others have pointed out, -parallel doesn't do the job (it only works within packages). However, you can use the flag -p=1 to run through the package tests in series. This is documented here:
http://golang.org/src/cmd/go/testflag.go
but (as far as I can tell) not in go help or the rest of the command-line documentation. I'm not sure it is meant to stick around (although I'd argue that if it is removed, -parallel should be fixed).
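For example, something along these lines (using the path from the question) should run the package tests one at a time:
go test -p 1 ./api/...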
The go tool is provided to make running unit tests easier, using the convention that *_test.go files contain unit tests. Because it assumes they are unit tests, it also assumes they are hermetic. It sounds like your tests either aren't unit tests, or they are but violate the assumptions a unit test should fulfill.
In the case that these tests are meant to be unit tests, you probably need a mock database. A mock of your database, preferably in memory, will ensure that each unit test is hermetic and can't be interfered with by other unit tests.
In the case that these tests are meant to be integration tests, you are probably better off not using the go tool for them. What you probably want is to create a separate test binary whose execution you can control, and to write your integration test scripts there.
The good news is that creating a mock in Go is insanely easy. Change your code to take an interface with the database methods you care about, then write an in-memory implementation of that interface for testing purposes and pass it into the application code you want to test.
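As a rough sketch of that idea (the UserStore interface, its methods, and the test are invented here purely for illustration; in practice the in-memory type and the test would live in a _test.go file):

package api

import (
	"fmt"
	"testing"
)

// The application code depends on this interface instead of a concrete database.
type UserStore interface {
	AddUser(id int, name string) error
	GetUser(id int) (string, error)
}

// memStore is a tiny in-memory implementation used only by the tests.
type memStore struct {
	users map[int]string
}

func newMemStore() *memStore {
	return &memStore{users: make(map[int]string)}
}

func (m *memStore) AddUser(id int, name string) error {
	m.users[id] = name
	return nil
}

func (m *memStore) GetUser(id int) (string, error) {
	name, ok := m.users[id]
	if !ok {
		return "", fmt.Errorf("user %d not found", id)
	}
	return name, nil
}

// The test passes the mock into the code under test, so it never touches the
// real database and stays hermetic no matter how many packages run at once.
func TestGetUser(t *testing.T) {
	var store UserStore = newMemStore()
	if err := store.AddUser(1, "alice"); err != nil {
		t.Fatal(err)
	}
	name, err := store.GetUser(1)
	if err != nil || name != "alice" {
		t.Fatalf("got %q, %v; want %q, nil", name, err, "alice")
	}
}

Your real, database-backed implementation satisfies the same interface in production, so only the integration tests ever need a live schema.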
Just to clarify, @Jeremy's answer is still the accepted one:
Since my integration tests only ran against one package (api), I removed the separate test binary in the end and created a naming pattern to separate the test types:
Unit tests use the normal TestX name
Integration tests use Test_X
I created shell scripts (utest.sh/itest.sh) to run either of those.
For unit tests: go test -run="^(Test|Benchmark)[^_](.*)"
For integration tests: go test -run="^(Test|Benchmark)_(.*)"
Run both using the normal go test
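For illustration, a pair of tests following that naming convention might look like this (the function names are invented):

package api

import "testing"

// Name matches ^(Test|Benchmark)[^_](.*), so utest.sh picks it up.
func TestParseRequest(t *testing.T) {
	if got := 1 + 1; got != 2 {
		t.Fatalf("got %d, want 2", got)
	}
}

// Name matches ^(Test|Benchmark)_(.*), so itest.sh picks it up.
func Test_DatabaseSchema(t *testing.T) {
	t.Skip("placeholder: would exercise the real database")
}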
I am developing a command line utility that has a LOT of flags. A typical command looks like this:
mycommand --foo=A --bar=B --jar=C --gnar=D --binks=E
In most cases, a 'success' message is printed but I still want to verify against other sources like an external database to ensure actual success.
I'm starting to create integration tests and I am unsure of the best way to do this. My main concerns are:
There are many, many flag combinations; how do I know which combinations to test? If you do the math for the 10+ flags that can be used together...
Is it necessary to test permutations of flags?
How to build a framework capable of automating the tests and then verifying results.
How to keep track of a large number of flags and providing an order so it is easy to tell what combinations have been implemented and what has not.
The thought of manually writing out individual cases and verifying results in a unit-test like format is daunting.
Does anyone know of a pattern that can be used to automate this type of test? Perhaps even software that attempts to solve this problem? How did people working on GNU command-line tools test their software?
I think this is very specific to your application.
First, how do you determine the success of the execution of your application? Is it a result code? Is it something printed to the console?
For question 2, it depends on how you parse those flags in your application. Most of the time, the order of flags isn't important, but there are cases where it is. I hope you don't need to test permutations of flags, because that would add a lot of cases to test.
In the general case, you should analyse the impact of each flag. It is possible that a flag doesn't interfere with the others, in which case it only needs to be tested once. This is also the case for flags that are meant to be used alone (--help or --version, for example). You also need to analyse what values you should test for each flag. Usually, you want to try each kind of possible valid value and each kind of possible invalid value.
I think a simple script could be written to perform the tests, in bash or any other scripting language, like Python. Using nested loops, you could try the possible values for each flag, including invalid values and the case where the flag isn't set. This will produce a multidimensional matrix of results, which should be analysed to check whether the results conform to what is expected.
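A rough sketch of that nested-loop idea, written in Go here rather than bash (the flag values, the use of the mycommand binary from the question, and the way results are recorded are all placeholders you would adapt):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Candidate values per flag; the empty string means "flag not set".
	fooValues := []string{"A", "invalid", ""}
	barValues := []string{"B", "invalid", ""}

	for _, foo := range fooValues {
		for _, bar := range barValues {
			var args []string
			if foo != "" {
				args = append(args, "--foo="+foo)
			}
			if bar != "" {
				args = append(args, "--bar="+bar)
			}
			out, err := exec.Command("mycommand", args...).CombinedOutput()
			// Record every result; analyse the matrix afterwards instead of
			// asserting inside the loop.
			fmt.Printf("args=%v err=%v output=%q\n", args, err, out)
		}
	}
}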
When I write apps (in scripting languages), I have a function that parses a command line string. I source the file that I'm developing and unit test that function directly rather than involving the shell.
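The same idea carries over outside scripting languages; here is a hypothetical Go sketch (parseArgs and its config struct are invented for illustration, and the test would normally live in a _test.go file) that exercises the argument parser directly, without involving a shell:

package cli

import (
	"flag"
	"testing"
)

type config struct {
	Foo string
	Bar string
}

// parseArgs parses an argument slice directly, so tests never spawn a shell.
func parseArgs(args []string) (config, error) {
	var c config
	fs := flag.NewFlagSet("mycommand", flag.ContinueOnError)
	fs.StringVar(&c.Foo, "foo", "", "value for foo")
	fs.StringVar(&c.Bar, "bar", "", "value for bar")
	err := fs.Parse(args)
	return c, err
}

func TestParseArgs(t *testing.T) {
	c, err := parseArgs([]string{"--foo=A", "--bar=B"})
	if err != nil {
		t.Fatal(err)
	}
	if c.Foo != "A" || c.Bar != "B" {
		t.Fatalf("unexpected config: %+v", c)
	}
}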
Does anyone know of an existing solution to help write tests for a NSIS script?
The motivation is the benefit of knowing whether modifying an existing installation script breaks it or has undesired side effects.
Unfortunately, I think the answer to your question depends at least partially on what you need to verify.
If all you are worried about is that the installation copies the right file(s) to the right places, sets the correct registry information, etc., then almost any unit testing tool would probably meet your needs. I'd probably use something like RSpec 2 or Cucumber, but that's because I am somewhat familiar with Ruby and like the fact that it would be an xcopy deployment if the scripts needed to run on another machine. I also like the idea of using a BDD-based solution, because a domain-specific language that is very close to readable text means that others could more easily understand, and if necessary modify, the test specification.
If, however, you are concerned about the user experience (what progress messages are shown, etc.), then I'm not sure the tests you would need could be expressed as easily... or at least not without a certain level of pain.
Good Luck! Don't forget to let other people here know when/if you find a solution you like.
Check out Pavonis.
With Pavonis you can compile your NSIS script and get the output of any errors and warnings.
Another solution would be AutoIT.
You can compile your installer using Jenkins and the NSIS command-line compiler, set up an AutoIT test script, and have Jenkins run the test.