Use QuickCheck tests with Cabal?

What's the current state of test suites in Cabal, and, more importantly, how can I keep on top of them?
I've done some poking around, and the latest information I can find implies that I can't trust the documentation, and I haven't been able to find anyone talking about it for the better part of a year. I've heard rumor of a cabal-test-quickcheck library, but I can't seem to find one on Hackage and can find no examples of how to set it up.
What's the standard way to hook QuickCheck tests into a Cabal test suite these days?

Cabal-1.14.0.0 has come out since then, and the detailed test-suite interface seems to be available.
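For reference, the more common of the two interfaces is exitcode-stdio-1.0, where the suite is an ordinary executable that signals failure through a non-zero exit code. A minimal sketch of such a stanza (the suite name, Tests.hs, and the tests directory are illustrative, not from the question):

    test-suite tests
      type:             exitcode-stdio-1.0
      main-is:          Tests.hs
      hs-source-dirs:   tests
      build-depends:    base, QuickCheck
      default-language: Haskell2010

You can then run it with cabal configure --enable-tests, cabal build, and cabal test.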
The cabal-test-quickcheck package doesn't seem to have been released yet, though; maybe you can get similar functionality from test-framework-quickcheck2?
Alternatively, you can ignore the built-in test support in Cabal and just use a flag to determine whether or not to build a test executable, as in the sketch below.
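A minimal sketch of that flag approach (the flag and executable names are illustrative):

    flag tests
      description: Build the test executable
      default:     False

    executable run-tests
      main-is:        Tests.hs
      hs-source-dirs: tests
      if flag(tests)
        build-depends: base, QuickCheck
      else
        buildable: False

You opt in with cabal configure -ftests; users who never set the flag never need the test dependencies.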

Related

Should Cypress automation tests be written for the local environment or staging?

I am a beginner in Cypress automation testing and I have one point of confusion. When we add our automation scripts to a GitHub workflow that triggers when we push a commit, which environment should we write the tests for? The local environment at localhost, or the staging site of the project?
Could anyone please clear this up for me: how should these automation tests be written, and how can we add Cypress automation tests to GitHub CI/CD?
Thanks.
Ok, let me give this a shot. Of course, I do not know the exact setup of the project that you are working on, but let me give you some pointers, so you can decide for yourself what works best in your setting.
My answer is based on the assumption that you are building an automated regression test set in Cypress with the primary goal of preventing production incidents. In addition, it aims to save you tons of 'manual testing' for each release to production, because you want to make sure everything is still working properly.
First of all, you want your automated tests to run on a stable environment(*). If the environment is not stable, many tests will fail for many reasons, and those are usually not the right ones. You'll spend more time figuring out why your tests are failing than actually catching issues with them. This makes a local dev environment not really suited for the task, so I would not pick a localhost environment for this. Especially not when you have multiple developers working in your team, each with their own localhost.
A test environment is already a way more stable environment. You want your tests to only fail when you have an actual issue on your hands. As a rule of thumb, the 'higher' you go, the more stable.
Second, you want to catch the issues early in the game, so I would definitely make sure that the tests can run on the environment where all code comes together for the first time (in other words, the environment that has the master branch or whatever your team calls that branch). This is usually the test environment. In my projects, I initially build the set for this environment, and ideally, I run it daily. Your tests won't always pass here (bonus if they do), and that is OK... as long as you understand why they don't ;-)
Some things to keep in mind are integrations or connecting systems, and whether you need those for your tests to pass. In general, you don't want to be (too) dependent on (third-party) integrations for your test cases to go green. Sometimes, when those integrations are vital to the process that you need to test, it is inevitable. However, integrations are often not (fully) set up on test/lower environments. There are workarounds for this, like stubs, but let's not get into that now - that's a whole different topic.
Third, you want your tests to run on a production-like environment on the code exactly in the state that it goes to production. This is usually the acceptance, staging or pre-production environment, i.e. the last one before production. These environments often have all integrations in place and are often very similar to production. If you find an issue here, it's almost guaranteed that it is also an issue in production. This is IMO where you want to integrate your tests into your CI/CD pipeline. Ideally, your full automated set is in the pipeline, but in practice, you should only add the tests that are stable and robust, otherwise your production deployments will be blocked very often.
So, long story short, my advice: write your tests for your test environment, where you do your 'manual testing' (I hate that term BTW, all testing is manual... as if there is such a thing as 'manual coding') and run it early and often. Then put the stable ones in the pipeline of the production deployment. If you only have local, staging and production, it should be staging.
If your developers want to run the set on their local environments, they can still do that - you can share the tests with them or even better, they can take it from the repository and run it locally - but I don't think you should make it part of the deployment process always and everywhere. It will slow down your process massively.
You can work with environment variables to easily switch between the environments you want to run your tests against: https://docs.cypress.io/guides/guides/environment-variables#Setting
I hope this helps. I'm looking forward to reading what others have to say about this, too.
Happy Testing!
Jackie
PS. I see that you also asked about how to add Cypress to your CI/CD pipeline. I think that should be a completely separate topic. It is also way too high level to answer. Maybe it's best to start here: https://docs.cypress.io/guides/continuous-integration/introduction#What-you-ll-learn
(*) I'm talking about a stable environment here, but this also includes stable code and even a stable application. If your application and code are at a very early stage, really ask yourself whether you already want to start automating your functional UI tests in Cypress - chances are that many things will change (many times) and you'll spend hours updating your tests. Maybe it is better to only think about the scenarios that you want to automate at that stage of the project.

How do multiple automation testers work in the same Selenium project?

We are three testers about to set up an automation project with Selenium and Java. What are the steps for environment setup, script integration, running the test cases, and getting the results for the whole project suite?
So there are a few things we have to use in order to allow multiple engineers to work on the same framework.
Step 1) Creating the framework. If you already know how to do this and have working tests, you can skip this stage; if not, please follow the tutorial I link below.
http://toolsqa.com/selenium-webdriver/
Step 2) Creating a repo. My preference is GitHub; you can use any Git repo, but I will post the guide to set one up with GitHub, and it's a similar process for all. This will allow you to merge code properly without causing conflicts.
https://help.github.com/articles/create-a-repo/
Step 3) A source control program to push, pull, and fetch from your GitHub repo. You can do this from the command prompt, but I find cloning the repo into a program like 'SourceTree' really easy, so I've posted that guide below.
https://confluence.atlassian.com/get-started-with-sourcetree
If you follow these 3 guides, you will be able to have your automation test scripts on GitHub by the end of the day.
If you have any more questions please do not hesitate to ask.
All the best, Jack
The easiest and most logical way to do this would be to create one branch in your VCS (Git, SVN, etc.) and have each person set up the dev environment in the same way. Work exactly like developers: pull code before you check in/commit (this will ensure that one small error does not break your framework) and make a point of resolving conflicts during merges (to ensure you don't step on each other's toes).
Also, before you kick off, agree on a coding standard (including package naming, design pattern usage, and filename and method name conventions), and if this is in sync with the dev coding standards in your company, even better.
There will be a few hiccups along the way, but experience is the best way to create a process for your development and check-in practices.
Good luck with your new project and happy coding!
You have asked two questions; in my opinion, the answers are as follows.
How do multiple automation testers work in the same Selenium project? - You can use any version control system; Git (hosted on GitHub) is the best option and gives you a lot of facilities. All three of you can work on the same project at the same time. Alternatively, you could go with a centralized version control system like Tortoise SVN, though that is not much used nowadays. I would suggest GitHub.
What are the steps for environment setup, script integration, running the test cases, and getting the results for the whole project suite? - It depends on various factors, such as the application and the kind of framework you want to use. There are many frameworks widely used for automation testing (modular, data-driven, and keyword-driven frameworks, BDD with Cucumber, TestNG, and so on), or, if you have the bandwidth and time, you can design a custom framework to fit your needs.
I hope this sheds some light on your queries.
Thanks

What is the most modern way to handle Haskell testing?

I only recently started working on my latest Haskell project, and would really like to test it. I was wondering where testing currently stands with regard to cutting-edge frameworks, test running procedures, and test code organization. It seems that previously tests were just a separate binary that returned a different exit code depending on whether the tests passed or failed - is this still the adopted setup, or are there other ways to integrate with Cabal now?
QuickCheck may not be cutting edge anymore (at least for Haskell practitioners).
But in combination with HUnit, it's quite easy to get almost 100% coverage (I use HPC for coverage analysis).
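A minimal sketch of that combination (the property and the unit test are illustrative): one QuickCheck property and one HUnit case driven from a single main. Compile the program with -fhpc and run it, and hpc report will show you the coverage figures.

    import Test.HUnit (Test (..), runTestTT, (@?=))
    import Test.QuickCheck (quickCheck)

    -- QuickCheck property: reversing a list twice is the identity.
    prop_doubleReverse :: [Int] -> Bool
    prop_doubleReverse xs = reverse (reverse xs) == xs

    -- HUnit case: one fixed example, checked exactly once.
    unitTests :: Test
    unitTests = TestList [TestCase (sum [1, 2, 3] @?= (6 :: Int))]

    main :: IO ()
    main = do
      quickCheck prop_doubleReverse
      _ <- runTestTT unitTests
      return ()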

Haskell testing workflow

I just started a new Haskell project and wanted to set up a good testing workflow from the beginning. It seems like Haskell has a lot of excellent and unique testing tools and many different ways to integrate them.
I have looked into:
HUnit
QuickCheck
benchpress
HPC
complexity
All of these seem to work very well in their domains, but I'm looking for a comprehensive approach to testing and was wondering what has worked well for other people.
Getting unit testing, code coverage, and benchmarks right is mostly about picking the right tools.
test-framework provides a one-stop shop to run all your HUnit test-cases and QuickCheck properties all from one harness.
Code coverage is built into GHC in the form of the HPC tool.
Criterion provides some pretty great benchmarking machinery.
I'll use as a running example a package that I just started enabling with unit testing, code coverage, and benchmarks:
http://github.com/ekmett/speculation
You can integrate your tests and benchmarks directly into your cabal file by adding sections for them, masking those sections behind flags so that not every user of your library has to have (and want to use) the exact version of the testing tools you've chosen.
http://github.com/ekmett/speculation/blob/master/speculation.cabal
Then, you can tell cabal how to run your test suite. As cabal test doesn't yet exist -- we have a student working on it for this year's Summer of Code! -- the best mechanism we have is cabal's user hook mechanism. This means switching to a 'Custom' build with cabal and setting up a testHook. An example of a testHook that runs a test program written with test-framework, and then applies hpc to profile it, can be found here:
http://github.com/ekmett/speculation/blob/master/Setup.lhs
And then you can use test-framework to bundle up QuickCheck and HUnit tests into one program:
http://github.com/ekmett/speculation/blob/master/Test.hs
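A stripped-down sketch of such a program (the groups, property, and test case are illustrative, not taken from Test.hs):

    import Data.List (sort)
    import Test.Framework (defaultMain, testGroup)
    import Test.Framework.Providers.HUnit (testCase)
    import Test.Framework.Providers.QuickCheck2 (testProperty)
    import Test.HUnit ((@?=))

    main :: IO ()
    main = defaultMain
      [ testGroup "properties"
          [ testProperty "sort is idempotent"
              (\xs -> sort (sort xs) == sort (xs :: [Int])) ]
      , testGroup "unit tests"
          [ testCase "2 + 2" (2 + 2 @?= (4 :: Int)) ]
      ]

The resulting binary accepts command-line options (see --help) for selecting tests and choosing output formats.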
The cabal file there is careful to turn on -fhpc to enable code coverage testing, and then the testHook in Setup.lhs manually runs hpc and writes its output into your dist dir.
For benchmarking, the story is a little more manual: there is no 'cabal benchmark' option. You could wire your benchmarks into your test hook, but I like to run them by hand, since Criterion has so many graphical reporting options. You can add your benchmarks to the cabal file as shown above, give them separate compilation flags, hide them behind a cabal flag, and then use Criterion to do all the heavy lifting:
http://github.com/ekmett/speculation/blob/master/Benchmark.hs
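A bare-bones Criterion harness looks something like this (sumTo is a hypothetical function under measurement):

    import Criterion.Main (bench, bgroup, defaultMain, nf)

    -- Hypothetical function whose cost we want to measure.
    sumTo :: Int -> Int
    sumTo n = sum [1 .. n]

    main :: IO ()
    main = defaultMain
      [ bgroup "sumTo"
          [ bench "1000"  (nf sumTo 1000)
          , bench "10000" (nf sumTo 10000)
          ]
      ]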
You can then run your benchmarks from the command line and get pop-up KDE windows with benchmark results, etc.
Since in practice you're living in cabal anyways while developing Haskell code, it makes a lot of sense to integrate your toolchain with it.
Edit: Cabal test support now does exist. See http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/developing-packages.html#test-suites
The approach advocated in RWH ch. 11 and in XMonad is approximately:
State all properties of the system in QuickCheck
Show test coverage with HPC.
Confirm space behavior with heap profiling.
Confirm thread/parallel behavior with ThreadScope.
Confirm microbenchmark behavior with Criterion.
Once your major invariants are established via QuickCheck, you can start refactoring, moving those tests into type invariants.
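A toy illustration of that last step (Balance, deposit, and the non-negativity invariant are hypothetical, not from the post): an invariant first checked as a QuickCheck property can later be enforced by a smart constructor, so the type itself guarantees it.

    import Test.QuickCheck (quickCheck)

    -- The invariant as a QuickCheck property: balances never go negative.
    prop_nonNegative :: Int -> Bool
    prop_nonNegative n = getBalance (deposit n emptyBalance) >= 0

    -- The invariant moved into the type: the constructor stays abstract
    -- and deposit clamps its argument, so no code path can ever build a
    -- negative Balance.
    newtype Balance = Balance { getBalance :: Int } deriving Show

    emptyBalance :: Balance
    emptyBalance = Balance 0

    deposit :: Int -> Balance -> Balance
    deposit n (Balance b) = Balance (b + max 0 n)

    main :: IO ()
    main = quickCheck prop_nonNegative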
Practices to support your efforts:
Run a simplified QuickCheck regression on every commit.
Publish HPC coverage details.
The test-framework package is really awesome. You can easily integrate HUnit and QuickCheck tests, and get executables that run specified suites only, based on command-line flags, with multiple output targets.
Testing and profiling are different beasts though. For profiling, I'd set up a separate executable that stresses just the section you want to profile, and look carefully at the results of profiling builds and runs (compiling with -prof -auto-all and running with +RTS -p).
For testing, I rely on HUnit and QuickCheck properties and use the Haskell Test Framework to collect all unit tests and all QuickCheck properties automatically.
Disclaimer: I'm the main developer of the Haskell Test Framework.

What is a good regression testing framework for software applications?

I am looking for a regression-test framework that I can add tests to. The tests could be any sort of binaries that poke an application.
This really depends on what you're trying to do, but one of the features of the new Test::Harness (disclaimer: I'm the original author and still a core developer) is that if your tests output TAP (the Test Anything Protocol), you can use Test::Harness to run test suites written in multiple languages. As a result, you don't have to worry about getting "locked in" to a particular language because that's all your testing software supports. In one of my talks on the subject, I even give an example of a test suite written in Perl, C, Ruby, and HTML (yes, HTML -- you'd have to see it).
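Since TAP may be unfamiliar: it is just line-oriented text on standard output, which is what lets one harness drive suites written in any language. A hypothetical two-test run would emit something like:

    1..2
    ok 1 - connects to the server
    not ok 2 - handles malformed input

The 1..2 line declares the plan; each following line reports the result of one test.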
Just thought I would tell you guys what I ended up using.
QMTest: http://mentorembedded.github.io/qmtest/
I found QMTest to fulfill my needs. It's an extensible framework that allows you to write very flexible test classes; these test classes can then be instantiated into large test suites for regression testing.
QMTest is also very forward-thinking: it allows for weak test dependencies and the creation of test resources. After a while of using QMTest, I started writing better-quality tests. However, like any other piece of complex software, it requires some time to learn and understand the concepts; the API is documented and the User Manual gives a good introduction. With some time in hand, I think QMTest is well worth it.
You did not indicate what language you are working in, but the xUnit family is available for a lot of different languages.
/Allan
It also depends heavily on what kind of application you're working on. For a command-line app, for example, it's probably easy enough to just create a shell script that calls it with a whole bunch of different options and compares the results to a previously known stable version, warning you if any of the output differs so that you can check whether the change is intentional or not.
If you want something more fancy, of course, you'll probably want some sort of dedicated testing framework.
I assume you are regression-testing a web application?
There are some tools in this KB article from Microsoft.
And if I remember correctly, certain editions of Visual Studio also offer their own flavor of regression testing tools.
But if you just want a unit testing framework, the xUnit family does it pretty well.
Here are JUnit and NUnit.