Haskell IO Testing

I've been trying to figure out whether there is an accepted method for testing file IO operations in Haskell, but I have yet to find any information that is useful for what I am trying to do.
I'm writing a small library that performs various file system operations (recursively traverse a directory and return a list of all files; sync multiple directories so that each directory contains the same files, using inodes as the equality test and hardlinks...), and I want to make sure that they actually work. The only way I can think of to test them is to create a temporary directory with a known structure and compare the results of the functions executed on this temporary directory against the known results. The thing is, I would like to get as much test coverage as possible while keeping it mainly automated: I don't want to have to create the directory structure by hand.
I have searched Google and Hackage, but the packages I have seen on Hackage do not include any tests -- maybe I just picked the wrong ones -- and nothing I find on Google deals with IO testing.
Any help would be appreciated.
Thanks, James

EDIT:
"the packages that I have seen on hackage do not use any testing"
I have found a unit testing framework for Haskell on Hackage. Using this framework, maybe you could write assertions to verify that the files you require are present in the directories you expect and that they correspond to their intended purpose. Maybe you can find a way to make this work for you.
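For illustration, here is a minimal sketch of what such an assertion could look like using HUnit (one unit testing framework on Hackage) and System.Directory; the path "sync-target/example.txt" is a made-up placeholder for a file your code is expected to produce:

import Test.HUnit
import System.Directory (doesFileExist)

-- Hypothetical check: assert that a file the operation should have created exists.
fileExistsTest :: FilePath -> Test
fileExistsTest path = TestCase $ do
  exists <- doesFileExist path
  assertBool ("expected file to exist: " ++ path) exists

main :: IO ()
main = do
  _ <- runTestTT (TestList [fileExistsTest "sync-target/example.txt"])
  return ()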

HUnit is the usual library for IO-based tests. I don't know of a set of properties/combinators for file actions -- that would be useful.

There is no reason why your test code cannot create a temporary directory, and check its contents after running your impure code.
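One way to keep this automated is to build the known structure inside a throwaway directory in the test itself. A rough sketch of that pattern (assuming the "temporary" package for withSystemTempDirectory; listFilesRecursive is a stand-in for whichever of your own functions is under test):

import Test.HUnit
import System.IO.Temp (withSystemTempDirectory)  -- from the "temporary" package
import System.Directory (createDirectory)
import System.FilePath ((</>))
import Data.List (sort)

-- Placeholder for the library function being tested.
listFilesRecursive :: FilePath -> IO [FilePath]
listFilesRecursive = undefined

traversalTest :: Test
traversalTest = TestCase $
  withSystemTempDirectory "io-test" $ \dir -> do
    -- Build a known structure inside the temporary directory.
    createDirectory (dir </> "sub")
    writeFile (dir </> "a.txt") "a"
    writeFile (dir </> "sub" </> "b.txt") "b"
    -- Run the impure code under test and compare against the expected result.
    found <- listFilesRecursive dir
    sort found @?= sort [dir </> "a.txt", dir </> "sub" </> "b.txt"]

main :: IO ()
main = runTestTT traversalTest >> return ()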

If you want mainly automated testing of monadic code, you might want to look into Monadic QuickCheck. You can write down properties that you think should be true, such as
If you create a file with read permission, it will be possible to open the file for reading.
If you remove a file, opening it will fail.
Whatever else you think of...
QuickCheck will then generate random tests.
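A minimal sketch of one such property using Test.QuickCheck.Monadic (again assuming the "temporary" package for a scratch directory; the write-then-read property is only an illustration):

import Test.QuickCheck (Property, quickCheck)
import Test.QuickCheck.Monadic (monadicIO, run, assert)
import System.IO.Temp (withSystemTempDirectory)
import System.FilePath ((</>))

-- Property: whatever we write to a file can be read back unchanged.
-- (Assumes a UTF-8 locale for non-ASCII test strings.)
prop_writeThenRead :: String -> Property
prop_writeThenRead contents = monadicIO $ do
  ok <- run $ withSystemTempDirectory "qc-io" $ \dir -> do
    let path = dir </> "file.txt"
    writeFile path contents
    actual <- readFile path
    return $! actual == contents  -- force the comparison before the directory is removed
  assert ok

main :: IO ()
main = quickCheck prop_writeThenRead

Each generated String becomes the file contents, so every run exercises the IO code with fresh inputs.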

Related

How to define order for execution of feature files in the Karate framework

I have a few feature files in specific subfolders, and I want to execute those feature files according to my own defined order.
So how can we run the feature files in a specific order?
Thank you in advance!
You can create one feature and then make calls to the other features. But this means you will lose the biggest benefit of Karate, which is that you can run tests in parallel.
Please assume that any needs beyond this are not supported (or will never be supported) by Karate. Also read this: https://stackoverflow.com/a/46080568/143475

Can we run multiple feature files in the same package using Karate-gatling

I read in the documentation that we can run multiple feature files by adding separate lines for different classpaths in the simulation class. Is there a way to run multiple feature files belonging to the same package, just like we do with FeatureRunner files?
No, I personally think that will introduce maintainability issues. We will consider PRs, though, if anyone wants to contribute this.
If you really want this behavior, you should be able to write a small bit of Java code that scans a folder, loops over the feature files, and builds the Gatling "scenarios".

OCaml - testing functions not included in the signature

I'm writing tests for an OCaml module. Some of the functions in the module are not meant to be publicly visible, and so they're not included in the signature (.mli file).
I can't call these functions from my tests, because they're not visible outside of the module. So I'm having a hard time testing them. Is there a good way to get around this? For example, a way to tell ocamlc not to read the signature from the .mli file when it's compiling tests?
Some ideas:
Actually export the test functions, but use ocamldoc's stop comment (**/**) feature to avoid displaying the exports in the documentation.
Put all of your tests entirely in another module. However, this is difficult if you have abstract types because your tests may very well need access to the internal implementation.
Create a submodule Test, where all your tests go. That way it is clear what functions are just for testing. Possibly combine this with the (**/**) feature to also hide the sub-module from documentation.
I've heard that people sometimes separate their .mli files from their .ml files (in a different directory) so that they can compile with or without them (by telling ocamlc where, or where not, to look). I just tried a few experiments with this; I think it can be made to work, but it seems a little error-prone to me. Alternatively, maybe you could put the tests of the internal functions into the module itself. Exporting the test functions might not violate modularity too badly (though of course it clutters up the module).

How to do File I/O in Opa?

After reading (nearly) the whole ebook and taking a look at the API, I am still asking myself how to realize "traditional" web server behaviour with Opa.
I understand (at least I believe so) that Opa links external resources specified at compile time into the executable, making them immutable and permanent. But what if, say, I wanted to change the stylesheet of an application without recompiling it?
There seem to be a few methods in the stdlib (apidoc), but they do not cover what I am used to from other programming languages. A possible solution I could think of is making use of the internal database, but that looks like a bit of an overkill for something as simple as traditional file I/O.
Edit: this blog post explains more about dealing with external resources in Opa.
Long story short: you'll rarely work with external files in Opa.
Let me try to break this down. Opa will indeed embed resources, but in development mode you typically just want to be able to tweak them (mainly CSS) and see changes immediately. If you compiled your program in a non-release mode then it supports this kind of action (try --help; below is an excerpt):
Debugging Resources : dynamic edition:
[...]
--debug-editable-css
Export the CSS files embedded in the server to the file
system, so that they can be viewed and edited during
execution of the application
For many other editable and changing resources one would indeed use the database.
And if you really need to work with files (again: with Opa you'll need that much less than with traditional web languages), then take a look at stdlib.io and, for advanced use, at the BslFile module with bindings to OCaml functions for file manipulation.
I think this module is for you:
http://opalang.org/resources/doc/index.html#file.opa.html/!/value_stdlib.io
import stdlib.io
my_css = File.content("css/file.css")
I don't see a way to write files, but if you need to write, I think you should use the db.
For reading, though, I think this is the solution :)

CDash Custom Dynamic Analysis

I'm trying to integrate custom dynamic analysis tools, such as KWStyle, CppCheck and Visual Leak Detector, into CDash.
I've figured out that I need to generate a DynamicAnalysis.xml file and submit it to CDash from CTest scripts.
I think I know how to run the external tool as a part of the ctest script.
Either by using these variables to change how ctest_memcheck() works
CTEST_MEMORYCHECK_COMMAND
CTEST_MEMORYCHECK_SUPPRESSIONS_FILE
CTEST_MEMORYCHECK_COMMAND_OPTIONS
or by running the tool from the execute_process() command.
But I'm a bit uncertain which one to use.
The main problem I think I have is: how can I extract errors from the output of the custom tool and include that information in the DynamicAnalysis.xml that I submit?
The extreme solution I see is that I'd need to make a program that generates a valid DynamicAnalysis.xml file.
But the problem is that I don't know the syntax of the DefectList element in the XML file. I have found no answer on Google, and even the XML Schema for that file is unhelpful.
EDIT:
Looking at this:
http://www.cdash.org/CDash/viewDynamicAnalysis.php?buildid=987149
What draws my attention are the labels, especially the empty ones. I don't see how these would come from the DynamicAnalysis.xml file. Maybe it tracks any labels that have ever appeared? Can I create my own custom labels somehow?
Does CDash create the labels automatically, depending on the tool type? Does this block custom defect types?
I'm just guessing here, so the question is: can I create custom labels for my custom tool just by generating a DynamicAnalysis.xml file?
It also occurred to me that the number of different errors from CppCheck (static code analysis) is huge compared to valgrind, for instance. So I'm not so certain that I should use dynamic analysis. Maybe a custom build type (Continuous / Experimental / Nightly) would work better, like this:
http://www.cdash.org/CDash/buildSummary.php?buildid=930174
I have no idea how to do this; I guess it requires meddling around with the CDash code?
Which one would work better?
If you are using valgrind, you can simply set CTEST_MEMORYCHECK_COMMAND to the full path to valgrind, and ctest will generate the DynamicAnalysis.xml file for you from the valgrind output when you call ctest_memcheck.
The best way to understand the possible values that can appear in the DynamicAnalysis.xml file is to analyze the source code of CTest.
The file CMake/Source/CTest/cmCTestMemCheckHandler.cxx contains the list of defect types in a variable named "cmCTestMemCheckResultLongStrings". Search through that file for references to that variable to see what the possible values are and how they are used to generate "<Defect/>" XML elements.
EDIT (for additional information):
You can also easily see what XML elements CDash is expecting by inspecting its source code. Specifically, the file "CDash/xml_handlers/dynamic_analysis_handler.php".
From what I've learned so far, for a tool that runs on the tests defined in the CMake script, Dynamic Analysis is the thing.
For tools that run on the entire program, a custom Build.xml is what you need.
I found out that I can submit those files with the ctest_submit command by using the FILES parameter.
I also found out that you can add custom "build names" alongside Continuous, Nightly, and the others,
and that you can set the builds from certain machines to be automatically placed under these.
The custom labels under DynamicAnalysis did come from somewhere in CDash; I can't remember where anymore.