How to write ExUnit test cases in Elixir for an escript project

I have an escript project built in Elixir using mix.
The project has two or three .ex files that need to be executed with certain arguments via the escript command.
It is a client/server project: one escript run starts the server (which keeps running), and another escript run (in a separate terminal) connects to the server and performs operations.
How do I write a test script using ExUnit (run with mix test) that starts the server and then calls the client functions?

The way I would recommend is to make the actual escript a very thin wrapper around an ordinary Elixir module. That way you can test the module itself directly, and the amount of untested code stays very small.
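As a minimal sketch of that layout (the MyApp.CLI, MyApp.Server, and MyApp.Client module names are hypothetical, purely for illustration), the escript's main/1 only dispatches to the real modules, and the tests start the server and exercise the client without going through the escript at all:

# lib/my_app/cli.ex -- thin escript entry point
defmodule MyApp.CLI do
  def main(["server" | _args]) do
    {:ok, _pid} = MyApp.Server.start_link([])
    Process.sleep(:infinity)  # keep the escript alive while the server runs
  end

  def main(["client" | args]) do
    MyApp.Client.run(args)
  end
end

# test/my_app_test.exs -- tests bypass the escript entirely
defmodule MyAppTest do
  use ExUnit.Case

  setup do
    # start the server in the test VM instead of via a second escript
    {:ok, _pid} = MyApp.Server.start_link([])
    :ok
  end

  test "client can talk to the running server" do
    assert {:ok, _result} = MyApp.Client.run(["some", "arg"])
  end
end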

Related

Why does multiprocessing Julia break my module imports?

My team is trying to run a library (Cbc with JuMP) with multiple processes using the julia -p # argument. Our code is in a Julia package, so we can run it fine with julia --project; it just runs with a single process. However, specifying both at once, julia --project -p 8, breaks our ability to run the project: running using PackageName afterwards results in an error. We also intend to compile this with the PackageCompiler library, so getting it to work with a project is necessary.
We have our project in a folder with a src directory, a Project.toml, and a Manifest.toml
src contains: main.jl and Solver.jl
Project.toml contains:
name = "Solver"
uuid = "5a323fe4-ce2a-47f6-9022-780aeeac18fe"
authors = ["..."]
version = "0.1.0"
Normally, our project works fine starting this way (single threaded):
julia --project
julia> using Solver
julia> include("src/main.jl")
If we add the -p 8 argument when starting Julia, we get an error upon typing using Solver:
ERROR: On worker 2:
ArgumentError: Package Solver [5a323fe4-ce2a-47f6-9022-780aeeac18fe] is required but does not seem to be installed:
- Run `Pkg.instantiate()` to install all recorded dependencies.
We have tried running using Pkg; Pkg.instantiate(); using Solver, but this doesn't help; another error just happens later (at the include("src/main.jl") step):
ERROR: LoadError: On worker 2:
ArgumentError: Package Solver not found in current path:
- Run `import Pkg; Pkg.add("Solver")` to install the Solver package.
and then following that suggestion produces another error:
ERROR: The following package names could not be resolved:
* Solver (not found in project, manifest or registry)
Please specify by known `name=uuid`.
Why does this module import work fine in single process mode, but not with -p 8?
Thanks in advance for your consideration
First, it is important to note that you are NOT using multi-threaded parallelism; you are using distributed parallelism. When you launch with -p 2 you start two separate processes that do not share memory. Additionally, the project is only being loaded in the master process, which is why the other processes cannot see what is in the project. You can learn more about the different kinds of parallelism that Julia offers in the official documentation.
To load the environment on all the workers, you can add this to the beginning of your file:
using Distributed
addprocs(2; exeflags="--project")
@everywhere using Solver
@everywhere include("src/main.jl")
and remove the -p 2 part of the command line you launch julia with. This will load the project on all the processes. The @everywhere macro tells all the processes to perform the given task. This part of the docs explains it.
Be aware, however, that parallelism doesn't work automatically, so if your software is not written with distributed parallelism in mind, it may not get any benefit from the newly launched workers.
There is an issue in Julia when an uncompiled module exists and several parallel processes try to compile it at the same time on first use.
Hence, if you are running your own module across many processes on a single machine, you always need to initialize in the following way (this assumes the Julia process is started in the same folder where your project is located):
using Distributed, Pkg
@everywhere using Distributed, Pkg
Pkg.activate(".")
@everywhere Pkg.activate(".")
using YourModuleName
@everywhere using YourModuleName
I think this approach is undocumented, but I found it experimentally to be the most robust.
If you do not use this pattern, sometimes (not always!) a compilation race occurs and strange things tend to happen.
Note that if you are running a distributed cluster, you need to modify the code above to run the initialization on a single worker from each node first, and then on all workers.
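A rough sketch of that per-node initialization (an untested assumption about the cluster layout; YourModuleName remains a placeholder): group the workers by hostname, instantiate the project on one worker per host so compilation is not raced, and only then load the module everywhere:

using Distributed, Pkg
@everywhere using Distributed, Pkg

# Pick one representative worker per node, grouped by hostname.
first_on_host = Dict{String,Int}()
for w in workers()
    h = remotecall_fetch(gethostname, w)
    get!(first_on_host, h, w)   # keep the first worker seen on each host
end

# Instantiate the project once per node so compilation is serialized per machine.
for w in values(first_on_host)
    remotecall_fetch(() -> (Pkg.activate("."); Pkg.instantiate()), w)
end

# Now it is safe to activate and load on every worker.
@everywhere Pkg.activate(".")
@everywhere using YourModuleName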

How do I configure my unit tests to run automatically with Elm-Live?

I currently run elm-live as follows:
elm-live Home.elm --open --output=home.js
In addition to automatic recompilation whenever I modify my web app, I would also like unit tests to execute automatically after each compile, to make sure I have not introduced breaking changes.
Any suggestions?
You can use the concurrently npm package to run both processes in the same terminal instance.
The downside is that stdout will probably not preserve the colors, so reading errors will be a little tricky.
concurrently 'elm-live Home.elm --open --output=home.js' 'elm-test --watch'
Example
I've made an example of this setup, check it out on GitHub.
Update: I have updated the example to be Windows-compatible. Apparently, it needs escaped double quotes in the package.json instead of single quotes.
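For reference, a minimal package.json wiring along those lines (the version numbers are placeholders; the escaped double quotes are what make it Windows-compatible):

{
  "scripts": {
    "dev": "concurrently \"elm-live Home.elm --open --output=home.js\" \"elm-test --watch\""
  },
  "devDependencies": {
    "concurrently": "^4.1.0",
    "elm-test": "^0.19.0"
  }
}

Running npm run dev then starts both watchers in one terminal.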

Symfony2 - reset environment based on PHPUnit configuration

I am trying to set up two specific PHPUnit test environments for Symfony2.
I am using Ant to run PHPUnit, so when I run the two commands below I would expect the following results:
ant test
Runs the test suite using a MySQL database (matching the staging and production environments).
ant ramdisk
Runs the test suite using a SQLite ramdisk. (super fast!)
I can figure out how to set up MySQL and SQLite for PHPUnit individually quite easily.
How do I get PHPUnit to specify a particular environment to use?
Our base class for testing sets up the environment, but I cannot figure out how to get an argument passed into it from PHPUnit so that I can conditionally set the environment.
I have tried a different bootstrap for ramdisk, but since the base test class sets the environment I could not make much progress.
Any ideas?
I eventually solved this.
PHPUnit can pass variables through to Symfony: http://phpunit.de/manual/3.7/en/appendixes.configuration.html#appendixes.configuration.php-ini-constants-variables
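As a sketch of how that can look (the file name, the KERNEL_ENV variable, and the environment name are hypothetical): each Ant target points PHPUnit at its own configuration file, and that configuration injects a variable the base test class can read to choose the Symfony environment:

<!-- phpunit-ramdisk.xml; `ant ramdisk` would run phpunit -c phpunit-ramdisk.xml -->
<phpunit bootstrap="app/bootstrap.php.cache">
    <php>
        <!-- becomes $_SERVER['KERNEL_ENV'] inside the tests -->
        <server name="KERNEL_ENV" value="test_ramdisk" />
    </php>
</phpunit>

The base test class can then read $_SERVER['KERNEL_ENV'] (falling back to 'test') when it creates the kernel, and the test_ramdisk environment's config would point Doctrine at the SQLite ramdisk.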

Running a set of actions before every test-file in mocha

I've recently started working with mocha to test my expressjs server.
My tests are split across multiple files, and most of them contain duplicated segments (mostly before statements that load all the fixtures into the DB, etc.), which is really annoying.
I guess I could extract them all into a single file and require it in each and every test, but I wonder whether there is a more elegant solution, such as running one file that has all the setup commands and another that contains all the tear-down commands.
If anyone knows the answer that would be awesome :)
There are three basic levels of factoring out common functionality for mocha tests. If you want to load some fixtures once for a bunch of tests (and you've written each test to be independent), use the before function to load the fixtures at the top of the file. You can also use the beforeEach function if you need the fixtures re-initialized before each test.
The second option (which is related) is to pull common functionality out into a separate file or set of files and require that file.
Finally, note that mocha has a root-level hook:
You may also pick any file and add "root"-level hooks. For example, add beforeEach() outside of all describe() blocks. This will cause the callback to beforeEach() to run before any test case, regardless of the file it lives in (this is because Mocha has an implied describe() block, called the "root suite").
We use that to start an Express server once (and we use an environment variable so that it runs on a different port than our development server):
before(function () {
  process.env.NODE_ENV = 'test';
  require('../../app.js');
});
(We don't need a done() here because require is synchronous.) This way, the server is started exactly once, no matter how many different test files include this root-level before function.
The advantage of splitting things up this way is that we can run npm test to run all tests, run mocha on any specific file or folder, or run any specific test or set of tests (using it.only and describe.only), and all of the prerequisites for the selected tests will run.
Why not mocha -r <module>, or even mocha.opts?
Put your common setup in, say, init.js and then run
mocha -r ./init
which will cause mocha to load ./init.js before loading any of the test files.
You could also put it in mocha.opts inside your tests directory, and have its contents be
--require ./init
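For illustration, a minimal init.js along those lines (the ../app.js path and the fixtures helper are hypothetical). One caveat, depending on your mocha version: the global hooks such as before() are only defined while mocha is loading test files, so if -r complains that before is undefined, put the file in the test directory instead so it is picked up as part of the root suite:

// init.js -- shared root-level hooks for every test file
before(function () {
  process.env.NODE_ENV = 'test';   // run the server on the test port
  require('../app.js');            // start the Express server once
});

beforeEach(function () {
  // reload the DB fixtures before every test (hypothetical helper)
  return require('./helpers/fixtures').load();
});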

Why would a native program run fine when executed directly, but fail with a seg fault when submitted through Condor?

I have a third-party library that I'm attempting to incorporate into a simulation. We have the static library (.a), along with all of its runtime dependencies (shared objects). I've created a very simple application (in C) that is linked against the library. All it does is call an initialization function that is part of the third-party library's API, and exit. When I run this directly from the command line, it works fine. If I submit the executable to our Condor grid, it fails with a seg fault in strncpy (libc.so.6). I've forced Condor to run the executable only on a particular machine, and if I run it directly on that machine, it works fine.
I'm mostly a Java programmer with a limited amount of native coding experience. I'm familiar with tools such as nm, ldd, and catchsegv, to the point where I can run them, but I don't really know where to start looking for the issue.
I've run ldd directly on the executing machine, and via a script submitted through Condor along with my executable. ldd reports the same files in both cases.
I don't understand how running it directly works while running it through Condor fails. The process that ultimately executes the program, condor_startd, starts as root and changes its effective uid to the submitter. Perhaps this has something to do with it?
I don't know why this caused the issue, but the culprit was the LANG environment variable. It was not set when running under Condor, but was set to US_EN.UTF-8 when running locally. Adding this value to the Condor execution environment fixed the problem.
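For anyone hitting the same thing, a sketch of the fix in an HTCondor submit description file (the file and executable names are placeholders; note that the locale is usually spelled en_US.UTF-8):

# job.sub -- HTCondor submit description
universe    = vanilla
executable  = my_sim

# Pass an explicit LANG into the job's environment...
environment = "LANG=en_US.UTF-8"
# ...or copy the submitter's entire environment instead:
# getenv = True

queue

Submit with condor_submit job.sub.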