List location of executables within the CTest test suite - cmake

I have some source code that is compiled using CMake, with unit tests that are added to CTest via the CMake directive add_test(). I want the list of executables (absolute/relative path) that are used within this test suite.
Since tests are added as follows:
add_test(NAME ${A} COMMAND ${execA})
add_test(NAME ${B} COMMAND ${execB})
add_test(NAME ${C} COMMAND ${execA} ${addOptions})
there are only two distinct executables (${execA}, ${execB}) for three tests (${A}, ${B}, and ${C}).
I am perfectly fine with duplicates, and with the options being either included or ignored.
So, the ideal output would be the following (but I certainly can do some parsing manually if needed):
src/folder1/test/testThisFunction
src/folder2/test/testThatFunction
src/folder1/test/testThisFunction -WithThisFlag
The closest I was able to get was with:
ctest -N,--show-only
which does not run the tests, but simply shows them:
Start 1: testA
1/3 Test #1: testA ....................... Passed 0.01 sec
Start 2: testB
2/3 Test #2: testB ....................... Passed 0.01 sec
Start 3: testC
3/3 Test #3: testC ........................ Passed 0.01 sec
Unfortunately, this output does not contain the information about the path to the executable.
In the example above, it is assumed that
${execA} = testThisFunction
${execB} = testThatFunction
where testThisFunction and testThatFunction are CMake targets (unit tests) and
${A} = "testA"
${B} = "testB"
${C} = "testC"
${addOptions} = "-WithThisFlag"
store names of the tests and options, respectively.
While I have access to the CMakeLists.txt, I strongly prefer to do this solely at the ctest level, after CMake configuration and the subsequent compilation have completed (thus, not generating the list of executables with CMake commands in CMakeLists.txt).
If this is relevant, I am using CTest 3.10.2, but open to upgrade.

Firstly, you only need the -N or the --show-only option in the ctest command, not both. In your case, CTest silently ignored the unrecognized command-line option -N,--show-only, and the output indicates that your tests did run. To simply list them, use:
ctest --show-only
Getting to your question: if you upgrade to CMake 3.14 or higher, you gain a JSON-formatted variant of the ctest --show-only option:
ctest --show-only=json-v1
This will print information about each of your tests, including the arguments passed to each. The output may contain something like this:
"tests" :
[
{
"backtrace" : 1,
"command" :
[
"src/folder1/test/testThisFunction"
],
"config" : "Debug",
"name" : "testA",
"properties" :
[
{
"name" : "WORKING_DIRECTORY",
"value" : "src"
}
]
},
{
"backtrace" : 3,
"command" :
[
"src/folder2/test/testThatFunction"
],
"config" : "Debug",
"name" : "testB",
"properties" :
[
{
"name" : "WORKING_DIRECTORY",
"value" : "src"
}
]
},
{
"backtrace" : 5,
"command" :
[
"src/folder1/test/testThisFunction",
"-WithThisFlag"
],
"config" : "Debug",
"name" : "testC",
"properties" :
[
{
"name" : "WORKING_DIRECTORY",
"value" : "src"
}
]
}
],
In the output, the full path to the executable and the test's command-line arguments are listed under the "command" key of each test.
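If jq is available (an assumption on my part; any JSON parser would do), a minimal sketch for pulling just the command lines out of that output:
ctest --show-only=json-v1 | jq -r '.tests[] | select(.command) | .command | join(" ")'
For the example above, this prints exactly the list you asked for, duplicates and flags included:
src/folder1/test/testThisFunction
src/folder2/test/testThatFunction
src/folder1/test/testThisFunction -WithThisFlag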

Related

Overriding Nextflow Parameters with Commandline Arguments

Given the following nextflow.config:
google {
    project = "cool-project"
    region = "europe-west4"
    lifeSciences {
        bootDiskSize = "200 GB"
        debug = true
        preemptible = true
    }
}
Is it possible to override one or more of those settings using command-line arguments? For example, if I wanted to specify that no preemptible machines should be used, could I do the following?
nextflow run main.nf -c nextflow.config --google.lifeSciences.preemptible false
Overriding pipeline parameters can be done using Nextflow's command line interface by prefixing the parameter name with a double dash. For example, put the following in a file called 'test.nf':
#!/usr/bin/env nextflow

params.greeting = 'Hello'

names = Channel.of( "foo", "bar", "baz" )

process greet {
    input:
    val name from names

    output:
    stdout result

    """
    echo "${params.greeting} ${name}"
    """
}

result.view { it.trim() }
And run it using:
nextflow run -ansi-log false test.nf --greeting 'Bonjour'
Results:
N E X T F L O W ~ version 20.10.0
Launching `test.nf` [backstabbing_cajal] - revision: 431ef92cef
[46/22b4f0] Submitted process > greet (1)
[ca/32992c] Submitted process > greet (3)
[6e/5880b0] Submitted process > greet (2)
Bonjour bar
Bonjour foo
Bonjour baz
This works fine for pipeline params, but AFAIK there's no way to directly override executor config like you describe on the command line. You can, however, just parameterize these values and set them on the command line as described above. For example, in your nextflow.config:
params {
    gc_region = false
    gc_preemptible = true
    ...
}

profiles {
    'test' {
        includeConfig 'conf/test.config'
    }

    'google' {
        includeConfig 'conf/google.config'
    }
    ...
}
And in a file called 'conf/google.config':
google {
    project = "cool-project"
    region = params.gc_region
    lifeSciences {
        bootDiskSize = "200 GB"
        debug = true
        preemptible = params.gc_preemptible
    }
}
Then you should be able to override these in the usual way:
nextflow run main.nf -profile google --gc_region "europe-west4" --gc_preemptible false
Note that you can also specify multiple configuration profiles by separating the profile names with a comma:
nextflow run main.nf -profile google,test ...

Using @BASENAME@ with install_dir of custom_target() in Meson

@BASENAME@ does not appear to work in the install_dir: parameter of the Meson custom_target() function.
protoc = find_program('protoc')

protobuf_sources = [
    'apples.proto',
    'oranges.proto',
    'pears.proto'
]

protobuf_generated_go = []

foreach protobuf_definition : protobuf_sources
    protobuf_generated_go += custom_target('go_' + protobuf_definition,
        command: [protoc, '--proto_path=@CURRENT_SOURCE_DIR@', '--go_out=paths=source_relative:@OUTDIR@', '@INPUT@'],
        input: protobuf_definition,
        output: '@BASENAME@.pb.go',
        install: true,
        install_dir: 'share/gocode/src/github.com/foo/bar/protobuf/go/@BASENAME@/'
    )
endforeach
I need the generated files to end up in a directory based on the basename of the input file:
share/gocode/src/github.com/foo/bar/protobuf/go/apples/apples.pb.go
share/gocode/src/github.com/foo/bar/protobuf/go/oranges/oranges.pb.go
share/gocode/src/github.com/foo/bar/protobuf/go/pears/pears.pb.go
If I use @BASENAME@ in install_dir: to try and create the directory needed, it does not expand, and instead just creates a literal '@BASENAME@' directory:
share/gocode/src/github.com/foo/bar/protobuf/go/@BASENAME@/apples.pb.go
share/gocode/src/github.com/foo/bar/protobuf/go/@BASENAME@/oranges.pb.go
share/gocode/src/github.com/foo/bar/protobuf/go/@BASENAME@/pears.pb.go
How can the required installed directory location based on the basename be achieved?
(just 3 files in the above example, I actually have 30+ files)
It looks like there is no support for placeholders such as @BASENAME@ in the install_dir parameter, since that feature aims at file names, not directories. But the loop iterator is a plain string, so you can process it yourself:
foreach protobuf_definition : protobuf_sources
    ...
    install_dir: '.../go/@0@'.format(protobuf_definition.split('.')[0])
endforeach
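Putting that together with the custom_target() call from the question, an untested sketch of the full loop (same names as above) might look like:
foreach protobuf_definition : protobuf_sources
    # @BASENAME@ still expands in output:; only install_dir needs the manual basename
    protobuf_generated_go += custom_target('go_' + protobuf_definition,
        command: [protoc, '--proto_path=@CURRENT_SOURCE_DIR@', '--go_out=paths=source_relative:@OUTDIR@', '@INPUT@'],
        input: protobuf_definition,
        output: '@BASENAME@.pb.go',
        install: true,
        install_dir: 'share/gocode/src/github.com/foo/bar/protobuf/go/@0@'.format(protobuf_definition.split('.')[0])
    )
endforeach
Here split('.')[0] turns e.g. 'apples.proto' into 'apples', which is exactly the per-file directory the install layout needs.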

channel checks as empty even if it has content

I am trying to have a process that is launched only if a combination of conditions is met, but when checking whether a channel holds a path to a file, it always comes back as empty. Probably I am doing something wrong; in that case, please correct my code. I tried to follow some of the suggestions in this issue, but with no success.
Consider the following minimal example:
process one {
    output:
    file("test.txt") into _chProcessTwo

    script:
    """
    echo "Hello world" > "test.txt"
    """
}

// making a copy so I check first if something in the channel or not
// avoids raising exception of MultipleInputChannel
_chProcessTwo.into {
    _chProcessTwoView;
    _chProcessTwoCheck;
    _chProcessTwoUse
}

// print contents of channel
println "Channel contents: " + _chProcessTwoView.toList().view()

process two {
    input:
    file(myInput) from _chProcessTwoUse

    when:
    (!_chProcessTwoCheck.toList().isEmpty())

    script:
    def test = _chProcessTwoUse.toList().isEmpty() ? "I'm empty" : "I'm NOT empty"
    println "The outcome is: " + test
}
I want to have process two run if and only if there is a file in the _chProcessTwo channel.
If I run the above code I obtain:
marius@dev:~/pipeline$ ./bin/nextflow run test.nf
N E X T F L O W ~ version 19.09.0-edge
Launching `test.nf` [infallible_gutenberg] - revision: 9f57464dc1
[c8/bf38f5] process > one [100%] 1 of 1 ✔
[- ] process > two -
[/home/marius/pipeline/work/c8/bf38f595d759686a497bb4a49e9778/test.txt]
where the last line is actually the contents of _chProcessTwoView.
If I remove the when directive from the second process I get:
marius@mg-dev:~/pipeline$ ./bin/nextflow run test.nf
N E X T F L O W ~ version 19.09.0-edge
Launching `test.nf` [modest_descartes] - revision: 5b2bbfea6a
[57/1b7b97] process > one [100%] 1 of 1 ✔
[a9/e4b82d] process > two [100%] 1 of 1 ✔
[/home/marius/pipeline/work/57/1b7b979933ca9e936a3c0bb640c37e/test.txt]
with the contents of the second worker .command.log file being: The outcome is: I'm empty
I also tried without toList().
What am I doing wrong? Thank you in advance.
Update: a workaround would be to check _chProcessTwoUse.view() != "", but that is pretty dirty.
Update 2: as requested by @Steve, I've updated the code to reflect a bit more closely the actual conditions I have in my own pipeline:
def runProcessOne = true

process one {
    when:
    runProcessOne

    output:
    file("inputProcessTwo.txt") into _chProcessTwo optional true
    file("inputProcessThree.txt") into _chProcessThree optional true

    script:
    // this would replace the probability that output is not created
    def outputSomething = false
    """
    if ${outputSomething}; then
        echo "Hello world" > "inputProcessTwo.txt"
        echo "Goodbye world" > "inputProcessThree.txt"
    else
        echo "Sorry. Process one did not write to file."
    fi
    """
}

// making a copy so I check first if something in the channel or not
// avoids raising exception of MultipleInputChannel
_chProcessTwo.into {
    _chProcessTwoView;
    _chProcessTwoCheck;
    _chProcessTwoUse
}

// print contents of channel
println "Channel contents: " + _chProcessTwoView.view()
println _chProcessTwoView.view() ? "Me empty" : "NOT empty"

process two {
    input:
    file(myInput) from _chProcessTwoUse

    when:
    (runProcessOne)

    script:
    """
    echo "The outcome is: ${myInput}"
    """
}

process three {
    input:
    file(defaultInput) from _chUpstreamProcesses
    file(inputFromProcessTwo) from _chProcessThree

    script:
    def extra_parameters = _chProcessThree.isEmpty() ? "" : "--extra-input " + inputFromProcessTwo
    """
    echo "Hooray! We got: ${extra_parameters}"
    """
}
As @Steve mentioned, I should not even have to check whether a channel is empty; Nextflow should know better than to initiate the process. But I think in this construct I will have to.
Marius
I think part of the problem here is that process 'one' creates only optional outputs. This makes dealing with the optional inputs in process 'three' a bit tricky. I would try to reconcile this if possible. If this can't be reconciled, then you'll need to deal with the optional inputs in process 'three'. To do this, you'll basically need to create a dummy file, pass it into the channel using the ifEmpty operator, then use the name of the dummy file to check whether or not to prepend the argument's prefix. It's a bit of a hack, but it works pretty well.
The first step is to actually create the dummy file. I like shareable pipelines, so I would just create this in your baseDir, perhaps under a folder called 'assets':
mkdir assets
touch assets/NO_FILE
Then pass in your dummy file if your '_chProcessThree' channel is empty:
params.dummy_file = "${baseDir}/assets/NO_FILE"

dummy_file = file(params.dummy_file)

process three {
    input:
    file(defaultInput) from _chUpstreamProcesses
    file(optfile) from _chProcessThree.ifEmpty(dummy_file)

    script:
    def extra_parameters = optfile.name != 'NO_FILE' ? "--extra-input ${optfile}" : ''
    """
    echo "Hooray! We got: ${extra_parameters}"
    """
}
Also, these lines are problematic:
//print contents of channel
println "Channel contents: " + _chProcessTwoView.view()
println _chProcessTwoView.view() ? "Me empty" : "NOT empty"
Calling view() will emit all values from the channel to stdout. You can ignore whatever value it returns. Unless you enable DSL2, the channel will then be empty. I think what you're looking for here is a closure:
_chProcessTwoView.view { "Found: $it" }
Be sure to append -ansi-log false to your nextflow run command so the output doesn't get clobbered. HTH.
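For example, with the run command from the question:
nextflow run test.nf -ansi-log false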

platform dependent linker flags in bazel (for glut)

I am trying to build a C++ app with GLUT using Bazel. It should work on both macOS and Linux. Now the problem is that on macOS it requires passing "-framework OpenGL", "-framework GLUT" to the linker flags, while on Linux I should probably do something like
cc_library(
    name = "glut",
    srcs = glob(["local/lib/libglut*.dylib", "lib/libglut*.so"]),
    ...
in glut.BUILD.
So the questions are:
1. How to provide platform-dependent linker options to cc_library rules in general?
2. And in particular, how to link to GLUT in a platform-independent way using Bazel?
You can do this using the Bazel select() function. Something like this might work:
config_setting(
    name = "linux_x86_64",
    values = {"cpu": "k8"},
    visibility = ["//visibility:public"],
)

config_setting(
    name = "darwin_x86_64",
    values = {"cpu": "darwin_x86_64"},
    visibility = ["//visibility:public"],
)

cc_library(
    name = "glut",
    srcs = select({
        ":darwin_x86_64": [],
        ":linux_x86_64": glob(["local/lib/libglut*.dylib", "lib/libglut*.so"]),
    }),
    linkopts = select({
        ":darwin_x86_64": [
            "-framework OpenGL",
            "-framework GLUT",
        ],
        ":linux_x86_64": [],
    }),
    ...
)
Dig around in the Bazel GitHub repository; it has some good real-world examples of using select().
I had a similar problem, but with picking the right compiler depending on the platform, and @zlalanne's solution didn't work for me. After two days of frustration, I finally found the following solution:
config_setting(
    name = "darwin",
    constraint_values = ["@bazel_tools//platforms:osx"],
)

config_setting(
    name = "windows",
    constraint_values = ["@bazel_tools//platforms:windows"],
)
I didn't have any need for Linux, but adding this to your BUILD file should work:
config_setting(
    name = "linux",
    constraint_values = ["@bazel_tools//platforms:linux"],
)
Use ":darwin", ":windows" and ":linux" in your selects and you should have a solution that works.

How to set default values for Tcl variables?

I have some Tcl scripts that are executed by defining variables in the command-line invocation:
$ tclsh84 -cmd <script>.tcl -DEF<var1>=<value1> -DEF<var2>=<value2>
Is there a way to check if var1 and var2 are NOT defined at the command line and then assign them with a set of default values?
I tried the keywords global, variable, and set, but they all give me this error when I say "if {$<var1>==""}": "can't read <var1>: no such variable"
I'm not familiar with the -def option on tclsh.
However, to check whether a variable is set, instead of using catch you can also use info exists:
if { ![info exists blah] } {
    set blah default_value
}
Alternatively you can use something like the cmdline package from tcllib. This allows you to set up defaults for binary flags and name/value arguments, and give them descriptions so that a formatted help message can be displayed. For example, if you have a program that requires an input filename, and optionally an output filename and a binary option to compress the output, you might use something like:
package require cmdline

set sUsage "Here you put a description of what your program does"
set sOptions {
    {inputfile.arg ""         "Input file name - this is required"}
    {outputfile.arg "out.txt" "Output file name, if not given, out.txt will be used"}
    {compressoutput "0"       "Binary flag to indicate whether the output file will be compressed"}
}

array set options [::cmdline::getoptions argv $sOptions $sUsage]
if {$options(inputfile) == ""} {
    puts "[::cmdline::usage $sOptions $sUsage]"
    exit
}
The .arg suffix indicates this is a name/value pair argument; if that is not listed, it will be assumed to be a binary flag.
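A hypothetical invocation of a script using those options (the script name is made up for illustration) would look like:
$ tclsh myscript.tcl -inputfile data.txt -outputfile result.txt -compressoutput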
You can wrap the variable access in catch to prevent the error from aborting the script.
if { [catch { set foo $<var1> }] } {
    set <var1> defaultValue
}
(Warning: I didn't check the exact syntax with a Tcl interpreter; the above script is just to give the idea.)