CMake: Enforcing Execution Order between Sibling Target Dependencies

I would like to enforce the execution order of sibling dependencies. Let's assume we have the following top-level targets:
add_custom_target(test_all)
add_custom_target(test_coverage)
add_custom_target(test_coverage_zero)
add_custom_target(test_coverage_collect)
I'm using test_all to execute the unit tests (and possibly build them and their dependencies). test_coverage should execute test_coverage_zero, then test_all, and finally test_coverage_collect. test_coverage_zero cleans up leftover coverage data from a previous run of test_all, while test_coverage_collect uses the current coverage data and produces some kind of human-readable output. The reason for this setup is to allow test_all to be executed without any coverage data processing. On the other hand, test_coverage needs to execute test_all to produce coverage data.
[Detail: I'm using gcov/lcov for the coverage data and added custom commands to test_coverage_zero and test_coverage_collect for the actual processing.]
I've set up the following dependencies to achieve this behavior:
add_dependencies(test_coverage test_coverage_zero test_all test_coverage_collect)
That does not work. The actual execution order is test_all, test_coverage_zero, and test_coverage_collect, which removes the coverage data before the collection step.
My question: How do I enforce the order of target dependencies (on the sibling level) in CMake?

Looks like the answer is:
add_dependencies(test_all test_coverage_zero)
add_dependencies(test_coverage_collect test_all)
add_dependencies(test_coverage test_coverage_zero test_all test_coverage_collect)
However, removing either test_coverage_zero or test_all from the last add_dependencies call also does not work, even though there is a clear dependency chain test_coverage <- test_coverage_collect <- test_all <- test_coverage_zero.
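For reference, here is a minimal self-contained sketch of how the whole setup could look. The ctest and lcov command lines are placeholders standing in for the actual test run and the custom commands mentioned above; they are not taken from the question:
# run the unit tests (placeholder command)
add_custom_target(test_all COMMAND ctest)
# remove coverage counters left over from a previous run (placeholder lcov call)
add_custom_target(test_coverage_zero COMMAND lcov --zerocounters --directory ${CMAKE_BINARY_DIR})
# turn the current counters into a human-readable report (placeholder lcov call)
add_custom_target(test_coverage_collect COMMAND lcov --capture --directory ${CMAKE_BINARY_DIR} --output-file coverage.info)
# umbrella target
add_custom_target(test_coverage)
# the ordering comes from chaining the siblings, not from the argument order
# of a single add_dependencies() call
add_dependencies(test_all test_coverage_zero)
add_dependencies(test_coverage_collect test_all)
add_dependencies(test_coverage test_coverage_zero test_all test_coverage_collect)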

Related

Run a fallback script when liquibase script fails in gradle

I'm using Liquibase with Gradle to apply database changes.
I have three activities in runList:
runList='stop_job, execute_changes, start_job'
It works fine when there is no exception, but if something fails in the second step (execute_changes), it stops there and does not execute the "start_job" activity.
Is it possible to introduce something like a fallback activity or "finally" block?
You could use failOnError:false. It defines whether the migration will fail if an error occurs while executing the changeset. The default value is true.
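For example, a changeset in the changelog run by the execute_changes activity could be marked so that its failure does not stop the run (a sketch; the id, author, and SQL are made up):
<changeSet id="example-change" author="example" failOnError="false">
    <sql>UPDATE some_table SET some_column = 1</sql>
</changeSet>
With failOnError set to false, an error in that changeset no longer aborts the run, so the start_job activity can still execute.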

karate.abort() in v0.9.4 results in Failed scenario in cucumber html reports

karate.abort() results in skipped steps. There was a previous fix for this. However, Cucumber reporting treats skipped tests as failed.
Is there any workaround where I can use karate.abort() and not end up with a failed scenario, since I am using it deliberately to skip some DB checks?
Or is there any alternative to karate.abort()?
Yes, we need some community help to resolve how third-party reports treat skipped steps. Please read this, and maybe you can be the one to find a solution: https://github.com/intuit/karate/issues/755#issuecomment-488710450
A workaround is to split into a second feature and then:
* if (condition) karate.call('second.feature')
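A sketch of that split (the feature name and the condition are made up; karate.properties is just one way to derive the condition):
* def runDbChecks = karate.properties['db.checks'] == 'true'
* if (runDbChecks) karate.call('db-checks.feature')
The DB checks previously guarded by karate.abort() move into db-checks.feature, so the main scenario finishes normally and is reported as passed.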

How do you use the benchmark flags for the go (golang) gocheck testing framework?

How does one use the flag options for benchmarks with the gocheck testing framework? The only example in the link I provided is running go test -check.b, but there are no additional comments on how it works, so it's hard to use. I could not even find -check in the Go documentation when I ran go help test or go help testflag. In particular, I want to know how to use the benchmark testing framework better and control how long it runs or for how many iterations. For example, in the example they provide:
func (s *MySuite) BenchmarkLogic(c *C) {
    for i := 0; i < c.N; i++ {
        // Logic to benchmark
    }
}
There is the variable c.N. How does one specify that variable? Is it set in the program itself, or through go test and its flags on the command line?
On a side note, the documentation from go help testflag does talk about the -bench regexp, -benchmem and -benchtime t options, but it does not mention the -check.b option. I did try to run those options as described there, but they didn't do anything I could notice. Does gocheck work with the original go test options?
The main problem I see is that there is no clear documentation for how to use the gocheck tool or its commands. I accidentally gave it a wrong flag and it threw an error message listing the flags I needed (with only limited descriptions):
-check.b=false: Run benchmarks
-check.btime=1s: approximate run time for each benchmark
-check.f="": Regular expression selecting which tests and/or suites to run
-check.list=false: List the names of all tests that will be run
-check.v=false: Verbose mode
-check.vv=false: Super verbose mode (disables output caching)
-check.work=false: Display and do not remove the test working directory
-gocheck.b=false: Run benchmarks
-gocheck.btime=1s: approximate run time for each benchmark
-gocheck.f="": Regular expression selecting which tests and/or suites to run
-gocheck.list=false: List the names of all tests that will be run
-gocheck.v=false: Verbose mode
-gocheck.vv=false: Super verbose mode (disables output caching)
-gocheck.work=false: Display and do not remove the test working directory
-test.bench="": regular expression to select benchmarks to run
-test.benchmem=false: print memory allocations for benchmarks
-test.benchtime=1s: approximate run time for each benchmark
-test.blockprofile="": write a goroutine blocking profile to the named file after execution
-test.blockprofilerate=1: if >= 0, calls runtime.SetBlockProfileRate()
-test.coverprofile="": write a coverage profile to the named file after execution
-test.cpu="": comma-separated list of number of CPUs to use for each test
-test.cpuprofile="": write a cpu profile to the named file during execution
-test.memprofile="": write a memory profile to the named file after execution
-test.memprofilerate=0: if >=0, sets runtime.MemProfileRate
-test.outputdir="": directory in which to write profiles
-test.parallel=1: maximum test parallelism
-test.run="": regular expression to select tests and examples to run
-test.short=false: run smaller test suite to save time
-test.timeout=0: if positive, sets an aggregate time limit for all tests
-test.v=false: verbose: print additional output
Is writing wrong commands the only way to get some help with this tool? Doesn't it have a help flag or something?
I'm 5 years late, but to specify N (the number of iterations to run), use the option -benchtime Nx.
Example:
go test -bench=. -benchtime 100x
BenchmarkTest 100 ... ns/op
Please read more about all go testing flags here.
See the Description_of_testing_flags section:
-bench regexp
Run benchmarks matching the regular expression.
By default, no benchmarks run. To run all benchmarks,
use '-bench .' or '-bench=.'.
-check.b works the same way as -test.bench.
E.g. to run all benchmarks:
go test -check.b=.
to run a specific benchmark:
go test -check.b=BenchmarkLogic
More information about testing in Go can be found here.
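For completeness, here is a minimal sketch of how a gocheck benchmark suite is wired into go test so that the -check.* flags apply; it assumes the usual gopkg.in/check.v1 import path:
package mypkg

import (
    "testing"

    . "gopkg.in/check.v1"
)

// Hook gocheck into the standard go test runner.
func Test(t *testing.T) { TestingT(t) }

type MySuite struct{}

var _ = Suite(&MySuite{})

// c.N is not set by you: gocheck keeps increasing it until the benchmark
// has run for roughly -check.btime (1s by default).
func (s *MySuite) BenchmarkLogic(c *C) {
    for i := 0; i < c.N; i++ {
        // Logic to benchmark
    }
}
With that in place, for example, go test -check.b -check.btime 2s runs each benchmark for roughly two seconds' worth of iterations.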

How to tell whether elaboration has completed at a breakpoint?

When I hit a breakpoint in a VLAB script, how can I find out whether elaboration has already finished at that point?
My script reaches a statement that raises an error:
Error: (E529) insert module failed: elaboration done
(The command that causes this is vlab.instantiate("stim", "stim"))
So elaboration had, unexpectedly for me, already completed. I need to somehow go back in the process and find out where that happened, so I need some way of asking "is elaboration complete?" at the points where I set breakpoints earlier in the script.
SystemC provides the following function to query the current phase of elaboration or simulation.
sc_status sc_get_status();
It returns one of SC_ELABORATION, SC_BEFORE_END_OF_ELABORATION, SC_END_OF_ELABORATION, SC_START_OF_SIMULATION, SC_RUNNING, SC_PAUSED, SC_STOPPED, or SC_END_OF_SIMULATION.
See section 4.5.8 in the SystemC Language Reference Manual for more details. Note that this function was only added in the most recent version of the standard, IEEE Standard 1666-2011.
In VLAB, the SystemC API is available from the sysc Python package, so the following script can be used to test whether the current phase is elaboration:
import sysc
print "Is elaboration phase:", sysc.sc_get_status() == sysc.SC_ELABORATION

Interpreting MSBuild output numbers

What do these numbers (10:4, 37, 10:5) mean in the following MSBuild output?
10:4>Done Building Project "C:\Foo.csproj" (default targets).
37>Project "C:\Bar.csproj" (37) is building "C:\Foo.csproj" (10:5) on node 3.
10:5>Building with tools version "4.0".
When building a solution, multiple projects are built, and you'll see each with a unique number (the '37' above). Calling out to the MSBuild task from one project to another gives a similar result; it acts as a kind of 'recursion depth' indicator. The 10:4> prefix is typically related to which 'node' is being used in a multi-processor build, though I'm unfamiliar with the ':' syntax for this indicator and have only seen it with a single number. Are you building with /m, from a solution, or are there other differences in play in your situation (e.g. TFS)?