Nextflow in cmd process is expanded

I just created a new environment for Nextflow, but when I run the script, the output for each process is expanded like this:
[warm up] executor > local
[9c/f50613] Submitted process > bowtie2_target_genome (2)
[1f/1b1f43] Submitted process > bowtie2_vector (1)
[d0/4b99a5] Submitted process > bowtie2_vector (3)
[ef/183556] Submitted process > bowtie2_target_genome (1)
[4a/330246] Submitted process > bowtie2_target_genome (4)
[0b/1e00ff] Submitted process > bowtie2_vector (4)
[bd/ad292b] Submitted process > bowtie2_vector (2)
[98/1caa92] Submitted process > bowtie2_vector (5)
[55/2f25bb] Submitted process > bowtie2_target_genome (5)
[05/5890e2] Submitted process > bowtie2_target_genome (3)
It used to look like this:
executor > local (1)
[14/0418b6] process > bowtie2_target_genome (3) [100%] 5 of 5
[a3/7b824b] process > bowtie2_vector (1) [100%] 5 of 5
[fc/1b8883] process > aln_sum (5) [100%] 5 of 5
[57/0762c0] process > create_bedgraph_and_bigwig (2) [100%] 5 of 5

Looks like you've added -ansi-log false to your run command. The default value for this option is 'true', so either set that explicitly or remove the option altogether.
Setting -ansi-log false is helpful when running Nextflow in a non-interactive shell. Your command line might look like:
nextflow run -ansi-log false main.nf
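The same setting can also come from the environment: Nextflow reads the NXF_ANSI_LOG variable at startup, so if you didn't add the option yourself, it's worth checking your shell profile for something like:

NXF_ANSI_LOG=false nextflow run main.nf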

`errorStrategy` setting to stop current process but continue pipeline

I have a lot of samples that go through a process which sometimes fail (deterministically). In such a case, I would want the failing process to stop, but all other samples to still get submitted and processed independently.
If I understand correctly, setting errorStrategy 'ignore' will continue the script within the failing process, which is not what I want. And errorStrategy 'finish' would stop submitting new samples, even though there is no reason for the other samples to fail too. And while errorStrategy 'retry' could technically work (by repeating the failing processes while the good ones get through), that doesn't seem like a good solution.
Am I missing something?
If a process can fail deterministically, it might be better to handle that case explicitly. Setting the errorStrategy directive to 'ignore' means any process execution error is ignored, allowing your workflow to continue. For example, you might get a process execution error if a process exits with a non-zero exit status or if one or more expected output files are missing. The pipeline will continue, but downstream processes that depend on the failed task's outputs will not be attempted.
Contents of test.nf:
nextflow.enable.dsl=2

process foo {

    tag { sample }

    input:
    val sample

    output:
    path "${sample}.txt"

    """
    if [ "${sample}" == "s1" ] ; then
        (exit 1)
    fi
    if [ "${sample}" == "s2" ] ; then
        echo "Hello" > "${sample}.txt"
    fi
    """
}

process bar {

    tag { txt }

    input:
    path txt

    output:
    path "${txt}.gz"

    """
    gzip -c "${txt}" > "${txt}.gz"
    """
}

workflow {
    Channel.of('s1', 's2', 's3') | foo | bar
}
Contents of nextflow.config:
process {

    // this is the default task.shell:
    shell = [ '/bin/bash', '-ue' ]

    errorStrategy = 'ignore'
}
Run with:
nextflow run -ansi-log false test.nf
Results:
N E X T F L O W ~ version 20.10.0
Launching `test.nf` [drunk_bartik] - revision: e2103ea23b
[9b/56ce2d] Submitted process > foo (s2)
[43/0d5c9d] Submitted process > foo (s1)
[51/7b6752] Submitted process > foo (s3)
[43/0d5c9d] NOTE: Process `foo (s1)` terminated with an error exit status (1) -- Error is ignored
[51/7b6752] NOTE: Missing output file(s) `s3.txt` expected by process `foo (s3)` -- Error is ignored
[51/267685] Submitted process > bar (s2.txt)
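If you'd rather give failing tasks a couple of chances before giving up on them, note that errorStrategy also accepts a dynamic (closure) value, so 'retry' and 'ignore' can be combined. A sketch for nextflow.config (adjust the attempt count to taste):

process {
    errorStrategy = { task.attempt <= 2 ? 'retry' : 'ignore' }
    maxRetries = 2
}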

How to call a process in workflow.onError

I have this small pipeline:
process test {
    """
    echo 'hello'
    exit 1
    """
}

workflow.onError {
    process finish_error {
        script:
        """
        echo 'blablabla'
        """
    }
}
I want to trigger a Python script when the pipeline errors, using the finish_error process, but this process does not seem to be triggered at all, even with a simple echo example.
nextflow run test.nf
N E X T F L O W ~ version 20.10.0
Launching `test.nf` [cheesy_banach] - revision: 9020d641ca
executor > local (1)
[56/994298] process > test [100%] 1 of 1, failed: 1 ✘
[- ] process > finish_error [ 0%] 0 of 1
Error executing process > 'test'
Caused by:
Process `test` terminated with an error exit status (1)
Command executed:
echo 'hello'
exit 1
Command exit status:
1
Command output:
hello
Command wrapper:
hello
Work dir:
/home/joost/nextflow/work/56/9942985fc9948fd9bf7797d39c1785
Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line
How can I trigger this finish_error process, and how can I view its output?
The onError handler is invoked when a process causes pipeline execution to terminate prematurely. Since a Nextflow pipeline is really just a series of processes joined together, launching another pipeline process from within an event handler doesn't make much sense. If your Python script should run using the local executor, you can just execute it in the usual way. This example assumes your script is executable and has an appropriate shebang:
process test {
    """
    echo 'hello'
    exit 1
    """
}

workflow.onError {
    def proc = "${baseDir}/test.py".execute()
    proc.waitFor()
    println proc.text
}
Run using:
nextflow run -ansi-log false test.nf
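If the script needs to know what actually went wrong, the workflow metadata available inside the handler (for example workflow.errorMessage) can be passed along as arguments. A sketch, assuming test.py accepts the message as its first argument:

workflow.onError {
    def proc = [ "${baseDir}/test.py", workflow.errorMessage ?: '' ].execute()
    proc.waitFor()
    println proc.text
}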

Running unit test target on Xcode 9 returns "Early unexpected exit" error

I'm learning how to add unit tests to an Objective-C project using Xcode 9. I created a command-line project from scratch called Foo, then added a new target to the project called FooTests, and finally edited Foo's scheme to include FooTests. However, whenever I run the tests (i.e., menu Product > Test), Xcode 9 throws the following error:
Showing All Messages
Test target FooTests encountered an error (Early unexpected exit, operation never finished bootstrapping - no restart will be attempted)
However, when I run the tests by calling xcodebuild from the command line, all unit tests seem to execute correctly. Here's the output:
a483e79a7057:foo ram$ xcodebuild test -project foo.xcodeproj -scheme foo
2020-05-15 17:39:30.496 xcodebuild[53179:948485] IDETestOperationsObserverDebug: Writing diagnostic log for test session to:
/var/folders/_z/q35r6n050jz5fw662ckc_kqxbywcq0/T/com.apple.dt.XCTest/IDETestRunSession-E7DD2270-C6C2-43ED-84A9-6EBFB9A4E853/FooTests-8FE46058-FC4A-47A2-8E97-8D229C5678E1/Session-FooTests-2020-05-15_173930-Mq0Z8N.log
2020-05-15 17:39:30.496 xcodebuild[53179:948484] [MT] IDETestOperationsObserverDebug: (324DB265-AD89-49B6-9216-22A6F75B2EDF) Beginning test session FooTests-324DB265-AD89-49B6-9216-22A6F75B2EDF at 2020-05-15 17:39:30.497 with Xcode 9F2000 on target <DVTLocalComputer: 0x7f90b2302ef0 (My Mac | x86_64h)> (10.14.6 (18G4032))
=== BUILD TARGET foo OF PROJECT foo WITH CONFIGURATION Debug ===
Check dependencies
=== BUILD TARGET FooTests OF PROJECT foo WITH CONFIGURATION Debug ===
Check dependencies
Test Suite 'All tests' started at 2020-05-15 17:39:30.845
Test Suite 'FooTests.xctest' started at 2020-05-15 17:39:30.846
Test Suite 'FooTests' started at 2020-05-15 17:39:30.846
Test Case '-[FooTests testExample]' started.
Test Case '-[FooTests testExample]' passed (0.082 seconds).
Test Case '-[FooTests testPerformanceExample]' started.
/Users/ram/development/objective-c/foo/FooTests/FooTests.m:36: Test Case '-[FooTests testPerformanceExample]' measured [Time, seconds] average: 0.000, relative standard deviation: 84.183%, values: [0.000006, 0.000002, 0.000001, 0.000002, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001, 0.000001], performanceMetricID:com.apple.XCTPerformanceMetric_WallClockTime, baselineName: "", baselineAverage: , maxPercentRegression: 10.000%, maxPercentRelativeStandardDeviation: 10.000%, maxRegression: 0.100, maxStandardDeviation: 0.100
Test Case '-[FooTests testPerformanceExample]' passed (0.660 seconds).
Test Suite 'FooTests' passed at 2020-05-15 17:39:31.589.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.743) seconds
Test Suite 'FooTests.xctest' passed at 2020-05-15 17:39:31.589.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.744) seconds
Test Suite 'All tests' passed at 2020-05-15 17:39:31.590.
Executed 2 tests, with 0 failures (0 unexpected) in 0.742 (0.745) seconds
** TEST SUCCEEDED **
Does anyone know how to add unit tests to an Xcode 9 project for a command-line application? If you happen to know, what's the right way of doing this, and what am I doing wrong?

Spinnaker: Deciding which stage to go next

I'm creating a scenario where:
Stage A is a Jenkins job;
Stage B will run if Stage A succeeds;
Stage C will run if Stage A fails.
In the desired pipeline:
Stage A > Execution Options > If stage fails > ignore the failure.
Stage B > Execution Options > Conditional on Expression > add your expression for when Stage A succeeds.
Stage C > Execution Options > Conditional on Expression > add your expression for when Stage A fails.
I made a short video tutorial; feel free to watch it.
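For reference, the two expressions might look like the following, assuming the first stage is literally named 'Stage A' (a sketch in Spinnaker's pipeline expression language; with 'ignore the failure' selected, a failed stage finishes with the FAILED_CONTINUE status):

Stage B: ${ #stage('Stage A')['status'].toString() == 'SUCCEEDED' }
Stage C: ${ #stage('Stage A')['status'].toString() == 'FAILED_CONTINUE' }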

Open MPI - ring_c on multiple hosts fails

I have recently installed Open MPI on two Ubuntu 14.04 hosts and I am now testing its functionality with the two provided example programs hello_c and ring_c. The hosts are called 'hermes' and 'zeus', and they both have the user 'mpiuser' for logging in non-interactively (via ssh-agent).
The commands mpirun hello_c and mpirun --host hermes,zeus hello_c both work properly.
Calling mpirun --host zeus ring_c locally also works. Output for both hermes and zeus:
mpiuser@zeus:/opt/openmpi-1.6.5/examples$ mpirun --host zeus ring_c
Process 0 sending 10 to 0, tag 201 (1 processes in ring)
Process 0 sent to 0
Process 0 decremented value: 9
Process 0 decremented value: 8
Process 0 decremented value: 7
Process 0 decremented value: 6
Process 0 decremented value: 5
Process 0 decremented value: 4
Process 0 decremented value: 3
Process 0 decremented value: 2
Process 0 decremented value: 1
Process 0 decremented value: 0
Process 0 exiting
But calling mpirun --host hermes,zeus ring_c fails and gives the following output:
mpiuser@zeus:/opt/openmpi-1.6.5/examples$ mpirun --host hermes,zeus ring_c
Process 0 sending 10 to 1, tag 201 (2 processes in ring)
[zeus:2930] *** An error occurred in MPI_Recv
[zeus:2930] *** on communicator MPI_COMM_WORLD
[zeus:2930] *** MPI_ERR_TRUNCATE: message truncated
[zeus:2930] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
Process 0 sent to 1
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 2930 on
node zeus exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
I haven't found any documentation on how to solve such a problem and I don't have a clue where to look for the mistake on the basis of the error output. How can I fix this?
You've changed two things between the first and second runs: you've increased the number of processes from 1 to 2, and you've run on multiple hosts rather than a single host.
I'd suggest you first check that you can run 2 processes on the same host:
mpirun -n 2 ring_c
and see what you get.
When debugging on a cluster it's often useful to know where each process is running. You should always print out the total number of processes as well. Try using the following code at the top of ring_c.c:
/* rank and size are already declared in ring_c.c */
char nodename[MPI_MAX_PROCESSOR_NAME];
int namelen;

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Get_processor_name(nodename, &namelen);

printf("Rank %d out of %d running on node %s\n", rank, size, nodename);
The error you're getting says that the incoming message is too large for the receive buffer, which is weird given that the code always sends and receives a single integer. In cases like this, where single-host runs work but multi-host runs fail in strange ways, it's also worth checking that both hosts run the same Open MPI version (compare the output of mpirun --version on each machine), since mismatched installations are a common cause of this kind of failure.