Nextflow: Is it possible to transform a queue channel to a value channel?

I have a process A which outputs a file into a channel outA. I want to use that file as input for 3 downstream processes B, C and D. Since outA is a queue channel by default, I cannot use the file more than once (unlike with a value channel).
Currently, I use the into operator to duplicate the channel outA as described here (see the code below).
I also know that you can create a value channel from a file by doing Channel.value(file('/path/to/file.txt')).
My current code:
// Upstream process creating a queue channel with one file
process A {
    output:
    file outA

    "echo 'Bonjour le monde !' > $outA"
}

// Queue channel triplication
outA.into {inB; inC; inD}

// Downstream processes all using the same file
process B {
    input:
    file inB

    "script of process B $inB"
}

process C {
    input:
    file inC

    "script of process C $inC"
}

process D {
    input:
    file inD

    "script of process D $inD"
}
It works fine as it is, but I wonder if it is possible to transform the queue channel outA into a value channel, so that I can use the same channel as input for processes B, C and D.

You can use the first() operator to do that, e.g.:
inX = outA.first()

process B {
    input:
    file inX

    "script of process B $inX"
}
etc
Also note that when a process has no input (like process A) its outputs are implicitly value channels.
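To illustrate that last note, here is a minimal sketch (DSL1 syntax as in the question; the process bodies are placeholders): since process A declares no input, its output channel outA already behaves as a value channel and can be consumed by several downstream processes directly, without into or first():
// process A has no input, so its output channel outA behaves as a value channel
process A {
    output:
    file outA

    "echo 'Bonjour le monde !' > $outA"
}

// no .into{} needed: a value channel can be read any number of times
process B {
    input:
    file f from outA

    "script of process B $f"
}

process C {
    input:
    file f from outA

    "script of process C $f"
}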

Related

Nextflow input how to declare tuple in tuple

I am working with a Nextflow workflow that, at a certain stage, groups a series of files by their sample id using groupTuple(), resulting in a channel that looks like this:
[sample_id, [file_A, file_B, ... , file_N]]
[sample_id, [file_A, file_B, ... , file_N]]
...
[sample_id, [file_A, file_B, ... , file_N]]
Note that this is the same channel structure that you get from .fromFilePairs().
I want to use these channel items in a process in such a way that, for each item, the process reads the sample_id from the first field and all the files from the inner tuple at once.
The nextflow documentation is somewhat cryptic about this, and it is hard to find how to declare this type of input in a channel, so I thought I'd create a question on stack overflow and then answer it myself for anyone who will ever be looking for this answer.
How does one declare the inner tuple in the input section of a nextflow process?
In the example given above, my inner tuple contains items of only one type (files). I can therefore pass the whole second term of the tuple (i.e. the inner tuple) as a single input item under the file() qualifier. Like this:
input:
tuple \
    val(sample_id), \
    file(inner_tuple) \
    from Input_channel
This will ensure that the tuple content is read as files (one by one), much like performing .collect() on a channel of files, in the sense that all files will then be available in the Nextflow temp directory where the process is executed.
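For instance (a hedged sketch; the merging command is just a placeholder), the script block can then refer to the whole staged collection at once:
script:
"""
# inner_tuple expands to the space-separated list of staged files
cat ${inner_tuple} > ${sample_id}.merged.txt
"""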
The question is how you come up with sample_id, but in case they just have different file extensions you might use something like this:
all_files = Channel.fromPath("/path/to/your/files/*")

all_files
    .map { it -> [it.simpleName, it] }
    .groupTuple()
    .set { grouped_files }
The path qualifier (previously the file qualifier) can be used to stage a single (file) value or a collection of (file) values into the process execution directory. The note at the bottom of the multiple input files section in the docs also mentions:
The normal file input constructs introduced in the input of files section are valid for collections of multiple files as well.
This means, you can use a script variable, e.g.:
input:
tuple val(sample_id), path(my_files)
In which case, the variable will hold the list of files (preserving the original filenames). You could use it directly to refer to all of the files in the list, or, you could access specific (file) elements (if you need them) using square bracket (slice) notation.
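For example, a minimal sketch (the process name and the echo commands are placeholders) showing both ways of referring to the staged files:
process example {
    input:
    tuple val(sample_id), path(my_files)

    """
    echo "${sample_id}: ${my_files}"      # all files, space-separated
    echo "first file: ${my_files[0]}"     # slice notation for a single element
    """
}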
This is the syntax you will want most of the time. However, if you need predictable filenames or if you need to deal with files that have identical filenames, you may need a different approach:
Alternatively, you could specify a target filename, e.g.:
input:
tuple val(sample_id), path('my_file')
In the case where a single file is received by the process, the file would be staged with the target filename. However, when a collection of files is received by the process, the filename will be appended with a numerical suffix representing its ordinal position in the list. For example:
process test {
    tag { sample_id }
    debug true
    stageInMode 'rellink'

    input:
    tuple val(sample_id), path('fastq')

    """
    echo "${sample_id}:"
    ls -g --time-style=+"" fastq*
    """
}

workflow {
    readgroups = Channel.fromFilePairs( '*_{1,2}.fastq' )

    test( readgroups )
}
Results:
$ touch {foo,bar,baz}_{1,2}.fastq
$ nextflow run .
N E X T F L O W ~ version 22.04.4
Launching `./main.nf` [scruffy_caravaggio] DSL2 - revision: 87a80d6d50
executor > local (3)
[65/66f860] process > test (bar) [100%] 3 of 3 ✔
baz:
lrwxrwxrwx 1 users 20 fastq1 -> ../../../baz_1.fastq
lrwxrwxrwx 1 users 20 fastq2 -> ../../../baz_2.fastq
foo:
lrwxrwxrwx 1 users 20 fastq1 -> ../../../foo_1.fastq
lrwxrwxrwx 1 users 20 fastq2 -> ../../../foo_2.fastq
bar:
lrwxrwxrwx 1 users 20 fastq1 -> ../../../bar_1.fastq
lrwxrwxrwx 1 users 20 fastq2 -> ../../../bar_2.fastq
Note that the names of staged files can be controlled using the * and ? wildcards. See the links above for a table that shows how the wildcards are replaced depending on the cardinality of the input collection.
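As a hedged variation on the hypothetical test process above, a dir/* name pattern (as described in that table) would instead stage the files into a sub-directory, preserving their original filenames:
input:
tuple val(sample_id), path('reads/*')

"""
echo "${sample_id}:"
ls -g --time-style=+"" reads/
"""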

Passing list of filenames to nextflow process

I am a newcomer to Nextflow and I am trying to process multiple files in a workflow. There are more than 300 of these files, so I would like not to paste them into the command line as options. So what I have done is create a file with the filename of every file I need to process, but I am not sure how to pass it into the process. This is what I've tried:
params.SRRs = "srr_ids.txt"
process tmp {
input:
file ids
output:
path "*.txt"
script:
'''
while read id; do
touch ${id}.txt;
echo ${id} > ${id}.txt;
done < $ids
'''
}
workflow {
tmp(params.SRRs)
}
The script is supposed to read in the file srr_ids.txt, and create files that have their ids in it (just testing on a smaller task). The error log says that the id variable is unbound, but I don't understand why. What is the conventional way of passing lots of filenames to a pipeline? Should I write some other process that parses the list?
Maybe there's a typo in your question, but the error is actually that the ids variable is unbound:
Command error:
.command.sh: line 5: ids: unbound variable
The problem is that when you use a single-quote script string, you will not be able to access Nextflow variables in your script block. You can either define your script using a double-quote string and escape your shell variables:
params.SRRs = "srr_ids.txt"
process tmp {
input:
path ids
output:
path "*.txt"
script:
"""
while read id; do
touch "\${id}.txt"
echo "\${id}" > "\${id}.txt"
done < "${ids}"
"""
}
workflow {
SRRs = file(params.SRRs)
tmp(SRRs)
}
Or, use a shell block which uses the exclamation mark ! character as the variable placeholder for Nextflow variables. This makes it possible to use both Nextflow and shell variables in the same piece of code without having to escape each of the shell variables:
params.SRRs = "srr_ids.txt"
process tmp {
input:
path ids
output:
path "*.txt"
shell:
'''
while read id; do
touch "${id}.txt"
echo "${id}" > "${id}.txt"
done < "!{ids}"
'''
}
workflow {
SRRs = file(params.SRRs)
tmp(SRRs)
}
What is the conventional way of passing lots of filenames to a pipeline?
The conventional way, I think, is to actually supply one (or more) glob patterns to the fromPath channel factory method. For example:
params.SRRs = "./path/to/files/SRR*.fastq.gz"
workflow {
Channel
.fromPath( params.SRRs )
.view()
}
Results:
$ nextflow run main.nf
N E X T F L O W ~ version 22.04.4
Launching `main.nf` [sleepy_bernard] DSL2 - revision: 30020008a7
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1910483.fastq.gz
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1910482.fastq.gz
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1448795.fastq.gz
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1448793.fastq.gz
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1448794.fastq.gz
/home/steve/working/stackoverflow/73702711/path/to/files/SRR1448792.fastq.gz
If instead you would prefer to pass in a list of filenames, like in your example, use either the splitCsv or the splitText operator to get what you want. For example:
params.SRRs = "srr_ids.txt"
workflow {
Channel
.fromPath( params.SRRs )
.splitText() { it.strip() }
.view()
}
Results:
$ nextflow run main.nf
N E X T F L O W ~ version 22.04.4
Launching `main.nf` [fervent_ramanujan] DSL2 - revision: 89a1771d50
SRR1448794
SRR1448795
SRR1448792
SRR1448793
SRR1910483
SRR1910482
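And if the ids lived in a CSV instead (with a header and an id column, both hypothetical here), the splitCsv operator could be used in much the same way, for example:
params.SRRs = "srr_ids.csv"

workflow {
    Channel
        .fromPath( params.SRRs )
        .splitCsv( header: true )
        .map { row -> row.id }
        .view()
}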
Should I write some other process that parses the list?
You may not need to. My feeling is that your code might benefit from using the fromSRA factory method, but we don't really have enough details to say one way or the other. If you need to, you could just write a function that returns a channel.
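For example, a minimal sketch of such a function (the function name and the assumed mapping from id to FASTQ filename are hypothetical):
def srr_files( ids_file ) {
    // one SRR id per line, mapped to a (hypothetical) FASTQ path
    Channel
        .fromPath( ids_file )
        .splitText() { it.strip() }
        .map { id -> file( "./path/to/files/${id}.fastq.gz" ) }
}

workflow {
    srr_files( params.SRRs ).view()
}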

`errorStrategy` setting to stop current process but continue pipeline

I have a lot of samples that go through a process which sometimes fail (deterministically). In such a case, I would want the failing process to stop, but all other samples to still get submitted and processed independently.
If I understand correctly, setting errorStrategy 'ignore' will continue the script within the failing process, which is not what I want. And errorStrategy 'finish' would stop submitting new samples, even though there is no reason for the other samples to fail too. And while errorStrategy 'retry' could technically work (by repeating the failing processes while the good ones get through), that doesn't seem like a good solution.
Am I missing something?
If a process can fail deterministically, it might be better to handle this situation somehow. Setting the errorStrategy directive to 'ignore' means any process execution errors are ignored, allowing your workflow to continue. For example, you might get a process execution error if a process exits with a non-zero exit status or if one or more expected output files are missing. The pipeline will continue; however, downstream processes will not be attempted.
Contents of test.nf:
nextflow.enable.dsl=2

process foo {
    tag { sample }

    input:
    val sample

    output:
    path "${sample}.txt"

    """
    if [ "${sample}" == "s1" ] ; then
        (exit 1)
    fi

    if [ "${sample}" == "s2" ] ; then
        echo "Hello" > "${sample}.txt"
    fi
    """
}

process bar {
    tag { txt }

    input:
    path txt

    output:
    path "${txt}.gz"

    """
    gzip -c "${txt}" > "${txt}.gz"
    """
}

workflow {
    Channel.of('s1', 's2', 's3') | foo | bar
}
Contents of nextflow.config:
process {
    // this is the default task.shell:
    shell = [ '/bin/bash', '-ue' ]

    errorStrategy = 'ignore'
}
Run with:
nextflow run -ansi-log false test.nf
Results:
N E X T F L O W ~ version 20.10.0
Launching `test.nf` [drunk_bartik] - revision: e2103ea23b
[9b/56ce2d] Submitted process > foo (s2)
[43/0d5c9d] Submitted process > foo (s1)
[51/7b6752] Submitted process > foo (s3)
[43/0d5c9d] NOTE: Process `foo (s1)` terminated with an error exit status (1) -- Error is ignored
[51/7b6752] NOTE: Missing output file(s) `s3.txt` expected by process `foo (s3)` -- Error is ignored
[51/267685] Submitted process > bar (s2.txt)

Nextflow: how do you pass an output (multiple files) from the publishDir to the next process?

I have a process generating two files that I am interested in, hitsort.cls and contigs.fasta.
I output these using publishDir:
process RUN_RE {
    publishDir "$baseDir/RE_output", mode: 'copy'

    input:
    file 'interleaved.fq'

    output:
    file "${params.RE_run}/seqclust/clustering/hitsort.cls"
    file "${params.RE_run}/contigs.fasta"

    script:
    """
    some_code
    """
}
Now, I need these two files to be an input for another process but I don't know how to do that.
I have tried calling this process with
NEXT_PROCESS(params.hitsort, params.contigs)
while specifying the input as:
process NEXT_PROCESS {
    input:
    path hitsort
    path contigs
but it's not working, because only the basename is used instead of the full path. Basically what I want is to wait for RUN_RE to finish, and then use the two files it outputs for the next process.
Best to avoid accessing files in the publishDir, since:
Files are copied into the specified directory in an asynchronous manner, thus they may not be immediately available in the published directory at the end of the process execution. For this reason files published by a process must not be accessed by other downstream processes.
The recommendation is therefore to ensure your processes only access files in the working directory, (i.e. ./work). What this means is: it's best to avoid things like absolute paths in your input and output declarations. This will also help ensure your workflows are portable.
nextflow.enable.dsl=2

params.interleaved_fq = './path/to/interleaved.fq'
params.publish_dir = './results'

process RUN_RE {
    publishDir "${params.publish_dir}/RE_output", mode: 'copy'

    input:
    path interleaved

    output:
    path "./seqclust/clustering/hitsort.cls", emit: hitsort_cls
    path "./contigs.fasta", emit: contigs_fasta

    """
    # do something with ${interleaved}...
    ls -l "${interleaved}"

    # create some outputs...
    mkdir -p ./seqclust/clustering
    touch ./seqclust/clustering/hitsort.cls
    touch ./contigs.fasta
    """
}

process NEXT_PROCESS {
    input:
    path hitsort
    path contigs

    """
    ls -l
    """
}

workflow {
    interleaved_fq = file( params.interleaved_fq )

    NEXT_PROCESS( RUN_RE( interleaved_fq ) )
}
The above workflow block is effectively the same as:
workflow {
    interleaved_fq = file( params.interleaved_fq )

    RUN_RE( interleaved_fq )
    NEXT_PROCESS( RUN_RE.out.hitsort_cls, RUN_RE.out.contigs_fasta )
}

check if nextflow channel is empty

I am trying to figure out how to check if a channel is empty or not.
For instance, I have two processes. The first process runs only if a combination of parameters/flags is set; if so, it also checks whether its input file from another process (received via a channel) is not empty, and then creates a new input file for the second process (to eventually replace the default one). As a simplified example:
.....
.....
// create the channel here to force nextflow to wait for the first process
_chNewInputForProcessTwo = Channel.create()

process processOne {
    when:
    params.conditionOne && params.conditionTwo

    input:
    file inputFile from _channelUpstreamProcess

    output:
    file("my.output.file") into _chNewInputForProcessTwo

    script:
    """
    # check if we need to produce new input for second process (i.e., input file not empty)
    if [ -s ${inputFile} ]
    then
        <super_command_to_generate_new_fancy_input_for_second_process> > "my.output.file"
    else
        echo "No need to create new input"
    fi
    """
}

// and here I would like to check if new input was generated or leave the "default" one
_chInputProcessTwo = Channel.from(_chNewInputForProcessTwo).ifEmpty(Channel.value(params.defaultInputProcessTwo))

process secondProcess {
    input:
    file inputFile from _chInputProcessTwo
......
......
etc.
When I try running with this approach it fails because the channel _chNewInputForProcessTwo contains DataflowQueue(queue=[]) therefore, not being actually empty.
I've tried several things based on the documentation and the threads on Google Groups and on Gitter: trying to set it to empty, but then it complains I am trying to use the channel twice; putting create().close(); etc.
Is there a clean/reasonable way to do this? I could do it using a value channel and have the first process output some string on the stdout to be picked up and checked by the second process, but that seems pretty dirty to me.
Any suggestions/feedback is appreciated. Thank you in advance!
Marius
Best to avoid trying to check if the channel is empty. If your channel could be empty and you need a default value in your channel, you can use the ifEmpty operator to supply one. Note that a single value is implicitly a value channel. I think all you need is:
myDefaultInputFile = file(params.defaultInputProcessTwo)
chInputProcessTwo = chNewInputForProcessTwo.ifEmpty(myDefaultInputFile)
Also, calling Channel.create() is usually unnecessary.
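Putting it together with the question's processes (DSL1 syntax to match; the echo script is just a placeholder), the second process can then consume either the generated file or the default one:
myDefaultInputFile = file(params.defaultInputProcessTwo)

// falls back to the default file if processOne produced nothing
_chInputProcessTwo = _chNewInputForProcessTwo.ifEmpty(myDefaultInputFile)

process secondProcess {
    input:
    file inputFile from _chInputProcessTwo

    """
    echo "Running second process on ${inputFile}"
    """
}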