snakemake parameter function lambda

I want to use a function in params.
Snakemake:
def mitico(x):
    res = int(x) + 1
    return res
I have a wildcard {sample} that is an integer, and I want to use {sample}+1.
How can I do this inside the Snakemake params?
In the rule:
rule create_pt:
    input:
        read="CALL2/{sample}.vcf",
    output:
        out="OUT/{sample}.txt",
    conda:
        "envs/mb.yml"
    params:
        db_ens="/mnt/mpwor2k/",
        fst="/Homo_sapiens.GRCh37.75.dna.primary_assembly.fa",
        tumor_id="{sample}",
        normal_id=lambda wildcards: mitico('{sample}')
    shell:
        ...
I get this error:
ValueError: invalid literal for int() with base 10: '{sample}'
Wildcards:
sample=432

'{sample}' in your lambda function is just a string, not a wildcard. This is how to use the wildcard in a lambda:
lambda wildcards: mitico(wildcards.sample)
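Put back into the rule, a minimal sketch of the corrected params block (the shell command below is a placeholder, since the original command was omitted from the question):
rule create_pt:
    input:
        read="CALL2/{sample}.vcf",
    output:
        out="OUT/{sample}.txt",
    params:
        tumor_id="{sample}",
        # pass the wildcard value itself, not the literal string '{sample}'
        normal_id=lambda wildcards: mitico(wildcards.sample),
    # placeholder shell command, only to show the params being used
    shell:
        "echo {params.tumor_id} {params.normal_id} > {output.out}"
With sample=432, normal_id evaluates to 433.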


Snakemake: Only input files can be specified as functions

Snakemake complains that "Only input files can be specified as functions" in the shell line.
def get_filename(wildcards):
    sampleid = wildcards.sample.split('-')[1]
    GeneFuse_vcf = f"{sampleid}.fusion.vcf"
    return GeneFuse_vcf

rule GeneFuse:
    input:
        bam_path = f"{outputdir}/" + "{sample}/13_genefusion"
    params:
        svabaflow = config["svabaflow"],
    output:
        GeneFuse_vcf = get_filename
    shell:
        "{params.svabaflow} {input} {wildcards.sample}"
In the rule GeneFuse, my {sample} format is ctn-305A26000547,
and I want to tell Snakemake that my output file (GeneFuse_vcf) should be named 305A26000547.fusion.vcf.
Of course, if the {sample} is ctn-367A23594285, the filename should be "367A23594285.fusion.vcf".
Any suggestion to fix it? Thanks.
Output files cannot be defined with functions; only input files (and params) accept them, so the output has to be a plain pattern containing the wildcard. Assuming you already have the list SAMPLEIDS as you state in the comment, you can construct a rule all which calls rule GeneFuse like this:
rule all:
    input:
        expand("{sample}.fusion.vcf", sample=SAMPLEIDS),
    default_target: True

rule GeneFuse:
    input:
        bam_path=f"{outputdir}/" + "{sample}/13_genefusion",
    params:
        svabaflow=config["svabaflow"],
    output:
        GeneFuse_vcf="{sample}.fusion.vcf",
    shell:
        "{params.svabaflow} {input} {wildcards.sample}"
If the bam_path input needs the full sample name (with its ctn-style prefix) while the output uses only the short ID, you can map the ID back to the full name with a dictionary and an input function:
rule all:
    input:
        expand("{sample}.fusion.vcf", sample=SAMPLEIDS),
    default_target: True

dictionary = {"305A26000547": "ct1-305A26000547",
              "367A23594285": "ct5-367A23594285",
              "302A67458112": "ct9-302A67458112"}

def get_path(wildcards):
    ss = dictionary[wildcards.sample]
    bam_path = f"{outputdir}/{ss}/13_genefusion"
    return bam_path

rule GeneFuse:
    input:
        get_path,
    params:
        svabaflow=config["svabaflow"],
    output:
        GeneFuse_vcf="{sample}.fusion.vcf",
    shell:
        "{params.svabaflow} {input} {wildcards.sample}"

Nextflow DSL2 output from different processes mixed up as input in later processes

I have a DSL2 Nextflow pipeline that branches out to 2 FILTER processes. Then in the CONCAT process, I reuse the two previous process outputs as input. Also in the SUMMARY process, I reuse previous process outputs as input.
I am finding that when I run the pipeline with 2 or more pairs of fastq samples, the inputs get mixed up.
For example, at the CONCAT step, I end up concatenating the bwa_2_ch output of one pair of fastq samples with the filter_1_ch output of another pair of fastq samples, instead of samples with the same pair_id.
I believe I am not writing the workflow { } channels and inputs correctly so that the workflow runs through the steps without mixing samples, but I am not sure how to define the inputs so that there is no mix-up.
//trimmomatic read trimming
process TRIM {
    tag "trim ${pair_id}"
    publishDir "${params.outdir}/$pair_id/trim_results"

    input:
    tuple val(pair_id), path(reads)

    output:
    tuple val(pair_id), path("trimmed_${pair_id}_...")

    script:
    """
    """
}

//bwa alignment
process BWA_1 {
    tag "align-1 ${pair_id}"
    publishDir "${params.outdir}/$pair_id/..."

    input:
    tuple val(pair_id), path(reads)
    path index

    output:
    tuple val(pair_id), path("${pair_id}_...")

    script:
    """
    """
}

process FILTER_1 {
    tag "filter ${pair_id}"
    publishDir "${params.outdir}/$pair_id/filter_results"

    input:
    tuple val(pair_id), path(reads)

    output:
    tuple val(pair_id), path("${pair_id}_...")

    script:
    """
    """
}

process FILTER_2 {
    tag "filter ${pair_id}"
    publishDir "${params.outdir}/$pair_id/filter_results"

    input:
    tuple val(pair_id), path(reads)

    output:
    tuple val(pair_id), path("${pair_id}_...")

    script:
    """
    """
}

//bwa alignment
process BWA_2 {
    tag "align-2 ${pair_id}"
    publishDir "${params.outdir}/$pair_id/bwa_2_results"

    input:
    tuple val(pair_id), path(reads)
    path index

    output:
    tuple val(pair_id), path("${pair_id}_...")

    script:
    """
    """
}

//concatenate pf and non_human reads
process CONCAT {
    tag "concat ${pair_id}"
    publishDir "${params.outdir}/$pair_id"

    input:
    tuple val(pair_id), path(program_reads)
    tuple val(pair_id), path(pf_reads)

    output:
    tuple val(pair_id), path("${pair_id}_...")

    script:
    """
    """
}

//summary
process SUMMARY {
    tag "summary ${pair_id}"
    publishDir "${params.outdir}/$pair_id"

    input:
    tuple val(pair_id), path(trim_reads)
    tuple val(pair_id), path(non_human_reads)

    output:
    file("summary_${pair_id}.csv")

    script:
    """
    """
}
workflow {
    Channel
        .fromFilePairs(params.reads, checkIfExists: true)
        .set { read_pairs_ch }

    // trim reads
    trim_ch = TRIM(read_pairs_ch)

    // map to pf genome
    bwa_1_ch = BWA_1(trim_ch, params.pf_index)

    // filter mapped reads
    filter_1_ch = FILTER_1(bwa_1_ch)
    filter_2_ch = FILTER_2(bwa_1_ch)

    // map to pf and human genome
    bwa_2_ch = BWA_2(filter_2_ch, params.index)

    // concatenate non human reads
    concat_ch = CONCAT(bwa_2_ch, filter_1_ch)

    // summarize
    summary_ch = SUMMARY(trim_ch, concat_ch)
}
Mix-ups like this usually occur when a process erroneously receives two or more queue channels. Most of the time, what you want is one queue channel and one or more value channels when you require multiple input channels. Here, I'm not sure exactly what pair_id would be bound to, but it likely won't be what you expect:
input:
tuple val(pair_id), path(program_reads)
tuple val(pair_id), path(pf_reads)
What you want to do is replace the above with:
input:
tuple val(pair_id), path(program_reads), path(pf_reads)
And then use the join operator to create the required inputs; the SUMMARY process needs the same treatment, since it also declares two queue channel inputs. For example:
workflow {
    Channel
        .fromFilePairs( params.reads, checkIfExists: true )
        .set { read_pairs_ch }

    pf_index = file( params.pf_index )
    bwa_index = file( params.bwa_index )

    // trim reads
    trim_ch = TRIM( read_pairs_ch )

    // map to pf genome
    bwa_1_ch = BWA_1( trim_ch, pf_index )

    // filter mapped reads
    filter_1_ch = FILTER_1( bwa_1_ch )
    filter_2_ch = FILTER_2( bwa_1_ch )

    // map to pf and human genome
    bwa_2_ch = BWA_2( filter_2_ch, bwa_index )

    // concatenate non human reads
    concat_ch = bwa_2_ch \
        | join( filter_1_ch ) \
        | CONCAT

    // summarize
    summary_ch = trim_ch \
        | join( concat_ch ) \
        | SUMMARY
}

Dynamic Branching/Plumbing

Is it possible to use Dynamic Branching/Plumbing in a snakefile?
I wish to perform the following:
A -> B -> D
or
A -> C -> D
Depending on whether a config variable is true.
For example:
*(rules.B if config["deblur"] == True else rules.C),
In this instance it runs both rules B and C.
I have tried
if config["deblur"] == True:
rules.B,
else:
rules.C,
But this gives me a syntax error.
In the next rule the input is as follows.
input:
    qiime_feature_table_input = rules.qiime_deblur.output.qiime_deblur_table if config["deblur"] == "True" else rules.qiime_denoise.output.qiime_denoise_table
Thanks for your help!
Since the value of the configuration variable is known before runtime, there's no need for dynamic modification of the DAG in this case. Here's a simple snakefile that will run rules a -> b -> d if config_var is true and rules a -> c -> d if config_var is false:
config_var = True

rule all:
    input:
        "d/out.txt",

rule a:
    output:
        "a/a.txt",
    shell:
        """
        echo 'a' > '{output}'
        """

rule b:
    input:
        rules.a.output,
    output:
        "b/b.txt",
    shell:
        """
        echo 'b' > '{output}'
        """

rule c:
    input:
        rules.a.output,
    output:
        "c/c.txt",
    shell:
        """
        echo 'c' > '{output}'
        """

rule d:
    input:
        rules.b.output if config_var else rules.c.output,
    output:
        "d/out.txt",
    shell:
        """
        cat '{input}' > '{output}'
        """
Not sure if this applies to your case, but one option could be to have these two rules produce the same file (it could be a dummy file) and define only one rule at a time with a conditional. Here's some rough pseudocode:
config_var = True

rule all:
    input: 'test.txt'

if config_var:
    rule B:
        output: 'test.txt'
else:
    rule C:
        output: 'test.txt'
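A slightly more complete sketch of that second idea, with placeholder shell commands added so it actually runs (the echo commands are assumptions, not part of the original answer):
config_var = True

rule all:
    input: 'test.txt'

if config_var:
    rule B:
        output: 'test.txt'
        # placeholder command
        shell: "echo 'made by B' > {output}"
else:
    rule C:
        output: 'test.txt'
        # placeholder command
        shell: "echo 'made by C' > {output}"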

Snakemake: output file name seems to require a static path portion

I'm finding that the name of the output file per rule seems to need a static portion, e.g. "data/{wildcard}_data.csv" vs. "{wildcard}_data.csv"
For example, the script below returns the following error on dryrun:
Building DAG of jobs...
MissingInputException in line 12 of /home/rebecca/workflows/exploring_tools/affymetrix_preprocess/snakemake/Snakefile:
Missing input files for rule getDatFiles:
GSE4290
Script:
rule all:
    input: expand("{geoid}_datout.scaled.expr.csv", geoid = config['geoid'], out_dir = config['out_dir'])
    benchmark: "benchmark.csv"

rule getDatFiles:
    input: "{geoid}"
    output: temp("{geoid}_datFiles.RData")
    shell:
        "Rscript scripts/getDatFiles.R"

rule maskProbes:
    input: "{geoid}_datFiles.RData"
    output: temp("{geoid}_datFiles.masked.RData")
    params:
        probeFilterFxn = lambda x: config['probeFilterFxn'],
        minProbeNumber = lambda x: config['minProbeNumber'],
        probeSingle = lambda x: config['probeSingle']
    script: "scripts/maskProbes.R"

rule runExpresso:
    input: "{geoid}_datFiles.masked.RData"
    output: temp("{geoid}_datout.RData")
    params:
        bgcorrect_method = lambda x: config['bgcorrect_method'],
        normalize = lambda x: config['normalize'],
        pmcorrect_method = lambda x: config['pmcorrect_method'],
        summary_method = lambda x: config['summary_method']
    script: "scripts/runExpresso.R"

rule scaleData:
    input: "{geoid}_datout.RData"
    output: temp("{geoid}_datout.scaled.RData")
    params: sc = lambda x: config['sc']
    script: "scripts/scaleData.R"

rule getExpr:
    input: "{geoid}_datout.scaled.RData"
    output: temp("{geoid}_datout.scaled.expr.csv")
    script: "scripts/getExpr.R"
... while the following script runs without error (the difference being that "output/" is included ahead of the output file names):
rule all:
    input: expand("output/{geoid}_datout.scaled.expr.csv", geoid = config['geoid'], out_dir = config['out_dir'])
    benchmark: "output/benchmark.csv"

rule getDatFiles:
    input: "output/{geoid}"
    output: temp("output/{geoid}_datFiles.RData")
    shell:
        "Rscript scripts/getDatFiles.R"

rule maskProbes:
    input: "output/{geoid}_datFiles.RData"
    output: temp("output/{geoid}_datFiles.masked.RData")
    params:
        probeFilterFxn = lambda x: config['probeFilterFxn'],
        minProbeNumber = lambda x: config['minProbeNumber'],
        probeSingle = lambda x: config['probeSingle']
    script: "scripts/maskProbes.R"

rule runExpresso:
    input: "output/{geoid}_datFiles.masked.RData"
    output: temp("output/{geoid}_datout.RData")
    params:
        bgcorrect_method = lambda x: config['bgcorrect_method'],
        normalize = lambda x: config['normalize'],
        pmcorrect_method = lambda x: config['pmcorrect_method'],
        summary_method = lambda x: config['summary_method']
    script: "scripts/runExpresso.R"

rule scaleData:
    input: "output/{geoid}_datout.RData"
    output: temp("output/{geoid}_datout.scaled.RData")
    params: sc = lambda x: config['sc']
    script: "scripts/scaleData.R"

rule getExpr:
    input: "output/{geoid}_datout.scaled.RData"
    output: temp("output/{geoid}_datout.scaled.expr.csv")
    script: "scripts/getExpr.R"
I'm having a hard time understanding why this might be happening. Ultimately, I'd like to write workflows that are as flexible as possible, and ideally that entails making the output directory variable.
Any insight would be much appreciated.
You have:
rule getDatFiles:
    input: "{geoid}"
which means there should be a file in the current directory named just {geoid}, e.g. ./GSE4290. I suspect what you want is:
rule getDatFiles:
    input: "data/{geoid}_data.csv"
    ...
input: "output/{geoid}" works maybe because there is already a file named output/GSE4290 created elsewhere.
(I haven't looked the rest of the scripts)
Are you running them in the same directory?
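Since the question's expand() already passes config['out_dir'], one way to make the output directory variable is to build the paths from that value. A minimal sketch for one of the rules (assuming out_dir is defined in the config; the doubled braces keep {geoid} as a wildcard inside the f-string):
# e.g. config['out_dir'] = "output"
out_dir = config['out_dir']

rule getExpr:
    input: f"{out_dir}/{{geoid}}_datout.scaled.RData"
    output: temp(f"{out_dir}/{{geoid}}_datout.scaled.expr.csv")
    script: "scripts/getExpr.R"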

What is a callable in Tensorflow?

I thought a callable is just a function from the tf-library that I call. This:
tensor = tf.while_loop(tf.less(tf.rank(tensor), ndims),  # cond
                       tf.append(tensor, axis=axis),     # body
                       loop_vars=[tensor])               # loop_vars
errors to TypeError: cond must be callable.
What is a callable condition if not tf.less()?
A callable is anything that can be called. See here.
The cond should be a function. You can use lambda (See here) to make your condition callable.
Here is a minimal example of how to use tf.while_loop:
i = tf.constant(0)
c = lambda i: tf.less(i, 10)
b = lambda i: tf.add(i, 1)
r = tf.while_loop(c, b, [i])
And in the end, it's not a bad idea to post minimal code that actually runs and generates your error.
tf.less(...) returns a Tensor (the result of the op), not a callable. To make the condition callable, just wrap it in a lambda:
tensor = tf.while_loop(lambda tensor: tf.less(tf.rank(tensor), ndims),  # cond
                       lambda tensor: tf.append(tensor, axis=axis),     # body
                       loop_vars=[tensor])                              # loop_vars