How can I pair up input data for rules in snakemake if the naming isn't consistent and they are all in the same folder?
For example if I want to use each pair of samples as input for each rule:
PT1 T5
S6 T7
S1 T20
In this example I would have 3 pairs: PT1 & T5, S6 & T7, S1 & T20. So to start with, I would want to create 3 folders:
PT1vsT5
S6vsT7
S1vsT20
And then perform an analysis with Manta and output the results into these 3 folders accordingly.
In the following pipeline I want the GERMLINE sample to be the first element in each line (PT1, S6, S1) and TUMOR the second one (T5, T7, T20).
rule all:
    input:
        expand("/{samples_g}vs{samples_t}", samples_g = GERMLINE, samples_t = TUMOR),
        expand("/{samples_g}vs{samples_t}/runWorkflow.py", samples_g = GERMLINE, samples_t = TUMOR),

# Create folders
rule folders:
    output: "./{samples_g}vs{samples_t}"
    shell: "mkdir {output}"

# Manta configuration
rule manta_config:
    input:
        g = BAMPATH + "/{samples_g}.bam",
        t = BAMPATH + "/{samples_t}.bam"
    output:
        wf = "{samples_g}vs{samples_t}/runWorkflow.py"
    params:
        ref = IND,
        out_dir = "{samples_g}vs{samples_t}"
    shell:
        "python configManta.py --normalBam {input.g} --tumorBam {input.t} --referenceFasta {params.ref} --runDir {params.out_dir}"
Could I do it by using a .txt file containing the pairs as input and then using a loop? If so, how should I do it? Otherwise, how could it be done?
You can generate the list of input or output files "manually" using any appropriate Python code. For instance, you could proceed as follows to generate the first of your input lists:
In [1]: GERMLINE = ("PT1", "S6", "S1")
In [2]: TUMOR = ("T5", "T7", "T20")
In [3]: ["/{}vs{}".format(sample_g, sample_t) for (sample_g, sample_t) in zip(GERMLINE, TUMOR)]
Out[3]: ['/PT1vsT5', '/S6vsT7', '/S1vsT20']
So this would be applied as follows:
rule all:
    input:
        ["/{}vs{}".format(sample_g, sample_t) for (sample_g, sample_t) in zip(GERMLINE, TUMOR)],
        ["/{}vs{}/runWorkflow.py".format(sample_g, sample_t) for (sample_g, sample_t) in zip(GERMLINE, TUMOR)],
(Note that I put sample_g and sample_t in singular form, as it sounded more logical in this context, where those variables represent individual sample names and not lists of several samples.)
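For what it's worth, expand itself can also do this element-wise pairing if you pass it the built-in zip as its second argument, which replaces the default product of the variable lists. A minimal equivalent of the rule above:

rule all:
    input:
        expand("/{sample_g}vs{sample_t}", zip, sample_g=GERMLINE, sample_t=TUMOR),
        expand("/{sample_g}vs{sample_t}/runWorkflow.py", zip, sample_g=GERMLINE, sample_t=TUMOR),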
Related
I'm currently trying to write a Snakemake workflow that can check automatically, via a sample.tsv file, whether a given sample is a biological or technical replicate, and then at some point in my workflow use a rule to merge technical/biological replicates.
My tsv file looks like this:
|sample | unit_bio | unit_tech | fq1 | fq2 |
|----------|----------|-----------|-----|-----|
| bCalAnn1 | 1 | 1 | /home/assembly_downstream/data/arima_HiC/bCalAnn1_1_1_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn1_1_1_R2.fastq.gz |
| bCalAnn1 | 1 | 2 | /home/assembly_downstream/data/arima_HiC/bCalAnn1_1_2_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn1_1_2_R2.fastq.gz |
| bCalAnn2 | 1 | 1 | /home/assembly_downstream/data/arima_HiC/bCalAnn2_1_1_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn2_1_1_R2.fastq.gz |
| bCalAnn2 | 1 | 2 | /home/assembly_downstream/data/arima_HiC/bCalAnn2_1_2_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn2_1_2_R2.fastq.gz |
| bCalAnn2 | 2 | 1 | /home/assembly_downstream/data/arima_HiC/bCalAnn2_2_1_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn2_2_1_R2.fastq.gz |
| bCalAnn2 | 3 | 1 | /home/assembly_downstream/data/arima_HiC/bCalAnn2_3_1_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn2_3_1_R2.fastq.gz |
My pipeline looks like this:
import pandas as pd
import os
import yaml

configfile: "config.yaml"

samples = pd.read_table(config["samples"], dtype=str)

rule all:
    input:
        expand(config["arima_mapping"] + "final/{sample}_{unit_bio}_{unit_tech}.bam", zip,
               sample=samples["sample"], unit_bio=samples["unit_bio"], unit_tech=samples["unit_tech"])

..
some rules
..

rule add_read_groups:
    input:
        config["arima_mapping"] + "paired/{sample}_{unit_bio}_{unit_tech}.bam"
    output:
        config["arima_mapping"] + "paired_read_groups/{sample}_{unit_bio}_{unit_tech}.bam"
    params:
        platform = "ILLUMINA",
        sampleName = "{sample}",
        library = "{sample}",
        platform_unit = "None"
    conda:
        "../envs/arima_mapping.yaml"
    log:
        config["logs"] + "arima_mapping/paired_read_groups/{sample}_{unit_bio}_{unit_tech}.log"
    shell:
        "picard AddOrReplaceReadGroups I={input} O={output} SM={params.sampleName} LB={params.library} PU={params.platform_unit} PL={params.platform} 2> {log}"

rule merge_tech_repl:
    input:
        config["arima_mapping"] + "paired_read_groups/{sample}_{unit_bio}_{unit_tech}.bam"
    output:
        config["arima_mapping"] + "merge_tech_repl/{sample}_{unit_bio}_{unit_tech}.bam"
    params:
        val_string = "SILENT"
    conda:
        "../envs/arima_mapping.yaml"
    log:
        config["logs"] + "arima_mapping/merged_tech_repl/{sample}_{unit_bio}_{unit_tech}.log"
    threads:
        2  # uses at most 2
    shell:
        "picard MergeSamFiles -I {input} -O {output} --ASSUME_SORTED true --USE_THREADING true --VALIDATION_STRINGENCY {params.val_string} 2> {log}"

rule mark_duplicates:
    input:
        config["arima_mapping"] + "merge_tech_repl/{sample}_{unit_bio}_{unit_tech}.bam" if config["tech_repl"] else config["arima_mapping"] + "paired_read_groups/{sample}_{unit_bio}_{unit_tech}.bam"
    output:
        bam = config["arima_mapping"] + "final/{sample}_{unit_bio}_{unit_tech}.bam",
        metric = config["arima_mapping"] + "final/metric_{sample}_{unit_bio}_{unit_tech}.txt"
    conda:
        "../envs/arima_mapping.yaml"
    log:
        config["logs"] + "arima_mapping/mark_duplicates/{sample}_{unit_bio}_{unit_tech}.log"
    shell:
        "picard MarkDuplicates I={input} O={output.bam} M={output.metric} 2> {log}"
At the moment I have set a boolean in a config file that tells the mark_duplicates rule whether to take its input from the add_read_groups or the merge_tech_repl rule. This is of course not optimal, since some samples may have replicates (in any number) while others don't. Therefore I want to create a syntax that checks the tsv table for rows whose sample name and unit_bio number are identical while the unit_tech number differs (and later, analogously to this, for biological replicates), merging these specific samples while samples without replicates skip the merging rule.
EDIT
For clarification, since I think I explained my goal confusingly.
My first attempt looks like this. I want "i" to be flexible, in case the number of replicates changes. I don't think my input function returns all duplicates that match each other together; it gives them one by one, which is not what I want. I'm also unsure how I would have to handle samples that do not have duplicates, since they would have to skip this rule somehow.
def input_function(wildcards):
    return expand("{sample}_{unit_bio}_{i}.bam",
                  sample = wildcards.sample,
                  unit_bio = wildcards.unit_bio,
                  i = samples["sample"].str.count(wildcards.sample))

rule tech_duplicate_check:
    input:
        input_function  # (should return the list of 2-n duplicates, where n could be different for each sample)
    output:
        "{sample}_{unit_bio}.bam"
    shell:
        "MergeTechDupl_tool {input}"  # input is a list
rule gather_techdups_of_a_biodup:
    output: "{sample}/{unit_bio}"
    input: gather_techdups_of_a_biodup_input_fn
    shell: "true"  # Fill this in

rule gather_biodips_of_a_techdup:
    output: "{sample}/{unit_tech}"
    input: gather_biodips_of_a_techdup_input_fn
    shell: "true"  # Fill this in
After some attempts, the main problem I struggle with is the table checking. As far as I know, Snakemake takes filename patterns as input and finds all samples that match them. But I would need to check the table for every sample that shares (e.g., for technical replicates) the sample name and the unit_bio number, take all these samples, and give them as input for one rule invocation. Then I would have to take the next sample that was not already part of a previous run, to prevent merging the same samples multiple times.
The logic you describe here can be implemented in the gather_techdups_of_a_biodup_input_fn and gather_biodips_of_a_techdup_input_fn functions above. For example, read your sample TSV file with pandas, filter for wildcards.sample and wildcards.unit_bio (or wildcards.unit_tech), then extract columns fq1 and fq2 from the filtered data frame.
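A minimal sketch of what one such input function could look like, reusing the samples data frame loaded in the question (which columns to return depends on what the gathering rule actually consumes, so treat the fq1/fq2 choice as illustrative):

def gather_techdups_of_a_biodup_input_fn(wildcards):
    # Select every technical replicate of this sample / biological unit ...
    rows = samples[(samples["sample"] == wildcards.sample)
                   & (samples["unit_bio"] == wildcards.unit_bio)]
    # ... and return their fastq files as the rule's input list.
    return rows["fq1"].tolist() + rows["fq2"].tolist()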
I understand we have to use collect() when we run a process that takes as input two channels, where the first channel has one element and the second one has > 1 element:
#!/usr/bin/env nextflow
nextflow.enable.dsl=2

process A {
    input:
    val(input1)

    output:
    path 'index.txt', emit: foo

    script:
    """
    echo 'This is an index' > index.txt
    """
}

process B {
    input:
    val(input1)
    path(input2)

    output:
    path("${input1}.txt")

    script:
    """
    cat <(echo ${input1}) ${input2} > "${input1}.txt"
    """
}

workflow {
    A( Channel.from( 'A' ) )
    // This would only run for one element of the first channel:
    B( Channel.from( 1, 2, 3 ), A.out.foo )
    // and this for all of them as intended:
    B( Channel.from( 1, 2, 3 ), A.out.foo.collect() )
}
Now the question: Why can this line in the example workflow from nextflow-io (https://github.com/nextflow-io/rnaseq-nf/blob/master/modules/rnaseq.nf#L15) work without using collect() or toList()?
It is the same situation: a channel with one element (the index) and a channel with > 1 (the fastq pairs) are used by the same process (quant), and it runs on all fastq files. What am I missing compared to my dummy example?
You need to create the first channel with a value factory, which never exhausts the channel.
Your linked example implicitly creates a value channel, which is why it works. The same happens when you call .collect() on A.out.foo.
Channel.from (or the more modern Channel.of) creates a queue channel, which can be exhausted, which is why both A and B only run once.
So
A( Channel.value('A') )
is all you need.
I have a particular use case for which I have not found the solution in the Snakemake documentation.
Let's say in a given pipeline I have a portion with 3 rules, a, b and c, which will run for N samples.
Those rules handle large amounts of data, and for reasons of local storage limits I do not want them to execute at the same time. For instance, rule a produces the large amount of data, then rule c compresses and exports the results.
So what I am looking for is a way to chain those 3 rules for 1 sample/wildcard, and only then execute those 3 rules for the next sample, all of this to make sure the local space is available.
Thanks
I agree that this is a problem that Snakemake still has no solution for. However, you may have a workaround.
rule all:
    input: expand("a{sample}", sample=[1, 2, 3])

rule a:
    input: "b{sample}"
    output: "a{sample}"

rule b:
    input: "c{sample}"
    output: "b{sample}"

rule c:
    input:
        # wildcard values are strings, hence the int() cast before subtracting
        lambda wildcards: f"a{int(wildcards.sample) - 1}"
    output: "c{sample}"
That means that rule c for sample 2 wouldn't start before the output of rule a for sample 1 is ready. You need to add a pseudo output a0 though, or make the lambda more complicated.
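A sketch of the "more complicated lambda" option, assuming numeric sample names: for the first sample, fall back to a file that always exists (here the Snakefile itself, an arbitrary choice) instead of requiring a pseudo a0:

rule c:
    input:
        # chain to the previous sample's rule a output, except for sample 1
        lambda wildcards: (f"a{int(wildcards.sample) - 1}"
                           if int(wildcards.sample) > 1 else "Snakefile")
    output: "c{sample}"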
So building on Dmitry Kuzminov's answer, the following can work (with both numbers and strings as samples).
The execution order will be a3 > b3 > a1 > b1 > a2 > b2.
I used a different sample order to show it can be made different from the sample list.
samples = [1, 2, 3]
sample_order = [3, 1, 2]

def get_previous(wildcards):
    # Wildcard values are strings, so compare against string versions of the samples.
    order = [str(s) for s in sample_order]
    if wildcards.sample != order[0]:  # if different from 3 (i.e. a3) in this case
        previous_sample = order[order.index(wildcards.sample) - 1]
        return f'b_out_{previous_sample}'
    else:  # if it is the first sample in the order, i.e. a3
        return 'Snakefile'  # a dummy file that is always present, e.g. the file containing these rules

rule all:
    input: expand("b_out_{S}", S=samples)

rule a:
    input:
        "a_in_{sample}",
        get_previous
    output:
        "a_out_{sample}"

rule b:
    input:
        "a_out_{sample}"
    output:
        "b_out_{sample}"
I'm new to Snakemake and I don't know how to figure out this problem.
I've got my rule which has two inputs:
rule test:
    input:
        input_file1 = f1,
        input_file2 = f2
f1 is in [A{1}$, A{2}£, B{1}€, B{2}¥]
f2 is in [C{1}, C{2}]
The numbers are wildcards that come from an expand call. I need to find a way to pass to f1 and f2 a pair of files whose numbers match exactly. For example:
f1 = A1
f2 = C1
or
f1 = B1
f2 = C1
I have to avoid combinations such as:
f1 = A1
f2 = C2
I would create a function that makes this kind of match between the files, but it would have to manage input_file1 and input_file2 at the same time. I thought of making a function that creates a dictionary with the different allowed combinations, but how would I "iterate" over it during the expand?
Thanks
Assuming rule test gives you as output a file named {f1}.{f2}.txt, you need some mechanism that correctly pairs f1 and f2 and creates a list of {f1}.{f2}.txt files.
How you create this list is up to you; expand is just a convenience function for that, but maybe in this case you want to avoid it.
Here's a super simple example:
import re

fin1 = ['A1$', 'A2£', 'B1€', 'B2¥']
fin2 = ['C1', 'C2']

outfiles = []
for x in fin1:
    for y in fin2:
        ## Here you pair f1 and f2. This is a very trivial way of doing it:
        if y[1] in x:
            outfiles.append('%s.%s.txt' % (x, y))

wildcard_constraints:
    f1 = '|'.join([re.escape(x) for x in fin1]),
    f2 = '|'.join([re.escape(x) for x in fin2]),

rule all:
    input:
        outfiles,

rule test:
    input:
        input_f1 = '{f1}.txt',
        input_f2 = '{f2}.txt',
    output:
        '{f1}.{f2}.txt',
    shell:
        r"""
        cat {input} > {output}
        """
This pipeline will execute the following commands:
cat A2£.txt C2.txt > A2£.C2.txt
cat A1$.txt C1.txt > A1$.C1.txt
cat B1€.txt C1.txt > B1€.C1.txt
cat B2¥.txt C2.txt > B2¥.C2.txt
If you touch the starting input files with touch 'A1$.txt' 'A2£.txt' 'B1€.txt' 'B2¥.txt' 'C1.txt' 'C2.txt', you should be able to run this example.
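If you prefer the dictionary idea mentioned in the question, a minimal sketch of the same pairing (the mapping is written out by hand here, purely for illustration):

# Map each f1 to its allowed f2 partner and build the target list from it:
pairs = {'A1$': 'C1', 'A2£': 'C2', 'B1€': 'C1', 'B2¥': 'C2'}
outfiles = ['%s.%s.txt' % (f1, f2) for f1, f2 in pairs.items()]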
I am new to Snakemake and have a problem with Snakemake's expand function.
First, I need to have a group of combinations and use them as a base to expand another vector upon, with pairwise combinations of its elements.
Let's say the set for the pairwise combination is
setC=["A","B","C","D"]
I get the partial group as follows:
part_group1 = expand("TEMPDIR/{setA}_{setB}_", setA = config["setA"], setB = config["setB"])
Then (if that is OK), I want to use this partial group to expand another set with its pairwise combinations. But I am not sure how to expand the pairwise combinations of setC, as seen below; it is obviously not correct, just written to clarify the question. Also, how do I pass the name of the expanded estimator to the shell?
rule get_performance:
    input:
        xdata1 = TEMPDIR + part_group1 + "{setC}.rda",
        xdata2 = TEMPDIR + part_group1 + "{setC}.rda",
        estimator1 = {estimator}
    output:
        results = TEMPDIR + "result_" + part_group1 + "{estimator}_{setC}_{setC}.txt"
    params:
        Rfile = FunctionDIR + "function.{estimator}.R"
    shell:
        "Rscript {params.Rfile} {input.xdata1} {input.xdata2} {input.estimator1} "
        "{output.results}"
The expand function will return a list of the product of the variables used. For example, if
setA=["A","B"]
setB=["C","D"]
then
expand("TEMPDIR/{setA}_{setB}_", setA = config["setA"], setB = config["setB"]
will give you:
["TEMPDIR/A_C_","TEMPDIR/A_D_","TEMPDIR/B_C_","TEMPDIR/B_D_"]
Your question is not very clear on what you want to achieve, but I'll have a guess.
If you want to make pairwise combinations of setC:
import itertools

combiC = list(itertools.combinations(setC, 2))
combiList = list()
for c in combiC:
    combiList.append(c[0] + "_" + c[1])
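With setC = ["A", "B", "C", "D"] as above, this makes the wildcard values concrete:

print(combiList)  # ['A_B', 'A_C', 'A_D', 'B_C', 'B_D', 'C_D']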
then you (probably) want the files:
rule all:
    input: expand(TEMPDIR + "/result_{A}_{B}_estim{estimator}_combi{C}.txt", A=setA, B=setB, estimator=estimators, C=combiList)
I'm putting in some words like "estim" and "combi" so as not to confuse the wildcards here. I do not know what the list or set estimators is supposed to be, but I suppose you have declared it above.
Then your rule get_performance:
rule get_performance:
    input:
        xdata1 = TEMPDIR + "/{A}_{B}_{firstC}.rda",
        xdata2 = TEMPDIR + "/{A}_{B}_{secondC}.rda"
    output:
        results = TEMPDIR + "/result_{A}_{B}_estim{estimator}_combi{firstC}_{secondC}.txt"
    params:
        Rfile = FunctionDIR + "/function.{estimator}.R"
    shell:
        "Rscript {params.Rfile} {input.xdata1} {input.xdata2} {wildcards.estimator} {output.results}"
Again, this is a guess since you haven't defined all the necessary items.