Nextflow deepTools fingerprint - nextflow

I am trying to use a Nextflow pipeline to run a fingerprint/coverage step (bamCoverage) from deepTools. When I input the BAM files and run the script, it says I don't have index files. Error: [E::idx_find_and_load] Could not retrieve index file for 'Kasumi_NCOR1.genome.sorted.bam'
[E::idx_find_and_load] Could not retrieve index file for 'Kasumi_NCOR1.genome.sorted.bam'
'Kasumi_NCOR1.genome.sorted.bam' does not appear to have an index. You MUST index the file first!
process fingerprint_cov {
    publishDir "${params.outdir}/fingerprint_cov", mode: 'copy'

    input:
    set val(sample_id), file(samples) from sorted_bam_sample_control_ch.samples
    set val(sample_id_c), file(controls) from sorted_bam_sample_control_ch.controls

    output:
    set val(sample_id), file("${sample_id}.cov.bedgraph") into sample_cov_ch
    set val(sample_id_c), file("${sample_id_c}.cov.bedgraph") into control_cov_ch

    script:
    """
    bamCoverage -b ${samples} -o ${sample_id}.cov.bedgraph -of bedgraph -bs 1000 -p 10
    bamCoverage -b ${controls} -o ${sample_id_c}.cov.bedgraph -of bedgraph -bs 1000 -p 10
    """
}
sorted_bam_sample_control_ch.samples has all the sample BAM files, and sorted_bam_sample_control_ch.controls has the control BAM files. How do I input the bam.bai files? I have also seen examples that output the bam and bam.bai into a channel, but how do I handle that step?
This is my sample input, but when I run the process it only runs one sample:
[Kasumi_H3K36, [/mnt/Data/cut_and_tag/work/0c/24e138a92a1eb0d906e1e9fad9ba4b/Kasumi_H3K36.genome.sorted.bam, /mnt/Data/cut_and_tag/work/0c/24e138a92a1eb0d906e1e9fad9ba4b/Kasumi_H3K36.genome.sorted.bam.bai]]
[Kasumi_H4K5, [/mnt/Data/cut_and_tag/work/7e/a740e11ce39f2a310b749603c785a4/Kasumi_H4K5.genome.sorted.bam, /mnt/Data/cut_and_tag/work/7e/a740e11ce39f2a310b749603c785a4/Kasumi_H4K5.genome.sorted.bam.bai]]
[Kasumi_NCOR1, [/mnt/Data/cut_and_tag/work/b8/e91ff7c7aea0fa3a0814530ab07972/Kasumi_NCOR1.genome.sorted.bam, /mnt/Data/cut_and_tag/work/b8/e91ff7c7aea0fa3a0814530ab07972/Kasumi_NCOR1.genome.sorted.bam.bai]]
[Kasumi_JMJD1C, [/mnt/Data/cut_and_tag/work/49/99ebe402d2b1953a95968525e258f6/Kasumi_JMJD1C.genome.sorted.bam, /mnt/Data/cut_and_tag/work/49/99ebe402d2b1953a95968525e258f6/Kasumi_JMJD1C.genome.sorted.bam.bai]]
Here is the control input
[Kasumi_IgG, [/mnt/Data/cut_and_tag/work/0e/1cd7aefd90105205e58fb6ef912aa4/Kasumi_IgG.genome.sorted.bam, /mnt/Data/cut_and_tag/work/0e/1cd7aefd90105205e58fb6ef912aa4/Kasumi_IgG.genome.sorted.bam.bai]]

You'll need to index your BAM files first if the index (.bai) files don't already exist. You can use samtools index <bam> for this.
Then all you would need to do is input these into your process somehow. Rather than having a separate variable in each of your input sets/tuples, what I find works quite nicely is grouping the BAM files and their indexes in a tuple of the form: tuple( bam, bai )
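For example (just a sketch, not your actual pipeline): if sorting happens in its own process, that process could also run samtools index and emit both files with a glob, so each channel item has the [sample_id, [bam, bai]] shape shown in the question. The process and channel names here (sort_and_index_bam, bam_ch, indexed_bam_ch) are placeholders:
process sort_and_index_bam {

    input:
    set val(sample_id), file(bam) from bam_ch

    output:
    // the glob matches both the sorted BAM and its .bai, grouping them per sample
    set val(sample_id), file("${sample_id}.genome.sorted.bam*") into indexed_bam_ch

    script:
    """
    samtools sort -o ${sample_id}.genome.sorted.bam ${bam}
    samtools index ${sample_id}.genome.sorted.bam
    """
}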
Then your process might look like:
process fingerprint_cov {
    publishDir "${params.outdir}/fingerprint_cov", mode: 'copy'

    input:
    set val(test_sample_id), file(indexed_test_bam) from sorted_bam_sample_control_ch.samples
    set val(control_sample_id), file(indexed_control_bam) from sorted_bam_sample_control_ch.controls

    output:
    set val(test_sample_id), file("${test_sample_id}.cov.bedgraph") into sample_cov_ch
    set val(control_sample_id), file("${control_sample_id}.cov.bedgraph") into control_cov_ch

    script:
    def test_bam = indexed_test_bam.first()
    def control_bam = indexed_control_bam.first()
    """
    bamCoverage -b "${test_bam}" -o "${test_sample_id}.cov.bedgraph" -of bedgraph -bs 1000 -p 10
    bamCoverage -b "${control_bam}" -o "${control_sample_id}.cov.bedgraph" -of bedgraph -bs 1000 -p 10
    """
}
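Note that because the .bai is part of the staged input tuple, Nextflow links it into the task work directory right next to the BAM, so bamCoverage can find the index; the .first() call just picks the BAM out of the [bam, bai] pair to pass to -b.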

Related

use directories or all files in directories as input in snakemake

I am new to Snakemake. I want to use directories, or all files in directories, as input. For example, two directories with different numbers of BAM files:
--M1
    M1-1.bam
    M1-2.bam
--M2
    M2-3.bam
    M2-5.bam
I just want to merge M1-1.bam and M1-2.bam into M1.bam, and M2-3.bam and M2-5.bam into M2.bam. I tried to use wildcards and expand, following this and this, and the code is as follows:
config.yaml
SAMPLES:
  M1:
    - 1
    - 2
  M2:
    - 3
    - 5
rawdata: path/to/rawdata
outpath: path/to/output
reference: path/to/reference
snakemake file
configfile: "config.yaml"

SAMPLES = config["SAMPLES"]
REFERENCE = config["reference"]
RAWDATA = config["rawdata"]
OUTPATH = config["outpath"]

ALL_INPUT = []
for key, values in SAMPLES.items():
    ALL_INPUT.append(f"Map/bwa/merge/{key}.bam")
    ALL_INPUT.append(f"Map/bwa/sort/{key}.sort.bam")
    ALL_INPUT.append(f"Map/bwa/dup/{key}.sort.rmdup.bam")
    ALL_INPUT.append(f"Map/bwa/dup/{key}.sort.rmdup.matrix")
    ALL_INPUT.append(f"SNV/Mutect2/result/{key}.vcf.gz")
    ALL_INPUT.append(f"Map/bwa/result/{key}")
    for value in values:
        ALL_INPUT.append(f"Map/bwa/result/{key}/{key}-{value}.bam")
        for num in {1, 2}:
            ALL_INPUT.append(f"QC/fastp/{key}/{key}-{value}.R{num}.fastq.gz")

rule all:
    input:
        expand("{outpath}/{all_input}", all_input=ALL_INPUT, outpath=OUTPATH)

rule fastp:
    input:
        r1 = RAWDATA + "/{key}-{value}.R1.fastq.gz",
        r2 = RAWDATA + "/{key}-{value}.R2.fastq.gz"
    output:
        a1 = "{outpath}/QC/fastp/{key}/{key}-{value}.R1.fastq.gz",
        a2 = "{outpath}/QC/fastp/{key}/{key}-{value}.R2.fastq.gz"
    params:
        prefix = "{outpath}/QC/fastp/{key}/{key}-{value}"
    shell:
        """
        fastp -i {input.r1} -I {input.r2} -o {output.a1} -O {output.a2} -j {params.prefix}.json -h {params.prefix}.html
        """

rule bwa:
    input:
        a1 = "{outpath}/QC/fastp/{key}/{key}-{value}.R1.fastq.gz",
        a2 = "{outpath}/QC/fastp/{key}/{key}-{value}.R2.fastq.gz"
    output:
        o1 = "{outpath}/Map/bwa/result/{key}/{key}-{value}.bam"
    params:
        mem = "4000",
        rg = "@RG\\tID:{key}\\tPL:ILLUMINA\\tSM:{key}"
    shell:
        """
        bwa mem -t {threads} -M -R '{params.rg}' {REFERENCE} {input.a1} {input.a2} | samtools view -b -o {output.o1}
        """

## get sample index from raw fastq
key_ids, value_ids = glob_wildcards(RAWDATA + "/{key}-{value}.R1.fastq.gz")
# remove duplicate sample names; this is useful when there is only one sample input
key_ids = list(set(key_ids))

rule merge:
    input:
        expand("{outpath}/Map/bwa/result/{key}/{key}-{value}.bam", outpath=OUTPATH, key=key_ids, value=value_ids)
    output:
        "{outpath}/Map/bwa/merge/{key}.bam"
    shell:
        """
        samtools merge {output} {input}
        """
The {input} in the merge command will be:
M1-1.bam M1-2.bam M1-3.bam M1-5.bam M2-1.bam M2-2.bam M2-3.bam M2-5.bam
Actually, for sample M1, {input} should be M1-1.bam M1-2.bam; for M2, M2-3.bam M2-5.bam. I also read this, but I have no idea what to do when there are lots of directories with different files in each.
Then I tried to use directories as input for the merge rule:
rule mergebam:
    input:
        "{outpath}/Map/bwa/result/{key}"
    output:
        "{outpath}/Map/bwa/merge/{key}.bam"
    log:
        "{outpath}/Map/bwa/log/{key}.merge.bam.log"
    shell:
        """
        samtools merge {output} `ls {input}/*bam` > {log} 2>&1
        """
But this gives me a MissingInputException error:
Missing input files for rule merge:
/{outpath}/Map/bwa/result/M1
Any ideas will be appreciated.
I haven't fully parsed your question but I'll give it a shot anyway... In rule merge you have:
expand("{outpath}/Map/bwa/result/{key}/{key}-{value}.bam",outpath=OUTPATH, key=key_ids, value=value_ids)
This means that you collect all combinations of outpath, key and value.
Presumably you want all combinations of value within each outpath and key. So use:
expand("{{outpath}}/Map/bwa/result/{{key}}/{{key}}-{value}.bam", value=value_ids)
If you change your config.yaml to the following, can you make the implementation easier by using expand?
SAMPLES:
  M1:
    - M1-1
    - M1-2
  M2:
    - M2-3
    - M2-5
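If SAMPLES maps each sample to its full unit names like this, one option (a sketch, untested) is an input function that expands only the units belonging to the current key:
rule merge:
    input:
        # the lambda receives the resolved wildcards, so only this key's BAMs are requested
        lambda wc: expand("{outpath}/Map/bwa/result/{key}/{unit}.bam",
                          outpath=wc.outpath, key=wc.key,
                          unit=config["SAMPLES"][wc.key])
    output:
        "{outpath}/Map/bwa/merge/{key}.bam"
    shell:
        "samtools merge {output} {input}"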

Nextflow multiple inputs with different number of files

I'm trying to input two channels. However, seacr_res_ch2 has 4 files, and bigwig_ch3 has 5 files, which contain a control and 4 samples. So I was trying to run the following process to compute the peak center.
When I ran this process I got this error: unexpected EOF while looking for matching `"'
process compute_matrix_peak_center {
    input:
    set val(sample_id), file(seacr_bed) from seacr_res_ch2
    set val(sample_id), file(bigwig) from bigwig_ch3

    output:
    set val(sample_id), file("${sample_id}.peak_centered.mat.gz") into peak_center_ch

    script:
    """
    "computeMatrix reference-point \
        -S ${bigwig} \
        -R ${seacr_bed} \
        -a 1000 \
        -b 1000 \
        -o ${sample_id}.peak_centered.mat.gz \
        --referencePoint center \
        -p 10
    """
}
The input files are likely not file objects. Try replacing file in the declaration with path, e.g.:
input:
set val(sample_id), path(seacr_bed) from seacr_res_ch2
set val(sample_id), path(bigwig) from bigwig_ch3
Check the documentation for details https://www.nextflow.io/docs/latest/process.html#input-of-type-path
Your input block declares a value called sample_id twice. There's no guarantee that these values will be the same if they are derived from two (or more) channels; one value will simply clobber the other(s). You'll need to join() these channels first:
input:
set val(sample_id), file(seacr_bed), file(bigwig) from seacr_res_ch2.join(bigwig_ch3)
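As a toy illustration (made-up sample names), join() matches items on their first element and merges the matching tuples:
Channel
    .from( ['sampleA', 'sampleA.bed'], ['sampleB', 'sampleB.bed'] )
    .join( Channel.from( ['sampleA', 'sampleA.bw'], ['sampleB', 'sampleB.bw'] ) )
    .view()
// emits: [sampleA, sampleA.bed, sampleA.bw]
//        [sampleB, sampleB.bed, sampleB.bw]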

Snakemake: HISAT2 alignment of many RNAseq reads against many genomes UPDATED

I have several genome index files with suffixes .1.ht2l to .8.ht2l:
bob.1.ht2l
...
bob.8.ht2l
steve.1.ht2l
...
steve.8.ht2l
and several RNAseq samples:
flower_kevin_1.fastq.gz
flower_kevin_2.fastq.gz
flower_daniel_1.fastq.gz
flower_daniel_2.fastq.gz
I need to align all RNAseq reads against each genome.
UPDATED as dariober suggested:
workdir: "/path/to/aligned"

(HISAT2_INDEX_PREFIX,) = glob_wildcards("/path/to/index/{prefix}.1.ht2l")
(SAMPLES,) = glob_wildcards("/path/to/{sample}_1.fastq.gz")
print(HISAT2_INDEX_PREFIX)
print(SAMPLES)

rule all:
    input:
        expand("{prefix}.{sample}.bam", zip, prefix=HISAT2_INDEX_PREFIX, sample=SAMPLES)

rule hisat2:
    input:
        hisat2_index=expand("%s.{ix}.ht2l" % "/path/to/index/{prefix}", ix=range(1, 9), prefix=HISAT2_INDEX_PREFIX),
        fastq1="/path/to/{sample}_1.fastq.gz",
        fastq2="/path/to/{sample}_2.fastq.gz"
    output:
        bam = "{prefix}.{sample}.bam",
        txt = "{prefix}.{sample}.txt"
    log: "{prefix}.{sample}.snakemake_log.txt"
    threads: 5
    shell:
        "/Tools/hisat2-2.1.0/hisat2 -p {threads} -x {/path/to/index/{wildcards.prefix}"
        " -1 {input.fastq1} -2 {input.fastq2} --summary-file {output.txt} |"
        "/Tools/samtools-1.9/samtools sort -# {threads} -o {output.bam}"
The problem I get when running HISAT2 is that it takes as -x input all of bob.1.ht2l:bob.8.ht2l and steve.1.ht2l:steve.8.ht2l at once, while the RNAseq reads should be mapped against each genome separately. Where is the error?
NB: my previous question: Snakemake: HISAT2 alignment of many RNAseq reads against many genomes
I think your confusion comes from the fact that hisat2 wants the prefix of the index files, not the full list of index files. So instead of -x {input.hisat2_index} (i.e. the list of index files), use something like -x /path/to/{wildcards.prefix}.
In other words, the input hisat2_index=expand(...) should be there only to tell Snakemake to start this rule after those files are ready, but you don't use it directly (well, hisat2 does use the files of course, but you don't pass them on the command line).
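Putting that together, the rule might look roughly like this (a sketch keeping the paths from the question; the stray brace in -x is removed and samtools sort's thread flag is written as -@; untested):
rule hisat2:
    input:
        hisat2_index=expand("%s.{ix}.ht2l" % "/path/to/index/{prefix}", ix=range(1, 9), prefix=HISAT2_INDEX_PREFIX),
        fastq1="/path/to/{sample}_1.fastq.gz",
        fastq2="/path/to/{sample}_2.fastq.gz"
    output:
        bam = "{prefix}.{sample}.bam",
        txt = "{prefix}.{sample}.txt"
    log: "{prefix}.{sample}.snakemake_log.txt"
    threads: 5
    shell:
        "/Tools/hisat2-2.1.0/hisat2 -p {threads} -x /path/to/index/{wildcards.prefix}"
        " -1 {input.fastq1} -2 {input.fastq2} --summary-file {output.txt} |"
        " /Tools/samtools-1.9/samtools sort -@ {threads} -o {output.bam}"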

Snakemake “Missing files after X seconds” error

I am getting the following error every time I try to run my snakemake script:
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cluster nodes: 99
Job counts:
count jobs
1 all
1 antiSMASH
1 pear
1 prodigal
4
[Wed Dec 11 14:59:43 2019]
rule pear:
input: Unmap_41_1.fastq, Unmap_41_2.fastq
output: merged_reads/Unmap_41.fastq
jobid: 3
wildcards: sample=Unmap_41, extension=fastq
Submitted job 3 with external jobid 'Submitted batch job 4572437'.
Waiting at most 120 seconds for missing files.
MissingOutputException in line 14 of /faststorage/project/ABR/scripts/antismash.smk:
Missing files after 120 seconds:
merged_reads/Unmap_41.fastq
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Job failed, going on with independent jobs.
Exiting because a job execution failed. Look above for error message
It would seem that the first rule is not executing, but I am unsure why, since from what I can see the syntax is all correct. Does anyone have some advice?
The snakefile is the following:
#!/miniconda/bin/python

workdir: config["path_to_files"]

wildcard_constraints:
    separator = config["separator"],
    extension = config["file_extension"],
    sample = '|'.join(config["samples"])

rule all:
    input:
        expand("antismash-output/{sample}/{sample}.txt", sample = config["samples"])

# merging the paired end reads (either fasta or fastq) as prodigal only takes single end reads
rule pear:
    input:
        forward = f"{{sample}}{config['separator']}1.{{extension}}",
        reverse = f"{{sample}}{config['separator']}2.{{extension}}"
    output:
        "merged_reads/{sample}.{extension}"
    #conda:
    #    "/home/lamma/env-export/antismash.yaml"
    run:
        shell("set +u")
        shell("source ~/miniconda3/etc/profile.d/conda.sh")
        shell("conda activate antismash")
        shell("pear -f {input.forward} -r {input.reverse} -o {output} -t 21")

# If single end then move them to merged_reads directory
rule move:
    input:
        "{sample}.{extension}"
    output:
        "merged_reads/{sample}.{extension}"
    shell:
        "cp {path}/{sample}.{extension} {path}/merged_reads/"

# Setting the rule order on the 3 above rules which should be treated equally and only one run.
ruleorder: pear > move

# annotating the metagenome with prodigal. Can be done inside antiSMASH but prefer to do it outside.
rule prodigal:
    input:
        f"merged_reads/{{sample}}.{config['file_extension']}"
    output:
        gbk_files = "annotated_reads/{sample}.gbk",
        protein_files = "protein_reads/{sample}.faa"
    #conda:
    #    "/home/lamma/env-export/antismash.yaml"
    run:
        shell("set +u")
        shell("source ~/miniconda3/etc/profile.d/conda.sh")
        shell("conda activate antismash")
        shell("prodigal -i {input} -o {output.gbk_files} -a {output.protein_files} -p meta")

# running antiSMASH on the annotated metagenome
rule antiSMASH:
    input:
        "annotated_reads/{sample}.gbk"
    output:
        touch("antismash-output/{sample}/{sample}.txt")
    #conda:
    #    "/home/lamma/env-export/antismash.yaml"
    run:
        shell("set +u")
        shell("source ~/miniconda3/etc/profile.d/conda.sh")
        shell("conda activate antismash")
        shell("antismash --knownclusterblast --subclusterblast --full-hmmer --smcog --outputfolder antismash-output/{wildcards.sample}/ {input}")
I am running the pipeline on only one file at the moment, but the yaml file looks like this if it is of interest:
file_extension: fastq
path_to_files: /home/lamma/ABR/Each_reads
samples:
  - Unmap_41
separator: _
I know the error can occur when you use certain flags in Snakemake, but I don't believe I am using those flags. The command being submitted to run the snakefile is:
snakemake --latency-wait 120 --rerun-incomplete --keep-going --jobs 99 --cluster-status 'python /home/lamma/ABR/scripts/slurm-status.py' --cluster 'sbatch -t {cluster.time} --mem={cluster.mem} --cpus-per-task={cluster.c} --error={cluster.error} --job-name={cluster.name} --output={cluster.output}' --cluster-config antismash-config.json --configfile yaml-config-files/antismash-on-rawMetagenome.yaml --snakefile antismash.smk
I have tried the -F flag to force a rerun, but this seems to do nothing, as does increasing the --latency-wait number. Any help would be appreciated :)
I think it is likely to be something involving the way I am calling the conda environment in the run commands, but using the conda: option with a yaml file returns 'version not found'-style errors.
From what I read of the pear documentation:
-o  Specify the name to be used as base for the output files. PEAR outputs four files: a file containing the assembled reads with an assembled.fastq extension, two files containing the forward, resp. reverse, unassembled reads with extensions unassembled.forward.fastq, resp. unassembled.reverse.fastq, and a file containing the discarded reads with a discarded.fastq extension.
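For example, with a base of merged_reads/Unmap_41.fastq, PEAR actually writes merged_reads/Unmap_41.fastq.assembled.fastq (plus the unassembled and discarded files), so the file declared under output: is never created and Snakemake reports it missing.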
So if the output defined in your rule is just a base name, I suggest you put that base under params and list the real output file names under output:
rule pear:
    input:
        forward = f"{{sample}}{config['separator']}1.{{extension}}",
        reverse = f"{{sample}}{config['separator']}2.{{extension}}"
    output:
        "merged_reads/{sample}.{extension}.assembled.fastq",
        "merged_reads/{sample}.{extension}.unassembled.forward.fastq",
        "merged_reads/{sample}.{extension}.unassembled.reverse.fastq",
        "merged_reads/{sample}.{extension}.discarded.fastq"
    params:
        base = "merged_reads/{sample}.{extension}"
    #conda:
    #    "/home/lamma/env-export/antismash.yaml"
    run:
        shell("set +u")
        shell("source ~/miniconda3/etc/profile.d/conda.sh")
        shell("conda activate antismash")
        shell("pear -f {input.forward} -r {input.reverse} -o {params.base} -t 21")
I haven't tested pear, so I'm not sure what the output file names are exactly.

snakemake with prefix as output including a path

How can I make sure in rule all that the output folder was created correctly? Should I add each expected result file?
This somehow relates to "snakemake define folder as output", but in my case the specified output is a combination of a path to a directory and a prefix for all result files (there will be multiple).
The following command creates the folder path Analysis/MosDepth and adds to that path the files:
gt0.mosdepth.global.dist.txt
gt0.mosdepth.region.dist.txt
gt0.per-base.bed.gz
gt0.per-base.bed.gz.csi
gt0.regions.bed.gz
gt0.regions.bed.gz.csi
rule MosDepth:
    input:
        bam = "Analysis/Minimap2/" + UnpackedRawFastq + ".bam",
        bed = "ReferenceData/" + UnpackedGenomeGFF + "_exons.bed"
    output:
        pfx = "Analysis/MosDepth/gt0"
    threads: config["threads"]
    shell:
        "mosdepth -t {threads} -b {input.bed} {output.pfx} {input.bam}"
I currently have only one of the files in rule all:. Is this enough, or is there a better way to make sure that mosdepth has run well and is not redone in a later re-run?
rule all:
    input:
        "Analysis/MosDepth/gt0.regions.bed.gz"
I would recommend something like this:
mos_out = ['gt0.mosdepth.global.dist.txt', 'gt0.mosdepth.region.dist.txt', 'gt0.per-base.bed.gz',
           'gt0.per-base.bed.gz.csi', 'gt0.regions.bed.gz', 'gt0.regions.bed.gz.csi']

rule MosDepth:
    input:
        bam = "Analysis/Minimap2/" + UnpackedRawFastq + ".bam",
        bed = "ReferenceData/" + UnpackedGenomeGFF + "_exons.bed"
    output:
        expand("Analysis/MosDepth/{mos_out}", mos_out=mos_out)
    params:
        pfx = "Analysis/MosDepth/gt0"
    threads: config["threads"]
    shell:
        "mosdepth -t {threads} -b {input.bed} {params.pfx} {input.bam}"
If one of the output files is not created by the rule, snakemake will remove all the output files for you, and throw an error.
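rule all (or whatever downstream rule consumes these files) can then request the same set, for example:
rule all:
    input:
        expand("Analysis/MosDepth/{mos_out}", mos_out=mos_out)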