Using argparse to accept only one group of required arguments - argparse

I'm trying to use parser = argparse.ArgumentParser for a little program I'm writing.
The program accepts as input EITHER (a path to a txt file) OR (opt1 && opt2 && opt3).
Meaning: if the user wants to use a txt file as input, they can't provide any of the opts, and if they provide any opt, they have to provide all 3 and can't provide a path to a txt file.
I tried using add_mutually_exclusive_group but I'm not sure how, because the second group of arguments is a group itself.
This is what I tried so far:
import argparse
parser = argparse.ArgumentParser(description='this is the description',)
root_group = parser.add_mutually_exclusive_group()
group_list = root_group.add_mutually_exclusive_group()
group_list.add_argument('-path', help='path to the txt file')
group_list = root_group.add_mutually_exclusive_group()
group_list.add_argument('-opt1', help='opt1')
group_list.add_argument('-opt2', help='opt2')
group_list.add_argument('-opt3', help='opt3')
args = parser.parse_args()
Running it:
python tests.py -path txt -opt1 asdasd
usage: tests.py [-h] [[-path PATH] [-opt1 OPT1 | -opt2 OPT2 | -opt3 OPT3]
tests.py: error: argument -opt1: not allowed with argument -path
path is not allowed with any of the opts - that's exactly what I want.
But I also want that if the user supplies even one opt, they have to provide all of them.
And I want to require that at least one of the two groups is satisfied.

Mutually exclusive groups aren't designed for nesting. Your code is accepted, but the net effect is to make all 4 arguments mutually exclusive: it will accept only one of path, opt1, opt2, etc.
Nesting groups, and allowing 'any' or 'all' logic within groups, has been explored, but such a feature is a long way off.
Since your user has to provide all 3 --opt values, I'd suggest condensing them into one argument:
root_group.add_argument('--opt', nargs=3)
root_group.add_argument('--path')
Usage should look something like
usage: tests.py [-h] [--path PATH | --opt OPT OPT OPT]
Contrast that with a hypothetical usage that allows nested inclusive groups:
[-path PATH | [-opt1 OPT1 & -opt2 OPT2 & -opt3 OPT3]]
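Putting that together, here's a minimal sketch of the condensed approach; note the required=True on the group, which should also cover the "at least one group must be satisfied" requirement:

import argparse

parser = argparse.ArgumentParser(description='this is the description')
# required=True makes argparse reject a call that supplies neither argument
root_group = parser.add_mutually_exclusive_group(required=True)
root_group.add_argument('--path', help='path to the txt file')
root_group.add_argument('--opt', nargs=3, help='opt1 opt2 opt3')
args = parser.parse_args()

if args.path:
    print('file mode:', args.path)
else:
    opt1, opt2, opt3 = args.opt
    print('opts mode:', opt1, opt2, opt3)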
===========
With a tuple metavar, the usage can be refined to:
root_group.add_argument('--opt', nargs=3, metavar=('OPT1', 'OPT2', 'OPT3'))
usage: ipython3 [-h] [--path PATH | --opt OPT1 OPT2 OPT3]
=============
Your other option is to write a custom usage message and perform your own logical tests after parsing.
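For completeness, a sketch of that manual approach (the usage string is hand-written, and the checks mirror the rules from the question):

import argparse

parser = argparse.ArgumentParser(
    usage='tests.py [-path PATH | (-opt1 OPT1 -opt2 OPT2 -opt3 OPT3)]')
parser.add_argument('-path', help='path to the txt file')
parser.add_argument('-opt1', help='opt1')
parser.add_argument('-opt2', help='opt2')
parser.add_argument('-opt3', help='opt3')
args = parser.parse_args()

opts = [args.opt1, args.opt2, args.opt3]
if args.path and any(opts):
    parser.error('-path cannot be combined with -opt1/-opt2/-opt3')
if any(opts) and not all(opts):
    parser.error('-opt1, -opt2 and -opt3 must be given together')
if not args.path and not any(opts):
    parser.error('provide either -path or all of -opt1/-opt2/-opt3')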

I would use a subcommand parser instead. Your "options" aren't really optional; they are 3 required arguments in a particular context.
import argparse
p = argparse.ArgumentParser()
sp = p.add_subparsers()
p1 = sp.add_parser('file')
p1.add_argument('path')
p2 = sp.add_parser('opts')
p2.add_argument('opt1')
p2.add_argument('opt2')
p2.add_argument('opt3')
args = p.parse_args()
Then you would invoke your script with
python tmp.py file foo.txt
or
python tmp.py opts 1 2 3
The help will tell you about the positional argument whose value is either file or opts:
% python tmp.py -h
usage: tmp.py [-h] {file,opts} ...
positional arguments:
  {file,opts}

optional arguments:
  -h, --help   show this help message and exit
and each subcommand has its own usage message:
% python tmp.py file -h
usage: tmp.py file [-h] path
positional arguments:
  path

optional arguments:
  -h, --help  show this help message and exit

% python tmp.py opts -h
usage: tmp.py opts [-h] opt1 opt2 opt3

positional arguments:
  opt1
  opt2
  opt3

optional arguments:
  -h, --help  show this help message and exit
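One caveat: on Python 3 the subcommand is optional by default, so python tmp.py with no arguments parses without complaint. If you want to force a choice, this should work on Python 3.7+:

sp = p.add_subparsers(dest='command', required=True)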

Related

Nextflow: publishDir, output channels, and output subdirectories

I've been trying to learn how to use Nextflow and have come across an issue with adding output to a channel, as I need the processes to run in a set order. I want to pass output files from one of the output subdirectories created by the tool (ONT-Guppy) into a channel, but can't seem to figure out how.
Here is the nextflow process in question:
process GupcallBases {
    publishDir "$params.P1_outDir", mode: 'copy', pattern: "pass/*.bam"
    executor = 'pbspro'
    clusterOptions = "-lselect=1:ncpus=${params.P1_threads}:mem=${params.P1_memory}:ngpus=1:gpu_type=${params.P1_GPU} -lwalltime=${params.P1_walltime}:00:00"

    output:
    path "*.bam" into bams_ch

    script:
    """
    module load cuda/11.4.2
    singularity exec --nv $params.Gup_container \
        guppy_basecaller --config $params.P1_gupConf \
        --device "cuda:0" \
        --bam_out \
        --recursive \
        --compress \
        --align_ref $params.refGen \
        -i $params.P1_inDir \
        -s $params.P1_outDir \
        --gpu_runners_per_device $params.P1_GPU_runners \
        --num_callers $params.P1_callers
    """
}
The output of the process is something like this:
$params.P1_outDir/pass/(lots of bams and fastqs)
$params.P1_outDir/fail/(lots of bams and fastqs)
$params.P1_outDir/(a few txt and log files)
I only want to keep the bam files in $params.P1_outDir/pass/, hence trying to use pattern: "pass/*.bam", but I've tried a few other patterns to no avail.
The output syntax was chosen because, once this process is done, the following channel works:
// Channel
// .fromPath("${params.P1_outDir}/pass/*.bam")
// .ifEmpty { error "Cannot find any bam files in ${params.P1_outDir}" }
// .set { bams_ch }
But the problem is that if I don't pass the files into the output channel of the first process, the processes run in parallel. I could simply be missing something in the extensive documentation about how to order processes, which would be an alternative solution.
Edit: I forgot to add the error message, which is: Missing output file(s) `*.bam` expected by process `GupcallBases`, and $params.P1_outDir/ contains the subdirectories and all the log files despite the pattern argument.
Thanks in advance.
Nextflow processes are designed to run isolated from each other, but this can be circumvented somewhat when the command-line input and/or outputs are specified using params. Using params like this can be problematic because if, for example, a params variable specifies an absolute path but your output declaration expects files in the Nextflow working directory (e.g. ./work/fc/0249e72585c03d08e31ce154b6d873), you will get the 'Missing output file(s) expected by process' error you're seeing.
The solution is to ensure your inputs are localized in the working directory using an input declaration block and that the outputs are also written to the work dir. Note that only files specified in the output declaration block can be published using the publishDir directive.
Also, best to avoid calling Singularity manually in your script block. Instead just add singularity.enabled = true to your nextflow.config. This should also work nicely with the beforeScript process directive to initialize your environment:
params.publishDir = './results'

input_dir = file( params.input_dir )
guppy_config = file( params.guppy_config )
ref_genome = file( params.ref_genome )

process GuppyBasecaller {
    publishDir(
        path: "${params.publishDir}/GuppyBasecaller",
        mode: 'copy',
        saveAs: { fn -> fn.substring(fn.lastIndexOf('/')+1) },
    )
    beforeScript 'module load cuda/11.4.2; export SINGULARITY_NV=1'
    container '/path/to/guppy_basecaller.img'

    input:
    path input_dir
    path guppy_config
    path ref_genome

    output:
    path "outdir/pass/*.bam" into bams_ch

    """
    mkdir outdir
    guppy_basecaller \\
        --config "${guppy_config}" \\
        --device "cuda:0" \\
        --bam_out \\
        --recursive \\
        --compress \\
        --align_ref "${ref_genome}" \\
        -i "${input_dir}" \\
        -s outdir \\
        --gpu_runners_per_device "${params.guppy_gpu_runners}" \\
        --num_callers "${params.guppy_callers}"
    """
}

Problems with the VEP snakemake wrapper

I'm experiencing two issues trying to run the VEP wrapper for snakemake.
The first is that I would like to use lambda wildcards in calls like so:
calling_dir = os.path.join(dirs_dict["CALLING_DIR"], config["CALLING_TOOL"])
callings_locations = [calling_dir] * len_samples
callings_dict = dict(zip(sample_names, callings_locations))

def getVCFs(sample):
    return(list(os.path.join(callings_dict[sample], "{0}_sorted_dedupped_snp_varscan.vcf".format(sample)) for pair in ['']))
rule variant_annotation:
    input:
        calls= lambda wildcards: getVCFs(wildcards.sample),
        cache="resources/vep/cache",
        plugins="resources/vep/plugins",
    output:
        calls="variants.annotated.vcf",
        stats="variants.html"
    params:
        plugins=["LoFtool"],
        extra="--everything"
    message: """--- Annotating Variants."""
    resources:
        mem = 30000,
        time = 120
    threads: 4
    wrapper:
        "0.64.0/bio/vep/annotate"
However, I get an error with that approach.
When I replace the lambda wildcards with calls=expand('{CALLING_DIR}/{CALLING_TOOL}/{sample}_sorted_dedupped_snp_varscan.vcf', CALLING_DIR=dirs_dict["CALLING_DIR"], CALLING_TOOL=config["CALLING_TOOL"], sample=sample_names) (which is not ideal - see this post for the reason), it gives me errors about the resources folder:
(snakemake) [moldach#cedar1 MTG353]$ snakemake -n -r
Building DAG of jobs...
MissingInputException in line 333 of /scratch/moldach/MADDOG/VCF-FILES/biostars439754/MTG353/Snakefile:
Missing input files for rule variant_annotation:
resources/vep/cache
resources/vep/plugins
I'm also confused, from the documentation, as to how it knows which reference genome (version, etc.) should be specified.
UPDATE:
Because of the character limit I cannot even respond to the two respondents, so I will continue the issue here.
As @jafors mentioned, the two wrappers solved the issue for cache and plugins - thanks!
Now, though, I get an error from trying to run VEP with the following rule:
rule variant_annotation:
    input:
        calls= expand('{CALLING_DIR}/{CALLING_TOOL}/{sample}_sorted_dedupped_snp_varscan.vcf', CALLING_DIR=dirs_dict["CALLING_DIR"], CALLING_TOOL=config["CALLING_TOOL"], sample=sample_names),
        cache="resources/vep/cache",
        plugins="resources/vep/plugins",
    output:
        calls=expand('{ANNOT_DIR}/{ANNOT_TOOL}/{sample}.annotated.vcf', ANNOT_DIR=dirs_dict["ANNOT_DIR"], ANNOT_TOOL=config["ANNOT_TOOL"], sample=sample_names),
        stats=expand('{ANNOT_DIR}/{ANNOT_TOOL}/{sample}.html', ANNOT_DIR=dirs_dict["ANNOT_DIR"], ANNOT_TOOL=config["ANNOT_TOOL"], sample=sample_names)
    params:
        plugins=["LoFtool"],
        extra="--everything"
    message: """--- Annotating Variants."""
    resources:
        mem = 30000,
        time = 120
    threads: 4
    wrapper:
        "0.64.0/bio/vep/annotate"
this is the error I get from the log:
Building DAG of jobs...
Using shell: /cvmfs/soft.computecanada.ca/nix/var/nix/profiles/16.09/bin/bash
Provided cores: 4
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 variant_annotation
1
[Wed Aug 12 20:22:49 2020]
Job 0: --- Annotating Variants.
Activating conda environment: /scratch/moldach/MADDOG/VCF-FILES/biostars439754/.snakemake/conda/f16fdb5f
Traceback (most recent call last):
File "/scratch/moldach/MADDOG/VCF-FILES/biostars439754/.snakemake/scripts/tmpwx1u_776.wrapper.py", line 36, in <module>
if snakemake.output.calls.endswith(".vcf.gz"):
AttributeError: 'Namedlist' object has no attribute 'endswith'
[Wed Aug 12 20:22:53 2020]
Error in rule variant_annotation:
jobid: 0
output: ANNOTATION/VEP/BC1217.annotated.vcf, ANNOTATION/VEP/470.annotated.vcf, ANNOTATION/VEP/MTG109.annotated.vcf, ANNOTATION/VEP/BC1217.html, ANNOTATION/VEP/470.html, ANNOTATION/VEP/MTG$
conda-env: /scratch/moldach/MADDOG/VCF-FILES/biostars439754/.snakemake/conda/f16fdb5f
RuleException:
CalledProcessError in line 393 of /scratch/moldach/MADDOG/VCF-FILES/biostars439754/Snakefile:
Command 'source /home/moldach/miniconda3/bin/activate '/scratch/moldach/MADDOG/VCF-FILES/biostars439754/.snakemake/conda/f16fdb5f'; set -euo pipefail; python /scratch/moldach/MADDOG/VCF-FILE$
File "/scratch/moldach/MADDOG/VCF-FILES/biostars439754/Snakefile", line 393, in __rule_variant_annotation
File "/cvmfs/soft.computecanada.ca/easybuild/software/2017/Core/python/3.8.0/lib/python3.8/concurrent/futures/thread.py", line 57, in run
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
TO BE CLEAR:
This is the code I had running VEP prior to trying out the wrapper, so I would like to preserve similar options (e.g. offline, etc.):
vep \
-i {input.sample} \
--species "caenorhabditis_elegans" \
--format "vcf" \
--everything \
--cache_version 100 \
--offline \
--force_overwrite \
--fasta {input.ref} \
--gff {input.annot} \
--tab \
--variant_class \
--regulatory \
--show_ref_allele \
--numbers \
--symbol \
--protein \
-o {params.sample}
UPDATE 2:
Yes, the use of expand() was the issue. I remember this is why I like to use lambda or os.path.join() for rule inputs/outputs, except, as you mentioned, in rule all.
The following seems to get rid of that problem, although I'm met with a new one:
rule variant_annotation:
    input:
        calls= lambda wildcards: getVCFs(wildcards.sample),
        cache="resources/vep/cache",
        plugins="resources/vep/plugins",
    output:
        calls=os.path.join(dirs_dict["ANNOT_DIR"],config["ANNOT_TOOL"],"{sample}.annotated.vcf"),
        stats=os.path.join(dirs_dict["ANNOT_DIR"],config["ANNOT_TOOL"],"{sample}.html")
Not sure why I get the unknown file type error - as I mentioned, this was first tested with the full command on the same input data:
Activating conda environment: /scratch/moldach/MADDOG/VCF-FILES/biostars439754/.snakemake/conda/f16fdb5f
Failed to open VARIANT_CALLING/varscan/MTG109_sorted_dedupped_snp_varscan.vcf: unknown file type
Possible precedence issue with control flow operator at /scratch/moldach/MADDOG/VCF-FILES/biostars439754/.snakemake/conda/f16fdb5f/lib/site_perl/5.26.2/Bio/DB/IndexedBase.pm line 805.
Traceback (most recent call last):
File "/scratch/moldach/MADDOG/VCF-FILES/biostars439754/.snakemake/scripts/tmpsh388k23.wrapper.py", line 44, in <module>
"(bcftools view {snakemake.input.calls} | "
File "/home/moldach/bin/snakemake/lib/python3.8/site-packages/snakemake/shell.py", line 156, in __new__
raise sp.CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'set -euo pipefail; (bcftools view VARIANT_CALLING/varscan/MTG109_sorted_dedupped_snp_varscan.vcf | vep --everything --fork 4 --format vcf --vcf --cach$
[Thu Aug 13 09:02:22 2020]
UPDATE 3:
bcftools view gives the error on the output of the samtools mpileup/varscan mpileup2snp rules:
def getDeduppedBamsIndex(sample):
    return(list(os.path.join(aligns_dict[sample],"{0}.sorted.dedupped.bam.bai".format(sample)) for pair in ['']))

rule mpileup:
    input:
        bam=lambda wildcards: getDeduppedBams(wildcards.sample),
        reference_genome=os.path.join(dirs_dict["REF_DIR"],config["REF_GENOME"])
    output:
        os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"],"{sample}_{contig}.mpileup.gz"),
    log:
        os.path.join(dirs_dict["LOG_DIR"],config["CALLING_TOOL"],"{sample}_{contig}_samtools_mpileup.log")
    params:
        extra=lambda wc: "-r {}".format(wc.contig)
    resources:
        mem = 1000,
        time = 30
    wrapper:
        "0.65.0/bio/samtools/mpileup"

rule mpileup_to_vcf:
    input:
        os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"],"{sample}_{contig}.mpileup.gz"),
    output:
        os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"],"{sample}_{contig}.vcf")
    message:
        "Calling SNP with Varscan2"
    threads:
        2  # Keep threading value at one for unzipped mpileup input
           # Set it to two for zipped mpileup files
    log:
        os.path.join(dirs_dict["LOG_DIR"],config["CALLING_TOOL"],"varscan_{sample}_{contig}.log")
    resources:
        mem = 1000,
        time = 30
    wrapper:
        "0.65.0/bio/varscan/mpileup2snp"

rule vcf_merge:
    input:
        os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"],"{sample}_I.vcf"),
        os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"],"{sample}_II.vcf"),
        os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"],"{sample}_III.vcf"),
        os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"],"{sample}_IV.vcf"),
        os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"],"{sample}_V.vcf"),
        os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"],"{sample}_X.vcf"),
        os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"],"{sample}_MtDNA.vcf")
    output:
        os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"],"{sample}.vcf")
    log: os.path.join(dirs_dict["LOG_DIR"],config["CALLING_TOOL"],"{sample}_vcf-merge.log")
    resources:
        mem = 1000,
        time = 10
    threads: 1
    message: """--- Merge VarScan by Chromosome."""
    shell: """
        awk 'FNR==1 && NR!=1 {{ while (/^<header>/) getline; }} 1 {{print}} ' {input} > {output}
        """
calling_dir = os.path.join(dirs_dict["CALLING_DIR"],config["CALLING_TOOL"])
callings_locations = [calling_dir] * len_samples
callings_dict = dict(zip(sample_names, callings_locations))

def getVCFs(sample):
    return(list(os.path.join(callings_dict[sample],"{0}.vcf".format(sample)) for pair in ['']))
rule annotate_variants:
    input:
        calls=lambda wildcards: getVCFs(wildcards.sample),
        cache="resources/vep/cache",
        plugins="resources/vep/plugins",
    output:
        calls="{sample}.annotated.vcf",
        stats="{sample}.html"
    params:
        # Pass a list of plugins to use, see https://www.ensembl.org/info/docs/tools/vep/script/vep_plugins.html
        # Plugin args can be added as well, e.g. via an entry "MyPlugin,1,FOO", see docs.
        plugins=["LoFtool"],
        extra="--everything"  # optional: extra arguments
    log:
        "logs/vep/{sample}.log"
    threads: 4
    resources:
        time=30,
        mem=5000
    wrapper:
        "0.65.0/bio/vep/annotate"
If I run bcftools view on the output I get the error:
$ bcftools view variant_calling/varscan/MTG324.vcf
Failed to read from variant_calling/varscan/MTG324.vcf: unknown file type
About using expand vs wildcards: it does not matter at all. The Biostars post is just advice on how to keep things readable; on the Snakemake/programmatic side it should not matter how you define your input, as long as it is correct.
The complaint about resources is that the input of rule variant_annotation declares resources/vep/cache and resources/vep/plugins as necessary inputs for running variant_annotation. With this error Snakemake is effectively telling you that those files do not exist, so it cannot run the rule for you.
When I look at the code in the docs, it seems the cache directory given as input defines which genome you use:
entrypath = get_only_child_dir(get_only_child_dir(Path(cache)))
species = entrypath.parent.name
release, build = entrypath.name.split("_")
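In other words, the wrapper derives species/release/build from the cache directory layout. A rough Python equivalent of that logic, assuming a hypothetical cache laid out as resources/vep/cache/caenorhabditis_elegans/100_WBcel235:

from pathlib import Path

cache = Path("resources/vep/cache")
# walk two directory levels down, like get_only_child_dir(get_only_child_dir(...))
species_dir = next(p for p in cache.iterdir() if p.is_dir())
entrypath = next(p for p in species_dir.iterdir() if p.is_dir())
species = entrypath.parent.name             # "caenorhabditis_elegans"
release, build = entrypath.name.split("_")  # ("100", "WBcel235")
print(species, release, build)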
In addition to what Maarten said (the resources/vep/cache and resources/vep/plugins are just example paths to the required input, which also define which genome and version you want to use), you can get the cache and plugin directories easily with two other simple rules in your Snakefile, using these wrappers:
https://snakemake-wrappers.readthedocs.io/en/stable/wrappers/vep/cache.html
https://snakemake-wrappers.readthedocs.io/en/stable/wrappers/vep/plugins.html
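A sketch of what those two rules could look like, adapted from the wrapper docs (the species/build/release values are assumptions matching the C. elegans data in the question; pin whichever wrapper version you actually use):

rule get_vep_cache:
    output:
        directory("resources/vep/cache")
    params:
        species="caenorhabditis_elegans",
        build="WBcel235",
        release="100"
    log:
        "logs/vep/cache.log"
    wrapper:
        "0.65.0/bio/vep/cache"

rule download_vep_plugins:
    output:
        directory("resources/vep/plugins")
    params:
        release=100
    wrapper:
        "0.65.0/bio/vep/plugins"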
EDIT
Glad this worked out for your first problem.
The second error seems to arise from the expand in the output.
Am I understanding correctly that you want to annotate all your vcfs one-by-one? So input is {sample}.vcf and output would be {sample}.annotated.vcf?
If that's the case, you probably don't want to use expand in this rule.
I am also not sure why you would need {ANNOT_DIR} and {ANNOT_TOOL} to be wildcards here. I guess if you are using VEP, ANNOT_TOOL will always be VEP, and ANNOT_DIR will be ANNOTATION?
Then you could write them directly in the output as ANNOTATION/VEP/{sample}.annotated.vcf.
Same for {CALLING_DIR}: I guess this will always be the same directory, right? I get that {CALLING_TOOL} might have more than one value if you used multiple callers on the samples.
If I am still on track, you have two wildcards you might want to expand on when using VEP: {sample} and {CALLING_TOOL}.
Just write:
input:
    calls='CALLDIR/{CALLING_TOOL}/{sample}_sorted_dedupped_snp_varscan.vcf',
    cache="resources/vep/cache",
    plugins="resources/vep/plugins"
output:
    calls='ANNOTATION/VEP/{CALLING_TOOL}/{sample}.annotated.vcf',
    stats='ANNOTATION/VEP/{CALLING_TOOL}/{sample}.html'
The expand belongs in your rule all, or any other target rule that uses all annotated vcfs at once, something like this:
rule all:
    input: expand('ANNOTATION/VEP/{CALLING_TOOL}/{sample}.annotated.vcf', CALLING_TOOL=config["CALLING_TOOL"], sample=sample_names)
Then, the variant_annotation rule will run all the samples you expand on in rule all.
I hope I got your idea correctly and this helps.
EDIT2
Ok, seems like we are nearly done. The error you get is thrown by bcftools view - it indicates that something might be wrong with the vcf.
Did you try bcftools view with your vcf outside of the Snakefile? This would give us an idea if the problem arises during this rule or if the vcf is already somehow problematic.

snakemake: how to implement log directive when using run directive?

Snakemake allows creation of a log for each rule with the log parameter, which specifies the name of the log file. It is relatively straightforward to pipe results from shell output to this log, but I am not able to figure out a way to log the output of a run directive (i.e. a Python script).
One workaround is to save the python code in a script and then run it from the shell, but I wonder if there is another way?
I have some rules that use both the log and run directives. In the run directive, I "manually" open and write the log file.
For instance:
rule compute_RPM:
    input:
        counts_table = source_small_RNA_counts,
        summary_table = rules.gather_read_counts_summaries.output.summary_table,
        tags_table = rules.associate_small_type.output.tags_table,
    output:
        RPM_table = OPJ(
            annot_counts_dir,
            "all_{mapped_type}_on_%s" % genome, "{small_type}_RPM.txt"),
    log:
        log = OPJ(log_dir, "compute_RPM_{mapped_type}", "{small_type}.log"),
    benchmark:
        OPJ(log_dir, "compute_RPM_{mapped_type}", "{small_type}_benchmark.txt"),
    run:
        with open(log.log, "w") as logfile:
            logfile.write(f"Reading column counts from {input.counts_table}\n")
            counts_data = pd.read_table(
                input.counts_table,
                index_col="gene")
            logfile.write(f"Reading number of non-structural mappers from {input.summary_table}\n")
            norm = pd.read_table(input.summary_table, index_col=0).loc["non_structural"]
            logfile.write(str(norm))
            logfile.write("Computing counts per million non-structural mappers\n")
            RPM = 1000000 * counts_data / norm
            add_tags_column(RPM, input.tags_table, "small_type").to_csv(output.RPM_table, sep="\t")
For third-party code that writes to stdout, the redirect_stdout context manager could be helpful (found in https://stackoverflow.com/a/40417352/1878788, documented at https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout).
Test snakefile, test_run_log.snakefile:
from contextlib import redirect_stdout

rule all:
    input:
        "test_run_log.txt"

rule test_run_log:
    output:
        "test_run_log.txt"
    log:
        "test_run_log.log"
    run:
        with open(log[0], "w") as log_file:
            with redirect_stdout(log_file):
                print(f"Writing result to {output[0]}")
        with open(output[0], "w") as out_file:
            out_file.write("result\n")
Running it:
$ snakemake -s test_run_log.snakefile
Results:
$ cat test_run_log.log
Writing result to test_run_log.txt
$ cat test_run_log.txt
result
My solution was the following. This is useful both for normal logging and for logging exceptions with a traceback. You can then wrap the logger setup in a function to make it more organized. It's not very pretty, though; it would be much nicer if Snakemake could do it by itself.
import logging

# some stuff

rule logging_test:
    input: 'input.json'
    output: 'output.json'
    log: 'rules_logs/logging_test.log'
    run:
        logger = logging.getLogger('logging_test')
        logger.setLevel(logging.INFO)  # the logger itself must allow INFO, not just the handler
        fh = logging.FileHandler(str(log))
        fh.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
        fh.setFormatter(formatter)
        logger.addHandler(fh)
        try:
            logger.info('Starting operation!')
            # do something
            with open(str(output), 'w') as f:
                f.write('success!')
            logger.info('Ended!')
        except Exception as e:
            logger.error(e, exc_info=True)
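The "wrap the logger setup in a function" idea could look roughly like this (a sketch; setup_rule_logger is a name I made up, and the format string is just one option):

import logging

def setup_rule_logger(name, logfile, level=logging.INFO):
    """Return a logger that writes to the given rule log file."""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    fh = logging.FileHandler(logfile)
    fh.setLevel(level)
    fh.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
    logger.addHandler(fh)
    return logger

# inside a rule's run: directive
#     logger = setup_rule_logger('logging_test', str(log))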

Is there a way to set parameters for the Java VM when using a Snakemake wrapper?

When using tools like picard or fgbio through Snakemake wrappers, I keep running into out-of-memory issues. At the moment I resort to direct shell calls, which allow me to set the VM's memory. I would prefer to pass these parameters to the wrapped tools. Is there a way, maybe through the resources directive, to pass something like mem_mb=10000? I tried, but have not gotten it to work yet.
I have never used the wrapper directive, but looking for example at markduplicates/wrapper.py, the shell command is picard MarkDuplicates {snakemake.params} .... So maybe using the params slot works:
rule markdups:
    input:
        'in.bam',
    output:
        bam= 'out.bam',
        metrics= 'metrics.tmp',
    params:
        mem= "-Xmx4g",
    wrapper:
        "0.31.0/bio/picard/markduplicates"
picard should understand that -Xmx... is a java parameter.
According to the wrapper sources (https://bitbucket.org/snakemake/snakemake-wrappers/src/bd3178f4b82b1856370bb48c8bdbb1932ace6a19/bio/picard/markduplicates/wrapper.py?at=master&fileviewer=file-view-default), it uses this command line:
from snakemake.shell import shell
shell("picard MarkDuplicates {snakemake.params} INPUT={snakemake.input} "
"OUTPUT={snakemake.output.bam} METRICS_FILE={snakemake.output.metrics} "
"&> {snakemake.log}")
So you could pass any options using the params: section.
If you check the picard executable script's source:
cat `which picard`
You will find:
...
pass_args=""
for arg in "$#"; do
case $arg in
'-D'*)
jvm_prop_opts="$jvm_prop_opts $arg"
;;
'-XX'*)
jvm_prop_opts="$jvm_prop_opts $arg"
;;
'-Xm'*)
jvm_mem_opts="$jvm_mem_opts $arg"
;;
*)
if [[ ${pass_args} == '' ]] #needed to avoid preceeding space on first arg e.g. ' MarkDuplicates'
then
pass_args="$arg"
else
pass_args="$pass_args \"$arg\"" #quotes later arguments to avoid problem with ()s in MarkDuplicates regex arg
fi
;;
esac
done
...
So I assume this should work:
rule markdups:
    input:
        "in.bam",
    output:
        bam = "out.bam",
        metrics = "metrics.tmp",
    params:
        "-Xmx10000m"
    wrapper:
        "0.31.0/bio/picard/markduplicates"

Snakemake: How do I use a function that takes in a wildcard and returns a value?

I have cram(bam) files that I want to split by read group. This requires reading the header and extracting the read group ids.
I have this function which does that in my Snakemake file:
def identify_read_groups(cram_file):
    import subprocess
    command = 'samtools view -H ' + cram_file + ' | grep ^@RG | cut -f2 | cut -f2 -d":" '
    read_groups = subprocess.check_output(command, shell=True)
    read_groups = read_groups.split('\n')[:-1]
    return(read_groups)
I have this rule all:
rule all:
    input:
        expand('cram/RG_bams/{sample}.RG{read_groups}.bam', read_groups=identify_read_groups('cram/{sample}.bam.cram'))
And this rule to actually do the split:
rule split_cram_by_rg:
    input:
        cram_file='cram/{sample}.bam.cram',
        read_groups=identify_read_groups('cram/{sample}.bam.cram')
    output:
        'cram/RG_bams/{sample}.RG{read_groups}.bam'
    run:
        import subprocess
        read_groups = open(input.readGroupIDs).readlines()
        read_groups = [str(rg.replace('\n','')) for rg in read_groups]
        for rg in read_groups:
            command = 'samtools view -b -r ' + str(rg) + ' ' + str(input.cram_file) + ' > ' + str(output)
            subprocess.check_output(command, shell=True)
I get this error when doing a dry run:
[E::hts_open_format] fail to open file 'cram/{sample}.bam.cram'
samtools view: failed to open "cram/{sample}.bam.cram" for reading: No such file or directory
TypeError in line 19 of /gpfs/gsfs5/users/mcgaugheyd/projects/nei/mcgaughey/EGA_EGAD00001002656/Snakefile:
a bytes-like object is required, not 'str'
File "/gpfs/gsfs5/users/mcgaugheyd/projects/nei/mcgaughey/EGA_EGAD00001002656/Snakefile", line 37, in <module>
File "/gpfs/gsfs5/users/mcgaugheyd/projects/nei/mcgaughey/EGA_EGAD00001002656/Snakefile", line 19, in identify_read_groups
{sample} isn't being passed to the function.
How do I solve this problem? I'm open to other approaches if I'm not doing this in a 'snakemake-ic' way.
==============
EDIT 1
Ok, the first set of examples I gave had many, many issues.
Here's a better(?) set of code, which I hope demonstrates my issue.
import sys
from os.path import join

shell.prefix("set -eo pipefail; ")

def identify_read_groups(wildcards):
    import subprocess
    cram_file = 'cram/' + wildcards + '.bam.cram'
    command = 'samtools view -H ' + cram_file + ' | grep ^@RG | cut -f2 | cut -f2 -d":" '
    read_groups = subprocess.check_output(command, shell=True)
    read_groups = read_groups.decode().split('\n')[:-1]
    return(read_groups)

SAMPLES, = glob_wildcards(join('cram/', '{sample}.bam.cram'))

RG_dict = {}
for i in SAMPLES:
    RG_dict[i] = identify_read_groups(i)

rule all:
    input:
        expand('{sample}.boo.txt', sample=list(RG_dict.keys()))

rule split_cram_by_rg:
    input:
        file='cram/{sample}.bam.cram',
        RG = lambda wildcards: RG_dict[wildcards.sample]
    output:
        expand('cram/RG_bams/{{sample}}.RG{input_RG}.bam') # I have a problem HERE. How can I get my read group values applied here? I need to go from one cram to multiple bam files split by RG (see -r in samtools view below). It can't pull the RG from the input.
    shell:
        'samtools view -b -r {input.RG} {input.file} > {output}'

rule merge_RG_bams_into_one_bam:
    input:
        rules.split_cram_by_rg.output
    output:
        '{sample}.boo.txt'
    message:
        'echo {input}'
    shell:
        'samtools merge {input} > {output}' # not working
==============
EDIT 2
Getting MUCH closer, but currently struggling to get expand to properly build the lane bam files while keeping the wildcards.
I'm using this loop to create the intermediate file names:
for sample in SAMPLES:
    for rg_id in list(return_ID(sample)):
        out_rg_bam.append("temp/lane_bam/{}.ID{}.bam".format(sample, rg_id))
return_ID is a function which takes the sample wildcard and returns a list of the read groups the sample contains
If I use out_rg_bam as an input for a merge rule, then ALL of the files get combined into a merged bam, instead of being split by sample.
If I use expand('temp/realigned/{{sample}}.ID{rg_id}.realigned.bam', sample=SAMPLES, rg_id=return_ID(sample)), then every rg_id gets applied to every sample. So if I have two samples (a, b) with read groups (0,1) and (0,1,2), I end up with a0, a1, a0, a1, a2 and b0, b1, b0, b1, b2.
I'm going to give a more general answer to help others who might find this thread. Snakemake only applies wildcard replacement to strings in the input and output sections when the strings are directly listed, e.g.:
input:
    '{sample}.bam'
If you are trying to use functions like you were here:
input:
    read_groups=identify_read_groups('cram/{sample}.bam.cram')
The wildcard replacement will not be done. You can use a lambda function and do the replacement yourself:
input:
    read_groups=lambda wildcards: identify_read_groups('cram/{sample}.bam.cram'.format(sample=wildcards.sample))
Try this. I use id = 0, 1, 2, 3 to name the output bam files, depending on how many read groups a bam file has:
## this is a regular function which takes the cram file and gets the read groups to
## construct your rule all
## you actually just need the number of @RG lines; the code below can be simplified
def get_read_groups(sample):
    import subprocess
    cram_file = 'cram/' + sample + '.bam.cram'
    command = 'samtools view -H ' + cram_file + ' | grep ^@RG | cut -f2 | cut -f2 -d":" '
    read_groups = subprocess.check_output(command, shell=True)
    read_groups = read_groups.decode().split('\n')[:-1]
    return(read_groups)

SAMPLES, = glob_wildcards(join('cram/', '{sample}.bam.cram'))

RG_dict = {}
for sample in SAMPLES:
    RG_dict[sample] = get_read_groups(sample)

outbam = []
for sample in SAMPLES:
    read_groups = RG_dict[sample]
    for i in range(len(read_groups)):
        outbam.append("cram/RG_bams/{}.RG{}.bam".format(sample, i))

rule all:
    input:
        outbam

## this is the per-target function; it only takes wildcards as argument
def identify_read_groups(wildcards):
    import subprocess
    cram_file = 'cram/' + wildcards.sample + '.bam.cram'
    command = 'samtools view -H ' + cram_file + ' | grep ^@RG | cut -f2 | cut -f2 -d":" '
    read_groups = subprocess.check_output(command, shell=True)
    read_groups = read_groups.decode().split('\n')[:-1]
    return(read_groups[int(wildcards.id)])

rule split_cram_by_rg:
    input:
        cram_file='cram/{sample}.bam.cram'
    params:
        # the read group is a value, not a file, so pass it via params
        read_group=identify_read_groups
    output:
        'cram/RG_bams/{sample}.RG{id}.bam'
    run:
        import subprocess
        command = 'samtools view -b -r ' + str(params.read_group) + ' ' + str(input.cram_file) + ' > ' + str(output)
        subprocess.check_output(command, shell=True)
When using Snakemake, think bottom-up: first define what you want to generate in rule all, and then construct the rules that create those final targets.
Your all rule cannot have wildcards. It's a no-wildcard zone.
EDIT 1
I typed this pseudo-code in Notepad++; it's not meant to compile, I'm just trying to provide a framework. I think this is more what you are after.
Use a function inside an expand to generate a list of file names, which will then be used to drive the Snakemake pipeline's all rule. The baseSuffix and basePrefix variables are just to give you an idea of string passing; arguments are permitted here. When passing back the list of strings, make sure Snakemake receives them as a flat list of file names.
def getSampleFileList(basePrefix, baseSuffix):
    myFileList = []
    ListOfSamples = []  # the wildcard glob call goes here
    for sample in ListOfSamples:
        # same samtools call used above to generate the list of read-group headers
        for rg in identify_read_groups(sample):
            myFileList.append(basePrefix + sample + ".RG" + rg + baseSuffix)
    return myFileList

basePrefix = "cram/RG_bams/"
baseSuffix = ".bam"

rule all:
    input:
        expand("{fileName}", fileName=getSampleFileList(basePrefix, baseSuffix))

rule processing_rg_files:
    input:
        'cram/RG_bams/{sample}.RG{read_groups}.bam'
    output:
        'cram/RG_TXTs/{sample}.RG{read_groups}.txt'
    run:
        "Let's pretend this is useful code"
END OF EDIT
If it wasn't in the all rule, you'd use inline functions
So I'm not sure what you're trying to accomplish. As per my guesses, read below for some notes about your code.
rule all:
    input:
        expand('cram/RG_bams/{sample}.RG{read_groups}.bam', read_groups=identify_read_groups('cram/{sample}.bam.cram'))
The dry run is failing when it calls the function identify_read_groups inside the rule all call. 'cram/{sample}.bam.cram' is being passed into your function as a literal string, not with the wildcard filled in.
Technically, if the samtools call weren't failing, and the function call identify_read_groups(cram_file) returned a list of 5 strings, it would expand to something like this:
rule all:
    input:
        'cram/RG_bams/{sample}.RG<output1FromFunctionCall>.bam',
        'cram/RG_bams/{sample}.RG<output2FromFunctionCall>.bam',
        'cram/RG_bams/{sample}.RG<output3FromFunctionCall>.bam',
        'cram/RG_bams/{sample}.RG<output4FromFunctionCall>.bam',
        'cram/RG_bams/{sample}.RG<output5FromFunctionCall>.bam'
But the term "{sample}", at this stage in Snakemake's pre-processing, is considered a string. As you needed to denote wildcards in an expand function with {{}}.
See how I address every Snakemake variable I declare for my rule all input call and don't use wildcards:
expand("{outputDIR}/{pathGVCFT}/tables/{samples}.{vcfProgram}.{form[1][varType]}{form[1][annotated]}.txt", outputDIR=config["outputDIR"], pathGVCFT=config["vcfGenUtil_varScanDIR"], samples=config["sample"], vcfProgram=config["vcfProgram"], form=read_table(StringIO(config["sampleFORM"]), " ").iterrows())
In this case read_table returns a 2-dimensional array to form. Snakemake is well supported by Python. I needed this for pairing different annotations to different variant types.
Your rule all input needs to be a string, or a list of strings. You cannot have wildcards in your all rule; the input strings there are what Snakemake uses to generate matches for the wildcards of OTHER rules. Build the entire filename in the function call and return it if you need to.
I think you should just turn it into something like this:
rule all:
    input:
        expand("{fileName}", fileName=myFunctionCall(BecauseINeededToPass, ACoupleArgs))
Also consider updating this to be more generic:
rule split_cram_by_rg:
    input:
        cram_file='cram/{sample}.bam.cram',
        read_groups=identify_read_groups('cram/{sample}.bam.cram')
It can have two or more wildcards (that's why we love Snakemake). You can access the wildcards later, in the Python run directive, via the wildcards object, since it looks like you'll want to in your for-each loop.
I think input and output wildcards have to match, so maybe also try it this way:
rule split_cram_by_rg:
    input:
        'cram/{sample}.bam.cram'
    output:
        expand('cram/RG_bams/{{sample}}.RG{read_groups}.bam', read_groups=myFunctionCall(BecauseINeededToPass, ACoupleArgs))
    ...
    params:
        rg=myFunctionCall(BecauseINeededToPass, ACoupleArgs)
    run:
        command = 'Just an example ' + str(params.rg)
Again, I'm not super sure what you're trying to do, and I'm not sure I like the idea of making the function call twice, but hey, it would run ;P Also notice the use of the wildcard 'sample' in the input directive within a string ({}) and in the output directive within an expand ({{}}).
(The expand above also shows accessing wildcards in your run directive and function calls in places you wouldn't think to use them; I grabbed VCF fields there, but it could have been anything, driven by an external configfile.)