Can Snakemake parallelize the same rule both within and across nodes?

I have a somewhat basic question about Snakemake parallelization when using cluster execution: can jobs from the same rule be parallelized both within a node and across multiple nodes at the same time?
Let's say for example that I have 100 bwa mem jobs and my cluster has nodes with 40 cores each. Could I run 4 bwa mem per node, each using 10 threads, and then have Snakemake submit 25 separate jobs? Essentially, I want to parallelize both within and across nodes for the same rule.
Here is my current snakefile:
SAMPLES, = glob_wildcards("fastqs/{id}.1.fq.gz")

print(SAMPLES)

rule all:
    input:
        expand("results/{sample}.bam", sample=SAMPLES)

rule bwa:
    resources:
        time="4:00:00",
        partition="short-40core"
    input:
        ref="/path/to/reference/genome.fa",
        fwd="fastqs/{sample}.1.fq.gz",
        rev="fastqs/{sample}.2.fq.gz"
    output:
        bam="results/{sample}.bam"
    log:
        "results/logs/bwa/{sample}.log"
    params:
        threads=10
    shell:
        "bwa mem -t {params.threads} {input.ref} {input.fwd} {input.rev} 2> {log} | samtools view -bS - > {output.bam}"
I've run this with the following command:
snakemake --cluster "sbatch --partition={resources.partition}" -s bwa_slurm_snakefile --jobs 25
With this setup, I get 25 jobs submitted, each to a different node. However, only one bwa mem process (using 10 threads) is run per node.
Is there some straightforward way to modify this so that I could get 4 different bwa mem jobs (each using 10 threads) to run on each node?
Thanks!
Dave
Edit 07/28/22:
In addition to Troy's suggestion below, I found a straightforward way of accomplishing what I was trying to do by simply following the job grouping documentation.
Specifically, I did the following when executing my Snakemake pipeline:
snakemake --cluster "sbatch --partition={resources.partition}" -s bwa_slurm_snakefile --jobs 25 --groups bwa=group0 --group-components group0=4 --rerun-incomplete --cores 40
By specifying a group ("group0") for the bwa rule and setting "--group-components group0=4", I was able to group the jobs so that 4 bwa runs occur on each node.
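The same group assignment can also be declared in the Snakefile itself instead of via --groups. A minimal sketch based on the bwa rule above (--group-components group0=4 still goes on the command line):

rule bwa:
    group: "group0"
    resources:
        time="4:00:00",
        partition="short-40core"
    ...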

You can try job grouping, but note that resources are typically summed together when submitting group jobs like this. Usually that's not what is desired, but in your case it seems to be correct.
Instead, you can make a group job with another rule that does the grouping for you in batches of 4.
rule bwa_mem:
    group: 'bwa_batch'
    output: '{sample}.bam'
    ...

def bwa_mem_batch_input(wildcards):
    # for wildcards.i, pick the 4 bwa_mem outputs that belong in this batch
    i = int(wildcards.i)
    return expand('{sample}.bam', sample=SAMPLES[i*4:i*4+4])

rule bwa_mem_batch:
    input: bwa_mem_batch_input
    output: touch('flag_{i}')  # could be temp() too
    group: 'bwa_batch'
The consuming rule must request flag_{i} for i in range(len(SAMPLES) // 4). With cluster integration, each Slurm job then gets 1 bwa_mem_batch job and 4 bwa_mem jobs, with the resources of a single bwa_mem job. This is useful for batching multiple short jobs together so that each submitted cluster job does more work per allocation.
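For example, the consuming side could be a small aggregation rule along these lines (a sketch only: the rule name collect_batches is made up, and it assumes the number of samples is a multiple of 4):

rule collect_batches:
    input:
        expand('flag_{i}', i=range(len(SAMPLES) // 4))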
As a final point, this may do what you want, but I don't think it will help you get around QOS or other job quotas. You are using the same amount of CPU hours either way. You may also wait in the queue longer because the scheduler can't find 40 free threads to give you at once, whereas it could have given you a few 10-thread jobs sooner. Instead, consider refining your resource values to get better efficiency, which may get your jobs running earlier.

Related

Is there a way to get a nice error report summary when running many jobs on DRMAA cluster?

I need to run a Snakemake pipeline on a DRMAA cluster with a total of >2000 jobs. When some of the jobs have failed, I would like to receive an easily readable summary report at the end, listing only the failed jobs instead of the whole job summary given in the log.
Is there a way to achieve this without parsing the log file by myself?
These are the (incomplete) cluster options:
jobs: 200
latency-wait: 5
keep-going: True
rerun-incomplete: True
restart-times: 2
I am not sure there is another way besides parsing the log file yourself, but I've done it several times with grep and I am happy with the results:
cat .snakemake/log/[TIME].snakemake.log | grep -B 3 -A 3 error
Of course, you should replace the [TIME] placeholder with whichever run you want to check.
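For example, assuming the default log format, in which each failed job is reported in an "Error in rule <name>:" block, a slightly more targeted variant would be:

grep -A 10 "Error in rule" .snakemake/log/[TIME].snakemake.log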

Snakemake: Is it possible to only display "Job counts" using --dryrun?

How can I make Snakemake display only the Job counts fields on a dry run? When performing a real run, that's the first information Snakemake outputs before starting the jobs.
Currently, the way I get job counts is to run Snakemake without the -n flag and immediately cancel it (^C), but that's far from ideal.
Letting the dry run complete will output the Job counts at the end, but that's not feasible for pipelines with hundreds or thousands of jobs.
Desired output:
$ snakemake -n --someflag
Job counts:
count jobs
504 BMO
1 all
504 fit_nbinoms
517 motifs_in_peaks
503 motifs_outside_peaks
2029
$
The -q flag does this:
--quiet, -q Do not output any progress or rule information.
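Combined with a dry run, this should print just the job counts:

snakemake -n -q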

Running parallel instances of a single job/rule on Snakemake

Inexperienced, self-taught "coder" here, so please be understanding :]
I am trying to learn and use Snakemake to construct a pipeline for my analysis. Unfortunately, I am unable to run multiple instances of a single job/rule at the same time. My workstation is not a computing cluster, so I cannot use that option. I looked for an answer for hours, but either there is none, or I am not knowledgeable enough to understand it.
So: is there a way to run multiple instances of a single job/rule simultaneously?
If you would like a concrete example:
Let's say I want to analyze a set of 4 .fastq files using the fastqc tool. So I input the command:
time snakemake -j 32
and thus run my code, which is:
SAMPLES, = glob_wildcards("{x}.fastq.gz")

rule Raw_Fastqc:
    input:
        expand("{x}.fastq.gz", x=SAMPLES)
    output:
        expand("./{x}_fastqc.zip", x=SAMPLES),
        expand("./{x}_fastqc.html", x=SAMPLES)
    shell:
        "fastqc {input}"
I would expect Snakemake to run as many instances of fastqc as possible on 32 threads (so easily all 4 of my input files at once). In reality, this command takes about 12 minutes to finish. Meanwhile, utilizing GNU parallel from inside Snakemake
shell:
    "parallel fastqc ::: {input}"
I get results in 3 minutes. Clearly there is some untapped potential here.
Thanks!
If I am not wrong, fastqc works on each fastq file separately, so your implementation doesn't take advantage of Snakemake's parallelization. This can be done by defining the targets using a rule all, as shown below.
SAMPLES = glob_wildcards("{x}.fastq.gz").x

rule all:
    input:
        expand("./{sample_name}_fastqc.{ext}",
               sample_name=SAMPLES, ext=['zip', 'html'])

rule Raw_Fastqc:
    input:
        "{x}.fastq.gz"
    output:
        "./{x}_fastqc.zip",
        "./{x}_fastqc.html"
    shell:
        "fastqc {input}"
To add to JeeYem's answer above, you can also define the number of threads to reserve for each job using the 'threads' property of the rule, like so:
rule Raw_Fastqc:
    input:
        "{x}.fastq.gz"
    output:
        "./{x}_fastqc.zip",
        "./{x}_fastqc.html"
    threads: 4
    shell:
        "fastqc --threads {threads} {input}"
Because fastqc itself can use multiple threads per task, you might even get additional speedups over the parallel implementation.
Snakemake will then automatically allocate as many jobs as can fit within the total threads provided by the top-level call:
snakemake -j 32, for example, would execute up to 8 instances of the Raw_Fastqc rule.

Setting SGE for running an executable with different input files on different nodes

I used to work with a cluster using the SLURM scheduler, but now I am more or less forced to switch to an SGE-based cluster, and I'm trying to get the hang of it. What I was doing on the SLURM system involves running an executable with N input files, using a SLURM configuration file set up in this fashion:
slurmConf.conf (SLURM configuration file):
0 /path/to/exec /path/to/input1
1 /path/to/exec /path/to/input2
2 /path/to/exec /path/to/input3
3 /path/to/exec /path/to/input4
4 /path/to/exec /path/to/input5
5 /path/to/exec /path/to/input6
6 /path/to/exec /path/to/input7
7 /path/to/exec /path/to/input8
8 /path/to/exec /path/to/input9
9 /path/to/exec /path/to/input10
And my working submission script in SLURM contains this line:
srun -n $SLURM_NNODES --multi-prog $slconf
Here, $slconf refers to the path to that configuration file.
This setup worked as I wanted: it ran the executable with 10 different inputs at the same time across 10 nodes. Now that I have transitioned to an SGE system, I want to do the same thing, but I read the manual and found nothing quite like SLURM's --multi-prog. Could you please shed some light on how to achieve the same thing on an SGE system?
Thank you very much!
You could use the "job array" feature of the Grid Engine.
Create a shell script sge_job.sh
#!/bin/sh
#
# sge_job.sh -- SGE job description script
#
#$ -t 1-10
/path/to/exec /path/to/input$SGE_TASK_ID
And submit this script to SGE with qsub.
qsub sge_job.sh
Dmitri Chubarov's answer is excellent, and it is the most robust way to proceed, as it places less load on the submit node when submitting many jobs (>1000). Alternatively, you can wrap qsub in a for loop:
for i in {1..10}
do
    echo "/path/to/exec /path/to/input${i}" | qsub
done
I sometimes use the above when whatever varies as input is not easily captured as a range of integers.
Example:
for f in /some/path/input*
do
    echo "/path/to/exec ${f}" | qsub
done

Hadoop jobs getting poor locality

I have some fairly simple Hadoop streaming jobs that look like this:
yarn jar /usr/lib/hadoop-mapreduce/hadoop-streaming-2.2.0.2.0.6.0-101.jar \
-files hdfs:///apps/local/count.pl \
-input /foo/data/bz2 \
-output /user/me/myoutput \
-mapper "cut -f4,8 -d," \
-reducer count.pl \
-combiner count.pl
The count.pl script is just a simple script that accumulates counts in a hash and prints them out at the end - the details are probably not relevant but I can post it if necessary.
The input is a directory containing 5 files encoded with bz2 compression, roughly the same size as each other, for a total of about 5GB (compressed).
When I look at the running job, it has 45 mappers, but they're all running on one node. The particular node changes from run to run, but always only one node. Therefore I'm achieving poor data locality as data is transferred over the network to this node, and probably achieving poor CPU usage too.
The entire cluster has 9 nodes, all the same basic configuration. The blocks of the data for all 5 files are spread out among the 9 nodes, as reported by the HDFS Name Node web UI.
I'm happy to share any requested info from my configuration, but this is a corporate cluster and I don't want to upload any full config files.
It looks like this previous thread [ why map task always running on a single node ] is relevant but not conclusive.
EDIT: at @jtravaglini's suggestion I tried the following variation and saw the same problem - all 45 map tasks running on a single node:
yarn jar \
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.2.0.2.0.6.0-101.jar \
wordcount /foo/data/bz2 /user/me/myoutput
At the end of the output of that task in my shell, I see:
Launched map tasks=45
Launched reduce tasks=1
Data-local map tasks=18
Rack-local map tasks=27
which is the number of data-local tasks you'd expect to see on one node just by chance alone.