Snakemake rule error: how can I control rule execution order?

I'm running Snakemake for an RNA-seq analysis. I wrote a Snakefile, but the run fails with an error in the terminal.
I put rule salmon_quant_reads last, yet it runs first, and Snakemake reports the error in rule salmon_quant_reads.
salmon_quant_reads must only run after salmon_index has finished.
Error in rule salmon_quant_reads:
jobid: 173
output: salmon/WT_Veh_11/quant.sf, salmon/WT_Veh_11/lib_format_counts.json
log: logs/salmon/WT_Veh_11.log (check log file(s) for error message)
conda-env: /home/baelab2/LEEJUNEYOUNG/7.Colesevelam/RNA-seq/.snakemake/conda/ff908de630224c1a4118f5dc69c8a761
RuleException:
CalledProcessError in line 111 of /home/baelab2/LEEJUNEYOUNG/7.Colesevelam/RNA-seq/Snakefile_2:
Command 'source /home/baelab2/miniconda3/bin/activate '/home/baelab2/LEEJUNEYOUNG/7.Colesevelam/RNA-seq/.snakemake/conda/ff908de630224c1a4118f5dc69c8a761'; set -euo pipefail; /home/baelab2/miniconda3/envs/snakemake/bin/python3.10 /home/baelab2/LEEJUNEYOUNG/7.Colesevelam/RNA-seq/.snakemake/scripts/tmpr6r8ryk9.wrapper.py' returned non-zero exit status 1.
File "/home/baelab2/LEEJUNEYOUNG/7.Colesevelam/RNA-seq/Snakefile_2", line 111, in __rule_salmon_quant_reads
File "/home/baelab2/miniconda3/envs/snakemake/lib/python3.10/concurrent/futures/thread.py", line 58, in run
How can I fix it?
Here is my Snakefile:
SAMPLES = ["KO_Col_5", "KO_Col_6", "KO_Col_7", "KO_Col_8", "KO_Col_9", "KO_Col_10", "KO_Col_11", "KO_Col_15", "KO_Veh_3", "KO_Veh_4", "KO_Veh_5", "KO_Veh_9", "KO_Veh_11", "KO_Veh_13", "KO_Veh_14", "WT_Col_1", "WT_Col_2", "WT_Col_3", "WT_Col_6", "WT_Col_8", "WT_Col_10", "WT_Col_12", "WT_Veh_1", "WT_Veh_2", "WT_Veh_4", "WT_Veh_7", "WT_Veh_8", "WT_Veh_11", "WT_Veh_14"]
rule all:
    input:
        expand("raw/{sample}_1.fastq.gz", sample=SAMPLES),
        expand("raw/{sample}_2.fastq.gz", sample=SAMPLES),
        expand("qc/fastqc/{sample}_1.before.trim_fastqc.zip", sample=SAMPLES),
        expand("qc/fastqc/{sample}_2.before.trim_fastqc.zip", sample=SAMPLES),
        expand("trimmed/{sample}_1.fastq.gz", sample=SAMPLES),
        expand("trimmed/{sample}_2.fastq.gz", sample=SAMPLES),
        expand("qc/fastqc/{sample}_1.after.trim_fastqc.zip", sample=SAMPLES),
        expand("qc/fastqc/{sample}_2.after.trim_fastqc.zip", sample=SAMPLES),
        expand("salmon/{sample}/quant.sf", sample=SAMPLES),
        expand("salmon/{sample}/lib_format_counts.json", sample=SAMPLES)

rule fastqc_before_trim_1:
    input:
        "raw/{sample}.fastq.gz",
    output:
        html="qc/fastqc/{sample}.before.trim.html",
        zip="qc/fastqc/{sample}.before.trim_fastqc.zip",
    log:
        "logs/fastqc/{sample}.before.log"
    threads: 10
    priority: 1
    wrapper:
        "v1.7.0/bio/fastqc"

rule cutadapt:
    input:
        r1 = "raw/{sample}_1.fastq.gz",
        r2 = "raw/{sample}_2.fastq.gz"
    output:
        fastq1="trimmed/{sample}_1.fastq.gz",
        fastq2="trimmed/{sample}_2.fastq.gz",
        qc="trimmed/{sample}.qc.txt"
    params:
        adapters = "-a AGATCGGAAGAGCACACGTCTGAACTCCAGTCA -A AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT",
        extra = "--minimum-length 1 -q 20"
    log:
        "logs/cutadapt/{sample}.log"
    threads: 10
    priority: 2
    wrapper:
        "v1.7.0/bio/cutadapt/pe"

rule fastqc_after_trim_2:
    input:
        "trimmed/{sample}.fastq.gz"
    output:
        html="qc/fastqc/{sample}.after.trim.html",
        zip="qc/fastqc/{sample}.after.trim_fastqc.zip"
    log:
        "logs/fastqc/{sample}.after.log"
    threads: 10
    priority: 3
    wrapper:
        "v1.7.0/bio/fastqc"

rule salmon_index:
    input:
        sequences="raw/Mus_musculus.GRCm39.cdna.all.fasta"
    output:
        multiext(
            "salmon/transcriptome_index/",
            "complete_ref_lens.bin",
            "ctable.bin",
            "ctg_offsets.bin",
            "duplicate_clusters.tsv",
            "info.json",
            "mphf.bin",
            "pos.bin",
            "pre_indexing.log",
            "rank.bin",
            "refAccumLengths.bin",
            "ref_indexing.log",
            "reflengths.bin",
            "refseq.bin",
            "seq.bin",
            "versionInfo.json",
        ),
    log:
        "logs/salmon/transcriptome_index.log",
    threads: 10
    priority: 10
    params:
        # optional parameters
        extra="",
    wrapper:
        "v1.7.0/bio/salmon/index"

rule salmon_quant_reads:
    input:
        # If you have multiple fastq files for a single sample (e.g. technical replicates)
        # use a list for r1 and r2.
        r1 = "trimmed/{sample}_1.fastq.gz",
        r2 = "trimmed/{sample}_2.fastq.gz",
        index = "salmon/transcriptome_index"
    output:
        quant = "salmon/{sample}/quant.sf",
        lib = "salmon/{sample}/lib_format_counts.json"
    log:
        "logs/salmon/{sample}.log"
    params:
        # optional parameters
        libtype ="A",
        extra="--validateMappings"
    threads: 10
    priority: 20
    wrapper:
        "v1.7.0/bio/salmon/quant"

The only link between salmon_quant_reads and salmon_index is the directory salmon/transcriptome_index. However, the existence of that directory is not a sufficient signal that all the work in salmon_index has completed. So, a quick way to fix this is to add an explicit file dependency:
rule salmon_quant_reads:
    input:
        # If you have multiple fastq files for a single sample (e.g. technical replicates)
        # use a list for r1 and r2.
        r1 = "trimmed/{sample}_1.fastq.gz",
        r2 = "trimmed/{sample}_2.fastq.gz",
        index = "salmon/transcriptome_index",
        _temp_dependency = "salmon/transcriptome_index/info.json",
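Equivalently, since salmon_index is defined earlier in the same Snakefile, you could depend on everything it declares as output by referencing its output object. This is just a sketch of the same idea (the input key name _index_files is arbitrary), and it keeps the dependency in sync if the list of index files ever changes:

rule salmon_quant_reads:
    input:
        r1 = "trimmed/{sample}_1.fastq.gz",
        r2 = "trimmed/{sample}_2.fastq.gz",
        index = "salmon/transcriptome_index",
        # any file produced by salmon_index would do; the only purpose is to
        # force salmon_index to finish before this rule is scheduled
        _index_files = rules.salmon_index.output,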

Related

Input function for technical / biological replicates in snakemake

I'm currently trying to write a Snakemake workflow that can check automatically, via a sample.tsv file, whether a given sample is a biological or technical replicate, and then, at some point in the workflow, use a rule to merge technical/biological replicates.
My tsv file looks like this:
|sample | unit_bio | unit_tech | fq1 | fq2 |
|----------|----------|-----------|-----|-----|
| bCalAnn1 | 1 | 1 | /home/assembly_downstream/data/arima_HiC/bCalAnn1_1_1_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn1_1_1_R2.fastq.gz |
| bCalAnn1 | 1 | 2 | /home/assembly_downstream/data/arima_HiC/bCalAnn1_1_2_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn1_1_2_R2.fastq.gz |
| bCalAnn2 | 1 | 1 | /home/assembly_downstream/data/arima_HiC/bCalAnn2_1_1_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn2_1_1_R2.fastq.gz |
| bCalAnn2 | 1 | 2 | /home/assembly_downstream/data/arima_HiC/bCalAnn2_1_2_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn2_1_2_R2.fastq.gz |
| bCalAnn2 | 2 | 1 | /home/assembly_downstream/data/arima_HiC/bCalAnn2_2_1_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn2_2_1_R2.fastq.gz |
| bCalAnn2 | 3 | 1 | /home/assembly_downstream/data/arima_HiC/bCalAnn2_3_1_R1.fastq.gz | /home/assembly_downstream/data/arima_HiC/bCalAnn2_3_1_R2.fastq.gz |
My Pipeline looks like this:
import pandas as pd
import os
import yaml

configfile: "config.yaml"

samples = pd.read_table(config["samples"], dtype=str)

rule all:
    input:
        expand(config["arima_mapping"] + "final/{sample}_{unit_bio}_{unit_tech}.bam", zip,
               sample=samples["sample"], unit_bio=samples["unit_bio"], unit_tech=samples["unit_tech"])

..
some rules
..

rule add_read_groups:
    input:
        config["arima_mapping"] + "paired/{sample}_{unit_bio}_{unit_tech}.bam"
    output:
        config["arima_mapping"] + "paired_read_groups/{sample}_{unit_bio}_{unit_tech}.bam"
    params:
        platform = "ILLUMINA",
        sampleName = "{sample}",
        library = "{sample}",
        platform_unit ="None"
    conda:
        "../envs/arima_mapping.yaml"
    log:
        config["logs"] + "arima_mapping/paired_read_groups/{sample}_{unit_bio}_{unit_tech}.log"
    shell:
        "picard AddOrReplaceReadGroups I={input} O={output} SM={params.sampleName} LB={params.library} PU={params.platform_unit} PL={params.platform} 2> {log}"

rule merge_tech_repl:
    input:
        config["arima_mapping"] + "paired_read_groups/{sample}_{unit_bio}_{unit_tech}.bam"
    output:
        config["arima_mapping"] + "merge_tech_repl/{sample}_{unit_bio}_{unit_tech}.bam"
    params:
        val_string = "SILENT"
    conda:
        "../envs/arima_mapping.yaml"
    log:
        config["logs"] + "arima_mapping/merged_tech_repl/{sample}_{unit_bio}_{unit_tech}.log"
    threads:
        2  # uses at most 2
    shell:
        "picard MergeSamFiles -I {input} -O {output} --ASSUME_SORTED true --USE_THREADING true --VALIDATION_STRINGENCY {params.val_string} 2> {log}"

rule mark_duplicates:
    input:
        config["arima_mapping"] + "merge_tech_repl/{sample}_{unit_bio}_{unit_tech}.bam" if config["tech_repl"] else config["arima_mapping"] + "paired_read_groups/{sample}_{unit_bio}_{unit_tech}.bam"
    output:
        bam = config["arima_mapping"] + "final/{sample}_{unit_bio}_{unit_tech}.bam",
        metric = config["arima_mapping"] + "final/metric_{sample}_{unit_bio}_{unit_tech}.txt"
    #params:
    conda:
        "../envs/arima_mapping.yaml"
    log:
        config["logs"] + "arima_mapping/mark_duplicates/{sample}_{unit_bio}_{unit_tech}.log"
    shell:
        "picard MarkDuplicates I={input} O={output.bam} M={output.metric} 2> {log}"
At the moment I have set a boolean in a config file that tells the mark_duplicates rule whether to take its input from the add_read_groups or the merge_tech_repl rule. This is of course not optimal, since some samples may have replicates (in any number) while others don't. Therefore I want to write logic that checks the TSV table for rows where the sample name and unit_bio number are identical while the unit_tech number differs (and later, analogously, for biological replicates), merges those specific samples, and lets samples without replicates skip the merging rule.
EDIT
For clarification, since I think I explained my goal confusingly:
My first attempt looks like this. I want "i" to be flexible, in case the number of replicates changes. I don't think my input function returns all matching replicates together; it yields them one by one, which is not what I want. I'm also unsure how to handle samples that have no replicates, since they would have to skip this rule somehow.
def input_function(wildcards):
    return expand("{sample}_{unit_bio}_{i}.bam",
                  sample=wildcards.sample,
                  unit_bio=wildcards.unit_bio,
                  i=samples["sample"].str.count(wildcards.sample))

rule tech_duplicate_check:
    input:
        input_function  # (should return a list of 2-n replicates, where n can differ per sample)
    output:
        "{sample}_{unit_bio}.bam"
    shell:
        "MergeTechDupl_tool {input}"  # input is a list
Therefore I want to create a syntax that checks the TSV table for samples where the sample name and unit_bio number are identical while the unit_tech number differs (and later, analogously, for biological replicates), thus merging these specific samples while samples without replicates skip the merging rule.
rule gather_techdups_of_a_biodup:
    output: "{sample}/{unit_bio}"
    input: gather_techdups_of_a_biodup_input_fn
    shell: "true"  # Fill this in

rule gather_biodips_of_a_techdup:
    output: "{sample}/{unit_tech}"
    input: gather_biodips_of_a_techdup_input_fn
    shell: "true"  # Fill this in
After some attempts, my main problem is the table checking. As far as I know, Snakemake takes file patterns as input and matches all samples against them. But I would need to check the table for every sample that shares (e.g. for a technical replicate) the sample name and the unit_bio number, take all of those samples, and pass them together as input to one rule run. Then I would take the next sample that was not already part of a previous run, to avoid merging the same samples multiple times.
The logic you describe here can be implemented in the gather_techdups_of_a_biodup_input_fn and gather_biodips_of_a_techdup_input_fn functions above. For example, read your sample TSV file with pandas, filter for wildcards.sample and wildcards.unit_bio (or wildcards.unit_tech), then extract columns fq1 and fq2 from the filtered data frame.
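A minimal sketch of one such input function, building on the question's samples table (already loaded with pandas) and its paired_read_groups BAM naming; adapt the path pattern to whatever rule actually produces the per-replicate BAMs:

def gather_techdups_of_a_biodup_input_fn(wildcards):
    # rows matching this sample name and biological unit, i.e. its technical replicates
    hits = samples[(samples["sample"] == wildcards.sample) &
                   (samples["unit_bio"] == wildcards.unit_bio)]
    # one BAM per technical replicate (path pattern borrowed from the question's rules)
    return [config["arima_mapping"] + f"paired_read_groups/{row.sample}_{row.unit_bio}_{row.unit_tech}.bam"
            for row in hits.itertuples()]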

nextflow .collect() method in RNA-seq example workflow

I understand we have to use collect() when we run a process that takes two channels as input, where the first channel has one element and the second one has more than one element:
#! /usr/bin/env nextflow
nextflow.enable.dsl=2

process A {
    input:
    val(input1)

    output:
    path 'index.txt', emit: foo

    script:
    """
    echo 'This is an index' > index.txt
    """
}

process B {
    input:
    val(input1)
    path(input2)

    output:
    path("${input1}.txt")

    script:
    """
    cat <(echo ${input1}) ${input2} > \"${input1}.txt\"
    """
}

workflow {
    A( Channel.from( 'A' ) )
    // This would only run for one element of the first channel:
    B( Channel.from( 1, 2, 3 ), A.out.foo )
    // and this for all of them as intended:
    B( Channel.from( 1, 2, 3 ), A.out.foo.collect() )
}
Now the question: Why can this line in the example workflow from nextflow-io (https://github.com/nextflow-io/rnaseq-nf/blob/master/modules/rnaseq.nf#L15) work without using collect() or toList()?
It is the same situation: a channel with one element (the index) and a channel with more than one element (the FASTQ pairs) are used by the same process (quant), and it runs on all FASTQ files. What am I missing compared to my dummy example?
You need to create the first channel with a value factory, which never exhausts the channel.
Your linked example implicitly creates a value channel, which is why it works. The same happens when you call .collect() on A.out.foo.
Channel.from (or the more modern Channel.of) creates a queue channel, which can be exhausted; that is why both A and B run only once.
So
A( Channel.value('A') )
is all you need.
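For illustration, here is the workflow block from the dummy example rewritten with a value channel (a sketch of the fix, not tested here):

workflow {
    // a value channel can be read any number of times, so it is never exhausted
    A( Channel.value('A') )
    // B now runs once per element of the queue channel: 1, 2 and 3
    B( Channel.of( 1, 2, 3 ), A.out.foo )
}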

chain/dependency of some rules by wildcards

I have a particular use case for which I have not found the solution in the Snakemake documentation.
Let's say in a given pipeline I have a portion with 3 rules a, b and c which will run for N samples.
Those rules handle large amounts of data, and because of local storage limits I do not want them all to execute at the same time. For instance, rule a produces the large amount of data, then rule c compresses and exports the results.
So what I am looking for is a way to chain those 3 rules for one sample/wildcard, and only then execute those 3 rules for the next sample, all of this to make sure the local space is available.
Thanks
I agree that this is a problem Snakemake still has no solution for. However, there is a workaround.
rule all:
    input: expand("a{sample}", sample=[1, 2, 3])

rule a:
    input: "b{sample}"
    output: "a{sample}"

rule b:
    input: "c{sample}"
    output: "b{sample}"

rule c:
    input:
        lambda wildcards: f"a{int(wildcards.sample) - 1}"  # wildcard values are strings
    output: "c{sample}"
That means that rule c for sample 2 won't start before the output of rule a for sample 1 is ready. You need to add a pseudo-output a0 though, or make the lambda more complicated.
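One way to make the lambda "more complicated" as suggested is to give the first sample no extra dependency at all instead of a pseudo a0 file; a sketch, assuming numeric sample names:

def previous_a(wildcards):
    s = int(wildcards.sample)  # wildcard values are strings
    # the first sample has no predecessor, so it needs no extra input
    return [] if s == 1 else f"a{s - 1}"

rule c:
    input:
        previous_a
    output: "c{sample}"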
Building on Dmitry Kuzminov's answer, the following can work (with both numbers and strings as sample names).
The execution order will be a3 > b3 > a1 > b1 > a2 > b2.
I used a different sample order to show that it can differ from the sample list.
samples = [1, 2, 3]
sample_order = [3, 1, 2]

def get_previous(wildcards):
    order = [str(s) for s in sample_order]  # wildcard values are strings
    if wildcards.sample != order[0]:  # everything except the first sample in the order (a3 here)
        previous_sample = order[order.index(wildcards.sample) - 1]
        return f"b_out_{previous_sample}"
    else:  # the first sample in the order, i.e. a3
        return []  # or any dummy file that always exists, e.g. the Snakefile itself

rule all:
    input:
        expand("b_out_{S}", S=samples)

rule a:
    input:
        "a_in_{sample}",
        get_previous
    output:
        "a_out_{sample}"

rule b:
    input:
        "a_out_{sample}"
    output:
        "b_out_{sample}"

How do I get Snakemake to apply all samples to a single rule, before proceeding to the next rule?

On a machine with j cores, given a RuleB that depends on a RuleA, I expect Snakemake to execute my workflow as follows:
RuleA Sample1 using j threads
RuleA Sample2 using j threads
...
RuleA SampleN using j threads
RuleB Sample1 using 1 thread
RuleB Sample2 using 1 thread
...
RuleB SampleN using 1 thread
With RuleB being executed on j samples simultaneously.
Instead the workflow is executed as follows:
RuleA Sample1 using j threads
RuleB Sample1 using 1 thread
RuleA Sample2 using j threads
RuleB Sample2 using 1 thread
...
with ruleB being executed on 1 sample at a time.
Executed in that order, ruleB can't be parallelised, and the workflow runs much slower than it could.
More specifically, I want to align reads to a genome using STAR and quantify them using RNA-SeQC. RNA-SeQC is single-threaded, while STAR can use multiple threads on a single sample.
This results in Snakemake aligning reads in sample1 and then quantifying them with rnaseqc, after which it proceeds to do the same in sample2. I'd like it to align reads in all samples first and then quantify them (that way, it could run several instances of the single-threaded rnaseqc tool in parallel).
The relevant excerpt from the Snakemake file:
sample_basename = ["RNA-seq_L{}_S{}".format(x, y) for x, y in zip(range(1, 41), range(1, 41))]
sample_lane = [seq + "_L00{}".format(x) for x in [1, 2] for seq in sample_basename]

rule all:
    input:
        expand("rnaseqc/{s_l}/{s_l}.gene_tpm.gct", s_l=sample_lane)

rule run_star:
    input:
        index_dir=rules.star_index.output.index_dir,
        fq1 = "data/fastq/{sample}_R1_001.fastq.gz",
        fq2 = "data/fastq/{sample}_R2_001.fastq.gz",
    output:
        "star/{sample}/{sample}Aligned.sortedByCoord.out.bam",
        "star/{sample}/{sample}Aligned.toTranscriptome.out.bam",
        "star/{sample}/{sample}ReadsPerGene.out.tab",
        "star/{sample}/{sample}Log.final.out"
    log:
        "logs/star/{sample}.log"
    params:
        extra="--quantMode GeneCounts TranscriptomeSAM --chimSegmentMin 20 --outSAMtype BAM SortedByCoordinate",
        sample_name = "{sample}"
    threads: 18
    script:
        "scripts/star_align.py"

rule rnaseqc:
    input:
        bam="star/{sample}/{sample}Aligned.sortedByCoord.out.bam",
        gtf="data/gencode.v19.annotation.patched.collapsed.gtf"
    output:
        "rnaseqc/{sample}/{sample}.exon_reads.gct",
        "rnaseqc/{sample}/{sample}.gene_fragments.gct",
        "rnaseqc/{sample}/{sample}.gene_reads.gct",
        "rnaseqc/{sample}/{sample}.gene_tpm.gct",
        "rnaseqc/{sample}/{sample}.metrics.tsv"
    params:
        extra="-s {sample} --legacy",
        output_dir="rnaseqc/{sample}"
    log:
        "logs/rnaseqc/{sample}"
    shell:
        "rnaseqc.v2.3.4.linux {params.extra} {input.gtf} {input.bam} {params.output_dir} 2> {log}"
Weirdly enough, doing a dry run with snakemake -np -j does the correct thing:
[Mon Oct 21 13:08:11 2019]
rule run_star:
input: data/STAR/, data/fastq/RNA-seq_L182_S16_L002_R1_001.fastq.gz, data/fastq/RNA-seq_L182_S16_L002_R2_001.fastq.gz
output: star/RNA-seq_L182_S16_L002/RNA-seq_L182_S16_L002Aligned.sortedByCoord.out.bam, star/RNA-seq_L182_S16_L002/RNA-seq_L182_S16_L002Aligned.toTranscriptome.out.bam, star/RNA-seq_L182_S16_L002/RNA-seq_L182_S16_L002ReadsPerGene.out.tab, star/RNA-seq_L182_S16_L002/RNA-seq_L182_S16_L002Log.final.out
log: logs/star/RNA-seq_L182_S16_L002.log
jobid: 1026
wildcards: sample=RNA-seq_L182_S16_L002
threads: 18
[Mon Oct 21 13:08:11 2019]
rule run_star:
input: data/STAR/, data/fastq/RNA-seq_L173_S7_L001_R1_001.fastq.gz, data/fastq/RNA-seq_L173_S7_L001_R2_001.fastq.gz
output: star/RNA-seq_L173_S7_L001/RNA-seq_L173_S7_L001Aligned.sortedByCoord.out.bam, star/RNA-seq_L173_S7_L001/RNA-seq_L173_S7_L001Aligned.toTranscriptome.out.bam, star/RNA-seq_L173_S7_L001/RNA-seq_L173_S7_L001ReadsPerGene.out.tab, star/RNA-seq_L173_S7_L001/RNA-seq_L173_S7_L001Log.final.out
log: logs/star/RNA-seq_L173_S7_L001.log
jobid: 737
wildcards: sample=RNA-seq_L173_S7_L001
threads: 18
...
[Mon Oct 21 13:10:50 2019]
rule rnaseqc:
input: star/RNA-seq_L221_S15_L001/RNA-seq_L221_S15_L001Aligned.sortedByCoord.out.bam, data/gencode.v19.annotation.patched.collapsed.gtf
output: rnaseqc/RNA-seq_L221_S15_L001/RNA-seq_L221_S15_L001.exon_reads.gct, rnaseqc/RNA-seq_L221_S15_L001/RNA-seq_L221_S15_L001.gene_fragments.gct, rnaseqc/RNA-seq_L221_S15_L001/RNA-seq_L221_S15_L001.gene_reads.gct, rnaseqc/RNA-seq_L221_S15_L001/RNA-seq_L221_S15_L001.gene_tpm.gct, rnaseqc/RNA-seq_L221_S15_L001/RNA-seq_L221_S15_L001.metrics.tsv
log: logs/rnaseqc/RNA-seq_L221_S15_L001
jobid: 215
wildcards: sample=RNA-seq_L221_S15_L001
rnaseqc.v2.3.4.linux -s RNA-seq_L221_S15_L001 --legacy data/gencode.v19.annotation.patched.collapsed.gtf star/RNA-seq_L221_S15_L001/RNA-seq_L221_S15_L001Aligned.sortedByCoord.out.bam rnaseqc/RNA-seq_L221_S15_L001 2> logs/rnaseqc/RNA-seq_L221_S15_L001
[Mon Oct 21 13:10:50 2019]
rule rnaseqc:
input: star/RNA-seq_L284_S38_L001/RNA-seq_L284_S38_L001Aligned.sortedByCoord.out.bam, data/gencode.v19.annotation.patched.collapsed.gtf
output: rnaseqc/RNA-seq_L284_S38_L001/RNA-seq_L284_S38_L001.exon_reads.gct, rnaseqc/RNA-seq_L284_S38_L001/RNA-seq_L284_S38_L001.gene_fragments.gct, rnaseqc/RNA-seq_L284_S38_L001/RNA-seq_L284_S38_L001.gene_reads.gct, rnaseqc/RNA-seq_L284_S38_L001/RNA-seq_L284_S38_L001.gene_tpm.gct, rnaseqc/RNA-seq_L284_S38_L001/RNA-seq_L284_S38_L001.metrics.tsv
log: logs/rnaseqc/RNA-seq_L284_S38_L001
jobid: 278
wildcards: sample=RNA-seq_L284_S38_L001
but executing snakemake -j without the -np flag does not.
[Mon Oct 21 13:13:49 2019]
rule run_star:
input: data/STAR/, data/fastq/RNA-seq_L249_S3_L001_R1_001.fastq.gz, data/fastq/RNA-seq_L249_S3_L001_R2_001.fastq.gz
output: star/RNA-seq_L249_S3_L001/RNA-seq_L249_S3_L001Aligned.sortedByCoord.out.bam, star/RNA-seq_L249_S3_L001/RNA-seq_L249_S3_L001Aligned.toTranscriptome.out.bam, star/RNA-seq_L249_S3_L001/RNA-seq_L249_S3_L001ReadsPerGene.out.tab, star/RNA-seq_L249_S3_L001/RNA-seq_L249_S3_L001Log.final.out
log: logs/star/RNA-seq_L249_S3_L001.log
jobid: 813
wildcards: sample=RNA-seq_L249_S3_L001
threads: 18
Aligning RNA-seq_L249_S3_L001
[Mon Oct 21 13:21:33 2019]
Finished job 813.
2 of 478 steps (0.42%) done
[Mon Oct 21 13:21:33 2019]
rule rnaseqc:
input: star/RNA-seq_L249_S3_L001/RNA-seq_L249_S3_L001Aligned.sortedByCoord.out.bam, data/gencode.v19.annotation.patched.collapsed.gtf
output: rnaseqc/RNA-seq_L249_S3_L001/RNA-seq_L249_S3_L001.exon_reads.gct, rnaseqc/RNA-seq_L249_S3_L001/RNA-seq_L249_S3_L001.gene_fragments.gct, rnaseqc/RNA-seq_L249_S3_L001/RNA-seq_L249_S3_L001.gene_reads.gct, rnaseqc/RNA-seq_L249_S3_L001/RNA-seq_L249_S3_L001.gene_tpm.gct, rnaseqc/RNA-seq_L249_S3_L001/RNA-seq_L249_S3_L001.metrics.tsv
log: logs/rnaseqc/RNA-seq_L249_S3_L001
jobid: 243
wildcards: sample=RNA-seq_L249_S3_L001
I'm using the latest version of Snakemake available through Conda:
5.5.2
Maybe what you are looking for is to give the rule running STAR a higher priority than the rule running rnaseqc. If so, look at the priority directive, like:
rule star:
    priority: 50
    ...

rule rnaseqc:
    priority: 0
    ...
(Not tested) This should first run all the STAR jobs, one at a time because they each need 18 cores, and then all the rnaseqc jobs in parallel.
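To see the effect on a self-contained toy example (hypothetical rule and file names, not the poster's actual workflow): with snakemake -j 4, whenever a heavy job and a light job are both ready, the heavy job is scheduled first because of its higher priority, so the single-threaded light jobs end up running together at the end.

SAMPLES = ["s1", "s2", "s3"]

rule all:
    input:
        expand("light_{s}.txt", s=SAMPLES)

rule heavy:
    # stands in for STAR: many threads, high priority
    output: "heavy_{s}.txt"
    threads: 4
    priority: 50
    shell: "touch {output}"

rule light:
    # stands in for rnaseqc: single-threaded, low priority
    input: "heavy_{s}.txt"
    output: "light_{s}.txt"
    priority: 0
    shell: "touch {output}"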

wildcard_constraints between two wildcards with OR

I'd like to constrain a rule based on two wildcards so that it runs only if (id == 'FOO') || (id == 'BAR' && ver == '2'). However, I am not quite sure how to do it (or whether it is possible). I tried the example below, but it doesn't seem to work...
rule foo:
    input: "{id}{ver}.txt"
    output: "{id}{ver}.out"
    wildcard_constraints:
        id = "FOO"
    wildcard_constraints:
        id = "BAR",
        ver = "2"
I am not sure your current approach will work. Why not simply ask Snakemake to make the files you need? e.g.:
rule all:
    input: expand('FOO{ver}.out', ver=[somelist]), 'BAR2.out'

rule foo:
    input: "{id}{ver}.txt"
    output: "{id}{ver}.out"
    shell: "some_command {input} > {output}"
This should run rule foo for every FOO{ver}.out file you request and for BAR2.out.