I am using snakemake v5.7.0. The pipeline runs correctly whether launched locally or submitted to SLURM via snakemake --drmaa: jobs get submitted and everything works as expected. However, in the latter case a number of SLURM log files are produced in the current directory.
When Snakemake is invoked with the --drmaa-log-dir option, it creates the directory specified in the option but fails to execute the rules, and no log files are produced.
Here is a minimal example. First, the Snakefile used:
rule all:
    shell: "sleep 20 & echo SUCCESS!"
Below is the output of snakemake --drmaa:
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
1
[Fri Apr 10 21:03:50 2020]
rule all:
jobid: 0
Submitted DRMAA job 0 with external jobid 13321.
[Fri Apr 10 21:04:00 2020]
Finished job 0.
1 of 1 steps (100%) done
Complete log: /XXXXX/snakemake_test/.snakemake/log/2020-04-10T210349.984931.snakemake.log
Here is the output of snakemake --drmaa --drmaa-log-dir foobar:
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
1
[Fri Apr 10 21:06:19 2020]
rule all:
jobid: 0
Submitted DRMAA job 0 with external jobid 13322.
[Fri Apr 10 21:06:29 2020]
Error in rule all:
jobid: 0
shell:
sleep 20 & echo SUCCESS!
(one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
Error executing rule all on cluster (jobid: 0, external: 13322, jobscript: /XXXXXX/snakemake_test/.snakemake/tmp.9l7fqvgg/snakejob.all.0.sh). For error details see the cluster log and the log files of the involved rule(s).
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /XXXXX/snakemake_test/.snakemake/log/2020-04-10T210619.598354.snakemake.log
No log files are produced. The directory foobar has been created, but is empty.
What am I doing wrong?
Problems using --drmaa-log-dir with SLURM have been reported before, but unfortunately there is no known solution so far.
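A possible workaround (an untested sketch, not a fix for --drmaa-log-dir itself) is to drop that option and redirect the SLURM logs through the DRMAA native specification instead, since everything passed after --drmaa is appended to it:
# assumes your slurm-drmaa build accepts sbatch-style -o/-e options in the
# native specification, and that the target directory already exists
mkdir -p slurm_logs
snakemake --drmaa " -o slurm_logs/slurm-%j.out -e slurm_logs/slurm-%j.err"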
I am trying to run a Snakemake rule with an external script whose path contains a wildcard, as described in the Snakemake readthedocs. However, I am running into a KeyError when running Snakemake.
For example, if we have the following rule:
SAMPLE = ['test']

rule all:
    input:
        expand("output/{sample}.txt", sample=SAMPLE)

rule NAME:
    input: "workflow/scripts/{sample}.R"
    output: "output/{sample}.txt",
    script: "workflow/scripts/{wildcards.sample}.R"
with the script workflow/scripts/test.R containing the following code:
out.path = snakemake@output[[1]]
out = "Hello World"
writeLines(out, out.path)
I get the following error when trying to execute snakemake.
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 NAME
1 all
2
[Fri May 21 12:04:55 2021]
rule NAME:
input: workflow/scripts/test.R
output: output/test.txt
jobid: 1
wildcards: sample=test
[Fri May 21 12:04:55 2021]
Error in rule NAME:
jobid: 1
output: output/test.txt
RuleException:
KeyError in line 14 of /sc/arion/projects/LOAD/Projects/sandbox/Snakefile:
'wildcards'
File "/sc/arion/work/andres12/conda/envs/py38/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 2231, in run_wrapper
File "/sc/arion/projects/LOAD/Projects/sandbox/Snakefile", line 14, in __rule_NAME
File "/sc/arion/work/andres12/conda/envs/py38/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 560, in _callback
File "/sc/arion/work/andres12/conda/envs/py38/lib/python3.8/concurrent/futures/thread.py", line 57, in run
File "/sc/arion/work/andres12/conda/envs/py38/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 546, in cached_or_run
File "/sc/arion/work/andres12/conda/envs/py38/lib/python3.8/site-packages/snakemake/executors/__init__.py", line 2262, in run_wrapper
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /sc/arion/projects/LOAD/Projects/sandbox/.snakemake/log/2021-05-21T120454.713963.snakemake.log
Does anyone know why this is not working correctly?
I agree with Dmitry Kuzminov that having a script depend on a wildcard is odd. Maybe there are better solutions.
Anyway, the example below works for me on snakemake 6.0.0. Note that in your R script snakemake@output[1] should be snakemake@output[[1]], but that is not what causes the problem you report.
SAMPLE = ['test']

rule all:
    input:
        expand("output/{sample}.txt", sample=SAMPLE)

rule make_script:
    output:
        "workflow/scripts/{sample}.R",
    shell:
        r"""
        echo 'out.path = snakemake@output[[1]]' > {output}
        echo 'out = "Hello World"' >> {output}
        echo 'writeLines(out, out.path)' >> {output}
        """

rule NAME:
    input:
        "workflow/scripts/{sample}.R"
    output:
        "output/{sample}.txt",
    script:
        "workflow/scripts/{wildcards.sample}.R"
When I run with --cluster and --use-conda, Snakemake does not appear to set up the conda environment before submitting to the cluster, and my jobs fail accordingly. Is there a trick I am missing to get the conda environment set up before cluster submission?
EDIT:
I install snakemake in a conda environment like:
channels:
- bioconda
- conda-forge
dependencies:
- snakemake-minimal=5.19.3
- xrootd=4.12.2
Reproducer:
I create a directory with Snakefile, dothing.py, and environment.yml:
Snakefile:
shell.prefix('unset PYTHONPATH; unset LD_LIBRARY_PATH; unset PYTHONHOME; ')

rule dothing:
    conda: 'environment.yml'
    output: 'completed.out'
    log: 'thing.log'
    shell: 'python dothing.py &> {log} && touch {output}'
dothing.py:
import uncertainties
print('it worked!')
environment.yml:
name: testsnakeconda
channels:
- conda-forge
dependencies:
- uncertainties=3.1.4
If I run locally like
snakemake --cores all --use-conda
it runs with no problems:
Building DAG of jobs...
Creating conda environment environment.yml...
Downloading and installing remote packages.
Environment for environment.yml created (location: .snakemake/conda/e0fff47f)
Using shell: /usr/bin/bash
Provided cores: 10
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 dothing
1
[Tue Jun 30 16:19:38 2020]
rule dothing:
output: completed.out
log: thing.log
jobid: 0
Activating conda environment: /path/to/environment.yml
[Tue Jun 30 16:19:39 2020]
Finished job 0.
1 of 1 steps (100%) done
Complete log: /path/to/.snakemake/log/2020-06-30T161824.906217.snakemake.log
If I try to submit using --cluster like
snakemake --cores all --use-conda --cluster 'condor_qsub -V -l procs={threads}' --latency-wait 30 --max-jobs-per-second 100 --jobs 50
there is no message about setting up a conda environment and the job fails with an error:
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cluster nodes: 50
Job counts:
count jobs
1 dothing
1
[Tue Jun 30 16:20:49 2020]
rule dothing:
output: completed.out
log: thing.log
jobid: 0
Submitted job 0 with external jobid 'Your job 9246856 ("snakejob.dothing.0.sh") has been submitted'.
[Tue Jun 30 16:26:00 2020]
Error in rule dothing:
jobid: 0
output: completed.out
log: thing.log (check log file(s) for error message)
conda-env: /path/to/.snakemake/conda/e0fff47f
shell:
python dothing.py &> thing.log && touch completed.out
(one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
cluster_jobid: Your job 9246856 ("snakejob.dothing.0.sh") has been submitted
Error executing rule dothing on cluster (jobid: 0, external: Your job 9246856 ("snakejob.dothing.0.sh") has been submitted, jobscript: /path/to/.snakemake/tmp.a7fpixla/snakejob.dothing.0.sh). For error details see the cluster log and the log files of the involved rule(s).
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /path/to/.snakemake/log/2020-06-30T162049.793041.snakemake.log
and I can see that the problem is that the uncertainties package is not available:
$ cat thing.log
Traceback (most recent call last):
File "dothing.py", line 1, in <module>
import uncertainties
ImportError: No module named uncertainties
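As a sanity check, one could log which interpreter the cluster job actually picks up; if it turns out to be the system Python rather than the hashed environment under .snakemake/conda, then the activation is simply not happening on the compute node. A minimal sketch of such a diagnostic rule (not part of the reproducer above):
rule dothing:
    conda: 'environment.yml'
    output: 'completed.out'
    log: 'thing.log'
    # record the interpreter path first, then append the script's own output
    shell: 'which python > {log} 2>&1; python dothing.py >> {log} 2>&1 && touch {output}'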
EDIT:
verbose output without --cluster:
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 10
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 dothing
1
Resources before job selection: {'_cores': 10, '_nodes': 9223372036854775807}
Ready jobs (1):
dothing
Selected jobs (1):
dothing
Resources after job selection: {'_cores': 9, '_nodes': 9223372036854775806}
[Thu Jul 2 21:51:18 2020]
rule dothing:
output: completed.out
log: thing.log
jobid: 0
Activating conda environment: /path/to/workingdir/.snakemake/conda/e0fff47f
[Thu Jul 2 21:51:33 2020]
Finished job 0.
1 of 1 steps (100%) done
Complete log: /path/to/workingdir/.snakemake/log/2020-07-02T215117.964474.snakemake.log
unlocking
removing lock
removing lock
removed all locks
verbose output with --cluster:
Building DAG of jobs...
Checking status of 0 jobs.
Using shell: /usr/bin/bash
Provided cluster nodes: 50
Job counts:
count jobs
1 dothing
1
Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 50}
Ready jobs (1):
dothing
Selected jobs (1):
dothing
Resources after job selection: {'_cores': 9223372036854775806, '_nodes': 49}
[Thu Jul 2 21:40:23 2020]
rule dothing:
output: completed.out
log: thing.log
jobid: 0
Jobscript:
#!/bin/sh
# properties = {"type": "single", "rule": "dothing", "local": false, "input": [], "output": ["completed.out"], "wildcards": {}, "params": {}, "log": ["thing.log"], "threads": 1, "resources": {}, "jobid": 0, "cluster": {}}
cd /path/to/workingdir && \
/path/to/miniconda/envs/envname/bin/python3.8 \
-m snakemake dothing --snakefile /path/to/workingdir/Snakefile \
--force -j --keep-target-files --keep-remote \
--wait-for-files /path/to/workingdir/.snakemake/tmp.5n32749i /path/to/workingdir/.snakemake/conda/e0fff47f --latency-wait 30 \
--attempt 1 --force-use-threads \
--wrapper-prefix https://github.com/snakemake/snakemake-wrappers/raw/ \
--allowed-rules dothing --nocolor --notemp --no-hooks --nolock \
--mode 2 --use-conda && touch /path/to/workingdir/.snakemake/tmp.5n32749i/0.jobfinished || (touch /path/to/workingdir/.snakemake/tmp.5n32749i/0.jobfailed; exit 1)
Submitted job 0 with external jobid 'Your job 9253728 ("snakejob.dothing.0.sh") has been submitted'.
Checking status of 1 jobs.
...
Checking status of 1 jobs.
[Thu Jul 2 21:46:23 2020]
Error in rule dothing:
jobid: 0
output: completed.out
log: thing.log (check log file(s) for error message)
conda-env: /path/to/workingdir/.snakemake/conda/e0fff47f
shell:
python dothing.py &> thing.log && touch completed.out
(one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
cluster_jobid: Your job 9253728 ("snakejob.dothing.0.sh") has been submitted
Error executing rule dothing on cluster (jobid: 0, external: Your job 9253728 ("snakejob.dothing.0.sh") has been submitted, jobscript: /path/to/workingdir/.snakemake/tmp.5n32749i/snakejob.dothing.0.sh). For error details see the cluster log and the log files of the involved rule(s).
Cleanup job metadata.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /path/to/workingdir/.snakemake/log/2020-07-02T214022.614691.snakemake.log
unlocking
removing lock
removing lock
removed all locks
What worked for me is to set the full path to your python interpreter inside the rule itself:
rule dothing:
    conda: 'environment.yml'
    output: 'completed.out'
    log: 'thing.log'
    shell: '/full_path_to_your_environment/bin/python dothing.py &> {log} && touch {output}'
and the full path to your python script if it's part of a package installed in that specific environment (which is my case):
rule dothing:
    conda: 'environment.yml'
    output: 'completed.out'
    log: 'thing.log'
    shell: '/full_path_to_your_environment/bin/python /full_path_to_your_environment/package_dir/dothing.py &> {log} && touch {output}'
By /full_path_to_your_environment/ I mean the hashed path that conda and snakemake gave to your environment the first time they installed it (e.g. /path/to/workingdir/.snakemake/conda/e0fff47f).
It's a bit ugly, but it still did the trick.
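If you want to avoid repeating the hashed path in every shell command, the same idea can be written once at the top of the Snakefile. This is only a sketch: the hash e0fff47f is the example from the question and will be different on your system.
# environment path created by a previous `snakemake --use-conda` run;
# look up the actual hash under .snakemake/conda/ on your system
ENV_BIN = "/path/to/workingdir/.snakemake/conda/e0fff47f/bin"

rule dothing:
    conda: 'environment.yml'
    output: 'completed.out'
    log: 'thing.log'
    shell: ENV_BIN + '/python dothing.py &> {log} && touch {output}'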
Hope it helps.
Major EDIT:
Having fixed a couple of issues thanks to comments and written a minimal reproducible example to help my helpers, I've narrowed down the issue to a difference between execution locally and using DRMAA.
Here is a minimal reproducible pipeline that does not require any external file download and can be executed out of the box after cloning the following git repository:
git clone git@github.com:kevinrue/snakemake-issue-all.git
When I run the pipeline using DRMAA I get the following error:
Building DAG of jobs...
Using shell: /bin/bash
Provided cluster nodes: 100
Singularity containers: ignored
Job counts:
count jobs
1 all
2 cat
3
InputFunctionException in line 22 of /ifs/research-groups/sims/kevin/snakemake-issue-all/workflow/Snakefile:
SyntaxError: unexpected EOF while parsing (<string>, line 1)
Wildcards:
sample=A
However, if I run the pipeline locally (--cores 1), it works:
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Singularity containers: ignored
Job counts:
count jobs
1 all
2 cat
3
[Sat Jun 13 08:49:46 2020]
rule cat:
input: data/A1, data/A2
output: results/A/cat
jobid: 1
wildcards: sample=A
[Sat Jun 13 08:49:46 2020]
Finished job 1.
1 of 3 steps (33%) done
[Sat Jun 13 08:49:46 2020]
rule cat:
input: data/B1, data/B2
output: results/B/cat
jobid: 2
wildcards: sample=B
[Sat Jun 13 08:49:46 2020]
Finished job 2.
2 of 3 steps (67%) done
[Sat Jun 13 08:49:46 2020]
localrule all:
input: results/A/cat, results/B/cat
jobid: 0
[Sat Jun 13 08:49:46 2020]
Finished job 0.
3 of 3 steps (100%) done
Complete log: /ifs/research-groups/sims/kevin/snakemake-issue-all/.snakemake/log/2020-06-13T084945.632545.snakemake.log
My DRMAA profile is the following:
jobs: 100
default-resources: 'mem_free=4G'
drmaa: "-V -notify -p -10 -l mem_free={resources.mem_free} -pe dedicated {threads} -v MKL_NUM_THREADS={threads} -v OPENBLAS_NUM_THREADS={threads} -v OMP_NUM_THREADS={threads} -R y -q all.q"
drmaa-log-dir: /ifs/scratch/kevin
use-conda: true
conda-prefix: /ifs/home/kevin/devel/snakemake/envs
printshellcmds: true
reason: true
Briefly, the Snakefile looks like this:
# The main entry point of your workflow.
# After configuring, running snakemake -n in a clone of this repository should successfully execute a dry-run of the workflow.
report: "report/workflow.rst"
# Allow users to fix the underlying OS via singularity.
singularity: "docker://continuumio/miniconda3"
include: "rules/common.smk"
include: "rules/other.smk"
rule all:
    input:
        # The first rule should define the default target files
        # Subsequent target rules can be specified below. They should start with all_*.
        expand("results/{sample}/cat", sample=samples['sample'])

rule cat:
    input:
        file1="data/{sample}1",
        file2="data/{sample}2"
    output:
        "results/{sample}/cat"
    shell:
        "cat {input.file1} {input.file2} > {output}"
Running snakemake -np gives me what I expect:
$ snakemake -np
sample condition
sample_id
A A untreated
B B treated
Building DAG of jobs...
Job counts:
count jobs
1 all
2 cat
3
[Sat Jun 13 08:51:19 2020]
rule cat:
input: data/B1, data/B2
output: results/B/cat
jobid: 2
wildcards: sample=B
cat data/B1 data/B2 > results/B/cat
[Sat Jun 13 08:51:19 2020]
rule cat:
input: data/A1, data/A2
output: results/A/cat
jobid: 1
wildcards: sample=A
cat data/A1 data/A2 > results/A/cat
[Sat Jun 13 08:51:19 2020]
localrule all:
input: results/A/cat, results/B/cat
jobid: 0
Job counts:
count jobs
1 all
2 cat
3
This was a dry-run (flag -n). The order of jobs does not reflect the order of execution.
I'm not sure how to debug it further. I'm happy to provide more information as needed.
Note: I use snakemake version 5.19.2
Thanks in advance!
EDIT
Using the --verbose option, Snakemake seems to trip on the default-resources: 'mem_free=4G' and/or the drmaa: "-l mem_free={resources.mem_free}" settings defined in my drmaa profile (see above).
$ snakemake --profile drmaa --verbose
Building DAG of jobs...
Using shell: /bin/bash
Provided cluster nodes: 100
Singularity containers: ignored
Job counts:
count jobs
1 all
2 cat
3
Resources before job selection: {'_cores': 9223372036854775807, '_nodes': 100}
Ready jobs (2):
cat
cat
Full Traceback (most recent call last):
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/rules.py", line 941, in apply
res, _ = self.apply_input_function(
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/rules.py", line 684, in apply_input_function
raise e
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/rules.py", line 678, in apply_input_function
value = func(Wildcards(fromdict=wildcards), **_aux_params)
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/resources.py", line 10, in callable
value = eval(
File "<string>", line 1
4G
^
SyntaxError: unexpected EOF while parsing
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/__init__.py", line 626, in snakemake
success = workflow.execute(
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/workflow.py", line 951, in execute
success = scheduler.schedule()
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/scheduler.py", line 394, in schedule
run = self.job_selector(needrun)
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/scheduler.py", line 540, in job_selector
a = list(map(self.job_weight, jobs)) # resource usage of jobs
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/scheduler.py", line 613, in job_weight
res = job.resources
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/jobs.py", line 267, in resources
self._resources = self.rule.expand_resources(
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/rules.py", line 977, in expand_resources
resources[name] = apply(name, res, threads=threads)
File "/ifs/devel/kevin/miniconda3/envs/snakemake/lib/python3.8/site-packages/snakemake/rules.py", line 960, in apply
raise InputFunctionException(e, rule=self, wildcards=wildcards)
snakemake.exceptions.InputFunctionException: SyntaxError: unexpected EOF while parsing (<string>, line 1)
Wildcards:
sample=B
InputFunctionException in line 20 of /ifs/research-groups/sims/kevin/snakemake-issue-all/workflow/Snakefile:
SyntaxError: unexpected EOF while parsing (<string>, line 1)
Wildcards:
sample=B
unlocking
removing lock
removing lock
removed all locks
Thanks to @JohannesKöster I realised that my profile settings were wrong.
--default-resources [NAME=INT [NAME=INT ...]] indicates that only integer values are supported, while I was providing a string (i.e., mem_free=4G), naively hoping it would be supported as well.
I've updated the following settings in my profile, and successfully ran both snakemake --cores 1 and snakemake --profile drmaa.
default-resources: 'mem_free=4'
drmaa: "-V -notify -p -10 -l mem_free={resources.mem_free}G -pe dedicated {threads} -v MKL_NUM_THREADS={threads} -v OPENBLAS_NUM_THREADS={threads} -v OMP_NUM_THREADS={threads} -R y -q all.q"
Note the integer value 4 set as default resources, and how I moved the G to the drmaa: ... -l mem_free=...G setting.
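For completeness, individual rules can still request more than the default; the value stays an integer and the G unit lives only in the drmaa string. A sketch based on the cat rule above (the value 8 is arbitrary):
rule cat:
    input:
        file1="data/{sample}1",
        file2="data/{sample}2"
    output:
        "results/{sample}/cat"
    resources:
        # integer only; the profile's drmaa string appends the G unit
        mem_free=8
    shell:
        "cat {input.file1} {input.file2} > {output}"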
Thanks a lot for the help everyone!
I have a simple rule to generate a file in Snakemake. Running snakemake results in an immediate error that it cannot find the generated file, even when --latency-wait is specified as a command line option.
However, this does seem to be a latency-related issue, as this Snakefile runs without problems on a local machine. The output below is on a system that has known latency problems.
Contents of Snakefile:
rule generate_file:
    output:
        "dummy.txt"
    shell:
        "head --bytes 1024 < /dev/zero | base64 > '{output}'; ls"
Commands:
$ snakemake --version
5.2.0
$ snakemake -p --latency-wait 10
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 generate_file
1
rule generate_file:
output: dummy.txt
jobid: 0
head --bytes 1024 < /dev/zero | base64 > 'dummy.txt'; ls
dummy.txt Snakefile
MissingOutputException in line 1 of /home/user/project/Snakefile:
[Errno 2] No such file or directory: ''
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Removing output files of failed job generate_file since they might be corrupted:
dummy.txt
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /home/user/project/.snakemake/log/2018-08-08T101648.774072.snakemake.log
Interestingly, the ls command shows the file is created and visible.
Your rule creates the output file dummy.txt when run with snakemake version 5.2.2 on Linux, and snakemake finishes successfully. Perhaps it is a bug in version 5.2.0? I don't see anything about it in the change logs, though.
On a related note, using head in a shell command used to result in a non-zero exit status error. Apparently recent versions behave differently in this respect.
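That older behaviour presumably comes from bash strict mode: with pipefail set, a producer that gets killed by SIGPIPE once head has read enough makes the whole pipeline return non-zero. If you ever hit it, one workaround is to relax pipefail for that one command; a sketch with a made-up input file:
rule head_of_file:
    output:
        "first_lines.txt"
    shell:
        # `set +o pipefail` undoes only the pipefail part of Snakemake's strict
        # mode for this command, so zcat being killed by SIGPIPE is ignored;
        # big_input.gz is a hypothetical file name
        "set +o pipefail; zcat big_input.gz | head -n 1000 > {output}"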
I am combining Singularity and Snakemake to create a workflow for some sequencing data. I modeled my pipeline after this git project: https://github.com/sci-f/snakemake.scif. The version of the pipeline that does not use Singularity runs absolutely fine. The version that uses Singularity always stops after the first rule with the following error:
$ singularity run --bind data/raw_data/:/scif/data/ /gpfs/data01/heinzlab/home/cag104/bin/chip-seq-pipeline/chip-seq-pipeline-hg38.simg run snakemake all
[snakemake] executing /bin/bash /scif/apps/snakemake/scif/runscript all
Copying Snakefile to /scif/data
Copying config.yaml to /scif/data
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
1 bowtie2_mapping
1 create_bigwig
1 create_tag_directories
1 fastp
1 fastqc
1 quality_metrics
1 samtools_index
8
rule fastp:
input: THP-1_PU1-cMyc_PU1_sc_S40_R1_001.fastq.gz
output: fastp/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.fastp.fastq.gz, fastp_report/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.html, fastp_report/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.json
log: logs/fastp/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.log
jobid: 7
wildcards: sample=THP-1_PU1-cMyc_PU1_sc_S40_R1_001
usage: scif run [-h] [cmd [cmd ...]]
positional arguments:
cmd app and optional arguments to target for the entry
optional arguments:
-h, --help show this help message and exit
Waiting at most 5 seconds for missing files.
MissingOutputException in line 16 of /scif/data/Snakefile:
Missing files after 5 seconds:
fastp/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.fastp.fastq.gz
fastp_report/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.html
fastp_report/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.json
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Will exit after finishing currently running jobs.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /scif/data/.snakemake/log/2018-04-06T224320.958604.snakemake.log
The run does, however, create the fastp and fastp_report directories as well as the logs directory. I tried increasing --latency-wait to 50 seconds, but I still get the same error.
Any ideas on what to try here?
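One way to dig further (an untested sketch, not something from the pipeline itself) would be to open a shell in the same container with the same bind mount and invoke the scif app for the failing rule by hand, so that the real fastp error shows up instead of the scif run usage text:
# "fastp" as the scif app name is an assumption based on the rule name in the log
singularity shell --bind data/raw_data/:/scif/data/ chip-seq-pipeline-hg38.simg
# then, inside the container:
scif run fastp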