Random "[Errno 13] Permission denied" during snakemake pipeline execution on cluster - snakemake

I get an annoying write error during a pipeline execution which shuts everything down, and I cannot understand the reason. I don't know whether this is a bug or a usage problem, hence posting here before opening an issue on the Snakemake repo.
Snakemake version: 7.19.1
Describe the bug
I randomly get the following error during the execution of a pipeline on a cluster. At the moment, I can't find the reason/context that makes it happen.
It looks like Snakemake cannot write a temporary .sh script for some reason, although it has no problem writing the same script for other wildcard sets.
It might be related to the cluster I'm using rather than to Snakemake, but I would like to make sure of that.
Logs
Traceback (most recent call last):
File "/home/sjuhel/mambaforge/lib/python3.9/site-packages/snakemake/__init__.py", line 752, in snakemake
success = workflow.execute(
File "/home/sjuhel/mambaforge/lib/python3.9/site-packages/snakemake/workflow.py", line 1089, in execute
raise e
File "/home/sjuhel/mambaforge/lib/python3.9/site-packages/snakemake/workflow.py", line 1085, in execute
success = self.scheduler.schedule()
File "/home/sjuhel/mambaforge/lib/python3.9/site-packages/snakemake/scheduler.py", line 592, in schedule
self.run(runjobs)
File "/home/sjuhel/mambaforge/lib/python3.9/site-packages/snakemake/scheduler.py", line 641, in run
executor.run_jobs(
File "/home/sjuhel/mambaforge/lib/python3.9/site-packages/snakemake/executors/__init__.py", line 155, in run_jobs
self.run(
File "/home/sjuhel/mambaforge/lib/python3.9/site-packages/snakemake/executors/__init__.py", line 1156, in run
self.write_jobscript(job, jobscript)
File "/home/sjuhel/mambaforge/lib/python3.9/site-packages/snakemake/executors/__init__.py", line 901, in write_jobscript
with open(jobscript, "w") as f:
PermissionError: [Errno 13] Permission denied: '/data/sjuhel/BoARIO-inputs/.snakemake/tmp.7lly73mm/snakejob.run_generic.11469.sh'
Minimal example
The error doesn't happen when running a smaller pipeline.
Additional context
I am running the following pipeline of simulations on a cluster using SLURM.
Profile:
cluster:
  mkdir -p /scratchu/sjuhel/logs/smk/{rule} &&
  sbatch
    --parsable
    --mem={resources.mem_mb}
    --job-name=smk-{rule}-{wildcards}
    --cpus-per-task={threads}
    --output=/scratchu/sjuhel/logs/smk/{rule}/{rule}-{wildcards}-%j.out
    --time={resources.time}
    --partition={resources.partition}
default-resources:
  - mem_mb=2000
  - partition=zen16
  - time=60
  - threads=4
restart-times: 0
max-jobs-per-second: 10
max-status-checks-per-second: 1
local-cores: 1
latency-wait: 60
jobs: 16
keep-going: False
rerun-incomplete: True
printshellcmds: True
scheduler: greedy
use-conda: True
conda-frontend: mamba
cluster-status: status-sacct.sh
The pipeline is run via:
nohup snakemake generate_csv_from_all_xp --profile simple > "../Runs/snakelog-$(date +"%FT%H%M%z").log" 2>&1 &
Rules to run (last execution with the error):
Using shell: /usr/bin/bash
Provided cluster nodes: 16
Job stats:
job                         count    min threads    max threads
------------------------  -------  -------------  -------------
generate_csv_from_all_xp        1              1              1
generate_csv_from_xp            3              1              1    -> indicator aggregation
indicators                   4916              1              1    -> symlink to exp folder (multiple exps can share similar results)
indicators_generic           4799              1              1    -> indicators from simulations
init_all_sim_parquet            1              1              1
run_generic                  4771              4              4    -> simulations to run
xp_parquet                      3              1              1    -> regroup simulations by different scenarios
total                       14494              1              4
What I don't understand is that Snakemake is able to write other temporary .sh files, and I can't tell at which point the error happens. I have no idea how to debug this.
Edit [23/01/2023] :
The issue might be related to exiting the SSH session on the cluster. Could it be that the nohup'ed Snakemake process can no longer write files once I am disconnected from the server?
→ I will try with screen instead of nohup.
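For reference, a minimal sketch of the screen variant, assuming screen is available on the login node (the session name smk is arbitrary):
# Start a detached screen session and launch the workflow inside it.
screen -dmS smk bash -c 'snakemake generate_csv_from_all_xp --profile simple > "../Runs/snakelog-$(date +"%FT%H%M%z").log" 2>&1'
# Reattach later with:
screen -r smk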

There is a very high probability that Snakemake lost the rights to write the temporary script files when I disconnected from the server.
I solved the problem by invoking the following script with sbatch:
#! /bin/bash
#SBATCH --ntasks 1
#SBATCH --time=2-00:00:00
#SBATCH --job-name=Snakemake
#SBATCH --mem=2048

# Run the workflow in 10 batches of the target rule's DAG.
for i in $(seq 1 10)
do
    snakemake --batch generate_csv_from_all_xp=$i/10 --profile simple > "../Runs/snakelog-$(date +"%FT%H%M%z").log" 2>&1
done
I haven't tried tmux or screen, but they should probably work as well. As the DAG is a bit heavy to compute, I thought it best not to run it on the login node of the cluster (plus, with sbatch I get an email notification when the job ends or fails).
Note that I read that "Running snakemake on login nodes is unlikely to pose problems".

Related

Allow job to run "reboot" command without causing failure

We have a large number of runners running a large number of jobs in one of our Gitlab CI/CD pipelines.
Each of these runners has a concurrency of 1, and they are of executor type shell.
[EDIT] These runners are AWS EC2 instances using Amazon Linux 2.
After certain jobs in the pipeline have completed, I would like them to run a reboot command to restart the runner.
However, some of these jobs will be tests. Currently, when I run the reboot command, the job fails. Obviously I can allow_failure so that the job passes, but this then means we have no way of determining whether or not the actual test has passed.
Originally, my test job looked like this:
after_script:
  - sleep 1 && reboot
I have also tried the following variations:
after_script:
  - sleep 15 && reboot
  - exit 0

after_script:
  - (sleep 15 ; reboot ) &
  - exit 0
I've also tried running a shell script with the same contents.
All of these result in the same problem - ERROR: Job failed (system failure): aborted: terminated.
Can anyone think of a clever way round this?
In the end, I had to run this in a screen:
sudo screen -dm bash -c 'sleep 5; shutdown -r now;'
This allowed me, in a Gitlab CI pipeline, to run this as a script element, and immediately afterwards execute an exit command, like this:
after_script:
  - sudo screen -dm bash -c 'sleep 5; shutdown -r now;'
  - exit 0
This way, if a test fails - the job fails. If a test passes, the job passes. No need for allow_failure.
Unfortunately, I'm unsure how to then contend with artifact collection, which takes place after the after_script commands. If anyone has any ideas about that one, please add a comment here.

Snakemake cannot find output file, gives MissingOutputException while latency-wait is seemingly ignored

I have a simple rule to generate a file in Snakemake. Running snakemake results in an immediate error that it cannot find the generated file, even when --latency-wait is specified as a command line option.
However, this does seem to be a latency-related issue, as this Snakefile runs without problems on a local machine. The output below is on a system that has known latency problems.
Contents of Snakefile:
rule generate_file:
    output:
        "dummy.txt"
    shell:
        "head --bytes 1024 < /dev/zero | base64 > '{output}'; ls"
Commands:
$ snakemake --version
5.2.0
$ snakemake -p --latency-wait 10
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 generate_file
1
rule generate_file:
output: dummy.txt
jobid: 0
head --bytes 1024 < /dev/zero | base64 > 'dummy.txt'; ls
dummy.txt Snakefile
MissingOutputException in line 1 of /home/user/project/Snakefile:
[Errno 2] No such file or directory: ''
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Removing output files of failed job generate_file since they might be corrupted:
dummy.txt
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /home/user/project/.snakemake/log/2018-08-08T101648.774072.snakemake.log
Interestingly, the ls command shows the file is created and visible.
Your rule creates the output file dummy.txt when run with Snakemake version 5.2.2 on Linux, and Snakemake ends successfully. Perhaps it is a bug in version 5.2.0? I don't see anything about it in the changelogs though.
On a related note, the use of head in a shell command used to result in a non-zero exit status error. Apparently recent versions behave differently in this respect.
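To illustrate what that head behaviour looked like (this is my sketch; the assumption is that the rule's shell command runs under bash strict mode, set -euo pipefail, which recent Snakemake versions use): when head sits on the right of a pipe and exits early, the writer receives SIGPIPE and pipefail turns that into a failure.
# yes is killed by SIGPIPE once head has read its single line; with pipefail
# the pipeline therefore reports exit code 141 (128 + 13).
bash -c 'set -euo pipefail; yes | head -n 1'
echo "exit code: $?"   # prints 141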

Singularity & Snakemake - MissingOutputException

I am combining Singularity & Snakemake to create a workflow for some sequencing data. I modeled my pipeline after this git project https://github.com/sci-f/snakemake.scif. The version of the pipeline that does not use Singularity runs absolutely fine. The version that uses Singularity always stops after the first rule with the following error:
$ singularity run --bind data/raw_data/:/scif/data/ /gpfs/data01/heinzlab/home/cag104/bin/chip-seq-pipeline/chip-seq-pipeline-hg38.simg run snakemake all
[snakemake] executing /bin/bash /scif/apps/snakemake/scif/runscript all
Copying Snakefile to /scif/data
Copying config.yaml to /scif/data
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
1 bowtie2_mapping
1 create_bigwig
1 create_tag_directories
1 fastp
1 fastqc
1 quality_metrics
1 samtools_index
8
rule fastp:
input: THP-1_PU1-cMyc_PU1_sc_S40_R1_001.fastq.gz
output: fastp/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.fastp.fastq.gz, fastp_report/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.html, fastp_report/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.json
log: logs/fastp/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.log
jobid: 7
wildcards: sample=THP-1_PU1-cMyc_PU1_sc_S40_R1_001
usage: scif run [-h] [cmd [cmd ...]]
positional arguments:
cmd app and optional arguments to target for the entry
optional arguments:
-h, --help show this help message and exit
Waiting at most 5 seconds for missing files.
MissingOutputException in line 16 of /scif/data/Snakefile:
Missing files after 5 seconds:
fastp/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.fastp.fastq.gz
fastp_report/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.html
fastp_report/THP-1_PU1-cMyc_PU1_sc_S40_R1_001.json
This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait.
Will exit after finishing currently running jobs.
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /scif/data/.snakemake/log/2018-04-06T224320.958604.snakemake.log
The run does, however, create the fastp and fastp_report directories as well as the logs directory. I tried increasing the latency wait to 50 seconds, but I still get the same error.
Any ideas on what to try here?

How to keep the snakemake shell file while running in cluster

While running my Snakemake file on the cluster I keep getting an error:
snakemake -j 20 --cluster "qsub -o out.txt -e err.txt -q debug" \
    -s seadragon/scripts/viral_hisat.snake --config json="<input file>" \
    output="<output file>"
Now this gives me the following error:
Error in job run_salmon while creating output file
/gpfs/home/user/seadragon/output/quant_v2_4/test.
ClusterJobException in line 58 of seadragon/scripts/viral_hisat.snake:
Error executing rule run_salmon on cluster (jobid: 1, external: 156618.sn-mgmt.cm.cluster, jobscript: /gpfs/home/user/.snakemake/tmp.j9nb0hyo/snakejob.run_salmon.1.sh). For detailed error see the cluster log.
Will exit after finishing currently running jobs.
Exiting because a job execution failed. Look above for error message
Now I can't find any way to track the error, since my cluster does not give me a way to store the log files; on the other hand, the file /gpfs/home/user/.snakemake/tmp.j9nb0hyo/snakejob.run_salmon.1.sh is deleted immediately after the job finishes.
Please let me know if there is a way to keep this shell file even if Snakemake fails.
I am not a qsub user anymore, but if I remember correctly, stdout and stderr are stored in the working directory, under the jobid that Snakemake gives you under external in the error message.
You need to redirect standard output and standard error to files yourself instead of relying on the cluster or Snakemake to do it for you.
Instead of the following
my_script.sh
Run the following
my_script.sh > output_file.txt 2> error_file.txt
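For the cluster case in the question, one possible adaptation (my sketch, not part of the answer above) is to point qsub's -o/-e at a dedicated log directory, so that each job's stdout/stderr lands there as <jobname>.o<jobid> / <jobname>.e<jobid> files:
# Create a log directory and let qsub drop per-job output files into it.
mkdir -p /gpfs/home/user/cluster_logs
snakemake -j 20 -s seadragon/scripts/viral_hisat.snake \
    --config json="<input file>" output="<output file>" \
    --cluster "qsub -q debug -o /gpfs/home/user/cluster_logs/ -e /gpfs/home/user/cluster_logs/"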

Never successfully built a large hadoop&spark cluster

I was wondering if anybody could help me with this issue in deploying a spark cluster using the bdutil tool.
When the total number of cores increases (>= 1024), it fails every time for one of the following reasons:
Some machines never become sshable, e.g. "Tue Dec 8 13:45:14 PST 2015: 'hadoop-w-5' not yet sshable (255); sleeping"
Some nodes fail with an "Exited 100" error when deploying spark worker nodes, like "Tue Dec 8 15:28:31 PST 2015: Exited 100 : gcloud --project=cs-bwamem --quiet --verbosity=info compute ssh hadoop-w-6 --command=sudo su -l -c "cd ${PWD} && ./deploy-core-setup.sh" 2>>deploy-core-setup_deploy.stderr 1>>deploy-core-setup_deploy.stdout --ssh-flag=-tt --ssh-flag=-oServerAliveInterval=60 --ssh-flag=-oServerAliveCountMax=3 --ssh-flag=-oConnectTimeout=30 --zone=us-central1-f"
In the log file, it says:
hadoop-w-40: ==> deploy-core-setup_deploy.stderr <==
hadoop-w-40: dpkg-query: package 'openjdk-7-jdk' is not installed and no information is available
hadoop-w-40: Use dpkg --info (= dpkg-deb --info) to examine archive files,
hadoop-w-40: and dpkg --contents (= dpkg-deb --contents) to list their contents.
hadoop-w-40: Failed to fetch http://httpredir.debian.org/debian/pool/main/x/xml-core/xml-core_0.13+nmu2_all.deb Error reading from server. Remote end closed connection [IP: 128.31.0.66 80]
hadoop-w-40: E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
I tried 16-core 128-node, 32-core 64-node, 32-core 32-node, and other configurations with 1024 or more cores in total, but either reason 1 or reason 2 above shows up.
I also tried modifying the ssh-flag to raise the ConnectTimeout to 1200s, and changing bdutil_env.sh to set the polling interval to 30s, 60s, ...; none of these worked. There are always some nodes that fail.
Here is one of the configurations that I used:
time ./bdutil \
--bucket $BUCKET \
--force \
--machine_type n1-highmem-32 \
--master_machine_type n1-highmem-32 \
--num_workers 64 \
--project $PROJECT \
--upload_files ${JAR_FILE} \
--env_var_files hadoop2_env.sh,extensions/spark/spark_env.sh \
deploy
To summarize some of the information that came out of a separate email discussion: as IP mappings change and different Debian mirrors get assigned, there can be occasional problems where the concurrent calls to apt-get install during a bdutil deployment either overload some unbalanced servers or trigger DDoS protections, leading to deployment failures. These do tend to be transient, and at the moment it appears I can deploy large clusters in zones like us-east1-c and us-east1-d successfully again.
There are a few options you can take to reduce the load on the debian mirrors:
Set MAX_CONCURRENT_ASYNC_PROCESSES to a much smaller value than the default 150 inside bdutil_env.sh, such as 10 to only deploy 10 at a time; this will make the deployment take longer, but would lighten the load as if you just did several back-to-back 10-node deployments.
If the VMs were successfully created but the deployment steps fail, instead of needing to retry the whole delete/deploy cycle, you can try ./bdutil <all your flags> run_command -t all -- 'rm -rf /home/hadoop' followed by ./bdutil <all your flags> run_command_steps to just run through the whole deployment attempt.
Incrementally build your cluster using resize_env.sh; initially set --num_workers 10 and deploy your cluster, and then edit resize_env.sh to set NEW_NUM_WORKERS=20, and run ./bdutil <all your flags> -e extensions/google/experimental/resize_env.sh deploy and it will only deploy the new workers 10-20 without touching those first 10. Then you just repeat, adding another 10 workers to NEW_NUM_WORKERS each time. If a resize attempt fails, you simply ./bdutil <all your flags> -e extensions/google/experimental/resize_env.sh delete to only delete those extra workers without affecting the ones you already deployed successfully.
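For concreteness, a rough sketch of the first and third options above (the variable names and file paths come from the answer itself; the sed edit is just one way to make the change, and <all your flags> stands for the same flags used for the initial deploy):
# First option: throttle concurrent deployment processes in bdutil_env.sh (default is 150).
MAX_CONCURRENT_ASYNC_PROCESSES=10

# Third option: raise NEW_NUM_WORKERS by 10 each round in resize_env.sh, then redeploy.
sed -i 's/^NEW_NUM_WORKERS=.*/NEW_NUM_WORKERS=20/' extensions/google/experimental/resize_env.sh
./bdutil <all your flags> -e extensions/google/experimental/resize_env.sh deploy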
Finally, if you're looking for more reproducible and optimized deployments, you should consider using Google Cloud Dataproc, which lets you use the standard gcloud CLI to deploy clusters, submit jobs, and further manage/delete clusters without needing to remember your bdutil flags or keep track of what clusters you have on your client machine. You can SSH into Dataproc clusters and use them basically the same way as bdutil clusters, with some minor differences, like Dataproc's DEFAULT_FS being HDFS, so that any GCS paths you use should fully specify the complete gs://bucket/object name.