Errors in shell scripting

Does somebody know what this error means?
Usage: tcsh [ -bcdefilmnqstvVxX ] [ argument ... ].
I get this error after adding this line to my script:
#! /bin/tcsh -f

That doesn't look like an actual error. If there were an error, bash would have said something like bash: error importing function definition for `module'. What Linux system are you running the shell on? There is often documentation on error messages; try the command's info page or browse its manual with man.
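For what it's worth, that Usage line is what tcsh itself prints when it cannot parse its command-line arguments. One common cause, though only a guess for this particular script, is Windows line endings: the kernel then hands tcsh the option -f with a trailing carriage return attached. A quick check and fix, assuming GNU cat and sed (the script name here is a placeholder):
# A trailing ^M before the $ means the file has CRLF line endings
head -n 1 myscript.csh | cat -A
# Strip the carriage returns in place
sed -i 's/\r$//' myscript.csh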


Setting {WSL::Bash} as default shell throws an error in cmder

note: backend error output: -v: -c: line 0: unexpected EOF while looking for matching `''
-v: -c: line 1: syntax error: unexpected end of file
ConEmuC: Root process was alive less than 10 sec, ExitCode=0.
Press Enter or Esc to close console...
This is the error I am getting.
Also, I have set fish as the default shell in WSL.
This is for WSL1 on a Windows 10 build later than 1909 (yes, WSL2 is available to me, but for corporate reasons I can't use it).
Try setting your command to wsl.exe -new_console:d:C:\_stuff\code -cur_console:p5 and the task parameters to /dir "c:/_stuff/code" /icon "c:/_distros/ubuntu/ubuntu1804.exe"
You may need to change the file locations to make the command and parameters suitable for your setup. c:/_stuff/code is where I keep all my repositories and c:/_distros/ubuntu is where I have installed Ubuntu.
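If the underlying problem is that cmder is handing bash-style -c arguments to fish, it can also help to test with bash as the WSL login shell and only switch back once the task works. A minimal sketch, run inside the distro (the fish path is an assumption; verify it with which fish):
# Temporarily make bash the login shell again
chsh -s /bin/bash
# Restore fish once the cmder task is confirmed working
chsh -s "$(which fish)"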

How to view detailed error message in failed build

So this is the only thing I see on a failed build. When running npm scripts on a CLI, you usually see more than the exit status. Is there some option to view the entire CLI output instead of this pseudo log?
I contacted support and was told to cat the debug log in order to see the output:
#!/bin/bash
set -ex
# Cat every npm debug log so the full CLI output shows up in the build log
find "$HOME/.npm/_logs" -name '*-debug.log' -exec cat {} +
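If earlier runs have left several logs behind, catting only the newest one is usually what you want. A small sketch using standard tools (assumes the log file names contain no spaces or newlines):
#!/bin/bash
set -e
# Print only the most recently modified npm debug log
newest=$(ls -t "$HOME"/.npm/_logs/*-debug.log | head -n 1)
cat "$newest"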

How to keep the snakemake shell file while running on a cluster

While running my snakemake file on a cluster I keep getting an error. The command:
snakemake -j 20 --cluster "qsub -o out.txt -e err.txt -q debug" \
    -s seadragon/scripts/viral_hisat.snake \
    --config json="<input file>" output="<output file>"
Now this gives me the following error:
Error in job run_salmon while creating output file
/gpfs/home/user/seadragon/output/quant_v2_4/test.
ClusterJobException in line 58 of seadragon/scripts/viral_hisat.snake:
Error executing rule run_salmon on cluster (jobid: 1, external: 156618.sn-mgmt.cm.cluster, jobscript: /gpfs/home/user/.snakemake/tmp.j9nb0hyo/snakejob.run_salmon.1.sh). For detailed error see the cluster log.
Will exit after finishing currently running jobs.
Exiting because a job execution failed. Look above for error message
Now I can't find any way to track down the error: my cluster does not give me a way to store the log files, and the /gpfs/home/user/.snakemake/tmp.j9nb0hyo/snakejob.run_salmon.1.sh file is deleted immediately after the run finishes.
Please let me know if there is a way to keep this shell file even if snakemake fails.
I am not a qsub user anymore, but if I remember correctly, stdout and stderr are stored in the working directory, under the job id that Snakemake gives you as external in the error message.
You need to redirect the standard output and standard error output to a file yourself instead of relying on the cluster or snakemake to do this for you.
Instead of the following:
my_script.sh
run the following:
my_script.sh > output_file.txt 2> error_file.txt
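If the command runs from a Snakemake rule, the redirection can go straight into the shell directive. A minimal sketch (the rule name is taken from the error message above; the script name, output path, and log file names are placeholders):
rule run_salmon:
    output:
        "output/quant_v2_4/test"
    # Keep stdout and stderr in files that survive the cluster job
    shell:
        "my_script.sh > run_salmon.out.txt 2> run_salmon.err.txt"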

Can I fail a build based on the outcome of an SSH Task?

I was wondering if I could use Bamboo's SSH task to run a script (this kicks off a small Java message injector), then grep the logs for errors. If any ERROR is present, I would like to fail the build.
Is this a Bash question, or is it really about Bamboo? Here is the answer to the Bash part:
If you run
[[ ! $(grep ERROR /a/directory/log/*) ]]
the script will exit with a non-zero status if it finds the word "ERROR" anywhere in the files.
Bamboo should detect the task execution as failed.
(Note that if Bash is not the default shell on your target system you may need a #!/bin/bash on top of the script file.)
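Putting it together, a sketch of what the script run by the SSH task could look like (the log path comes from the snippet above; the injector command is a placeholder):
#!/bin/bash
# Kick off the injector, then fail the task if any log contains ERROR
/path/to/injector.sh
if grep -q ERROR /a/directory/log/*; then
    echo "ERROR found in logs, failing the build" >&2
    exit 1
fi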

SGE Command Not Found, Undefined Variable

I'm attempting to set up a new compute cluster and am currently experiencing errors when using the qsub command in SGE. Here's a simple experiment that shows the problem:
test.sh
#!/usr/bin/zsh
test="hello"
echo "${test}"
test.sh.eXX
test=hello: Command not found.
test: Undefined variable.
test.sh.oXX
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
If I run the script on the head node (sh test.sh), the output is correct. I submit the job to SGE by typing qsub test.sh.
If I submit the exact same job in the same way on an established compute cluster like our HPC, it works perfectly as expected. What setting could be causing this problem?
Thanks for any help on this matter.
Most likely the queues on your cluster are set to posix_compliant mode with a default shell of /bin/csh. The posix_compliant setting means your #! line is ignored. You can either change the queues to unix_behavior or specify the required shell with qsub's -S option, e.g. as an embedded directive in the script:
#$ -S /bin/sh
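Applied to the test script above, a minimal sketch (assuming zsh is installed at /usr/bin/zsh on the execution hosts):
#!/usr/bin/zsh
#$ -S /usr/bin/zsh
test="hello"
echo "${test}"
Equivalently, the shell can be requested at submission time with qsub -S /usr/bin/zsh test.sh.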