I am getting this error when building a private Ethereum node:
flag provided but not defined: -minerthreads
It happens whenever I try to start the node.
By the way, this is the main command in startnode.cmd:
geth --networkid 4224 --mine --minerthreads 1 --datadir "." --nodiscover --rpc --rpcport "8545" --port "30303" --rpccorsdomain "*" --nat "any" --rpcapi eth,web3,personal,net --unlock 0 --password ./password.sec
Try --miner.threads=1 instead; the --minerthreads flag was renamed in newer geth releases. See the geth command-line options documentation for the full list of current flags.
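For reference, a corrected startnode.cmd could look like this (a sketch assuming a recent geth release; on the newest versions the --rpc* flags have also been renamed to their --http* equivalents, so adjust those too if geth rejects them):
geth --networkid 4224 --mine --miner.threads=1 --datadir "." --nodiscover --rpc --rpcport "8545" --port "30303" --rpccorsdomain "*" --nat "any" --rpcapi eth,web3,personal,net --unlock 0 --password ./password.sec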
note: backend error output: -v: -c: line 0: unexpected EOF while looking for matching `''
-v: -c: line 1: syntax error: unexpected end of file
ConEmuC: Root process was alive less than 10 sec, ExitCode=0.
Press Enter or Esc to close console...
This is the error I am getting.
Also, I have set fish as the default shell in WSL.
This is for WSL1 on a Windows 10 build later than 1909 (yes, WSL2 is available to me, but for corporate reasons I can't use it).
Try setting your command to wsl.exe -new_console:d:C:\_stuff\code -cur_console:p5 and the task parameters to /dir "c:/_stuff/code" /icon "c:/_distros/ubuntu/ubuntu1804.exe".
You may need to change the file locations to make the command and parameters suitable for your setup; c:/_stuff/code is where I keep all my repositories and c:/_distros/ubuntu is where I have installed Ubuntu.
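Laid out as a ConEmu task, that looks roughly like this (the paths are only examples from my machine; substitute your own repository and distro locations):
Command:         wsl.exe -new_console:d:C:\_stuff\code -cur_console:p5
Task parameters: /dir "c:/_stuff/code" /icon "c:/_distros/ubuntu/ubuntu1804.exe"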
I am new to AWS. I was trying to create a pipeline, but it throws this error once it builds:
[Container] 2020/05/23 04:32:56 Phase context status code: Decrypted Variables Error Message: parameter does not exist: JWT_SECRET
Even though the token was stored by running this command:
aws ssm put-parameter --name JWT_SECRET --value "myjwtsecret" --type SecureString
I tried to fix that by adding this line to the post_build commands in buildspec.yml, but it still does not fix the problem:
- kubectl set env deployment/simple-jwt-api JWT_SECRET=$JWT_SECRET
My buildspec.yml contains this added section to pass my JWT secret to the app:
env:
  parameter-store:
    JWT_SECRET: JWT_SECRET
Check my GitHub repo for more details about the code.
Also, when I run kubectl get services simple-jwt-api -o wide under cmd to test the API endpoints, I get this error:
Error from server (NotFound): services "simple-jwt-api" not found
Well, that is obvious since the pipeline failed to build. Please, how can I fix it?
In my case I got this error because I had created my stack in a different region than the cluster, so when the build looked up the parameter it could not find it. Be careful to point to the same region in every creation step :).
The best solution I found was to add a --region flag when storing the parameter:
aws ssm put-parameter --name JWT_SECRET --value "myjwtsecret" --type SecureString --region <your-cluster-region>
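To double-check that the parameter is actually visible in the cluster's region before re-running the pipeline, you can read it back (a quick sanity check; replace the region placeholder with your cluster's region):
aws ssm get-parameter --name JWT_SECRET --with-decryption --region <your-cluster-region>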
I also encountered this same issue.
Changing the kubectl version in the buildspec.yml file worked for me:
- curl -LO https://dl.k8s.io/release/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl
# Download the kubectl checksum file
- curl -LO "https://dl.k8s.io/v<YOUR_KUBERNETES_VERSION>/bin/linux/amd64/kubectl.sha256"
Note that <YOUR_KUBERNETES_VERSION> must match the version shown on your created cluster's dashboard.
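If you want to be thorough, you can also verify the downloaded binary against the checksum before installing it in the build phase (a sketch, assuming a Linux build image with sha256sum available; the install path may differ in your image):
- echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
- chmod +x kubectl && mv kubectl /usr/local/bin/kubectl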
While running my Snakemake file on a cluster I keep getting an error:
snakemake -j 20 --cluster "qsub -o out.txt -e err.txt -q debug" \
    -s seadragon/scripts/viral_hisat.snake \
    --config json="<input file>" output="<output file>"
Now this gives me the following error:
Error in job run_salmon while creating output file
/gpfs/home/user/seadragon/output/quant_v2_4/test.
ClusterJobException in line 58 of seadragon/scripts/viral_hisat.snake
:
Error executing rule run_salmon on cluster (jobid: 1, external: 156618.sn-mgmt.cm.cluster, jobscript: /gpfs/home/user/.snakemake/tmp.j9nb0hyo/snakejob.run_salmon.1.sh). For detailed error see the cluster log.
Will exit after finishing currently running jobs.
Exiting because a job execution failed. Look above for error message
Now I can't find a way to track down the error, since my cluster does not give me a way to store the log files, and the /gpfs/home/user/.snakemake/tmp.j9nb0hyo/snakejob.run_salmon.1.sh jobscript is deleted immediately after the job finishes.
Please let me know if there is a way to keep this shell script even when Snakemake fails.
I am not a qsub user anymore, but if I remember correctly, stdout and stderr are stored in the working directory under the job id that Snakemake reports as external in the error message.
You need to redirect standard output and standard error to files yourself instead of relying on the cluster or Snakemake to do this for you.
Instead of the following:
my_script.sh
run the following:
my_script.sh > output_file.txt 2> error_file.txt
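Applied to your invocation, the redirection can also go into the qsub options, since -o and -e accept a directory (a sketch; it assumes your scheduler accepts directory paths for -o/-e and that logs/ exists before submission):
mkdir -p logs
snakemake -j 20 --cluster "qsub -o logs/ -e logs/ -q debug" \
    -s seadragon/scripts/viral_hisat.snake \
    --config json="<input file>" output="<output file>"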
I'm attempting to set up a new compute cluster, and I'm currently seeing errors when using the qsub command in SGE. Here's a simple experiment that shows the problem:
test.sh
#!/usr/bin/zsh
test="hello"
echo "${test}"
test.sh.eXX
test=hello: Command not found.
test: Undefined variable.
test.sh.oXX
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
If I run the script on the head node (sh test.sh), the output is correct. I submit the job to SGE by typing qsub test.sh.
If I submit the exact same script in the same way on an established HPC compute cluster, it works exactly as expected. What setting could be causing this problem?
Thanks for any help on this matter.
Most likely the queues on your cluster are set to posix_compliant mode with a default shell of /bin/csh. The posix_compliant setting means your #! line is ignored. You can either change the queues to unix_behavior or specify the required shell using qsub's -S option:
#$ -S /bin/sh
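The directive above goes at the top of the submitted script; equivalently you can pass the shell on the command line at submission time (shown here with zsh, assuming it is installed at that path on the execution hosts):
qsub -S /usr/bin/zsh test.sh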
When running a Fabric task on a remote server I get the following stack trace:
[x.x.x.x] run: git fetch && git reset --hard origin/develop
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
self.run()
File "/var/lib/jenkins/jobs/deploy/workspace/.pyenv/lib/python2.6/site-packages/ssh/agent.py", line 115, in run
self._communicate()
File "/var/lib/jenkins/jobs/deploy/workspace/.pyenv/lib/python2.6/site-packages/ssh/agent.py", line 125, in _communicate
events = select([self._agent._conn, self.__inr], [], [], 0.5)
TypeError: argument must be an int, or have a fileno() method.
The fact that the Fabric task is trying to perform a git fetch and that the exception is raised in ssh/agent.py makes me think something is wrong with SSH authentication.
The same user can run git fetch outside of Fabric, and the task runs fine from my laptop.
What's going on here? How do I resolve this issue?
An issue raised on Fabric's issue tracker mentions that the error might arise from not having ssh-agent running on the host.
I solved the problem by starting an ssh-agent and adding the user's key:
$> eval `ssh-agent`
$> ssh-add ~/.ssh/id_rsa
Success!
To auto-start ssh-agent when you first login, add this to your ~/.bashrc:
if [ ! -S ~/.ssh/ssh_auth_sock ]; then
    # no agent socket yet: start a new agent, link its socket to a stable path, and load the key
    eval `ssh-agent`
    ln -sf "$SSH_AUTH_SOCK" ~/.ssh/ssh_auth_sock
    ssh-add
fi
export SSH_AUTH_SOCK=~/.ssh/ssh_auth_sock
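To confirm the agent is actually reachable from the environment Fabric runs in (for example the Jenkins job), a quick sanity check is to list the loaded keys:
echo "$SSH_AUTH_SOCK"   # should point at the agent socket
ssh-add -l              # should list the key you added rather than erroring out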
I ran into this error while using Fabric with Python/Django when I was trying to execute tasks by hand within ./manage.py shell_plus.
It turns out (for me) that the error was caused by the fact that my shell_plus was set up to use bpython instead of ipython.
When I ran ./manage.py shell_plus --ipython instead, everything worked perfectly.
I realize that this probably wasn't a direct answer to your problem, but I figure I might as well leave a note here for anyone else who happens across the issue like I did.