I have two servers, A and B. A shell script on serverA logs into serverB (through SSH) and runs the following command:
sh cassandra-cli -h <serverB> -v -f database_import.txt;
When I do this manually, I follow these steps:
serverA:~$ ssh serverB
serverB:~$ sh cassandra-cli -h <serverB> -v -f database_import.txt;
It works properly when I follow these steps manually, but when I automate the process with the following line in a shell script:
serverA:~$ ssh serverB "sh cassandra-cli -h <serverB> -v -f database_import.txt;"
I get this error:
cassandra-cli: 46: cassandra-cli: -ea: not found
So, as you already pointed out, $JAVA is empty through ssh.
This is because .bashrc is not sourced when you run a command over ssh non-interactively. You can source it explicitly:
. ~/.bashrc
And your command is going to look like this:
ssh serverB ". ~/.bashrc; sh cassandra-cli -h <serverB> -v -f database_import.txt;"
You can also place this in your .bash_profile instead of invoking it manually each time:
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
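Another option, if you would rather not edit dotfiles (a sketch, assuming bash is the remote user's shell): force a login shell on the remote side, since bash -l reads .bash_profile and whatever it sources:
ssh serverB 'bash -lc "sh cassandra-cli -h <serverB> -v -f database_import.txt"'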
I am trying to run the following command in Karate using karate.fork:
ssh -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa root@myjumphost" -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -o PasswordAuthentication=no root@finaldest echo test
I have broken this up into an array to pass to karate.fork like so:
[
ssh,
-o,
ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa root#myjumphost",
-i,
~/.ssh/id_rsa,
-o,
StrictHostKeyChecking=no,
-o,
PasswordAuthentication=no,
root@finaldest,
echo test
]
Then run the command like so:
* karate.fork(args) where args is the array mentioned above
The command works when I paste it into the terminal and run it manually, however when run with karate.fork I get
zsh:1: no such file or directory: ssh -W finaldest:22 -I ~/.ssh/id_rsa root@myjumphost
kex_exchange_identification: Connection closed by remote host
I have tried adding a few backslashes before the " in the ProxyCommand, but no amount of backslashes fixes this issue. I think I am misunderstanding what karate.fork does to run the command. Is there some internal parsing or manipulation of the given input? I was able to get this command to work using useShell: true, but that option breaks other tests for me, so I would really like to avoid it.
I had to remove the double quotes; it seems they didn't play well with karate.fork, and the command still runs without them:
[
ssh,
-o,
ProxyCommand=ssh -W %h:%p -i ~/.ssh/id_rsa root@myjumphost,
-i,
~/.ssh/id_rsa,
-o,
StrictHostKeyChecking=no,
-o,
PasswordAuthentication=no,
root@finaldest,
echo test
]
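That behavior is consistent with karate.fork passing the array straight to the OS without a shell: nothing strips the quotes, so ssh receives them literally, and the ProxyCommand it spawns tries to execute one program whose name is the entire quoted string, which matches the zsh "no such file or directory" error above. You can see the difference locally; printf here just prints each argument it receives in brackets (a shell demonstration, not Karate code):
printf '[%s]\n' ProxyCommand="ssh -W %h:%p"
printf '[%s]\n' 'ProxyCommand="ssh -W %h:%p"'
The first call receives the value without quotes (the shell removed them during parsing); the second keeps them, which is what the forked process saw.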
I have the following command in my gitlab-ci.yml file: - rsync -v -e ssh /builds/Sustersic/untitled-combat-game/build username@1.1.1.1:/var/www/html
I tried rewriting this command with environment variables:
- rsync -v -e ssh /builds/Sustersic/untitled-combat-game/build "$HOST_USERNAME"@"$HOST_IP":/var/www/html
But the environment variables are not getting read correctly. What am I doing wrong?
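One quick sanity check (a debugging sketch, not necessarily the fix) is to echo the expansion in a script line right before the rsync, to see what the runner actually substitutes:
- echo "deploying to $HOST_USERNAME@$HOST_IP"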
We are using Bitbucket Pipelines to push to our remote from the build process you get from the pipeline.
This is a snippet of the bitbucket-pipelines.yml file
- pipe: atlassian/ssh-run:0.2.2
  variables:
    SSH_USER: $PRODUCTION_USER
    SERVER: $PRODUCTION_SERVER
    COMMAND: '''rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:home/$PRODUCTION_USER'''
    PORT: '22007'
The connection itself works, and the command does get executed once it is remoted onto the server...
INFO: Executing the pipe...
INFO: Using default ssh key
INFO: Executing command on {HOST}
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}'
bash: rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}: No such file or directory
Connection to {HOST} closed.
I've tried to run the same command locally from the directory on my machine
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 "$PWD" {USER}@{HOST}:/home/{USER}'
but it just duplicates the home directory on the remote.
It looks to me like it's looking for the source directory on the server rather than in the Docker container from Bitbucket (or the files on my local machine with pwd).
If I try to run the command without the '' then it fails because it uses port 22 by default. I've also tried moving the command into a bash script and using MODE: 'Script', which is an acceptable pattern for the pipe, but I can't use my environment variables in the sh file.
If all you want to do is copy the files from the pipeline to the production server, you should use the rsync-deploy pipe instead of ssh-run. Your pipe configuration is going to look pretty much like the following:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: $PRODUCTION_USER
      SERVER: $PRODUCTION_SERVER
      REMOTE_PATH: 'home/$PRODUCTION_USER'
      LOCAL_PATH: 'build'
      SSH_PORT: '22007'
Make sure to configure your SSH keys in Pipelines properly (here is a link to our docs for configuring SSH keys: https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html).
I've found another way around this that doesn't need a pipe: running rsync as a script step.
image: atlassian/default-image:latest
- rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER
It seems the -e none is an important addition (it sets ssh's escape character to none, so escape sequences can't mangle the rsync stream), as is loading the Atlassian image, which otherwise fails to find the rsync binary. I found this info in a post on Atlassian Community.
This seems to work pretty well for me
image: node:10.15.3
pipelines:
  default:
    - step:
        name: <project-path>
        script:
          - apt-get update && apt-get install -y rsync
          - ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
          - cd $BITBUCKET_CLONE_DIR
          - rsync -r -v -e ssh . $SSH_USER@$SSH_HOST:/<project-path>
          - ssh $SSH_USER@$SSH_HOST 'cd <project-path> && npm install'
          - ssh $SSH_USER@$SSH_HOST 'pm2 restart 0'
Note: avoid using sudo commands in pipeline scripts.
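One extra caveat (an assumption about the image, not part of the original answer): the ssh-keyscan redirect fails if ~/.ssh does not exist yet. Bitbucket creates it when pipeline SSH keys are configured, but creating it defensively is cheap:
- mkdir -p ~/.ssh && chmod 700 ~/.ssh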
Same issue with atlassian/default-image:3:
rsync -azv ./project_path/*
bash: rsync: command not found
Solution:
apt-get update && apt-get install -y rsync
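For context, in a bitbucket-pipelines.yml step the install line goes right before the rsync call. A sketch reusing the variables from the answers above (the local path is a placeholder):
script:
  - apt-get update && apt-get install -y rsync
  - rsync -azv ./project_path/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER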
After using gsutil for more than a year, I suddenly get this error:
.....
At destination listing 8350000...
At destination listing 8360000...
CommandException: Caught non-retryable exception - aborting rsync
.....
I tried to locate the files with this sync problem but I am not able to do so. Is there a "skip error" option, or is there a way I can make gsutil more verbose?
My command line is like this:
gsutil -V -m rsync -d -r -U -P -C -e -x 'Download/*' /opt/ gs://mybucket1/kraanloos/
I have created a script to split the problem. This gives me more info for a solution:
#!/bin/bash
array=(
3ware
AirTime
Amsterdam
BigBag
Download
guide
home
Install
Holding
Multimedia
newsite
Overig
Trak-r
)
for i in "${array[#]}"
do
echo Processing : $i
PROCESS="/usr/bin/gsutil -m rsync -d -r -U -P -C -e -x 'Backup/*' /opt/$i/ gs://mybucket1/kraanloos/$i/"
echo $PROCESS
$PROCESS
echo ""
echo ""
done
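One caveat about this script (a shell-behavior note, not from the original post): quote removal is not reapplied when $PROCESS is expanded unquoted, so gsutil receives the exclude pattern with literal single quotes around Backup/*. If the exclude does not seem to apply, run the command directly inside the loop instead of via the variable:
/usr/bin/gsutil -m rsync -d -r -U -P -C -e -x 'Backup/*' "/opt/$i/" "gs://mybucket1/kraanloos/$i/"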
I've been struggling with the same problem the last few days. One way to make it super verbose is to put the -D flag before the rsync argument, as in:
gsutil -D rsync ...
By doing that, I found that my problem was due to having # characters in filenames, as in this question.
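If you suspect the same cause, a quick scan for offending names before syncing helps (a sketch; replace /opt with your source directory):
find /opt -name '*#*' -print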
In my case, it was because of a broken link to a directory.
As blambert said, use the -D option to see exactly what file causes the problem.
I had struggled with this problem as well, and finally figured it out:
You need to re-authenticate your Google Cloud SDK shell and set a target project again.
It seems that gsutil rsync will not show the correct error message.
Try cp instead; it will guide you to authenticate and set the correct primary project:
gsutil cp OBJECT_LOCATION gs://DESTINATION_BUCKET_NAME/
After that, your gsutil rsync should run fine.
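For reference, the usual commands for re-authenticating and setting the project are the standard gcloud ones (PROJECT_ID is a placeholder):
gcloud auth login
gcloud config set project PROJECT_ID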
Setup:
Local *nix machine with a SQL script script.sql (Postgres).
Remote machine remote (Debian 7) with Postgres.
I can SSH in as some_user, who is a sudoer.
Anything with Postgres needs to be done as postgres user.
The server only listens on localhost:5432.
How do I execute script.sql on remote without copying it there first?
This works well:
ssh -t some_user#remote 'sudo -u postgres psql -c "COMMANDS FOO BAR"'
The -t flag means that sudo will ask for some_user's password correctly on the local terminal.
One thing remains: being able to pipe script.sql to psql. This does not work:
ssh -t some_user#remote 'sudo -u postgres psql' < script.sql
It fails with the message:
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: no tty present and no askpass program specified
Edit: simplified example
Postgres and psql don't seem to figure much in the problem. The following code has the same issues:
ssh some_user#remote xargs sudo ls < input_file
The problem seems to be that we need to send two inputs to sudo: the password (over a tty) and the stdin to pass to ls.
Edit: even simpler
ssh localhost xargs sudo ls < input_file
sudo: no tty present and no askpass program specified
Adding -t does not work:
$ ssh -t localhost xargs sudo ls < input_file
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: no tty present and no askpass program specified
Adding another -t does not work either:
$ ssh -t -t localhost xargs sudo ls < input_file
<content of input_file>
<waiting on a prompt>
ssh -T some_user#remote "sudo -u postgres psql -f-" < script.sql
"-f-" will read the script from STDIN. Just redirect the file in there, and there you go.
Don't bother with the -t option to ssh; you don't need a full terminal for this.
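Note that this assumes sudo will not prompt for a password, since stdin is now occupied by the script. If it does prompt, one option (a sudoers sketch; the psql path may differ on your distribution) is to let some_user run psql as postgres without a password:
some_user ALL=(postgres) NOPASSWD: /usr/bin/psql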
ssh -T ${user}@${ip} sudo -u postgres DEBIAN_FRONTEND=noninteractive psql -f- < test.sql
Use DEBIAN_FRONTEND=noninteractive to resolve the "no tty present" error, or the equivalent for your distribution.