I have the following command in my gitlab-ci.yml file:
- rsync -v -e ssh /builds/Sustersic/untitled-combat-game/build username@1.1.1.1:/var/www/html
I tried rewriting this command with environment variables:
- rsync -v -e ssh /builds/Sustersic/untitled-combat-game/build "$HOST_USERNAME"@"$HOST_IP":/var/www/html
But the environment variables are not getting read correctly. What am I doing wrong?
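Not a definitive fix, but a quick way to see whether the variables actually reach the job is to echo them right before the rsync call; a minimal sketch, assuming HOST_USERNAME and HOST_IP are defined as CI/CD variables for this project (and are not restricted to protected branches):
deploy:
  script:
    # print the values the job actually sees; empty output means the variables never reached the job
    - 'echo "deploy target: $HOST_USERNAME@$HOST_IP"'
    - rsync -v -e ssh /builds/Sustersic/untitled-combat-game/build "$HOST_USERNAME@$HOST_IP:/var/www/html"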
Related
Using Bitbucket Pipelines to push the build produced by the pipeline to our remote server.
This is a snippet of the bitbucket-pipelines.yml file:
- pipe: atlassian/ssh-run:0.2.2
  variables:
    SSH_USER: $PRODUCTION_USER
    SERVER: $PRODUCTION_SERVER
    COMMAND: '''rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:home/$PRODUCTION_USER'''
    PORT: '22007'
The connection itself works, and it does run the command correctly once it is remoted onto the server...
INFO: Executing the pipe...
INFO: Using default ssh key
INFO: Executing command on {HOST}
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}'
bash: rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}: No such file or directory
Connection to {HOST} closed.
I've tried to run the same command locally from the directory on my machine
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 "$PWD" {USER}@{HOST}:/home/{USER}'
but it just duplicates the home directory on the remote.
It looks to me like it's looking for the source directory on the server and not looking at the docker container from bitbucket (or the files on my local machine with pwd).
If I try to run the command without the '' then it fails because it uses port 22 by default. I've also tried moving the command into a bash script and using MODE: 'Script', which is an accepted pattern for the pipe, but I can't use my environment variables in the .sh file.
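As an aside on the quoting: the bash error above shows the entire rsync string being treated as a single command name, which is what happens when the nested quotes collapse into one word. A sketch of the same step with single-level quoting (this only addresses the quoting; as noted above, with ssh-run the rsync source path is resolved on the server, so the rsync-deploy pipe below is likely the better fit):
- pipe: atlassian/ssh-run:0.2.2
  variables:
    SSH_USER: $PRODUCTION_USER
    SERVER: $PRODUCTION_SERVER
    PORT: '22007'
    # single-level quoting so the remote shell sees an rsync command, not one giant word
    COMMAND: 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:home/$PRODUCTION_USER'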
If all you want to do is copy the files from the pipeline to the production server, you should use the rsync-deploy pipe instead of ssh-run. Your pipe configuration is going to look pretty much like the following:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: $PRODUCTION_USER
      SERVER: $PRODUCTION_SERVER
      REMOTE_PATH: 'home/$PRODUCTION_USER'
      LOCAL_PATH: 'build'
      SSH_PORT: '22007'
Make sure to configure your SSH keys in pipelines properly (here is a link to our docs for configuring SSH keys https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html)
I've found another way around this that doesn't need a pipe: instead, I'm running rsync as a script step
image: atlassian/default-image:latest
- rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER
It seems the -e none is an important addition, as is using the Atlassian image, since the rsync binary is not found otherwise. I found this info in a post on Atlassian Community.
This seems to work pretty well for me
image: node:10.15.3
pipelines:
  default:
    - step:
        name: <project-path>
        script:
          - apt-get update && apt-get install -y rsync
          - ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
          - cd $BITBUCKET_CLONE_DIR
          - rsync -r -v -e ssh . $SSH_USER@$SSH_HOST:/<project-path>
          - ssh $SSH_USER@$SSH_HOST 'cd <project-path> && npm install'
          - ssh $SSH_USER@$SSH_HOST 'pm2 restart 0'
Note: avoid using sudo commands in pipeline scripts.
same issue with atlassian/default-image:3
rsync -azv ./project_path/*
bash: rsync: command not found
Solution:
apt-get update && apt-get install -y rsync
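For context, a minimal bitbucket-pipelines.yml sketch showing where that install line sits (the step name, source path, and variables here are placeholders):
image: atlassian/default-image:3

pipelines:
  default:
    - step:
        name: Deploy over rsync
        script:
          # install rsync first, otherwise the next line fails with "command not found"
          - apt-get update && apt-get install -y rsync
          - rsync -azv ./project_path/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER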
Situation:
shell GitLab runner, certificate configured, SSH connection set up as follows:
ssh-keygen --> id_rsa & id_rsa.pub
ssh-copy-id <user>@<remotehost>
ssh <user>@<remotehost> works as designed
id_rsa -> GitLab CI/CD variable called 'SSH_PRIVATE_KEY'
gitlab-ci as follows:
before_script:
  - echo "Before script section"
  # Install ssh-agent if not already installed, it is required by Docker.
  # (change apt-get to yum if you use a CentOS-based image)
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  # Run ssh-agent (inside the build environment)
  - eval $(ssh-agent -s)
  # Add the SSH key stored in the SSH_PRIVATE_KEY variable to the agent store
  - ssh-add < ~/.ssh/id_rsa
  - ssh-add -l

build1:
  stage: build
  script:
    - echo "Pulling on Dev\n"
    - ssh -A <user>@<remotehost>
    - hostname
    - ssh-agent bash -c 'hostname'
    - ssh-agent bash -c 'awk "NR==1{print;exit}" /etc/php7/php.ini'
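(Aside: the comment above mentions the SSH_PRIVATE_KEY variable, while the ssh-add line actually loads the runner's local ~/.ssh/id_rsa. If the key is meant to come from the CI/CD variable instead, a sketch along the lines of the GitLab documentation, assuming the variable contains the complete private key, would be:)
before_script:
  - eval $(ssh-agent -s)
  # strip any carriage returns from the variable and feed the key to the agent via stdin
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh && chmod 700 ~/.ssh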
Complication:
when executing commands via gitlab-ci after the ssh connection, they seem to be executed on the GitLab machine (PHP is installed on the ssh'ed system, not on the GitLab runner).
See gitlab job output below:
...
eval $(ssh-agent -s)
Agent pid 1234
$ ssh-add < ~/.ssh/id_rsa
Identity added: /home/gitlab-runner/.ssh/id_rsa (/home/gitlab-runner/.ssh/id_rsa)
$ ssh-add -l
4096 SHA256:<KEY> /home/gitlab-runner/.ssh/id_rsa (RSA)
# same behaviour with ssh -T <user>@<ipaddress> -p <portnumber>
$ ssh -A <user>@<ipaddress> -p <portnumber>
Pseudo-terminal will not be allocated because stdin is not a terminal.
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
$ hostname
gitlab
$ ssh-agent bash -c 'hostname'
gitlab
$ ssh-agent bash -c 'awk "NR==1{print;exit}" /etc/php7/php.ini'
awk: cannot open /etc/php7/php.ini (No such file or directory)
In what way do I need to configure the system, so that the commands are actually run on the ssh'ed system?
I'm currently working with a solution which seems a bit too dirty for me.
In the gitlab-ci I'm pulling and running phpunit as follows:
ssh -T <user>@<remotehost> "cd /var/www/projectfolder; git pull https://<gitlabUser>:$GITLAB_TOKEN@<privateGitlab>/<gitRepo>.git;"
ssh -T <user>@<remotehost> "cd /var/www/projectfolder/tests; phpunit;"
i.e. I'm opening a new ssh connection each time I'd like to run a command, which doesn't quite seem right to me. Any suggestions are welcome!
@til As per your suggestion, here it is as a single ssh command:
ssh -T <user>@<remotehost> "cd /var/www/projectfolder; git pull https://<gitlabUser>:$GITLAB_TOKEN@<privateGitlab>/<gitRepo>.git; cd /var/www/projectfolder/tests; phpunit;"
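If that single quoted string grows unwieldy, a heredoc sends several commands over one connection as well; a sketch with the same placeholders (the unquoted EOF means $GITLAB_TOKEN is expanded on the runner before the commands are sent to the remote host):
ssh -T <user>@<remotehost> << EOF
cd /var/www/projectfolder
git pull https://<gitlabUser>:$GITLAB_TOKEN@<privateGitlab>/<gitRepo>.git
cd /var/www/projectfolder/tests
phpunit
EOF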
I'm trying to execute this shell script from the command line:
host="192.168.X.XXX"
user="USERNAME"
pass="MYPASS"
sshpass -p "$pass" scp -o StrictHostKeyChecking=no /home/MYPATH/File.import "$user@$host:/"home/MYPATH/
This should copy a file from my local server to the remote server. The remote server is a copy of the local server, but when I try to execute this script I get this error:
**PERMISSION DENIED, PLEASE TRY AGAIN**
I don't understand why the same command works when I execute it directly on the command line:
USERNAME@MYSERVER:~$ sshpass -p 'MYPASS' scp -o StrictHostKeyChecking=no /home/MYPATH/File.import USERNAME@192.168.X.XXX:/home/MYPATH/
Does somebody have a solution?
Please use a pipe or the -e option for the password anyway.
export SSHPASS=password
sshpass -e ssh user@remote
Your command with the -e option would simply be:
export SSHPASS=password
sshpass -e scp -o StrictHostKeyChecking=no /home/MYPATH/File.import user@192.168.X.XXX:/home/MYPATH/
Please remove the wrong quotes from your command:
sshpass -p "$pass" scp -o StrictHostKeyChecking=no /home/MYPATH/File.import $user@$host:/home/MYPATH/
You should also be able to remove the quotes around $pass.
Please ensure that you have no special characters in your pass variable or escape them correctly (and no typos anywhere).
For simplicity, use an ssh command instead of scp for testing.
Use the -v or -vvv option for the scp command to check what scp is trying to do. Also check the secure log or auth.log on the remote server.
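For example, a debugging pass over the same transfer might look like this (the auth log path varies by distribution):
# verbose client-side output while the copy runs
sshpass -e scp -v -o StrictHostKeyChecking=no /home/MYPATH/File.import user@192.168.X.XXX:/home/MYPATH/
# on the remote server, watch the authentication log while retrying
sudo tail -f /var/log/auth.log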
You have to install the "sshpass" command, then use the snippet below:
export SSHPASS=password
sshpass -e sftp user@hostname << !
cd sftp_path
put filename
bye
!
A gotcha that I encountered was escaping special characters in the password, which wasn't necessary when entering it in interactive ssh mode.
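One way to sidestep the escaping issue entirely is to keep the password off the command line, either via -e/SSHPASS as above or via sshpass -f, which reads it from a file; a small sketch (the file path is a placeholder):
# password stored as a single line in a file only readable by you
chmod 600 ~/.sftp_pass
sshpass -f ~/.sftp_pass sftp user@hostname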
Is there a way to execute a command before accessing a remote terminal?
When I enter this command:
$> ssh user@server.com 'ls'
The ls command is executed on the remote computer, but then ssh quits and I cannot continue in my remote session.
Is there a way of keeping the connection open? The reason I am asking is that I want to set up the ssh session without having to modify the remote .bashrc file.
I would force the allocation of a pseudo tty and then run bash after the ls command:
syzdek@host1$ ssh -t host2.example.com 'ls -l /dev/null; bash'
-rwxrwxrwx 1 root other 27 Apr 1 2005 /dev/null
bash-4.1$
You can try using process substitution on the init file of bash. In the example below, I define a function myfunc:
myfunc () {
echo "Running myfunc"
}
which I transform into a properly escaped one-liner echoed in the <(...) construct, passed via process substitution as the --init-file argument of bash:
$ ssh -t localhost 'bash --init-file <( echo "myfunc() { echo \"Running myfunc\" ; }" ) '
Password:
bash-3.2$ myfunc
Running myfunc
bash-3.2$ exit
Note that once connected, my .bashrc is not sourced but myfunc is defined and properly usable in an interactive session.
It might prove a little difficult for more complex bash functions, but it works.
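The same trick scales to a few more definitions as long as the escaping stays manageable; a sketch (the host, alias, and function are just examples):
ssh -t remote.example.com 'bash --init-file <( echo "alias ll=\"ls -la\"; greet() { echo \"hello from \$(hostname)\" ; }" )'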
I have two servers A and B, I have a shell script in serverA which logs into serverB (through ssh) and runs the following command:
sh cassandra-cli -h <serverB> -v -f database_import.txt;
so when I do this manually, I follow these steps:
serverA:~$ ssh serverB
serverB:~$ sh cassandra-cli -h <serverB> -v -f database_import.txt;
It works properly when I follow these steps manually but when I automate this process in a shell script by this following line:
serverA:~$ ssh serverB "sh cassandra-cli -h <serverB> -v -f database_import.txt;"
I get this error,
cassandra-cli: 46: cassandra-cli: -ea: not found
So, as you already pointed out, $JAVA is empty through ssh.
This is because .bashrc is not sourced when you log in using ssh. You can source it like this:
. ~/.bashrc
And your command is going to look like this:
ssh serverB ". ~/.bashrc; sh cassandra-cli -h <serverB> -v -f database_import.txt;"
You can also try placing this into your .bash_profile instead of invoking it manually each time.
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
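Alternatively, forcing a login shell on the remote side makes bash read .bash_profile (and, with the snippet above, .bashrc) without sourcing anything by hand; a sketch with the same placeholders as the question:
# -l makes the remote bash a login shell, so profile files are read before the command runs
ssh serverB 'bash -lc "sh cassandra-cli -h <serverB> -v -f database_import.txt"'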