TAR over two hops - ssh

I need to create a tar archive and ship it to my local folder.
If I can create the tar file, I can easily get it to my local folder using scp.
The problem is with the first step: creating the tar on the remote server. The server is accessible only through another remote server (a bastion server).
Here is the command I'm currently using:
timestamp="20160226-085856"
ssh bastion_server -t ssh remote_server "sudo su -c \"cp -r /etc/nginx /home/ubuntu/backup/nginx_26Feb && cd /home/ubuntu/backup && tar -C /home/ubuntu/backup -cf backup_nginx-$timestamp.tar ./nginx_26Feb\" "
Here is the error I am getting:
su: invalid option -- 'r'
Usage: su [options] [LOGIN]
Any help here would be great.

Give it a try without the fancy sudo su -c (each ssh hop re-parses the command through a remote shell and strips one level of quoting, so by the time su runs, its -c argument is just cp and the -r ends up parsed as an option to su itself). Using sudo -s should be enough:
ssh bastion_server -t ssh remote_server "sudo -s cp -r /etc/nginx \
/home/ubuntu/backup/nginx_26Feb && cd /home/ubuntu/backup && \
tar -C /home/ubuntu/backup -cf backup_nginx-$timestamp.tar ./nginx_26Feb"
Or better, set up a proper two-hop ~/.ssh/config:
Host bastion
    Hostname bastion_server

Host remote
    Hostname remote_server
    ProxyCommand ssh -W %h:%p bastion
and then just run
ssh remote sudo su -c "cp -r /etc/nginx /home/ubuntu/backup/nginx_26Feb \
&& cd /home/ubuntu/backup && tar -C /home/ubuntu/backup -cf \
backup_nginx-$timestamp.tar ./nginx_26Feb"
Without the fancy escaping and stuff.
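Note that even the single-hop command still passes through one remote shell, so the inner quoting needs some care. One way to sidestep both the quoting and the temporary file (a sketch of my own, assuming the ~/.ssh/config above and that sudo runs non-interactively on the remote host) is to stream the archive straight back to the local machine:

timestamp="20160226-085856"
# tar writes the archive to stdout on the remote host; ssh carries it back,
# and the local redirection saves it as a regular file. No remote temp copy, no scp step.
ssh remote "sudo tar -C /etc -cf - nginx" > backup_nginx-$timestamp.tar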

Related

SSHPASS Not Passing Password to SUDO Command

I have the below command and I'm unable to get it to run. It says that no password is being supplied to the sudo command. Any ideas or help would be greatly appreciated; I've read every post I can find, but to no avail.
sshpass -p $PASSWD ssh client_user@192.169.0.178 'cd /tmp/;echo $PASSWD | sudo -S mkdir ./test/;'
Also tried the below with no luck:
sshpass -p $PASSWD ssh client_user@192.169.0.178 <<EOF
cd /tmp/;
echo $PASSWD | sudo -S mkdir ./test/;
EOF
Error:
sudo: no password was provided
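One likely factor (my observation, not part of the original post): inside single quotes, $PASSWD is not expanded by the local shell, so the remote side pipes an empty string into sudo -S. A hedged sketch that lets the local shell substitute the value before the command is sent (note this exposes the password in the remote command line):

# Double quotes: $PASSWD is expanded locally, then the literal value travels inside the remote command.
sshpass -p "$PASSWD" ssh client_user@192.169.0.178 "cd /tmp/ && echo '$PASSWD' | sudo -S mkdir ./test/"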

rsync not finding local directory when sending through SSH on pipeline

I'm using Bitbucket Pipelines to push to our remote server from the build that the pipeline produces.
This is a snippet of the bitbucket-pipelines.yml file
- pipe: atlassian/ssh-run:0.2.2
  variables:
    SSH_USER: $PRODUCTION_USER
    SERVER: $PRODUCTION_SERVER
    COMMAND: '''rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:home/$PRODUCTION_USER'''
    PORT: '22007'
The connection itself works, and it does run the command correctly once it is remoted onto the server...
INFO: Executing the pipe...
INFO: Using default ssh key
INFO: Executing command on {HOST}
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}'
bash: rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}: No such file or directory
Connection to {HOST} closed.
I've tried to run the same command locally from the directory on my machine
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 "$PWD" {USER}@{HOST}:/home/{USER}'
but it just duplicates the home directory on the remote.
It looks to me like it's looking for the source directory on the server, not in the Docker container from Bitbucket (or, locally, the files in $PWD on my machine).
If I try to run the command without the '' it fails because it uses port 22 by default. I've also tried moving the command into a bash script and using MODE: 'Script', which is an acceptable pattern for the plugin, but I can't use my environment variables in the sh file.
If all you want to do is copy the files from the pipeline to the production server, you should use the rsync-deploy pipe instead of ssh-run. Your pipe configuration will look pretty much like the following:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: $PRODUCTION_USER
      SERVER: $PRODUCTION_SERVER
      REMOTE_PATH: 'home/$PRODUCTION_USER'
      LOCAL_PATH: 'build'
      SSH_PORT: '22007'
Make sure to configure your SSH keys in pipelines properly (here is a link to our docs for configuring SSH keys https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html)
I've found another way around this that doesn't need a plugin: running rsync as a plain script step.
image: atlassian/default-image:latest
- rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER
It seems the -e none is an important addition, as is loading the atlassian image, since rsync isn't found otherwise. I found this info in a post on Atlassian Community.
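For context, a sketch (my own, not from the post) of how that line might sit inside a full bitbucket-pipelines.yml; the step name is a placeholder:

image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        name: deploy  # placeholder step name
        script:
          - rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER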
This seems to work pretty well for me
image: node:10.15.3
pipelines:
  default:
    - step:
        name: <project-path>
        script:
          - apt-get update && apt-get install -y rsync
          - ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
          - cd $BITBUCKET_CLONE_DIR
          - rsync -r -v -e ssh . $SSH_USER@$SSH_HOST:/<project-path>
          - ssh $SSH_USER@$SSH_HOST 'cd <project-path> && npm install'
          - ssh $SSH_USER@$SSH_HOST 'pm2 restart 0'
Note: Avoid using sudo cmd in pipeline scripts
Same issue with atlassian/default-image:3:
rsync -azv ./project_path/*
bash: rsync: command not found
Solution:
apt-get update && apt-get install -y rsync

Gitlab CI/CD: Deploy to ubuntu server using ssh keys (using a windows shell runner)

Hello everyone, I need your help please. I'm using GitLab CI/CD and trying to deploy my .jar application to an Ubuntu server. I configured my GitLab project with a Windows runner using the shell executor, and I set up key-based access on the runner to avoid being prompted for a password.
The following command runs successfully when I log in to the runner machine and use its PowerShell:
scp -i C:\Users\Administrators\ssh\id_rsa myapp-0.0.1-SNAPSHOT.jar username@myubuntuserver:/
But when I use the above command in my .yml file to copy the .jar onto the server, it gives no response until the job fails due to a timeout.
I also tried the solution proposed here, https://docs.gitlab.com/ee/ci/ssh_keys/, by setting an SSH_PRIVATE_KEY variable on my project, but I'm unable to adapt the given before_script to my Windows runner.
This is the before_script proposed in the documentation (above link):
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
When the previous scp command is replaced by this:
ssh -iv C:\Users\Administrators\ssh\id_rsa username@myubuntuserver
I get the output shown in the attached screenshot (image not reproduced here).
Thanks in advance
It works after doing the following steps:
1) configuring the runner (shell executor) on Ubuntu 18.04
2) Then from the terminal, log in as the gitlab-runner user: sudo su - gitlab-runner
3) run ssh-keygen -t rsa
4) run ssh -i ~/.ssh/id_rsa username@myubuntuserver
5) run cat ~/.ssh/id_rsa.pub | ssh username@myubuntuserver "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
6) now you can add the following to your job script (yml file) and it should work:
- scp -i ~/.ssh/id_rsa fileToCopy username@myubuntuserver:/mydirectory
# you can execute multiple commands at a time, for example:
- ssh username@myubuntuserver "mv /mydirectory/myapp-0.0.1-SNAPSHOT.jar /mydirectory/myapp.jar"
Hope it will help
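For reference, wrapped into a complete job those script lines might look roughly like this (the job and stage names are placeholders of mine, not from the answer):

deploy-job:
  stage: deploy
  script:
    - scp -i ~/.ssh/id_rsa myapp-0.0.1-SNAPSHOT.jar username@myubuntuserver:/mydirectory
    - ssh username@myubuntuserver "mv /mydirectory/myapp-0.0.1-SNAPSHOT.jar /mydirectory/myapp.jar"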
If ssh -iv C:\Users\Administrators\ssh\id_rsa username@myubuntuserver does not work, that may be because of the C: part, which confuses ssh into thinking C is the name of the server!
A Unix-like path would work:
ssh -iv /C/Users/Administrators/ssh/id_rsa username@myubuntuserver
But, as the OP Medmahmoud comments, this supposes the public key has been published on the server:
Configure the runner on ubuntu18.04.
Then from the terminal login as the gitlab-runner user:
sudo su - gitlab-runner
ssh-keygen -t rsa
ssh -i ~/.ssh/id_rsa username@myubuntuserver
cat ~/.ssh/id_rsa.pub | ssh username@myubuntuserver \
"mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
Now from your yml file the following should work:
- scp -i ~/.ssh/id_rsa pom.xml username@myubuntuserver:/mydirectory

mkdir -p over SSH bash

I have a small test script as follows;
TESTDIR="$HOSTNAME"
ssh user@server.com "\$TESTDIR"
mkdir -p ~/$TESTDIR/test
exit
The output with bash -x is:
+ TESTDIR=ndx
+ ssh user@server.com '$TESTDIR'
+ mkdir -p /home/user/ndx/test
+ exit
Yet on the remote server, no directory exists?
The last argument of ssh is the command you want to execute on the remote host:
TESTDIR="$HOSTNAME"
ssh user#server.com "mkdir -p ~/$TESTDIR/test"
If you use a .pem file for SSH authentication, use the following:
ssh -i your-key.pem user#ip_addr "mkdir -p /your_dir_name/test"

Docker - Cannot start Redis Service

I'm installing Redis, setting up init.d, and placing redis.conf beside init.d.
Then I use CMD service init.d start to start Redis.
However, Redis-Server does not start, and there is no indication in the log file that the service failed to start.
Installing Redis and placing redis.conf into the /etc/init.d folder
Commands:
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r redis && useradd -r -g redis redis
RUN apt-get update > /dev/null \
&& apt-get install -y curl > /dev/null 2>&1 \
&& rm -rf /var/lib/apt/lists/* > /dev/null 2>&1
# grab gosu for easy step-down from root
RUN gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
RUN curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" > /dev/null 2>&1 \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" > /dev/null 2>&1 \
&& gpg --verify /usr/local/bin/gosu.asc > /dev/null 2>&1 \
&& rm /usr/local/bin/gosu.asc > /dev/null 2>&1 \
&& chmod +x /usr/local/bin/gosu > /dev/null 2>&1
ENV REDIS_VERSION 3.0.1
ENV REDIS_DOWNLOAD_URL http://download.redis.io/releases/redis-3.0.1.tar.gz
ENV REDIS_DOWNLOAD_SHA1 fe1d06599042bfe6a0e738542f302ce9533dde88
# for redis-sentinel see: http://redis.io/topics/sentinel
RUN buildDeps='gcc libc6-dev make'; \
set -x \
&& apt-get update > /dev/null && apt-get install -y $buildDeps --no-install-recommends > /dev/null 2>&1 \
&& rm -rf /var/lib/apt/lists/* > /dev/null 2>&1 \
&& mkdir -p /usr/src/redis > /dev/null 2>&1 \
&& curl -sSL "$REDIS_DOWNLOAD_URL" -o redis.tar.gz > /dev/null 2>&1 \
&& echo "$REDIS_DOWNLOAD_SHA1 *redis.tar.gz" | sha1sum -c - > /dev/null 2>&1 \
&& tar -xzf redis.tar.gz -C /usr/src/redis --strip-components=1 > /dev/null 2>&1 \
&& rm redis.tar.gz > /dev/null 2>&1 \
&& make -C /usr/src/redis > /dev/null 2>&1 \
&& make -C /usr/src/redis install > /dev/null 2>&1 \
&& cp /usr/src/redis/utils/redis_init_script /etc/init.d/redis_6379
&& rm -r /usr/src/redis > /dev/null 2>&1 \
&& apt-get purge -y --auto-remove $buildDeps > /dev/null 2>&1
RUN mkdir /data && chown redis:redis /data
VOLUME [/data]
WORKDIR /data
CMD Service init.d start
Command:
RUN touch /var/redis/6379/redis-6379-log.txt
RUN chmod 777 /var/redis/6379/redis-6379-log.txt
ENV REDISPORT 6379
ADD $app$/redis-config.txt /etc/redis/$REDISPORT.conf
CMD service /etc/init.d/redis_6379 start
If I use shellinabox to access the container, and if I type in
/etc/init.d/redis_6379 start
Redis server will start, but it won't start from the Dockerfile. Why is this?
It seems that you cannot use background processes, but instead you need something called supervisord.
To Install:
RUN apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
ADD $app$/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD /usr/bin/supervisord
Configuration File:
[supervisord]
nodaemon=true
[program:shellinabox]
command=/bin/bash -c "cd /tmp && exec /opt/shellinabox/shellinaboxd --no-beep --service ${service}"
[program:redis-server]
command=/bin/bash -c "redis-server /etc/redis/${REDISPORT}.conf"
What happens is that after the command is executed, it will start both programs, shellinabox and redis-server.
Thanks everyone for the help!
In general, you can't use an init script inside a Docker container. These scripts are typically designed to start a service "in the background", which means that even if the service starts, the script ultimately exits.
If this is the first process in your Docker container, Docker will see it exit, which will cause it to clean up the container. You will need to arrange for redis to run in the foreground in your container, or you will need to arrange to run some sort of process supervisor in your container.
Consider looking at the official redis container to see one way of setting things up. You can see the Dockerfiles in the GitHub repository.
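For comparison, a minimal foreground setup might look like this (a sketch under the assumption that /etc/redis/6379.conf exists in the image and sets daemonize no; not taken from the official image):

# Run redis-server as the container's main process; because it stays in the
# foreground, the container keeps running instead of exiting immediately.
CMD ["redis-server", "/etc/redis/6379.conf"]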