SSH into staging machine from docker instance using Bitbucket Pipelines - authentication

Using the new Bitbucket Pipelines feature, how can I SSH into my staging box from the docker container it spins up?
The last step in my pipeline is an .sh file that deploys the necessary code on staging, however because my staging box uses public key authentication and doesn't know about the docker container, the SSH connection is being denied.
Is there any way of getting around this without using password authentication over SSH (which is causing me issues as well, since the client keeps choosing public key authentication instead)?

Bitbucket Pipelines can run your builds in a Docker image you've created that has the SSH client set up, as long as the image is hosted on a publicly accessible container registry.
Create a Docker image
Create a Docker image with your ssh key available somewhere. The image also needs to have the host key for your environment(s) saved under the user the container will run as. This is normally the root user but may be different if you have a USER command in your Dockerfile.
You could copy an already populated known_hosts file into the image, or populate it dynamically at image build time with something like:
RUN ssh-keyscan your.staging-host.com >> /root/.ssh/known_hosts
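If you prefer the pre-populated file, here is a minimal sketch of generating it locally before the image build (the hostname and paths are placeholders):
# run locally, then COPY the resulting file into the image as /root/.ssh/known_hosts
mkdir -p build-context/ssh
ssh-keyscan your.staging-host.com >> build-context/ssh/known_hosts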
Publish the image
Publish your image to a publicly accessible, but private registry. You can host your own or use a service like Docker Hub.
Configure Pipelines
Configure pipelines to build with your docker image.
If you use Docker Hub:
image:
  name: account-name/java:8u66
  username: $USERNAME
  password: $PASSWORD
  email: $EMAIL
Or, for your own external registry:
image:
  name: docker.your-company-name.com/account-name/java:8u66
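The deploy step itself can then simply call the script on the staging box over SSH. A minimal sketch (user, host and script path are placeholders, not from the original question):
# the pipeline step's script would run something along these lines:
ssh deploy@your.staging-host.com '/path/to/your/deploy-script'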
Restrict access on your hosts
You don't want SSH keys that can access your hosts flying around the world, so I would also restrict these deploy keys to only run your deploy commands.
The authorized_keys file on your staging host:
command="/path/to/your/deploy-script",no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-dss AAAAC8ghi9ldw== deploy#bitbucket
Unfortunately, Bitbucket doesn't publish an IP list to restrict access to, as they use shared infrastructure for Pipelines. If they happen to be running on AWS, then Amazon does publish IP lists.
from="10.5.0.1",command="",no-... etc
Also remember to date the keys and expire them from time to time. I know SSH keys don't enforce expiry dates, but it's a good idea to do it anyway.

You can now set up SSH keys under the Pipelines settings, so you do not need a private Docker image just to store SSH keys. The key is also kept out of your source code, so it isn't in your repo either.
Under
Settings -> Pipelines -> SSH keys
You can either provide a key pair or generate a new one. The private key is made available inside the Docker container under ~/.ssh, and you get a public key to add to the ~/.ssh/authorized_keys file on your host. The page also asks for a host IP or name so it can add that host's fingerprint to known_hosts when running in Docker.
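On the staging host, that public key just needs to be appended to the deploy user's authorized_keys, for example (the key material shown is a placeholder):
# as the deploy user on the staging host
echo 'ssh-rsa AAAA...your-pipelines-public-key... pipelines@bitbucket' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys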
Also, Bitbucket has published IP addresses you can whitelist if necessary for the Docker containers being spun up. They are currently:
34.236.25.177/32
34.232.25.90/32
52.203.14.55/32
52.202.195.162/32
52.204.96.37/32
52.54.90.98/32
34.199.54.113/32
34.232.119.183/32
35.171.175.212/32

Related

How to access a VM instance created from a marketplace product deployment in GCP via FileZilla/WinSCP or SSH?

I am doing a WordPress installation on GCP; this is done through deploying a WordPress product from the market:
After the successful deployment, I also set a static IP address to the instance:
I need to use FileZilla or WinSCP to connect to the instance or at least SSH into the instance in order to do some customization work.
Can anyone enlighten me on how to get it done? I see SSH keys that were created for some (most likely deleted) resource during my practice:
[UPDATE]:
I double checked the Firewall rules and see there is a rule allowing SSH:
[Update]
I tried SSH from the console (Compute Engine -> VM Instances) and got in somewhere; here is the detail:
Connected, host fingerprint: ssh-rsa 0 AD:45:62:ED:E3:71:B1:3B:D4:9F:6D:9D:08:16
:0C:55:0F:C1:55:70:97:59:5E:C5:35:8E:D6:8E:E8:F9:C2:4A
Linux welynx-vm 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3 (2019-09-02) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
xenonxie@welynx-vm:~$ ls
xenonxie@welynx-vm:~$ pwd
/home/xenonxie
xenonxie@welynx-vm:~$
Where is the Wordpress installation?
What is the fingerprint showing up here? The public key of the instance?
[SOLUTION]
Since the issue is now sorted out, I would like to add more specific screenshots here to help future readers on similar questions like mine:
Where is the wordpress installation?
You would need to SSH into the instance to find out; there are a couple of ways to SSH into the instance:
1.1 Once you deploy a WordPress (or other Blog & CMS product from the market), an instance is also created for that deployment; go to Compute Engine -> VM instances and the new instance will be displayed there.
Note: You would need to change the IP address to "static", otherwise the IP changes when the instance is restarted.
1.2 On the very right end, you can SSH into the instance directly.
2. SSH through a third-party tool like PuTTY:
set up a session with a config like below:
2.1 Create a new key pair with Putty Keygen as below:
2.2 Save the public key in Compute Engine -> Metadata -> SSH Keys
2.3 Save the private key somewhere in your local, you will need it later
With the public key added to the instance, you can proceed to create a PuTTY session as below:
Note the IP address is the instance's static IP address; remember to include the user name.
In the SSH tab, attach the private key saved earlier:
Now connect to the instance:
Similarly you can do this in WinSCP:
Big thanks to @gcptest_cloud. To make the post more intuitive and understandable for future readers, I recap it below:
Where is the wordpress installation?
The original WordPress installation is in /var/www/html (thank you @gcptest_cloud) on the instance created by the WordPress deployment.
How to access the wordpress installation?
You would need to SSH into the instance to find out; there are a couple of ways to SSH into the instance:
1.1 Once you deploy a WordPress (or other Blog & CMS product from the market), an instance is also created for that deployment; go to Compute Engine -> VM instances and the new instance will be displayed there:
Note: You would need to change the IP address to "static", otherwise the IP changes when the instance is restarted.
1.2 On the very right end, you can SSH into the instance directly:
2. SSH through a third-party tool like PuTTY:
2.1 Create a new key pair with Putty Keygen as below:
2.2 Save the private key somewhere locally; you will need it later
2.3 Save the public key in Compute Engine -> Metadata -> SSH Keys
Note: You can also add the key manually by copying and pasting it into the .ssh folder in your home directory on the instance.
With the public key added to the instance, you can proceed to create a PuTTY session as below:
Note the IP address is the instance's static IP address; remember to include the user name.
In the SSH tab, attach the private key saved earlier:
Now connect to the instance:
Similarly you can do this in WinSCP:
Since this is a marketplace image, make sure you have a firewall rule allowing port 22 and attach the target tag to the network tags of your VM.
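If that rule doesn't exist yet, here is a sketch of creating it with gcloud (the rule name and tag are placeholders, not from the original answer):
# allow inbound SSH to VMs carrying the allow-ssh-tag network tag
gcloud compute firewall-rules create allow-ssh \
  --allow=tcp:22 \
  --target-tags=allow-ssh-tag
# then add allow-ssh-tag to the VM's network tags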
After that, click on the SSH button in the console, next to the VM name. This is the simplest way to log in to GCP instances.

How to SSH between 2 Google Cloud Debian Instances

I have installed Ansible on one of my GCE Debian VM instances (1). Now I want to connect to another GCE Debian VM instance (2).
I have generated the public key on instance 1 and copied the .pub key manually to the authorized_keys of instance 2.
But when I try to SSH from 1 to 2, it gives permission denied.
Is there any other way round? I am a little new to this, trying to learn.
Is there any step-by-step guide available? Also, what is the exact IP address to SSH to: will it be the internal IP or the external IP assigned by GCE when the instance is started?
I'm an Ansible user too and I manage a set of compute engine servers. My scenario is pretty close to yours so hopefully this will work for you as well. To get this to work smoothly, you just need to realise that ssh public keys are metadata and can be used to tell GCE to create user accounts on instance creation.
SSH public keys are project-wide metadata
To get what you want the ssh public key should be added to the Metadata section under Compute Engine. My keys look like this:
ssh-rsa AAAAB3<long key sequence shortened>Uxh bob
Every time I get GCE to create an instance, it creates /home/bob and puts the key into .ssh/authorized_keys with all of the correct permissions set. This means I can SSH into that server if I have the private key. In my scenario I keep the private key in only two places: LastPass and my .ssh directory on my work computer. While I don't recommend it, you could also copy that private key to the .ssh directory on each server that you want to SSH from, but I really recommend getting to grips with ssh-agent.
Getting it to work with Ansible
The core of this is to tell Ansible not to validate host checking and to connect as the user specified in the key (bob in this example). To do that you need to set some ssh options when calling ansible
ansible-playbook --ssh-common-args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -u bob
Now Ansible will connect to the servers mentioned in your playbook and try to use the local private key to negotiate the ssh connection which should work as GCE will have set things up for you when the VM is created. Also, since hostname checking is off, you can rebuild the VM as often as you like.
Saying it again
I really recommend that you run Ansible from a small number of secure computers and not put your private key onto cloud servers. If you really need to SSH between servers, look into how ssh-agent passes identities around. A good place to start is this article.
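As a rough illustration of the agent approach (a sketch with placeholder paths and hostnames, not from the answer above), you load the key into a local agent and forward it rather than copying it onto the first server:
# on your workstation: start an agent and load the private key into it
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
# -A forwards the agent, so once on instance-1 you can hop to instance-2
# without the private key ever being stored on instance-1
ssh -A bob@instance-1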
Where did you say the metadata was?
I kind of glossed over that bit but here's an image to get you started.
From there you just follow the options for adding a public key. Don't forget that this works because the third part of the key is the username that you want GCE and Ansible to use when running plays.
It's quite simple: if you have two instances in Google Cloud Platform, the guest environment (including the gcloud command line) is installed automatically, and with it you can SSH to any instance inside your project.
Just run the following command from inside instance A to reach instance B:
[user@instance-1]$ gcloud compute ssh instance-2 --zone [zone]
That's it. If it's not working, let me know, and verify that your firewall rules allow internal traffic.

Provision remote nixos box without sending private keys

I am provisioning a NixOS instance on AWS. The instance has to download a repository from a private GitHub repo. Currently I just run a shell script on the remote box using SSH forwarding to download the repository. This way I don't have to copy my private key, which gives me access to the repo, to the remote box.
I would like to change this procedure to be more Nix-like. I want to write a nix expression which downloads the repo and put it in /etc/nixos/configuration.nix. At the same time I don't want to copy my private key to the remote machine. Is this possible? Can nixos-rebuild use ssh forwarding?
You can explore the --build-host and --target-host options of the nixos-rebuild command. That is, make your local machine the build machine and the remote one the target. You do need passwordless root SSH access to the remote, though.
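A minimal sketch of what that might look like (the hostname is a placeholder, and it assumes the passwordless root SSH access noted above):
# build locally, then copy the closure to the AWS instance and activate it there
nixos-rebuild switch --build-host localhost --target-host root@your-aws-host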

Docker image push over SSH (distributed)

TL;DR Basically, I am looking for this:
docker push myimage ssh://myvps01.vpsprovider.net/
I am failing to grasp the rationale behind whole Docker Hub / Registry thing. I know I can run a private registry, but for that I have to set up the infrastructure of actually running a server.
I took a sneak peek inside the inner workings of Docker (well, the filesystem at least), and it looks like Docker image layers are just a bunch of tarballs, more or less, with some elaborate file naming. I naïvely think it would not be impossible to whip up a simple Python script to do distributed push/pull, but of course I did not try, so that is why I am asking this question.
Are there any technical reasons why Docker could not just do distributed (server-less) push/pull, like Git or Mercurial?
I think this would be a tremendous help, since I could just push the images that I built on my laptop right onto the app servers, instead of first pushing to a repo server somewhere and then pulling from the app servers. Or maybe I have just misunderstood the concept and the Registry is a really essential feature that I absolutely need?
EDIT Some context that hopefully explains why I want this, consider the following scenario:
Development, testing done on my laptop (OSX, running Docker machine, using docker-compose for defining services and dependencies)
Deploy to a live environment by means of a script (self-written, bash, few dependencies on dev machine, basically just Docker machine)
Deploy to a new VPS with very few dependencies except SSH access and Docker daemon.
No "permanent" services running anywhere, i.e. I specifically don't want to host a permanently running registry (especially not accessible to all the VPS instances, though that could probably be solved with some clever SSH tunneling)
The current best solution is to use Docker machine to point to the VPS server and rebuild it, but it slows down deployment as I have to build the container from source each time.
If you want to push docker images to a given host, there is already everything in Docker to allow this. The following example shows how to push a docker image through ssh:
docker save <my_image> | ssh -C user@my.remote.host.com docker load
docker save will produce a tar archive of one of your docker images (including its layers)
-C is for ssh to compress the data stream
docker load creates a docker image from a tar archive
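A common variation (my addition, not part of the command above) compresses the stream with gzip instead of relying on ssh -C:
# same idea, but compress the image tarball explicitly in the pipe
docker save <my_image> | gzip | ssh user@my.remote.host.com 'gunzip | docker load'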
Note that the combination of a docker registry + docker pull command has the advantage of only downloading missing layers. So if you frequently update a docker image (adding new layers, or modifying a few last layers) then the docker pull command would generate less network traffic than pushing complete docker images through ssh.
I made a command line utility just for this scenario.
It sets up a temporary private docker registry on the server, establishes an SSH Tunnel from your localhost, pushes your image, then cleans up after itself.
The benefit of this approach over docker save is that only the new layers are pushed to the server, resulting in a quicker upload.
Oftentimes using an intermediate registry like Docker Hub is undesirable and cumbersome.
https://github.com/brthor/docker-push-ssh
Install:
pip install docker-push-ssh
Example:
docker-push-ssh -i ~/my_ssh_key username@myserver.com my-docker-image
The biggest caveat is that you have to manually add your local IP to Docker's insecure registries configuration.
https://stackoverflow.com/questions/32808215/where-to-set-the-insecure-registry-flag-on-mac-os
Saving/loading an image onto a Docker host and pushing to a registry (private or Hub) are two different things.
The former @Thomasleveil has already addressed.
The latter actually does have the "smarts" to only push required layers.
You can easily test this yourself with a private registry and a couple of derived images.
If we have two images and one is derived from the other, then doing:
docker tag baseimage myregistry:5000/baseimage
docker push myregistry:5000/baseimage
will push all layers that aren't already found in the registry. However, when you then push the derived image next:
docker tag derivedimage myregistry:5000/derivedimage
docker push myregistry:5000/derivedimage
you may notice that only a single layer gets pushed, provided your Dockerfile was built such that it only required one layer (e.g. chaining commands in a single RUN instruction, as per Dockerfile Best Practices).
On your Docker host, you can also run a Dockerised private registry.
See Containerized Docker registry
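For example, a registry container can be started on the host along these lines (registry:2 and port 5000 are the usual defaults; treat the exact flags as a sketch):
# run a private registry on the Docker host, listening on port 5000
docker run -d -p 5000:5000 --name registry registry:2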
To the best of my knowledge and as of the time of writing this, the registry push/pull/query mechanism does not support SSH, but only HTTP/HTTPS. That's unlike Git and friends.
See Insecure Registry on how to run a private registry through HTTP, especially be aware that you need to change the Docker engine options and restart it:
Open the /etc/default/docker file or /etc/sysconfig/docker for editing (depending on your operating system, this is where your Engine daemon start options live).
Edit (or add) the DOCKER_OPTS line and add the --insecure-registry flag.
This flag takes the URL of your registry, for example:
DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000"
Close and save the configuration file.
Restart your Docker daemon.
You will also find instruction to use self-signed certificates, allowing you to use HTTPS.
Using self-signed certificates
[...]
This is more secure than the insecure registry solution. You must
configure every docker daemon that wants to access your registry
Generate your own certificate:
mkdir -p certs && openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 365 -out certs/domain.crt
Be sure to use the name myregistrydomain.com as a CN.
Use the result to start your registry with TLS enabled
Instruct every docker daemon to trust that certificate.
This is done by copying the domain.crt file to /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt.
Don’t forget to restart the Engine daemon.
Expanding on the idea of @brthornbury:
I did not want to dabble with running Python, so I came up with a bash script to do the same.
#!/usr/bin/env bash
SOCKET_NAME=my-tunnel-socket
REMOTE_USER=user
REMOTE_HOST=my.remote.host.com
# open ssh tunnel to remote-host, with a socket name so that we can close it later
ssh -M -S $SOCKET_NAME -fnNT -L 5000:$REMOTE_HOST:5000 $REMOTE_USER@$REMOTE_HOST
if [ $? -eq 0 ]; then
echo "SSH tunnel established, we can push image"
# push the image to remote host via tunnel
docker push localhost:5000/image:latest
fi
# close the ssh tunnel using the socket name
ssh -S $SOCKET_NAME -O exit $REMOTE_USER@$REMOTE_HOST
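On the remote host the image then lives in the registry listening on port 5000, so (an assumption about the follow-up step, not part of the script above) you can pull and run it from there:
# on the remote host, after the push completes
docker pull localhost:5000/image:latest
docker run -d localhost:5000/image:latest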

Calling SSH command from Jenkins

Jenkins keeps using the default "jenkins" user when executing builds. My build requires a number of SSH calls. However, these SSH calls fail with host verification exceptions because I haven't been able to place the public key for this user on the target server.
I don't know where the default "jenkins" user is configured and therefore can't generate the required public key to place on the target server.
Any suggestions for either;
A way to force Jenkins to use a user I define
A way to enable SSH for the default Jenkins user
Fetch the password for the default 'jenkins' user
Ideally I would like to be able to do both; any help greatly appreciated.
Solution: I was able to access the default Jenkins user with an SSH request from the target server. Once I was logged in as the jenkins user I was able to generate the public/private RSA keys, which then allowed password-free access between servers.
Because when you have numerous slave machines it can be hard to anticipate which of them a build will be executed on, rather than explicitly calling ssh I highly suggest using the existing Jenkins plug-ins for executing remote commands over SSH:
Publish Over SSH - execute SSH commands or transfer files over SCP/SFTP.
SSH - execute SSH commands.
The default 'jenkins' user is the system user running your Jenkins instance (master or slave). Depending on your installation, this user may have been generated either by the install scripts (deb/rpm/pkg etc.) or manually by your administrator. It may or may not be called 'jenkins'.
To find out which user your Jenkins instance is running as, open http://$JENKINS_SERVER/systemInfo, available from your Manage Jenkins menu.
There you will find your user.home and user.name. E.g. in my case on a Mac OS X master:
user.home /Users/Shared/Jenkins/Home/
user.name jenkins
Once you have that information you will need to log onto that jenkins server as the user running jenkins and ssh into those remote servers to accept the ssh fingerprints.
An alternative (that I've never tried) would be to use a custom Jenkins job to accept those fingerprints, for example by running the following command in an SSH build task:
ssh -o "StrictHostKeyChecking no" your_remote_server
This last tip is of course completely unacceptable from a pure security point of view :)
So one might make a "job" which writes the host keys as a constant, like:
echo "....." > ~/.ssh/known_hosts
just fill the dots from ssh-keyscan -t rsa {ip}, after you verify it.
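Equivalently (a sketch with a placeholder IP, not from the answer above), you can print the key for verification and then append it directly:
ssh-keyscan -t rsa 203.0.113.10                        # print the host key so you can inspect it first
ssh-keyscan -t rsa 203.0.113.10 >> ~/.ssh/known_hosts  # then persist it for the jenkins user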
That's correct, pipeline jobs will normally run as the user jenkins, which means SSH access needs to be set up for this account for it to work in the pipeline jobs. People have all sorts of complex build environments, so it seems like a fair requirement.
As stated in one of the answers, each individual configuration could be different, so check under "System Information" or similar, in "Manage Jenkins" on the web UI. There should be a user.home and a user.name for the home directory and the username respectively. On my CentOS installation these are "/var/lib/jenkins/" and "jenkins".
The first thing to do is to get shell access as the jenkins user (in our case). Because this is an auto-generated service account, the shell is not enabled by default. Assuming you can log in as root, or preferably some other user (in which case you'll need to prepend sudo), switch to jenkins as follows:
su -s /bin/bash jenkins
Now you can verify that it's really jenkins and that you entered the right home directory:
whoami
echo $HOME
If these don't match what you see in the configuration, do not proceed.
All is good so far, let's check what keys we already have:
ls -lah ~/.ssh
There may already be keys there, created with the hostname. See if you can use them:
ssh-copy-id user@host_ip_address
If there's an error, you may need to generate new keys:
ssh-keygen
Accept the default values and no passphrase when it prompts you, so the new keys are added under the home directory without overwriting existing ones. Now you can run ssh-copy-id again.
It's a good idea to test it with something like
ssh user@host_ip_address ls
If it works, so should ssh, scp, rsync etc. in the Jenkins jobs. Otherwise, check the console output to see the error messages and try those exact commands on the shell as done above.