SSH over two hops

I have to upload, compile and run some code on a remote system. It turned out that the following mechanism works fine:
rsync -avz /my/code me@the-remote-host.xyz:/my/code
ssh me@the-remote-host.xyz 'cd /my/code; make; ./my_program'
While it's maybe not the best looking solution, it has the advantage that it's completely self-contained.
Now, the problem is: I need to do the same thing on another remote system which is not directly accessible from the outside by ssh, but via a proxy node. On that system, if I just want to execute a plain ssh command, I need to do the following:
[my local computer]$ ssh me@the-login-node.xyz
[the login node]$ ssh me@the-actual-system.xyz
[the actual system]$ make
How do I need to modify the above script in order to "tunnel" rsync and ssh via the-login-node to the-actual-system? I would also prefer a solution that is completely contained in the script.
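For reference, here is a minimal sketch of one way the script could be adapted, assuming a reasonably recent OpenSSH client (7.3 or later) that supports the -J/ProxyJump option; the host names are the ones from the question:
# Sketch: route both rsync and ssh through the login node with -J (ProxyJump).
rsync -avz -e 'ssh -J me@the-login-node.xyz' /my/code me@the-actual-system.xyz:/my/code
ssh -J me@the-login-node.xyz me@the-actual-system.xyz 'cd /my/code; make; ./my_program'
# Older OpenSSH clients can use ProxyCommand with -W instead of -J:
# ssh -o ProxyCommand='ssh -W %h:%p me@the-login-node.xyz' me@the-actual-system.xyz 'cd /my/code; make; ./my_program'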

Related

Why is the yes command not working in git clone?

I am trying to run a script that clones a repository and then builds it in my Docker build.
It is a private repository, so I have copied the ssh keys into the Docker image,
but the command below does not seem to work:
yes yes | git clone (ssh link to my private repository.)
When I manually try to run the script on my local system it shows the same thing, but it works fine for other commands.
I have access to the repository, as I can type yes and it works.
But I can't type yes during docker build.
Any help will be appreciated.
This is purely an ssh issue. When ssh is connecting to a host for the "first time",[1] it obtains a "host fingerprint" and prints it, then opens /dev/tty to interact with the human user so as to obtain a yes/no answer about whether it should continue connecting. You cannot defeat this by piping to its standard input.
Fortunately, ssh has about a billion options, including:
the option to obtain the host fingerprint in advance, using ssh-keyscan, and
the option to verify a host key via DNS.
The first is the one to use here: run ssh-keyscan and create a known_hosts file in the .ssh directory. Security considerations will tell you how careful to be about this (i.e., you must decide how paranoid to be).
[1] "First" is determined by whether there's a host key in your .ssh/known_hosts file. Since you're spinning up a Docker image that you then discard, every time is the first time. You could set up a docker image that has the file already in it, so that no time is the first time.
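As a rough sketch of that approach (the host and repository names here are placeholders, not taken from the question), the build could pre-populate known_hosts before the non-interactive clone:
# Sketch: trust the repository host before running git clone during the image build.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keyscan -t rsa git.example.com >> ~/.ssh/known_hosts
git clone git@git.example.com:me/private-repo.git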

Copying files between two remote nodes over SSH without going through controller

How would you, in Ansible, make one remote node connect to another remote node?
My goal is to copy a file from remote node a to remote node b and untar it on the target; however, one of the files is extremely large.
So doing it the normal way, fetching to the controller, copying from the controller to remote b, then unarchiving, is unacceptable. Ideally, I would do something like the following from remote_a:
ssh remote_b cat filename | tar -x
This is to speed things up. I can use the shell module to do this; however, my main problem is that this way I lose Ansible's handling of SSH connection parameters. I have to manually pass an SSH private key (if any), or a password in a non-interactive way, or whatever, to remote_b. Is there any better way to do this without multiple copying?
Also, doing it over SSH is a requirement in this case.
Update/clarification: Actually, I know how to do this from the shell and I could do the same in Ansible; I was just wondering if there is a better, more Ansible-like way to do it. The file in question is really large. The main problem is that when Ansible executes commands on remote hosts, I can configure everything in the inventory. But in this case, if I wanted a similar level of configurability/flexibility for the parameters of that manually established ssh connection, I would have to write it from scratch (maybe even as an Ansible module) or something similar. Otherwise, for example, just using an ssh hostname command would require a passwordless login or the default private key; I wouldn't be able to pick up the private key path from the inventory without passing it manually, and for the ssh connection plugin there are actually two possible variables that may be used to set a private key.
This looks like more of a shell question than an Ansible one.
If the two nodes cannot talk to each other, you can do:
ssh remote_a cat file | ssh remote_b tar xf -
If they can talk (one of the nodes can connect to the other), you can tell one remote node to connect to the other, like:
ssh remote_b 'ssh remote_a cat file | tar xf -'
(maybe the quoting is wrong, launching ssh under ssh is sometimes confusing).
In this last case you will probably need to enter a password or set up public/private ssh keys properly.
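If the inner connection needs its own key or options, they can be passed explicitly to the inner ssh; a rough sketch, with the key and file paths as placeholders:
# Sketch: run the copy from remote_b, giving the inner ssh an explicit key and no interactive prompts.
ssh remote_b 'ssh -i ~/.ssh/remote_a_key -o BatchMode=yes remote_a cat /path/to/file | tar xf -'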

hg clone through a login server using ssh

I'm trying to collaborate with some individuals who are not in my institution and are not allowed to connect to the internal network through VPN; however, ssh through a login server is allowed.
I.e., the collaborators can log in using two successive ssh commands:
$ ssh loginserver
$ ssh repositoryserver
After logging in they can begin developing on the repository server. However, they would like to make a clone, and make modifications remotely, and then push changes back.
I know one can run mercerial commands through ssh, and this works fine for me (because I am on the network). I.e.:
$ hg clone ssh://uid#repositoryserver//path/to/repo
However, is there a way to run commands through the login server?
Something like:
$ hg clone ssh://uid@loginserver ssh://uid@repositoryserver//path/to/repo
Thanks for the help.
This is in principle possible, but the underlying ssh chaining is by necessity a bit fragile; if your institution's policies allow hosting on an external server, I'd consider that option first.
That said, yes, it can be done. First of all, your users will need to log in to your repository server from your login server at least once (if you have a restricted setup, just cloning an hg repository once, and then throwing it away, will also work). This will set up an entry for the repository server in ~/.ssh/known_hosts, which is necessary for ssh chaining to proceed without user prompts. If your repository server's ssh configuration ever changes, this entry will become invalid and they will have to repeat the process after removing the entry from ~/.ssh/known_hosts, or removing ~/.ssh/known_hosts entirely.
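For example, a one-time, scriptable way to do that first hop (a sketch; it assumes key-based login with a local ssh agent, and OpenSSH 7.6 or newer on the login server for accept-new):
# Populate the login server's known_hosts with the repository server's key once.
ssh -A uid@loginserver 'ssh -o StrictHostKeyChecking=accept-new uid@repositoryserver true'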
Second, they need to enable authentication agent forwarding on their machine (because otherwise they'll get prompted for a password or pass phrase, but won't be able to enter that). For that, they can do one of the following:
Add an entry to their ~/.ssh/config such as:
Host lserve
User uid
HostName loginserver
ForwardAgent true
The alternative to this approach is to tell Mercurial to use agent forwarding by adding the following entry to your ~/.hgrc or .hg/hgrc:
[ui]
ssh = ssh -A
The downside to doing this in your global ~/.hgrc is that agent forwarding will be done for every repository, including ones where you may not want that. Setting up ~/.ssh/config is the cleaner option and also allows you to simplify repo URLs.
You can also use the --ssh "ssh -A" command line option, but that's a lot of typing.
Depending on how they write their repo URLs, other configurations may work better. The above will allow the use of ssh://lserve//path/to/repo URLs. But the critical part is the ForwardAgent true line, which means that the remote server will query their local machine for authentication, rather than demanding a password or pass phrase. Needless to say, this also means that they need to have ssh agent authentication set up locally.
Next, you will have to create a shell script on loginserver that forwards the actual hg request. You can put it wherever you like (let's assume it is /path/to/forward-hg):
#!/bin/sh
# Pass the hg command line through to the repository server unchanged.
ssh repositoryserver hg "$@"
Once this is done, your friends can now access the remote repository as follows:
hg clone --remotecmd /path/to/forward-hg ssh://lserve//path/to/repo
hg push --remotecmd /path/to/forward-hg
hg pull --remotecmd /path/to/forward-hg
hg incoming --remotecmd /path/to/forward-hg
hg outgoing --remotecmd /path/to/forward-hg
Because this is a lot of typing, you may want to create aliases or put an entry in your local .hg/hgrc (caution: this cannot be done for hg clone, where you will still have to type it out or create, say, an hg rclone alias). This entry will be:
[ui]
remotecmd = /path/to/forward-hg
and will tell Mercurial to add the requisite --remotecmd option to all commands that support it and that operate on this repository (note: do NOT put this entry in your user's ~/.hgrc, only in the repository-specific one).
Finally, here is why this works: when accessing a remote repository, Mercurial will basically try to start $REMOTEHG serve --stdio (where $REMOTEHG is the remote Mercurial executable) and communicate with this process over stdin and stdout. By hijacking $REMOTEHG, this effectively becomes ssh repositoryserver hg serve --stdio, which will do it on the repository server instead. Meanwhile, assuming agent forwarding is set up properly so that password prompts and the like don't get in the way, the local Mercurial client will remain completely unaware of this and only see the normal communication with the repository server over stdin and stdout (which get passed through unaltered by the ssh daemon on the login server).
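To make that concrete, a clone through the wrapper roughly expands as follows (an illustration using the placeholder paths above, with Mercurial's exact quoting omitted):
# What the client runs for ssh://lserve//path/to/repo with --remotecmd /path/to/forward-hg:
ssh lserve /path/to/forward-hg -R /path/to/repo serve --stdio
# ...which the wrapper on the login server turns into:
ssh repositoryserver hg -R /path/to/repo serve --stdio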

Calling SSH command from Jenkins

Jenkins keeps using the default "jenkins" user when executing builds. My build requires a number of SSH calls. However, these SSH calls fail with host verification exceptions because I haven't been able to place the public key for this user on the target server.
I don't know where the default "jenkins" user is configured and therefore can't generate the required public key to place on the target server.
Any suggestions for either;
A way to force Jenkins to use a user I define
A way to enable SSH for the default Jenkins user
A way to fetch the password for the default 'jenkins' user
Ideally I would like to be able to do both; any help greatly appreciated.
Solution: I was able to access the default Jenkins user with an SSH request from the target server. Once I was logged in as the jenkins user, I was able to generate the public/private RSA keys, which then allowed for password-free access between servers.
Because, when you have numerous slave machines, it can be hard to anticipate which of them a build will be executed on, rather than explicitly calling ssh I highly suggest using existing Jenkins plug-ins for executing remote commands over SSH:
Publish Over SSH - execute SSH commands or transfer files over SCP/SFTP.
SSH - execute SSH commands.
The default 'jenkins' user is the system user running your Jenkins instance (master or slave). Depending on your installation, this user may have been created either by the install scripts (deb/rpm/pkg etc.) or manually by your administrator. It may or may not be called 'jenkins'.
To find out which user your Jenkins instance is running as, open http://$JENKINS_SERVER/systemInfo, available from the Manage Jenkins menu.
There you will find your user.home and user.name. E.g. in my case on a Mac OS X master:
user.home /Users/Shared/Jenkins/Home/
user.name jenkins
Once you have that information you will need to log onto that jenkins server as the user running jenkins and ssh into those remote servers to accept the ssh fingerprints.
An alternative (that I've never tried) would be to use a custom Jenkins job to accept those fingerprints by, for example, running the following command in an SSH build task:
ssh -o "StrictHostKeyChecking no" your_remote_server
This last tip is of course completely unacceptable from a pure security point of view :)
So one might make a "job" which writes the host keys as a constant, like:
echo "....." > ~/.ssh/known_hosts
just fill in the dots with the output of ssh-keyscan -t rsa {ip}, after you have verified it.
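In other words (a sketch; replace your_remote_server with the real host), you would scan the key, verify it out of band, and then append it:
ssh-keyscan -t rsa your_remote_server                          # inspect and verify this output first
ssh-keyscan -t rsa your_remote_server >> ~/.ssh/known_hosts    # then trust it for the jenkins user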
That's correct: pipeline jobs will normally run as the user jenkins, which means that SSH access needs to be set up for this account for it to work in pipeline jobs. People have all sorts of complex build environments, so it seems like a fair requirement.
As stated in one of the answers, each individual configuration could be different, so check under "System Information" or similar, in "Manage Jenkins" on the web UI. There should be a user.home and a user.name for the home directory and the username respectively. On my CentOS installation these are "/var/lib/jenkins/" and "jenkins".
The first thing to do is to get shell access as the jenkins user. Because this is an auto-generated service account, a shell is not enabled by default. Assuming you can log in as root, or preferably as some other user (in which case you'll need to prepend sudo), switch to jenkins as follows:
su -s /bin/bash jenkins
Now you can verify that it's really jenkins and that you entered the right home directory:
whoami
echo $HOME
If these don't match what you see in the configuration, do not proceed.
All is good so far, let's check what keys we already have:
ls -lah ~/.ssh
There may already be keys there, created with the hostname. See if you can use them:
ssh-copy-id user@host_ip_address
If there's an error, you may need to generate new keys:
ssh-keygen
Accept the default values and no passphrase when it prompts you, so that the new keys are added to the home directory without overwriting existing ones. Now you can run ssh-copy-id again.
It's a good idea to test it with something like
ssh user@host_ip_address ls
If it works, so should ssh, scp, rsync etc. in the Jenkins jobs. Otherwise, check the console output to see the error messages and try those exact commands on the shell as done above.

Using rsync to remote SSH user with no shell access

I set up Jenkins CI to deploy my PHP app to our QA Apache server and I ran into an issue. I successfully set up pubkey authentication from the local jenkins account to the remote apache account, but when I use rsync, I get the following error:
[jenkins@build ~]$ rsync -avz -e ssh test.txt apache@site.example.com:/path/to/site
protocol version mismatch -- is your shell clean?
(see the rsync man page for an explanation)
rsync error: protocol incompatibility (code 2) at compat.c(64) [sender=2.6.8]
[jenkins@build ~]$
One potential problem is that the remote apache account doesn't have a valid shell. Should I create a remote account with shell access that is part of the "apache" group? It is not an SSH key problem, since ssh apache@site.example.com connects successfully but quickly kicks me out, since apache doesn't have a shell.
That would probably be the easiest thing to do. You will probably want to set it up with a limited shell like rssh or scponly so that it only allows file transfers. You may also want to set up a chroot jail so that it can't see your whole filesystem.
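As a rough sketch, assuming rssh is installed and the account is called apache (package names, paths and config file locations vary by distribution):
# Restrict the apache account to file-transfer-only logins.
usermod -s /usr/bin/rssh apache
# Then enable only the protocols you need (e.g. allowrsync) in /etc/rssh.conf.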
I agree that that would probably be the easiest thing to do. We do something similar, but use scp instead. Something like:
scp /path/to/test.txt apache@site.example.com:/path/to/site
I know this is a pretty old thread, but if somebody comes across this page in the future...
I had the same problem, but got it fixed when I fixed my .bashrc.
I removed the statement "echo setting DISPLAY=$DISPLAY" which was in my .bashrc; rsync chokes on it because the extra output pollutes the data stream it expects to be clean.
So, fixing .bashrc/.cshrc/.profile errors helped me.
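The underlying issue is that rsync and scp need the remote shell to start without printing anything to stdout. A sketch of a guard you could put around such statements in .bashrc, so they only run in interactive shells:
# Only echo when the shell is interactive; non-interactive rsync/scp sessions stay clean.
if [[ $- == *i* ]]; then
    echo "setting DISPLAY=$DISPLAY"
fi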