Why is the yes command not working with git clone? - ssh

I am trying to run a script that clones a repository and then builds it in my Docker image.
It is a private repository, so I have copied my SSH keys into the Docker image,
but the command below does not seem to work:
yes yes | git clone (ssh link to my private repository)
When I manually try to run the script on my local system it shows the same behaviour, although it works fine for other commands.
I do have access to the repository, because if I type yes by hand it works.
But I can't type yes during a docker build.
Any help will be appreciated.

This is purely an ssh issue. When ssh is connecting to a host for the "first time",1 it obtains a "host fingerprint" and prints it, then opens /dev/tty to interact with the human user so as to obtain a yes/no answer about whether it should continue connecting. You cannot defeat this by piping to its standard input.
Fortunately, ssh has about a billion options, including:
the option to obtain the host fingerprint in advance, using ssh-keyscan, and
the option to verify a host key via DNS.
The first is the one to use here: run ssh-keyscan and create a known_hosts file in the .ssh directory. Security considerations will tell you how careful to be about this (i.e., you must decide how paranoid to be).
1"First" is determined by whether there's a host key in your .ssh/known_hosts file. Since you're spinning up a Docker image that you then discard, every time is the first time. You could set up a docker image that has the file already in it, so that no time is the first time.

Related

Copying files between two remote nodes over SSH without going through controller

How would you, in Ansible, make one remote node connect to another remote node?
My goal is to copy a file from remote node a to remote node b and untar it on the target, however one of the files is extremely large.
So doing it the normal way (fetch to the controller, copy from the controller to remote b, then unarchive) is unacceptable. Ideally, I would do something like this from remote_a:
ssh remote_b cat filename | tar -x
It is to speed things up. I can use the shell module to do this, however my main problem is that this way I lose Ansible's handling of SSH connection parameters. I have to manually pass an SSH private key (if any), or a password in a non-interactive way, or whatever else, to remote_b. Is there any better way to do this without multiple copies?
Also, doing it over SSH is a requirement in this case.
Update/clarification: Actually I know how to do this from the shell and I could do the same in Ansible; I was just wondering if there is a better, more Ansible-like way to do it. The file in question is really large. The main problem is that when Ansible executes commands on remote hosts, I can configure everything in the inventory. But in this case, if I wanted a similar level of configurability/flexibility for the parameters of that manually established ssh connection, I would have to write it from scratch (maybe even as an Ansible module) or something similar. Otherwise, for example, just using an ssh hostname command would require a passwordless login or a default private key; I wouldn't be able to change the private key path in the inventory without adding it manually, and for the ssh connection plugin there are actually two possible variables that may be used to set a private key.
This looks like more of a shell question than an Ansible one.
If the two nodes cannot talk to each other, you can do a
ssh remote_a cat file | ssh remote_b tar xf -
If they can talk (one of the nodes can connect to the other), you can tell one remote node to connect to the other, like
ssh remote_b 'ssh remote_a cat file | tar xf -'
(maybe the quoting is wrong, launching ssh under ssh is sometimes confusing).
In this last case you will probably need to provide a password or set up public/private ssh keys properly.
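If you need to point the inner ssh at a specific private key (as the asker mentions), you can pass it explicitly. A rough sketch, with the key path and host names as placeholders:
ssh remote_b 'ssh -i ~/.ssh/remote_a_key remote_a cat file | tar xf -'
Note that the -i path refers to a key that already exists on remote_b, since that is where the inner ssh runs.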

How to SSH between 2 Google Cloud Debian Instances

I have installed Ansible on one of my GCE Debian VM instances (1). Now I want to connect to another GCE Debian VM instance (2).
I have generated the public key on instance 1 and copied the .pub key manually to the authorized_keys of instance 2.
But when I try to ssh from 1 to 2 it gives permission denied.
Is there any other way around this? I am a little new to this, trying to learn.
Is there any step-by-step guide available? Also, what is the exact IP address to ssh to? Will it be the internal IP or the external IP assigned by GCE when the instance is started?
I'm an Ansible user too and I manage a set of compute engine servers. My scenario is pretty close to yours so hopefully this will work for you as well. To get this to work smoothly, you just need to realise that ssh public keys are metadata and can be used to tell GCE to create user accounts on instance creation.
SSH public keys are project-wide metadata
To get what you want the ssh public key should be added to the Metadata section under Compute Engine. My keys look like this:
ssh-rsa AAAAB3<long key sequence shortened>Uxh bob
Every time I get GCE to create an instance, it creates /home/bob and puts the key into .ssh/authorized_keys with all of the correct permissions set. This means I can ssh into that server if I have the private key. In my scenario I keep the private key in only two places: LastPass and my .ssh directory on my work computer. While I don't recommend it, you could also copy that private key to the .ssh directory on each server that you want to ssh from, but I really recommend getting to grips with ssh-agent.
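As a minimal sketch (the key file name and the username bob are placeholders), you would generate a key whose comment is the username you want GCE to create, then paste the contents of the .pub file into the metadata section:
ssh-keygen -t rsa -f ~/.ssh/gce_bob -C bob
cat ~/.ssh/gce_bob.pub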
Getting it to work with Ansible
The core of this is to tell Ansible not to validate host checking and to connect as the user specified in the key (bob in this example). To do that you need to set some ssh options when calling ansible
ansible-playbook --ssh-common-args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -u bob
Now Ansible will connect to the servers mentioned in your playbook and try to use the local private key to negotiate the ssh connection, which should work since GCE will have set things up for you when the VM is created. Also, since host key checking is off, you can rebuild the VM as often as you like.
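For completeness, a full invocation might look like this (the inventory and playbook names are placeholders):
ansible-playbook -i inventory.ini site.yml --ssh-common-args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no' -u bob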
Saying it again
I really recommend that you run ansible from a small number of secure computers and not put your private key onto cloud servers. If you really need to ssh between servers, look into how ssh-agent passes identity around. A good place to start is this article.
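A minimal ssh-agent sketch (user and host names are placeholders): load the key once on your work computer and forward the agent, so the private key never needs to be copied onto the cloud servers:
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
ssh -A bob@server-a
# from server-a, the forwarded agent lets you ssh on to server-b
ssh bob@server-b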
Where did you say the metadata was?
I kind of glossed over that bit but here's an image to get you started.
From there you just follow the options for adding a public key. Don't forget that this works because the third part of the key is the username that you want GCE and Ansible to use when running plays.
It's quite simple: if you have two instances in Google Cloud Platform, the guest environment (the gcloud command line) is installed automatically, and with it you can ssh to any instance inside your project.
Just run the following command from inside your instance A to reach instance B:
[user@Instance(1)]$ gcloud compute ssh Instance(2) --zone [zone]
That's it. If it's not working, let me know, and verify that your firewall rules allow internal traffic.
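To check the firewall, a quick sketch: list the rules and confirm that one of them allows tcp:22 on the internal network (in a default network this is typically the default-allow-internal rule):
gcloud compute firewall-rules list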

Calling SSH command from Jenkins

Jenkins keeps using the default "jenkins" user when executing builds. My build requires a number of SSH calls. However, these SSH calls fail with host verification exceptions because I haven't been able to place the public key for this user on the target server.
I don't know where the default "jenkins" user is configured and therefore can't generate the required public key to place on the target server.
Any suggestions for any of the following:
A way to force Jenkins to use a user i define
A way to enable SSH for the default Jenkins user
Fetch the password for the default 'jenkins' user
Ideally I would like to be able to do all of these; any help greatly appreciated.
Solution: I was able to access the default Jenkins user with an SSH request from the target server. Once I was logged in as the jenkins user I was able to generate the public/private RSA keys, which then allowed for password-free access between servers.
Because when you have numerous slave machines it can be hard to anticipate on which of them the build will be executed, rather than explicitly calling ssh I highly suggest using the existing Jenkins plug-ins for executing remote commands over SSH:
Publish Over SSH - execute SSH commands or transfer files over SCP/SFTP.
SSH - execute SSH commands.
The default 'jenkins' user is the system user running your Jenkins instance (master or slave). Depending on your installation this user may have been created either by the install scripts (deb/rpm/pkg etc.) or manually by your administrator. It may or may not be called 'jenkins'.
To find out which user your Jenkins instance is running as, open http://$JENKINS_SERVER/systemInfo, available from your Manage Jenkins menu.
There you will find your user.home and user.name. E.g. in my case on a Mac OS X master:
user.home /Users/Shared/Jenkins/Home/
user.name jenkins
Once you have that information you will need to log onto that jenkins server as the user running jenkins and ssh into those remote servers to accept the ssh fingerprints.
An alternative (that I've never tried) would be to use a custom Jenkins job to accept those fingerprints by, for example, running the following command in an SSH build task:
ssh -o "StrictHostKeyChecking no" your_remote_server
This last tip is of course completely unacceptable from a pure security point of view :)
So one might make a "job" which writes the host keys as a constant, like:
echo "....." > ~/.ssh/known_hosts
Just fill in the dots with the output of ssh-keyscan -t rsa {ip}, after you verify it.
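Or, as a sketch (the address is just a placeholder), append the scanned key directly once you have verified the fingerprint:
ssh-keyscan -t rsa 192.0.2.10 >> ~/.ssh/known_hosts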
That's correct, pipeline jobs will normally use the user jenkins, which means that SSH access needs to be given for this account for it to work in the pipeline jobs. People have all sorts of complex build environments, so it seems like a fair requirement.
As stated in one of the answers, each individual configuration could be different, so check under "System Information" or similar, in "Manage Jenkins" on the web UI. There should be a user.home and a user.name for the home directory and the username respectively. On my CentOS installation these are "/var/lib/jenkins/" and "jenkins".
The first thing to do is to get shell access as the jenkins user, in our case. Because this is an auto-generated service account, the shell is not enabled by default. Assuming you can log in as root, or preferably as some other user (in which case you'll need to prepend sudo), switch to jenkins as follows:
su -s /bin/bash jenkins
Now you can verify that it's really jenkins and that you entered the right home directory:
whoami
echo $HOME
If these don't match what you see in the configuration, do not proceed.
All is good so far, let's check what keys we already have:
ls -lah ~/.ssh
There may only be keys created with the hostname. See if you can use them:
ssh-copy-id user@host_ip_address
If there's an error, you may need to generate new keys:
ssh-keygen
Accept the default values and no passphrase if it prompts you; this adds the new keys to the home directory without overwriting existing ones. Now you can run ssh-copy-id again.
It's a good idea to test it with something like
ssh user@host_ip_address ls
If it works, so should ssh, scp, rsync etc. in the Jenkins jobs. Otherwise, check the console output to see the error messages and try those exact commands on the shell as done above.
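For example, a shell step in a Jenkins job might then do something like the following (user, host and paths are placeholders):
scp build/output.tar.gz deploy@app-server:/var/deploy/
rsync -av build/ deploy@app-server:/var/deploy/build/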

How to use ssh command in shell script?

I know that we should do
ssh user@target
but where do we specify the password?
Hmm thanks for all your replies.
My requirement is that I have to start up some servers on different machines. All servers should be started with one shell script. Well, entering the password every time seems a little bad, but I guess I will have to resort to that option. One reason why I don't want to save the public keys is that I may not connect to the same machines every time. It is easy to go back and modify the script to change target addresses, though.
The best way to do this is by generating a private/public key pair, and storing your public key on the remote server. This is a secure way to login w/o typing in a password each time.
Read more here
This cannot be done with a simple ssh command, for security reasons. If you want to use the password route with ssh, the following link shows some scripts to get around this, if you are insistent:
Scripts to automate password entry
The ssh command will prompt for your password. It is unsafe to specify passwords on the commandline, as the full command that is executed is typically world-visible (e.g. ps aux) and also gets saved in plain text in your command history file. Any well written program (including ssh) will prompt for the password when necessary, and will disable teletype echoing so that it isn't visible on the terminal.
If you are attempting to execute ssh from cron or from the background, use ssh-agent.
The way I have done this in the past is just to set up a pair of authentication keys.
That way, you can log in without ever having to specify a password and it works in shell scripts. There is a good tutorial here:
http://linuxproblem.org/art_9.html
SSH Keys are the standard/suggested solution. The keys must be setup for the user that the script will run as.
For that script user, see if you have any keys set up in ~/.ssh/ (public key files end with a .pub extension).
If you don't have any keys setup you can run:
ssh-keygen -t rsa
which will generate ~/.ssh/id_rsa.pub (the -t option has other types as well)
You can then copy the contents of this file to ~(remote-user)/.ssh/authorized_keys on the remote machine.
As the script user, you can test that it works by:
ssh remote-user@remote-machine
You should be logged in without a password prompt.
Along the same lines, now when your script is run from that user, it can auto SSH to the remote machine.
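For the asker's use case (starting servers on several machines from one script), a minimal sketch could look like this, with the host names, user and script path as placeholders:
for host in host1 host2 host3; do
  ssh -o BatchMode=yes deploy@"$host" '/opt/servers/start.sh'
done
BatchMode=yes makes ssh fail straight away instead of prompting if key authentication is not set up for a host.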
If you really want to use password authentication, you can try expect. See here for an example.

How to get Hudson CI to check out CVS projects over SSH?

I have my Hudson CI server set up. I have a CVS repo that I can only check out via ssh. But I see no way to convince Hudson to check out via ssh. I tried all sorts of options when supplying my connection string.
Has anyone done this? I gotta think it has been done.
If I still remember CVS, I think you have to set the CVS_RSH environment variable to ssh. I suspect you need to set this so that your Tomcat process inherits this value.
You can check Hudson system information to see exactly what environment variables the JVM is seeing (and passes along to the build).
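A quick sketch of the CVS-over-ssh setup (the server, repository path and module name are placeholders):
export CVS_RSH=ssh
cvs -d :ext:builduser@cvs.example.com:/cvsroot checkout mymodule
The :ext: access method is what makes CVS run the program named in CVS_RSH to reach the server.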
I wrote up an article that tackles this; you can find it here:
http://www.openscope.net/2011/01/03/configure-ssh-authorized-keys-for-cvs-access/
Essentially you want to set up passphraseless ssh keys for your build user. This will allow authentication to occur without the need to work out some kind of way to key in your password.
<edit> i.e. Essentially the standard .ssh key client & server install/exchange.
http://en.wikipedia.org/wiki/Secure_Shell#Key_management
for the jenkins user account:
install user key (public & private part) in ~/.ssh (generate it fresh or use existing user key)
on cvs server:
install user key (public part) in ~/.ssh
add to authorized_keys
back on jenkins user account:
access cvs from command-line as jenkins user and accept remote host key (to known_hosts)
* note: any time the remote server changes its key/IP you will need to manually access cvs and accept the key again *
</edit>
There's another way to do it, but you have to manually log in from the build machine to your cvs server and keep the ssh session open so hudson/jenkins can piggyback on the connection. Seemed kind of pointless to me though, since you want your CI server to be as hands-off as possible.