I have my Hudson CI server set up. I have a CVS repo that I can only check out over SSH, but I see no way to convince Hudson to check out via SSH. I tried all sorts of options when supplying my connection string.
Has anyone done this? I have to think it has been done.
If I still remember CVS correctly, you have to set the CVS_RSH environment variable to ssh. I suspect you need to set this so that your Tomcat process inherits the value.
You can check Hudson's system information page to see exactly which environment variables the JVM sees (and passes along to the build).
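A minimal sketch, assuming Hudson runs inside a Tomcat instance started from a shell script (the Tomcat path and the CVS host are placeholders, not your actual setup):
export CVS_RSH=ssh                       # make CVS use ssh for the :ext: access method
/opt/tomcat/bin/catalina.sh start        # start Tomcat so the Hudson JVM inherits CVS_RSH
The connection string would then use the :ext: method, e.g. :ext:builduser@cvs.example.com:/cvsroot.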
I wrote up an article that tackles this; you can find it here:
http://www.openscope.net/2011/01/03/configure-ssh-authorized-keys-for-cvs-access/
Essentially you want to set up passphraseless SSH keys for your build user. This will allow authentication to occur without the need to work out some way of keying in your password.
<edit> i.e. essentially the standard SSH key client & server install/exchange.
http://en.wikipedia.org/wiki/Secure_Shell#Key_management
For the jenkins user account:
install the user key (public & private parts) in ~/.ssh (generate it fresh or use an existing user key)
On the CVS server:
install the user key (public part) in ~/.ssh
add it to authorized_keys
Back on the jenkins user account:
access CVS from the command line as the jenkins user and accept the remote host key (into known_hosts); see the command sketch after this list
* Note: any time the remote server changes its key/IP, you will need to manually access CVS and accept the key again. *
</edit>
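A minimal command sketch of that exchange (the builduser account and the cvs.example.com host are placeholders for illustration):
# on the Jenkins/Hudson machine, as the jenkins user:
ssh-keygen -t rsa                                        # accept the defaults, empty passphrase
scp ~/.ssh/id_rsa.pub builduser@cvs.example.com:/tmp/jenkins.pub
# on the CVS server, as the account the checkout will use:
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat /tmp/jenkins.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# back on the Jenkins machine, accept the host key and test a checkout:
CVS_RSH=ssh cvs -d :ext:builduser@cvs.example.com:/cvsroot checkout somemodule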
There's another way to do it, but you have to manually log in from the build machine to your CVS server and keep the SSH session open so Hudson/Jenkins can piggyback on the connection. That seemed kind of pointless to me, though, since you want your CI server to be as hands-off as possible.
I am trying to run a script that clones a repository and then builds it inside my Docker image.
It is a private repository, so I have copied my SSH keys into the Docker image.
But it seems the command below does not work:
yes yes | git clone (ssh link to my private repository)
When I manually try to run the script on my local system it shows the same thing, but it works fine for other commands.
I do have access to the repository, since if I type yes it works.
But I can't type yes during a docker build.
Any help will be appreciated.
This is purely an SSH issue. When ssh connects to a host for the "first time",[1] it obtains a "host fingerprint" and prints it, then opens /dev/tty to interact with the human user so as to obtain a yes/no answer about whether it should continue connecting. You cannot defeat this by piping to its standard input.
Fortunately, ssh has about a billion options, including:
the option to obtain the host fingerprint in advance, using ssh-keyscan, and
the option to verify a host key via DNS.
The first is the one to use here: run ssh-keyscan and create a known_hosts file in the .ssh directory. Security considerations will tell you how careful to be about this (i.e., you must decide how paranoid to be).
1"First" is determined by whether there's a host key in your .ssh/known_hosts file. Since you're spinning up a Docker image that you then discard, every time is the first time. You could set up a docker image that has the file already in it, so that no time is the first time.
Recently our web hoster (Domainfactory) changed the method for externally accessing our online MySQL database: from simple SSH port forwarding to a Unix socket tunnel.
The ssh call looks like this (and it works!):
ssh -N -L 5001:/var/lib/mysql5/mysql5.sock ssh-user@ourdomain.tld
The problem: you have to enter the password every single time.
In the past I used BitVise SSH client to create a profile (which also stores the encrypted password). By simply double-clicking on the profile you'll be automatically logged in.
Unfortunately, neither the "BitVise SSH client" nor "PuTTY" (plink.exe) supports the Unix socket tunnel feature/extension, so I can't use these tools any more.
Does anyone have an idea how to realize an automated login (script, tool, whatever)?
The employees who access the database must not, under any circumstances, know the SSH password!
I got a solution. The trick is to generate an SSH key pair (private and public) on the client side (the Windows machine) by calling 'ssh-keygen'. Important: don't secure the SSH keys with a passphrase (simply press [Enter] if you're asked for one, otherwise you'll be asked for the key's passphrase every time you try to SSH). Two files will be generated inside 'c:\Users\your_user\.ssh\': 'id_rsa' (private key) and 'id_rsa.pub' (public key).
On the server side, create a '.ssh' directory within your user's home directory. Inside the '.ssh' directory, create a simple text file called 'authorized_keys'. Copy the contents of 'id_rsa.pub' into this file (unfortunately 'ssh-copy-id' isn't available for Windows yet, so you have to do the copy and paste yourself). Set the permissions of the 'authorized_keys' file to '600'.
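A minimal sketch of that server-side part as shell commands (run on ourdomain.tld in the ssh-user's home directory; the key material is a placeholder you paste in from your own id_rsa.pub):
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "ssh-rsa AAAA...your-public-key... your_user@your-pc" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys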
Now you should be able to simply SSH into your server by calling 'ssh ssh-user@ourdomain.tld' without entering a password. Create a batch file with your individual ssh call (the tunnel command above) and you're done.
Thanks to Scott Hanselman for his tutorial: https://www.hanselman.com/blog/how-to-use-windows-10s-builtin-openssh-to-automatically-ssh-into-a-remote-linux-machine
So, I am new to the GitLab server. Now, what I want to achieve is this:
Allow access to repositories only for certain SSH keys. There are a limited number of machines and a limited number of users, so if a user adds an SSH key outside these sets of keys, the repo should not be clonable there. Because my team size is small, I am okay with adding only those public keys to the account myself.
I am fine with the idea of SSH access, but currently, as an admin, I lose the freedom to conveniently track or choose which SSH keys can access my repo. Can I disable users from adding SSH keys?
Is there any other way to ensure this? Instead of SSH-enabled access, would HTTPS with IP whitelisting work?
GitLab was, in the beginning (2011), based upon gitolite, but switched to its own mechanism in 2013.
Nowadays, it is best to declare a GitLab project private and add users to said project: that way you won't have to manage SSH or HTTPS access, and any user who is not part of that project won't be able to see or clone it (over HTTPS or SSH).
In other words, repository access is no longer based on SSH keys (not for years), but is based on project visibility.
The OP adds:
even if a user is part of a project, he should only be able to clone the project on certain remote machines.
That is not a Git or GitLab feature, which means you need:
to restrict Git protocols on GitLab to SSH only
to change the gitlab-shell SSH forced-command script so that it only allows commands coming from certain IPs (a sketch follows this list)
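A minimal sketch of the kind of IP check such a forced-command wrapper could perform. The gitlab-shell path, the allowlist, and the idea of pointing the command= entry in authorized_keys at this wrapper are assumptions for illustration, not a documented GitLab mechanism:
#!/bin/sh
# hypothetical wrapper: only let clients from whitelisted IPs reach gitlab-shell
ALLOWED_IPS="10.0.0.5 10.0.0.6"                       # placeholder addresses
CLIENT_IP=$(echo "$SSH_CLIENT" | awk '{print $1}')    # sshd sets SSH_CLIENT to "client-ip client-port server-port"
for ip in $ALLOWED_IPS; do
  if [ "$CLIENT_IP" = "$ip" ]; then
    exec /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell "$@"
  fi
done
echo "Access denied from $CLIENT_IP" >&2
exit 1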
There is a "restrict group access by IP address" feature, since GitLab 12.0 (June 2019), but... only in GitLab Ultimate (meaning: "not free").
Jenkins keeps using the default "jenkins" user when executing builds. My build requires a number of SSH calls. However, these SSH calls fail with host verification exceptions because I haven't been able to place the public key for this user on the target server.
I don't know where the default "jenkins" user is configured and therefore can't generate the required public key to place on the target server.
Any suggestions for either:
A way to force Jenkins to use a user I define
A way to enable SSH for the default Jenkins user
A way to fetch the password for the default 'jenkins' user
Ideally I would like to be able to do all of the above; any help greatly appreciated.
Solution: I was able to access the default Jenkins user with an SSH request from the target server. Once I was logged in as the jenkins user I was able to generate the public/private RSA keys, which then allowed password-free access between the servers.
Because with numerous slave machines it can be hard to anticipate which of them a build will be executed on, rather than explicitly calling ssh I highly suggest using the existing Jenkins plug-ins for executing remote commands over SSH:
Publish Over SSH - execute SSH commands or transfer files over SCP/SFTP.
SSH - execute SSH commands.
The default 'jenkins' user is the system user running your Jenkins instance (master or slave). Depending on your installation, this user may have been generated either by the install scripts (deb/rpm/pkg etc.) or manually by your administrator. It may or may not be called 'jenkins'.
To find out which user your Jenkins instance runs as, open http://$JENKINS_SERVER/systemInfo, available from your Manage Jenkins menu.
There you will find user.home and user.name. E.g. in my case, on a Mac OS X master:
user.home /Users/Shared/Jenkins/Home/
user.name jenkins
Once you have that information, you will need to log onto that Jenkins server as the user running Jenkins and ssh into those remote servers to accept the SSH fingerprints.
An alternative (that I've never tried) would be to use a custom Jenkins job to accept those fingerprints, for example by running the following command in an SSH build task:
ssh -o "StrictHostKeyChecking no" your_remote_server
This last tip is of course completely unacceptable from a pure security point of view :)
So one might make a "job" which writes the host keys as a constant, like:
echo "....." > ~/.ssh/known_hosts
just fill in the dots from the output of ssh-keyscan -t rsa {ip}, after you have verified it.
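A sketch of such a job body as a shell build step (the IP and key are placeholders; verify the fingerprint out of band before pinning it):
ssh-keyscan -t rsa 192.0.2.10                                                    # run this once, inspect and verify the output
mkdir -p ~/.ssh
echo "192.0.2.10 ssh-rsa AAAA...the-key-you-verified..." > ~/.ssh/known_hosts    # note: > overwrites the file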
That's correct: pipeline jobs will normally run as the user jenkins, which means that SSH access needs to be set up for this account for it to work in pipeline jobs. People have all sorts of complex build environments, so it seems like a fair requirement.
As stated in one of the answers, each individual configuration could be different, so check under "System Information" or similar, in "Manage Jenkins" on the web UI. There should be a user.home and a user.name for the home directory and the username respectively. On my CentOS installation these are "/var/lib/jenkins/" and "jenkins".
The first thing to do is to get shell access as the user jenkins (in our case). Because this is an auto-generated service account, the shell is not enabled by default. Assuming you can log in as root, or preferably as some other user (in which case you'll need to prepend sudo), switch to jenkins as follows:
su -s /bin/bash jenkins
Now you can verify that it's really jenkins and that you entered the right home directory:
whoami
echo $HOME
If these don't match what you see in the configuration, do not proceed.
All is good so far, let's check what keys we already have:
ls -lah ~/.ssh
There may already be keys in there. See if you can use them:
ssh-copy-id user@host_ip_address
If there's an error, you may need to generate new keys:
ssh-keygen
Accept the default values and use no passphrase; if it asks about overwriting existing keys in the home directory, don't overwrite them. Now you can run ssh-copy-id again.
It's a good idea to test it with something like
ssh user@host_ip_address ls
If it works, so should ssh, scp, rsync etc. in the Jenkins jobs. Otherwise, check the console output to see the error messages and try those exact commands on the shell as done above.
I know that we should do
ssh user@target
but where do we specify the password?
Hmm thanks for all your replies.
My requirement is that I have to start up some servers on different machines. All servers should be started with one shell script. Well, entering the password every time seems a little bad, but I guess I will have to resort to that option. One reason I don't want to save the public keys is that I may not connect to the same machines every time. It is easy to go back and modify the script to change target addresses, though.
The best way to do this is by generating a private/public key pair and storing your public key on the remote server. This is a secure way to log in without typing in a password each time.
Read more here
This cannot be done with a simple ssh command, for security reasons. If you want to use the password route with ssh, the following link shows some scripts to get around this, if you are insistent:
Scripts to automate password entry
The ssh command will prompt for your password. It is unsafe to specify passwords on the command line, as the full command that is executed is typically world-visible (e.g. via ps aux) and also gets saved in plain text in your command history file. Any well-written program (including ssh) will prompt for the password when necessary, and will disable teletype echoing so that it isn't visible on the terminal.
If you are attempting to execute ssh from cron or from the background, use ssh-agent.
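A minimal sketch of the ssh-agent route (the key path and host are placeholders); note that a cron job will only see the agent if SSH_AUTH_SOCK is made available to its environment:
eval "$(ssh-agent -s)"          # start an agent and export SSH_AUTH_SOCK / SSH_AGENT_PID
ssh-add ~/.ssh/id_rsa           # enter the key's passphrase once
ssh user@target uptime          # later ssh calls from this environment won't prompt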
The way I have done this in the past is just to set up a pair of authentication keys.
That way, you can log in without ever having to specify a password and it works in shell scripts. There is a good tutorial here:
http://linuxproblem.org/art_9.html
SSH keys are the standard/suggested solution. The keys must be set up for the user that the script will run as.
For that script user, see if you have any keys set up in ~/.ssh/ (public key files end with a .pub extension).
If you don't have any keys setup you can run:
ssh-keygen -t rsa
which will generate ~/.ssh/id_rsa (the private key) and ~/.ssh/id_rsa.pub (the public key); the -t option supports other key types as well
You can then copy the contents of this file to ~(remote-user)/.ssh/authorized_keys on the remote machine.
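One common way to do that copy from the command line (remote-user and remote-machine are the same placeholders used below):
cat ~/.ssh/id_rsa.pub | ssh remote-user@remote-machine 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'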
As the script user, you can test that it works by:
ssh remote-user@remote-machine
You should be logged in without a password prompt.
Along the same lines, when your script is now run as that user, it can SSH to the remote machine automatically, as in the sketch below.
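Tying it back to the requirement of starting servers on several machines from one script, a sketch like this would then run without any password prompts (the host names and start command are placeholders):
#!/bin/sh
for host in server1.example.com server2.example.com; do
    ssh remote-user@"$host" '/path/to/start-server.sh' &    # start each server in the background
done
wait                                                         # wait for all the ssh commands to return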
If you really want to use password authentication, you can try expect. See here for an example.