I set up Jenkins CI to deploy my PHP app to our QA Apache server and I ran into an issue. I successfully set up public-key authentication from the local jenkins account to the remote apache account, but when I use rsync, I get the following error:
[jenkins@build ~]# rsync -avz -e ssh test.txt apache@site.example.com:/path/to/site
protocol version mismatch -- is your shell clean?
(see the rsync man page for an explanation)
rsync error: protocol incompatibility (code 2) at compat.c(64) [sender=2.6.8]
[jenkins@build ~]#
One potential problem is that the remote apache account doesn't have a valid shell. Should I create a remote account with shell access that is part of the "apache" group? It is not an SSH key problem: ssh apache@site.example.com connects successfully, but quickly kicks me out since apache doesn't have a shell.
That would probably be the easiest thing to do. You will probably want to set it up with a limited shell like rssh or scponly so that it only allows file transfers. You may also want to set up a chroot jail so that it can't see your whole filesystem.
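A rough sketch of how that might look, assuming rssh is installed from your distribution's packages (paths and the config file location can differ):
# Give the remote apache account a restricted shell that only permits file transfers.
sudo usermod -s /usr/bin/rssh apache
# rssh denies everything by default; enable what you need in /etc/rssh.conf:
#   allowscp
#   allowsftp
#   allowrsync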
I agree that that would probably be the easiest thing to do. We do something similar, but use scp instead. Something like:
scp /path/to/test.txt apache@site.example.com:/path/to/site
I know this is a pretty old thread, but if somebody comes across this page in the future...
I had the same problem, but got it fixed when I fixed my .bashrc.
I removed the statement "echo setting DISPLAY=$DISPLAY" which I previously had in my .bashrc. Anything your startup files print during a non-interactive login gets mixed into rsync's data stream, which is why rsync chokes on that statement.
So, fixing .bashrc/.cshrc/.profile errors helped me.
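For anyone hitting the same error, a quick way to check whether the remote login shell is "clean" (a test along the lines of what the rsync man page suggests; user and host below are placeholders):
# If the remote shell is clean, the file ends up empty (0 bytes).
ssh user@remote-host /bin/true > /tmp/shell-output.txt
wc -c /tmp/shell-output.txt
# Anything printed here by the remote .bashrc/.profile gets mixed into rsync's
# protocol stream and triggers the "protocol version mismatch" error.
cat /tmp/shell-output.txt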
I am trying to run a script that clones a repository and then builds it inside my Docker build.
It is a private repository, so I have copied my SSH keys into the Docker image.
But the command below does not seem to work:
yes yes | git clone (ssh link to my private repository.)
When I manually try to run the script on my local system it shows the same prompt, but it works fine for other commands.
I do have access to the repository, since I can type yes and it works.
But I can't type yes during docker build.
Any help will be appreciated.
This is purely an ssh issue. When ssh is connecting to a host for the "first time",[1] it obtains a "host fingerprint" and prints it, then opens /dev/tty to interact with the human user so as to obtain a yes/no answer about whether it should continue connecting. You cannot defeat this by piping to its standard input.
Fortunately, ssh has about a billion options, including:
the option to obtain the host fingerprint in advance, using ssh-keyscan, and
the option to verify a host key via DNS.
The first is the one to use here: run ssh-keyscan and create a known_hosts file in the .ssh directory. Security considerations will tell you how careful to be about this (i.e., you must decide how paranoid to be).
1"First" is determined by whether there's a host key in your .ssh/known_hosts file. Since you're spinning up a Docker image that you then discard, every time is the first time. You could set up a docker image that has the file already in it, so that no time is the first time.
I have to upload, compile and run some code on a remote system. It turned out, that the following mechanism works fine:
rsync -avz /my/code me#the-remote-host.xyz:/my/code
ssh me#the-remote-host.xyz 'cd /my/code; make; ./my_program'
While it's maybe not the best looking solution, it has the advantage that it's completely self-contained.
Now, the problem is: I need to do the same thing on another remote system which is not directly accessible from the outside via ssh, but only through a login node acting as a proxy. On that system, if I just want to execute a plain ssh command, I need to do the following:
[my local computer]$ ssh me#the-login-node.xyz
[the login node]$ ssh me#the-actual-system.xyz
[the actual system]$ make
How do I need to modify the above script in order to "tunnel" rsync and ssh via the-login-node to the-actual-system? I would also prefer a solution that is completely contained in the script.
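One common way to do this, sketched with the host names from the question (this assumes an OpenSSH new enough to support the -J/ProxyJump option):
# rsync tunnelled through the login node
rsync -avz -e 'ssh -J me@the-login-node.xyz' /my/code me@the-actual-system.xyz:/my/code
# remote build/run, also via the login node
ssh -J me@the-login-node.xyz me@the-actual-system.xyz 'cd /my/code; make; ./my_program'
On older OpenSSH versions the same effect can usually be had with -o ProxyCommand='ssh -W %h:%p me@the-login-node.xyz'.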
Hi, I can log in to the GCE VM with WinSCP using my own username, but I cannot log in as root. According to Google this is the default, and it can be changed.
Changed it like this:
Step 1: Log in over SSH and su to root
# sudo su root
Step 2: Change the root password
# passwd root
Step 3: Configure sshd to allow root login
# nano /etc/ssh/sshd_config
PermitRootLogin yes
PasswordAuthentication yes
# service sshd restart (on Ubuntu the service is called "ssh", so I used "service ssh restart" since sshd wouldn't work)
Tried to log in as root via WinSCP but I get:
"Received too large (1349281121 B) SFTP packet. Max supported packet size is 1024000 B. The error is typically caused by message printed from startup script (like .profile). The message may start with 'Plea'. Cannot initialize SFTP protocol. Is the host running a SFTP server?"
Any ideas?
Received too large SFTP packet. Max supported packet size is 102400 B
Cause:
This problem can arise when your .bashrc file is printing data to the screen (e.g. archey, screenfetch). The .bashrc file runs every time any console shell is initialized.
Solution:
Simply move any scripts that generate output from your .bashrc file to your .bash_profile. The .bash_profile is only read by login shells (an interactive login session), not by the non-interactive shell that the SFTP server starts, so its output never ends up in the SFTP stream.
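If you'd rather keep those commands in .bashrc, another option (a sketch; archey here stands for whatever command is producing the output) is to guard them so they only run in interactive shells:
# ~/.bashrc
case $- in
    *i*) ;;       # interactive shell: carry on
    *) return ;;  # non-interactive shell (e.g. the one spawned for SFTP): bail out
esac
archey            # anything that prints output goes below the guard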
NOTE: Just for anyone who comes across this and simply wants to copy files, and doesn't care which file protocol they use: you can just switch the file protocol from SFTP to SCP to avoid this issue. Thought it might be worth a mention.
If you use Ubuntu Linux and get "Please login as the Ubuntu user" when you try to connect to the server, you should connect over SFTP as the ubuntu user, not as root. (That message is also where the odd packet size comes from: its first four bytes, "Plea", end up being read as the SFTP packet length.)
Try that; hope it will work for you!
Thanks!
Hmmm, I added this in WinSCP in advanced settings under "protocol options":
sudo /usr/lib/openssh/sftp-server
I can log in with my own username and move files now. Although I'm not exactly sure how this works, I believe it starts the SFTP server through sudo, so file operations run as root even though you log in as your own user.
More info: https://winscp.net/eng/docs/faq_su
See WinSCP article on Received too large (... B) SFTP packet. Max supported packet size is 102400 B
If … (from the subject [error message]) is a very large number, then the problem is typically caused by a message printed from some profile/logon script. This violates the SFTP protocol. Some of these scripts are executed even for non-interactive (no TTY) sessions, so they must not print anything (nor ask the user to type anything).
To add to @ThatOneCoder's answer on the cause being too much output from .bashrc: in e.g. Ubuntu, there is also the system-wide /etc/bash.bashrc that might be "too wordy" and cause the Received too large SFTP packet error.
It's a "system wide .bashrc", and if you want to execute code for all logging in, that's one location to place it. If you nixed ~/.bashrc and still get the error, check the contents of /etc/bash.bashrc.
It is happening because you haven't given shell access permission to the user.
I faced the same issue trying to log in to my Ubuntu 16.04 EC2 server as "root" via WinSCP. I spent a lot of time trying to fix it, but in the end a simple workaround worked for me.
I SSH'd into the instance using PuTTY with the username "ubuntu". After this I typed
sudo -i
and with this the user was changed to root.
Jenkins keeps using the default "jenkins" user when executing builds. My build requires a number of SSH calls. However, these SSH calls fail with host verification exceptions because I haven't been able to place the public key for this user on the target server.
I don't know where the default "jenkins" user is configured and therefore can't generate the required public key to place on the target server.
Any suggestions for any of the following:
A way to force Jenkins to use a user I define
A way to enable SSH for the default Jenkins user
A way to fetch the password for the default 'jenkins' user
Ideally I would like to be able to do all of these; any help greatly appreciated.
Solution: I was able to access the default Jenkins user by SSHing in from the target server. Once I was logged in as the jenkins user I was able to generate the public/private RSA keys, which then allowed password-free access between the servers.
Because with numerous slave machines it can be hard to anticipate which of them the build will be executed on, rather than explicitly calling ssh I highly suggest using the existing Jenkins plug-ins for executing remote commands over SSH:
Publish Over SSH - execute SSH commands or transfer files over SCP/SFTP.
SSH - execute SSH commands.
The default 'jenkins' user is the system user running your Jenkins instance (master or slave). Depending on your installation, this user may have been created either by the install scripts (deb/rpm/pkg etc.) or manually by your administrator. It may or may not be called 'jenkins'.
To find out which user your Jenkins instance is running as, open http://$JENKINS_SERVER/systemInfo, available from your Manage Jenkins menu.
There you will find your user.home and user.name. E.g. in my case on a Mac OS X master:
user.home /Users/Shared/Jenkins/Home/
user.name jenkins
Once you have that information you will need to log onto that jenkins server as the user running jenkins and ssh into those remote servers to accept the ssh fingerprints.
An alternative (that I've never tried) would be to use a custom jenkins job to accept those fingerprints by for example running the following command in a SSH build task:
ssh -o "StrictHostKeyChecking no" your_remote_server
This last tip is of course completely unacceptable from a pure security point of view :)
So one might make a "job" which writes the host keys as a constant, like:
echo "....." > ~/.ssh/known_hosts
just fill in the dots with the output of ssh-keyscan -t rsa {ip}, after you have verified it.
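In other words, something along these lines (192.0.2.10 is a placeholder for your server's address; verify the key against a fingerprint obtained some other way before appending it):
ssh-keyscan -t rsa 192.0.2.10 > /tmp/hostkey
cat /tmp/hostkey                      # verify it first
mkdir -p ~/.ssh
cat /tmp/hostkey >> ~/.ssh/known_hosts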
That's correct: pipeline jobs will normally run as the user jenkins, which means that SSH access needs to be set up for this account for it to work in pipeline jobs. People have all sorts of complex build environments, so it seems like a fair requirement.
As stated in one of the answers, each individual configuration could be different, so check under "System Information" or similar, in "Manage Jenkins" on the web UI. There should be a user.home and a user.name for the home directory and the username respectively. On my CentOS installation these are "/var/lib/jenkins/" and "jenkins".
The first thing to do is to get shell access as the jenkins user, in our case. Because this is an auto-generated service account, the shell is not enabled by default. Assuming you can log in as root (or preferably some other user, in which case you'll need to prepend sudo), switch to jenkins as follows:
su -s /bin/bash jenkins
Now you can verify that it's really jenkins and that you entered the right home directory:
whoami
echo $HOME
If these don't match what you see in the configuration, do not proceed.
All is good so far, let's check what keys we already have:
ls -lah ~/.ssh
There may already be keys in there, created with the hostname. See if you can use them:
ssh-copy-id user@host_ip_address
If there's an error, you may need to generate new keys:
ssh-keygen
Accept the default values and an empty passphrase when it prompts you; the new keys are added under the home directory, taking care not to overwrite existing ones. Now you can run ssh-copy-id again.
It's a good idea to test it with something like
ssh user@host_ip_address ls
If it works, so should ssh, scp, rsync etc. in the Jenkins jobs. Otherwise, check the console output to see the error messages and try those exact commands on the shell as done above.
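One small addition that can help inside a job (user and host are placeholders): run the test with BatchMode so a missing key or unknown host fails immediately instead of the build hanging on a password or yes/no prompt.
# e.g. in an "Execute shell" build step
ssh -o BatchMode=yes user@host_ip_address true && echo "key-based auth OK"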
I need to copy a file from a remote machine to my local machine and I need to automate it.
I've tried the SCP command and it's working; however, I could not automate the part where it asks for the password of the user on the local machine and the remote machine.
Based on this article I can Perform SSH Login Without Password Using ssh-keygen & ssh-copy-id
After following all the instructions written there, I tried to access the remote machine using this:
ssh lalala@XXX.XXX.XXX.XXX
It works; it doesn't ask for the password anymore. But when I try to copy a file from that machine using the command below,
scp lalala@XXX.XXX.XXX.XXX:'/a/b/c.txt' lelele@XXX.XXX.XXX.YYY:'/b/c/'
it still asks for the password of the local machine, which is lelele@XXX.XXX.XXX.YYY.
I wonder if I did something wrong? What could it be? Is there something wrong with the format of the command?
BTW, I'm using CentOS, and I'm planning to script it in Python.
If you are copying to your local machine why don't you just do
scp lalala@XXX.XXX.XXX.XXX:'/a/b/c.txt' /b/c/
?
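The reason the remote-to-remote form prompts is that, by default, scp makes the first host connect directly to the second, so the keys you set up on your local machine don't come into play. If you really do need that form, one option (a sketch using the addresses from the question) is the -3 flag, which routes the copy through your local machine so both connections use your local keys, at the cost of transferring the data twice:
scp -3 lalala@XXX.XXX.XXX.XXX:'/a/b/c.txt' lelele@XXX.XXX.XXX.YYY:'/b/c/'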
I tried your line on a machine with a similar setup and didn't get asked for a password; I got an error instead, but this is probably due to differences in our configurations. I tried mine and it worked.
Regarding whether your connection succeeds on the remote machine, you could tail this file there:
tail -f /var/log/secure
If you see no error there, you can be sure (well, never say always) that your setup with the generated keys is working.
In this case I bet you'll see no error there.
I think you may have multiple SSH keys and need to set IdentitiesOnly to yes. If so, please check this answer: https://askubuntu.com/a/999306/398861
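For reference, that setting goes in ~/.ssh/config; a sketch with the user, host and key file as placeholders:
# ~/.ssh/config
Host XXX.XXX.XXX.YYY
    User lelele
    IdentityFile ~/.ssh/id_rsa
    IdentitiesOnly yes   # offer only this key, not every identity the agent holds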