Cannot ssh into remote machine after rsync - ssl

I followed this page on Protecting the Docker daemon Socket with HTTPS to generate ca.pem, server-key.pem, server-cert.pem, key.pem and key-cert.pem.
I wanted a remote Docker daemon to use those keys, so I used rsync over ssh to send three of the files (ca.pem, server-key.pem and key.pem) to the remote host's home directory. The identity file for ssh into the remote host is called dl-datatest-internal.pem.
ubuntu@ip-10-3-1-174:~$ rsync -avz --progress -e "ssh -i dl-datatest-internal.pem" dockerCer/ core@10.3.1.181:~/
sending incremental file list
./
ca.pem
server-cert.pem
server-key.pem
sent 3,410 bytes received 79 bytes 6,978.00 bytes/sec
total size is 4,242 speedup is 1.22
The remote host stopped recognising the identity file ever since and started asking for a non-existent password.
ubuntu@ip-10-3-1-174:~$ ssh -i dl-datatest-internal.pem core@10.3.1.151
core@10.3.1.151's password:
Does anyone know why and how to fix it? I still have all the keys if that helps.

There are a couple of things about the rsync command that bother me, but I can't put my finger on the problem (if there is one).
The rsync command and the subsequent ssh command reference different hosts: rsync goes to core@10.3.1.181:~/
and the ssh goes to core@10.3.1.151. Those are different machines, no?
The ~ in the target of the rsync command, core@10.3.1.181:~/. I am pretty sure that the ~/ references core's home directory, but you could just get rid of the ~/ and replace it with a . (dot).
If you can reproduce the environment you did the copy in, you can add --dry-run to the rsync command to see what it is going to do, as in the example below. Looking at this command I can't see it erasing the target's .ssh directory.
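For illustration, using the exact paths from the question (nothing is transferred when --dry-run is given):
rsync -avz --dry-run --progress -e "ssh -i dl-datatest-internal.pem" dockerCer/ core@10.3.1.181:~/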

Related

Rsync Won't Connect

I've just set up a new website on SiteGround and am trying to sync files to it from another server, however, when running the command, I'm getting the following error:
Enter passphrase for key '/cygdrive/c/Users/Example User/.ssh/id_rsa':
Permission denied (publickey).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at /home/lapo/packaging/rsync-3.0.4-1/src/rsync-3.0.4/io.c(632) [sender=3.0.4]
The passphrase is definitely correct.
Here's the rsync command that I'm running:
rsync -avz --exclude-from='exclude.txt' -e 'ssh -p 1234' test/test.txt user@host:/home/user/www/sitename/public_html/
I know my keys and passphrase are correct because I'm able to ssh on to the server with no issues, using the following command:
ssh user@host -p 1234
I've tried many tweaks to the rsync command, such as making sure there are trailing slashes and that the directories exist. I've tried it both with -avz and without. I've also tried a key without a passphrase.
I also thought that maybe SiteGround was blocking things, but rsync is available on the server, so I don't know why that would be.
Unfortunately, despite being able to SSH to the server, rsync simply won't connect, even though it's using the same key, passphrase and port.
Where am I going wrong?
I've managed to connect now, and should anyone else run into the issue, the fix that worked for me was to create a new SSH key outside of the Users directory. I'm putting this down to permissions: as soon as I moved the key to its own folder on the C drive, everything worked as expected! (See the sketch below for pointing rsync at the new key.)
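A hedged sketch of what that looks like in practice; the key location is illustrative, not taken from the original post:
rsync -avz --exclude-from='exclude.txt' -e 'ssh -p 1234 -i /cygdrive/c/ssh-keys/id_rsa' test/test.txt user@host:/home/user/www/sitename/public_html/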

Inject host's SSH keys into Docker Machine with Docker Compose

I am using Docker on Mac OS X with Docker Machine (with the default boot2docker machine), and I use docker-compose to set up my development environment.
Let's say that one of the containers is called "stack". Now what I want to do is call:
docker-compose run stack ssh user@stackoverflow.com
My public key (which has been added to stackoverflow.com and which will be used to authenticate me) is located on the host machine. I want this key to be available to the Docker Machine container so that I will be able to authenticate myself against stackoverflow using that key from within the container. Preferably without physically copying my key to Docker Machine.
Is there any way to do this? Also, if my key is password protected, is there any way to unlock it once so after every injection I will not have to manually enter the password?
You can add this to your docker-compose.yml (assuming your user inside the container is root):
volumes:
  - ~/.ssh:/root/.ssh
You can also check out a more advanced solution using the ssh agent (I have not tried it myself).
WARNING: This feature seems to have limited support in Docker Compose and is more designed for Docker Swarm.
(I haven't checked to make sure, but) My current impression is that:
In Docker Compose secrets are just bind mount volumes, so there's no additional security compared to volumes
Ability to change secrets permissions with Linux host may be limited
See answer comments for more details.
Docker has a feature called secrets, which can be helpful here. To use it one could add the following code to docker-compose.yml:
---
version: '3.1' # Note the minimum file version for this feature to work
services:
  stack:
    ...
    secrets:
      - host_ssh_key

secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa
Then the new secret file can be accessed in Dockerfile like this:
RUN mkdir ~/.ssh && ln -s /run/secrets/host_ssh_key ~/.ssh/id_rsa
Secret files won't be copied into container:
When you grant a newly-created or running service access to a secret, the decrypted secret is mounted into the container in an in-memory filesystem
For more details please refer to:
https://docs.docker.com/engine/swarm/secrets/
https://docs.docker.com/compose/compose-file/compose-file-v3/#secrets
If you're using OS X and encrypted keys this is going to be a PITA. Here are the steps I went through figuring this out.
Straightforward approach
One might think that there’s no problem. Just mount your ssh folder:
...
volumes:
  - ~/.ssh:/root/.ssh:ro
...
This should be working, right?
User problem
Next thing we’ll notice is that we’re using the wrong user id. Fine, we’ll write a script to copy and change the owner of ssh keys. We’ll also set ssh user in config so that ssh server knows who’s connecting.
...
volumes:
  - ~/.ssh:/root/.ssh-keys:ro
command: sh -c './.ssh-keys.sh && ...'
environment:
  SSH_USER: $USER
...

# ssh-keys.sh
mkdir -p ~/.ssh
cp -r /root/.ssh-keys/* ~/.ssh/
chown -R $(id -u):$(id -g) ~/.ssh
cat <<EOF >> ~/.ssh/config
User $SSH_USER
EOF
SSH key passphrase problem
In our company we protect SSH keys using a passphrase. That wouldn’t work in docker since it’s impractical to enter a passphrase each time we start a container.
We could remove a passphrase (see example below), but there’s a security concern.
openssl rsa -in id_rsa -out id_rsa2
# enter passphrase
# replace passphrase-encrypted key with plaintext key:
mv id_rsa2 id_rsa
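An equivalent using OpenSSH's own tooling (my addition, not from the original answer; the same plaintext-key concern applies):
# strip the passphrase in place; you are prompted for the old one, and -N "" sets an empty new one
ssh-keygen -p -N "" -f id_rsa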
SSH agent solution
You may have noticed that locally you don’t need to enter a passphrase each time you need ssh access. Why is that?
That's what the SSH agent is for. The SSH agent is basically a server which listens on a special file, a Unix socket, called the "ssh auth sock". You can see its location on your system:
echo $SSH_AUTH_SOCK
# /run/user/1000/keyring-AvTfL3/ssh
The SSH client communicates with the SSH agent through this file, so you only have to enter your passphrase once. Once the key is decrypted, the SSH agent stores it in memory and sends it to the SSH client on request.
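For reference, this is the standard local flow (plain OpenSSH commands, nothing specific to this setup):
eval "$(ssh-agent -s)"   # start an agent and export SSH_AUTH_SOCK
ssh-add ~/.ssh/id_rsa    # enter the passphrase once; the agent caches the decrypted key
ssh-add -l               # list the keys the agent is currently holding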
Can we use that in Docker? Sure, just mount that special file and specify a corresponding environment variable:
environment:
  SSH_AUTH_SOCK: $SSH_AUTH_SOCK
...
volumes:
  - $SSH_AUTH_SOCK:$SSH_AUTH_SOCK
We don’t even need to copy keys in this case.
To confirm that keys are available we can use ssh-add utility:
if [ -z "$SSH_AUTH_SOCK" ]; then
  echo "No ssh agent detected"
else
  echo $SSH_AUTH_SOCK
  ssh-add -l
fi
The problem of unix socket mount support in Docker for Mac
Unfortunately for OS X users, Docker for Mac has a number of shortcomings, one of which is its inability to share Unix sockets between Mac and Linux. There’s an open issue in D4M Github. As of February 2019 it’s still open.
So, is that a dead end? No, there is a hacky workaround.
SSH agent forwarding solution
Luckily, this issue isn't new. Long before Docker there was a way to use local ssh keys within a remote ssh session. This is called ssh agent forwarding. The idea is simple: you connect to a remote server through ssh, and your local keys become usable there, effectively shared with that session.
With Docker for Mac we can use a smart trick: share the ssh agent with the docker virtual machine over a TCP ssh connection, then mount the resulting socket file from the virtual machine into any container where we need the SSH connection. Here's how the pieces fit together:
First, we create an ssh session to an ssh server running inside a container inside the Linux VM, over a TCP port. Here we use the real ssh auth sock on the Mac.
Next, the ssh server forwards our ssh keys to the ssh agent in that container. The agent listens on a Unix socket whose location is mounted into the Linux VM, i.e. the Unix socket lives and works in Linux; the non-working Unix socket file on the Mac side no longer matters.
After that, we create our actual working container with an SSH client, and share the Unix socket file that our local SSH session uses.
There's a set of scripts that simplifies this process:
https://github.com/avsm/docker-ssh-agent-forward
Conclusion
Getting SSH to work in Docker could've been easier, but it can be done, and it will likely be improved in the future. At least the Docker developers are aware of this issue; they have even solved it for Dockerfiles with build-time secrets, and there's a suggestion for how to support Unix domain sockets.
You can forward SSH agent:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # Forward local machine SSH key to docker
  environment:
    SSH_AUTH_SOCK: /ssh-agent
You can use a multi-stage build to build your containers. This is the approach you can take:
Stage 1: build an image with ssh
FROM ubuntu as sshImage
LABEL stage=sshImage
ARG SSH_PRIVATE_KEY
WORKDIR /root/temp
RUN apt-get update && \
apt-get install -y git npm
RUN mkdir /root/.ssh/ &&\
echo "${SSH_PRIVATE_KEY}" > /root/.ssh/id_rsa &&\
chmod 600 /root/.ssh/id_rsa &&\
touch /root/.ssh/known_hosts &&\
ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY package*.json ./
RUN npm install
RUN cp -R node_modules prod_node_modules
Stage 2: build your container
FROM node:10-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY ./ ./
COPY --from=sshImage /root/temp/prod_node_modules ./node_modules
EXPOSE 3006
CMD ["npm", "run", "dev"]
Add the environment attribute in your compose file:
environment:
- SSH_PRIVATE_KEY=${SSH_PRIVATE_KEY}
Then pass the arg from your build script like this:
docker-compose build --build-arg SSH_PRIVATE_KEY="$(cat ~/.ssh/id_rsa)"
And remove the intermediate image afterwards for security. Hope this helps, cheers.
Docker for Mac now supports mounting the ssh agent socket on macOS.
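A minimal compose fragment for that, assuming the /run/host-services/ssh-auth.sock path that Docker Desktop for Mac exposes (an assumption worth checking against the docs for your Docker Desktop version):
volumes:
  - /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock
environment:
  SSH_AUTH_SOCK: /run/host-services/ssh-auth.sock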

How to scp from init.d

I am trying to execute the following script from the /etc/init.d directory
sshpass -p 'password' scp root@remote_IP:/file /copy/to/my/location
The script runs perfectly from the command line, but from init.d I get a "Host key verification failed" error. When I use other commands, e.g.
echo "something" > /path/to/output/file.txt
the script is executed, so there is no problem with that. The problem is with the scp command, and I guess it has to do with the host key.
Can anyone give some advice on how to run scp from the /etc/init.d/ directory?
Thanks a lot in advance
"Host key verification failed" means that your ssh config requires strict host key verification, and that the key of the remote host is missing from, or does not match, the one in your known_hosts file.
If the init script is run as root, add the host key to /root/.ssh/known_hosts or (less secure) turn off host key verification for the remote host in /root/.ssh/config, as sketched below.
See https://askubuntu.com/questions/87449/how-to-disable-strict-host-key-checking-in-ssh for more discussion.
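A hedged sketch of both options, reusing the placeholders from the question (run as root):
# seed root's known_hosts once, so strict checking can stay on:
ssh-keyscan remote_IP >> /root/.ssh/known_hosts
# or, less secure, skip the check just for this one copy:
sshpass -p 'password' scp -o StrictHostKeyChecking=no root@remote_IP:/file /copy/to/my/location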

Pass ssh options to ssh-copy-id

I'm stuck in the Permission denied (publickey) hell trying to copy public key to a remote server so Jenkins can rsync files during builds.
Running:
sudo ssh-copy-id -i id_rsa.pub ubuntu@xx.xx.xx.xx
I have done this for another server, but that one has a separate key pair for SSH assigned by EC2, and my current guess is that ssh-copy-id is trying to use the wrong private key for this connection. Is there a way to pass -vv to ssh-copy-id so I can see what key it's trying to use? I've looked into the -o switch, but can't seem to get it right.
Thank you.
So here's what I've done:
Added the following to /etc/ssh/ssh_config:
Host xx.xx.xx.xx
  User ubuntu
  IdentityFile ~/.ssh/key-name-for-that-machine.pem
Then copied key-name-for-that-machine.pem into /var/lib/jenkins/.ssh
I didn't run ssh-copy-id again; I simply have rsync use that key file when moving stuff. Here's the rsync script:
rsync -rvh -e 'ssh -v' "/tmp/project-DEV-${BUILD_ID}/" ubuntu@xx.xx.xx.xx:"/www/www.project-dir.net/"
My guess would be to run it without sudo. But that depends on how you normally log into the server.
If you normally log in by using ssh ubuntu@xx.xx.xx.xx, then lose the sudo.
If not, then try to log in with sudo ssh ubuntu@xx.xx.xx.xx
Reading your question, at least one of these should fail.
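As a side note on the original question (my addition, and hedged: this relies on a reasonably recent OpenSSH ssh-copy-id, which forwards -o options to the underlying ssh), you can point ssh-copy-id at a specific private key and raise the log level to see which key it offers:
ssh-copy-id -i id_rsa.pub -o IdentityFile=~/.ssh/key-name-for-that-machine.pem -o LogLevel=VERBOSE ubuntu@xx.xx.xx.xx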

ssh remote host identification has changed [closed]

I've reinstalled my server and I am getting these messages:
[user@hostname ~]$ ssh root@pong
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
6e:45:f9:a8:af:38:3d:a1:a5:c7:76:1d:02:f8:77:00.
Please contact your system administrator.
Add correct host key in /home/hostname /.ssh/known_hosts to get rid of this message.
Offending RSA key in /var/lib/sss/pubconf/known_hosts:4
RSA host key for pong has changed and you have requested strict checking.
Host key verification failed.
I have tried various solutions that I found on the Internet. My known_hosts file (normally in ~/.ssh/known_hosts) is in /var/lib/sss/pubconf/known_hosts. I've tried to edit it, but it remains in one state. I have installed ipa-client and have Fedora 19. How do I resolve this warning?
All the answers given so far work only if you do not have FreeIPA installed.
The right answer for FreeIPA, from adrin, is in the comments below.
Here is the simplest solution:
ssh-keygen -R <host>
For example,
ssh-keygen -R 192.168.3.10
From the ssh-keygen man page:
-R hostname
    Removes all keys belonging to hostname from a known_hosts file. This option is useful to delete hashed hosts (see the -H option above).
Use
ssh-keygen -R [hostname]
Example with an ip address/hostname would be:
ssh-keygen -R 168.9.9.2
This will remove the offending key for your host from the known_hosts file. You can also provide the path of the known_hosts file with the -f flag, as shown below.
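For example (the path shown is the default; adjust it if yours differs, e.g. the /var/lib/sss/pubconf/known_hosts file from the question):
ssh-keygen -f ~/.ssh/known_hosts -R 168.9.9.2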
I had this same error occur after I recreated a Digital Ocean Ubuntu image. I used the following command with my server IP in place of [IP_ADDRESS]
ssh-keygen -R [IP_ADDRESS]
The sledgehammer is to remove every known host in one fell swoop:
rm ~/.ssh/known_hosts
On Monterey
sudo rm /var/root/.ssh/known_hosts
I come up against this as we use small subnets of short-lived servers from a jump box, and frequently have internal IP address reuse of servers that share the same ssh key.
When you reinstall the server its identity changes, and you'll start to get this message. Ssh has no way of knowing whether you've changed the server it connects to, or a server-in-the-middle has been added to your network to sniff on all your communications - so it brings this to your attention.
Simply remove the key from known_hosts by deleting the relevant entry:
sed '4d' -i /var/lib/sss/pubconf/known_hosts
The 4d is on account of the "Offending RSA key in ...known_hosts:4" message.
The problem is that you've previously accepted an SSH connection to a remote computer and that remote computer's digital fingerprint or SHA256 hash key has changed since you last connected. Thus when you try to SSH again or use github to pull code, which also uses SSH, you get an error. Why? Because you're using the same remote computer address as before but the remote computer is responding with a different fingerprint. Therefore, it's possible that someone is spoofing the computer you previously connected to. This is a security issue.
If you're 100% sure that the remote computer isn't compromised, hacked, being spoofed, etc then all you need to do is delete the entry in your known_hosts file for the remote computer. That will solve the issue as there will no longer be a mismatch with SHA256 fingerprint IDs when connecting.
On Mac here's what I did:
1) Find the line of output that reads RSA host key for servername:port has changed and you have requested strict checking. You'll need both the servername and potentially port from that log output.
2) Back up the SSH known hosts file cp /Users/yourmacusername/.ssh/known_hosts /Users/yourmacusername/.ssh/known_hosts.bak
3) Find the line where the computer's old fingerprint is stored and delete it. You can search for the specific offending remote computer fingerprint using the servername and port from step #1. nano /Users/yourmacusername/.ssh/known_hosts
4) CTRL-X to quit and choose Y to save changes
Now type ssh -p port servername and you will receive the original prompt you did when you first tried to SSH to that computer. You will then be given the option to save that remote computer's updated SHA256 fingerprint to your known_hosts file. If you're using SSH over port 22 then the -p argument is not necessary.
Any issues you can restore the original known_hosts file: cp /Users/yourmacusername/.ssh/known_hosts.bak /Users/yourmacusername/.ssh/known_hosts
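If you'd rather not hunt for the entry by hand, ssh-keygen can find it for you (servername and port are the placeholders from step 1):
ssh-keygen -F servername
ssh-keygen -F '[servername]:port'   # when connecting on a non-default port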
As many have already said, use ssh-keygen, i.e.
ssh-keygen -R pong
Also, you may like to consider temporarily turning off host key checking:
ssh -oStrictHostKeyChecking=no root@pong
Works for me!
Error: Offending RSA key in /var/lib/sss/pubconf/known_hosts:4
This indicates you have an offending RSA key at line no. 4
Solution 1:
1. vi /var/lib/sss/pubconf/known_hosts
2. remove line no: 4.
3. Save and Exit, and Retry.
Solution 2:
ssh-keygen -R "you server hostname or ip"
OR
Solution 3:
sed -i '4d' /root/.ssh/known_hosts
This will remove 4th line of /root/.ssh/known_hosts in place(-i).
I used the solution from mockinterface, though the sed -i didn't quite work.
I solved it by deleting the line by hand with vim:
sudo vim /var/lib/sss/pubconf/known_hosts
You can use any other text editor you want, but you'll probably need administrative privileges.
FINAL solution!
The warning is shown because of a stale ECDSA key stored for the host, so we have to remove that key from our master/controller machine using the command below:
ssh-keygen -R 192.168.0.132
Here 192.168.0.132 is the remote system IP.
Edit /home/hostname/.ssh/known_hosts, delete line 4, and save it.
Then run ssh root@pong again; you will see a message like: Are you sure you want to continue connecting (yes/no)? Just type yes.
Note: if you run into a problem, read the hints first; they will help.
The other answers here are good and work; anyway, I solved the problem by deleting ~/.ssh/known_hosts. This certainly solves the problem, but it's probably not the best approach.
If you have updated your ssh key, getting the above message is normal.
Just edit ~/.ssh/known_hosts and delete line 4, as the message pointed out:
Offending RSA key in /Users/isaacalves/.ssh/known_hosts:4
or use ssh-keygen to delete the invalid key
ssh-keygen -R "you server hostname or ip"
This is because your remote computer's settings have changed. Remove your current keys for that host:
vim /root/.ssh/known_hosts
Delete the line for the IP you are connecting to.
In my case it happened because I previously had an ssh connection with a machine with the same IP (say 192.152.51.10), and the system was still considering the RSA key (stored in /home/user_name/.ssh/known_hosts) of the previous host, which resulted in a mismatch.
To resolve this issue, you have to remove the previously stored RSA key for the IP 192.152.51.10:
ssh-keygen -f "/home/user_name/.ssh/known_hosts" -R 192.152.51.10
Simple one-liner solution, tested on a Mac:
sed -i '' '/212.156.48.110/d' ~/.ssh/known_hosts
This deletes only the target ssh host's IP from known_hosts. (BSD sed needs the empty suffix after -i; don't redirect the output back onto the same file, or the file is truncated before sed reads it.)
Here 212.156.48.110 is replaced by the target host's IP address.
Cause: this happened because the target IP was already known for a different machine due to port forwarding. Deleting the target IP before connecting fixes the issue.
I use PowerShell in Windows 10 for ssh.
My problem was in the Windows directory: C:\Users\youruser\.ssh
Delete the file known_hosts in that directory to forget the old value.
You may also use File Explorer to locate and delete the file.
If you are trying to connect to a running docker container on port 2222 with this command and you get the error:
mian@tdowrick2~$ ssh pos@localhost -p 2222
then to solve the problem, on your local computer (i.e. the host machine, not the container) go to cd ~/.ssh/ and open the known_hosts file with a text editor. Remove the line starting with [localhost]:2222 and save the file. Now try to ssh again:
mian@tdowrick2~$ ssh pos@localhost -p 2222
The error will disappear, but you have to do it each time the container restarts.
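Instead of editing the file by hand, the same cleanup can be done with ssh-keygen (standard OpenSSH; the bracketed host:port form is how entries for non-default ports are stored in known_hosts):
ssh-keygen -R '[localhost]:2222'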
My solution is:
vi ~/.ssh/known_hosts
Delete the line that contains the IP you want to connect to.
This is better than deleting all of known_hosts.
Remove the entry from known_hosts using:
ssh-keygen -R *ip_address_or_hostname*
This will remove the problematic IP or hostname from the known_hosts file; then try to connect again.
From the man pages:
-R hostname
    Removes all keys belonging to hostname from a known_hosts file. This option is useful to delete hashed hosts (see the -H option above).
Sometimes, if for any reason you need to reinstall a server, when connecting by ssh you will find that the server says the identification has changed.
If we know that it is not an attack, but that we have reinstalled the system, we can remove the old identification from known_hosts using ssh-keygen:
ssh-keygen -R <host/ip:hostname>
/root/.ssh/known_hosts updated.
Original contents retained as /root/.ssh/known_hosts.old
When connecting again, you will be asked to validate the new fingerprint:
ssh -l user <host/ip:hostname>
The authenticity of host '<host/ip:hostname>' can't
be established.
RSA key fingerprint is 3f:3d:a0:bb:59:24:35:6d:e5:a0:1a:3f:9c:86:81:90.
Are you sure you want to continue connecting (yes/no)? yes
Use this command:
truncate -s 0 /home/SYSTEM_NAME/.ssh/known_hosts
I had this problem, and the reason is very simple: I had a duplicate IP address for ssh login, so after fixing that, everything was solved.
This is only a client-side problem (a duplicate key for the IP).
Possible fixes:
To clear one IP (default port 22):
ssh-keygen -R 7.7.7.7
For one IP on a non-default port:
ssh-keygen -R '[7.7.7.7]:333'
To quickly clear all IPs:
rm ~/.ssh/known_hosts
7.7.7.7 - the IP of the ssh server you connect to
333 - the non-standard port
Just do:
cd /home/user/.ssh/ -> here user is your username, i.e. /home/jon/ for example.
Then
gedit known_hosts & and delete the contents inside it.
Now ssh again; it should work.
I had the same error on my machine, and I cleared the known_hosts file; after that, it worked fine.
Simply clear the known_hosts file, which is at /home/{username}/.ssh/known_hosts
vi /home/{username}/.ssh/known_hosts
Remove every line inside known_hosts and exit; after that you will be able to log in.
OR
run this command
ssh-keygen -R "hostname/ip_address"
SOLUTION:
1. Delete from "$HOME/.ssh/known_hosts" the line referring to the host you cannot connect to.
2. Execute this command: ssh-keygen -R "IP_ADDRESSorHOSTNAME" (substitute "IP_ADDRESSorHOSTNAME" with your destination IP or destination hostname).
3. Retry the ssh connection (if it fails, check the permissions on the .ssh directory; they must be 700).
My solution on Ubuntu (Linux):
1. Delete the content of the "known_hosts" file, which is in "/home/YOUR_USERNAME/.ssh/known_hosts"
2. Generate a new ssh key, e.g. ssh-keygen -t rsa -C "your.email@example.com" -b 4096
3. Copy-paste your new ssh key into your git repository's (GitLab in my case) SSH keys.
It works for me!
AWS EC2.
Find the IP in the message it gives you.
Run
vim /home/ec2-user/.ssh/known_hosts
Use the arrow keys to find the line with that IP, then press
dd
This will delete that line. Then press Escape and type
:wq
This will save, and you are good to go.