Can you prevent ssh from writing an entry to the known_hosts?

ssh -i 'id_rsa' -o 'StrictHostKeyChecking=no' localhost@127.0.0.1 -p 3033
Could not create directory '/home/user/.ssh' (Read-only file system).
Failed to add the host to the list of known hosts (/home/user/.ssh/known_hosts).
I would like to get rid of the warnings.
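A commonly used way to silence both messages (standard OpenSSH options, not taken from this thread) is to point the known-hosts file at /dev/null, so ssh never tries to create ~/.ssh at all; note this skips host-key verification entirely, so only do it for hosts you trust:
ssh -i 'id_rsa' -o 'StrictHostKeyChecking=no' -o 'UserKnownHostsFile=/dev/null' -o 'LogLevel=ERROR' localhost@127.0.0.1 -p 3033
LogLevel=ERROR additionally suppresses the "Permanently added ... to the list of known hosts" warning.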


Ansible percent expand

I have an ansible playbook which connects to a virtual machine via a non-standard ssh port (forwarded to localhost) and a different user than the host user (vagrant).
The ssh port is specified in the ansible inventory:
[vms]
localhost:2222
The username given on the command line to ansible-playbook:
ansible-playbook -i <inventory from above> <some playbook> -u vagrant
The communication with the VM works correctly; however, %p always expands to 22 and %r to the host username.
Consequently, I cannot flush the SSH connection (for the user's changed group membership to take effect) like this:
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop {{inventory_hostname}}
  delegate_to: 127.0.0.1
Am I making a silly mistake somewhere? Alternatively, is there a different way to flush the SSH connection?
The percent tokens are not expanded by Ansible, but by ssh later on.
Sorry, I forgot to add the most important part.
Using
command: ssh -o ControlPath=[...] -O stop {{inventory_hostname}}
will use the default port, because you didn't specify it on the command line. You would also have to specify the port to "flush" the connection this way:
command: ssh -o ControlPath=[...] -O stop -p {{inventory_port}} {{inventory_hostname}}
But I don't think it is needed. Ansible should clean up the connections when the playbook ends, and I don't see any other reason to do that.
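For reference, a complete task along these lines might look like the sketch below; ansible_port is my assumption for the variable carrying the inventory port (it is the standard name in current Ansible), so substitute whatever your setup uses:
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop -p {{ ansible_port | default(22) }} {{ inventory_hostname }}
  delegate_to: 127.0.0.1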

SSH Error: Permission denied (publickey,password) in Ansible

I am new to Ansible and am trying to set it up. I tried all the approaches I could find on the Internet and all the questions related to this error, but I still can't resolve it. How can I fix it?
I installed Ansible on my MacBook Pro. I created a VM whose IP address is 10.4.1.141; the host's IP address is 10.4.1.140.
I tried to connect to my VM from the host via SSH. It connected with the following command:
ssh user@10.4.1.141
And I got the shell access. This means my SSH connection is working fine.
Now I tried the following command for Ansible:
ansible all -m ping
And the content of /etc/ansible/hosts is 10.4.1.141.
Then it shows the following error:
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to rerun the command using -vvvv, which prints SSH debug output to help diagnose the issue.
Then I tried creating a config file in the .ssh/ folder on the host machine, but the error is still the same.
The content of the config file is:
IdentityFile ~/.ssh/id_rsa
which is the path to my private key.
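For comparison, a fuller per-host block in ~/.ssh/config usually looks like the sketch below (the Host alias and User are illustrative assumptions, not values from this question):
Host vm
    HostName 10.4.1.141
    User user
    IdentityFile ~/.ssh/id_rsa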
Then I ran the same command ansible all -m ping and got the same error again.
When I tried another command,
ansible all -m ping -u user --ask-pass
Then it asked for the SSH password. I gave it (I am very sure the password is correct), but I got this error:
10.4.1.141 | FAILED => FAILED: Authentication failed.
This is the log using -vvvv:
<10.4.1.141> ESTABLISH CONNECTION FOR USER: rajatg
<10.4.1.141> REMOTE_MODULE ping
<10.4.1.141> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/rajatg/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 10.4.1.141 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && echo $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007'
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
I am still not able to figure out what the problem is. Asking here is my last resort after doing all my research. This is the link I referred to.
I fixed the issue. The problem was in my /etc/ansible/hosts file.
The content written in /etc/ansible/hosts was 10.4.1.141. But when I changed it to rajat@10.4.1.141, the issue was fixed.
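For what it's worth, the same effect can be achieved without the user@ prefix by setting the user as an inventory variable (ansible_user in current Ansible, ansible_ssh_user in older releases); a minimal sketch, where the group name is my own placeholder:
[vms]
10.4.1.141 ansible_user=rajat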
If you log in with ssh user@10.4.1.141:
Option 1
Then make sure that in your hosts file inside /etc/ansible you have:
[server01]
10.4.1.141
Then within /etc/ansible run:
ansible all -m ping -u user --ask-pass
Option 2
If you want to log in without typing the SSH password, then in your hosts file inside /etc/ansible you add:
[server01]
10.4.1.141 ansible_ssh_pass=xxx ansible_ssh_user=user
Then within /etc/ansible run:
ansible all -m ping
For me it worked both ways.
My case is that I have multiple private keys in my .ssh. Here is how I fixed it, by telling Ansible to use a specific private key:
ansible-playbook -i ../../inventory.ini --private-key=~/.ssh/id_rsa_ansiadmin update.yml
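If you don't want to pass the key on every run, the same thing can be set once in ansible.cfg; private_key_file is the standard [defaults] option, and the path below just reuses this answer's example key:
[defaults]
private_key_file = ~/.ssh/id_rsa_ansiadmin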
The previous solutions didn't work for me, unfortunately (DevOps layman here!).
But the below one worked for me.
Change your inventory file to:
[webserver]
10.4.1.141 ansible_user=ubuntu
ansible webserver --private-key pem_file.pem -m ping
Running the command with -vvvv helped me debug it further.
Reference: Failed to connect to the host via ssh: Permission denied (publickey,password) #19584
If you execute Ansible with sudo, for example
sudo ansible -m ping all
Please keep in mind that the public key for root has to be on the server you want to reach as well, not only the public key of your non-root user. Otherwise, you get the error message above as well.
Most of these issues happen when connecting to Ubuntu machines listed in hosts.
Solution: Ansible needs to be told which user to connect as, because Ubuntu does not allow root login by default.
For the hosts file
[Test-Web-Server]
10.192.168.10 ansible_ssh_pass=foo ansible_ssh_user=foo
The problem lies in the inventory file.
vi /etc/ansible/hosts
It should be:
[webserver]
192.###.###.### ansible_ssh_user=user ansible_ssh_pass=pass
I have fixed this issue as well.
My issue was also in my hosts file, /etc/ansible/hosts.
I changed my hosts file from
172.28.2.101
to
name-of-server-in-ssh-config
I had IP addresses in the hosts file. Since I have SSH configurations already set up for names, I do not need to use a variable or username in front of the hosts.
[name-stg-web]
server-name-stg-web[01:02]
What first worked for me was to hardcode the target machine root's password in the /etc/ansible/hosts like this:
[load_balancers_front]
loadbalancer1 ansible_host=xxx.xxx.xxx.xxx ansible_user=root ansible_password=root_password_in_target
But of course it is not recommended to do this, because of the security issues.
Then I figured out a solution from the docs:
ssh-agent bash
and then
ssh-add /my/private/ssh-key
After this, my hosts file looks like this and ansible all -m ping works fine:
[load_balancers_front]
loadbalancer1 ansible_host=xxx.xxx.xxx.xxx ansible_user=root
Mentioning the username in the /etc/ansible/hosts inventory file can also resolve the issue.
sudo vim /etc/ansible/hosts
[test-server]
ip_address ansible_user="remote pc's username"
[jenkinsserver]
publicdnsname ansible_user=ubuntu ansible_ssh_private_key_file=ubuntu.cer
Over the years, some operating systems have started to require stronger SSH keys and no longer accept legacy RSA and DSA keys. Therefore the message Permission denied (publickey,password) may indicate that the OS needs a stronger SSH key than id_rsa.
Use the following command to generate new key:
ssh-keygen -t ecdsa -f ~/.ssh/id_ecdsa -N ""
Ensure that the server has the option
PubkeyAuthentication yes
in /etc/ssh/sshd_config or /etc/openssh/sshd_config.
Some other options may be required as well (read the documentation of your OS first), for example:
Protocol 2
PermitRootLogin without-password
AuthorizedKeysFile /etc/openssh/authorized_keys/%u /etc/openssh/authorized_keys2/%u .ssh/authorized_keys .ssh/authorized_keys2
Do not forget to restart the sshd service to apply the changes.
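For example (the service name varies: sshd on RHEL-family systems, ssh on Debian/Ubuntu):
sudo systemctl restart sshd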
Copy the new key to the server with ssh-copy-id -i ~/.ssh/id_ecdsa user@host (substitute your own user and host); then you can connect to the remote server using Ansible.
On the host machine you should install sshpass with the command below:
sudo apt install sshpass -y
Then use this command to ping:
ansible all -i slaves.txt -m ping -u test --ask-pass
It will prompt you interactively for a password, where you enter the password of the slave machine.

ssh not expanding ~ correctly?

So for permission reasons, I have had to change my default home directory to a non-standard location.
I did export HOME=/non/standard/home and then confirmed this was working with
$ cd ~
$ pwd
/non/standard/home
Even though man ssh says that it looks in ~/.ssh for keys and identity files, it doesn't seem to:
$ ls ~/.ssh
cluster_key cluster_key.pub config
$ ssh host
Could not create directory '/home/myname/.ssh'.
The authenticity of host 'host (<ip address deleted>)' can't be established.
RSA key fingerprint is <fingerprint deleted>.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/myname/.ssh/known_hosts).
Permission denied (publickey,gssapi-with-mic).
Why does it insist on looking in /home/myname? The man page states that it consults the HOME environment variable. Using the -F option also fails to work.
$ ssh -version
OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
Bad escape character 'rsion'.
When you run the "export" command, you only affect your own shell process. ssh does not consult the HOME environment variable for this; it looks up your home directory in the passwd database (via getpwuid), which is why it keeps using the default location. You need to change the home directory in your passwd entry, e.g. with usermod -m -d /path/to/new/home/dir userNameHere.
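You can check which directory ssh will actually use by reading the passwd entry directly; getent is standard on Linux:
getent passwd myname | cut -d: -f6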

rsync remote files over SSH to my local machine, using sudo privileges on local side, and my personal SSH key

I want to sync a directory /var/sites/example.net/ from a remote machine to a directory at the same path on my local machine.
The remote machine only authenticates SSH connections with keys, not passwords.
On my local machine I have an alias set up in ~/.ssh/config so that I can easily run ssh myserver to get in.
I'm trying rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ but it fails because my local user does not have permission to edit the local directory /var/sites/example.net/.
If I try sudo rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ (just adding sudo), I can fix the local permission issue, but then I encounter a different issue -- my local root user does not see the proper ssh key or ssh alias.
Is there a way I can accomplish this file sync by modifying this rsync command? I'd like to avoid changing anything else (e.g. no changes to file perms or ssh setup)
Try this:
sudo rsync -e "sudo -u localuser ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/
This runs rsync as root, but the -e flag causes rsync to run ssh as your local user (using sudo -u localuser), so the ssh command has access to the necessary credentials. Rsync itself is still running as root, so it has the necessary filesystem permissions.
Just improving on top of larsks's response:
sudo rsync -e "sudo -u $USER ssh" ...
So in your case change rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ to sudo rsync -e "sudo -u $USER ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/.
With regard to @larsks' answer: if you have your key loaded into the ssh agent, which is my use case, you can instead do:
sudo rsync -e "env SSH_AUTH_SOCK=$SSH_AUTH_SOCK ssh" /source/path /destination/path
Instead of the double sudo.
My use case, if anyone is interested in replicating, is that I'm SSHing to a non-root sudo-er account on remote A, and need to rsync root-owned files between remote A and remote B. Authentication to both remotes is done using keys I have on my real local machine and I use -A to forward the ssh-agent authentication socket to remote A.
Guss's answer works well if you want to use sudo rsync for local file permissions but want to utilise your user's SSH session. However, it falls short when you also want to use your SSH config file.
You can follow Wernight's approach by using sudo to switch the user for the SSH connection and supplying a path to the config file, but this won't work if you have to enter a passphrase. So, you can combine both approaches by making use of the --preserve-env flag:
sudo --preserve-env=SSH_AUTH_SOCK rsync -e "sudo --preserve-env=SSH_AUTH_SOCK -u $USER ssh" hostname:/source/path /destination/path
Note that it's necessary to cascade this flag through both sudo commands so it does look a bit messy!
As requested by Derek above:
When sudo asks for a password, you need to modify the sudoers config with sudo visudo and add an entry with NOPASSWD: in front of the rsync command.
For details you could consult man sudoers.
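A minimal sudoers entry might look like the line below; the user name and rsync path are illustrative assumptions (check the real path with which rsync):
youruser ALL=(ALL) NOPASSWD: /usr/bin/rsync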
This will work in every mode, even via cron, at, a systemd service + timer, etc.
Test it with: ssh <user>@<your-server> "sudo <your-rsync-command>"

How to make SSH go directly to specific directory?

When you do an "ssh second_machine" you connect to second_machine in your home directory.
But usually I am working on my_machine in a directory with a very long path, and I want to connect to second_machine and move to my working directory right away. So every time I have to:
ssh second_machine
cd /very/long/path/to/directory/
Is there a way to make it automatic? (i.e., have ssh automatically go to the desired directory)
This should work for you
ssh -t second_machine "cd /very/long/path/to/directory/; bash"
This assumes you want to run bash; substitute a different shell if required.
To make it permanent, use RemoteCommand in your ~/.ssh/config file, e.g.
Host myhost
    HostName IP
    User ubuntu
    IdentityFile ~/.ssh/id_rsa
    RemoteCommand cd /path/to/directory; $SHELL -il
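One caveat worth adding (not in the original answer): for RemoteCommand to hand you an interactive shell, ssh usually also needs forced TTY allocation in the same Host block:
    RequestTTY force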
Related:
SSH Config File Alias To Get To a Directory On Server
How can I automatically change directory on ssh login?
Run a remote command using ssh config file
You could do something like what I'm using. Make an alias like the one below.
alias ssh 'ssh -t \!* "cd $PWD; csh"'
(here, csh could also be replaced by bash)
This brings you directly to the 'current' path on the other machine.
The usage would be like: $ ssh some_machine
However, I find that it works slowly, so I'm looking for an alternative.