oneadmin opennebula ssh localhost - ssh

We've been trying to use opennebula to simulate a cluster but ssh is driving us crazy.
For reasons still unknown to us, the user oneadmin (created by OpenNebula) needs to be able to ssh to localhost. OpenNebula's home directory is /var/lib/one, and inside it there is a .ssh directory. So here's what I've done so far:
sudo -su oneadmin
oneadmin@pc:$ cd /var/lib/one/.ssh
oneadmin@pc:/var/lib/one/.ssh$ ssh-keygen -t rsa
oneadmin@pc:/var/lib/one/.ssh$ cat id_rsa.pub >> authorized_keys
Moreover, I've changed all the permissions: every file and the directory have oneadmin as owner and mode 600 (as far as I can tell from the OpenNebula guide),
and finally, by root, I do
service ssh restart
Then I log in from a terminal as oneadmin again, but when I run:
ssh oneadmin@localhost
here's what I get
Permission denied (publickey).
Where am I making this damned mistake? We've already lost more than a day on these permissions!

I've just run into a similar problem - it turns out OpenNebula doesn't get on with SELinux.
I finally found the solution here - http://n40lab.wordpress.com/2012/11/26/69/ - we need to restore the SELinux context on ~/.ssh/authorized_keys:
$ chcon -v --type=ssh_home_t /var/lib/one/.ssh/authorized_keys
$ semanage fcontext -a -t ssh_home_t /var/lib/one/.ssh/authorized_keys
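To check that the new context actually took effect, ls -Z can be used (a quick sanity check, assuming the SELinux tools are installed):
ls -Z /var/lib/one/.ssh/authorized_keys
The context column should now show ssh_home_t.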

SSH Error: Permission denied (publickey,password) in Ansible

I am new to Ansible and I am trying to set it up. I have tried everything I could find on the Internet, including all the related questions, but I still can't resolve the error. How can I fix it?
I installed Ansible on my MacBook Pro. I created a VM whose IP address is 10.4.1.141; the host's IP address is 10.4.1.140.
I tried to connect to my VM from the host via SSH, and it connected with the following command:
ssh user@10.4.1.141
I got shell access, which means my SSH connection is working fine.
Now I tried the following command for Ansible:
ansible all -m ping
The content of /etc/ansible/hosts is 10.4.1.141.
Then it shows the following error:
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to rerun the command using -vvvv, which prints SSH debug output to help diagnose the issue.
Then I tried creating a config file in the .ssh/ folder on the host machine, but the error is still the same.
The content of the config file is:
IdentityFile ~/.ssh/id_rsa
which is the path to my private key.
Then I ran the same command ansible all -m ping and got the same error again.
When I tried another command,
ansible all -m ping -u user --ask-pass
Then it asked for the SSH password. I gave it (I am very sure the password is correct), but I got this error:
10.4.1.141 | FAILED => FAILED: Authentication failed.
This is the log using -vvvv:
<10.4.1.141> ESTABLISH CONNECTION FOR USER: rajatg
<10.4.1.141> REMOTE_MODULE ping
<10.4.1.141> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/rajatg/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 10.4.1.141 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && echo $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007'
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
I am still not able to figure out what the problem is. Asking here is my last resort after doing all my research. This is the link I referred to.
I fixed the issue. The problem was in my /etc/ansible/hosts file.
The entry in /etc/ansible/hosts was 10.4.1.141, but when I changed it to rajat@10.4.1.141, the issue was fixed.
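Equivalently, you can keep the bare IP in the inventory and set the remote user as a host variable (a sketch reusing the username from this answer; the [webserver] group name is just an example, and newer Ansible versions use ansible_user where older ones use ansible_ssh_user):
# example group name, reusing the user from the accepted answer
[webserver]
10.4.1.141 ansible_user=rajat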
If you log in with ssh user@10.4.1.141:
Option 1
Then make sure that in your hosts file inside /etc/ansible you have:
[server01]
10.4.1.141
Then within /etc/ansible run:
ansible all -m ping -u user --ask-pass
Option 2
If you want to log in without typing the SSH password, then in your hosts file inside /etc/ansible add:
[server01]
10.4.1.141 ansible_ssh_pass=xxx ansible_ssh_user=user
Then within /etc/ansible run:
ansible all -m ping
For me it worked both ways.
In my case, I have multiple private keys in my ~/.ssh.
Here is how I fixed it, by telling Ansible to use a specific private key:
ansible-playbook -i ../../inventory.ini --private-key=~/.ssh/id_rsa_ansiadmin update.yml
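If you would rather not pass the key on every run, the same key can also be pinned per host in the inventory (a sketch; the [webserver] group name is illustrative, and the variable is ansible_ssh_private_key_file):
# illustrative group name; points Ansible at the same key as the command above
[webserver]
10.4.1.141 ansible_ssh_private_key_file=~/.ssh/id_rsa_ansiadmin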
The previous solutions unfortunately didn't work for me (DevOps layman here!), but the one below did.
Change your inventory file to:
[webserver]
10.4.1.141 ansible_user=ubuntu
ansible webserver --private-key pem_file.pem -m ping
Running the command with -vvvv helped me debug it further.
Reference: Failed to connect to the host via ssh: Permission denied (publickey,password) #19584
If you execute Ansible with sudo, for example
sudo ansible -m ping all
Keep in mind that root's public key has to be on the server you want to reach as well, not just your non-root user's public key. Otherwise, you get the error message above.
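One way to get root's public key onto the target is to run ssh-copy-id as root (a sketch only; it assumes root already has a keypair and that the target still accepts password logins):
# key path and target user/IP are taken from this thread and may differ on your setup
sudo ssh-copy-id -i /root/.ssh/id_rsa.pub user@10.4.1.141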
Most of these issues happen when connecting to Ubuntu machines listed in the hosts file.
Solution: Ansible needs to be told which user to connect as, because Ubuntu does not enable the root account by default.
For the hosts file
[Test-Web-Server]
10.192.168.10 ansible_ssh_pass=foo ansible_ssh_user=foo
The problem lies in the inventory file.
vi /etc/ansible/hosts
It should be:
[webserver]
192.###.###.### ansible_ssh_user=user ansible_ssh_pass=pass
I have fixed this issue as well.
My issue was also in my hosts file, /etc/ansible/hosts.
I changed my hosts file from
172.28.2.101
to
name-of-server-in-ssh-config
I had IP addresses in the hosts file. Since I already have SSH configuration entries set up for those names, I do not need to use a variable or username in front of the hosts.
[name-stg-web]
server-name-stg-web[01:02]
What first worked for me was to hardcode the target machine's root password in /etc/ansible/hosts, like this:
[load_balancers_front]
loadbalancer1 ansible_host=xxx.xxx.xxx.xxx ansible_user=root ansible_password=root_password_in_target
But of course this is not recommended, because of the security issues.
Then I figured out a solution from the docs by doing:
ssh-agent bash --> read here
and then
ssh-add /my/private/ssh-key
After this, my hosts file looks like this and ansible all -m ping works fine:
[load_balancers_front]
loadbalancer1 ansible_host=xxx.xxx.xxx.xxx ansible_user=root
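To confirm the agent really holds the key before running Ansible, ssh-add -l lists the loaded identities:
ssh-add -l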
Mentioning the username in the /etc/ansible/hosts file can also resolve the issue.
sudo vim /etc/ansible/hosts
[test-server]
ip_address ansible_user="remote pc's username"
[jenkinsserver]
publicdnsname ansible_user=ubuntu ansible_ssh_private_key_file=ubuntu.cer
Over the years, some operating systems have started to require stronger SSH keys and no longer accept RSA and DSA keys. Therefore the message Permission denied (publickey,password) may indicate that the OS needs a stronger SSH key than id_rsa.
Use the following command to generate a new key:
ssh-keygen -t ecdsa -f ~/.ssh/id_ecdsa -N ""
Ensure that the server has the option
PubkeyAuthentication yes
in /etc/ssh/sshd_config or /etc/openssh/sshd_config.
Some other options may be required as well (read the documentation of your OS first), for example:
Protocol 2
PermitRootLogin without-password
AuthorizedKeysFile /etc/openssh/authorized_keys/%u /etc/openssh/authorized_keys2/%u .ssh/authorized_keys .ssh/authorized_keys2
Do not forget to restart the sshd service to apply the changes.
Copy the new key with ssh-copy-id -i ~/.ssh/id_ecdsa, and then you can connect to the remote server using Ansible.
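For completeness, ssh-copy-id also needs a destination; a sketch with a placeholder user and host (ssh-copy-id appends .pub to the given key name automatically):
# user@your-server is a placeholder for the real remote account and address
ssh-copy-id -i ~/.ssh/id_ecdsa user@your-server
ssh user@your-server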
On the host machine, install sshpass with the command below:
sudo apt install sshpass -y
and use this command to ping
ansible all -i slaves.txt -m ping -u test --ask-pass
It will prompt for interactive password entry, where you enter the password of the slave machine.
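slaves.txt here is just an inventory file passed with -i; a minimal sketch (the group name and addresses are placeholders):
# placeholder group and hosts
[slaves]
10.4.1.141
10.4.1.142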

Cannot ssh into remote machine after rsync

I followed this page on Protecting the Docker daemon Socket with HTTPS to generate ca.pem, server-key.pem, server-cert.pem, key.pem and key-cert.pem
I wanted a remote Docker daemon to use those keys, so I used rsync over ssh to send three of the files (ca.pem, server-key.pem and key.pem) to the remote host's home directory. The identity file for ssh-ing into the remote host is called dl-datatest-internal.pem.
ubuntu@ip-10-3-1-174:~$ rsync -avz -progress -e "ssh -i dl-datatest-internal.pem" dockerCer/ core@10.3.1.181:~/
sending incremental file list
./
ca.pem
server-cert.pem
server-key.pem
sent 3,410 bytes received 79 bytes 6,978.00 bytes/sec
total size is 4,242 speedup is 1.22
The remote host stopped recognising the identity file ever since and started asking for a non-existent password.
ubuntu@ip-10-3-1-174:~$ ssh -i dl-datatest-internal.pem core@10.3.1.151
core@10.3.1.151's password:
Does anyone know why and how to fix it? I still have all the keys if that helps.
There are a couple of things about the rsync command that bother me, but I can't put my finger on the problem (if there is one).
The rsync command and the subsequent ssh command reference different hosts: rsync targets core@10.3.1.181:~/ while ssh connects to core@10.3.1.151. Those are different machines, no?
The ~ in the target of the rsync command (core@10.3.1.181:~/). I am pretty sure that ~/ refers to core's home directory, but you could just get rid of the ~/ and replace it with a . (dot).
If you can reproduce the environment you did the copy in, you can add --dry-run to the rsync command to see what it would do. Looking at this command, I can't see it erasing the target's .ssh directory.
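A sketch of that dry run, keeping the same identity file and using . as the destination as suggested above (note that the long option is --progress, with two dashes):
rsync -avz --dry-run --progress -e "ssh -i dl-datatest-internal.pem" dockerCer/ core@10.3.1.181:.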

ssh not expanding ~ correctly?

So for permission reasons, I have had to change my default home directory to a non-standard location.
I did export HOME=/non/standard/home and then confirmed this was working with
$ cd ~
$ pwd
/non/standard/home
Even though man ssh says that it looks in ~/.ssh for keys and identity files, it doesn't seem to:
$ ls ~/.ssh
cluster_key cluster_key.pub config
$ ssh host
Could not create directory '/home/myname/.ssh'.
The authenticity of host 'host (<ip address deleted>)' can't be established.
RSA key fingerprint is <fingerprint deleted>.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/myname/.ssh/known_hosts).
Permission denied (publickey,gssapi-with-mic).
Why does it insist on looking in /home/myname? The man page states that it consults the HOME environment variable. Using the -F option also fails to work.
$ ssh -version
OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
Bad escape character 'rsion'.
When you run "export" command you actually affect only your process of BASH/SH. When .ssh looks for it it has it's own instance and thus looks in the default directory. You need to run the command usermod -m -d /path/to/new/home/dir userNameHere (change the user that .ssh uses, probably admin)

rsync remote files over SSH to my local machine, using sudo privileges on local side, and my personal SSH key

I want to sync a directory /var/sites/example.net/ from a remote machine to a directory at the same path on my local machine.
The remote machine only authenticates SSH connections with keys, not passwords.
On my local machine I have an alias set up in ~/.ssh/config so that I can easily run ssh myserver to get in.
I'm trying rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ but it fails because my local user does not have permission to edit the local directory /var/sites/example.net/.
If I try sudo rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ (just adding sudo), I can fix the local permission issue, but then I encounter a different issue -- my local root user does not see the proper ssh key or ssh alias.
Is there a way I can accomplish this file sync by modifying this rsync command? I'd like to avoid changing anything else (e.g. no changes to file perms or ssh setup)
Try this:
sudo rsync -e "sudo -u localuser ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/
This runs rsync as root, but the -e flag causes rsync to run ssh as your local user (using sudo -u localuser), so the ssh command has access to the necessary credentials. Rsync itself is still running as root, so it has the necessary filesystem permissions.
Just improving on top of larsks's response:
sudo rsync -e "sudo -u $USER ssh" ...
So in your case change rsync -a myserver:/var/sites/example.net/ /var/sites/example.net/ to sudo rsync -e "sudo -u $USER ssh" -a myserver:/var/sites/example.net/ /var/sites/example.net/.
With regard to larsks' answer: if you have your key loaded into the ssh agent, which is my use case, you can instead do:
sudo rsync -e "env SSH_AUTH_SOCK=$SSH_AUTH_SOCK ssh" /source/path /destination/path
Instead of the double sudo.
My use case, if anyone is interested in replicating, is that I'm SSHing to a non-root sudo-er account on remote A, and need to rsync root-owned files between remote A and remote B. Authentication to both remotes is done using keys I have on my real local machine and I use -A to forward the ssh-agent authentication socket to remote A.
Guss's answer works well if you want to use sudo rsync for local file permissions but want to utilise your user's SSH session. However, it falls short when you also want to use your SSH config file.
You can follow Wernight's approach by using sudo to switch the user for the SSH connection and supplying a path to the config file, but this won't work if you have to enter a passphrase. So, you can combine both approaches by making use of the --preserve-env flag:
sudo --preserve-env=SSH_AUTH_SOCK rsync -e "sudo --preserve-env=SSH_AUTH_SOCK -u $USER ssh" hostname:/source/path /destination/path
Note that it's necessary to cascade this flag through both sudo commands so it does look a bit messy!
As requested by Derek above:
If sudo asks for a password, then you need to modify the sudoers config with sudo visudo and add an entry with NOPASSWD: in front of the rsync command.
For details you could consult man sudoers.
This will work in every mode, even via cron, at, systemd.service+timer, etc.
Test it with: ssh <user>@<your-server> "sudo <your-rsync-command>"
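A minimal sketch of such a sudoers entry (remoteuser and the rsync path are placeholders; check the real path with which rsync):
# remoteuser and /usr/bin/rsync are placeholders
remoteuser ALL=(ALL) NOPASSWD: /usr/bin/rsync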

How do I setup passwordless ssh on AWS

How do I set up passwordless SSH between nodes on an AWS cluster?
The following steps to set up passwordless authentication have been tested thoroughly on CentOS and Ubuntu.
Assumptions:
You already have access to your EC2 machine, either using the .pem key or with credentials for a Unix user that has root permissions.
You have already set up RSA keys on your local machine. The private key and public key are available at ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub respectively.
Steps:
Log in to your EC2 machine as the root user.
Create a new user
useradd -m <yourname>
sudo su <yourname>
cd
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
Append the contents of ~/.ssh/id_rsa.pub from your local machine to ~/.ssh/authorized_keys on the EC2 machine.
chmod -R 700 ~/.ssh
chmod 600 ~/.ssh/*
Make sure SSH logins are permitted by the machine. In /etc/ssh/sshd_config, make sure the line containing "PasswordAuthentication yes" is uncommented. Restart the sshd service if you make any change to this file:
service sshd restart # On Centos
service ssh restart # On Ubuntu
Your passwordless login should work now. Try the following on your local machine:
ssh -A <yourname>@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com
Make yourself a superuser: open /etc/sudoers and make sure the following two lines are uncommented:
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Same thing without a password
%wheel ALL=(ALL) NOPASSWD: ALL
Add yourself to the wheel group:
usermod -aG wheel <yourname>
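You can then check both the SSH login and the sudo rights in one go from your local machine (reusing the example hostname above; sudo -v just validates your sudo credentials):
ssh <yourname>@ec2-xx-xx-xxx-xxx.ap-southeast-1.compute.amazonaws.com "sudo -v && echo sudo OK"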
This may help someone.
Copy the .pem file to the machine, then copy its contents into the ~/.ssh/id_rsa file. You can use the command below, or your own:
cat my.pem > ~/.ssh/id_rsa
Try ssh localhost; it should work, and the same goes for the other machines in the cluster.
How I made passwordless SSH work between two instances:
Create EC2 instances; they should be in the same subnet and have the same security group.
Open ports between them: make sure the instances can communicate with each other. Use the default security group, which has one rule relevant for this case:
Type: All Traffic
Source: Custom – id of the security group
Log in to the instance from which you want to connect to the other instance.
Run:
ssh-keygen -t rsa -N "" -f /home/ubuntu/.ssh/id_rsa
to generate a new RSA key.
Copy your private AWS key as ~/.ssh/my.key (or whatever name you want to use)
Make sure you change the permission to 600
chmod 600 .ssh/my.key
Copy the public key to the instance you wish to connect to without a password:
cat ~/.ssh/id_rsa.pub | ssh -i ~/.ssh/my.key ubuntu@10.0.0.X "cat >> ~/.ssh/authorized_keys"
If you test the passwordless ssh to the other machine, it should work.
ssh 10.0.0.X
You can use SSH keys as described here:
http://pkeck.myweb.uga.edu/ssh/