SSH Permission denied (publickey,gssapi-keyex,gssapi-with-mic) - ssh

While building a k8s test-bed cluster with a Vagrant box, ssh-copy-id kept timing out during the SSH configuration phase.
Running:
ssh -v vagrant@192.168.0.21
shows the error:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
Guessing the error was related to the SSH server, I compared the config with another machine that I can log in to over SSH without problems.
I finally found the config item PasswordAuthentication in /etc/ssh/sshd_config: the value in the Vagrant box is no, while on the other machine it is yes.
After updating the value to yes and restarting the box, SSH login worked as expected.
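A minimal sketch of that fix, run inside the guest (the sed pattern assumes the line currently reads "PasswordAuthentication no"; the service may be called ssh instead of sshd depending on the distro):
# enable password authentication and restart the SSH daemon
sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd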

Related

Ansible : Failed to connect to the host via ssh : Permission denied (publickey,password)

I'm new to Ansible; I installed it yesterday and I want to try to ping my remote host (an HPE 5130 switch).
I have an issue: the host is unreachable and I don't know how to fix it.
[Screenshots of the config, the error, and the host setup omitted.]
SSH works fine, but I can't use Ansible :(
How do you SSH to your switch?
If you're using a password, add the -k option to the ansible command; it will prompt you for your SSH password. Alternatively, set the ansible_password variable.
You should also set some connection-related variables, such as ansible_connection and ansible_network_os.
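For example, a rough sketch of both approaches (the inventory file, playbook name, host name, and credentials below are placeholders, not from the question):
# 1) prompt for the SSH password at run time
ansible-playbook -i hosts site.yml -k
# 2) or set it per host in the inventory file (plain text only for lab testing; prefer ansible-vault)
# hpe5130 ansible_host=192.168.1.10 ansible_user=admin ansible_password=secret
The correct ansible_connection and ansible_network_os values depend on the switch platform and the collection you use, so check the documentation for your device.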

Permission denied with Vagrant

When I do a vagrant ssh in my project on a Windows 10 laptop I get this error:
vagrant@127.0.0.1: Permission denied (publickey).
When I then delete .vagrant/machines/default/virtualbox/private_key and do vagrant ssh again, I get access to the VM.
But when I then exit the VM and do vagrant halt, I get this error:
==> default: Attempting graceful shutdown of VM...
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
translation missing: en.vagrant_ps.errors.powershell_error.powershell_error
It seems to me that it tries to add my SSH key, but something goes wrong. Any idea how I can solve this?
You can simply run the following commands in cmd:
set VAGRANT_PREFER_SYSTEM_BIN=0
vagrant ssh
Successfully tested on Windows 10 with Vagrant 2.1.5.
See also: https://www.vagrantup.com/docs/other/environmental-variables.html#vagrant_prefer_system_bin
I solved the error:
vagrant@127.0.0.1: Permission denied (publickey)
by editing my Vagrantfile.
It seems Vagrant didn't like this configuration:
config.vm.synced_folder "app", "/home/vagrant"
I edited it to:
config.vm.synced_folder "app", "/vagrant"
Mounting a synced folder over /home/vagrant hides the vagrant user's ~/.ssh/authorized_keys, which is why key authentication fails.
The solution provided by @rekinz works, but I want to add some further explanation.
set VAGRANT_PREFER_SYSTEM_BIN=0
Vagrant defaults to using a system-provided SSH on Windows. This environment variable can be used to disable that behavior and force Vagrant to use the embedded SSH executable by setting it to 0.
I also used vagrant halt to clean up a previous installation, and when I provisioned it again I got the same error as the OP.
I think the SSH provided by Windows was not working, and setting VAGRANT_PREFER_SYSTEM_BIN fixed it.
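If you want the variable to persist across new cmd sessions, something like setx should work (it only affects terminals opened afterwards):
:: store the variable persistently for the current user
setx VAGRANT_PREFER_SYSTEM_BIN 0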
The problem can be that the Windows OpenSSH client feature is intercepting the operation. Try opening PowerShell as admin and run the following:
Remove-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
If that doesn't solve it, check the OpenSSH capabilities and install the client again:
Get-WindowsCapability -Online | ? Name -like 'OpenSSH*'
You can also check the permissions of the file
.vagrant/machines/default/virtualbox/private_key
In my case the permissions for this file were set for an unknown user (likely from a previous OS installation); setting the permissions on this file to myself fixed the issue.
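On a Linux or macOS host the equivalent fix is to make the key readable only by you (the path assumes the default project layout):
# restrict the generated private key to the current user
chmod 600 .vagrant/machines/default/virtualbox/private_key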
It works for me when I point SSH at the private_key directly (check its permissions first):
ssh -i ${vagrant_home}/.vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1 -p 2222
On Windows 10, when we try to log in to a virtual machine node (e.g. node01) using
vagrant ssh node01
and get the error
vagrant@127.0.0.1: Permission denied (publickey)
try the steps below:
In PowerShell, set the environment variable VAGRANT_PREFER_SYSTEM_BIN to 0 so that Vagrant uses its embedded ssh instead of the system-provided one (read more about the variable here):
$Env:VAGRANT_PREFER_SYSTEM_BIN = "0"
As per the issue listed on Vagrant's GitHub:
vagrant@127.0.0.1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
Once done, run vagrant ssh to the VM that was not accessible earlier.

Vagrant allows SSH into it only the first time

I'm trying to configure my first Vagrant box.
However, it only allows me to SSH into it (with vagrant ssh) the first time after vagrant up.
Once I log out of the VM I can't log back into it.
Running vagrant provision yields:
▶ vagrant provision
==> default: Running provisioner: shell...
SSH authentication failed! This is typically caused by the public/private
keypair for the SSH user not being properly set on the guest VM. Please
verify that the guest VM is setup with the proper public key, and that
the private key path for Vagrant is setup properly as well.
And trying to ssh into it yields:
▶ vagrant ssh
vagrant@127.0.0.1's password:
vagrant@127.0.0.1's password:
vagrant@127.0.0.1's password:
Permission denied (publickey,password).
Even if I give it the default password (vagrant) it rejects it and asks again, three times.
When I start it in GUI mode, after entering any credentials the whole console clears and asks me to log in again.
I tried destroying and rebuilding the VM and clearing project/.vagrant and ~/.vagrant.d.
Always the same result.
Adding config.ssh.insert_key = false to the Vagrantfile changed nothing.
My box is ubuntu/trusty64 and my ssh-config looks like this:
▶ vagrant ssh-config
Host default
HostName 127.0.0.1
User vagrant
Port 2222
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
PasswordAuthentication no
IdentityFile /home/iraasta/.vagrant.d/insecure_private_key
IdentitiesOnly yes
LogLevel FATAL
Trying to SSH manually with ssh vagrant@127.0.0.1 -p 2222 and the password vagrant also returns Permission denied.

vagrant login as root by default

Problem: frequently the first command I type to my boxes is su -.
Question: how do I make vagrant ssh use the root user by default?
Version: vagrant 1.6.5
This is useful:
sudo passwd root
for anyone who's been caught out by the need to set a root password in the Vagrant box first.
Solution:
Add the following to your Vagrantfile:
config.ssh.username = 'root'
config.ssh.password = 'vagrant'
config.ssh.insert_key = true
When you run vagrant ssh from then on, you will log in as root and should expect the following:
==> mybox: Waiting for machine to boot. This may take a few minutes...
mybox: SSH address: 127.0.0.1:2222
mybox: SSH username: root
mybox: SSH auth method: password
mybox: Warning: Connection timeout. Retrying...
mybox: Warning: Remote connection disconnect. Retrying...
==> mybox: Inserting Vagrant public key within guest...
==> mybox: Key inserted! Disconnecting and reconnecting using new SSH key...
==> mybox: Machine booted and ready!
Update 23-Jun-2015:
This works for version 1.7.2 as well. Key handling security has improved since 1.7.0; this technique overrides it back to the previous method, which uses a known private key. This solution is not intended for a box that is publicly accessible without proper security measures taken before publishing.
Reference:
https://docs.vagrantup.com/v2/vagrantfile/ssh_settings.html
This works if you are on an ubuntu/trusty64 box:
vagrant ssh
Once you are in the Ubuntu box:
sudo su
Now you are the root user. You can update the root password as shown below:
sudo -i
passwd
Now edit the line below in /etc/ssh/sshd_config:
PermitRootLogin yes
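After changing sshd_config the SSH daemon has to be restarted for the setting to take effect; on Ubuntu that is something like:
# reload the config so PermitRootLogin yes is picked up
sudo service ssh restart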
Also, it is convenient to create your own alternate username:
adduser johndoe
and wait until it asks for a password.
If your Vagrantfile is as below:
config.ssh.username = 'root'
config.ssh.password = 'vagrant'
config.ssh.insert_key = true
but Vagrant still asks you for the root password, most likely the base box you used is not configured to allow root login.
For example, the official Ubuntu 14.04 box does not set PermitRootLogin yes in /etc/ssh/sshd_config.
So if you want a box that logs in as root by default (Vagrantfile only, no extra work), you have to:
Set up a VM with the username vagrant (any name but root).
Log in and edit the sshd config file.
Ubuntu: edit /etc/ssh/sshd_config, set PermitRootLogin yes
Others: ....
(I only use Ubuntu; feel free to add workarounds for other platforms.)
Build a new base box:
vagrant package --base your-vm-name
This creates a file package.box.
Add that base box to Vagrant:
vagrant box add ubuntu-root file:///somepath/package.box
Then use this base box to build a VM that allows automatic login as root:
Destroy the original VM with vagrant destroy.
Edit the original Vagrantfile, change the box name to ubuntu-root and the username to root, then vagrant up creates a new one.
It took me some time to figure this out; it is too complicated in my opinion. I hope Vagrant improves this.
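Putting the steps above together, the command sequence looks roughly like this (your-vm-name and /somepath are the same placeholders used above):
# repackage the prepared VM as a new base box and rebuild from it
vagrant package --base your-vm-name --output package.box
vagrant box add ubuntu-root file:///somepath/package.box
vagrant destroy
vagrant up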
Don't forget that root must be allowed to log in first!!!
Place the config line below in the /etc/ssh/sshd_config file.
PermitRootLogin yes
Note: Only use this method for local development, it's not secure.
You can set up the password and SSH config while provisioning the box. For example, with the debian/stretch64 box this is my provision script:
config.vm.provision "shell", inline: <<-SHELL
echo -e "vagrant\nvagrant" | passwd root
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
service ssh restart
SHELL
This will set the root password to vagrant and permit root login with a password. If you are using a private_network, say with IP address 192.168.10.37, then you can ssh with ssh root@192.168.10.37.
You may need to adjust the echo and sed commands depending on the default sshd_config file.
Adding this to the Vagrantfile worked for me. These lines are the equivalent of entering sudo su - every time you log in. Please note that this requires reprovisioning the VM.
config.vm.provision "shell", inline: <<-SHELL
echo "sudo su -" >> .bashrc
SHELL
I know this is an old question, but looking at the original question it seems the user just wanted to run a command as root; that's what I needed to do when I was searching for an answer and stumbled across this question.
So this one is worth knowing in my opinion:
vagrant ssh servername -c "echo vagrant | sudo -S shutdown 0"
Here vagrant is the password being echoed into the sudo command: the vagrant account has sudo privileges, and when you sudo you need to supply the password of the user account, not root's, and by default the vagrant user's password is vagrant.
By default you need root privileges to shut down, so I guess doing a shutdown is a good test.
Obviously you don't need to specify a server name if there is only one VM in that Vagrant environment. Also, we're talking about a local Vagrant virtual machine on the host, so there isn't really any security issue that I can see.
Hope this helps.
I had some trouble with provisioning when trying to log in as root, even with PermitRootLogin yes. I made it so only the vagrant ssh command is affected:
# Login as root when doing vagrant ssh
if ARGV[0] == 'ssh'
  config.ssh.username = 'root'
end
I used vagrant putty with the vagrant-multi-putty plugin; it took me directly to root.
Add this to your Vagrantfile:
config.ssh.username = 'vagrant'
config.ssh.password = 'vagrant'
config.ssh.insert_key = true
then recreate the box:
vagrant destroy
vagrant up

SSHing into EC2 server gives error "Please login as the ec2-user user rather than root user"

Question as per the title.
Why is this? I have used the ssh command:
ssh -i mykey.pem root@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
but I get that error and find nothing on Google. What am I doing wrong?
You log in as ec2-user as Klaus suggested:
ssh -i key.pem ec2-user@host
... and then you use sudo to run commands. E.g., to edit the /etc/hosts file which is owned by root and requires root privileges: sudo nano /etc/hosts.
Or you run sudo su to become the root user.
By default the root user is not allowed to log in, but you can use ec2-user as indicated by others.
Once you log in with ec2-user you can switch to root and change the SSH configuration.
To become the root user run:
sudo su -
Edit the SSH daemon configuration file /etc/ssh/sshd_config, e.g. by using vi, and replace the PermitRootLogin entry with the following:
PermitRootLogin without-password
Reload the SSH daemon configuration by running:
/etc/init.d/sshd reload
The message Please login as the ec2-user user rather than root user. is displayed because a command is executed when you log in with the private key. To remove it, edit root's ~/.ssh/authorized_keys file and remove the command option; the line should then start with the key type (e.g. ssh-rsa).
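For illustration only, on a stock Amazon Linux AMI the root authorized_keys entry typically looks roughly like the line below (key material shortened and the key comment invented here); after removing the options, only the part starting at ssh-rsa remains:
# before: forced command that prints the warning and disconnects
no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"ec2-user\" rather than the user \"root\".';echo;sleep 10" ssh-rsa AAAAB3Nza... my-key
# after: plain key entry
ssh-rsa AAAAB3Nza... my-key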
(*) Do this at your own risk. I recommend always leaving a console open, just in case you're not able to log in after making the configuration changes.
For reference you can read the man pages:
man sshd_config
man sshd
I encountered a similar problem when setting up a Hadoop cluster on Amazon EC2.
My head node needs root SSH access to each worker/slave node. I aliased the connections by adding each slave node's IP address, private address, and alias name to the /etc/hosts file. (I get that data by running the command echo -e "`hostname -i`\t`hostname -f`\talias-name", where alias-name is what I call each node (head or n1, for example). Then I put that output for each node into every node's /etc/hosts file.)
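For reference, a resulting /etc/hosts entry would look roughly like this (the address and hostname below are invented for illustration):
# private-IP    private-DNS-name               alias
172.31.5.10     ip-172-31-5-10.ec2.internal    n1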
The problem I kept encountering is that when I typed ssh n1 from my head node to SSH into my first slave node, I got that same error message: Please login as the user "ec2-user" rather than the user "root".
So after doing some research, I figured out how to fix it.
First:
SSH into your server. Non-root (ec2-user) access is fine here.
Then su - your way into root. Now vi /etc/ssh/sshd_config and un-comment the line PermitRootLogin yes.
Exit the vi editor.
Now restart the SSH daemon by typing service sshd stop, then service sshd start.
Second:
Now, here is the part I had to dig for:
Run vi /root/.ssh/authorized_keys.
Comment out everything up to ssh-rsa: just put a # at the beginning of the file's content, before no-port-forwarding..., and hit enter on ssh-rsa to move it to the next line (this way you don't have to delete anything in case you want to backtrack).
Exit the vi editor.
Now you should be able to log in to root without that error message popping up.
Also, if you are using aliases for a cluster setup, repeat the same steps on each node: first SSH in using ec2-user, then follow the steps.
After adding the IP address, private address, and alias name info to your /etc/hosts file, you should be able to SSH into each node's root using the alias name, for example ssh n1.
The tutorial I followed is here: https://www.youtube.com/watch?v=xrxQXfE7t9A
But it didn't discuss the problem with root login.
Hope that helps! It worked for me.
*Keep in mind that I haven't taken security into consideration. This is simply a practice/dev setup.
I think it's just asking you to log in with another username. Do you happen to have a user called ec2-user? If so, try this instead:
ssh -i mykey.pem ec2-user@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
I faced the same problem when I tried to access my EC2 instance as 'root' through the Windows PuTTY client; this is how I solved it.
Access and edit the SSH configuration file to allow root login and password authentication.
Log in as ec2-user (by default it is allowed).
Enter the command below to open the ssh config:
sudo vi /etc/ssh/sshd_config
Edit the SSH configuration file as below using vi (how to use the vi editor):
PermitRootLogin yes (remove the # at the beginning if it is present)
PasswordAuthentication yes
Restart SSH
sudo /etc/init.d/sshd restart
Change/set root password
sudo passwd root
Type a new password and re-enter it (at least 8 characters).
Exit current session and close PuTTY
exit
Try logging in again as root with the password you just set.
Solved!
Try comparing the root key file and the user key file:
diff /root/.ssh/authorized_keys /home/user/.ssh/authorized_keys
...and see the difference.
For anyone like me who created a new user, copied root's .ssh dir to the new user, set ownership and STILL got this error: look at the new user's ~/.ssh/authorized_keys file. It has SSH options specified that force the prompt. Delete everything from that line up to the ssh-rsa and you'll be good to go.
Or copy /home/ec2-user/.ssh to the new user's home directory instead of /root/.ssh.
Edit /etc/ssh/sshd_config, and make sure this is set:
PasswordAuthentication yes
Then reload SSH:
systemctl reload sshd.service
You can now log in as users other than ec2-user.
ssh -i mykey.pem root@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
Just replace the above command with this:
ssh -i mykey.pem ubuntu@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
It worked in my case (on Ubuntu AMIs the default user is ubuntu rather than ec2-user).
For those who are looking for a single, simple line:
sudo ssh -i ./mykey.pem ec2-user@ec2-x-xx-xxx-xxx.us-east-2.compute.amazonaws.com
Note that you can get the part after the @ from the Public IPv4 DNS section on your instance summary page.