I did a vagrant up on the Vagrant box
StefanScherer/windows_2019 (vmware_desktop, 2020.02.12)
and installed the SSH server via "Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0".
The server starts and works fine, but the user vagrant cannot connect, neither from within the VM (ssh vagrant@127.0.0.1) nor from the outside. It says permission denied. I used vagrant as the password, the same one I use to log into the VM.
I created another user asdf and I can connect with that user without any problem, so it has something to do with the user vagrant. Running Get-LocalUser showed no differences between the user vagrant and my newly created user asdf:
PS C:\Windows\system32> Get-LocalUser

Name               Enabled Description
----               ------- -----------
Administrator      True    Built-in account for administering the computer/domain
asdf               True
DefaultAccount     False   A user account managed by the system.
Guest              False   Built-in account for guest access to the computer/domain
sshd               True
vagrant            True    Vagrant User
WDAGUtilityAccount False   A user account managed and used by the system for Windows Defender Application Guard scen...
Both accounts are of type LocalUser.
Why is it not working for vagrant? How can I find out what makes this account so special?
The problem is this bug in OpenSSH for Windows: the hostname (computer name) must not be the same as the user name. Both were vagrant in my case.
You can fix it either by modifying your Vagrantfile and setting
config.vm.hostname to a value different from vagrant, as in the sketch below. Alternatively, you can change the hostname from inside the VM, e.g. from PowerShell via Rename-Computer -NewName foo -Force -PassThru (requires a restart).
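A minimal Vagrantfile sketch, assuming the box from the question (the hostname winbox is just a hypothetical value; anything other than vagrant works):
Vagrant.configure("2") do |config|
  config.vm.box = "StefanScherer/windows_2019"
  # Must differ from the user name "vagrant" to avoid the bug
  config.vm.hostname = "winbox"
end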
As a new Ansible user, I'm not able to work out whether the control user (other than root) needs to exist on the target machines too, or whether those machines can be controlled by any user on the control machine.
I've tried to go through the documentation, but it is too overwhelming for a beginner. So tell me if the scenario below is possible:
sudoUser1 exists on the control machine but not on the target machines. Or do I have to create the same user on the control machine as well as on the target nodes?
On the control node, as a user (User1@controller), configure the SSH connection to the target (User2@target). For example:
[User1@controller]# ssh-copy-id User2@target
Test the SSH connection:
[User1@controller]# ssh User2@target
On the target, allow User2 to sudo:
# grep User2 /usr/local/etc/sudoers
User2 ALL=(ALL) NOPASSWD: ALL
On the controller, create the inventory:
[User1@controller]# cat hosts
target ansible_connection=ssh ansible_user=User2 ansible_become=yes ansible_become_user=root ansible_become_method=sudo
Test Ansible:
[User1@controller]# ansible -m setup target
Ansible is a flexible tool. There are many other ways to configure it. YMMV.
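As a quick sanity check that the inventory and privilege escalation work, a hypothetical ad-hoc command using the hosts file from above would be:
[User1@controller]# ansible -i hosts -m command -a "whoami" target
Since ansible_become=yes and ansible_become_user=root are set in the inventory, this should print root.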
When I do a vagrant ssh in my project on a Windows 10 laptop I get this error:
vagrant@127.0.0.1: Permission denied (publickey).
When I then delete .vagrant/machines/default/virtualbox/private_key and do vagrant ssh again, I get access to the VM.
But when I then exit the VM and do vagrant halt, I get this error:
==> default: Attempting graceful shutdown of VM...
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
translation missing: en.vagrant_ps.errors.powershell_error.powershell_error
It seems to me that it tries to add my SSH key, but something goes wrong. Any idea how I can solve this?
You can simply run the following commands in your cmd:
set VAGRANT_PREFER_SYSTEM_BIN=0
vagrant ssh
Successfully tested on Windows 10 with Vagrant 2.1.5.
you can also see: https://www.vagrantup.com/docs/other/environmental-variables.html#vagrant_prefer_system_bin
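Note that set only affects the current cmd session. To persist the variable for future sessions, setx can be used (it takes effect in newly opened shells):
setx VAGRANT_PREFER_SYSTEM_BIN 0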
I solved the error:
vagrant@127.0.0.1: Permission denied (publickey)
by editing my Vagrantfile.
It seems Vagrant didn't like this configuration:
config.vm.synced_folder "app", "/home/vagrant"
Mounting a synced folder over /home/vagrant hides the guest's ~/.ssh/authorized_keys file, so public-key authentication fails. I edited it to:
config.vm.synced_folder "app", "/vagrant"
The solution provided by @rekinz works, but I want to add some further explanation.
set VAGRANT_PREFER_SYSTEM_BIN=0
Vagrant defaults to using a system-provided SSH on Windows. This environment variable can be used to disable that behavior and force Vagrant to use the embedded SSH executable by setting it to 0.
I had also used vagrant halt to clean up a previous installation, and then, when I provisioned it again, I got the same error as the OP.
I think the SSH provided by Windows was not working, and setting VAGRANT_PREFER_SYSTEM_BIN made Vagrant use its own SSH again, which fixed it.
The problem can be that the OpenSSH Client Windows feature is intercepting the operation. Try opening PowerShell as admin and run the following:
Remove-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
If that doesn't solve it, check the capability state and install the SSH client again:
Get-WindowsCapability -Online | ? Name -like 'OpenSSH*'
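To reinstall the client afterwards, the same capability name from the listing can be passed to Add-WindowsCapability (version string as in the question):
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0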
You can also check the permissions of the file
.vagrant/machines/default/virtualbox/private_key
In my case the permissions on this file were set for an unknown user (likely from a previous OS installation); setting the permissions on this file to my own account fixed the issue.
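A sketch of how ownership and permissions can be reset from an elevated cmd prompt, assuming you run it from the project directory (the path is the one from the question):
icacls .vagrant\machines\default\virtualbox\private_key /setowner "%USERNAME%"
icacls .vagrant\machines\default\virtualbox\private_key /inheritance:r /grant:r "%USERNAME%:F"
The second command strips inherited ACLs so that only the current user keeps access, which is also what ssh expects of a private key file.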
It works for me when I point ssh at the private_key directly (check its permissions first):
ssh -i ${vagrant_home}/.vagrant/machines/default/virtualbox/private_key vagrant@127.0.0.1 -p 2222
On Windows 10, when you try to log in to the virtual machine node (e.g. node01) using
vagrant ssh node01
and you get the error
vagrant@127.0.0.1: Permission denied (publickey)
try the steps below:
In PowerShell, set the environment variable VAGRANT_PREFER_SYSTEM_BIN to 0 so that Vagrant uses its packaged ssh instead of the system one (read more about the variable here):
$Env:VAGRANT_PREFER_SYSTEM_BIN = 0
This matches the issue listed in the Vagrant GitHub repository:
vagrant@127.0.0.1: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
Once done, do vagrant ssh to the VM which was not accessible earlier.
Whenever I try to connect to my local Vagrant box, I get this error when I run ssh vagrant@127.0.0.1:2222 from the Windows Git Bash:
ssh: connect to host 127.0.0.1:2222 port 22: Bad file number
It was working previously, so I'm not sure what could have caused this. When I try to make an SFTP connection in PhpStorm 8, I get this error:
Connection to '127.0.0.1' failed.
SSH_MSG_DISCONNECT: 2 Too many authentication failures for vagrant
I've tried vagrant destroy followed by vagrant box remove laravel/homestead and then recreating the box from a backup that previously worked, using vagrant box add laravel/homestead homestead.box, but I still get the same errors.
I'm on Windows 7.
What can I do to get access to my Vagrant box command line again?
Try the command:
ssh -p 2222 vagrant@127.0.0.1
ssh does not accept a host:port suffix; the port has to be passed with -p. Otherwise "127.0.0.1:2222" is treated as the host name, which is why the error message says port 22.
The answer by outboundexplorer above is the correct one, I believe. Here is my step-by-step approach on how I did this:
Step 1: Find out exactly what SSH settings to use
Ensure the Vagrant box is running (you've done vagrant up, that is).
From the command line, go to your project directory (the one where the Vagrantfile is located) and run vagrant ssh-config.
You'll get an output like this:
Host default
  HostName 127.0.0.1
  User ubuntu
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile C:/Projects/my-test-project/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
Step 2: Setting up PhpStorm to SFTP to the Vagrant box
Based on the config settings shown above, I set up the following SFTP remote deployment server:
SFTP host: 127.0.0.1
Port: 2222
Root path: /home/ubuntu/my-test-project (this is the folder inside the Vagrant box where the files will be uploaded to, change to whatever suits your needs)
User name: ubuntu
Auth type: Select "Key pair (OpenSSH or PuTTY)"
Private key file: Point to the IdentityFile path shown (C:/Projects/....)
... and that was it.
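Before wiring these settings into PhpStorm, they can be sanity-checked from a terminal with a plain ssh command assembled from the vagrant ssh-config output above:
ssh -i C:/Projects/my-test-project/.vagrant/machines/default/virtualbox/private_key -p 2222 ubuntu@127.0.0.1
If that logs you in, the same values should work in the SFTP configuration.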
I got this same failure when using PhpStorm to SSH into the VirtualBox guest machine that I had set up with Vagrant. Everything worked fine before I upgraded to Windows 10. After upgrading, first of all I had to upgrade to the latest VirtualBox and Vagrant versions to get everything to work on Windows 10.
But then I couldn't SSH into the guest machine using the PhpStorm SSH client. After much reading, everything seemed to suggest that I had too many SSH keys installed on my Windows machine, but checking regedit showed that I only had a couple of keys, which should be less than the suggested maximum of 5 keys (the default). In the end I did vagrant ssh, which didn't allow me to SSH into the guest machine, but it did reconfirm the SSH details for me. I then realized that after all the new installs it didn't want me to use the C:\Users\Andy\.vagrant.d\insecure_private_key key, but instead a key that it had placed within the project itself at C:/Users/Andy/CodeLab5/vagrant/.vagrant/machines/default/virtualbox/private_key.
Everything is working as it should again now :)
Make sure your Vagrant box is up and running with the command vagrant up,
and then do vagrant ssh. It will connect to the Vagrant box on localhost.
Problem: frequently the first command I type in my boxes is su -.
Question: how do I make vagrant ssh use the root user by default?
Version: vagrant 1.6.5
This is useful:
sudo passwd root
for anyone who's been caught out by the need to set a root password in Vagrant first.
Solution:
Add the following to your Vagrantfile:
config.ssh.username = 'root'
config.ssh.password = 'vagrant'
config.ssh.insert_key = true
When you vagrant ssh henceforth, you will log in as root and should expect the following:
==> mybox: Waiting for machine to boot. This may take a few minutes...
mybox: SSH address: 127.0.0.1:2222
mybox: SSH username: root
mybox: SSH auth method: password
mybox: Warning: Connection timeout. Retrying...
mybox: Warning: Remote connection disconnect. Retrying...
==> mybox: Inserting Vagrant public key within guest...
==> mybox: Key inserted! Disconnecting and reconnecting using new SSH key...
==> mybox: Machine booted and ready!
Update 23-Jun-2015:
This works for version 1.7.2 as well. Key security has improved since 1.7.0; this technique reverts to the previous method, which uses a known private key. This solution is not intended for a box that is publicly accessible without proper security measures taken prior to publishing.
Reference:
https://docs.vagrantup.com/v2/vagrantfile/ssh_settings.html
This works if you are on an ubuntu/trusty64 box:
vagrant ssh
Once you are in the Ubuntu box:
sudo su
Now you are the root user. You can update the root password as shown below:
sudo -i
passwd
Now edit the line below in the file /etc/ssh/sshd_config:
PermitRootLogin yes
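For the change to take effect, restart the SSH daemon (the service name ssh is assumed here for Ubuntu):
sudo service ssh restart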
Also, it is convenient to create your own alternate user:
adduser johndoe
Wait until it asks for a password.
If the Vagrantfile is as below:
config.ssh.username = 'root'
config.ssh.password = 'vagrant'
config.ssh.insert_key = true
but Vagrant still asks you for the root password,
most likely the base box you used is not configured to allow root login.
For example, the official ubuntu14.04 box does not set PermitRootLogin yes in /etc/ssh/sshd_config.
So if you want a box that can log in as root by default (only a Vagrantfile, no more work), you have to:
Set up a VM with the username vagrant (whatever name, but not root)
Log in and edit the sshd config file.
ubuntu: edit /etc/ssh/sshd_config, set PermitRootLogin yes
others: ....
(I only use ubuntu; feel free to add workarounds for other platforms)
Build a new base box:
vagrant package --base your-vm-name
This creates a file package.box.
Add that base box to Vagrant:
vagrant box add ubuntu-root file:///somepath/package.box
Then you need to use this base box to build a VM which allows automatic login as root.
Destroy the original VM with vagrant destroy
Edit the original Vagrantfile, change the box name to ubuntu-root and the username to root, then vagrant up creates a new one.
It took me some time to figure this out; it is too complicated in my opinion. I hope Vagrant will improve this.
Don't forget that root login must be allowed before you package the box!!!
Place the config line below in the /etc/ssh/sshd_config file:
PermitRootLogin yes
Note: only use this method for local development; it's not secure.
You can set up the password and SSH config while provisioning the box. For example, with the debian/stretch64 box this is my provision script:
config.vm.provision "shell", inline: <<-SHELL
  echo -e "vagrant\nvagrant" | passwd root
  echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
  sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
  service ssh restart
SHELL
This will set the root password to vagrant and permit root login with a password. If you are using a private_network, say with IP address 192.168.10.37, then you can connect with ssh root@192.168.10.37.
You may need to change the echo and sed commands depending on the default sshd_config file.
Adding this to the Vagrantfile worked for me. These lines are the equivalent of entering sudo su - every time you log in. Please note that this requires reprovisioning the VM.
config.vm.provision "shell", inline: <<-SHELL
  echo "sudo su -" >> .bashrc
SHELL
I know this is an old question, but looking at the original question, it looks like the user just wanted to run a command as root; that's what I needed to do when I was searching for an answer and stumbled across this question.
So this one is worth knowing in my opinion:
vagrant ssh servername -c "echo vagrant | sudo -S shutdown 0"
vagrant is the password being echoed into the sudo command (the -S flag makes sudo read it from stdin), because, as we all know, the vagrant account has sudo privileges, and when you sudo, you need to specify the password of the user account, not root... and of course, by default, the vagrant user's password is vagrant!
By default you need root privileges to shut down, so I guess doing a shutdown is a good test.
Obviously you don't need to specify a server name if there is only one for that Vagrant environment. Also, we're talking about a local Vagrant virtual machine on the host, so there isn't really any security issue that I can see.
Hope this helps.
I had some trouble with provisioning when trying to log in as root, even with PermitRootLogin yes. I made it so that only the vagrant ssh command is affected:
# Log in as root when doing vagrant ssh
if ARGV[0] == 'ssh'
  config.ssh.username = 'root'
end
I used vagrant putty with the vagrant multi-putty plugin; it took me directly to root.
Please add this to your Vagrantfile:
config.ssh.username = 'vagrant'
config.ssh.password = 'vagrant'
config.ssh.insert_key = true
Then recreate the box:
vagrant destroy
vagrant up
I was given some login information for an EC2 machine: basically an ec2-X-X-X.compute-X.amazonaws.com host name plus a username and password.
How do I access the machine? I tried SSHing:
ssh username@ec2-X-X-X.compute-X.amazonaws.com
but I get a Permission denied, please try again. when I enter the password. Is SSHing the right way to access the EC2 machine? (Google hits I found suggested that you could ssh into the machine, but they also used keypairs.) Or is it more likely that the problem is that I was given invalid login credentials?
If you are new to AWS and need to access a brand new EC2 instance via ssh, keep in mind that you also need to allow incoming traffic on port 22.
Assuming that the EC2 instance was created accepting all the default wizard suggestions, access to the machine will be guarded by the default security group, which basically prohibits all inbound traffic. Thus:
Go to the AWS console
Choose Security Groups on the left navigation pane
Choose default from the main pane (it may be the only item in the list)
In the bottom pane, choose Inbound, then Create a new rule: SSH
Click Add rule and then Apply Rule Changes
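The same rule can also be added from a terminal; a sketch using the AWS CLI, assuming the CLI is configured and the instance really is in the default security group:
aws ec2 authorize-security-group-ingress --group-name default --protocol tcp --port 22 --cidr 0.0.0.0/0
Restricting --cidr to your own IP address instead of 0.0.0.0/0 is the safer choice.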
Next, assuming that you are in possession of the private key, do the following:
$ chmod 600 path/to/mykey.pem
$ ssh -i path/to/mykey.pem root@ec2-X-X-X.compute-X.amazonaws.com
My EC2 instance was created from a Ubuntu 32-bit 12.04 image, whose configuration does not allow ssh access to root, and asks you to log in as ubuntu instead:
$ ssh -i path/to/mykey.pem ubuntu#ec2-X-X-X.compute-X.amazonaws.com
Cheers,
Giuseppe
Our Amazon AMI says to "Please login as the ec2-user user rather than root user.", so it looks like each image may have a different login user, e.g.
ssh -i ~/.ssh/mykey.pem ec2-user@ec2-NN-NNN-NN-NN.us-foo-N.compute.amazonaws.com
In short, try root and it will tell you what user you should login as.
[Edit] I'm supposing that you don't have AWS management console credentials for the account, but if you do, then you can navigate to the EC2->Instances panel of AWS Management Console, right click on the machine name and select "Connect..." A list of the available options for logging in will be displayed. You will (or should) need a key to access an instance via ssh. You should have been given this or else it may need to be generated.
If it's a Windows instance, you may need to use Remote Desktop Connection to connect using the IP or host name, and then you'll also need a Windows account login and password.
The process of connecting to an AWS EC2 Linux instance via SSH is covered step-by-step (including the points mentioned below) in this video.
To correct this particular issue with SSH-ing to your EC2 instance:
The ssh command you ran is not in the correct format. It should be:
ssh -i /path/my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
Note, you need access to the private key (.pem) file to use in the command above. AWS prompts you to download this file when you first launch your instance. You will need to run the following command to ensure that only your own user has read access to it:
chmod 400 /path/to/yourKeyFile.pem
Depending on your Linux distribution, the user you need to specify when you run ssh may be one of the following:
For Amazon Linux, the user name is ec2-user.
For RHEL, the user name is ec2-user or root.
For Ubuntu, the user name is ubuntu or root.
For Centos, the user name is centos.
For Fedora, the user name is ec2-user.
For SUSE, the user name is ec2-user or root.
Otherwise, if ec2-user and root don't work, check with your AMI provider.
You need to allow inbound SSH traffic in the firewall. This can be done under the Security Groups section of AWS. Full details for this piece can be found here.
For this you need to have a private key, like keyname.pem.
Open the terminal using Ctrl+Alt+T.
Change the file permissions to 400 or 600 using the command chmod 400 keyname.pem or chmod 600 keyname.pem.
Open port 22 in the security group.
Fire the command in the terminal: ssh -i keyname.pem username@ec2-X-X-X.compute-X.amazonaws.com
Indeed, EC2 (Amazon Elastic Compute Cloud) does not allow password authentication to its instances (Linux machines) by default.
The only allowed authentication method is an SSH key that is created when you create the instance. During creation you are allowed to download the SSH key just once, so if you lose it, you have to regenerate it.
This SSH key is only for the primary user - usually named
"ec2-user" (Amazon Linux, Red Hat Linux, SUSE Linux)
"root" (Red Hat Linux, SUSE Linux)
"ubuntu" (Ubuntu Linux distribution)
"fedora" (Fedora Linux distribution)
or similar (depending on distribution)
See connection instructions: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html
If you want to add a new user, the recommended way is to generate and add a new SSH key for that user, but not to specify a password (which would be useless anyway, since password authentication is not enabled by default).
Managing additional users: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/managing-users.html
That said, if you want to enable password authentication, which lowers security and is not recommended, but which you might still need for your own specific reasons, then just edit
/etc/ssh/sshd_config
For example:
sudo vim /etc/ssh/sshd_config
find the line that says:
PasswordAuthentication no
and change it to
PasswordAuthentication yes
Then restart the instance
sudo reboot
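A full reboot works but is not strictly required; restarting just the SSH daemon also picks up the change (the service name sshd is assumed here, as on Amazon Linux/RHEL):
sudo systemctl restart sshd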
After restarting, you are free to create additional users with password authentication.
sudo useradd newuser
sudo passwd newuser
Add the new user to the sudoers list:
sudo usermod -a -G sudo newuser
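Note that the sudo group is a Debian/Ubuntu convention; on Amazon Linux and other RHEL-based images the administrative group is typically wheel instead:
sudo usermod -a -G wheel newuser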
Make sure the user's home folder exists and is owned by the user:
sudo mkdir /home/newuser
sudo chown newuser:newuser /home/newuser
Now you are ready to try to log in as newuser via ssh.
Authentication with SSH keys will continue to work in parallel with password authentication.