SSHing into an EC2 server gives the error "Please login as the ec2-user user rather than root user" - ssh

Question as title.
Why is this? I have used the ssh command:
ssh -i mykey.pem root@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
But I get that error and can find nothing on Google. What am I doing wrong?

You log in as ec2-user as Klaus suggested:
ssh -i key.pem ec2-user@host
... and then you use sudo to run commands. E.g., to edit the /etc/hosts file which is owned by root and requires root privileges: sudo nano /etc/hosts.
Or you run sudo su to become the root user.

By default the root user is not allowed to log in, but you can use ec2-user as indicated by others.
Once you log in as ec2-user, you can switch to root and change the SSH configuration.
To become the root user you run:
sudo su -
Edit the SSH daemon configuration file /etc/ssh/sshd_config, e.g. by using vi, and replace the PermitRootLogin entry with the following:
PermitRootLogin without-password
Reload the SSH daemon configuration by running:
/etc/init.d/sshd reload
The message Please login as the ec2-user user rather than root user. is displayed because a command is executed when you log in with the private key. To remove that command, edit the ~/.ssh/authorized_keys file and remove the command option. The line should start with the key type (e.g. ssh-rsa); see the example below.
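For illustration, the forced-command entry in /root/.ssh/authorized_keys typically looks something like the following on Amazon Linux (the exact options may vary by AMI); removing everything before ssh-rsa disables it:
no-port-forwarding,no-agent-forwarding,no-X11-forwarding,command="echo 'Please login as the user \"ec2-user\" rather than the user \"root\".';echo;sleep 10" ssh-rsa AAAA...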
(*) Do this at your own risk. I recommend always leaving a console open, just in case you're not able to log in after you make the configuration changes.
For reference you can read the man pages:
man sshd_config
man sshd

I encountered a similar problem when setting up a Hadoop cluster on Amazon EC2.
My head node needs root SSH access to each worker/slave node. I aliased the connections by adding each slave node's IP address, private address, and alias name to the /etc/hosts file. (I get that data by running the command echo -e "`hostname -i`\t`hostname -f`\talias-name", where alias-name is what I call each node, head or n1 for example.) Then I put that output for each node in every node's /etc/hosts file, as in the example below.
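For example, the resulting /etc/hosts entries might look like this (the IP addresses and internal hostnames below are made up):
10.0.0.10   ip-10-0-0-10.ec2.internal   head
10.0.0.11   ip-10-0-0-11.ec2.internal   n1
10.0.0.12   ip-10-0-0-12.ec2.internal   n2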
The problem I kept encountering is that when I typed ssh n1 on my head node to ssh into my first slave node, I got that same error message: Please login as the user "ec2-user" rather than the user "root".
So after doing some research, I figured out how to fix it.
First:
ssh into your server. Non-root (ec2-user) access is fine here.
Then su - your way into root. Now vi /etc/ssh/sshd_config and un-comment the line PermitRootLogin yes.
Exit the vi editor.
Now restart the ssh daemon by typing service sshd stop and then service sshd start.
Second:
Now, here is the part I had to dig for:
Run vi /root/.ssh/authorized_keys
Comment out everything up to ssh-rsa: just put a # at the beginning of the file's content, before no-port-forwarding..., and hit Enter on ssh-rsa to move it to the next line (this way you don't have to delete anything in case you want to backtrack).
Exit the vi editor.
Now you should be able to log in as root without that error message popping up.
Also, if you are using aliases for a cluster setup, repeat the same steps on each node: first ssh in using ec2-user, then follow the steps.
After adding the IP address, private address, and alias name info to your /etc/hosts file, you should be able to ssh into each node's root using the alias name, for example ssh n1.
The tutorial I followed is here: https://www.youtube.com/watch?v=xrxQXfE7t9A
But it didn't discuss the problem with root login.
Hope that helps! It worked for me.
*Keep in mind that I haven't taken any security into account. This is simply a practice/dev setup.

I think it's just asking you to log in with another username. Do you happen to have a user called ec2-user? If so, try this instead:
ssh -i mykey.pem ec2-user@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com

I faced the same problem when I tried to access my EC2 instance as 'root' through the Windows PuTTY client; this is how I solved it.
Access and edit the SSH configuration file to allow root login and password authentication.
Log in as ec2-user (by default it is allowed).
Enter the command below to open the ssh config:
sudo vi /etc/ssh/sshd_config
Edit the SSH configuration file as below using vi:
PermitRootLogin yes (remove the # at the beginning if present)
PasswordAuthentication yes
Restart SSH
sudo /etc/init.d/sshd restart
Change/set root password
sudo passwd root
Type a new password and re-enter it (at least 8 characters).
Exit the current session and close PuTTY:
exit
Try logging in again as root, using the previously set password.

Solved!
Try comparing the root key file and the user key file:
diff /root/.ssh/authorized_keys /home/user/.ssh/authorized_keys
...and see the difference.

For anyone like me who created a new user, copied root's .ssh dir to the new user, set ownership, and STILL got this error: look at the new user's ~/.ssh/authorized_keys file. It has SSH options specified that force the prompt. Delete everything from that line up to the ssh-rsa and you'll be good to go.
Or copy /home/ec2-user/.ssh to the new user's home directory instead of /root/.ssh, as sketched below.
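A minimal sketch of that copy, assuming the new user is called newuser:
sudo cp -r /home/ec2-user/.ssh /home/newuser/
sudo chown -R newuser:newuser /home/newuser/.ssh
sudo chmod 700 /home/newuser/.ssh
sudo chmod 600 /home/newuser/.ssh/authorized_keys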

Edit /etc/ssh/sshd_config, and make sure this is set:
PasswordAuthentication yes
Then reload SSH:
systemctl reload sshd.service
You can now log in as users other than ec2-user.

ssh -i mykey.pem root@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
Just replace the above command with this:
ssh -i mykey.pem ubuntu@xxx-xxx-xx-xx-xxx.compute-1.amazonaws.com
It works in my case (Ubuntu AMIs use ubuntu as the default user instead of ec2-user).

For those who are looking for a single, simple line:
sudo ssh -i ./mykey.pem ec2-user@ec2-x-xx-xxx-xxx.us-east-2.compute.amazonaws.com
Note that you can get the part after the @ from the Public IPv4 DNS section on your instance summary page.

Related

Access to jumpbox as normal user and change to root user in ansible

Here is my situation. I want to access a server through a jumpbox/bastion host.
So I log in to the jumpbox as a normal user, switch to root, and then log in to the remote server as root. I don't have direct root access on the jumpbox.
$ ssh user@jumpbox
$ user@jumpbox:~$ su - root
Enter Password:
$ root@jumpbox:~# ssh root@remoteserver
Enter Password:
$ root@remoteserver:~#
The above is the manual workflow. I want to achieve this in Ansible.
I have seen something like this:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@jumpbox"'
This does not work when we need to switch to root before logging in to the remote server.
There are a few things to unpack here:
General Design / Issue:
This isn't an Ansible issue, it's an ssh issue/proxy misconfiguration.
A bastion host/ssh proxy isn't meant to be logged into and have commands run directly on it interactively (like su - root, enter password, then ssh...). That's not really a bastion, that's just a server you're logging into and running commands on. It's not an actual ssh proxy/bastion/jump role. At that point you might as well just run Ansible on that host.
That's why things like ProxyJump and ProxyCommand aren't working. They are designed to work with ssh proxies that are configured as ssh proxies (bastions).
Running Ansible Tasks as Root:
Ansible can run with sudo during task execution (it's called "become" in Ansible lingo), so you should never need to SSH as the literal root user with Ansible (you shouldn't really ever ssh as root).
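As a small illustration, an ad-hoc command that uses become (the -b flag) to restart a service as root; the inventory group webservers here is hypothetical:
ansible webservers -i inventory -m ansible.builtin.service -a "name=sshd state=restarted" -b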
Answering the question:
There are a lot of workarounds for this, but the straightforward answer here is to configure the jump host as a proper bastion and your issue will go away. An example...
As the bastion "user", create an ssh key pair, or use an existing one.
On the bastion, edit the user's ~/.ssh/config file to access the target server with the private key and desired user.
EXAMPLE user@bastion's ~/.ssh/config (I cringe seeing root here)...
Host remote-server
User root
IdentityFile ~/.ssh/my-private-key
Add the public key created in step 1 to the target server's ~/.ssh/authorized_keys file for the user you're logging in as.
After that type of config, your jump host works as a regular ssh proxy. You can then use ProxyCommand or ProxyJump as you originally tried, without issue; see the sketch below.
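With the bastion configured that way, client-side usage could look roughly like this (hostnames are placeholders):
# Plain ssh through the jump host
ssh -J user@jumpbox root@remote-server
# Or the Ansible inventory variable, much as you originally tried
ansible_ssh_common_args: '-o ProxyJump=user@jumpbox'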

PuTTY Access Denied

I am trying to log in using the PuTTY SSH client, and it shows the error "Access Denied Using Keyboard-interactive authentication.", even though I am using the correct id and password.
I found my solution.
I tried:
1- SSH config reset: http://YOURIPADDRESS:2086/scripts2/doautofixer?autofix=safesshrestart
2- Host Access Control: add an sshd ip allow rule
3- Manage root's SSH Keys: create a key and connect with PuTTY
4- cPanel: log in to any domain -> create ssh key -> PuTTY login, su root
5- WHM Service Manager: sshd enabled
6- Trying a login with another root username
But still could not log in.
SOLUTION:
Connect over ssh to any domain via key -> vim /etc/passwd or vi /etc/passwd
/etc/passwd (preview):
root:x:0:0:root:/root:/bin/bash^M
bin:x:1:1:bin:/bin:/sbin/nologin^M
...
PROBLEM = ^M
Having the system administrator clear the ^M characters solved it.
Note: This happened because I had previously edited the passwd file in a Windows environment. Do not edit Linux system files with a Windows editor.
To edit it yourself: open another PuTTY login and su root -> vim /etc/passwd -> clear all ^M -> to save: :wq, to quit without saving: :q!
EDIT
If that does not work, try using the command line:
putty.exe -l [LOGIN] -pw [PASSWORD] [HOST]
---END EDIT---
Try unchecking the "Attempt keyboard-interactive auth (SSH-2)" checkbox (under Connection > SSH > Auth in the PuTTY configuration).
Maybe you are logging in as a user that is blocked (this error can happen when trying to log in as root).
If you need to be a blocked user, try logging in as another user; then, after you are successfully logged in, run the command:
su amit
or
su root
as the case may be.
Instead of editing /etc/passwd with vim to remove the ^M, you can just install dos2unix to fix the issue:
sudo apt install dos2unix -y
sudo dos2unix /etc/passwd
The ^M typically appears when someone edits a Linux file in a Windows text editor and then saves it back to a Linux system. The Linux system picks up the end-of-line character placed by Windows and displays it as ^M.
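If you want to confirm the problem before running dos2unix, you can make the carriage returns visible first (a quick check using cat's -v option):
cat -v /etc/passwd | head
CRLF line endings show up as a trailing ^M on each line.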

Ambiguous output redirect while trying to set up passwordless ssh

I have installed keychain using the command:
apt-get install keychain
Then I tried to set up a passwordless ssh connection like this:
ssh-keygen -t rsa
After that,
ssh-copy-id user@host
it prompts for the password, but then the following error is displayed:
.ssh/authorized_keys: No such file or directory
Ambiguous output redirect
How can I fix this problem?
You might try first logging onto the host again (with your password), and then running ssh-keygen (with no arguments) there. I think that should create a .ssh directory there for you.
And then (back on your local machine) when you run ssh-copy-id, it should work.
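Alternatively, a minimal sketch that creates the remote directory directly, without generating an unused key pair on the host:
ssh user@host 'mkdir -p ~/.ssh && chmod 700 ~/.ssh'
ssh-copy-id user@host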

vagrant login as root by default

Problem: frequently the first command I type to my boxes is su -.
Question: how do I make vagrant ssh use the root user by default?
Version: vagrant 1.6.5
This is useful for anyone who's been caught out by the need to set a root password in vagrant first:
sudo passwd root
Solution:
Add the following to your Vagrantfile:
config.ssh.username = 'root'
config.ssh.password = 'vagrant'
config.ssh.insert_key = true
When you vagrant ssh henceforth, you will log in as root and should expect the following:
==> mybox: Waiting for machine to boot. This may take a few minutes...
mybox: SSH address: 127.0.0.1:2222
mybox: SSH username: root
mybox: SSH auth method: password
mybox: Warning: Connection timeout. Retrying...
mybox: Warning: Remote connection disconnect. Retrying...
==> mybox: Inserting Vagrant public key within guest...
==> mybox: Key inserted! Disconnecting and reconnecting using new SSH key...
==> mybox: Machine booted and ready!
Update 23-Jun-2015:
This works for version 1.7.2 as well. Keying security has improved since 1.7.0; this technique reverts to the previous method, which uses a known private key. This solution is not intended for a box that is publicly accessible without proper security measures taken prior to publishing.
Reference:
https://docs.vagrantup.com/v2/vagrantfile/ssh_settings.html
This works if you are on an ubuntu/trusty64 box:
vagrant ssh
Once you are in the ubuntu box:
sudo su
Now you are the root user. You can update the root password as shown below:
sudo -i
passwd
Now edit the line below in the file /etc/ssh/sshd_config:
PermitRootLogin yes
Also, it is convenient to create your own alternate username:
adduser johndoe
Wait until it asks for a password.
If your Vagrantfile is as below:
config.ssh.username = 'root'
config.ssh.password = 'vagrant'
config.ssh.insert_key = true
but vagrant still asks you for the root password, most likely the base box you used is not configured to allow root login.
For example, the official ubuntu14.04 box does not set PermitRootLogin yes in /etc/ssh/sshd_config.
So if you want a box that can log in as root by default (only a Vagrantfile, no more work), you have to:
Set up a VM with the username vagrant (any name but root).
Log in and edit the sshd config file.
ubuntu: edit /etc/ssh/sshd_config, set PermitRootLogin yes
others: ....
(I only use ubuntu; feel free to add workarounds for other platforms.)
Build a new base box:
vagrant package --base your-vm-name
This creates a file package.box.
Add that base box to vagrant:
vagrant box add ubuntu-root file:///somepath/package.box
Then you need to use this base box to build a VM that allows automatic login as root.
Destroy the original VM with vagrant destroy.
Edit the original Vagrantfile, change the box name to ubuntu-root and the username to root, then vagrant up to create a new one.
It took me some time to figure this out; it is too complicated in my opinion. I hope vagrant improves this.
Don't forget that root must be allowed to log in first!
Place the config line below in the /etc/ssh/sshd_config file.
PermitRootLogin yes
Note: Only use this method for local development, it's not secure.
You can set up the password and ssh config while provisioning the box. For example, with the debian/stretch64 box this is my provisioning script:
config.vm.provision "shell", inline: <<-SHELL
echo -e "vagrant\nvagrant" | passwd root
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
sed -in 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
service ssh restart
SHELL
This will set the root password to vagrant and permit root login with a password. If you are using a private_network, say with ip address 192.168.10.37, then you can ssh in with ssh root@192.168.10.37.
You may need to change the echo and sed commands depending on the default sshd_config file; see the sketch below.
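For example, a slightly more defensive sed variant (an untested sketch; adjust to your base box) that also matches a commented-out line:
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config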
Adding this to the Vagrantfile worked for me. These lines are the equivalent of entering sudo su - every time you log in. Please note that this requires reprovisioning the VM.
config.vm.provision "shell", inline: <<-SHELL
echo "sudo su -" >> .bashrc
SHELL
I know this is an old question, but looking at the original question, it looks like the user just wanted to run a command as root; that's what I needed to do when I was searching for an answer and stumbled across this question.
So this one is worth knowing in my opinion:
vagrant ssh servername -c "echo vagrant | sudo -S shutdown 0"
vagrant is the password being echoed into the sudo command because, as we all know, the vagrant account has sudo privileges, and when you sudo you need to specify the password of the user account, not root. And of course, by default, the vagrant user's password is vagrant!
By default you need root privileges to shut down, so I guess doing a shutdown is a good test.
Obviously you don't need to specify a server name if there is only one for that vagrant environment. Also, we're talking about a local vagrant virtual machine on the host, so there isn't really any security issue that I can see.
Hope this helps.
I had some trouble with provisioning when trying to log in as root, even with PermitRootLogin yes. I made it so only the vagrant ssh command is affected:
# Login as root when doing vagrant ssh
if ARGV[0]=='ssh'
config.ssh.username = 'root'
end
I used vagrant putty with the vagrant multi putty plugin; it took me directly to root.
vagrant destroy
vagrant up
Please add this to your Vagrantfile:
config.ssh.username = 'vagrant'
config.ssh.password = 'vagrant'
config.ssh.insert_key = true

ssh: The authenticity of host 'hostname' can't be established

When I ssh to a machine, sometimes I get this warning and it prompts me to say "yes" or "no". This causes trouble when running from scripts that automatically ssh to other machines.
Warning Message:
The authenticity of host '<host>' can't be established.
ECDSA key fingerprint is SHA256:TER0dEslggzS/BROmiE/s70WqcYy6bk52fs+MLTIptM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pc' (ECDSA) to the list of known hosts.
Is there a way to automatically say "yes" or ignore this?
Depending on your ssh client, you can set the StrictHostKeyChecking option to no on the command line, and/or send the key to a null known_hosts file. You can also set these options in your config file, either for all hosts or for a given set of IP addresses or host names.
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
EDIT
As @IanDunn notes, there are security risks to doing this. If the resource you're connecting to has been spoofed by an attacker, they could potentially replay the destination server's challenge back to you, fooling you into thinking that you're connecting to the remote resource while in fact they are connecting to that resource with your credentials. You should carefully consider whether that's an appropriate risk to take on before altering your connection mechanism to skip HostKeyChecking.
Reference.
Old question that deserves a better answer.
You can prevent the interactive prompt without disabling StrictHostKeyChecking (which would be insecure).
Incorporate the following logic into your script:
if [ -z "$(ssh-keygen -F "$IP")" ]; then
  # Host key not known yet: fetch it and append (hashed) to known_hosts
  ssh-keyscan -H "$IP" >> ~/.ssh/known_hosts
fi
It checks whether the public key of the server is in known_hosts. If not, it requests the public key from the server and adds it to known_hosts.
This way you are exposed to a Man-In-The-Middle attack only once, which may be mitigated by:
ensuring that the script connects the first time over a secure channel
inspecting logs or known_hosts to check fingerprints manually (to be done only once)
To disable (or control disabling), add the following lines to the beginning of /etc/ssh/ssh_config...
Host 192.168.0.*
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
Options:
The Host subnet can be * to allow unrestricted access to all IPs.
Edit /etc/ssh/ssh_config for global configuration or ~/.ssh/config for user-specific configuration.
See http://linuxcommando.blogspot.com/2008/10/how-to-disable-ssh-host-key-checking.html
Similar question on superuser.com - see https://superuser.com/a/628801/55163
Make sure ~/.ssh/known_hosts is writable. That fixed it for me.
The best way to go about this is to use 'BatchMode' in addition to 'StrictHostKeyChecking'. This way, your script will accept a new hostname and write it to the known_hosts file, but won't require yes/no intervention.
ssh -o BatchMode=yes -o StrictHostKeyChecking=no user@server.example.com "uptime"
This warning is issued due to the security features; do not disable this feature.
It's only displayed once.
If it still appears after the second connection, the problem is probably with writing to the known_hosts file.
In this case you'll also get the following message:
Failed to add the host to the list of known hosts
You may fix it by changing the owner or the permissions of the file so that it is writable by your user:
sudo chown -v $USER ~/.ssh/known_hosts
Edit your config file, normally located at ~/.ssh/config, and at the beginning of the file add the lines below:
Host *
User your_login_user
StrictHostKeyChecking no
IdentityFile ~/my_path/id_rsa
User set to your_login_user says that these settings belong to your_login_user
StrictHostKeyChecking set to no will avoid the prompt
IdentityFile is the path to your RSA private key (the private key, not the .pub file)
This works for me and my scripts, good luck to you.
Ideally, you should create a self-managed certificate authority. Start with generating a key pair:
ssh-keygen -f cert_signer
Then sign each server's public host key:
ssh-keygen -s cert_signer -I cert_signer -h -n www.example.com -V +52w /etc/ssh/ssh_host_rsa_key.pub
This generates a signed public host key:
/etc/ssh/ssh_host_rsa_key-cert.pub
In /etc/ssh/sshd_config, point the HostCertificate to this file:
HostCertificate /etc/ssh/ssh_host_rsa_key-cert.pub
Restart the sshd service:
service sshd restart
Then on the SSH client, add the following to ~/.ssh/known_hosts:
@cert-authority *.example.com ssh-rsa AAAAB3Nz...cYwy+1Y2u/
The above contains:
@cert-authority
The domain *.example.com
The full contents of the public key cert_signer.pub
The cert_signer public key will trust any server whose public host key is signed by the cert_signer private key.
Although this requires a one-time configuration on the client side, you can trust multiple servers, including those that haven't been provisioned yet (as long as you sign each server, that is).
For more details, see this wiki page.
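To sanity-check the result on the server, you can print the certificate's contents (key ID, validity period, signing CA) with:
ssh-keygen -L -f /etc/ssh/ssh_host_rsa_key-cert.pub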
Do this -> chmod +w ~/.ssh/known_hosts. This adds write permission to the file at ~/.ssh/known_hosts. After that the remote host will be added to the known_hosts file when you connect to it the next time.
With reference to Cori's answer, I modified it and used the command below, which works. Without exit, the remaining command would actually log in to the remote machine, which I didn't want in a script:
ssh -o StrictHostKeyChecking=no user@ip_of_remote_machine "exit"
Add these to your /etc/ssh/ssh_config
Host *
UserKnownHostsFile=/dev/null
StrictHostKeyChecking=no
Generally this problem occurs when you are modifying the keys very often. Depending on the server, it might take some time for the new key you generated and pasted on the server to take effect. So after generating the key and pasting it on the server, wait 3 to 4 hours and then try; the problem should be solved. It happened to me.
The following steps are used to authenticate yourself to the host:
Generate an ssh key. You will be asked to create a password for the key:
ssh-keygen -f ~/.ssh/id_ecdsa -t ecdsa -b 521
(The above uses the recommended encryption technique.)
Copy the key over to the remote host:
ssh-copy-id -i ~/.ssh/id_ecdsa user@host
N.B. The user@host will be different for you. You will need to type in the password for this server, not the key's password.
You can now log in to the server securely and not get an error message:
ssh user@host
All the source information is located here:
ssh-keygen
For anyone who finds this and is simply looking to prevent the prompt on first connection, but still wants ssh to strictly check the key on subsequent connections (trust on first use), you can set StrictHostKeyChecking to accept-new in ~/.ssh/config, which will do what you're looking for. You can read more about it in man ssh_config. I strongly discourage disabling key checking altogether.
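A minimal ~/.ssh/config sketch for that:
Host *
    StrictHostKeyChecking accept-new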
Run this on the host server; it's a permission issue:
chmod -R 700 ~/.ssh
I had the same error and want to draw attention to the fact that, as just happened to me, you might simply have the wrong privileges. You've set up your .ssh directory as either the regular or the root user, so you need to be the corresponding user. When this error appeared, I was root, but I had configured .ssh as the regular user. Exiting root fixed it.
This happens when trying to establish password-less authentication. If you run the command manually once, it will ask for the password there. Once you confirm, the host is saved to known_hosts, and it will never ask you to type 'yes' or 'no' again.
For me the reason was that I had the wrong permissions on ~/.ssh/known_hosts.
I had no write permission on the known_hosts file, so it asked me again and again.
In my case, the host was unknown, and instead of typing yes to the question Are you sure you want to continue connecting (yes/no/[fingerprint])? I was just hitting Enter.
I solved the issue that gives the error below:
Error:
The authenticity of host 'XXX.XXX.XXX' can't be established.
RSA key fingerprint is 09:6c:ef:cd:55:c4:4f:ss:5a:88:46:0a:a9:27:83:89.
Solution:
1. Install any OpenSSH tool.
2. Run the ssh command.
3. It will ask whether you want to add this host.
4. Accept yes.
5. The host will be added to the known hosts list.
Now you are able to connect to this host.
This solution works.