How to log in to an EC2 machine? - ssh

I was given some login information for an EC2 machine, basically an ec2-X-X-X.compute-X.amazonaws.com plus a username and password.
How do I access the machine? I tried sshing:
ssh username@ec2-X-X-X.compute-X.amazonaws.com
but I get a Permission denied, please try again. when I enter the password. Is sshing the right way to access the EC2 machine? (Google hits I found suggested that you could ssh into the machine, but they also used keypairs.) Or is it more likely that the problem is that I was given invalid login credentials?

If you are new to AWS and need to access a brand new EC2 instance via ssh, keep in mind that you also need to allow incoming traffic on port 22.
Assuming that the EC2 instance was created accepting all the default wizard suggestions, access to the machine will be guarded by the default security group, which basically prohibits all inbound traffic. Thus:
Go to the AWS console
Choose Security Groups on the left navigation pane
Choose default from the main pane (it may be the only item in the list)
In the bottom pane, choose Inbound, then Create a new rule: SSH
Click Add rule and then Apply Rule Changes
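If you prefer the command line, roughly the same rule can be added with the AWS CLI (this is only a sketch: it assumes the CLI is installed and configured, and that your instance really uses the security group named default):
aws ec2 authorize-security-group-ingress --group-name default --protocol tcp --port 22 --cidr 0.0.0.0/0
Note that 0.0.0.0/0 opens port 22 to the whole internet; restrict the CIDR to your own IP if you can.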
Next, assuming that you are in possession of the private key, do the following:
$ chmod 600 path/to/mykey.pem
$ ssh -i path/to/mykey.pem root@ec2-X-X-X.compute-X.amazonaws.com
My EC2 instance was created from a Ubuntu 32-bit 12.04 image, whose configuration does not allow ssh access to root, and asks you to log in as ubuntu instead:
$ ssh -i path/to/mykey.pem ubuntu@ec2-X-X-X.compute-X.amazonaws.com
Cheers,
Giuseppe

Our Amazon AMI says "Please login as the ec2-user user rather than root user.", so it looks like each image may have a different login user, e.g.
ssh -i ~/.ssh/mykey.pem ec2-user@ec2-NN-NNN-NN-NN.us-foo-N.compute.amazonaws.com
In short, try root and it will tell you which user you should log in as.
[Edit] I'm supposing that you don't have AWS management console credentials for the account, but if you do, then you can navigate to the EC2->Instances panel of AWS Management Console, right click on the machine name and select "Connect..." A list of the available options for logging in will be displayed. You will (or should) need a key to access an instance via ssh. You should have been given this or else it may need to be generated.
If it's a Windows instance, you may need to use Remote Desktop Connection to connect using the IP or host name, and then you'll also need a Windows account login and password.

The process of connecting to an AWS EC2 Linux instance via SSH is covered step-by-step (including the points mentioned below) in this video.
To correct this particular issue with SSH-ing to your EC2 instance:
The ssh command you ran is not in the correct format. It should be:
ssh -i /path/my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
Note, you need access to the private key (.pem) file to use in the command above. AWS prompts you to download this file when you first launch your instance. You will need to run the following command to ensure that only you, the file's owner, can read it:
chmod 400 /path/to/yourKeyFile.pem
Depending on your Linux distribution, the user you need to specify when you run ssh may be one of the following:
For Amazon Linux, the user name is ec2-user.
For RHEL, the user name is ec2-user or root.
For Ubuntu, the user name is ubuntu or root.
For Centos, the user name is centos.
For Fedora, the user name is ec2-user.
For SUSE, the user name is ec2-user or root.
Otherwise, if ec2-user and root don't work, check with your AMI provider.
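If you are not sure which of these applies to your AMI, a quick way to probe is a small shell loop; this is only a sketch, reusing the placeholder key path and hostname from the command above:
for user in ec2-user ubuntu centos fedora admin root; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 -i /path/my-key-pair.pem "$user@ec2-198-51-100-1.compute-1.amazonaws.com" true && echo "login works as: $user" && break
done
BatchMode=yes makes ssh fail immediately instead of prompting, so only a user the instance accepts prints a message.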
You need to allow inbound SSH traffic with a firewall rule. This can be done under the Security Groups section of AWS. Full details for this piece can be found here.

For this you need to have a private key file, e.g. keyname.pem.
Open the terminal using ctrl+alt+t.
Change the file permissions to 400 or 600 using chmod 400 keyname.pem or chmod 600 keyname.pem.
Open port 22 in the security group.
Run the following command in the terminal: ssh -i keyname.pem username@ec2-X-X-X.compute-X.amazonaws.com

Indeed, EC2 (Amazon Elastic Compute Cloud) does not allow password authentication to its instances (Linux machines) by default.
The only allowed authentication method is with an SSH key that is created when you create the instance. During creation you are allowed to download the SSH key just once, so if you lose it, you have to regenerate it.
This SSH key is only for the primary user - usually named
"ec2-user" (Amazon Linux, Red Hat Linux, SUSE Linux)
"root" (Red Hat Linux, SUSE Linux)
"ubuntu" (Ubuntu Linux distribution)
"fedora" (Fedora Linux distribution)
or similar (depending on distribution)
See connection instructions: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstances.html
If you want to add a new user the recommended way is to generate and add a new SSH key for the new user, but not specify a password (which would be useless anyway since password authentication is not enabled by default).
Managing additional users: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/managing-users.html
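As a rough sketch of that recommended approach (newuser and the key material are placeholders; run this on the instance as the primary user), you create the account and install the new user's own public key:
sudo adduser newuser
sudo mkdir -p /home/newuser/.ssh
echo 'ssh-rsa AAAA...the-new-users-public-key...' | sudo tee /home/newuser/.ssh/authorized_keys
sudo chown -R newuser:newuser /home/newuser/.ssh
sudo chmod 700 /home/newuser/.ssh
sudo chmod 600 /home/newuser/.ssh/authorized_keys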
That said, if you want to enable password authentication, which lowers security and is not recommended, but which you might still need for your own specific reasons, then just edit
/etc/ssh/sshd_config
For example:
sudo vim /etc/ssh/sshd_config
find the line that says:
PasswordAuthentication no
and change it to
PasswordAuthentication yes
Then restart the instance
sudo reboot
After restarting, you are free to create additional users with password authentication.
sudo useradd newuser
sudo passwd newuser
Add the new user to the sudoers list:
sudo usermod -a -G sudo newuser
Make sure the user's home folder exists and is owned by the user
sudo mkdir /home/newuser
sudo chown newuser:newuser /home/newuser
Now you are ready to try logging in as newuser via ssh.
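For example, from your own machine (reusing the placeholder hostname from the question):
ssh newuser@ec2-X-X-X.compute-X.amazonaws.com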
Authentication with ssh keys will continue to work in parallel with password authentication.

Related

Can't login with root user in native templates of environments Jelastic

When I create a new environment with some nodes (e.g. with Nginx), I can't access the node as the root user.
I am logged in as a regular user, not as root.
Using username "251X-XXX".
Authenticating with public key "rsa-key-XXXXXXXX"
Last login: Thu Sep 28 09:11:56 2017
nginx@node251X-delete ~ $ sudo date
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for nginx:
Sorry, try again.
Brief:
I didn't receive root password to my email (I'm the owner of this environment).
I can't change this node to a Docker image
There's no Reset Password option on Dashboard
Sudo doesn't work.
Also it happens with other non-Docker nodes (Tomcat, MySQL, ...).
Is there any alternative or configuration to access this node as the root user?
Thanks
Jelastic doesn't provide root access to separate containers. At the same time while accessing containers via SSH, a user receives all required permissions and additionally can manage the main services with sudo commands of the following kind (and others):
sudo /etc/init.d/jetty start
sudo /etc/init.d/mysql stop
sudo /etc/init.d/tomcat restart
sudo /etc/init.d/memcached status
sudo /etc/init.d/mongod reload
sudo /etc/init.d/nginx upgrade
sudo /etc/init.d/httpd help
For example, you can restart nginx with the following command:
sudo /etc/init.d/nginx restart
No password will be requested.
Note: If you deploy any application, change the configurations or add any extra functionality via SSH to your Jelastic environment, this will not be displayed in the Jelastic dashboard.
Using our documentation you’ll find out how to:
use SFTP and FISH protocols
manage containers via SSH with Capistrano
Root user is only provided for self-managed nodes (custom Docker / Elastic VPS).
You can execute specific whitelisted commands with sudo (e.g. sudo service nginx restart). Besides that you shouldn't need root access.
If you feel otherwise then contact your hosting provider to discuss your needs and they can find a solution for you.

Unable to connect using SSH to the pushed MobileFirst container image on Bluemix

I have built an MF container image and pushed it. I have copied the file in (Mac) ~/.ssh/id_rsa.pub to mfpf-server/usr/ssh before building the image.
I am trying to connect using the command in Mac terminal:
ssh -i ~/.ssh/id_rsa admin@public_ip
It says:
Permission denied (publickey).
Any idea? What is the user I shall use?
Your problem is very probably related to the permissions of the pub key copied onto the container, or to the configuration of your key.
You could check the permissions of the key copied onto the container; sshd is really strict about permissions for the authorized_keys file: if authorized_keys is writable by anybody other than the user, or can be made writable by anybody other than the user, sshd will refuse to authenticate (unless sshd is configured with StrictModes no).
Moreover, such a problem won't show up when using ssh -v; it is only shown in the daemon logs (on the container).
From man sshd(8):
~/.ssh/authorized_keys
Lists the public keys (RSA/DSA) that can be used for logging in
as this user. The format of this file is described above. The
content of the file is not highly sensitive, but the recommended
permissions are read/write for the user, and not accessible by
others.
If this file, the ~/.ssh directory, or the user's home directory
are writable by other users, then the file could be modified or
replaced by unauthorized users. In this case, sshd will not
allow it to be used unless the StrictModes option has been set to
“no”.
So I suggest you check the file and directory permissions.
Then check that the content of your pub key has been copied correctly into authorized_keys by listing
/root/.ssh/authorized_keys
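As a sketch of the checks and fixes described above (run inside the container as root, assuming the key was installed for the root user):
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
chown root:root /root/.ssh /root/.ssh/authorized_keys
cat /root/.ssh/authorized_keys    # should contain exactly the contents of your id_rsa.pub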
To access the container with the ssh key you need to use the "root" user.
ssh -i ~/.ssh/id_rsa root@<ip address>

Does the .ssh file automatically come installed on a linux system by default?

If I tell someone to look in
~/.ssh
Can I assume that that folder will always exist on a *nix filesystem? Specifically, is it always there on the standard distros of Linux and Mac OS X? I'm following the GitHub generate ssh keys tutorial, and it appears to assume that ssh is something included by default. Is that true?
Update: apparently Mac OS X has an SSH server installed by default, but it is not enabled. According to the blog post by Chris Double,
The Apple Mac OS X operating system has SSH installed by default but the SSH daemon is not enabled. This means you can’t login remotely or do remote copies until you enable it.
To enable it, go to ‘System Preferences’. Under ‘Internet & Networking’ there is a ‘Sharing’ icon. Run that. In the list that appears, check the ‘Remote Login’ option.
This starts the SSH daemon immediately and you can remotely login using your username. The ‘Sharing’ window shows at the bottom the name and IP address to use. You can also find this out using ‘whoami’ and ‘ifconfig’ from the Terminal application.
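For reference, Remote Login can also be toggled from the Terminal on Mac OS X (assuming an administrator account):
sudo systemsetup -setremotelogin on
sudo systemsetup -getremotelogin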
On OS X, Ubuntu, CentOS and presumably other linux distros the ~/.ssh directory does not exist by default in a user's home directory. On OS X and most linux distros the ssh-client and typically an ssh server are installed by default so that can be a safe assumption.
The absence of the ~/.ssh directory does not mean that the ssh client is not installed or that an ssh server is not installed. It just means that particular user has not created the directory or used the ssh client before. A user can create the directory automatically by successfully sshing to a host which will add the host to the client's ~/.ssh/known_hosts file or by generating a key via ssh-keygen. A user can also create the directory manually via the following commands.
mkdir ~/.ssh
chmod 700 ~/.ssh
To test whether an ssh client and/or server is installed and accessible on the path you can use the which command. Output will indicate whether the command is installed and in the current user's path.
which ssh # ssh client
which sshd # ssh server
I would say no. I guess on 99% of the systems there is an ssh server running but IMHO in most cases you need to install that software on your own.
And even if it is installed, the directories are created on the first usage of ssh for that user.

Amazon AWS EC2 Instance - Can't connect with SSH

This shouldn't be this hard. I cannot connect to new AWS EC2 instance via SSH clients. I am connecting from a Win 7 box.
Instance OS: Debian 6
AMI: debian-squeeze-i386-20121119-e4554303-3a9d-412e-9604-eae67dde7b76-ami-1977f070.1(ami-a121a6c8)
User: tried root and also ec2-user
Using .pem keypair that AWS generated and I downloaded
Confirmed security group and Key Pair Name on instance
SSH port 22 is OPEN: Nmap says so and Telnet gets a welcome reply
Using 3 different clients: all clients connect ok
PuTTY replies: Server refused our key
MindTerm Java browser add-in replies: Authentication failed, permission denied
Bitvise SSH replies: Attempting 'publickey' auth; auth failed;
Rebooted instance, wash, rinse, repeat...
REBUILT new instance and new keypair, wash, rinse, repeat...
Connecting isn't the issue. Why would the instance not accept the .pem file as the password? Is there an additional step I am missing? I followed EVERY frigging guide I could Google. AWS support is a joke. stackoverflow to the rescue...
TIA.
According to the Debian wiki, which has documentation on the AMI you are using, the username you need to use to log in is 'admin'.
I have had many issues with connecting to EC2 via ssh.
ssh -i the-keypair-filename root@yourdomain.com
- The keypair file must be in the same directory.
- I just used the terminal to connect.
Make sure you generate or assign the keypair when launching the instance.
Also you can verify the keypair you have set in the AWS Management Console, this is done by selecting the running instance and then looking for "Key Pair Name:".
I hope this is helpful.
My problem was that I didn't add a volume that was expected in the fstab file so the server didn't start fully and the sshd daemon wasn't running.
Check with:
telnet HOST 22
Check the server logs to make sure it starts properly before you waste lots of time like I did.
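If you cannot log in at all, one way to look at those boot logs is the EC2 console output; a sketch using the AWS CLI (the instance ID is a placeholder):
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text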
Amazon Linux AMIs that use the ec2-user login are listed at the bottom of this page.
http://aws.amazon.com/amazon-linux-ami/
Check that you are using one of those if trying to use ec2-user, or check the documentation for the AMI you are using.
Teri
Try using the "admin" username and ignore the username suggested by Amazon.
I had a similar problem and solved it with the following approach.
1) Edited the knife.rb file in my chef folder, i.e. :\Users\Administrator\chef-starter\chef-repo.chef\knife.rb, as below:
knife[:aws_access_key_id] = "xxxxxxxxxxxxxxxxxxxx"
knife[:aws_secret_access_key] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
knife[:region] = 'ap-southeast-1'
knife[:aws_ssh_key_id] = "ChefUser"
knife[:ssh_user]="ec2-user"
In the command prompt, issued the command to create an ec2-server:
knife ec2 server create -r "role[webserver]" --image ami-abcd1234 --flavor t1.micro -G ChefClient -x root -N server01 -i H:\Chef-files\ChefUser.pem
Note that, even though I had given all the details in the knife.rb file, I had to give the .pem file path on the command line through the -i option. That solved my problem.
Check if this solution helps you.
Cheers,
Chandan
Logging in as "ubuntu" worked for me:
ssh -i private_key.pem ubuntu@myubuntuserver
Hope this helps
--Erin

SSH to AWS Instance without key pairs

1: Is there a way to log in to an AWS instance without using key pairs? I want to set up a couple of sites/users on a single instance. However, I don't want to give out key pairs for clients to log in.
2: What's the easiest way to set up hosting sites/users in 1 AWS instance with different domains pointing to separate directories?
Answer to Question 1
Here's what I did on a Ubuntu EC2:
A) Login as root using the keypairs
B) Set up the necessary users and their passwords with
# sudo adduser USERNAME
# sudo passwd USERNAME
C) Edit /etc/ssh/sshd_config, setting:
For a valid user to log in with no key:
PasswordAuthentication yes
If you also want root to log in with no key:
PermitRootLogin yes
D) Restart the ssh daemon with
# sudo service ssh restart
just change ssh to sshd if you are using CentOS
Now you can login into your ec2 instance without key pairs.
1) You should be able to change the ssh configuration (on Ubuntu this is typically in /etc/ssh or /etc/sshd) and re-enable password logins.
2) There's nothing really AWS specific about this - Apache can handle VHOSTS (virtual hosts) out-of-the-box - allowing you to specify that a certain domain is served from a certain directory. I'd Google that for more info on the specifics.
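As a minimal sketch of 2) (domain names and paths are placeholders), a pair of name-based Apache virtual hosts might look like:
<VirtualHost *:80>
    ServerName site-one.example.com
    DocumentRoot /var/www/site-one
</VirtualHost>
<VirtualHost *:80>
    ServerName site-two.example.com
    DocumentRoot /var/www/site-two
</VirtualHost>
Each domain's DNS record points at the same instance IP; Apache then picks the directory based on the Host header.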
I came here through Google looking for an answer on how to set up cloud-init so that it does not disable PasswordAuthentication on AWS. Neither of the other answers addresses that issue. Without it, if you create an AMI, then on instance initialization cloud-init will again disable this option.
The correct method is, instead of manually changing sshd_config, to correct the setting for cloud-init (an open source tool used to configure an instance during provisioning; read more at: https://cloudinit.readthedocs.org/en/latest/). The configuration file for cloud-init is found at:
/etc/cloud/cloud.cfg
This file is used for setting up a lot of the configuration used by cloud-init. Read through this file for examples of items you can configure with cloud-init (this includes items like the default username on a newly created instance).
To enable or disable password login over SSH you need to change the value for the parameter ssh_pwauth. After changing the parameter ssh_pwauth from 0 to 1 in the file /etc/cloud/cloud.cfg bake an AMI. If you launch from this newly baked AMI it will have password authentication enabled after provisioning.
You can confirm this by checking the value of the PasswordAuthentication in the ssh config as mentioned in the other answers.
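For example, after the change the relevant line in /etc/cloud/cloud.cfg would look roughly like this (the exact syntax varies slightly between images; some use true/false instead of 1/0):
ssh_pwauth: 1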
Recently, AWS added a feature called Session Manager to the Systems Manager service that allows one to SSH into an instance without needing to set up a private key or open up port 22. I believe authentication is done with IAM and optionally MFA.
You can find out more about it here:
https://aws.amazon.com/blogs/aws/new-session-manager/
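Once the instance runs the SSM agent and has an appropriate IAM instance profile, a session can also be opened from the CLI (a sketch; the instance ID is a placeholder and the Session Manager plugin for the AWS CLI must be installed):
aws ssm start-session --target i-0123456789abcdef0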
su - root
Go to /etc/ssh and open sshd_config:
vi sshd_config
Authentication:
PermitRootLogin yes
To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
Change to no to disable tunnelled clear text passwords
PasswordAuthentication yes
:x!
Then restart ssh service
root@cloudera2:/etc/ssh# service ssh restart
ssh stop/waiting
ssh start/running, process 10978
Now go to the sudoers file (/etc/sudoers).
User privilege specification
root ALL=(ALL)NOPASSWD:ALL
yourinstanceuser ALL=(ALL)NOPASSWD:ALL    (this is the user with which you launch the instance)
AWS added a new feature to connect to an instance without any open port: the AWS SSM Session Manager.
https://aws.amazon.com/blogs/aws/new-session-manager/
I've created a neat SSH ProxyCommand script that temporarily adds your public ssh key to the target instance for the duration of the connection. The nice thing about this is that you connect without needing to add the ssh (22) port to your security groups, because the ssh connection is tunneled through an SSM session.
AWS SSM SSH ProxyCommand -> https://gist.github.com/qoomon/fcf2c85194c55aee34b78ddcaa9e83a1
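A related configuration documented by AWS tunnels plain ssh through Session Manager; a sketch for ~/.ssh/config (requires the Session Manager plugin and the right IAM permissions):
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
After that, something like ssh -i mykey.pem ec2-user@i-0123456789abcdef0 connects without opening port 22.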
Amazon added EC2 Instance Connect.
There is an official script to automate the process https://pypi.org/project/ec2instanceconnectcli/
pip install ec2instanceconnectcli
Then just
mssh <instance id>