Running EC2 instance suddenly refuses SSH connection - apache

I set up this EC2 instance a couple of days ago, and as recently as last night I was able to SSH to it with no problems. This morning, I can't ssh to it. Port 22 is already open in the security group and I haven't changed anything since last night.
Error:
ssh: connect to host [ip address] port 22: Connection refused
I had a similar issue recently and I couldn't figure out why it was happening, so I had to create a new instance, set it up again, and connect and configure all the EBS volumes on the new one. That took me a couple of hours... and now it's happening again. On the previous instance I had installed DenyHosts, which might have blocked me, but on the current one only apache2 and mysql are running.
The current instance has been up for 16 hours now, so I don't think it's because it didn't finish booting... Also, port 22 is open to all sources (0.0.0.0/0) and uses the TCP protocol.
Any ideas?
Thanks.

With the help of @abhi.gupta200297, we were able to resolve it.
The issue was an error in /etc/fstab: sshd is supposed to start after fstab processing succeeds, and since it didn't, sshd never started, which is why the connection was refused. The solution was to create a temporary instance, mount the root EBS volume from the original instance, and comment out the bad entries in fstab, and voila, it lets me connect again. For the future, I stopped using fstab altogether: I put a bunch of shell commands that mount the EBS volumes to their directories in an /etc/init.d/ebs-init-mount file and then ran update-rc.d ebs-init-mount defaults to register the script, and I'm no longer having issues with locked-out SSH.
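For anyone copying this approach, here is a minimal sketch of what such an init script could look like; the device names and mount points are hypothetical, and this assumes a sysvinit-style distro where update-rc.d is available:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          ebs-init-mount
# Required-Start:    $local_fs
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:
# Short-Description: Mount EBS volumes without relying on /etc/fstab
### END INIT INFO
# Hypothetical devices and mount points -- adjust to your volumes
mount /dev/xvdf /mnt/data1
mount /dev/xvdg /mnt/data2
Then make it executable and register it:
sudo chmod +x /etc/init.d/ebs-init-mount
sudo update-rc.d ebs-init-mount defaults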
UPDATE 4/23/2015
Amazon team created a video tutorial of similar issue and show how to debug using this method: https://www.youtube.com/watch?v=_P29ZHu_feU

Looks like sshd might have stopped for some reason. Is the instance EBS-backed? If that's the case, try shutting it down and starting it back up. That should solve the problem.
Also, are you able to ssh from AWS web console? They have a java plugin there to ssh into the instance.

For those of you who came across this post because you are unable to SSH into your EC2 instance after a reboot, this is cross-posted to a similar question at serverfault:
From the AWS Developer Forum post on this topic:
Try stopping the broken instance, detaching the EBS volume, and
attaching it as a secondary volume to another instance. Once you've
mounted the broken volume somewhere on the other instance, check the
/etc/ssh/sshd_config file (near the bottom). I had a few RHEL instances
where Yum scrogged the sshd_config inserting duplicate lines at the
bottom that caused sshd to fail on startup because of syntax errors.
Once you've fixed it, just unmount the volume, detach, reattach to
your other instance and fire it back up again.
Let's break this down, with links to the AWS documentation:
Stop the broken instance and detach the EBS (root) volume by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume associated with the instance you stopped.
Start a new instance in the same region and of the same OS as the broken instance then attach the original EBS root volume as a secondary volume to your new instance. The commands in step 4 below assume you mount the volume to a folder called "data".
Once you've mounted the broken volume somewhere on the other instance,
check the "/etc/sshd_config" file for the duplicate entries by issuing these commands:
cd /etc/ssh
sudo nano sshd_config
ctrl-v a bunch of times to get to the bottom of the file
ctrl-k all the lines at the bottom mentioning "PermitRootLogin without-password" and "UseDNS no"
ctrl-x and Y to save and exit the edited file
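If you'd rather not do the edit interactively, a non-interactive sketch that removes every such appended line (assuming the broken volume is mounted under /data as above):
sudo cp /data/etc/ssh/sshd_config /data/etc/ssh/sshd_config.bak
sudo sed -i '/^PermitRootLogin without-password/d; /^UseDNS no/d' /data/etc/ssh/sshd_config
This deletes every line beginning with those two directives, which is the same thing the nano steps above do by hand.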
@Telegard points out (in his comment) that we've only fixed the symptom. We can fix the cause by commenting out the 3 related lines in the "/etc/rc.local" file. So:
cd /data/etc
sudo nano rc.local
look for the "PermitRootLogin..." lines and delete them
ctrl-x and Y to save and exit the edited file
Once you've fixed it, just unmount the volume,
detach it by going into the EC2 Management Console, clicking on "Elastic Block Store" > "Volumes", then right-clicking on the volume associated with the instance you stopped,
reattach to your other instance and
fire it back up again.

This happened to me on a Red Hat EC2 instance because these two lines were being automatically appended to the end of the /etc/ssh/sshd_config file every time I launched my instance:
PermitRootLogin without-password
UseDNS no
One of these append operations was done without a line break, so the tail of the sshd_config file looked like:
PermitRootLogin without-passwordUseDNS noPermitRootLogin without-passwordUseDNS no
That caused sshd to fail to start on the next launch. I think this was caused by the bug reported here: https://bugzilla.redhat.com/show_bug.cgi?id=956531 The solution was to remove all the duplicate entries at the bottom of the sshd_config file, and add extra line breaks at the end.
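To catch this kind of breakage before a reboot locks you out, you can validate the file with sshd's test mode; a quick sketch, assuming the standard sbin location:
sudo /usr/sbin/sshd -t -f /etc/ssh/sshd_config
# exits silently when the config is valid; prints the offending line on a syntax error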

Go to your AWS Management Console > select the instance > right-click and select "Get System Logs".
This will list what went wrong.
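If you have the AWS CLI configured, the same log is available from the command line; a sketch with a placeholder instance ID:
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text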

Had the same issue, but the system logs had this:
Starting sshd: /var/empty/sshd must be owned by root and not group or world-writable.
[FAILED]
I used the same steps described above to detach the volume and attach it to a connectable instance. Then I used:
sudo chmod 755 /var/empty/sshd
sudo chown root:root /var/empty/sshd
(https://support.microsoft.com/en-us/help/4092816/ssh-fails-because-var-empty-sshd-is-not-owned-by-root-and-is-not-group)
Then I detached the volume and reattached it to the original EC2 instance, and could access it via ssh again.

I got locked out of SSH in a similar way: I detached an EBS volume but forgot to update /etc/fstab.

If your Ubuntu has systemd, you can edit /lib/systemd/system/local-fs.target and comment out the last two lines:
#OnFailure=emergency.target
#OnFailureJobMode=replace-irreversibly
I haven't tested this extensively and don't know if there are any risks or side effects involved, but so far it works like a charm. It mounts the root volume and all other volumes (except those that are misconfigured, obviously), then it continues the boot process until SSH is up, so you can connect to the instance and fix the incorrect fstab entries.
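A less invasive alternative, if you'd rather not patch the systemd unit, is to mark non-root volumes in /etc/fstab so a missing device can't hang the boot in the first place; a sketch with a hypothetical UUID and mount point:
# nofail lets boot continue if the device is absent;
# x-systemd.device-timeout stops systemd waiting the default 90 seconds
UUID=0123abcd-ef01-2345-6789-abcdef012345 /mnt/data ext4 defaults,nofail,x-systemd.device-timeout=30 0 2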

In my case, the volume was out of space and a service was failing to start. I used the AWS tutorial (from Sherzod's post) to mount it on a good EC2 instance, clean it up, and remove the service from startup before remounting it and verifying that things worked.

For me it was that my IP address had changed. Navigate to the security groups and update the "My IP" entry in the inbound rules. Hope this helps someone.

I had the same issue: I was not able to connect to the AWS instance, getting a permission denied error.
I got on a screen-share call with the AWS team, and they guided me to fix the home folder permissions on the instance using the following user data script.
Steps:
Stop the instance
Actions > Instance settings > Edit user data
Enter the script below and save
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
chown root:root /home
chmod 755 /home
chmod 700 /home/ubuntu
chmod 700 /home/ubuntu/.ssh
chmod 600 /home/ubuntu/.ssh/authorized_keys
ls -ld /home /home/ubuntu /home/ubuntu/.ssh /home/ubuntu/.ssh/authorized_keys
chown ubuntu:ubuntu /home/ubuntu -R
--//
Save, start the instance, and connect to it with the correct .pem key. This resolved my problem.
(Change ubuntu to your instance's username if it differs.)

Related

VScode remote connection error: The process tried to write to a nonexistent pipe

I use VS Code with Remote-SSH to connect to my server. After configuring it, I tried to connect to my host, but it failed; the dialog box displays: "could not establish connection to XX. The process tried to write to a nonexistent pipe."
output:
[16:45:20.916] Log Level: 3
[16:45:20.936] remote-ssh#0.49.0
[16:45:20.936] win32 x64
[16:45:20.944] SSH Resolver called for "ssh-remote+aliyun", attempt 1
[16:45:20.945] SSH Resolver called for host: aliyun
[16:45:20.945] Setting up SSH remote "aliyun"
[16:45:21.012] Using commit id "c47d83b293181d9be64f27ff093689e8e7aed054" and quality "stable" for server
[16:45:21.014] Install and start server if needed
[16:45:21.019] Checking ssh with "ssh -V"
[16:45:21.144] > OpenSSH_for_Windows_7.7p1, LibreSSL 2.6.5
[16:45:21.214] Running script with connection command: ssh -T -D 5023 aliyun bash
[16:45:21.221] Terminal shell path: C:\WINDOWS\System32\cmd.exe
[16:45:21.504] >
>
>
> ]0;C:\WINDOWS\System32\cmd.exe
[16:45:21.505] Got some output, clearing connection timeout
[16:45:21.577] >
>
>
>
[16:45:21.592] > Bad owner or permissions on C:\\Users\\DY/.ssh/config
>
[16:45:21.689] > The process tried to write to a nonexistent pipe.
>
[16:45:22.091] "install" terminal command done
[16:45:22.092] Install terminal quit with output: The process tried to write to a nonexistent pipe.
[16:45:22.093] Received install output: The process tried to write to a nonexistent pipe.
[16:45:22.096] Resolver error: The process tried to write to a nonexistent pipe
[16:45:22.107] ------
Adding the absolute file path of a custom SSH config file (C:\Users\{USERNAME}\.ssh\config) to the Remote-SSH settings solved my problem.
If you format or reinstall the server OS but use the same IP as before, you may encounter a fingerprint mismatch.
You may need to delete the old fingerprint from this file:
C:\Users\xxx\.ssh\known_hosts
and the old IP from this file:
C:\Users\xxx\.ssh\config
Then try to add the host again.
What worked for me:
Delete the ssh config folders in both C:\ProgramData\ssh and C:\<user>\.ssh
In VS Code, press F1, choose Remote-SSH: Connect to Host...
Do NOT enter anything in the prompt; instead choose + Add New SSH Host...
Enter the full ssh command, including the key; on Windows you may want to enclose the path in double quotes: ssh -i "C:\path\to\key" user@host. (You need to make sure the key has limited permissions: remove all inherited permissions and give full control only to the owner.)
You will be asked to choose a folder in which a new config file will be created. Choose either of the two options.
There will be a prompt notifying you that the new config file has been created. Click Connect.
At least three things may be happening:
Option 1
The location of your config file is not the absolute location, meaning you are probably pointing at the folder that contains the config file rather than at the file itself.
If that is the case, access your User Settings in VS Code, scroll to Extensions > Remote - SSH, and enter the absolute file path of your custom SSH config file, ending in config. In Windows, it can be:
C:\Users\user\.ssh\config
Option 2
Authentication problems.
If that is the case, one of the things that may solve is generating new SSH keys.
In Windows, for that, I recommend using MobaXterm.
In MobaXterm, open a new terminal and write
ssh-keygen -b 4096 -t rsa
Then, in the config file, make sure that the IdentityFile points to the location of the key. MobaXterm's home directory is usually C:\Users\user\Documents\MobaXterm. If it makes things easier, you can copy/move the keys to C:\Users\user\.ssh and then just add, in the config file, IdentityFile ~/.ssh/KEY_rsa (where KEY_rsa is the name of the private key).
Note that if you use PuTTY to generate the keys, what goes into the server's OpenSSH authorized_keys file is not the public key file that PuTTYgen saves, but the OpenSSH-format text that appears in the box at the top of the PuTTYgen window.
Option 3
Your config file may be wrong.
The config file tends to look as follows. Double check if the fields have the information needed for the connection to be established.
Host Test # This is the name we want to give the host
User user # This is the username
Hostname blabla.com # This is the hostname
PreferredAuthentications publickey
IdentityFile ~/.ssh/KEY_rsa # This is the location of the key
IdentitiesOnly yes
Port 50 # This varies
What worked for me was to delete all of the contents of the folder C:\Users\MYNAME\.ssh. That meant deleting both the config file and known_hosts. The config was probably the most important one to delete.
The solution in my case was editing the json settings file for VSC as shown here: https://code.visualstudio.com/docs/remote/troubleshooting#_troubleshooting-hanging-or-failing-connections
In VSC go to File > Preferences > Settings and click on the upper right-hand icon (Open Settings (JSON)). Add these two lines to settings.json and retry connecting:
"remote.SSH.showLoginTerminal": true,
"remote.SSH.useLocalServer": false
In my case I had another setup:
Git Bash on Windows was configured, and I am using the ssh.exe provided by this tool
In the Remote-SSH extension settings in VS Code, I specified the full path of this ssh.exe
I am using multiple servers (with ProxyJump)
The error message was the same as the OP's, but the logs said the ssh config file was not found, with all the folder names concatenated (because it did not recognize the Windows path separator).
Problem: this ssh.exe uses a different path convention than VS Code. ssh.exe uses the "/c/Users/..." pattern while VS Code uses the "C:\Users\..." pattern.
Solution:
Make sure the SSH config is at a standard place (C:\Users\LOGIN\.ssh\config)
Remove the absolute path of the config file from the Remote-SSH settings in VS Code
VS Code will still be able to find the config at the standard path, and the ssh.exe configuration will still look at the same standard path, so the connection works.
Note:
I have the error only when connecting through multiple ssh servers (using ProxyJump). When connecting only to the first server, the solution of @pszrux and this one both work for me.
This is probably something everyone tries before looking here, but it worked for me. The server I was trying to ssh into was not responding, leading to the nonexistent pipe error. I rebooted the server and everything worked fine.
OS: Windows 10
In my case there were permission issues. Repeatedly changing inheritance in Windows did not solve the issue. What finally worked was changing the folder in which the config file is stored:
from C:/users/usr/.ssh/config to D:/config
and changing the config path in the VS Code Remote-SSH settings accordingly.
This worked for me.
This seems to be a problem with varied causes and corresponding remedies. In my case the problem had to do with the version of ssh I was using. In my Windows path there were two places where an instance of ssh.exe resided:
C:\Program Files (x86)\OpenSSH\bin
C:\WINDOWS\System32\OpenSSH\
After using both paths to set the "Remote.SSH: Path" parameter (which is in "Remote.SSH: Settings" [see here]), i.e. first C:\Program Files (x86)\OpenSSH\bin\ssh.exe and then C:\WINDOWS\System32\OpenSSH\ssh.exe, the problem still persisted.
Then I looked at this and tried the git-provided ssh.exe, which I already had on my system (otherwise, just install git, it's good stuff anyway :) )
Setting the SSH path parameter with that version did the trick for me, i.e. setting path to:
C:\Program Files\Git\usr\bin\ssh.exe
In my case, I did what dalilander said, but instead of deleting the entire .ssh folder, I just needed to delete the known_hosts file, and then it worked. That way the servers I had saved were not deleted.
The path of that folder is C:\Users\yourUsername\.ssh
For Windows:
Adding the escape character before the private key file name & using quotes around the path solved my issue.
//config file
Host 12.12.12.12
HostName 12.12.12.12
IdentityFile "C:\Users\USERNAME/\PRIVATEKEY" <----Note /\
User username
Adding the full path in "IdentityFile" did the trick:
IdentityFile C:\Users\xxx\.ssh\xxx
The solution below may be a last resort, but it perfectly solved the issue for me on a Windows 10 local machine. I simply deleted the known_hosts file under the directory C:\Users\[your-username]\.ssh, relaunched VS Code, and reconnected to the remote server through the Remote Explorer. Everything worked normally afterward.
This seems to be a general error when the ssh connection fails for one of a multitude of reasons.
Adding what my issue was and what helped, because I don't see it in the other answers here: I had re-installed the box I was connecting to, and with it, reset the host key it uses to identify itself. The ssh process tried to connect and failed with the usual "someone might be MITMing you this very moment, the identification changed" error, visible in the VS Code terminal. The solution was to go to my known_hosts file and remove the offending key.
Obviously only know that if you know for sure why the identification changed, and that it's harmless. Don't actually get MITMed.
I had this problem once.
All you need to do is:
1. Go to your .ssh folder (C:\Users\XXX\.ssh on Windows).
2. If you are on Windows, use the command "del /f known_hosts" at the command prompt to delete the known_hosts file.
3. Then open the config file, C:\Users\XXX\.ssh\config, in VS Code.
4. Delete the Host and User entries if the host that you are trying to connect to is already there.
5. Then try to connect to the new host as usual. This will work.
The problem here could be a mismatch of the fingerprints once you reinstall the OS on your host machine, so you solve it by deleting the host that was saved. After editing, the config file should no longer contain an entry for the old host.
If you're using WSL and think you should update ~/.ssh/config, that might not be the case.
Copy the content from ~/.ssh/config
Append it to the Windows file C:\Users\xxx\.ssh\config
Make sure the public/private key pair is in C:\Users\xxx\.ssh\ and is listed in the config
Reconnect
I had an existing (working) configuration and got the same error when I added a new one. What worked for me: instead of just adding the new host configuration, I also commented out the first, working config. I don't know what happened, but it worked.
In my case this was an offending key in my known_hosts in Windows (vscode on windows, remote developing via ssh on linux).
The error that comes back in VS Code does not explain this in any way.
In my case, the path to the key file was wrong.
For me (Windows), the permissions on the .pem file were the critical issue. I had only the Administrator group on the pem file and it was not working. I had to explicitly add the Admin user as well (even though admin is, of course, in the administrator group).
In my case, I had no internet connection. I was connecting to the server via VPN, but the remote configuration was incorrect and I couldn't reach the server (DNS-related issues). The connection indicator showed no errors, so I didn't think of that at first.
Oops :)
I really didn't want to delete my C:\Users\valo\.ssh\config, so I played a little with the various entries. It turned out that for me the option IdentitiesOnly yes was the problem. I also disabled security inheritance on all key files in the .ssh folder and left only myself, with full rights. Here is what my C:\Users\valo\.ssh\config looks like now:
CanonicalizeHostname yes
Host aws.r3
HostName 3.31.45.216
ForwardAgent yes
User ubuntu
IdentityFile C:\Users\valo\.ssh\u1-client-20210203-090555.pem
# IdentitiesOnly yes # VSCode Remote doesn't like this flag...?
Host github
HostName github.com
User git
IdentityFile ~/.ssh/id_val_ed25519
Host github.vm
HostName github.com
User git
IdentityFile ~/.ssh/id_valo_ed25519
Host *
ForwardAgent yes
AddKeysToAgent yes
LogLevel FATAL
ChallengeResponseAuthentication no
Now I can connect to aws.r3 with VSCode Remote.
A possible solution:
First run cat $HOME/.ssh/id_rsa.pub on your computer. That will print your public key. Save this key somewhere.
Then open the authorized_keys file by running vim $HOME/.ssh/authorized_keys on the computer that you're ssh'ing to. Copy the key onto a new line of that file, then save and close it by typing :wq.
You are all set.
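Most systems also ship an ssh-copy-id helper that does the same append in one step; a sketch, with the user and host as placeholders:
ssh-copy-id -i ~/.ssh/id_rsa.pub user@example-host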
In my case it was because the name I gave the host in the config was myuser@myhostname. So if your config file looks like this:
Host myuser@myhostname
HostName 12.64.88.234
User myuser
Port 22
Try changing it to this:
Host myuser
HostName 12.64.88.234
User myuser
Port 22
Mine was solved by adding the ".pem" extension when specifying the private key.
Here's the sample config text for reference.
Host ec2-3-234-8-176.compute-1.amazonaws.com
HostName ec2-3-234-8-176.compute-1.amazonaws.com
IdentityFile ~/.ssh/privatekey.pem
User username
There can be several reasons that have nothing to do with the accepted answer. For me:
Ubuntu 14.04.1 LTS didn't seem to work
Issues with EC2 auth; see the pem file config and the pem file permissions
And for yet another seeming cause/solution:
Adding the config path explicitly to settings only caused an EISDIR error.
Removing the listing from known_hosts made it need to confirm the fingerprint, but it didn't provide a way to do that. I could see it trying in the terminal output.
The solution a coworker recommended was to delete the vscode-server files from the server. That was the key step in my case. But...
Connecting to the server using another client, I attempted to rm -rf ~/.vscode-server, but I could not delete many of the files because of "device or resource busy" errors.
That eventually required doing the following:
fuser ~/.vscode-server/[one of the child files ...]. But you can probably skip this, because mainly I needed to know what to search for. Plus, fuser and lsof were finicky about returning results -- they often just sat waiting for something, I don't know what.
ps -e | grep node since node is the running process using vscode-server files.
For each PID in the list of node processes from step 2, run ps -o user= -p PID, substituting PID with each process's PID in turn. This prints the user who owns each process.
This is to determine which node processes you own, so you're not trying to kill anybody else's.
Starting with the lowest node PID I owned, I ran kill -9 PID. You won't need to run this for every PID, because killing a lower PID killed its children after a few seconds. So keep checking which node processes still exist after killing one: ps -e | grep node
Finally, once all of mine were killed, I could rm -rf ~/.vscode-server
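If your server has pkill, steps 2-4 can usually be collapsed into one command that only touches your own processes; a sketch (the match string assumes the server files live under ~/.vscode-server):
pkill -u "$USER" -f vscode-server
# -u limits matches to your user; -f matches 'vscode-server' anywhere in the command line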
Then, I was able to connect via ssh in VS Code again. And, since I left the fingerprint removed from known_hosts, it even asked to confirm the connection to the server (up top, in the command prompt).
Also, for reference, I left the remote-ssh: settings config file entry, mentioned in other solutions, blank.
For reference or further explanation of certain steps above (I don't intend to elaborate more than I did):
rm: cannot remove ‘.vscode-server/bin/xxxxxx/.nfs000000000xxxxxxxxxxxx’: Device or resource busy
How find out which process is using a file in Linux?
https://unix.stackexchange.com/questions/284934/return-owner-of-process-given-pid/284938
In my case it worked when I added the port to my command, e.g. ssh user@host-or-ip -p 22, with 22 being the default port number; you can check which port the ssh daemon is listening on with the sudo service ssh status command.
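On servers with the iproute2 tools installed, you can also see the bound port directly; a sketch:
sudo ss -tlnp | grep sshd   # lists the address:port sshd is listening on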
WSL Related
In my case, the issue was that my keys were set up in Ubuntu on WSL and VS Code was looking for them in Windows. I copied the keys over from the WSL side to Windows and voila! Worked like a charm!
Steps that I took:
Navigated to the /home/<username>/.ssh folder in WSL and then ran explorer.exe .
From there, I copied the id_rsa and id_rsa.pub files and pasted them on the Windows side, under C:\Users\<username>\.ssh
Then I tried connecting again from VS Code and it worked perfectly.
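The copy can also be done entirely from the WSL shell, since the Windows drive is mounted under /mnt/c; a sketch with a placeholder username:
cp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub /mnt/c/Users/<username>/.ssh/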

redis snapshot location not as specified in config

After running fine for a while, I am getting a write error on my redis instance:
(error) MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
In the log I see:
9948:C 22 Mar 20:49:32.241 # Failed opening the RDB file root (in server root dir /var/spool/cron) for saving: Read-only file system
However, my redis config file is /etc/redis/redis.conf as confirmed by:
redis-cli -p 6379 info | grep 'config_file'
config_file:/etc/redis/redis.conf
And there I have:
dir /mnt/data/redis
And indeed, there is a snapshot there.
But despite the above, redis now thinks my data directory is
redis-cli -p 6379 CONFIG GET dir
1) "dir"
2) "/var/spool/cron"
Corresponding to the error I was getting as quoted above.
Can anyone tell me why/how my data directory is changing after redis starts, such that it is no longer what is specified in the config file?
So the answer is that the redis server was hacked and its configuration changed at runtime, which, as it turns out, is very easy to do. (I should point out that I had no reason to think it wasn't easy to do. I just assumed security by obscurity was sufficient in this case. Wrong. No matter, this was just a playground, not any sort of production server.)
So don't open your redis port to the world. Use security groups if on AWS to limit access to the machines that need it, or use AUTH (which is still not great, because then all clients need to know the single password, which also apparently gets sent in the clear), or have some middleware controlling access.
Hacking redis is easy to do, can compromise your data, and can even enable unauthorized SSH access to your server, so don't leave it exposed.
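To make the advice concrete, a minimal hardening sketch for /etc/redis/redis.conf; these are standard redis directives, and the password is a placeholder:
bind 127.0.0.1                # listen only on localhost (or a private address)
protected-mode yes            # refuse remote clients when no auth is configured
requirepass replace-with-a-long-random-secret
rename-command CONFIG ""      # disable the runtime CONFIG GET/SET this attack relies on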

Cannot ssh into server except with google dev console ssh

I cannot ssh from my computer into the server hosted on Google Cloud.
I tried the normal ssh-keygen with user@domain.com and uploaded the public key, which worked last time, but this time it didn't. The issue started after I changed the password for the account. After that I could no longer ssh or sftp into the account, although I wasn't disconnected until I disconnected myself.
I then tried gcloud ssh user@instance and it ran fine and told me it just hadn't propagated yet.
I added AllowUsers user to the server's ssh config file and restarted ssh on the server, but still the same result.
Here's the error:
Permission denied (publickey).
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Update:
I've been working with Google tech support and this issue is still unresolved. The ownership and permissions of a file called authorized_keys keep getting changed on boot to another user, who I also cannot log in as.
So I change it to:
thisUser:www-data 755
but on boot it changes it to:
otherUser:otherUser 600
There are a couple of things you can do to fix this. You can take advantage of the metadata feature in GCE and add a startup script that automatically changes the permissions.
From the Developers Console, go to your Instance > Metadata and add a key/value pair:
key: startup-script
value: chmod 755 /home/your_user/.ssh/authorized_keys OR chmod 755 ~/.ssh
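If you prefer to script it, the same metadata can be set with the gcloud CLI; a sketch where the instance name and user path are placeholders:
gcloud compute instances add-metadata my-instance \
  --metadata startup-script='#! /bin/bash
chmod 755 /home/your_user/.ssh/authorized_keys'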
After rebooting, you should check the Serial Output option further down that page and see if the script ran on startup. It should show you something along these lines:
startup script found in metadata.
startupscript: Running startup script /var/run/google.startup.script
Further information can be found HERE
Hope that helps!
I solved this by deleting the existing ssh key under Custom metadata in the VM settings. I could then log in over ssh.

Amazon AWS EC2 Instance - Can't connect with SSH

This shouldn't be this hard. I cannot connect to a new AWS EC2 instance via SSH clients. I am connecting from a Win 7 box.
Instance OS: Debian 6
AMI: debian-squeeze-i386-20121119-e4554303-3a9d-412e-9604-eae67dde7b76-ami-1977f070.1(ami-a121a6c8)
User: tried root and also ec2-user
Using .pem keypair that AWS generated and I downloaded
Confirmed security group and Key Pair Name on instance
SSH port 22 is OPEN: Nmap says so and Telnet gets a welcome reply
Using 3 different clients: all clients connect ok
PuTTY replies: Server refused our key
MindTerm Java browser add-in replies: Authentication failed, permission denied
Bitvise SSH replies: Attempting 'publickey' auth; auth failed;
Rebooted instance, wash, rinse, repeat...
REBUILT new instance and new keypair, wash, rinse, repeat...
Connecting isn't the issue. Why would the instance not accept the .pem file as the password? Is there an additional step I am missing? I followed EVERY frigging guide I could Google. AWS support is a joke. stackoverflow to the rescue...
TIA.
According to the Debian wiki, which has documentation on the AMI you are using, the username you need to use to log in is 'admin'.
I have had many issues with connecting to EC2 via ssh.
ssh -i the-keypair-filename root@yourdomain.com
The keypair file must be in the directory you run the command from (or pass its full path).
I just used Terminal to connect.
Make sure you generate or assign the keypair when launching the instance.
Also you can verify the keypair you have set in the AWS Management Console, this is done by selecting the running instance and then looking for "Key Pair Name:".
I hope this is helpful.
My problem was that a volume listed in the fstab file was missing, so the server didn't finish booting and the sshd daemon wasn't running.
Check with:
telnet HOST 22
Check the server logs to make sure it starts properly before you waste lots of time like I did.
Amazon Linux AMIs that use the ec2-user login are listed at the bottom of this page:
http://aws.amazon.com/amazon-linux-ami/
Check that you are using one of those if trying to use ec2-user, or check the documentation for the AMI you are using.
Teri
Try using the "admin" username and ignore the username suggested by Amazon.
I had a similar problem and I solved it with the following approach.
1) Edited the knife.rb file in my chef folder, i.e. C:\Users\Administrator\chef-starter\chef-repo\.chef\knife.rb, as below:
knife[:aws_access_key_id] = "xxxxxxxxxxxxxxxxxxxx"
knife[:aws_secret_access_key] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
knife[:region] = 'ap-southeast-1'
knife[:aws_ssh_key_id] = "ChefUser"
knife[:ssh_user]="ec2-user"
2) In the command prompt, issued the command to create an ec2 server:
knife ec2 server create -r "role[webserver]" --image ami-abcd1234 --flavor t1.micro -G ChefClient -x root -N server01 -i H:\Chef-files\ChefUser.pem
Note that, even though I had given all the details in the knife.rb file, I had to give the .pem file path on the command line through the -i option. That solved my problem.
Check if this solution of mine helps you.
Cheers,
Chandan
Logging in as "ubuntu" worked for me:
ssh -i private_key.pem ubuntu@myubuntuserver
Hope this helps
--Erin

EC2 SSH port 22 connection refused even though previously worked

I have an EC2 instance which I've got a couple of sites hosted on, and it has previously been working really nicely.
Today I tried to SSH into it and it just refuses connections on port 22, even though the security group has port 22 open and that security group is assigned to the instance.
Anyone help me out as to why this could be the case?
I just get this line each time - ssh: connect to host 54.247.99.86 port 22: Connection refused
If your instance is EBS-backed and not instance-store-backed, first take an AMI image as a backup, then stop the instance and boot a new raw instance.
Since your old instance is EBS-backed, detach its volume and attach it to the new instance. Once attached, mount it to some directory, fix the permissions on /var/empty/sshd, and also cat /etc/fstab to see where your / partition was mounted. Then unmount the volume from the new instance and attach it back to the old instance at the exact device fstab expects, e.g. /dev/sda1 for /.
Once attached, start the old instance and check whether you are able to log in.
To fix the issue, you can follow the steps below to do so:
Launch a new instance in same region (so we can attach the EBS volume to it later).
Stop your current instance and detach the EBS volume
Attach that volume to the new instance at /dev/sdf.
SSH into the new instance and run the following command: sudo mount /dev/sdf /mnt
Run this command: cat /mnt/etc/rc.d/rc.local
If the output contains these lines, remove them from the rc.local file:
cat <<EOL >> /etc/ssh/sshd_config
UseDNS no
PermitRootLogin without-password
EOL
After you've modified the rc.local file, you can unmount the EBS volume, detach it from the temporary instance, and attach it back to your original instance at /dev/sda1. You should then be able to SSH into your instance.
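For reference, the removal can also be done non-interactively; a sketch, assuming the volume is still mounted at /mnt and the block looks exactly like the one above:
# delete everything from the "cat <<EOL" line through the closing EOL
sudo sed -i '/cat <<EOL >> \/etc\/ssh\/sshd_config/,/^EOL$/d' /mnt/etc/rc.d/rc.local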
This appears to be a temporary fix; it will reoccur if you reboot. I am also looking for a permanent fix.