EC2 (Remote Desktop) RDP refuses Administrator credentials - authentication

My EC2 (Windows Server) instance was accessible through RDP by logging in as Administrator with the default password retrieved from the EC2 dashboard (AWS Management Console).
I delegated my domain name to Route 53, created an Elastic IP address for my instance, and modified the DNS CNAME record to point to the EIP.
HTTP access to my app works fine using the domain name.
However, RDP broke, since my existing RDP shortcut pointed to the old public DNS name assigned to my EC2 instance.
I tried to recreate the RDP link and all the following attempts failed:
Using an RDP file downloaded from the EC2 dashboard in the AWS Management Console.
Entering the Elastic IP's public DNS name into the 'Computer' field on the RDP client's General tab (click Options at the bottom left of the dialog).
Entering the Elastic IP's external IP address (i.e. the XX.YY.ZZ.VV taken from ec2-XX-YY-ZZ-VV.compute-1.amazonaws.com) into the 'Computer' field.
Entering the private IP address taken from the EC2 console into the 'Computer' field.
In all cases listed above I used the existing password, and I double-checked by decrypting the Administrator's password again (from the EC2 console, using the original key file).
And in all cases I keep getting the invalid-credentials error from the RDP connection.
For all practical purposes, I am locked out of my running instance.
HELP
Thanks...
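One thing worth ruling out before fighting with credentials is whether the RDP client is even pointed at the right instance after the EIP change. A rough sketch using the AWS CLI, assuming it is configured; the instance ID is a placeholder:
# Confirm the address the instance currently reports
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query "Reservations[].Instances[].[PublicIpAddress,PublicDnsName]"
# Confirm which instance each Elastic IP is actually associated with
aws ec2 describe-addresses --query "Addresses[].[PublicIp,InstanceId]"
If the address matches what you are typing into the 'Computer' field, the problem is on the credentials side, as the answers below suggest.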

Firstly, the first thing you should do after creating an EC2 instance is change the Administrator password. It's easy to lock yourself out of an EC2 instance permanently by neglecting to change this; for example, if you create an EBS image and restore it, you'll no longer be able to decrypt the Windows password.
Are you still able to retrieve the Windows password using the Management Console? If not, the password is irretrievable. Have you tried rebooting the instance? Double-check your Route 53 settings to make sure you are pointing at the correct instance and not trying to log into someone else's.
If all else fails, I'd suggest rebuilding the instance and immediately changing the password before changing any other settings.
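As a side note, the same password retrieval the console performs can be scripted with the AWS CLI if you still have the original launch key; this is only a sketch, and the instance ID and key path are placeholders:
# An empty PasswordData result means the launch-key password can no longer be retrieved
aws ec2 get-password-data --instance-id i-0123456789abcdef0 --priv-launch-key /path/to/original-key.pem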

I had the same problem with a newly created EC2 machine: after downloading the RDP file and decrypting the password, RDP didn't work. I solved it by typing the public IP address into the RDP client's input box and providing the same password and the user Administrator. Looks like another bug in EC2...

I had the same problem. The reason is that AWS does not support saved credentials. Now I know you will say, "but I'm typing in / copy-pasting my existing password that I have checked and re-checked against my AWS console" - yes, but the silly thing treats 'Administrator' as a saved credential! :D
So all you have to do is click 'Use another account' in the RDP login dialog, type 'Administrator' and your existing password, and voila! You'll get the certificate-trust dialog; click Continue and you're in.
I was frustrated too, hope this helps. Cheers.

Related

gcloud created a new account when submitting a new SSH key-pair and now I cannot access the original one

I just started with the Google Cloud Platform and created my first VM instance (Debian).
It all worked in a pretty straightforward way: I hit the SSH button next to my instance and it opened up a command-line interface in a new browser window. My username was the handle (pre-@) part of my gmail address.
However, I wanted to use Terminal on my MacBook as a CLI for accessing it. Looking at the guides, this seemed to be a long, convoluted process. I followed that process (detailed below), but now I can only access some new account on the VM; the username is my full gmail address this time (but with underscores replacing the non-alphabetic characters, so like the original but with "_gmail_com" tacked on to the end). I can no longer access the original account, which seems to be the proper account with admin privileges. Note that I can sudo into the root account and open up the directories and files owned by the original account, but this seems very dumb.
I've tried posting in the forum for this stuff, Google's group for Google Compute, "gce-discussion", but my posts are held for approval for some reason. It's as though Google are just hoping I cave and pay for technical support.
My aim is to have a Python session running a Discord bot that keeps going after I log off. It'd also be good to be able to serve up files (images) via HTTP.
Thank you for any help you can be!
The steps I followed in the convoluted process given in the guide are as follows:
I created an SSH keypair (private and public)
I downloaded and installed the Google SDK to get the gcloud CLI application
I issued the gcloud command to set the public key up on my instance
It had me log in at a Google page (OAuth-like thing)
I started an SSH session on Terminal, invoking the file containing my private key, trying with different permutations of options
I finally got it to connect and log in using my-handle_gmail_com (i.e. the second username on my instance)
When I tried to access SSH from within the Google Cloud Platform page, the browser-based CLI automatically logged me into this same second account, "my-handle_gmail_com". So now I have no access to the original.
Thanks!
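For what it's worth, the extra account usually just reflects the username attached to the key in the instance metadata: the text before the colon in each ssh-keys entry becomes the Linux login. A minimal sketch of re-adding a key under the original short username, assuming metadata-based keys rather than OS Login; the instance name, key path, and "myhandle" are placeholders:
# Generate a key pair locally
ssh-keygen -t rsa -f ~/.ssh/gce-key -C myhandle
# Attach the public key under the desired username; note this overwrites any
# existing instance-level ssh-keys value, so merge in old entries if needed
gcloud compute instances add-metadata my-instance --metadata ssh-keys="myhandle:$(cat ~/.ssh/gce-key.pub)"
# Connect as that user explicitly instead of letting the client pick a default
ssh -i ~/.ssh/gce-key myhandle@EXTERNAL_IP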

How to disable Google compute engine from resetting SFTP folder permissions when using SSH-Key

Currently running a Google compute engine instance and using SFTP on the server.
Followed details to lock a user to the SFTP path using steps listed here: https://bensmann.no/restrict-sftp-users-to-home-folder/
To lock the user to a directory, the home directory of that user needs to be owned by root. Initially the setup worked correctly, but I found that Google Compute Engine sporadically "auto-resets" the ownership and permissions back to the user.
I am using an SSH key that is set in the Google Cloud Console, and that key is associated with the username. My guess is that Google Compute Engine is using this metadata and reconfiguring the folder permissions to match the user associated with the SSH key.
Is there any way to disable this "auto-reset"? Or rather, is there a better way to host SFTP and lock a single user to an SFTP path without having to change the home folder ownership to root?
Set your sshd rule to apply to the google-sudoers group.
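One way to read that suggestion is to scope the SFTP restriction with a Match Group block in sshd_config rather than per-user rules, so the restriction follows group membership instead of a particular home directory. A rough sketch; the chroot path is illustrative, and sshd still requires the chroot directory itself to be root-owned and not group-writable:
Match Group google-sudoers
    ChrootDirectory /sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no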
The tool that manages user accounts is the accounts daemon. You can turn it off temporarily, but it's not recommended. The tool syncs the instance metadata's SSH keys with the Linux accounts on the VM; if you stop it, account changes won't be picked up and SSH from the Cloud Console will probably stop working.
sudo systemctl stop google-accounts-daemon.service
That said, it may be what you want if you ultimately intend to block SSH access to the VM.
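If you do go down that route, a small sketch of making the stop persist across reboots, and of undoing it later:
# Keep the daemon from starting again at boot
sudo systemctl disable google-accounts-daemon.service
# Restore normal key/account syncing afterwards
sudo systemctl enable --now google-accounts-daemon.service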

Google Compute Engine: Adding SSH but can't connect like in their video

OK I need help.
I have created a simple instance with Google Compute Engine, and I have added SSH keys through their metadata section. But every time I try to log in with PuTTY (I can do it with their console) I get Permission denied (publickey).
Even when I log in with their browser console, I can see all the users I created with the web UI, and the public keys in authorized_keys.
However, I can't SSH in even though my private keys are in my .ssh directory.
I did all the checks and SSH is enabled by default.
http://screencast.com/t/zI9vDr2s
The thing was that I was accessing the site via IP, since for me the site's DNS hadn't propagated yet. Nevertheless, I tried using the domain name instead of the IP and it worked. Weird, because I still can't access the site via the domain ... and my neighbor can.
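More generally for this error, the username PuTTY sends has to match the username prefix in the ssh-keys metadata entry (the part before the colon), and the loaded .ppk has to correspond to that entry's key. A sketch for inspecting what is actually registered; the instance name is a placeholder:
# Instance-level keys
gcloud compute instances describe my-instance --format="yaml(metadata)"
# Project-wide keys
gcloud compute project-info describe --format="yaml(commonInstanceMetadata)"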

My website cPanel login issue

My cPanel shows an invalid-login error in every browser, but only on my laptop; on another computer the same username and password work fine.
I cleared my browser's cache and cookies, and it's still not working.
Try typing the username and password instead of copy-pasting them; most of the time the mistake is something like that.
Follow the steps below.
Step 1. Open a command prompt and ping your domain name:
ping domainname
Check whether it responds properly; if not, contact your hosting provider and ask whether your local ISP IP is blocked in their firewall (see the sketch after these steps).
Step 2. I suggest you reinstall the browser and try to access it again.
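If the provider's firewall is the suspect, they will usually ask for your public IP address. A small sketch for checking both things from a terminal; ifconfig.me is just one of several such services:
ping -c 4 yourdomain.com      # use "ping -n 4" from a Windows command prompt
curl -s https://ifconfig.me   # prints the public IP to report to your host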

SSH to Amazon EC2 instance using PuTTY in Windows

I am a newbie to Amazon Web Services and was trying to launch an EC2 instance and SSH to it using PuTTY from Windows. These are the steps I followed:
Created a key pair.
Added a security group rule for SSH and HTTP.
Launched an EC2 instance using the above key pair and security group.
Using PuTTYgen, converted the *.pem file to *.ppk.
Using PuTTY, tried connecting to the public DNS of the instance and provided the *.ppk file.
I tried logging in as 'root' and as 'ec2-user', and created the .ppk file using both SSH-1 and SSH-2; for all these attempts I get the following error in PuTTY:
"Server refused our key"
Can you guys please help? Any suggestions would be greatly appreciated.
I assume that the OP figured this out or otherwise moved on, but the answer is to use ubuntu as the user (if the server is ubuntu).
1) Make sure you have port 22 (SSH) open in the security group attached to the EC2 instance (see the CLI sketch after this answer).
2) Try connecting with the Elastic IP instead of the public DNS name.
I hope you have followed these steps: Connecting EC2 from a Windows Machine Using PuTTY.
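A rough AWS CLI sketch for point 1; the security group ID and the /32 address are placeholders for your own values:
# See which inbound rules the instance's security group currently has
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 --query "SecurityGroups[].IpPermissions"
# Open SSH from your own address only, if port 22 is not there yet
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32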
Another situation where I got the "Server refused our key" error when using PuTTY from Windows to SSH to an EC2 instance running Ubuntu:
The private key was wrongly converted from .pem to .ppk.
PuTTYgen has two options for "converting" keys.
Load your .pem file into PuTTYgen using the File -> Load Private Key option and then save it as a .ppk file using the Save Private Key button.
DO NOT use the menu option Conversions->Import Key to load the .pem file generated by EC2.
See the puttygen screenshots below, with the two menu options marked.
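If you would rather script the conversion, the command-line puttygen (from the putty-tools package on Linux, for instance) is supposed to do the equivalent of Load plus Save Private Key in one step; the file names here are placeholders:
puttygen my-ec2-key.pem -O private -o my-ec2-key.ppk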
Check the username; it should be "ubuntu" for your machine.
Check that traffic is allowed on port 22 in the security group.
Check that you are using the correct address, i.e. ubuntu@<public-or-Elastic-IP>.
One more thing may be worth checking: go to the AWS console, right-click on the instance and choose "Connect...". It will show you the DNS name that you should use. If you restarted that instance at some point, that DNS name could have changed.
I had a similar problem when I tried to connect to an instance created automatically by the Elastic Beanstalk service. But once I linked my existing key name to the Beanstalk environment (under Environment Details -> Edit Configuration -> Server tab -> Existing Key Pair), I was able to log in with 'ec2-user' and my existing key file (converted to .ppk) with PuTTY.
This, however, terminates the running instance and rebuilds a new one that is accessible through the key pair named above.
Just in case it helps anyone else, I encountered this error after changing the permissions on the home folder within my instance. I was testing something and had executed chmod -R 777 on my home folder. As soon as I logged out after that, I was effectively locked out, since sshd refuses key authentication when the home directory or .ssh permissions are too open.
You won't face this error if you SSH to AWS directly using the ".pem" file instead of the converted ".ppk" file.
1) Use Git Bash instead of PuTTY, since you can run the usual Linux commands in Git Bash. Installing Git gives you the Git Bash terminal.
2) Right-click in the folder where you have the ".pem" file and select "Git Bash Here".
3) Your key must not be publicly viewable for SSH to work, so run "chmod 400 pemfile.pem".
4) Connect to your instance using its public DNS: ssh -i "pemfile.pem" ec2-user@ec2-x-x-x-x.us-west-1.compute.amazonaws.com
5) Make sure to whitelist your network IP for SSH in your instance -> security group -> inbound rules.
I assume you're following this guide, and connecting using the instructions on the subsequent page. Verify a couple of things:
You converted the key correctly, e.g. selected the right .pem file, saved as private key, 1024-bit SSH-2 RSA
The Auth settings (step 4 in the connection tutorial) are correct
I was having the same trouble (and took the same steps) until I changed the user name to 'admin' for the Debian AMI I was using.
You should look up the user name of the AMI you are using. The Debian AMI is documented here:
http://wiki.debian.org/Cloud/AmazonEC2Image/Squeeze
I have had this same problem. The AMI you are using is the one that is also used by the "CloudFormation" templating solution.
In the end I gave up on that and created a Red Hat instance. I was then able to connect by SSH just fine using the user root.
The instructions here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html work fine with a Red Hat instance but not with an Amazon Linux instance. I assume the latter has some username that I didn't think to try (root, ec2-user, and many other obvious ones were all refused).
Hope that helps someone!
I use a Debian AMI and tried ec2-user and root, but the correct login is 'admin'.
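If you are unsure which default username your AMI expects, a small probing loop like this can save some guessing; the key path and host name are placeholders:
# Prints each username the key is accepted for; BatchMode avoids interactive prompts
for u in ec2-user ubuntu admin centos; do
  ssh -i mykey.pem -o BatchMode=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=accept-new "$u@ec2-x-x-x-x.compute-1.amazonaws.com" true && echo "login works: $u"
done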
I was getting the same error when I tried to create a new key pair and use that new pem/ppk file. I noticed that the Key Pair Name field on the instance still showed the old one, and in poking around I learned that apparently you can't change the key pair assigned to an instance. So I went back to the original key pair. Fortunately, I hadn't deleted anything, so this was easy enough.
Try an alternative SSH client, like Poderosa. It accepts pem files, so you will not need to convert the key file.
If you already have a key pair, follow these steps:
Convert the *.pem to *.ppk using PuTTYgen (load the .pem key, then save it as .ppk).
Add the .ppk key file under PuTTY's Connection > SSH > Auth options.
In the "Host Name (or IP address)" field, enter: ubuntu@<your-ubuntu-ec2-host-IP>