Can connect to EC2 as ubuntu user but not as the user I created - ssh

I created a new EBS-backed EC2 instance and the necessary key pair. Now I am able to connect to the instance as the ubuntu user. Once I did that, I created another user and added it to the sudoers list, but I am unable to connect to the instance as the new user I created.
I get the following error. I am using the same key to connect with the new user I created. Can somebody help me? Am I missing something here?
Permission denied (publickey)

Okay, I think I figured it out.
The first technique is to log in by password. The idea is to log in as the ubuntu or root user, go to the /etc/ssh/sshd_config file, set PasswordAuthentication to yes, and run
/etc/init.d/ssh reload
If you try to connect now, EC2 allows you to log in with the password of the user that was created. This is not really secure, though.
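A minimal sketch of this first technique (assuming a Debian/Ubuntu instance; the sed pattern is just one way to flip the setting, and new-user is a placeholder name):

```shell
# Turn on password logins (insecure; only as a stopgap)
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
# Give the new user a password, then reload sshd
sudo passwd new-user
sudo /etc/init.d/ssh reload
```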
The second technique is to create a key pair, copy the id_rsa.pub file into the /home/new-user/.ssh/authorized_keys file, change its permissions to 600, and assign it to the appropriate user (new-user in this case).
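A sketch of this second technique (new-user and the key file name are example values):

```shell
# On your machine: generate a key pair
ssh-keygen -t rsa -f ./new-user-key
# On the instance (as ubuntu, via sudo): install the public key for new-user
sudo mkdir -p /home/new-user/.ssh
sudo cp new-user-key.pub /home/new-user/.ssh/authorized_keys
sudo chmod 700 /home/new-user/.ssh
sudo chmod 600 /home/new-user/.ssh/authorized_keys
sudo chown -R new-user:new-user /home/new-user/.ssh
```

After that, `ssh -i ./new-user-key new-user@instance-ip` should work without a password.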
I found this to be amazingly useful
http://blog.taggesell.de/index.php?/archives/73-Managing-Amazon-EC2-SSH-login-and-protecting-your-instances.html

SSH is very picky about the directory and file permissions. Make sure that:
The directory /home/username/.ssh has permission "700" and is owned by the user (not root!)
The /home/username/.ssh/authorized_keys file has permission "600" and is owned by the user
Copy your public key into the authorized_keys file.
sudo chown -R username:username /home/username/.ssh
sudo chmod 0700 /home/username/.ssh
sudo chmod 0600 /home/username/.ssh/authorized_keys
You can do all that as the root user. There is no need to temporarily allow passwords in /etc/ssh/sshd_config.

Sid, I did what you mentioned but I still got the same error
The first technique is to login by password. ...
It took a reboot to get it to work. (Just in case anyone else wants this insecure method to work.) The public-key method is a major pain to get working with remote desktop apps.

Related

Can't access VM Instance through SSH

I was connected to the VM instance through SSH and by mistake I ran the following command:
"chmod -R 755 /usr"
And then I started getting the following message:
"/usr/bin/sudo must be owned by uid 0 and have the setuid bit set"
I have read different solutions for it:
Setting a startup-script to change root password and connect through
gcloud beta compute ssh servername
However, I can't stop my instance because I have a local SSD assigned to it, so I don't think the startup-script will work and connecting through ssh asks me for a password:
user@compute.3353656325014536575's password:
But I have never set a password for the user I am using.
Is there any solution so I can connect again to the server and fix the mistake?
Edit:
I have a user which I created manually for FTP; however, this one doesn't have sudo permissions. Is there a way to know the sudo password?
Thanks in advance.
From the issue at hand: the command chmod -R 755 /usr gives everybody read/execute permission and strips the setuid bit from /usr/bin/sudo.
Try this first before reading the other steps further down.
SSH into your instance. To change the password, just type
sudo passwd
then type the new password and confirm it.
If that doesn't work, follow the steps below.
"/usr/bin/sudo must be owned by uid 0 and have the setuid bit set"
This means the sudo binary's root permissions have been overwritten, which restricts the use of sudo and leads to problems like losing all root access. The following steps should help resolve the issue:
Create a backup or snapshot of your instance.
Create a totally new instance, detach the local SSD from the old instance, and attach it back to the newly created instance.
Log in to the new instance, create a new folder in the root, and start operating as root.
Check the attached drive in the new instance: mount and fdisk -l | grep Disk.
Create a new folder in the root directory:
mkdir /newfolder
Now mount the volume: sudo mount /dev/xvdf1 /newfolder/
After mounting, if you check the permissions you will see that /newfolder's permissions have changed, because of the affected volume.
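From there, the remaining step (an assumed sketch, since the fix itself isn't spelled out above) is to restore root ownership and the setuid bit on the mounted volume's sudo binary, then unmount and move the disk back:

```shell
# On the rescue instance, with the broken volume mounted at /newfolder
sudo chown root:root /newfolder/usr/bin/sudo
sudo chmod 4755 /newfolder/usr/bin/sudo   # uid 0 owner + setuid bit
sudo umount /newfolder
# Then detach the disk and attach it back to the original instance
```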

Unable to connect using SSH to the pushed MobileFirst container image on Bluemix

I have built an MF container image and pushed it. I have copied the file in (Mac) ~/.ssh/id_rsa.pub to mfpf-server/usr/ssh before building the image.
I am trying to connect using the command in Mac terminal:
ssh -i ~/.ssh/id_rsa admin@public_ip
It says:
Permission denied (publickey).
Any idea? What is the user I shall use?
Your problem is very probably related to the permissions of the public key copied onto the container, or to the configuration of your key.
Check the permissions of the key copied onto the container: sshd is really strict about permissions on the authorized_keys file. If authorized_keys is writable by anybody other than the user, or can be made writable by anybody other than the user, sshd will refuse to authenticate (unless sshd is configured with StrictModes no).
Moreover, such a problem won't show up with ssh -v; it appears only in the daemon logs (on the container).
From man sshd(8):
~/.ssh/authorized_keys
Lists the public keys (RSA/DSA) that can be used for logging in
as this user. The format of this file is described above. The
content of the file is not highly sensitive, but the recommended
permissions are read/write for the user, and not accessible by
others.
If this file, the ~/.ssh directory, or the user's home directory
are writable by other users, then the file could be modified or
replaced by unauthorized users. In this case, sshd will not
allow it to be used unless the StrictModes option has been set to
“no”.
So I suggest you check the file and directory permissions. Then check that the content of your public key has been copied correctly into authorized_keys by listing
/root/.ssh/authorized_keys
To access the container with the ssh key you need to use the "root" user.
ssh -i ~/.ssh/id_rsa root@<ip address>
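Putting the checks above together, a quick sketch to run inside the container (assuming the key is meant for root, as the answer notes):

```shell
# Tighten permissions so sshd's StrictModes check passes
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
chown -R root:root /root/.ssh
# Verify the public key actually landed in the file
cat /root/.ssh/authorized_keys
```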

Anyway to get more info on how Cloud9 connects via ssh

Trying to connect Cloud9 to my Digital Ocean droplet and I'm getting:
Cloud9 couldn't connect to SSH server
I've added the ssh public key into my .ssh/authorized_keys file and I know I can connect via ssh. Is there any way to get more info than just that it can't connect?
David
You need to add the public key from your profile to .ssh/authorized_keys (not hosts) and make sure that the .ssh/authorized_keys file belongs to you and can be read and written only by the owner (it should say -rw------- when doing ls -la ~/.ssh)
To get a bit more info, you can try SSHing into your server from one of your other Cloud9 workspaces. Since your Cloud9 SSH key is the same across all workspaces, you'll be able to check if your key has been properly added to the server this way.
I wasn't able to figure out how to get more info on this but I was able to figure out that it was permissions on the .ssh/authorized_keys file / directory. Thanks again for all the help
David
Just finished chatting with cloud9 support and got this working. It's important to note that there are THREE items that require specific permissions:
your user home folder (~) should be drwxr-xr-x
your ~/.ssh folder should be drwx------
your ~/.ssh/authorized_keys file should be -rw-------
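Those three items translate into commands like these (a sketch; run as the login user on the server):

```shell
chmod 755 ~                        # home folder: drwxr-xr-x
chmod 700 ~/.ssh                   # .ssh folder: drwx------
chmod 600 ~/.ssh/authorized_keys   # key file:    -rw-------
ls -la ~/.ssh                      # verify the modes
```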
Here's what it was in my case:
check your instance security groups and ACL
check your sshd_config, ssh, and PAM auth (the default is fine, but in my case I had mistakenly set PubkeyAuthentication to no); I tried to mimic the ssh connection with ssh -vvv user@ip_address from another server (after putting the public key in the ~/.ssh/authorized_keys file; make sure it's -rw-------)
check for the files and folders permission, especially in the ~/.ssh folder
check incoming connection errors in /var/log/auth.log
check that Node.js is installed
set the Node.js path (use the which node command)

Store a private key outside of ~/.ssh

I have to deal with a rather annoying situation. I must transfer a file via a shell script, using scp, from one server to another. The problem is that I do not have root access on either of them. I'm not allowed to install any packages like sshpass, ssh2, expect, etc. I don't even have write permission in the home directory of the user I have to use on the second server.
Since I can't use sshpass etc. to let my script enter the login credentials, I thought about using an SSH key pair for authentication. Actually, that was my first thought, but since the user on the second server doesn't have write permission in its home directory, only in a subdirectory, ssh-keygen fails as it can't put the keys in ~/.ssh.
Both are Debian servers btw.
Is there any way to generate a ssh keypair and use it outside of ~/.ssh?
Any help is greatly appreciated.
On the clientside yes. However, on serverside, unless configured differently, sshd will expect your credentials in that directory.
If you can scp from the server where you can't access .ssh to the one where you can, you can use -i option to specify the keyfile location.
Do you have an alternative transport mechanism? Can you put the file in your public_html and wget it on the other side?
You can have the key pairs anywhere. What is key is that the permissions are set correctly on the key pair. The ownership needs to be set to the user (chown user:user keyfile) and the permissions must be chmod 400 keyfile.
Once you have your key moved and permissions set all that's left is to tell scp which key to use. You can do this by using the -i flag.
i.e.: scp -i keyfile /source/file user@host:/target/location/
Edit:
As Amadan alluded to in his answer - this assumes the server you're connecting to already has the key as an authorized key on the user. If not it would require an /etc/ssh/sshd_config change that only someone with the right access can do. It might be worth trying a cat /etc/ssh/sshd_config on the server if your user has access to it at all right now. If you have read access you'll be able to discern the expected authorized_keys location. It's possible the server admin has already customized the expected key location to something you have write access to.
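A sketch of the whole workaround, assuming /tmp/mykeys is a directory you can write to (any writable path works; ssh-keygen's -f flag sets the output location):

```shell
# Generate the key pair outside ~/.ssh
mkdir -p /tmp/mykeys
ssh-keygen -t rsa -N "" -f /tmp/mykeys/transfer_key
chmod 400 /tmp/mykeys/transfer_key
# Point scp at the private key explicitly with -i
scp -i /tmp/mykeys/transfer_key /source/file user@host:/target/location/
```

The public half (transfer_key.pub) still has to end up in whatever authorized_keys location the target server's sshd is configured to read.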

Cannot ssh into server except with google dev console ssh

I cannot ssh from my computer into the server hosted on Google Cloud.
I tried the normal ssh-keygen with user@domain.com and uploading the public key, which worked last time, but this time it didn't. The issue started after I changed the password for the account. After that I could no longer ssh or sftp into the account, although my existing session wasn't dropped until I disconnected.
I then tried gcloud ssh user@instance and it ran fine and told me it just hasn't propagated yet.
I added AllowUsers user to the server's ssh config file and I restarted ssh on the server, but still the same result
Here's the error:
Permission denied (publickey).
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Update:
I've been working with Google tech support and this issue is still unresolved. The permissions on a file called authorized_keys keep getting changed on boot to another user, whom I also cannot log in as.
So I change it to:
thisUser:www-data 755
but on boot it changes it to:
otherUser:otherUser 600
There are a couple of things you can do in order to fix this. You can take advantage of the metadata feature in GCE and add a startup script that will automatically change the permissions.
From the developers console, go to your Instance > Metadata and add a pair of Key/value
key : startup-script
value: chmod 755 /home/your_user/.ssh/authorized_keys OR chmod 755 ~/.ssh
After rebooting, you should check the Serial Output option further down that page and see if it ran on startup. It should show you something along these lines:
startup script found in metadata.
startupscript: Running startup script /var/run/google.startup.script
Further information can be found HERE
Hope that helps!
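The same metadata pair can also be set from the command line (a sketch using the gcloud CLI; INSTANCE_NAME and your_user are placeholders to replace with your own values):

```shell
gcloud compute instances add-metadata INSTANCE_NAME \
  --metadata startup-script='#! /bin/bash
chmod 755 /home/your_user/.ssh/authorized_keys'
```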
I solved this by deleting the existing SSH key under Custom metadata in the VM settings. I could then log in over ssh.