I was connected to the VM instance through SSH and by mistake I ran the following command:
"chmod -R 755 /usr"
And then I started getting the following message:
"/usr/bin/sudo must be owned by uid 0 and have the setuid bit set"
I have read different solutions for it:
Setting a startup-script to change root password and connect through
gcloud beta compute ssh servername
However, I can't stop my instance because I have a local SSD assigned to it, so I don't think the startup-script will work and connecting through ssh asks me for a password:
user@compute.3353656325014536575's password:
But I have never set a password for the user I am using.
Is there any solution so I can connect again to the server and fix the mistake?
Edit:
I have a user which I created manually for an FTP, however this one doesn't have sudo permissions, is there a way to know the sudo password?
Thanks in advance.
From the issue at hand: the command chmod -R 755 /usr makes everything under /usr world-readable and executable and, more importantly, strips the setuid bit from binaries such as sudo.
Try this first before moving on to the other steps.
SSH into your instance. To change the root password, just type:
sudo passwd
then type and confirm the new password.
If that doesn't work, follow the steps below.
"/usr/bin/sudo must be owned by uid 0 and have the setuid bit set"
This means the permissions on the sudo binary have been overwritten: it is no longer setuid root, so you can no longer use sudo and have effectively lost root access. The following steps should help resolve the issue:
Create a backup or snapshot of your instance.
Create a totally new instance, detach the local SSD from the old instance, and attach it to the newly created one.
Log in to the new instance, create a new folder under the root directory, and carry out the following operations as root.
Check the attached drive in the new instance: run mount and fdisk -l | grep Disk.
Create a new folder in the root directory:
mkdir /newfolder
Now mount the volume:
sudo mount /dev/xvdf1 /newfolder/
After mounting, if you check the permissions you will see that /newfolder's permissions have changed because of the affected volume. You can then repair the permissions on the mounted filesystem, as sketched below.
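A minimal sketch of that repair, assuming the volume is mounted at /newfolder (verify the correct modes against a healthy machine before applying; 4755 is the usual mode for sudo):
chown root:root /newfolder/usr/bin/sudo
chmod 4755 /newfolder/usr/bin/sudo
# other setuid binaries under /usr (passwd, su on merged-/usr systems, etc.) may need
# the same treatment; compare with "find /usr -perm -4000" on a healthy machine
Afterwards, unmount the volume, move the disk back to the original instance, and check that sudo works again.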
I'm using Deployer for deploying my code to multiple servers. Today I got this error after starting a deployment:
[Deployer\Exception\RuntimeException (-1)]
The command "if hash command 2>/dev/null; then echo 'true'; fi" failed.
Exit Code: -1 (Unknown error)
Host Name: staging
================
Warning: Identity file /home/user/.ssh/id_rsa not accessible: No such file or directory.
Permission denied (publickey).
At first I thought it probably had something to do with this server's configuration, since I had moved the complete installation to another hosting provider. I then triggered a deployment to a server I had deployed to just fine in the past few days, but got the same error. This quickly turned my suspicions from the server to my local setup.
Since I'm running PHP in Docker (Deployer is written in PHP), I thought it might have something to do with my ssh-agent not being forwarded correctly from my host OS into Docker. I tested this by using a fresh PHP installation directly on my OS (Ubuntu, if that helps). The same warning kept popping up in the logs.
When logging in using the ssh command directly, everything seems to be fine. I still have no clue what's going on here. Any ideas?
PS: I also created an issue at Deployer's GIT repo: https://github.com/deployphp/deployer/issues/1507
I have no experience with the library you are talking about, but the issue starts here:
Warning: Identity file /home/user/.ssh/id_rsa not accessible: No such file or directory.
So let's focus on that. Potential things I can think of:
Is the username really user? The warning says the file lives at /home/user. Verify that that really is the correct path; for instance, just ls the file. If it doesn't exist, you will get an error:
$ ls /home/user/.ssh/id_rsa
That will throw a No such file or directory if it doesn't exist.
If 1. is not the issue, then most likely the permissions on the key are wrong for the user inside the Docker container. If that's the case, then INSIDE the Docker container, change the permissions on id_rsa before you use it:
$ chmod 600 /home/user/.ssh/id_rsa
Now do stuff with the key...
A lot of SSH agents won't work unless the key is only read-write accessible by the user who is trying to run the ssh agent. In this case, that is the user inside of the Docker container.
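A small sketch of how to check both points from the host (the container name my-php-container and the path are assumptions; adjust to your setup):
docker exec my-php-container ls -l /home/user/.ssh/id_rsa    # does the file exist at that path?
docker exec my-php-container chmod 600 /home/user/.ssh/id_rsa
If the first command reports "No such file or directory", point Deployer at the right identity file or mount the key into the container.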
I'm distributing load between two web servers, which means all of the Apache settings and vhosts are pretty much identical, and I wanted to make sure they stay that way by using LSync (or if there's another solution that helps with the problem I'm having, let me know)
So obviously Apache runs as the apache user, and we can't enable root SSH logins, so I created an lsync user that can SSH between the two servers using RSA keys.
And now I'm running into some permissions errors, which is kind of what I expected to happen. What I'm trying now: I added the lsync user to the apache group, and the apache user to the lsync group... and that seems to work OK, as long as the files are chmod'd to rwx (7) for both the user and the group...
I thought about setting a cron job to chown apache.apache every so often, and maybe even chmod +rwx for the group and user, but I'm sure that would cause some other issues.
I thought about having lsync run as the apache user, but it looks like the apache home directory needs to be owned by root:root, so that would cause issues with the apache user trying to ssh in and read from the .ssh directory.
I couldn't find much about this when I looked on Google... Most people just used the root user for lsync, which is out of the question.
So if anyone has a fix, that would be great! thanks
P.S. I know that I can allow the lsync user to execute specific commands via sudo, if I properly configure the sudoers configuration... is there a way to have it sudo chown apache.apache /var/www && sudo chmod -R u+rwx /var/www or something?
rsync has an option for forcing the permissions of the files it creates on the destination: --chmod=<blah>. lsyncd does not have direct support for this, but can pass-through rsync flags.
Try adding this to your lsyncd configuration:
_extra = {"--chmod=Dug+rwx,Fug+rw"}
That should ensure that directories, D, have read/write/execute permissions for owner and group, and files, F, have read/write permissions for owner and group. Any other permissions should be set as they are on the source server.
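For reference, this is roughly the rsync invocation lsyncd ends up making with that flag (the paths and hostname here are placeholders, not taken from your setup):
rsync -a --chmod=Dug+rwx,Fug+rw /var/www/ lsync@web2:/var/www/
The --chmod values are applied on top of the source permissions, so the group on the destination always ends up with read/write access.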
If you need the files to be owned by the apache user then you could set up a chown cron job, as you suggest, but you might find that a constantly running script that reads the output from inotifywait will be more responsive (and mostly idle).
You might consider having the apache user run an rsync daemon. It's little used since tunnelling rsync through ssh is more convenient and more secure, but it might help you side-step this problem.
You need to set up a configuration file, and then simply launch it with rsync --daemon using whatever init system your distro has.
You can then configure your lsyncd with target = "rsync://server/path".
If the connection between the servers is local and the network is trusted then you're done, otherwise you should configure the rsync daemon to listen only on 127.0.0.1, and then use an ssh -L port mapping to route the traffic through an encrypted tunnel (the owner of the tunnel is not important).
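A minimal sketch of such a setup (the module name, path, and uid/gid are assumptions):
# /etc/rsyncd.conf on the destination server
address = 127.0.0.1
[www]
    path = /var/www
    read only = false
    uid = apache
    gid = apache
Start it with rsync --daemon, then point lsyncd at target = "rsync://127.0.0.1/www", tunnelled through something like ssh -L 873:127.0.0.1:873 if the network between the servers isn't trusted.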
I created a new EBS-backed EC2 instance and the necessary key pair. I am now able to connect to the instance as the ubuntu user. Once I did that, I created another user and added it to the sudoers list, but I am unable to connect to the instance as the new user I created.
I get the following error. I am using the same key to connect as the new user I created. Can somebody help me? Am I missing something here?
Permission denied (publickey)
Okay, I think I figured it out.
The first technique is to log in with a password. The idea is to log in as the ubuntu or root user, edit the /etc/ssh/sshd_config file, set PasswordAuthentication to yes, and run:
/etc/init.d/ssh reload
If you try to connect now, EC2 allows you to log in with the password of the user that was created, though this is not really secure.
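A quick sketch of those steps, run as root on the instance ("new-user" is a placeholder, and remember to turn password authentication back off afterwards):
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
# if the line is commented out in your sshd_config, edit the file by hand instead
/etc/init.d/ssh reload
passwd new-user    # set a password for the account you created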
The second is to create a key pair, append the id_rsa.pub file's contents to /home/new-user/.ssh/authorized_keys, change its permissions to 600, and assign ownership to the appropriate user (new-user in this case), as sketched below.
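A minimal sketch of that (run as root; new-user and the key filename are placeholders):
mkdir -p /home/new-user/.ssh
cat id_rsa.pub >> /home/new-user/.ssh/authorized_keys
chown -R new-user:new-user /home/new-user/.ssh
chmod 700 /home/new-user/.ssh
chmod 600 /home/new-user/.ssh/authorized_keys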
I found this to be amazingly useful
http://blog.taggesell.de/index.php?/archives/73-Managing-Amazon-EC2-SSH-login-and-protecting-your-instances.html
SSH is very picky about the directory and file permissions. Make sure that:
The directory /home/username/.ssh has permission "700" and is owned by the user (not root!)
The /home/username/.ssh/authorized_keys has permission "600" and is owned by the user
Copy your public key into the authorized_keys file.
sudo chown -R username:username /home/username/.ssh
sudo chmod 0700 /home/username/.ssh
sudo chmod 0600 /home/username/.ssh/authorized_keys
You can do all that as the root user. There is no need to temporarily allow password authentication in /etc/ssh/sshd_config.
Sid, I did what you mentioned but I still got the same error
The first technique is to login by password. ...
It took a reboot to get it to work. (Just in case anyone else wants this insecure method to work.) The public key method is a major pain to get working with remote desktop apps.
I'm working to set up Panda on an Amazon EC2 instance.
I set up my account and tools last night and had no problem using SSH to interact with my own personal instance, but right now I'm not being allowed permission into Panda's EC2 instance.
Getting Started with Panda
I'm getting the following error:
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
Permissions 0644 for '~/.ec2/id_rsa-gsg-keypair' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
I've chmoded my keypair to 600 in order to get into my personal instance last night, and experimented at length setting the permissions to 0 and even generating new key strings, but nothing seems to be working.
Any help at all would be a great help!
Hm, it seems as though unless permissions are set to 777 on the directory, the ec2-run-instances script is unable to find my keyfiles.
I've chmoded my keypair to 600 in order to get into my personal instance last night,
And this is the way it is supposed to be.
From the EC2 documentation we have "If you're using OpenSSH (or any reasonably paranoid SSH client) then you'll probably need to set the permissions of this file so that it's only readable by you." The Panda documentation you link to links to Amazon's documentation but really doesn't convey how important it all is.
The idea is that the key pair files are like passwords and need to be protected. So, the ssh client you are using requires that those files be secured and that only your account can read them.
Setting the directory to 700 really should be enough, but 777 is not going to hurt as long as the files are 600.
Any problems you are having are client side, so be sure to include local OS information with any follow up questions!
Make sure that the directory containing the private key files is set to 700
chmod 700 ~/.ec2
To fix this,
you’ll need to reset the permissions back to default:
sudo chmod 600 ~/.ssh/id_rsa
sudo chmod 600 ~/.ssh/id_rsa.pub
If you are getting another error:
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/geek/.ssh/known_hosts).
This means that the permissions on that file are also set incorrectly, and can be adjusted with this:
sudo chmod 644 ~/.ssh/known_hosts
Finally, you may need to adjust the directory permissions as well:
sudo chmod 755 ~/.ssh
This should get you back up and running.
I also had the same issue, but I fixed it by changing my key file's permissions to 600.
sudo chmod 600 /path/to/my/key.pem
The private key file should be protected. In my case, I have been using public key authentication for a long time, and I set the permissions to 600 (rw- --- ---) for the private key and 644 (rw- r-- r--) for the public key, while the .ssh folder in the home directory has 700 permissions (rwx --- ---). To set this, go to the user's home folder and run the following commands.
Set the 700 permission for .ssh folder
chmod 700 .ssh
Set the 600 permission for private key file
chmod 600 .ssh/id_rsa
Set 644 permission for public key file
chmod 644 .ssh/id_rsa.pub
Change the file permissions using the chmod command:
sudo chmod 700 keyfile.pem
On Windows, try using Git Bash and run your Linux commands there. Easy approach:
chmod 400 *****.pem
ssh -i "******.pem" ubuntu#ec2-11-111-111-111.us-east-2.compute.amazonaws.com
Keep your private key, public key, known_hosts in same directory and try login as below:
ssh -i "hi.pem" ec2-user@ec2-**-***-**-***.us-west-2.compute.amazonaws.com
By "same directory" I mean, for example:
cd /Users/prince/Desktop
Now type the ls command and you should see:
**.pem **.ppk known_hosts
Note: You have to log in from that same directory, or you'll get a permission denied error because ssh can't find the .pem file from your present directory.
If you want to be able to SSH from any directory, you can add the following to your ~/.ssh/config file...
Host your.server
HostName ec2-**-***-**-***.us-west-2.compute.amazonaws.com
User ec2-user
IdentityFile ~/.ec2/id_rsa-gsg-keypair
IdentitiesOnly yes
Now you can SSH to your server regardless of where the directory is by simply typing ssh your.server (or whatever name you place after "Host").
To summarize the issue: the .pem file's permissions are open to every user on the machine, i.e. anyone can read and write that file.
On Windows it is difficult to do chmod; the way I found was to use Git Bash.
I followed the steps below:
1. Remove all permissions:
chmod ugo-rwx abc.pem
2. Add read/write permission only for the owner:
chmod u+rw abc.pem
3. Run chmod 400:
chmod 400 abc.pem
4. Now try ssh -i for your instance.
If you are on a Windows machine, just copy the .pem file into any folder on the C drive and re-run the command.
ssh -i /path/to/keyfile.pem user@some-host
In my case, I put the file in Downloads and it actually worked.
Or follow this https://99robots.com/how-to-fix-permission-error-ssh-amazon-ec2-instance/
I am thinking about something else: if you are trying to log in with a username that doesn't exist, this is the message you will get.
So I assume you may be trying to ssh with ec2-user, but I recall that most CentOS AMIs, for example, use the centos user instead of ec2-user.
So if you are running
ssh -i file.pem centos@public_IP
please make sure you are trying to ssh with the right username; otherwise this can be a strong reason why you see such an error message, even with the right permissions on your ~/.ssh/id_rsa or file.pem.
The solution is to make it readable only by the owner of the file, i.e. the last two digits of the octal mode representation should be zero (e.g. mode 0400).
OpenSSH checks this in authfile.c, in a function named sshkey_perm_ok:
/*
* if a key owned by the user is accessed, then we check the
* permissions of the file. if the key owned by a different user,
* then we don't care.
*/
if ((st.st_uid == getuid()) && (st.st_mode & 077) != 0) {
error("###########################################################");
error("# WARNING: UNPROTECTED PRIVATE KEY FILE! #");
error("###########################################################");
error("Permissions 0%3.3o for '%s' are too open.",
(u_int)st.st_mode & 0777, filename);
error("It is required that your private key files are NOT accessible by others.");
error("This private key will be ignored.");
return SSH_ERR_KEY_BAD_PERMISSIONS;
}
See the first line after the comment: it does a "bitwise and" against the mode of the file, selecting all bits in the last two octal digits (since 07 is octal for 0b111, where each bit stands for r/w/x, respectively).
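A quick worked example of that check (GNU stat shown; the stat flag differs on macOS):
$ stat -c '%a' ~/.ssh/id_rsa        # e.g. prints 644
$ printf '%o\n' $(( 0644 & 077 ))   # 44 -> non-zero, so the key is rejected
$ chmod 600 ~/.ssh/id_rsa
$ printf '%o\n' $(( 0600 & 077 ))   # 0 -> the check passes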
sudo chmod 700 ~/.ssh
sudo chmod 600 ~/.ssh/id_rsa
sudo chmod 600 ~/.ssh/id_rsa.pub
The above 3 commands should solve the problem!
Just a note for anyone who stumbles upon this:
If you are trying to SSH with a key that has been shared with you, for example:
ssh -i /path/to/keyfile.pem user@some-host
Where keyfile.pem is the private key that was shared with you and that you're using to connect, make sure you save it into ~/.ssh/ and chmod it to 600.
Trying to use the file when it was saved elsewhere on my machine was giving the OP's error. Not sure if it is directly related.