I cannot ssh from my computer into the server hosted on Google Cloud.
I tried the normal ssh-keygen with user@domain.com and uploaded the public key, which worked last time, but this time it didn't. The issue started after I changed the password for the account: after that I could no longer ssh or sftp into the account, although my existing session stayed alive until I disconnected.
I then tried gcloud compute ssh user@instance; it ran fine and told me the key just hadn't propagated yet.
I added AllowUsers user to the server's sshd config file and restarted ssh on the server, but still got the same result.
Here's the error:
Permission denied (publickey).
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
Update:
I've been working with Google tech support and this issue is still unresolved. The ownership and permissions of the authorized_keys file keep getting changed on boot to another user, who I also cannot log in as.
So I change it to:
thisUser:www-data 755
but on boot it changes it to:
otherUser:otherUser 600
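For reference, the manual reset I keep applying looks like this (assuming the file lives under thisUser's home directory), but it is undone again on the next boot:
sudo chown thisUser:www-data /home/thisUser/.ssh/authorized_keys
sudo chmod 755 /home/thisUser/.ssh/authorized_keys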
There are a couple of ways to fix this. You can take advantage of the metadata feature in GCE and add a startup script that automatically changes the permissions.
From the Developers Console, go to your Instance > Metadata and add a key/value pair:
key: startup-script
value: chmod 755 /home/your_user/.ssh/authorized_keys OR chmod 755 ~/.ssh
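The same metadata can also be set from the CLI, if you prefer; a rough equivalent would be the following (the instance name is a placeholder, and you may need to add your zone with --zone):
gcloud compute instances add-metadata YOUR_INSTANCE \
    --metadata startup-script='chmod 755 /home/your_user/.ssh/authorized_keys'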
After rebooting you should check the Serial Output option further down that page and see if the script ran on startup. It should show you something along these lines:
startup script found in metadata.
startupscript: Running startup script /var/run/google.startup.script
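The serial output can also be fetched without the console; something like this should work (instance name is again a placeholder):
gcloud compute instances get-serial-port-output YOUR_INSTANCE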
Further information can be found HERE
Hope that helps!
I solved this by deleting the existing SSH key under Custom metadata in the VM settings. I could then log in over SSH.
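If you would rather do that from the CLI, a sketch along these lines should work; the instance name is a placeholder, and the per-instance metadata key is usually ssh-keys (older images used sshKeys):
gcloud compute instances describe YOUR_INSTANCE    # inspect the current metadata first
gcloud compute instances remove-metadata YOUR_INSTANCE --keys=ssh-keys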
I was connected to the VM instance through SSH and by mistake I ran the following command:
"chmod -R 755 /usr"
And then I started getting the following message:
"/usr/bin/sudo must be owned by uid 0 and have the setuid bit set"
I have read different solutions for it:
Setting a startup-script to change root password and connect through
gcloud beta compute ssh servername
However, I can't stop my instance because I have a local SSD assigned to it, so I don't think the startup-script approach will work, and connecting through ssh asks me for a password:
user@compute.3353656325014536575's password:
But I have never set a password for the user I am using.
Is there any solution so I can connect again to the server and fix the mistake?
Edit:
I have a user which I created manually for FTP; however, this one doesn't have sudo permissions. Is there a way to find out the sudo password?
Thanks in advance.
Regarding the issue at hand: the command chmod -R 755 rewrote the permissions on everything under /usr, which strips the setuid bit from /usr/bin/sudo.
Try this first before reading the other steps further down.
SSH into your instance. To change the password, just type:
sudo passwd
then enter the new password and confirm it.
If that doesn't work, follow the steps below.
"/usr/bin/sudo must be owned by uid 0 and have the setuid bit set"
This means the setuid root permission on sudo has been overwritten, which prevents you from using sudo and loses you all root access. The following steps should help resolve the issue:
Create a backup or snapshot of your instance.
Create a totally new instance, detach your local SSD from the old instance, and attach it to the newly created instance.
Log in to the new instance, create a new folder in the root, and work as root.
Check the attached drive in the new instance: mount, or fdisk -l | grep Disk.
Create a new folder in the root directory:
mkdir /newfolder
Now mount the volume: sudo mount /dev/xvdf1 /newfolder/
After mounting, if you check the permissions you will see that the newfolder permissions have changed, because of the affected volume.
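From there, a minimal repair of the sudo binary itself might look like the sketch below. It assumes the affected root filesystem is mounted at /newfolder as above; note that chmod -R 755 may also have stripped the setuid bit from other binaries such as passwd and su, which would need the same treatment.
# on the rescue instance, with the broken volume mounted at /newfolder
sudo chown root:root /newfolder/usr/bin/sudo
sudo chmod 4755 /newfolder/usr/bin/sudo   # 4755 = -rwsr-xr-x, i.e. setuid root
ls -l /newfolder/usr/bin/sudo             # should now show -rwsr-xr-x root root
Then unmount the volume, attach it back to the original instance, and sudo should work again.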
I'm using Deployer for deploying my code to multiple servers. Today I got this error after starting a deployment:
[Deployer\Exception\RuntimeException (-1)]
The command "if hash command 2>/dev/null; then echo 'true'; fi" failed.
Exit Code: -1 (Unknown error)
Host Name: staging
================
Warning: Identity file /home/user/.ssh/id_rsa not accessible: No such file or directory.
Permission denied (publickey).
At first I thought it probably had something to do with this server's configuration, since I had moved the complete installation to another hosting provider. I then triggered a deployment to a server which I had deployed to just fine in the past few days, but got the same error. This quickly turned my suspicion from the server to my local setup.
Since I'm running PHP in Docker (Deployer is written in PHP), I thought it might have had something to do with my ssh-agent not being forwarded correctly from my host OS to Docker. I checked this by using a fresh PHP installation directly on my OS (Ubuntu, if that helps). The same warning kept popping up in the logs.
When logging in using the ssh command, everything seems to be alright. I still have no clue what's going on here. Any ideas?
PS: I also created an issue at Deployer's GIT repo: https://github.com/deployphp/deployer/issues/1507
I have no experience with the library you are talking about, but the issue starts here:
Warning: Identity file /home/user/.ssh/id_rsa not accessible: No such file or directory.
So let's focus on that. Potential things I can think of:
Is the username really user? The warning says that the file lives at /home/user. Verify that that really is the correct path, for instance by running ls on the file. If it doesn't exist, you will get an error:
$ ls /home/user/.ssh/id_rsa
That will throw a No such file or directory if it doesn't exist.
If 1. is not the issue, then most likely the permissions on the key are wrong for the user in the Docker container. If that is the case, then INSIDE the Docker container, change the permissions on id_rsa before you use it:
$ chmod 600 /home/user/.ssh/id_rsa
Now do stuff with the key...
A lot of SSH agents won't work unless the key is readable and writable only by the user who is trying to run them. In this case, that is the user inside of the Docker container.
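For what it's worth, a common way to hand the host's ssh-agent to a container is to mount the agent socket into it. This is only a sketch, not Deployer-specific; the image name is hypothetical and it assumes a Linux host where $SSH_AUTH_SOCK is set:
docker run --rm -it \
    -v "$SSH_AUTH_SOCK:/ssh-agent" \
    -e SSH_AUTH_SOCK=/ssh-agent \
    my-php-image ssh-add -l   # hypothetical image; should list the host's identities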
I am trying to follow this Vagrant tutorial. I got an error after my first two commands, which I ran from the command line:
$ vagrant init hashicorp/precise64
$ vagrant up
After I ran the vagrant up command I got this message:
The private key to connect to the machine via SSH must be owned
by the user running Vagrant. This is a strict requirement from
SSH itself. Please fix the following key to be owned by the user
running Vagrant:
/media/bcc/Other/Linux/vagrant3/.vagrant/machines/default/virtualbox/private_key
And then whatever command I run, I get the same error. Even vagrant ssh gives the same error message. Please help me fix the problem.
I am on Linux Mint and using VirtualBox as well.
Exactly as the error message tells you:
The private key to connect to the machine via SSH must be owned
by the user running Vagrant.
Therefore, check the permissions of the file using
stat /media/bcc/Other/Linux/vagrant3/.vagrant/machines/default/virtualbox/private_key
check which user you are running as using
id
or
whoami
and then modify the owner of the file:
chown `whoami` /media/bcc/Other/Linux/vagrant3/.vagrant/machines/default/virtualbox/private_key
Note that this might not be possible if your /media/bcc/ is some non-Linux filesystem that does not support Linux permissions. In that case you should choose a more suitable location for your private key.
Jakuje has the correct answer - if the file system you are working on supports changing the owner.
If you are trying to mount the vagrant box off of NTFS, it is not possible to change the owner of the key file.
If you want to keep the file on NTFS and you are running a local instance, you can try the following, which worked for me:
vagrant halt
[remove the vagrant box]
[add the following line to your Vagrantfile]
config.ssh.insert_key = false
[** you may need to remove and clone your project again]
vagrant provision
This solution may not be suitable for a live instance, as it uses the default insecure SSH key. If you require more security you might be able to find a more palatable solution here: https://www.vagrantup.com/docs/vagrantfile/ssh_settings.html
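For reference, the relevant Vagrantfile fragment might look like this minimal sketch (box name taken from the question above):
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"
  # reuse Vagrant's default (insecure) key instead of inserting a new one,
  # which sidesteps the ownership check on the NTFS-hosted private key
  config.ssh.insert_key = false
end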
If you put vagrant data on NTFS you can use this trick to bypass the keyfile ownership/permissions check.
Copy your key file to $HOME/.ssh/, or wherever else on a suitable filesystem where you can set the correct ownership and permissions. Then simply create a symlink (!) to it inside the NTFS directory (where you have set $VAGRANT_HOME, for example) like this:
ln -sr $HOME/.ssh/your_key_file your_key_file
I tried to push my blog (Octopress) to github and got this error:
MacBook-Air:octopress bdeely$ git push origin source
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I generated an SSH key, saved it, and even linked it with my GitHub account in the SSH key settings, but I went ahead and checked the status and got the same error:
MacBook-Air:.ssh bdeely$ ssh -T git@github.com
Permission denied (publickey).
In addition to this, I checked GitHub's help page, did the following, and got this error message:
MacBook-Air:~ bdeely$ ssh-add -l
The agent has no identities.
Does anyone know what is wrong and how I can fix this?
On OSX, if you type
ssh-add -l
and you get back "The agent has no identities", that means your ssh agent does not have any identities loaded into it. Oftentimes, when the Mac reboots, you have no identities loaded.
I add mine back after a reboot by explicitly running
ssh-add
This loads a default identity from ~/.ssh/id_rsa
You can also use the ssh-add command with a specific identity
ssh-add ~/foo/bar/id_rsa
After you add your identities, you can see them all listed by typing
ssh-add -l
Make sure you have at least one listed.
Follow the commands:
mkdir ~/.ssh  # in case the folder doesn't exist
cd ~/.ssh
ssh-keygen -t rsa -C "youremail@somewhere.gr"
# hit Enter when it asks for the file in which to save the key
# enter the passphrase
At last, copy the contents of id_rsa.pub into your GitHub account.
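On macOS (the asker's platform), one convenient way to copy the public key to the clipboard is pbcopy:
pbcopy < ~/.ssh/id_rsa.pub   # macOS; on Linux try: xclip -sel clip < ~/.ssh/id_rsa.pub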
Try this in your terminal:
eval `ssh-agent -s`
ssh-add ~/.ssh/id_rsa
enter your passphrase if any and it should work. Hope this helps :-)
I hope this helps you:
I was having the identical problem and was about to take my own eyes out in sheer frustration; nothing online led me to an answer. I was also using the git push command without specifying the URL exactly (which by itself could have solved the problem, I believe), so I didn't see how the connection was failing.
I had set up my .ssh/config correctly for two users with two different keys, even using IdentitiesOnly yes, which is supposed to override the ssh-agent that was automatically supplying the WRONG ssh identity.
I finally realized the problem as I examined the local repository configuration - it was the entry
[remote "origin"]
url = git@github.com:{my-username}/{my-repo-name}.git
My .ssh/config file was using the same HostName github.com entry for both users, and I'm completely new to all this, so I didn't realize that to correctly override ssh-agent I had to specify the exact host alias in the URL; otherwise the specific identities in my .ssh/config file would be ignored, and the first key that ssh-agent listed (which was the wrong one in my case) would be used by default.
I fixed this by changing the local repo URL to url = git@github-personal:{my-username}/{my-repo-name}.git, where I had set Host github-personal as the identity in my .ssh/config.
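For anyone hitting the same thing, the matching ~/.ssh/config entry looks something like this; the key filename here is just an example:
Host github-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa_personal
    IdentitiesOnly yes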
Another way to solve this would be to specify the user in the URL in the git push command itself, or, even better, the solution described in this post, which I found AFTER solving it my own crappy way:
https://superuser.com/questions/272465/using-multiple-ssh-public-keys
I can't believe that no official source could offer a solution for, or even properly explain, this edge case that seems really common (accessing two different GitHub accounts from one machine over SSH).
I experienced the same problem. The reason was that I had moved the key files to another folder; it worked again when I moved them back to where they were originally.
I have a unique problem when accessing a Cygwin based SSH Server through public key (rsa) based authentication.
If I login to the server via password auth:
ssh Administrator@domain.com
I log in just fine and can then execute either:
cd //anotherpc/shareName
or cd /backup/anotherpc where this is a symlink to the aforementioned network share
This is successful and I can access anything on that share without issue.
The problem arises if I do the same thing as above just after logging in using the public key authentication mechanism.
The error output is:
cd //anotherpc/shareName
-bash: cd: //anotherpc/shareName: Not a directory
Update:
The /etc/sshd_config file has the following commands having removed all commented out lines:
Port 22
StrictModes no
AuthorizedKeysFile .ssh/authorized_keys
UsePrivilegeSeparation yes
Subsystem sftp /usr/sbin/sftp-server
It is extremely strange. Any help would be hugely appreciated!
Kind Regards
If you run this command before trying to access a network share, the required authentication token will be created:
net use '\\machineName\shareName' /user:"DOMAIN\Username" password
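After that, accessing the share from the key-authenticated session should work again, e.g.:
cd //anotherpc/shareName && ls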
For full details see http://cygwin.com/cygwin-ug-net/ntsec.html#ntsec-setuid-overview