I have been locked out of SSH. I'm on Google Cloud, so I can move the disk over to another instance and change the SSH config files, but after a few attempts I still cannot log in. The problem began shortly after I changed the password on the primary account, but since SSH was not using password authentication, I'm surprised that affected SSH. I tried turning password authentication on, generating new keys, having Google's platform generate new keys, etc., but nothing has allowed me to log in.
I keep getting this error, regardless of key combo or whether or not password authentication is on.
Permission denied (publickey).
I have a slightly older backup (from a couple of hours before the issue), and it tells me "too many authentication failures" for any user (regardless of which user@domain.com I try).
I was wondering if there are any config settings I can set to be able to log back in.
Not sure whether this belongs on Stack Overflow or Server Fault, but...
Try adding -vv to your ssh command. It shows a lot more debugging info
For example:
ssh -vv username@host
See if that gets you any hints! It could be a number of things, such as ssh searching for the private key in the wrong place.
The issue could be the SSH keys saved on your local computer. Can you move the SSH keys from .ssh/ to a different directory on your local computer and see if that resolves the issue?
Or you can enable password authentication for SSH and use the -o flag with the ssh command, which forces non-key authentication, to confirm whether the issue was with the key: ssh -o PubkeyAuthentication=no username@host
You can also set MaxAuthTries to a higher number in your sshd_config.
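The "too many authentication failures" message usually means the client (often via ssh-agent) is offering more keys than the server's MaxAuthTries limit allows before the right one is tried. Two sketches, assuming your key lives at the default path ~/.ssh/id_rsa:
ssh -o IdentitiesOnly=yes -i ~/.ssh/id_rsa username@host
and, on the server side, raising the limit in /etc/ssh/sshd_config (edit it via the mounted disk, then restart sshd):
MaxAuthTries 10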
Related
I have a strange issue logging into my Hetzner cloud server via SSH.
initial situation
I created a fresh SSH key, added it to a fresh Hetzner Cloud instance, and made the initial login. I was able to access the server from the terminal with the command ssh root@MY_IP
the issue
When I try to access my server again with ssh root@MY_IP a few days after the setup, I get the following error message: root@MY_IP: Permission denied (publickey).
I haven't made any changes in the meantime: I didn't touch the SSH connection, didn't create a new SSH key, nothing. I don't understand why it just denies my connection attempt when it was working fine before.
Probably your ssh-agent was configured in a different shell?
Try listing your stored keys with
ssh-add -l
If you don't see the one you created for this specific machine/cluster, try adding it again with:
ssh-add <absolute_path_to_your_private_key>
If your agent is not even running, start it in the background with:
eval "$(ssh-agent -s)"
Recently our web hoster (Domainfactory) changed the method for externally accessing our online MySQL database: from simple SSH port forwarding to a Unix socket tunnel.
The ssh call looks like this (and it works!):
ssh -N -L 5001:/var/lib/mysql5/mysql5.sock ssh-user@ourdomain.tld
The problem: you have to enter the password every single time.
In the past I used BitVise SSH client to create a profile (which also stores the encrypted password). By simply double-clicking on the profile you'll be automatically logged in.
Unfortunately, neither the Bitvise SSH client nor PuTTY (plink.exe) supports the Unix socket tunnel feature, so I can't use these tools any more.
Does anyone have an idea how to implement an automated login (script, tool, whatever)?
Under no circumstances may the employees who access the database know the SSH password!
I found a solution. The trick is to generate an SSH key pair (private and public) on the client side (the Windows machine) by calling 'ssh-keygen'. Important: don't protect the SSH keys with a passphrase (simply press [Enter] when asked for one, otherwise you'll be prompted for the key's passphrase every time you SSH). Two files will be generated inside 'C:\Users\your_user\.ssh\': 'id_rsa' (private key) and 'id_rsa.pub' (public key).
On the server side, create a '.ssh' directory within your user's home directory. Inside the '.ssh' directory create a plain text file called 'authorized_keys'. Copy the contents of 'id_rsa.pub' into this file (unfortunately 'ssh-copy-id' isn't available for Windows yet, so you have to do the copy and paste yourself). Set the permissions of the 'authorized_keys' file to '600'.
Now you should be able to SSH into your server by calling 'ssh ssh-user@ourdomain.tld' without entering a password. Create a batch file with your individual ssh call and you're done.
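For the automated tunnel itself, a one-line script is then enough; a sketch, assuming Windows 10's built-in OpenSSH client and the default key location:
ssh -i %USERPROFILE%\.ssh\id_rsa -N -L 5001:/var/lib/mysql5/mysql5.sock ssh-user@ourdomain.tld
Put that line into a .bat file the employees can double-click; no SSH password is ever typed or stored.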
Thanks to Scott Hanselman for his tutorial: https://www.hanselman.com/blog/how-to-use-windows-10s-builtin-openssh-to-automatically-ssh-into-a-remote-linux-machine
I'm trying to connect to my Debian Google Compute Engine server through PuTTY (I've tried other alternatives too), but when I do I get the error "Disconnected: No supported authentication methods available (server sent: publickey)".
The Google server came without a username and password, only a URL that automatically logs in to their own browser-based terminal.
I had PuTTY working and then one day got this error.
Solution: I had renamed the folder containing my certificates (private keys), which caused Pageant to lose track of them, so it was empty.
Once I re-loaded the certificates into Pageant, PuTTY started working again.
Turn on Password Authentication
By default, you need to use keys to ssh into your google compute engine machine, but you can turn on password authentication if you do not need that level of security.
Tip: Use the Open in browser window SSH option from your cloud console to gain access to the machine. Then switch to the root user with sudo su - root to make the configuration changes below.
Edit the /etc/ssh/sshd_config file.
Change PasswordAuthentication and ChallengeResponseAuthentication to yes.
Restart ssh: /etc/init.d/ssh restart
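As a concrete sketch of those edits (assuming both directives are present and currently set to no), run as root:
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sed -i 's/^ChallengeResponseAuthentication no/ChallengeResponseAuthentication yes/' /etc/ssh/sshd_config
/etc/init.d/ssh restart
Also make sure the account actually has a password set (passwd your_username), since these instances come without one.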
Please follow this guide: https://gist.github.com/feczo/7282a6e00181fde4281b
with pictures.
In short:
Using Puttygen, click 'Generate' move the mouse around as instructed and wait
Enter your desired username
Enter your password
Save the private key
Copy the entire content of the 'Public key for pasting into OpenSSH authorized_keys file' window. Make sure to copy every single character from the beginning to the very end!
Go to the Create instances page in the Google Cloud Platform Console and in the advanced options link paste the contents of your public key.
Note the IP address of the instance once it is complete.
Open PuTTY; from the left-hand menu go to Connection / SSH / Auth and point it to the private key file you saved.
From the left-hand menu go to Connection / Data and enter the same username.
Enter the IP address of your instance
Name the connection under Saved Sessions as 'GCE' and click 'Save'.
Double-click the 'GCE' entry you just created.
Accept the identity of the host.
Now log in with the password you specified earlier and run
sudo su - and you are all set.
You need to use an SSH key to log in to your instance.
The GCE documentation explains the process here.
I had the same problem but got it working by changing the enable-oslogin metadata value from TRUE to FALSE in Google Cloud.
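If you prefer to do this from the command line, a sketch with the gcloud CLI (INSTANCE_NAME is a placeholder; the key can also be set project-wide):
gcloud compute instances add-metadata INSTANCE_NAME --metadata enable-oslogin=FALSE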
I had the same issue and just figured it out!
Assuming that you have already created a private/public key pair and added your public key on the remote server: type in username@remotehost.com and THEN go to Connection -> SSH -> Auth and click Browse to locate your private key. After you choose it, the input field will be populated. After that, click Open.
So the important thing here is the order: make sure you first enter the parameters for the host and then locate your private key.
I got this error because I had forgotten to include my username with the key in the GCE metadata section. For instance, you are meant to add an entry into the metadata section which looks like this:
sshKeys username:key
I forgot the username: part and thus when I tried to login with that username, I got the no supported auth methods error.
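For illustration, a complete entry in that format might look like the following (hypothetical username alice, key truncated):
sshKeys    alice:ssh-rsa AAAAB3NzaC1yc2E... alice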
Or, to turn off the ssh key requirement entirely, check out my other answer.
Apparently running sudo chmod -R a+rw on your home folder causes this to happen as well.
This problem is mainly caused by the username you are connecting with not having shell access on the GCE instance. Use the following steps to solve this issue.
gcloud auth list
If you are using the correct login, please follow the steps below; otherwise use
gcloud auth revoke --all
gcloud auth login [your-iam-user]
and either enter the token or let it be detected automatically.
gcloud compute --project "{projectid}" ssh --zone "{zone_name}" "{instance_name}"
If you don't know the line above, go to Compute Engine -> click the SSH dropdown arrow -> View gcloud command, then copy that command and use it.
This updates your metadata, and the key files become available on your computer under Users -> username:
~/.ssh/google_compute_engine.ppk
~/.ssh/google_compute_engine.pub
Then create a new .ppk file using PuTTYgen and give it the username you want, e.g. my_work_space. Then
save the public key and private key in a folder.
Next step: copy the public key data from PuTTYgen and create a new SSH key in the gcloud metadata:
Cloud Console -> Compute Engine -> Metadata -> SSH Keys -> Add item -> paste the key and save it.
Now return to your shell command-line tool and enter
sudo chown -R my_work_space /home/my_work_space
Now you can connect with this private key over SFTP from anywhere, and it opens files without permission errors.
:) happy hours.
If the private key was generated with ssh-keygen on Linux, it needs to be converted with PuTTYgen, because PuTTY does not support OpenSSH keys.
Start PuTTYgen, click Conversions -> Import key, then click Browse and select the private key generated with OpenSSH, then click Save private key.
Use your new key to connect.
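If the key was generated on Linux, the same conversion can also be scripted with the command-line puttygen (from the putty-tools package on Debian-based systems); a sketch, assuming the default key path:
puttygen ~/.ssh/id_rsa -o ~/.ssh/id_rsa.ppk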
I faced the same issue and solved it after some trial and error.
In /etc/ssh/sshd_config, set
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
PasswordAuthentication no
AuthenticationMethods publickey
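After saving these changes, the SSH daemon usually has to be restarted before they take effect; on a systemd-based distribution that would be something like:
sudo systemctl restart sshd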
Then, open PuTTY.
In "Saved Sessions", enter the server IP, then go to Connection -> SSH -> Auth -> Browse in the left panel to locate your private key and open it.
Last but not least, go back to the Session page of PuTTY in the left panel; the server IP address will still be in the "Saved Sessions" field. Click "Save", which is the critical step.
It will let the user log in without a password from then on.
Have fun,
Download "PuttyGEN" get publickey and privatekey
use gcloud SSH edit and paste your publickey located in /home/USER/.ssh/authorized_keys
sudo vim ~/.ssh/authorized_keys
Tap the i key to paste publicKEY.
To save, tap Esc, :, w, q, Enter.
Edit the /etc/ssh/sshd_config file.
sudo vim /etc/ssh/sshd_config
Change:
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no
Restart ssh
/etc/init.d/ssh restart
For the rest, configure your PuTTY as in the tutorial above.
NB: using Pageant to add the keys and start the session would be better.
The power went out and I got this error. The solution was to double-click my .ppk (PuTTY Private Key) file and enter my passphrase.
PasswordAuthentication and ChallengeResponseAuthentication are set to no by default in RHEL 7.
Change them to yes and restart sshd.
Similar problem, same error message. I got the same message when trying to clone something from Bitbucket over SSH. The problem was in my SSH configuration set in mercurial.ini: I had used the wrong Bitbucket username. After I corrected the username, things worked.
For me this was the problem; the solution is from https://unix.stackexchange.com/questions/282908/server-refused-public-key-signature-despite-accepting-key-putty
"Looking at the log /var/log/secure showed that it was just downright refused. I'm somewhat new to centos since I'm mainly a debian kind of guy, so I was unaware of /var/log/secure
After checking this and doing a bit of searching, it turns out PermitRootLogin no needs to be PermitRootLogin without-password if you want to specifically use just keys for root login. That did the trick. Thanks everyone for contributing."
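For reference, newer OpenSSH versions spell that same value prohibit-password, so the relevant sshd_config line would be one of:
PermitRootLogin without-password
PermitRootLogin prohibit-password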
I had the same error message and discovered that my mistake was in the username I used with PuTTY. Apparently the GCE SSH Keys listing changes some characters of your username in the listing. In my case, the underscore was changed to a period, i.e. my_username becomes my.username.
I inadvertently copied the wrong username from the listing and got the same error message.
I know this is an old question, but I had the same problem and solved it thanks to this answer.
I use Putty regularly and have never had any problems. I use and have always used public key authentication. Today I could not connect again to my server, without changing any settings.
Then I saw the answer and remembered that I had inadvertently run chmod 777 . in my user's home directory. I connected from somewhere else and simply ran chmod 755 ~. Everything was back to normal instantly; I didn't even have to restart sshd.
I hope I saved someone some time.
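As an aside, sshd is just as strict about the ~/.ssh directory and the key files themselves; a typical permissions reset (a sketch, run as the affected user) looks like:
chmod 755 ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys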
Jenkins keeps using the default "jenkins" user when executing builds. My build requires a number of SSH calls. However, these SSH calls fail with host verification exceptions because I haven't been able to place the public key for this user on the target server.
I don't know where the default "jenkins" user is configured and therefore can't generate the required public key to place on the target server.
Any suggestions for either;
A way to force Jenkins to use a user i define
A way to enable SSH for the default Jenkins user
Fetch the password for the default 'jenkins' user
Ideally I would like to be able to do all of the above; any help greatly appreciated.
Solution: I was able to access the default Jenkins user with an SSH request from the target server. Once I was logged in as the jenkins user I was able to generate the public/private RSA keys, which then allowed password-free access between servers.
Because with numerous slave machines it can be hard to anticipate which of them a build will be executed on, rather than explicitly calling ssh I highly suggest using the existing Jenkins plug-ins for executing remote commands over SSH:
Publish Over SSH - execute SSH commands or transfer files over SCP/SFTP.
SSH - execute SSH commands.
The default 'jenkins' user is the system user running your Jenkins instance (master or slave). Depending on your installation, this user may have been created either by the install scripts (deb/rpm/pkg etc.) or manually by your administrator. It may or may not be called 'jenkins'.
To find out under what user your jenkins instance is running, open the http://$JENKINS_SERVER/systemInfo, available from your Manage Jenkins menu.
There you will find your user.home and user.name. E.g. in my case on a Mac OS X master:
user.home /Users/Shared/Jenkins/Home/
user.name jenkins
Once you have that information you will need to log onto that jenkins server as the user running jenkins and ssh into those remote servers to accept the ssh fingerprints.
An alternative (that I've never tried) would be to use a custom Jenkins job to accept those fingerprints, for example by running the following command in an SSH build task:
ssh -o "StrictHostKeyChecking no" your_remote_server
This last tip is of course completely unacceptable from a pure security point of view :)
So one might make a "job" which writes the host keys as a constant, like:
echo "....." > ~/.ssh/known_hosts
Just fill in the dots with the output of ssh-keyscan -t rsa {ip}, after you have verified it.
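Equivalently, the scan and the write can be combined into one appending command (a sketch; verify the fingerprint out of band before trusting it):
ssh-keyscan -t rsa your_remote_server >> ~/.ssh/known_hosts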
That's correct, pipeline jobs will normally use the user jenkins, which means that SSH access needs to be given to this account for it to work in pipeline jobs. People have all sorts of complex build environments, so it seems like a fair requirement.
As stated in one of the answers, each individual configuration could be different, so check under "System Information" or similar, in "Manage Jenkins" on the web UI. There should be a user.home and a user.name for the home directory and the username respectively. On my CentOS installation these are "/var/lib/jenkins/" and "jenkins".
The first thing to do is to get shell access as that user, jenkins in our case. Because this is an auto-generated service account, the shell is not enabled by default. Assuming you can log in as root, or preferably as some other user (in which case you'll need to prepend sudo), switch to jenkins as follows:
su -s /bin/bash jenkins
Now you can verify that it's really jenkins and that you entered the right home directory:
whoami
echo $HOME
If these don't match what you see in the configuration, do not proceed.
All is good so far, let's check what keys we already have:
ls -lah ~/.ssh
There may already be keys there, created with the hostname as the comment. See if you can use them:
ssh-copy-id user@host_ip_address
If there's an error, you may need to generate new keys:
ssh-keygen
Accept the default values and use no passphrase when it prompts you; the new keys will be added to the home directory without overwriting existing ones. Now you can run ssh-copy-id again.
It's a good idea to test it with something like
ssh user@host_ip_address ls
If it works, so should ssh, scp, rsync etc. in the Jenkins jobs. Otherwise, check the console output to see the error messages and try those exact commands on the shell as done above.
I know that we should do
ssh user@target
but where do we specify the password?
Hmm, thanks for all your replies.
My requirement is that I have to start up some servers on different machines, and all servers should be started with one shell script. Well, entering the password every time seems a little bad, but I guess I will have to resort to that option. One reason why I don't want to save the public keys is that I may not connect to the same machines every time. It is easy to go back and modify the script to change target addresses, though.
The best way to do this is by generating a private/public key pair and storing your public key on the remote server. This is a secure way to log in without typing a password each time.
Read more here
This cannot be done with a simple ssh command, for security reasons. If you want to use the password route with ssh, the following link shows some scripts to get around this, if you are insistent:
Scripts to automate password entry
The ssh command will prompt for your password. It is unsafe to specify passwords on the commandline, as the full command that is executed is typically world-visible (e.g. ps aux) and also gets saved in plain text in your command history file. Any well written program (including ssh) will prompt for the password when necessary, and will disable teletype echoing so that it isn't visible on the terminal.
If you are attempting to execute ssh from cron or from the background, use ssh-agent.
The way I have done this in the past is just to set up a pair of authentication keys.
That way, you can log in without ever having to specify a password and it works in shell scripts. There is a good tutorial here:
http://linuxproblem.org/art_9.html
SSH keys are the standard/suggested solution. The keys must be set up for the user that the script will run as.
For that script user, see if you have any keys set up in ~/.ssh/ (public key files end with a .pub extension).
If you don't have any keys setup you can run:
ssh-keygen -t rsa
which will generate ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub (the -t option supports other key types as well)
You can then copy the contents of this file to ~(remote-user)/.ssh/authorized_keys on the remote machine.
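If ssh-copy-id is available on the machine the script user runs on, that copy step can be done in one command (a sketch):
ssh-copy-id remote-user@remote-machine
It prompts for the remote password once and appends the key to authorized_keys for you.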
As the script user, you can test that it works by:
ssh remote-user@remote-machine
You should be logged in without a password prompt.
Along the same lines, now when your script is run from that user, it can auto SSH to the remote machine.
If you really want to use password authentication, you can try expect. See here for an example.