Add verified SSH fingerprint to known_hosts

I'm using Chef and trying to add an SSH host key fingerprint to the known_hosts file of a particular service user account so that I can pull in repos via git. My script fails because host verification fails. I do not want to skip verification; I'd like Chef to install the fingerprint into the known_hosts file.
Requirements:
Do not disable verification
Do not skip verification
Do not add duplicate entries to the known_hosts file (make it idempotent for Chef)
Don't use DNS. SSH can verify host keys via DNS, but this isn't particularly secure and it isn't enabled by default.
Make it easy to change later: don't pre-compute the hashed known_hosts line; the input should be an SSH key's fingerprint.
Any thoughts on how to accomplish this? I've been looking at ssh-keyscan and ssh-keygen; there are search functions and remove functions, but no method to add a key, it seems.

Use the ssh cookbook from the Chef Supermarket: https://supermarket.chef.io/cookbooks/ssh
It has an LWRP that makes adding the keys very easy.
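Under the hood, the approach boils down to something like the following shell sketch (this is an assumption about the technique, not the cookbook's actual code): skip if an entry already exists, fetch the key, verify its fingerprint against the one you already trust, and only then append.

```shell
# Idempotent known_hosts add with fingerprint verification.
# All arguments are caller-supplied placeholders.
add_known_host() {
  host=$1          # e.g. bitbucket.example.com
  expected_fp=$2   # the SHA256:... fingerprint you already trust
  kh=$3            # path to the known_hosts file to manage

  # Idempotency: if the host already has an entry, do nothing.
  ssh-keygen -F "$host" -f "$kh" >/dev/null 2>&1 && return 0

  # Fetch the host key, then verify it before trusting it --
  # verification is never skipped.
  keyline=$(ssh-keyscan -t ed25519 "$host" 2>/dev/null) || return 1
  actual_fp=$(printf '%s\n' "$keyline" | ssh-keygen -lf - | awk '{print $2}')
  [ "$actual_fp" = "$expected_fp" ] || {
    echo "fingerprint mismatch for $host" >&2
    return 1
  }
  printf '%s\n' "$keyline" >> "$kh"
}
```

Running it twice with the same host appends nothing the second time, which is the idempotency Chef needs.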

Related

Passphrase Certificate Key on Apache reboot options

There is currently a site on an Apache server whose SSL key has a passphrase, so when the server restarts you must manually unlock the key by entering the passphrase.
I know I could recreate the key without passphrase.
I also know I could use SSLPassPhraseDialog and auto-unlock the key with a script.
But it doesn't seem right to me to protect the key with a passphrase if the passphrase is then going to be written down in a clear-text file, even though that file is root-owned with 000 permissions.
What am I missing? Should I just remove the passphrase and focus on perimeter defense and other security methods?
I feel there must be a third choice, the good one, that I'm not aware of.
What is the best choice for protecting a certificate key on an Apache server?
Thank you
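For completeness, the first option mentioned above (removing the passphrase and relying on file permissions instead) is a one-liner. The sketch below generates a throwaway encrypted key to demonstrate, assuming openssl is installed; file names are placeholders.

```shell
# Demo with a throwaway key; in practice you'd run only the second
# command against your real key file.
openssl genrsa -aes256 -passout pass:secret -out enc.key 2048 2>/dev/null

# Write an unencrypted copy of the key...
openssl rsa -in enc.key -passin pass:secret -out plain.key 2>/dev/null

# ...and compensate with strict file permissions.
chmod 400 plain.key
```

The trade-off is exactly the one the question raises: the key is no longer protected at rest, so you are relying on file permissions and host security instead.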

Force ssh to use a particular algorithm for host identification

I am trying to better understand how ssh does host authentication. I am ssh'ing from a MacBook Pro (OS X 10.14.6) to several CentOS 8.1 servers. There are several files in /etc/ssh/ on the remote CentOS servers that are used for host authentication (e.g. ssh_host_ed25519_key.pub, ssh_host_dsa_key.pub, ssh_host_rsa_key.pub).
If I look at my macbook's local ~/.ssh/known_hosts, I see entries that use ssh-rsa which corresponds to /etc/ssh/ssh_host_rsa_key.pub. I also see entries for ecdsa-sha2-nistp256 which correspond to /etc/ssh/ssh_host_ecdsa_key.pub.
Question :
When I ssh into my remote server, is there a way for me to force ssh to use a particular algorithm for the host authentication or is this something that I'll have to change by hand in known_hosts? E.g. force it to use ssh_host_ecdsa_key.pub instead of ssh_host_rsa_key.pub.
How does ssh by default decide which algorithm to use for host authentication?
You can use the -o flag to specify options for SSH. One of these options is HostKeyAlgorithms, which controls which host key algorithms your client offers; see https://man.openbsd.org/ssh.
If you run ssh with the -vv flag you can see the offer made by your client. The server then chooses the first algorithm offered by the client that it supports. I would guess that your different servers support different algorithms.
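As a concrete example (host name is a placeholder), you can pin the host key algorithm for a single connection with -o, and confirm the resolved value with `ssh -G`, which prints the effective configuration without actually connecting:

```shell
# Ask the client to offer only the ECDSA host key algorithm, then dump
# the resolved configuration instead of opening a connection.
ssh -o HostKeyAlgorithms=ecdsa-sha2-nistp256 -G server.example.com \
    | grep -i '^hostkeyalgorithms'
```

To make this permanent for a host, put a `HostKeyAlgorithms ecdsa-sha2-nistp256` line in the matching `Host` block of ~/.ssh/config.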

Unable to git clone bitbucket over ssh

I've created public and private keys, and added the public key to the Bitbucket repository.
This private/public key pair is in the .ssh folder of the user account I intend to clone with.
Attempting to clone with SSH produces a connection refused:
This led me to believe that either the Bitbucket project/server is not configured for SSH, or that this is a firewall/port issue on my company's network. However, ports 7999 and 22 are open.
 
This led me to investigate other means of cloning with SSH, tunnelled over port 443 (in case port 22 or 7999 was blocked), as described here: https://support.atlassian.com/bitbucket-cloud/docs/troubleshoot-ssh-issues/
To do this I modified my ssh config as follows:
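The exact stanza isn't shown, but the documented workaround generally boils down to an alias like the one below (host names are placeholders, not the poster's actual config); `ssh -G` shows how the alias resolves without opening a connection:

```shell
# Hypothetical config routing SSH to an alternate host on port 443.
cat > ./ssh_config_demo <<'EOF'
Host bitbucket.example.com
    HostName altssh.bitbucket.example.com
    Port 443
EOF

# Resolve the alias without connecting; expect the altssh host and port 443.
ssh -F ./ssh_config_demo -G bitbucket.example.com | grep -E '^(hostname|port) '
```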
Doing this allowed me to at least establish a connection with Bitbucket, but it acts as if the request was bad:
This led me to believe that maybe I had signed the cert incorrectly, so I attempted a flavor of this: https://unix.stackexchange.com/questions/503851/how-to-generate-a-certificate-file-which-to-be-used-with-ssh-config to add the CertificateFile entry in the ssh config. It sounded like I would need to add the public key of the private key used to sign the user key (the one that generated the certificate). However, I won't have access to the private key for the cert available on the Bitbucket server.
Separately, I was able to grab the public cert from altssh.bitbucket.di2e.net:443 and tried using it, but still got a bad request. This probably doesn't make sense to use, since it is in PEM (X.509) format rather than an SSH certificate, but I figured it was worth a try.
 
openssl s_client -connect altssh.bitbucket.di2e.net:443 < /dev/null | sed -ne '/-BEGIN CERTIFICATE/,/END CERTIFICATE/p' > public.crt
 
I'm wondering if I've incorrectly signed the user key with the cert key, but would like advice on the best way to do this.
Thanks!
Come to find out, the Bitbucket proxy server I was trying to connect to was not configured to handle altssh.bitbucket.di2e.net, which caused the connection over :443 to get dropped.
The root of the issue was a combination of the corporate firewall blocking 7999 to external hosts (it wasn't blocked internally), plus /etc/ssh/sshd_config on the remote machine I was attempting to clone to not having AllowAgentForwarding enabled.

Why does SSH seem to remember my valid connection settings even though they're now invalid?

I'm troubleshooting some stuff with an application I'm working on that uses SFTP. Along the way, I'm using the openSSH command line client to connect to a separate SFTP server, using a configuration file (~/.ssh/config). In between tests, I'm changing the configurations, and at times I try to deliberately test an invalid configuration.
Currently, I just changed my config file to remove the IdentityFile line. Without this, it shouldn't know which key file to use to make the connection, and so the connection should fail. However, every time I ssh to that hostname, the connection succeeds without even so much as a password prompt.
This is BAD. My server requires the use of the keyfile, I know this because my application cannot connect without one. Yet it's almost like SSH is remembering an old, valid configuration for the server even though my current configuration is invalid.
What can I do to fix this? I don't want SSH to be hanging onto old configurations like this.
If you don't specify IdentityFile, ssh will use the keys in the default locations (~/.ssh/id_rsa, id_dsa, id_ecdsa, id_ed25519), as described in the manual page for ssh:
IdentityFile
Specifies a file from which the user's DSA, ECDSA, Ed25519 or RSA authentication identity is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_dsa, ~/.ssh/id_ecdsa, ~/.ssh/id_ed25519 and ~/.ssh/id_rsa for protocol version 2. [...]
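You can see exactly which identity files the client will try for a given host with `ssh -G`, which resolves the configuration without connecting (the host name below is a placeholder):

```shell
# With no IdentityFile set, the resolved config lists the defaults
# (id_rsa, id_ecdsa, id_ed25519, ...).
ssh -G sftp.example.com | grep '^identityfile'
```

If you want ssh to use only the keys you explicitly name (and fail otherwise), add `IdentitiesOnly yes` to that host's block in ~/.ssh/config.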

Host key verification failed - amazon EC2

I am working with Windows 7 and Git Bash, as well as an Amazon EC2 instance. I tried to log into my instance:
$ ssh -i f:mykey.pem ubuntu@ec2-52-10-**-**.us-west-2.compute.amazonaws.com
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
71:00:d7:d8:a------------------26.
Please contact your system administrator.
Add correct host key in /m/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /m/.ssh/known_hosts:27
ECDSA host key for ec2-52-10-**-**.us-west-2.compute.amazonaws.com has changed and you have requested strict checking.
Host key verification failed.
Logging in like this has worked fine in the past, but this problem started after I rebooted my EC2 instance. How can I get this working again?
edit:
$ ssh -i f:tproxy.pem ubuntu@ec2-52-10-**-**.us-west-2.compute.amazonaws.com
ssh: connect to host ec2-52-10-**-**.us-west-2.compute.amazonaws.com port 22: Bad file number
tried again:
The authenticity of host 'ec2-52-10-**-**.us-west-2.compute.amazonaws.com (52.10.**-**)' can't be established.
ECDSA key fingerprint is d6:c4:88:-----------fd:65.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
Warning: Permanently added 'ec2-52-10-**-**.us-west-2.compute.amazonaws.com,52.10.**-**' (ECDSA) to the list of known hosts.
Permission denied (publickey).
what should I do now?
The host has a new SSH key, so ssh tells you something has changed.
The hint is here:
Offending ECDSA key in /m/.ssh/known_hosts:27
If you're sure the server on the other side is authentic, you should delete line 27 in /m/.ssh/known_hosts.
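Either of the following clears the stale entry. The demo below works on a scratch file so it can be run safely; line 2 here stands in for line 27 from the warning, and in practice the file is ~/.ssh/known_hosts.

```shell
# Build a scratch known_hosts with two dummy entries.
kh=$(mktemp)
printf 'host1 ssh-ed25519 AAAA1\nhost2 ssh-ed25519 AAAA2\n' > "$kh"

# Delete the offending line by number, keeping a backup:
sed -i.bak '2d' "$kh"

# Alternative: let OpenSSH remove entries by host name instead:
#   ssh-keygen -R <hostname> -f "$kh"
```

The `ssh-keygen -R` form is usually safer, since it matches by host name (including hashed entries) rather than by line number.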
This error says that something has changed since your last login to this server, and that the server you are trying to ssh to might not be the server you think it is.
One thing to be aware of...
When you create an EC2 instance, no fixed IP is assigned to it.
When you start the instance, it gets a dynamic IP address and a DNS name based on that IP.
If you shut the instance down and start it again a few hours later, it might get a new IP and a new DNS name.
If you are still trying to access the old DNS name/IP, you are actually trying to access a server that might not belong to you.
This ends with the same error message you had.
(It can happen because you pointed a DNS entry to the old IP, or you are using scripts that try to access the old DNS name/IP, or you are just repeating the ssh command from your history...)
If this is the case, the solution is to use an Elastic IP.
You can assign an Elastic IP to your server, and this forces it to keep its IP address between reboots.
An Elastic IP is free while your (attached) server is up,
but it costs a minor fee while the attached server is down.
This is done to make sure you are not "reserving" an IP while not using or needing it.
In a Beanstalk environment, the issue is that ssh refers to the key in known_hosts for the respective IP, but that key has changed, so the stored entry no longer matches.
Removing the key for the IP from ~/.ssh/known_hosts and then connecting by ssh works.
(Basically, when the entry is not in ~/.ssh/known_hosts, ssh creates a new one, and thus resolves the conflict.)
Type the following command to set the permissions. Replace ~/mykeypair.pem with the location and file name of your private key file.
chmod 400 ~/mykeypair.pem
In your case, mykeypair.pem is tproxy.pem.
I was facing the same issue, and after making the .pem file private it was fixed.
Here is some more information on SSH key permissions.
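A quick way to sanity-check the permissions after the chmod (scratch file below; substitute your real key path, e.g. tproxy.pem):

```shell
# Create a scratch file standing in for the .pem, lock it down, and
# verify the mode (GNU stat shown; on macOS use `stat -f %Lp`).
key=$(mktemp)
chmod 400 "$key"
stat -c '%a' "$key"   # -> 400 on Linux: owner read-only
```

ssh refuses private keys that are readable by group or other, which is why 400 (or 600) is required.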