ERROR: Net::SSH::HostKeyMismatch: fingerprint - ssh

I'm trying to bootstrap a node with
sudo knife bootstrap 10.40.116.100 --ssh-user ubuntu --sudo --identity-file /home/ec2-user/.ssh/ihies-east-1.pem -N newsite -r "recipe[pilot_sec-update]","recipe[vmpilot]"
and I get
ERROR: Net::SSH::HostKeyMismatch: fingerprint 16:78:0d:29:7d:5e:cf:25:01:92:df:3a:94:64:5d:b6 does not match for "10.40.116.100"
1. I can ssh with ssh -i /home/ec2-user/.ssh/ihies-east-1.pem decs@10.40.116.100
2. I cleared my known_hosts file
I still get the error.

As coderanger (https://stackoverflow.com/users/78722/coderanger) suggested, clearing the known_hosts entry for the root user as well fixed it.
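Because the bootstrap is run with sudo, knife consults root's known_hosts rather than the invoking user's, so the stale entry has to be removed from both files. A minimal sketch using the standard OpenSSH file locations:

ssh-keygen -R 10.40.116.100                                   # current user's known_hosts
sudo ssh-keygen -f /root/.ssh/known_hosts -R 10.40.116.100    # root's known_hosts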

Related

How do I resolve Invalid SSH Key Entry error when starting App with GCE

I'm trying to launch my app on Google Compute Engine, and I get the following error:
Sep 26 22:46:09 debian google_guest_agent[411]: ERROR non_windows_accounts.go:199 Invalid ssh key entry - unrecognized format: ssh-rsa AAAAB...
I'm having a hard time interpreting it. I have the following startup script:
# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REPOSITORY="github_sleepywakes_thunderroost"
# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor
# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v16.15.0/node-v16.15.0-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
# Get the application source code from the Google Cloud Repository.
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/${PROJECTID}/r/${REPOSITORY} /opt/app/github_sleepywakes_thunderroost
# Install app dependencies
cd /opt/app/github_sleepywakes_thunderroost
npm install
# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/github_sleepywakes_thunderroost
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
# Application should now be running under supervisor
My instance shows I have 2 public SSH keys. The second begins like this one in the error, but after about 12 characters it is different.
Any idea why this might be occurring?
Thanks in advance.
Once you have deployed your VM instance, no SSH key is configured by default, but you can also configure the SSH key when deploying the VM instance.
To elaborate on the answer of @JohnHanley, I tried this in my environment.
Created a VM instance and verified the SSH configuration. By default there is no SSH key configured; as mentioned, you can configure an SSH key when deploying the VM.
Created an SSH key pair via the CLI; you can use this link for instruction details (see the sketch below).
Navigate to your VM instance: Turn off > EDIT > Security > Add Item > SSH key 1 - copy and paste the generated SSH key pair > Save > Power on the VM instance.
Then test whether the VM instance is accessible.
Documentation link: How to add SSH keys to project metadata.
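A rough CLI-only equivalent of the console steps above, assuming hypothetical names (my-instance, us-central1-a, and the user deploy); note that adding instance metadata this way overwrites any existing ssh-keys value on the instance, so merge keys first if some already exist:

# Generate a key pair locally.
ssh-keygen -t rsa -f ~/.ssh/gce_key -C deploy
# Attach the public key to the instance metadata.
gcloud compute instances add-metadata my-instance --zone us-central1-a \
  --metadata ssh-keys="deploy:$(cat ~/.ssh/gce_key.pub)"
# Then connect with the matching private key.
ssh -i ~/.ssh/gce_key deploy@EXTERNAL_IP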

Setup ICP 2.1.0 (IBM Cloud Private) fails due to ssh troubles. Single host installation under Ubuntu

When running
sudo docker run --net=host -t -e LICENSE=accept -v $(pwd):/installer/cluster ibmcom/icp-inception:2.1.0-ee install
I get fatal: [192.168.201.130] => Failed to connect to the host via ssh: Permission denied (publickey,password).
I have debugged the session:
root@icpecm:/opt/ibm-cloud-private-2.1.0/cluster# ssh -vvv -i cluster/ssh_key root@192.168.201.130
This is successful.
Have you copied the public key to all the nodes?
In your case:
$ ssh-copy-id -i .ssh/id_rsa root@192.168.201.130
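Assuming the installer authenticates with the key at cluster/ssh_key (as in the debug session above), it is that key's public half which has to be authorized on the node; the ssh_key.pub path next to the private key is an assumption here:

ssh-copy-id -i /opt/ibm-cloud-private-2.1.0/cluster/ssh_key.pub root@192.168.201.130
# or append it manually if ssh-copy-id is not available:
cat /opt/ibm-cloud-private-2.1.0/cluster/ssh_key.pub | \
  ssh root@192.168.201.130 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'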

How to automatically set up SSH keys on the first Ansible command run for each new server?

Today I started learning Ansible, and the first thing I came across while trying to run the ping command on a remote server was
192.168.1.100 | UNREACHABLE! => {
"changed": false,
"msg": "(u'192.168.1.100', <paramiko.rsakey.RSAKey object at 0x103c8d250>, <paramiko.rsakey.RSAKey object at 0x103c62f50>)",
"unreachable": true
}
so I manually set up the SSH key. I think I ran into this because no write-up or tutorial explains this step: either the authors don't need it, or they set it up manually before writing the tutorial or recording the video.
So I think it would be great if we could automate this step too.
If SSH keys haven't been set up, you can always prompt for an SSH password:
-k, --ask-pass ask for connection password
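For example, a first ad-hoc run that authenticates with a password (with the default OpenSSH connection this typically needs sshpass on the control machine; the inventory path is a placeholder):

ansible all -i inventory -u root -m ping --ask-pass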
I use these commands for setting up keys on CentOS 6.8 under the root account:
cat ~/.ssh/id_rsa.pub | ssh ${user}@${1} -o StrictHostKeyChecking=no 'mkdir .ssh > /dev/null 2>&1; restorecon -R /root/; cat >> .ssh/authorized_keys'
ansible $1 -u $user -i etc/ansible/${hosts} -m raw -a "yum -y install python-simplejson"
ansible $1 -u $user -i etc/ansible/${hosts} -m yum -a "name=libselinux-python state=latest"
${1} is the first parameter passed to the script and should be the machine name.
I set ${user} elsewhere, but you could make it a parameter also.
${hosts} is my hosts file, and it has a default, but can be overridden with a parameter.
The restorecon command is to appease SELinux. I just hardcoded it to run against the /root/ directory, and I can't remember exactly why. If you run this to set up a non-root user, I think that command is nonsense.
I think those installs, python-simplejson and libselinux-python, are needed.
This will spam the authorized_keys files with duplicate entries if you run it repeatedly. There are probably better ways, but this is my quick and dirty run once script.
I made some slight variations in the script for CentOS 7 and Ubuntu.
Not sure what types of servers these are, but nearly all Ansible tutorials cover the fact that Ansible uses SSH and you need SSH access to use it.
Depending on how you are provisioning the server in the first place, you may be able to inject an SSH key on first boot, but if you are starting with password-only login you can use the --ask-pass flag when running playbooks. You could then have your first play use the authorized_key module to set up your key on the server, as sketched below.
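A minimal sketch of that first run as an ad-hoc command (the group name new_servers and the inventory path are placeholders); it installs your public key via the authorized_key module while still authenticating by password:

ansible new_servers -i inventory -u root --ask-pass -m authorized_key -a "user=root key='$(cat ~/.ssh/id_rsa.pub)'"
# Subsequent runs can then use the key instead of a password:
ansible new_servers -i inventory -u root -m ping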

SSH to remote server node and change directory

I wanted to achieve the following task:
step 1: ssh to the remote server
step 2: ssh to a node connected to that server
step 3: change to a particular directory on that node
I was looking for an ssh one-liner and issued the following command:
ssh -t -t user@remote.server "ssh node; cd /my/directory/"
However, the last cd command did not work; I was still in my home directory on the node. I tried removing the ";" part and issued the following one-liner:
ssh -t -t user@remote.server "ssh node cd /my/directory/"
No success. The message was "Connection to remote.server closed".
I was wondering whether it is possible to achieve this task using an ssh one-liner.
Thanks in advance for your inputs.
I was close and could have played around a little bit more.
This page helped, and apparently the following syntax worked:
ssh -t user@remote.server "ssh -t node 'cd /my/directory/ ; bash'"
However, I do not understand the role of the "bash" part.
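The bash part matters because ssh, when given a command string, exits as soon as that command finishes; a bare cd would change directory and the session would immediately close. Starting an interactive shell after the cd keeps the session open in the new directory. A variant of the same idea (the && and the -l login-shell flag are optional additions, not part of the original answer):

ssh -t user@remote.server "ssh -t node 'cd /my/directory/ && exec bash -l'"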

autossh not working and runs without any error message

I have no problem when running this command with ssh, but with autossh it is not working.
The list of commands I have run so far:
1- ssh-keygen -t rsa
2- cp id_rsa.pub /home/sshUser/.ssh/authorized_keys2/
3- cp id_rsa /home/sshUser/.ssh/authorized_keys2/
4- autossh -fNg -L 3307:127.0.0.1:3306 sshUser@10.100.20.25
and after the last line nothing happens.
ssh is still working, and I've checked it with "sudo lsof -i -n | egrep '\'",
but if I use ssh instead of autossh, it works.
I ran into a very similar problem: autossh would not react, but showed the help text.
The solution is to add the monitoring port, i.e. the -M <port> parameter. If you set -M 0, monitoring will be disabled.
Unfortunately, as of the current version 1.40, the help shows the -M parameter as optional. This is a known problem.
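Applied to the tunnel from the question, that would look like this (20000 is just an arbitrary free port used for monitoring):

autossh -M 20000 -fNg -L 3307:127.0.0.1:3306 sshUser@10.100.20.25
# or with monitoring disabled entirely:
autossh -M 0 -fNg -L 3307:127.0.0.1:3306 sshUser@10.100.20.25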
On Linux Mint 17 (~Ubuntu 14.04), I need to run autossh with sudo in order to get it to work.