Disable SSH host key checking for the `gcloud app instances ssh` command - ssh

Let's say I have some command/script that I want to execute on my gcloud app instance:
gcloud app instances ssh --quiet \
--version=${version} --service=${service} ${instance_id} --container=gaeapp -- \
bash commands.sh
How to disable SSH host key checking for gcloud app instances ssh? Because currently I have the following result:
Executing command in container blah-blah (version=20180813t144010, service=default)
...
Sending public key to instance [apps/blah-blah].
Waiting for operation [apps/blah-blah] to complete...done.
The authenticity of host 'apps/blah-blah (123.123.123.123)' can't be established.
ECDSA key fingerprint is SHA256:....
Are you sure you want to continue connecting (yes/no)?

There isn't currently an option for that, but you could make a feature request at the public issue tracker.
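That said, the prompt above comes from the underlying ssh client that gcloud invokes, so if you can accept the security trade-off, one workaround is to relax host key checking in your SSH client config. A minimal sketch, applied broadly (narrow the Host pattern if you can):
# ~/.ssh/config -- disables host key verification; understand the risk first
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null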

Related

Connecting to my remote site using git bash shell SSH

I can connect using these credentials through ftp but not through ssh.
Timothy@ement MINGW64 ~
$ ssh timothy@mywebsite.com
ssh: connect to host mywebsite.com port 22: Connection timed out
I'm sure this question has been asked a million times before. Does it have anything to do with ssh keys?
I'm using SiteGround and in the SSH/shell access area I've added this:
t r timothy@mywebsite.com KtV/T4QvP4K9n7Zki9n+ZWp6 0.0.0.0/0 - ALL Remove Key | Add IP | Private Key
any help would be appreciated. Thank you.
Does it have anything to do with ssh keys?
Yes: see the official SiteGround documentation How to use SSH.
You need to enable SSH access and register your public SSH key.
Then you can use ssh (provided in your <path-to-git>/usr/bin) to access it:
ssh -p18765 <user>@yourdomain
SiteGround chooses to run its sshd on port 18765, not the default 22.
The SiteGround tutorials are junk; two out of the three chat support staff I spoke with just referred me to the tutorials when I was attempting to make a connection to my SiteGround server over SSH.
These are the steps that finally worked:
From the cPanel Advanced section select SSH/Shell Access
Generate a new key using their utility (make note of the password you used for later use).
*** They have a tutorial that should allow you to create a private key on Linux and then upload the public key to their site. That is "not recommended" and I was unable to get it to work.
Once you have their key listed in the current keys table, click the Private Key link.
Copy the private key to a file in your local .ssh directory (make sure the file mode is 0600).
Run the following command, pointing it at the key file you just saved:
ssh-add ~/.ssh/<your-key-file>
Enter the passphrase you used when generating the key with their utility.
If you get the response "Identity added: ...", you are all set.
You should now be able to use the command:
ssh <user>@yourdomain -p18765
It doesn't look like they have X11 forwarding enabled, though, so if you use ssh -X you will get:
X11 forwarding request failed on channel 0
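To avoid typing the port and key every time, the setup above can be captured in ~/.ssh/config. A sketch, where the key file name is a placeholder for whatever you saved the private key as:
# ~/.ssh/config
Host mywebsite.com
    Port 18765
    User timothy
    IdentityFile ~/.ssh/siteground_key
After that, a plain ssh mywebsite.com should pick up the port, user, and key automatically.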

Issue remoting into a device and doing a simple ping test with Ansible

After following instructions both online and in a couple of books, I am unsure of why this is happening. I have a feeling there is a missing setting, but here is the setup:
I am attempting to use the command:
ansible all -u $USER -m ping -vvvv
Obviously using the -vvvv for debugging, but not much output aside from the fact it says it's attempting to connect. I get the following error:
S4 | FAILED => FAILED: Authentication failed.
S4 stands for switch 4, a Cisco switch I am attempting to automate configuration and show commands on. I know 100% the password I set in the host_vars file is correct, as it works when I use it from a standard SSH client.
Here are my non-default config settings in the ansible.cfg file:
[defaults]
transport=paramiko
hostfile = ./myhosts
host_key_checking=False
timeout = 5
My myhosts file:
[cisco-switches]
S4
And my host_vars file for S4:
ansible_ssh_host: 192.168.1.12
ansible_ssh_pass: password
My current version is 1.9.1, running on a CentOS VM. I do have an ACL applied on the management interface of the switch, but it allows remote connections from this particular IP.
Please advise.
Since you are using Ansible to automate commands on a Cisco switch, I guess you want to make the SSH connection to the switch without being prompted for a password or being asked to press [Y/N] to confirm the connection.
To do that, I recommend configuring the Cisco IOS SSH server on the switch to perform RSA-based user authentication.
First of all you need to generate RSA key pair on your Linux box:
ssh-keygen -t rsa -b 1024
Note: You can use 2048 instead of 1024, but be aware that some IOS versions will accept a maximum of 254 characters for an SSH public key.
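You can display the public key you will paste onto the switch, and roughly check its length against that limit, with:
cat ~/.ssh/id_rsa.pub
wc -c ~/.ssh/id_rsa.pub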
At switch side:
conf t
ip ssh pubkey-chain
username test
key-string
Paste the entire public key exactly as it appears in cat ~/.ssh/id_rsa.pub,
including the leading ssh-rsa and the trailing username@hostname comment.
Please note that some IOS versions will accept
a maximum of 254 characters.
You can paste it across multiple lines.
exit
exit
If you need the 'test' user to be able to execute privileged IOS commands:
username test privilege 15 secret _TEXT_CLEAR_PASSWORD_
Then test the connection from your Linux box so that the switch is added to the known_hosts file. This prompt will only appear once for each switch/host not already in the known_hosts file:
ssh test@10.0.0.1
The authenticity of host '10.0.0.1 (10.0.0.1)' can't be established.
RSA key fingerprint is xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:d6:4b:d1:67.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.1' (RSA) to the list of known hosts.
ciscoswitch#
ciscoswitch#exit
Finally, test the connection using Ansible over SSH with the raw module, for example:
ansible inventory -m raw -a "show env all" -u test
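With key-based authentication in place, the host_vars file from the question could also drop the password and point at the private key instead. A sketch using Ansible 1.9-era variable names (the key path is an example):
# host_vars/S4
ansible_ssh_host: 192.168.1.12
ansible_ssh_user: test
ansible_ssh_private_key_file: ~/.ssh/id_rsa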
I hope you find it useful.

gcloud compute ssh from one VM to another VM on Google Cloud

I am trying to ssh into a VM from another VM in Google Cloud using the gcloud compute ssh command. It fails with the below message:
/usr/local/bin/../share/google/google-cloud-sdk/./lib/googlecloudsdk/compute/lib/base_classes.py:9: DeprecationWarning: the sets module is deprecated
import sets
Connection timed out
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255]. See https://cloud.google.com/compute/docs/troubleshooting#ssherrors for troubleshooting hints.
I made sure the ssh keys are in place but still it doesn't work. What am I missing here?
This assumes that you have already connected to the externally visible instance via SSH with gcloud beforehand.
From your local machine, start ssh-agent with the following command to manage your keys for you:
me@local:~$ eval `ssh-agent`
Call ssh-add to load your gcloud compute keys from your local computer into the agent, so they are used for authentication in all SSH commands:
me@local:~$ ssh-add ~/.ssh/google_compute_engine
Log into an instance with an external IP address while supplying the -A argument to enable authentication agent forwarding.
gcloud compute ssh --ssh-flag="-A" INSTANCE
source: https://cloud.google.com/compute/docs/instances/connecting-to-instance#sshbetweeninstances.
I am not sure about the flags, because it's not working for me, but maybe I have a different OS or gcloud version and it will work for you.
Here are the steps I ran on my Mac to connect to the Google Dataproc master VM and then hop onto a worker VM from the master VM. I ssh'd to the master VM to get its IP.
$ gcloud compute ssh cluster-for-cameron-m
Warning: Permanently added '104.197.45.35' (ECDSA) to the list of known hosts.
I then exited. I enabled forwarding for that host.
$ nano ~/.ssh/config
Host 104.197.45.35
ForwardAgent yes
I added the gcloud key.
$ ssh-add ~/.ssh/google_compute_engine
I then verified that it was added by listing the key fingerprints with ssh-add -l. I reconnected to the master VM and ran ssh-add -l again to verify that the keys were indeed forwarded. After that, connecting to the worker node worked just fine.
ssh cluster-for-cameron-w-0
About using SSH Agent Forwarding...
Because instances are frequently created and destroyed in the cloud, the (recreated) host fingerprint keeps changing. If the new fingerprint doesn't match the one in ~/.ssh/known_hosts, SSH automatically disables agent forwarding. The solution is:
$ ssh -A -o UserKnownHostsFile=/dev/null ...
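If you hop between instances regularly, the same flags can live in ~/.ssh/config instead of being retyped. A sketch using the master IP from above as the pattern (note that this also disables host key verification for that host):
# ~/.ssh/config
Host 104.197.45.35
    ForwardAgent yes
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null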

Can't SSH after creating an Instance from Command line

I am creating an instance from command line using command
nova boot --no-service-net --no-public --disk-config AUTO --config-drive=true --flavor 2 --key-name key1 --image c28bc1e8-a25f-413c-9e13-fecdd5d6f522 test
when instance launched successfully I tried to ssh instance by using this command
ssh -i key1.key fedora@10.0.0.10
but it gives me a permission error:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
And when I create an instance from the Dashboard/Horizon, I can ssh without any problem using the same style of command: ssh -i key2.key fedora@10.0.0.12
Can anyone tell me why I can't ssh into an instance created from the command line?
There was a problem with the SSH key generation. I was generating the key like this:
ssh-keygen -t rsa -f newdemokey.key
and then adding that key to the nova keypair list. That did not work for SSHing into the instance (ssh-keygen -f writes the private key to newdemokey.key and the public key to newdemokey.key.pub, so it is easy to register the wrong file).
What worked was generating the key with the defaults:
ssh-keygen
and then adding the public key to the nova keypair list:
nova keypair-add --pub-key ~/.ssh/id_rsa.pub test-key
SSH will then work with a new instance booted with that keypair.
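Alternatively, nova can generate the keypair itself and print the private key to stdout, which sidesteps the file mix-up entirely. A sketch (the keypair name is an example; reuse the boot flags from the question):
nova keypair-add test-key2 > test-key2.pem
chmod 600 test-key2.pem
ssh -i test-key2.pem fedora@10.0.0.10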

Disabling unidentified host confirmation when connecting to Amazon EC2 instances using SSH

I am writing a script using boto and Python to automatically launch an Amazon EC2 instance and interact with it using SSH. Everything works fine except that every time I establish the connection, SSH prompts me to confirm the authenticity of the host like this:
The authenticity of host 'ec2-174-129-121-25.compute-1.amazonaws.com (174.129.121.25)' can't be established.
RSA key fingerprint is 26:09:bd:21:4f:55:20:3f:0d:fc:5f:cc:3e:08:30:db.
Are you sure you want to continue connecting (yes/no)?
My SSH command is:
ssh -i ssh2.pem root@ec2-174-129-121-25.compute-1.amazonaws.com
Since every EC2 instance is a new host, I have to confirm this every time, but I want an automatic script without any user input. What is the best solution?
Use -o StrictHostKeyChecking=no and, optionally, set UserKnownHostsFile to /dev/null (if you want to be totally insecure about things). But remember, you're bypassing security measures meant to protect you!
Edit: and probably CheckHostIP=no too. See man ssh for all the gory bits.
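Putting those options together with the command from the question, a fully non-interactive invocation might look like this (hostname reused from above for illustration):
ssh -i ssh2.pem \
    -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null \
    -o CheckHostIP=no \
    root@ec2-174-129-121-25.compute-1.amazonaws.com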
For PuTTY on Windows you can use:
echo y | plink -pw yourpassword root@yourservername.com