Can I use autossh for gcloud compute ssh? - ssh

For some reason ssh doesn't work to set up a tunnel to my Google Compute Engine instance. I have to use gcloud compute ssh. I'd really like to set up a persistent/resilient tunnel, like one gets with autossh. Is there any way I can do so using gcloud compute ssh?

gcloud compute ssh simply copies your ssh key to the project sshKeys metadata (see Cloud Console > Compute Engine > Metadata > SSH Keys) and runs standalone SSH with the ~/.ssh/google_compute_engine key. To see the exact command line invoked, run gcloud compute ssh --dry-run .... Anything that's possible with typical SSH is possible with gcloud compute ssh.
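For example, a quick way to see what gcloud actually runs (the instance and zone names here are hypothetical):
# Prints the equivalent ssh command instead of executing it
gcloud compute ssh my-instance --zone us-central1-a --dry-run
The printed command is plain ssh using the ~/.ssh/google_compute_engine key, so the same key and flags can be reused with other SSH tooling such as autossh.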
Another option to investigate is gcloud compute config-ssh, which syncs your ~/.ssh/google_compute_engine SSH key to the project and sets up your ~/.ssh/config file so that you can run ssh without gcloud.
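As a sketch of how that could give you a persistent tunnel (assuming autossh is installed; the instance, zone, project and port numbers below are hypothetical):
# Push the key and write Host aliases of the form INSTANCE.ZONE.PROJECT into ~/.ssh/config
gcloud compute config-ssh
# Keep a forwarded port alive with autossh; -M 0 relies on ssh's keepalives instead of a monitor port
autossh -M 0 -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 16006:127.0.0.1:8080 my-instance.us-central1-a.my-project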

Related

Unable to connect to worker node using ssh in google cloud

I am using the ssh command to connect to one of the nodes in the cluster from the master node (Google Cloud Shell):
$ kubectl get nodes
$ kubectl describe node gke-hello-server-default-pool-03b44665-ng8w
I selected the external IP and then tried using
ssh 35.247.97.140
Permission denied (publickey).
I then tried again with the key file, using the same node:
ssh -i ~/.ssh/id_rsa.pub hostname@35.247.97.140
Permission denied (publickey).
But in both cases I am getting permission denied.
The easiest way would be to use
gcloud compute ssh --project [PROJECT_ID] --zone [ZONE] [INSTANCE_NAME]
to connect to a Linux instance.
There is documentation available about Connecting to instances.
If for some reason that does not work, navigate to GCP > Compute Engine > VM instances, choose the instance you wish to connect to and click the SSH button.
This will connect you as your home user.
You can then add a new user to the system and generate an SSH key for it.
Here is the documentation about Managing SSH keys in metadata.
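A minimal sketch of that key flow, assuming a hypothetical user newuser and instance my-instance (see the linked documentation for the authoritative steps):
# Generate a key pair for the new user
ssh-keygen -t rsa -f ~/.ssh/newuser-gce-key -C newuser
# Build the metadata entry in the USERNAME:PUBLIC_KEY format
echo "newuser:$(cat ~/.ssh/newuser-gce-key.pub)" > /tmp/gce-ssh-keys
# Attach it to the instance; this replaces the existing ssh-keys value, so include any keys you want to keep in the file
gcloud compute instances add-metadata my-instance --metadata-from-file ssh-keys=/tmp/gce-ssh-keys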

How to ssh from k8s pod to other computers

Edit: Before you downvote, please comment why, so I can improve next time. Thank you.
I tried to ssh from a pod in Kubernetes to another VM in GCE, mainly because I want to use rsync between the two. At the moment, I use gcloud compute scp to copy the file to my local computer and then kubectl cp.
I used kubectl exec to access the pod, set up ssh with ssh-keygen, then copied id_rsa.pub to /home/user/.ssh/ on the designated VM, but when I try ssh -v user@ip it just says the connection timed out.
I tried setting up gcloud inside the pod to be able to use gcloud compute ssh, and I also tried gcloud compute config-ssh; the results are the same.
When I ssh from my own computer it works fine.
I think a firewall or network configuration is causing this problem, but I'm not really sure how to fix it. Should I expose the SSH port with a Kubernetes LoadBalancer Service, or should I edit my firewall rules in the VPC network?
Beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network, but outside the cluster. This change was made for security reasons.
You can find the solution below.
First, find your cluster's network:
gcloud container clusters describe [CLUSTER_NAME] --format=get"(network)"
Then get the cluster's IPv4 CIDR used for the containers:
gcloud container clusters describe [CLUSTER_NAME] --format=get"(clusterIpv4Cidr)"
Finally create a firewall rule for the network, with the CIDR as the source range, and allow all protocols:
gcloud compute firewall-rules create "[CLUSTER_NAME]-to-all-vms-on-network" --network="[NETWORK]" --source-ranges="[CLUSTER_IPV4_CIDR]" --allow=tcp,udp,icmp,esp,ah,sctp
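For example, with a hypothetical cluster named my-cluster in zone us-central1-a, the three steps can be chained in a shell:
# Look up the cluster's network and container CIDR, then open the firewall
NETWORK=$(gcloud container clusters describe my-cluster --zone us-central1-a --format='get(network)')
CLUSTER_IPV4_CIDR=$(gcloud container clusters describe my-cluster --zone us-central1-a --format='get(clusterIpv4Cidr)')
gcloud compute firewall-rules create "my-cluster-to-all-vms-on-network" --network="$NETWORK" --source-ranges="$CLUSTER_IPV4_CIDR" --allow=tcp,udp,icmp,esp,ah,sctp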

Permission Denied (public key)

I'm running a google cloud instance. I'm able to successfully connect to the instance via ssh.
But I'm not able to do the port forwarding to my localhost.
Here's the command I used:
ssh -L 16006:127.0.0.1:8080 username@instance_external_ip
When I run the above command, I get the following error:
The authenticity of the host cannot be determined.
username@instance_external_ip: Permission Denied (public key)
How to solve this problem?
I found the answer to this question. The problem I had was that the server did not know my SSH keys. So I did the following and it worked.
I deleted all the SSH keys on my local machine and connected to my gcloud instance using the following command. The gcloud command creates the SSH keys automatically and transfers them to the instance, so there is no need to manually copy and paste the keys.
gcloud compute --project "project_name" ssh --zone "zone_name" "instance_name"
After this I connected to my instance using ssh. If you try to set up the SSH tunnel before doing this, the server won't know your key and running ssh -L ... will fail with permission denied.
Therefore, instead of connecting directly with ssh -L ..., connect using the SSH key file stored in the .ssh directory. Use the following command:
ssh -i ~/.ssh/google_compute_engine -L <local_port>:127.0.0.1:<remote_host_port> username@server_ip
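If, as in the question at the top of this page, you want the tunnel to survive drops, the same key and forwarding options can be handed to autossh once the plain ssh command above works (assuming autossh is installed):
autossh -M 0 -N -i ~/.ssh/google_compute_engine -L <local_port>:127.0.0.1:<remote_host_port> username@server_ip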

Google DataProc Spark - getting "permission denied (publickey)" error when trying to SSH to a worker node

Small cluster: 1 master, 2 workers. I can access all nodes (master and workers) just fine using the gcloud SDK. However, once I access the master node and try to ssh to a worker node, I get a "permission denied (publickey)" error. Note that I can ping the node successfully, but SSH does not work.
Dataproc does not install SSH keys between the master and worker nodes, so that is working as intended.
You may be able to use SSH agent forwarding, with something like:
# Add Compute Engine private key to SSH agent
ssh-add ~/.ssh/google_compute_engine
# Forward key to SSH agent of master
gcloud compute ssh --ssh-flag="-A" [CLUSTER]-m
# SSH into worker
ssh [CLUSTER]-w-0
You could also configure SSH keys using an initialization action or use gcloud ssh from the master node (if you gave the cluster the compute.rw scope).
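For the last option, a minimal sketch, assuming the cluster was created with the compute.rw scope ([ZONE] is the cluster's zone):
# Run on the master node; gcloud adds your key to metadata and then invokes ssh
gcloud compute ssh [CLUSTER]-w-0 --zone=[ZONE]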

gcloud compute ssh from one VM to another VM on Google Cloud

I am trying to ssh into a VM from another VM in Google Cloud using the gcloud compute ssh command. It fails with the below message:
/usr/local/bin/../share/google/google-cloud-sdk/./lib/googlecloudsdk/compute/lib/base_classes.py:9: DeprecationWarning: the sets module is deprecated
import sets
Connection timed out
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255]. See https://cloud.google.com/compute/docs/troubleshooting#ssherrors for troubleshooting hints.
I made sure the ssh keys are in place but still it doesn't work. What am I missing here?
This assumes that you have already connected to the externally visible instance with gcloud compute ssh beforehand.
From your local machine, start ssh-agent with the following command to manage your keys for you:
me@local:~$ eval `ssh-agent`
Call ssh-add to load the Compute Engine key from your local computer into the agent, and use it for authentication in all SSH commands:
me@local:~$ ssh-add ~/.ssh/google_compute_engine
Log into an instance with an external IP address while supplying the -A argument to enable authentication agent forwarding.
gcloud compute ssh --ssh-flag="-A" INSTANCE
source: https://cloud.google.com/compute/docs/instances/connecting-to-instance#sshbetweeninstances.
I am not sure about the flags, because they are not working for me, but maybe I have a different OS or gcloud version and they will work for you.
Here are the steps I ran on my Mac to connect to the Google Dataproc master VM and then hop onto a worker VM from the master VM. I ssh'd to the master VM to get its IP.
$ gcloud compute ssh cluster-for-cameron-m
Warning: Permanently added '104.197.45.35' (ECDSA) to the list of known hosts.
I then exited. I enabled forwarding for that host.
$ nano ~/.ssh/config
Host 104.197.45.35
ForwardAgent yes
I added the gcloud key.
$ ssh-add ~/.ssh/google_compute_engine
I then verified that it was added by listing the key fingerprints with ssh-add -l. I reconnected to the master VM and ran ssh-add -l again to verify that the keys were indeed forwarded. After that, connecting to the worker node worked just fine.
ssh cluster-for-cameron-w-0
About using SSH Agent Forwarding...
Because instances are frequently created and destroyed on the cloud, the (recreated) host fingerprint keeps changing. If the new fingerprint doesn't match the one in ~/.ssh/known_hosts, SSH automatically disables agent forwarding. The solution is:
$ ssh -A -o UserKnownHostsFile=/dev/null ...