My overall goal is to run scikit-learn scripts on GCE, so I am trying to use Anaconda/IPython (which I use on my desktop) on GCE.
I am following this tutorial (https://cloud.google.com/dataproc/tutorials/jupyter-notebook#verify_cluster_and_notebook_creation) but am stuck on the following step:
gcloud compute ssh --zone=<master-host-zone> \
--ssh-flag="-D 1080" --ssh-flag="-N" --ssh-flag="-n" <master-host-name>
My console always responds with the following error message and I have no idea what is wrong:
unknown option "-D 1080"
Thanks for your help!
This makes sense because on Windows, gcloud compute ssh uses PuTTY for SSH; the PuTTY client doesn't respect the -D flag. You'll have to use PuTTY-specific options for creating an SSH tunnel; I'm not a Windows user so I don't know what those are.
I'll get the tutorial updated.
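For what it's worth, PuTTY's command-line client plink does accept -D and -N, so a tunnel along the tutorial's lines might look something like this on Windows (a sketch only; the username, .ppk key path, and master host's external IP below are placeholders):
plink.exe -ssh -N -D 1080 -i mykey.ppk your_username@<master-host-external-ip>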
I am having no trouble sshing into a Google Cloud compute engine VM, but am unable to ssh into the master node of a Google Cloud Dataproc cluster.
Specifically,
gcloud compute ssh my-vm
works just fine, while
gcloud compute ssh mycluster-m
fails with error message:
admin@IP.ADDRESS: Permission denied (publickey).
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
The Compute Engine VM and the Dataproc cluster are in the same project. I understand from the error message that this is something related to the SSH keys, but I am not sure how to fix it. I checked the SSH keys in the project via the Cloud Console and they are correct, and I tried the usual gcloud auth login to reset my gcloud project login details.
Any hints on how to fix this?
Edit: I am trying to SSH from my machine, not the Cloud Console. That's a good point; I will try that and see if it is possible. But in the end I want to use this to connect to a Jupyter notebook from my local computer, so it does not solve the issue of being unable to SSH from my machine to the VM.
Concerning the command used to create the Dataproc cluster: I use tools from the Hail dataproc Python library, but these are basically just convenience wrappers around the gcloud compute commands, and those are what is failing. The command I used to create the Dataproc cluster was:
gcloud beta dataproc clusters create \
test \
--image-version=1.4-debian9 \
--properties=^|||^spark:spark.task.maxFailures=20|||spark:spark.driver.extraJavaOptions=-Xss4M|||spark:spark.executor.extraJavaOptions=-Xss4M|||spark:spark.speculation=true|||hdfs:dfs.replication=1|||dataproc:dataproc.logging.stackdriver.enable=false|||dataproc:dataproc.monitoring.stackdriver.enable=false|||spark:spark.driver.memory=41g \
--initialization-actions=gs://hail-common/hailctl/dataproc/0.2.53/init_notebook.py \
--metadata=^|||^WHEEL=gs://hail-common/hailctl/dataproc/0.2.53/hail-0.2.53-py3-none-any.whl|||PKGS=aiohttp>=3.6,<3.7|aiohttp_session>=2.7,<2.8|asyncinit>=0.2.4,<0.3|bokeh>1.1,<1.3|decorator<5|dill>=0.3.1.1,<0.4|gcsfs==0.2.1|humanize==1.0.0|hurry.filesize==0.9|nest_asyncio|numpy<2|pandas>0.24,<0.26|parsimonious<0.9|PyJWT|python-json-logger==0.1.11|requests>=2.21.0,<2.21.1|scipy>1.2,<1.4|tabulate==0.8.3|tqdm==4.42.1|google-cloud-storage==1.25.* \
--master-machine-type=n1-highmem-8 \
--master-boot-disk-size=100GB \
--num-master-local-ssds=0 \
--num-preemptible-workers=0 \
--num-worker-local-ssds=0 \
--num-workers=2 \
--preemptible-worker-boot-disk-size=40GB \
--worker-boot-disk-size=40GB \
--worker-machine-type=n1-standard-8 \
--initialization-action-timeout=20m \
--labels=creator=my_name \
--max-idle=10m
Turns out the problem is that the cluster creates a new account called my_username on the cluster master VM, but I am logged into my laptop as a user called 'admin'. There is a mismatch between the account name and the key at the destination, so the login fails.
This can be fixed by adding the username to the gcloud command:
gcloud compute ssh my_username@mycluster-m
Though I still don't really understand why the SSH keys are different for the Dataproc VM and a Compute Engine VM; I'd be happy if someone could enlighten me.
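For what it's worth, since gcloud compute ssh ultimately just runs plain ssh with the ~/.ssh/google_compute_engine key, an equivalent direct invocation would be roughly the following (a sketch; the username and external IP are placeholders):
ssh -i ~/.ssh/google_compute_engine my_username@<master-external-ip>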
Hi, I am still learning Docker's wonderful magical world. I use Docker on Linux with docker-machine. I have already added two existing Linux servers with docker-machine create and successfully run my containers on them. Now I am trying to do the same with an existing Google Compute Engine machine, which also runs Linux. I use the command:
docker-machine create --driver generic --generic-ip-address ipaddress --generic-ssh-key path_To_Key --generic-ssh-user user_Name machine_Name
And I get an error:
Error creating machine: Error checking the host: Error checking and/or
regenerating the certs: There was an error validating certificates for
host "X.X.X.X:2376": dial tcp X.X.X.X:2376: i/o timeout You can
attempt to regenerate them using 'docker-machine regenerate-certs
[name]'.
Then docker-machine does not know its IP, but it seems I can still give it a command through docker-machine ssh.
Although I am not able to log in with SSH anywhere else, and I must stop/remove the created machine and restart it.
Does anyone have a similar problem?
According to the generic driver's page in the Docker docs, try writing the flag as --generic-ip-address=ip_address, i.e. with an equals sign.
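In other words, something like this should work with the generic driver (a sketch keeping the original placeholder values, with each flag written in --flag=value form):
docker-machine create --driver generic \
  --generic-ip-address=ipaddress \
  --generic-ssh-key=path_To_Key \
  --generic-ssh-user=user_Name \
  machine_Name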
For some reason ssh doesn't work to set up a tunnel to my Google Compute Engine instance. I have to use gcloud compute ssh. I'd really like to set up a persistent/resilient tunnel, like one gets with autossh. Is there any way I can do so using gcloud compute ssh?
gcloud compute ssh simply copies your ssh key to the project sshKeys metadata (see Cloud Console > Compute Engine > Metadata > SSH Keys) and runs standalone SSH with the ~/.ssh/google_compute_engine key. To see the exact command line invoked, run gcloud compute ssh --dry-run .... Anything that's possible with typical SSH is possible with gcloud compute ssh.
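For example, to see what it actually runs without connecting (the instance name and zone here are placeholders):
gcloud compute ssh my-instance --zone us-central1-f --dry-run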
Another option to investigate is gcloud compute config-ssh, which syncs your ~/.ssh/google_compute_engine SSH key to the project and sets up your ~/.ssh/config file so that you can run ssh without gcloud.
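Once config-ssh has written those host aliases, you should be able to point autossh at them like any other SSH host. A sketch, assuming an alias of the form <instance>.<zone>.<project> and a SOCKS tunnel on port 1080:
autossh -M 0 -N -D 1080 \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  my-instance.us-central1-f.my-project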
This question may have been asked before but I don't understand the concept. Can you please help me here?
Weird issue since this morning: I just pushed my file to Google Cloud Compute Engine and am now seeing the error below. I don't know where to look for this error.
ri@ri-desktop:~$ gcloud compute --project "project" ssh --zone "europe-west1-b" "instance"
Warning: Permanently added '192.xx.xx.xx' (ECDSA) to the list of known hosts.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
This occurs when your compute instance has PermitRootLogin no in its SSHD config and you try to log in as root. You can change the login user by adding username@ before the instance name. Here is a complete example:
gcloud compute instances create my-demo-compute \
--zone us-central1-f \
--machine-type f1-micro \
--image-project debian-cloud \
--image-family debian-8 \
--boot-disk-size=10GB
gcloud --quiet compute ssh user@hostname --zone us-central1-f
In the example above, gcloud will set the correct credentials and make sure you can log in. You can add --quiet to skip the SSH key passphrase question.
One possible cause is that someone else in your project set the per-instance metadata for sshKeys (which overrides the project-wide metadata). When you run gcloud compute instances describe your-instance-name do you see a key called sshKeys in the metadata items?
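A quick way to check (the instance name and zone are placeholders):
gcloud compute instances describe your-instance-name --zone us-central1-f | grep -A 3 sshKeys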
It would also be helpful to see the contents of the latest log in ~/.config/gcloud/logs/. However, please make sure to scrub it of sensitive information.
I have a MacBook and, after facing the same problem, I re-created my SSH key in this format and it works fine.
Generate your key with:
ssh-keygen -t rsa -C your_username
Copy the key and paste it under the Compute Engine metadata (SSH Keys):
cat ~/.ssh/id_rsa.pub
It should work fine.
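For reference, each entry pasted into that metadata field is expected to take the username:key form, roughly like this (placeholder values):
your_username:ssh-rsa AAAAB3NzaC1yc2E...rest-of-key your_username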
I have an f1-micro instance, which I've been testing Docker on, created as follows:
$ gcloud compute instances create dockerbox \
--image container-vm-v20140731 \
--image-project google-containers \
--zone europe-west1-b \
--machine-type f1-micro
This all works fine.
I'm now in the process of upgrading to a larger Google Compute Engine VM. I took a snapshot of the f1-micro dockerbox, then used this as the boot source for the larger n1-standard-8 VM... this seems to create without problems until I try to SSH onto it.
via the command line:
$ gcloud compute --project "secure-electron-631" ssh --zone "europe-west1-b" "me@biggerbox"
ssh: connect to host xx.xx.xx.xx port 22: Connection timed out
ERROR: (gcloud.compute.ssh) Your SSH key has not propagated to your instance yet. Try running this command again.
via the browser, ssh connection I get:
Connection Failed
We are unable to connect to the VM on port 22. Please check that the VM is healthy and the SSH server is running.
I've tried multiple times but get the same result.
I've confirmed that biggerbox is RUNNING; I'm not sure about sshd.
OK, the problem seemed to stem from not detaching the micro instance from a mounted persistent disk when I took the snapshot. I detached and unmounted the PD volume, snapshotted the micro instance again, and based a new n1-standard-8 on it. It works OK now.
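For anyone hitting the same thing, the sequence described above can be done with gcloud roughly like this (a sketch; the data disk, snapshot, and boot disk names are placeholders, the boot disk is assumed to share the instance name, and the volume should be unmounted inside the guest first):
# inside the VM: unmount the persistent disk volume first, e.g. sudo umount /mnt/my-data
gcloud compute instances detach-disk dockerbox --disk my-data-disk --zone europe-west1-b
gcloud compute disks snapshot dockerbox --snapshot-names dockerbox-snap --zone europe-west1-b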
FYI, also handy for those troubleshooting GCE instance ssh:
https://github.com/GoogleCloudPlatform/compute-ssh-diagnostic-sh