Unable to Get JupyterLab Link - ssh

Google recommends using:
gcloud compute instances describe --project NAME --zone ZONE INSTANCE | grep googleusercontent.com | grep datalab
But when I run this, nothing shows up. I can access JupyterLab through normal SSH tunnelling, however. How should I fix this problem?

This issue is caused by the last grep. Your command should look like this:
gcloud compute instances describe --project NAME --zone ZONE INSTANCE | grep googleusercontent.com
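If you want to pull out just the link, a slightly tighter filter might look like this (a sketch; it assumes the proxy URL shows up as an https://...googleusercontent.com entry somewhere in the describe output):
gcloud compute instances describe --project NAME --zone ZONE INSTANCE | grep -o 'https://[^" ]*googleusercontent\.com[^" ]*'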
Good luck!

The current datalab documentation doesn't appear to require this:
https://cloud.google.com/datalab/docs/quickstart
datalab create ${INSTANCE} --project=${PROJECT} --zone=${ZONE}
datalab connect ${INSTANCE} --project=${PROJECT} --zone=${ZONE}
As I suspect you're doing, you may also:
gcloud compute ssh ${INSTANCE} \
--project=${PROJECT} \
--zone=${ZONE} \
--ssh-flag="-L 8081:localhost:8080"
These achieve the same result.
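Once the tunnel is up, the notebook should be reachable on the local end of the forward, for example:
# 8081 is the local port chosen in the --ssh-flag above; adjust it if you forwarded a different port
xdg-open http://localhost:8081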
Please reference the documentation that you're using in your question so that we may better help.

Related

Using aws emr add-steps for spark-submit

We have a complicated spark-submit command that we would like to submit to AWS EMR using the aws emr add-steps CLI command. We are having trouble figuring out the correct syntax to use. For example, consider the example command from Apache's Running Spark on YARN page:
$ ./bin/spark-submit --class my.main.Class \
--master yarn \
--deploy-mode cluster \
--jars my-other-jar.jar,my-other-other-jar.jar \
my-main-jar.jar \
app_arg1 app_arg2
Following the guidance from this EMR command-runner page, we created something like this:
$ aws emr add-steps \
--cluster-id j-123456789 \
--steps Type=CUSTOM_JAR,Name='Test_Job',Jar='command-runner.jar',ActionOnFailure=CONTINUE,Args=[\
./bin/spark-submit,\
--class,my.main.Class,\
--master,yarn,\
--deploy-mode cluster,\
--jars,my-other-jar.jar,my-other-other-jar.jar,my-main-jar.jar,app_arg1,app_arg2]
However, because of the apparent parsing at commas, the command appears to associate --jars only with "my-other-jar.jar", while "my-other-other-jar.jar" is not included. I'm hoping somebody can tell us the proper syntax to use. For example, should we use --jars for each extra jar, like:
--jars,my-other-jar.jar,--jars,my-other-other-jar.jar
or maybe there is some special list syntax, e.g.,
--jars,[my-other-jar.jar,my-other-other-jar.jar]
or something else. Can anybody tell us, or point us to, the correct syntax to use for spark-submit arguments that might take a list, i.e., not just --jars, but also --conf, --files, ...?
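One workaround, sketched here on the assumption that a JSON step definition fits your workflow, is to pass the step to aws emr add-steps from a file, where every argument is its own JSON array element and embedded commas are not special (step.json is just an example filename):
aws emr add-steps --cluster-id j-123456789 --steps file://./step.json
with step.json containing something like:
[
  {
    "Type": "CUSTOM_JAR",
    "Name": "Test_Job",
    "ActionOnFailure": "CONTINUE",
    "Jar": "command-runner.jar",
    "Args": [
      "spark-submit",
      "--class", "my.main.Class",
      "--master", "yarn",
      "--deploy-mode", "cluster",
      "--jars", "my-other-jar.jar,my-other-other-jar.jar",
      "my-main-jar.jar",
      "app_arg1", "app_arg2"
    ]
  }
]
The comma-separated jar list survives here because it is a single JSON string.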

Is there a way to execute commands remotely using the gcloud compute ssh utility?

I know it is possible to execute commands remotely using ssh (in a file) like so:
#!/bin/sh
ssh ${USER}@${SERVER_IP} <<EOF
cd ${PROJECT_PATH}
git pull
exit
EOF
My question is this: is it possible to do the same with gcloud compute ssh? I mean:
gcloud compute ssh vm-internal --zone us-central1-c --tunnel-through-iap << EOF
...
...
EOF
PS: The instance is private with no external IP, hence the use of IAP.
As long as you can ssh onto the instance, you should be able to:
gcloud compute ssh .... --command="bash -s" <<EOF
echo "Hello Freddie"
ls -l
EOF
You can test this (thanks to gcloud's consistency) using Cloud Shell:
gcloud alpha cloud-shell ssh --command="bash -s" <<EOF
echo "Hello Freddie"
ls
EOF
NOTE: you may not need the -s, but I think it's preferred.
Yields:
Automatic authentication with GCP CLI tools in Cloud Shell is disabled. To enable, please rerun command with `--authorize-session` flag.
Hello Freddie
Projects
README-cloudshell.txt -> /google/devshell/README-cloudshell.txt
stackoverflow
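Applied to the private instance from the question, the same pattern should also work over IAP (a sketch; the quoted 'EOF' keeps ${...} from being expanded locally, and /path/to/project is a placeholder):
gcloud compute ssh vm-internal --zone us-central1-c --tunnel-through-iap --command="bash -s" <<'EOF'
cd /path/to/project
git pull
EOF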

How to copy / scp files to a Google Cloud VM with the gcloud SDK?

I am playing with a Linux VM in GCP.
Any idea how to copy files from my local machine to the GCP VM via scp using the gcloud SDK?
This command is not working:
gcloud beta compute scp file user@test01:
ERROR: (gcloud.beta.compute.scp) Could not fetch resource:
- The resource 'projects/pyton-app/zones/europe-west3-a/instances/test01' was not found
But I can log in via ssh with the gcloud command below:
gcloud beta compute ssh --zone "europe-west3-c" "test01" --project "pyton-app"
Here are other options:
gcloud beta compute scp
ERROR: (gcloud.beta.compute.scp) argument [[USER@]INSTANCE:]SRC [[[USER@]INSTANCE:]SRC ...] [[USER@]INSTANCE:]DEST: Must be specified.
Usage: gcloud beta compute scp [[USER@]INSTANCE:]SRC [[[USER@]INSTANCE:]SRC ...] [[USER@]INSTANCE:]DEST [optional flags]
optional flags may be --compress | --dry-run | --force-key-file-overwrite |
--help | --internal-ip | --plain | --port | --recurse |
--scp-flag | --ssh-key-expiration |
--ssh-key-expire-after | --ssh-key-file |
--strict-host-key-checking | --tunnel-through-iap |
--zone
...yes, the instance is in this project, as I can log in via ssh, but I am unable to copy via scp. I also tried the command you advised, but it is not working either:
gcloud beta compute scp r.script user@test01:/tmp --project "pyton-app"
ERROR: (gcloud.beta.compute.scp) Could not fetch resource:
- The resource 'projects/pyton-app/zones/europe-west3-a/instances/test01' was not found
I also tried the second option; it is not working either:
gcloud compute scp r.script user@test01:/tmp --project "pyton-app"
ERROR: (gcloud.compute.scp) Could not fetch resource:
- The resource 'projects/pyton-app/zones/europe-west3-a/instances/test01' was not found
You aren't in the correct zone.
Look at the error message
- The resource 'projects/pyton-app/zones/europe-west3-a/instances/test01' was not found
-> europe-west3-a
Look at your ssh command
gcloud beta compute ssh --zone "europe-west3-c" "test01" --project "pyton-app"
-> europe-west3-c
To solve this, there are 2 solutions:
Add the zone in your scp command
gcloud beta compute scp --zone "europe-west3-c" file user@test01:
Set the zone by default in your gcloud config
gcloud config set compute/zone europe-west3-c
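Putting it together, the scp command from the question would then look like this (same zone and project as your working ssh command):
gcloud beta compute scp r.script user@test01:/tmp --zone "europe-west3-c" --project "pyton-app"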

Error when doing the TensorFlow tutorial

I am following this tutorial to do training on Google Cloud ML Engine. I followed it step by step, but I am facing an error when I submit the ML job to the cloud. I ran this command:
sam@sam-VirtualBox:~/models/research$ gcloud ml-engine jobs submit training `whoami`_object_detection_`date +%s` --job-dir=gs://tf_testing/train --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz --module-name object_detection.train --region us-central1 --config object_detection/samples/cloud/cloud.yml -- --train_dir=gs://tf_testing/train --pipeline_config_path=gs://tf_testing/data/faster_rcnn_resnet101_pets.config
and I got this error.
ERROR: (gcloud.ml-engine.jobs.submit.training) FAILED_PRECONDITION: Field: package_uris Error: The provided GCS paths [gs://tf_testing/train/packages/8ec87a281aadb58d3d82462bbffafa9d7e521cc03025209704bc643eb9f3bc37/slim-0.1.tar.gz, gs://tf_testing/train/packages/8ec87a281aadb58d3d82462bbffafa9d7e521cc03025209704bc643eb9f3bc37/object_detection-0.1.tar.gz] cannot be read by service account service-499049193648@cloud-ml.google.com.iam.gserviceaccount.com. - '@type': type.googleapis.com/google.rpc.BadRequest fieldViolations: - description: The provided GCS paths [gs://tf_testing/train/packages/8ec87a281aadb58d3d82462bbffafa9d7e521cc03025209704bc643eb9f3bc37/slim-0.1.tar.gz, gs://tf_testing/train/packages/8ec87a281aadb58d3d82462bbffafa9d7e521cc03025209704bc643eb9f3bc37/object_detection-0.1.tar.gz] cannot be read by service account service-499049193648@cloud-ml.google.com.iam.gserviceaccount.com. field: package_uris
I saw this post and this post and tried the solutions, but they did not help. FYI, I did not change PATH_TO_BE_CONFIGURED when I ran this command. Could that be the reason?
sed -i "s|PATH_TO_BE_CONFIGURED|"gs://${YOUR_GCS_BUCKET}"/data|g" \
object_detection/samples/configs/faster_rcnn_resnet101_pets.config
You need to allow the service account to read/write to your bucket:
gsutil acl ch -u $SVCACCT:WRITE gs://$BUCKET/
gsutil defacl ch -u $SVCACCT:O gs://$BUCKET/
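Filled in with the bucket and service account from your error message, that would look something like this (double-check the service account address against your own project):
SVCACCT=service-499049193648@cloud-ml.google.com.iam.gserviceaccount.com
BUCKET=tf_testing
gsutil acl ch -u $SVCACCT:WRITE gs://$BUCKET/
gsutil defacl ch -u $SVCACCT:O gs://$BUCKET/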
Alternatively:
gcloud ml-engine init-project
This will add the service account as an editor on the project. Make sure to do this in the project that owns the bucket.

Setting Static Hostname on GCP VMs

On AWS, setting a static hostname is a breeze, but it is downright frustrating getting this to work on GCP.
I have used:
gcloud compute instances add-metadata $instanceName --metadata hostname=$instanceStaticHostname
sudo crontab -e
@reboot hostname $(curl --silent "http://metadata.google.internal/computeMetadata/v1/instance/attributes/hostname" -H "Metadata-Flavor: Google")
edited /etc/rc.local with:
hostnamectl set-hostname --static $instanceStaticHostname
chmod +x /etc/rc.d/rc.local
but none of these has helped.
Does anyone know a better way to set a static hostname on GCP so that it persists?
Thanks!
1 - Edit the file: vi /etc/dhcp/dhclient.d/google_hostname.sh
2 - Clear the file and write the single line: hostnamectl set-hostname server1.example.biz --static
3 - Reboot
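As a sketch, the same three steps as commands (server1.example.biz is only an example hostname; substitute your own):
# overwrite the script so it pins the hostname, then reboot
sudo bash -c 'echo "hostnamectl set-hostname server1.example.biz --static" > /etc/dhcp/dhclient.d/google_hostname.sh'
sudo reboot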