I am playing with a Linux VM in GCP.
Any idea how to copy files from my local machine to the GCP VM via scp with the gcloud SDK?
This command is not working:
gcloud beta compute scp file user@test01:
ERROR: (gcloud.beta.compute.scp) Could not fetch resource:
- The resource 'projects/pyton-app/zones/europe-west3-a/instances/test01' was not found
But I can log in via ssh with the gcloud command below:
gcloud beta compute ssh --zone "europe-west3-c" "test01" --project "pyton-app"
Here are other options:
gcloud beta compute scp
ERROR: (gcloud.beta.compute.scp) argument [[USER@]INSTANCE:]SRC [[[USER@]INSTANCE:]SRC ...] [[USER@]INSTANCE:]DEST: Must be specified.
Usage: gcloud beta compute scp [[USER@]INSTANCE:]SRC [[[USER@]INSTANCE:]SRC ...] [[USER@]INSTANCE:]DEST [optional flags]
optional flags may be --compress | --dry-run | --force-key-file-overwrite |
--help | --internal-ip | --plain | --port | --recurse |
--scp-flag | --ssh-key-expiration |
--ssh-key-expire-after | --ssh-key-file |
--strict-host-key-checking | --tunnel-through-iap |
--zone
...Yes, the instance is in this project, since I can log in via ssh, but I am unable to copy via scp. I also tried the command you advised, but it is not working either:
gcloud beta compute scp r.script user@test01:/tmp --project "pyton-app"
ERROR: (gcloud.beta.compute.scp) Could not fetch resource:
- The resource 'projects/pyton-app/zones/europe-west3-a/instances/test01' was not found
I also tried the second option, which is not working either:
gcloud compute scp r.script user@test01:/tmp --project "pyton-app"
ERROR: (gcloud.compute.scp) Could not fetch resource:
- The resource 'projects/pyton-app/zones/europe-west3-a/instances/test01' was not found
You aren't in the correct zone.
Look at the error message
- The resource 'projects/pyton-app/zones/europe-west3-a/instances/test01' was not found
-> europe-west3-a
Look at your ssh command
gcloud beta compute ssh --zone "europe-west3-c" "test01" --project "pyton-app"
-> europe-west3-c
To solve this, there are 2 solutions:
Add the zone to your scp command
gcloud beta compute scp --zone "europe-west3-c" file user@test01:
Set the zone by default in your gcloud config
gcloud config set compute/zone europe-west3-c
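For example, putting it all together (a sketch that reuses the file, user, zone, and project names from the question), the copy command would look like this:
gcloud beta compute scp --zone "europe-west3-c" --project "pyton-app" r.script user@test01:/tmp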
I'm trying to get an ML job to run on AWS Batch. The job runs in a docker container, using credentials generated for a Task IAM Role.
I use DVC to manage the large data files needed for the task, which are hosted in an S3 repository. However, when the task tries to pull the data files, it gets an access denied message.
I can verify that the role has permissions to the bucket, because I can access the exact same files if I run an aws s3 cp command (as shown in the example below). But, I need to do it through DVC so that it downloads the right version of each file and puts it in the expected place.
I've been able to trace down the problem to s3fs, which is used by DVC to integrate with S3. As I demonstrate in the example below, it gets an access denied message even when I use s3fs by itself, passing in the credentials explicitly. It seems to fail on this line, where it tries to list the contents of the file after failing to find the object via a head_object call.
I suspect there may be a bug in s3fs, or in the particular combination of boto, http, and s3 libraries. Can anyone help me figure out how to fix this?
Here is a minimal reproducible example:
Shell script for the job:
#!/bin/bash
AWS_CREDENTIALS=$(curl http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=$(echo "$AWS_CREDENTIALS" | jq .AccessKeyId -r)
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_CREDENTIALS" | jq .SecretAccessKey -r)
export AWS_SESSION_TOKEN=$(echo "$AWS_CREDENTIALS" | jq .Token -r)
echo "AWS_ACCESS_KEY_ID=<$AWS_ACCESS_KEY_ID>"
echo "AWS_SECRET_ACCESS_KEY=<$(cat <(echo "$AWS_SECRET_ACCESS_KEY" | head -c 6) <(echo -n "...") <(echo "$AWS_SECRET_ACCESS_KEY" | tail -c 6))>"
echo "AWS_SESSION_TOKEN=<$(cat <(echo "$AWS_SESSION_TOKEN" | head -c 6) <(echo -n "...") <(echo "$AWS_SESSION_TOKEN" | tail -c 6))>"
dvc doctor
# Succeeds!
aws s3 ls s3://company-dvc/repo/
# Succeeds!
aws s3 cp s3://company-dvc/repo/00/0e4343c163bd70df0a6f9d81e1b4d2 mycopy.txt
# Fails!
python3 download_via_s3fs.py
download_via_s3fs.py:
import os
import s3fs
# Just to make sure we're reading the credentials correctly.
print(os.environ["AWS_ACCESS_KEY_ID"])
print(os.environ["AWS_SECRET_ACCESS_KEY"])
print(os.environ["AWS_SESSION_TOKEN"])
print("running with credentials")
fs = s3fs.S3FileSystem(
    key=os.environ["AWS_ACCESS_KEY_ID"],
    secret=os.environ["AWS_SECRET_ACCESS_KEY"],
    token=os.environ["AWS_SESSION_TOKEN"],
    client_kwargs={"region_name": "us-east-1"}
)
# Fails with "access denied" on ListObjectsV2
print(fs.exists("company-dvc/repo/00/0e4343c163bd70df0a6f9d81e1b4d2"))
Terraform for IAM role:
data "aws_iam_policy_document" "standard-batch-job-role" {
# S3 read access to related buckets
statement {
actions = [
"s3:Get*",
"s3:List*",
]
resources = [
data.aws_s3_bucket.company-dvc.arn,
"${data.aws_s3_bucket.company-dvc.arn}/*",
]
effect = "Allow"
}
}
Environment
OS: Ubuntu 20.04
Python: 3.10
s3fs: 2023.1.0
boto3: 1.24.59
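One way to narrow this down (a rough diagnostic sketch, not part of the original job script) is to call the two S3 operations involved directly from the same container shell: aws s3 cp only needs GetObject, while the listing fallback that s3fs performs needs ListBucket on a prefix:
# Run inside the same Batch container, with the same credentials.
# HeadObject is authorized like GetObject, which `aws s3 cp` already proves works.
aws s3api head-object \
  --bucket company-dvc \
  --key repo/00/0e4343c163bd70df0a6f9d81e1b4d2
# ListObjectsV2 with a prefix needs s3:ListBucket; this is the call s3fs falls back to.
aws s3api list-objects-v2 \
  --bucket company-dvc \
  --prefix repo/00/0e4343c163bd70df0a6f9d81e1b4d2
If the first call succeeds and the second is denied, the problem is with s3:ListBucket on the bucket ARN rather than with s3fs itself.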
I'm trying to launch my app on Google Compute Engine, and I get the following error:
Sep 26 22:46:09 debian google_guest_agent[411]: ERROR non_windows_accounts.go:199 Invalid ssh key entry - unrecognized format: ssh-rsa AAAAB...
I'm having a hard time interpreting it. I have the following startup script:
# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REPOSITORY="github_sleepywakes_thunderroost"
# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor
# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v16.15.0/node-v16.15.0-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
# Get the application source code from the Google Cloud Repository.
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/${PROJECTID}/r/${REPOSITORY} /opt/app/github_sleepywakes_thunderroost
# Install app dependencies
cd /opt/app/github_sleepywakes_thunderroost
npm install
# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/github_sleepywakes_thunderroost
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
# Application should now be running under supervisor
My instance shows I have 2 public SSH keys. The second begins like this one in the error, but after about 12 characters it is different.
Any idea why this might be occurring?
Thanks in advance.
Once you have deployed your VM instance, by default no SSH key is configured yet, but you can also configure the SSH key when deploying the VM instance.
To elaborate on the answer of @JohnHanley, I tried to test this in my environment.
Created a VM instance and verified the SSH configuration. By default there is no SSH key configured; as I said earlier, you can configure an SSH key when deploying the VM.
Created an SSH key pair via the CLI; you can use this link for detailed instructions.
Navigate to your VM instance: Turn off > EDIT > Security > Add Item > SSH key 1 - copy and paste the generated public key > Save > Power ON the VM instance.
Then test whether the VM instance is accessible.
Documentation link: How to add SSH keys to project metadata.
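For reference, a sketch of adding a key in the format the guest agent expects; the agent typically logs the "unrecognized format" error when an ssh-keys metadata entry is not in the USERNAME:KEY form. INSTANCE, ZONE, and USERNAME below are placeholders, not values from the question:
# Each line of the ssh-keys metadata must look like: USERNAME:ssh-rsa AAAA... comment
echo "USERNAME:$(cat ~/.ssh/id_rsa.pub)" > keys.txt
# Note: add-metadata replaces the existing ssh-keys value, so include any keys
# you want to keep in keys.txt as well.
gcloud compute instances add-metadata INSTANCE \
  --zone ZONE \
  --metadata-from-file ssh-keys=keys.txt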
I know it is possible to execute commands remotely using ssh (in a file) like so:
#!/bin/sh
ssh ${USER}@${SERVER_IP} <<EOF
cd ${PROJECT_PATH}
git pull
exit
EOF
My question is this: is it possible to do the same with gcloud compute ssh? I mean:
gcloud compute ssh vm-internal --zone us-central1-c --tunnel-through-iap << EOF
...
...
EOF
PS: The instance is private with no external IP, hence the use of IAP.
As long as you can ssh onto the instance, you should be able to:
gcloud compute ssh .... --command="bash -s" <<EOF
echo "Hello Freddie"
ls -l
EOF
You can test this (thanks to gcloud's consistency) using Cloud Shell:
gcloud alpha cloud-shell ssh --command="bash -s" <<EOF
echo "Hello Freddie"
ls
EOF
NOTE: you may not need the -s, but I think it's preferred.
Yields:
Automatic authentication with GCP CLI tools in Cloud Shell is disabled. To enable, please rerun command with `--authorize-session` flag.
Hello Freddie
Projects
README-cloudshell.txt -> /google/devshell/README-cloudshell.txt
stackoverflow
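Applying the same pattern to the private instance from the question (a sketch reusing the instance name, zone, and IAP flag given there):
gcloud compute ssh vm-internal \
  --zone us-central1-c \
  --tunnel-through-iap \
  --command="bash -s" <<EOF
# \$(hostname) is escaped so it expands on the remote VM, not locally.
echo "Hello from \$(hostname)"
ls -l
EOF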
I have a problem executing the lxc command. When I try without sudo I get this error:
$ lxc storage list
Error: Get http://unix.socket/1.0: dial unix /var/snap/lxd/common/lxd/unix.socket: connect: permission denied
When I try with sudo I get:
$ sudo lxc storage list
sudo: lxc: command not found
I don't understand the permission problem and I cannot solve this type of issue. Any suggestion is appreciated.
INFO: I'm running Debian 10 buster on a virtual machine. I installed lxd and lxc with:
$ sudo snap install lxd
$ sudo apt install lxc
and modified my PATH to:
/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/snap/bin:/snap/bin:/var/lib/snapd/snap/bin:/snap/bin/lxc:/snap/bin/lxd
I added my account to sudoers:
moro ALL=(ALL)ALL
If I run:
$ su -
root#debian:~# lxc storage list
+---------+-------------+--------+--------------------------------------------+---------+
| NAME | DESCRIPTION | DRIVER | SOURCE | USED BY |
+---------+-------------+--------+--------------------------------------------+---------+
| default | | btrfs | /var/snap/lxd/common/lxd/disks/default.img | 14 |
+---------+-------------+--------+--------------------------------------------+---------+
As far as my understanding goes, the lxc client talks to LXD through a socket owned by the lxd group, so your $USER has to be in that group. Thus everything should work as expected if you add your user to the lxd group, e.g. via
sudo adduser $USER lxd
This is mentioned without an example on the LXD getting-started page under access control, and with an example in this
nice tutorial for Ubuntu 16.04, which should be applicable to many other Debian-based OSes.
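A short sketch of the group fix, plus a workaround for the sudo: lxc: command not found message (assuming sudo's secure_path simply does not include /snap/bin, which is common on Debian):
# Add yourself to the lxd group, then start a shell with the new group
# membership (or just log out and back in).
sudo adduser "$USER" lxd
newgrp lxd
lxc storage list
# If you still want to run it through sudo, use the snap binary's full path,
# since sudo's secure_path usually omits /snap/bin.
sudo /snap/bin/lxc storage list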
Google recommends using:
gcloud compute instances describe --project NAME --zone ZONE INSTANCE | grep googleusercontent.com | grep datalab
But when I run this, nothing shows up. However, I can access JupyterLab through normal SSH tunnelling. How should I fix this problem?
This issue is because of the last grep. Your command should be like this:
gcloud compute instances describe --project NAME --zone ZONE INSTANCE | grep googleusercontent.com
Good luck!
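If the filtered output is still empty, an alternative (just a sketch) is to dump the whole resource as YAML and search it, which makes it easier to see which field the URL actually lives in:
gcloud compute instances describe --project NAME --zone ZONE INSTANCE --format=yaml \
  | grep -n -i "googleusercontent\|datalab"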
The current datalab documentation does not appear to call for this:
https://cloud.google.com/datalab/docs/quickstart
datalab create ${INSTANCE} --project=${PROJECT} --zone=${ZONE}
datalab connect ${INSTANCE} --project=${PROJECT} --zone=${ZONE}
As I suspect you're doing, you may also:
gcloud compute ssh ${INSTANCE} \
--project=${PROJECT} \
--zone=${ZONE} \
--ssh-flag="-L 8081:localhost:8080"
same-same.
Please reference the documentation that you're using in your question so that we may better help.