I have projects P1 and P2 in europe-west2. In both projects I have the same dataset/table structure at the same location, europe-west2.
In P1 I created a service account and added the same service account (SA) to P2, as described here:
https://gtseres.medium.com/using-service-accounts-across-projects-in-gcp-cf9473fef8f0
In both projects, the SA has role BigQuery Admin.
I want to copy a table from P1 to P2. I run:
bq --project_id P1 --service_account_credential_file <path to SA json> cp P1:dataset.table P2:dataset.table
The command seems to find the tables and asks:
cp: replace P2:dataset.table? (y/n)
After confirming, cp says:
BigQuery error in cp operation: Access Denied: Project P1: User does
not have bigquery.jobs.create permission in project P1.
If I try to copy in the other direction then I get:
BigQuery error in cp operation: Access Denied: Permission bigquery.tables.get denied on table
P1:dataset.sessions (or it may not exist).
This looks like a permissions issue. To copy a table you need the following permissions:
On the source dataset:
bigquery.tables.get
bigquery.tables.getData
On the destination dataset:
bigquery.tables.create
To run the copy job itself you also need the bigquery.jobs.create IAM permission in the project the job runs in (P1 in your first command), and bigquery.datasets.create only if the destination dataset still has to be created.
Each of the following predefined IAM roles includes the permissions you need to run a copy job (granting one of them with gcloud is shown below):
roles/bigquery.user
roles/bigquery.jobUser
roles/bigquery.admin
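If the service account is missing one of these roles in either project, it can be granted at the project level with gcloud. A minimal sketch, assuming a service account address like my-sa@P1.iam.gserviceaccount.com (replace with your own; roles/bigquery.dataEditor is just one way to give data access on the destination side):

# Let the SA create jobs in P1, where the copy job runs
gcloud projects add-iam-policy-binding P1 \
  --member="serviceAccount:my-sa@P1.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"

# Give the SA data access in P2, where the destination dataset lives
gcloud projects add-iam-policy-binding P2 \
  --member="serviceAccount:my-sa@P1.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataEditor"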
You can use this command to copy tables:
bq --location=location cp \
-a -f -n \
project_id:dataset.source_table \
project_id:dataset.destination_table
Here -a appends to the destination table, -f overwrites it without prompting, and -n refuses to overwrite an existing table; use at most one of the three.
For details, see the BigQuery documentation on copying tables.
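For the projects in your question, that would look something like the following sketch (-f skips the overwrite prompt; the location comes from your setup):

bq --location=europe-west2 cp -f \
P1:dataset.table \
P2:dataset.table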
Related
I need to use the root user to run scripts from crontab, for example to read and write in all /home folders.
One thing I also need to do in the shell script is run psql. The problem:
my own user (what whoami reports, not root) can run, for example, psql -c "\l"
but as root, psql -c "\l" does not work, and the error makes no sense to me: "psql: error: could not connect to server: FATAL: database "root" does not exist".
How do I enable root to run psql?
PS: I'm looking for something like "GRANT ALL PRIVILEGES ON ALL DATABASES TO root".
root is allowed to run psql, but nobody can connect to a database that doesn't exist.
The default value for the database user name with psql is the operating system user name, and the default for the database is the same as the database user name.
So you have to specify the correct database and database user explicitly:
psql -U postgres -d postgres -l
The next thing you are going to complain about is that peer authentication was denied.
To avoid that, either run as operating system user postgres or change the rules in pg_hba.conf.
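For a cron script running as root, a minimal sketch of both options (the database name mydb is a placeholder):

# Option 1: switch to the postgres operating system user, so peer authentication matches
sudo -u postgres psql -d mydb -c "\l"

# Option 2: stay as root but name the database user and database explicitly;
# this needs pg_hba.conf to allow non-peer authentication (e.g. md5) for local connections
psql -U postgres -d mydb -c "\l"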
I want to export a table from my database using remote access (SSH) and psql (the \copy command), but it fails.
To summarize:
I have a database named mydatabase and a user named myuser with privileges on this database.
The table I am trying to extract is named mytable.
I connect to my remote server using Putty.
Once connected, I run psql: sudo -u postgres psql (I tried to connect as myuser but it failed because myuser is apparently unknown?)
I connect to my database : \c mybase
I run \copy mytable TO '/home/path/to/my/file/file.txt'
and get the error message: '/home/path/to/my/file/file.txt': Permission denied
As I said, I tried to connect as myuser thinking it could solve the permission issue, but I don't know why that failed...
psql -d mydatabase -U myuser
psql: error: could not connect to server: FATAL: Peer authentication failed for user "myuser"
Let's assume the operating system user you connect with using Putty is x.
Since you run psql as user postgres using the sudo command, it is the operating system user postgres that needs permissions to write /home/path/to/my/file/file.txt.
So user x must give postgres the required permissions.
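For example, a sketch of granting those permissions through the shared directory (this assumes a postgres group exists, as it does on most packaged installations, and that x owns the directory):

# As user x on the server: let the postgres OS user write into the target directory
sudo chgrp postgres /home/path/to/my/file
sudo chmod g+w /home/path/to/my/file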
I am trying to transfer a file from a Cloud Storage bucket to a VM instance.
I connected to the VM instance by SSH and used gsutil like this:
gsutil cp gs://[bucket name]/[file name] /home
and then this error occurred:
Copying gs://huji/final-project-deep-learning-19-tf.tar...
OSError: Permission denied. GiB]
So I tried this command:
sudo gsutil cp gs://[bucket name]/[file name] /home
And then the download completed:
Copying gs://huji/final-project-deep-learning-19-tf.tar...
| [1 files][ 1.7 GiB/ 1.7 GiB] 76.2 MiB/s
but I cannot find any files and the disk usage hasn't changed.
Can anybody explain why this happens and how to copy files from the bucket? Thanks.
Create a service account:
gcloud iam service-accounts create transfer --description "transfer" --display-name "transfer"
Set up a new instance to run as that service account:
gcloud compute instances create instance --service-account transfer@my-project.iam.gserviceaccount.com --scopes https://www.googleapis.com/auth/devstorage.full_control
For example, the scope for full access to Cloud Storage is
https://www.googleapis.com/auth/devstorage.full_control. The alias for
this scope is storage-full
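Equivalently, the alias can be used in place of the full scope URL (same placeholder names as above):

gcloud compute instances create instance \
  --service-account transfer@my-project.iam.gserviceaccount.com \
  --scopes storage-full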
SSH to the VM instance:
gcloud beta compute --project "my-project" ssh --zone "us-central1-a" "instance"
Copy the files :
gsutil cp gs://bucket/file.csv .
List the files:
ls
file.csv
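Keep in mind that access scopes only limit what the VM may request; the service account itself still needs IAM access to the bucket. A minimal sketch, with the bucket name as a placeholder:

gsutil iam ch serviceAccount:transfer@my-project.iam.gserviceaccount.com:objectViewer gs://bucket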
I have the following two functions:
drop_linked_resources() {
local readonly host="$1"
local readonly master_username="$2"
local readonly master_password="$3"
local readonly username="$4"
local readonly password="$5"
local readonly db="$6"
docker run --rm \
-e PGPASSWORD="$master_password" \
postgres:9.6.3-alpine psql -h "$host" -d postgres -U "$master_username" <<-EOSQL
DROP DATABASE IF EXISTS $db;
DROP USER IF EXISTS $username;
EOSQL
}
create_user() {
local readonly host="$1"
local readonly master_username="$2"
local readonly master_password="$3"
local readonly username="$4"
local readonly password="$5"
local readonly db="$6"
docker run --rm \
-e PGPASSWORD="$master_password" \
postgres:9.6.3-alpine psql -h "$host" -d postgres -U "$master_username" \
-c "CREATE USER $username WITH CREATEDB CREATEROLE ENCRYPTED PASSWORD '$password';"
}
Very simple functions: drop_linked_resources runs first to clean up, and create_user runs after it. The problem is that I get the following error:
2018-03-20 16:21:56 [INFO] [create-database.sh] Drop linked existing resources
2018-03-20 16:21:57 [INFO] [create-database.sh] Create my_user user
role "my_user" already exists
I get this error because I have already run the functions before, so to make the script idempotent I have to clean up first and then create again.
Any idea what I'm doing wrong?
You probably missed the steps where all the objects the user/role owns must be dropped and any privileges the role has been granted must be revoked.
From the PostgreSQL manual (DROP USER is an alias for DROP ROLE):
A role cannot be removed if it is still referenced in any database of the cluster; an error will be raised if so. Before dropping the role, you must drop all the objects it owns (or reassign their ownership) and revoke any privileges the role has been granted. The REASSIGN OWNED and DROP OWNED commands can be useful for this purpose.
However, it is not necessary to remove role memberships involving the role; DROP ROLE automatically revokes any memberships of the target role in other roles, and of other roles in the target role. The other roles are not dropped nor otherwise affected.
Use the REASSIGN OWNED or DROP OWNED command.
Try adding the command below somewhere before the DROP USER command:
DROP OWNED BY $username CASCADE;
Reference:
https://www.postgresql.org/docs/8.4/static/sql-droprole.html
https://www.postgresql.org/docs/8.4/static/sql-drop-owned.html
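Putting that together, a sketch of the drop function with the suggested statement added (not the original code verbatim; note the -i flag on docker run, without which the here-document is not forwarded to psql inside the container, and remember that DROP OWNED only affects objects in the database you are connected to):

drop_linked_resources() {
  local -r host="$1" master_username="$2" master_password="$3"
  local -r username="$4" password="$5" db="$6"
  # -i keeps stdin open so the SQL below is actually piped into psql in the container
  docker run --rm -i \
    -e PGPASSWORD="$master_password" \
    postgres:9.6.3-alpine psql -h "$host" -d postgres -U "$master_username" <<EOSQL
DROP DATABASE IF EXISTS $db;
-- drops what the role owns in this database and revokes its privileges here;
-- it raises an error if the role does not exist
DROP OWNED BY $username CASCADE;
DROP USER IF EXISTS $username;
EOSQL
}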
I would like to use rsync over ssh to copy files from source-machine to dest-machine (both Linux boxes). Due to a security policy that is beyond my control, the files on dest-machine must be owned by user1, but user1 is not allowed to log in. I am user2 and can log in via ssh to both machines; user2 is in the same group as user1, and both users exist on both machines. After logging into either machine, user2 can become user1 by first doing sudo -s (no password prompt) and then su user1.
The files typically have the following permissions:
source-machine:
-rw-rw-r-- user2 group1 file.txt
dest-machine:
-rw-rw-r-- user1 group1 file.txt
Rsync always changes the ownership on dest-machine to user2, because I am using
/usr/bin/rsync -rlvx --delete --exclude-from ignore-file.txt --rsync-path="/usr/bin/rsync" /path/to/files/ -e ssh user2@dest-machine.example.com:/path/to/files/
as part of the rsync command. At the moment, I have to work out which files have been copied and change the ownership back to user1.
I saw in this discussion that it may be possible to use
--rsync-path='sudo -u user2 rsync'
but I need the intermediate step of sudo -s.
Is there a way to get rsync to leave the files on dest-machine owned by user1?
UPDATE: Thanks to mnagel's comment, I tried that permutation, and when it didn't work I explored why and added two more changes: (1) I ran the script on source-machine as root, and (2) I had somehow not included -go in the options. (I hadn't used -a, as the security policy doesn't allow preserving times.) When put all together, it works.
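For reference, the combination described in the update would look something like the sketch below. The exact flags and the sudo target are assumptions pieced together from the question and the linked discussion, not the author's verbatim command; the idea is that -go preserves group and owner where the receiving side is allowed to, and running the remote rsync as user1 makes the new files on dest-machine owned by user1:

# run on source-machine as root (assumed working combination, not the author's exact command)
/usr/bin/rsync -rlvxgo --delete --exclude-from ignore-file.txt \
  --rsync-path="sudo -u user1 /usr/bin/rsync" -e ssh \
  /path/to/files/ user2@dest-machine.example.com:/path/to/files/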