How to grant a user account access for GCE Cloud Storage - permissions

I have a Compute VM that has storage permission of read-write. When I try to run the following command:
gsutil rsync -R local-dir gs://folder1/prod/www.domain.tld
I get insufficient permission error:
Building synchronization state...
Skipping cloud sub-directory placeholder object (gs://folder1/prod/www.domain.tld/) because such objects aren't needed in (and would interfere with) directories in the local file system
Starting synchronization
Copying file://local-dir/.gitignore [Content-Type=application/octet-stream]...
Uploading gs://folder1/prod/www.domain.tld/.gitignore: 174 B/174 B
AccessDeniedException: 403 Insufficient Permission
I am not sure which account needs the specific permissions or how I would even assign them. Can someone provide some direction as to what I need to look into? The only mechanism I can think of is a service account, but I am not sure how to create one or whether that is even what I need to do. Also, once I grant an account access/permission, how would I use the account to authenticate?

Use the gcloud auth list command to look up the active account. The Compute Engine service account looks similar to the following:
123845678986-compute@developer.gserviceaccount.com (active)
By default, this service account is a member of your project with Edit permission. Check the ACLs of your GCS bucket and its folders, and make sure the GCE service account, or a group it belongs to, has owner or editor rights on them.
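To check and fix this from the VM, something along these lines should work (the bucket name is taken from the question; the service-account address will differ per project):

```shell
# Confirm which account gsutil is using on the VM
gcloud auth list

# Inspect the bucket's current ACL (bucket name from the question)
gsutil acl get gs://folder1

# Grant the Compute Engine service account write access to the bucket
# (substitute your own project's compute service account address)
gsutil acl ch -u 123845678986-compute@developer.gserviceaccount.com:W gs://folder1
```

Note that the VM's access scopes, set at instance creation, must also allow storage read-write; an ACL grant cannot widen a narrower scope.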

Related

S3 Replication - s3:PutReplicationConfiguration

I have been attempting to introduce S3 bucket replication into my existing project's stack. I kept getting an 'API: s3:PutBucketReplication Access Denied' error in CloudFormation when updating my stack through my CodeBuild/CodePipeline project after adding the replication rule on the source bucket and the S3 replication role. For testing, I added full S3 permissions (s3:*) on all resources ("*") to the CodeBuild role, as well as full S3 permissions on the S3 replication role -- and got the same result.
Additionally, I tried running a stand-alone, stripped down version of the CF template (so not updating my existing application infrastructure stack) - which creates the buckets (source + target) and the S3 replication role. It was deployed/run through CloudFormation while logged in with my Admin role via the console and again I got the same error as when attempting the deployment with my CodeBuild role in CodePipeline.
As a last ditch sanity check, again being logged in using my admin role for the account, I attempted to perform the replication setup manually on buckets that I created using the S3 console and I got the below error:
You don't have permission to update the replication configuration
You or your AWS admin must update your IAM permissions to allow s3:PutReplicationConfiguration, and then try again. Learn more about identity and access management in Amazon S3.
API response: Access Denied
I confirmed that my role has full S3 access across all resources. This message seems to suggest that the permission s3:PutReplicationConfiguration may somehow be different from other S3 permissions - perhaps needing to be configured with root access to the account?
Also, it seems strange to me that CloudFormation reports the s3:PutBucketReplication permission, whereas the S3 console error references the permission s3:PutReplicationConfiguration. There doesn't seem to be an IAM action s3:PutBucketReplication (ref: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html), only s3:PutReplicationConfiguration.
Have you checked for a permissions boundary? Is this a corporate Control Tower account or a stand-alone account?
Deny always wins, so if a permissions boundary excludes some actions, you may run into issues like this even when you have explicitly allowed them.
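One quick check (a sketch; the role ARN is a placeholder) is the IAM policy simulator, though as far as I know it evaluates IAM policies and permissions boundaries, not organization-level controls:

```shell
# Ask the simulator whether this principal may call the action
# (replace the ARN with the role you are actually using)
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:role/MyAdminRole \
  --action-names s3:PutReplicationConfiguration
```

If the simulator reports "allowed" but the live call is denied, the deny is likely coming from outside the account's own policies, e.g. a service control policy.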
It turns out that the required permission (s3:PutReplicationConfiguration) was actually being blocked by a preventive Control Tower guardrail put in place on the OU the AWS account exists in. Unfortunately, this DENY is not visible from anywhere within the AWS account, as it exists outside of any permissions boundary or IAM policy. It took some investigation by our internal IT team to identify the guardrail control as the source of the DENY.
https://docs.aws.amazon.com/controltower/latest/userguide/elective-guardrails.html#disallow-s3-ccr
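For context, such a guardrail is enforced as a service control policy attached to the OU; a deny of roughly this shape (a sketch, not the exact guardrail text) produces the behavior above, and no IAM policy inside the account can override it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3ReplicationConfiguration",
      "Effect": "Deny",
      "Action": "s3:PutReplicationConfiguration",
      "Resource": "*"
    }
  ]
}
```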

Best practice to make S3 file accessible for Redshift through COPY operation for anyone

I want to publish a tutorial where data from a sample TSV file in S3 is used by Redshift. Ideally I want it to be a simple copy-paste operation, so the exercises can be followed step by step, similar to what's in Load Sample Data from Amazon S3. The problem is with the first data-import task using the COPY command, as it only supports S3- or EMR-based loads.
This seems like a simple requirement, but there is no hassle-free way to do it with Redshift COPY (I can make the file available for browser download without any problem, but COPY requires a CREDENTIALS parameter…)
The variety of options for Redshift COPY authorization parameters is quite rich:
Should I ask the user to create an IAM Role for Amazon Redshift himself?
Should I create it myself and publish the IAM role ARN? That sounds most hassle-free (copy-paste), but security-wise it doesn't sound good…? Do I need to restrict the S3 permissions so that role can access only that particular file?
Should I try temporary access instead?
You are correct:
Data can be imported into Amazon Redshift from Amazon S3 via the COPY command
The COPY command requires permission to access the data stored in Amazon S3. This can be granted either via:
Credentials (Access Key + Secret Key) associated with an IAM User, or
An IAM Role
You cannot create a Role for people and let them use it, because their Amazon Redshift cluster will be running in a different AWS Account than your IAM Role. You could possibly grant trust access so that other accounts can use the role, but this is not necessarily a wise thing to do.
As for credentials, they could either use their own or ones that you supply. They can obtain their own Access Key + Secret Key in the IAM console.
If you wish to supply credentials for them to use, you could create an IAM User that has permission only to access the Amazon S3 files they need. It is normally unwise to publish your AWS credentials because they might expose a security hole, so you should think carefully before doing this.
At the end of the day, it's probably best to show them the correct process so they understand how to obtain their own credentials. Security is very important in the cloud, so you would also be teaching them good security practice, in addition to Amazon Redshift itself.
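Whichever option you choose, the tutorial's COPY step would look roughly like this (table name, bucket, key, and role ARN are placeholders the reader substitutes with their own):

```sql
-- Load a tab-separated sample file from S3 into Redshift
-- (bucket, key, and IAM role ARN below are placeholders)
COPY sample_table
FROM 's3://my-tutorial-bucket/sample-data.tsv'
IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole'
DELIMITER '\t'
IGNOREHEADER 1;
```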

GCS permission recovery

I have screwed up royally with google-cloud-storage recently when I deleted all permissions from a bucket. Now this bucket is basically untouchable (except for read). How can I regain permission over my bucket?
As a recovery mechanism you (or a project administrator) can grant yourself the Storage > Storage Admin role at the project level. This will allow you to edit the bucket policy.
Note: The Storage Admin role grants unlimited access to all Cloud Storage resources in that project. You may want to remove the grant after you are done fixing the ACL.
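Assuming you (or an admin) have project-level IAM rights, the grant and later cleanup might look like this (project ID and email are placeholders):

```shell
# Temporarily grant yourself Storage Admin at the project level
gcloud projects add-iam-policy-binding my-project-id \
    --member=user:you@example.com \
    --role=roles/storage.admin

# ...repair the bucket's ACL/IAM policy, then revoke the broad grant
gcloud projects remove-iam-policy-binding my-project-id \
    --member=user:you@example.com \
    --role=roles/storage.admin
```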

How to disable Google compute engine from resetting SFTP folder permissions when using SSH-Key

Currently running a Google compute engine instance and using SFTP on the server.
Followed details to lock a user to the SFTP path using steps listed here: https://bensmann.no/restrict-sftp-users-to-home-folder/
To lock the user to a directory, the home directory of that user needs to be owned by root. Initially the setup worked correctly, but I found that Google Compute Engine sporadically "auto-resets" the permissions back to the user.
I am using an SSH key that is set in the Google Cloud Console and that key is associated with the username. My guess is that Google Compute Engine is using this "meta-data" and reconfiguring the folder permissions to match that of the user associated with the SSH key.
Is there any way to disable this "auto-reset"? Or, rather, is there a better method to hosting SFTP and locking a single user to a SFTP path without having to change the home folder ownership to root?
Set your sshd rule to apply to the google-sudoers group.
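If I understand that suggestion, it means keying the sshd Match block to the group instead of an individual user, e.g. (a sketch based on the linked article's setup; the group, paths, and directives will vary):

```
# /etc/ssh/sshd_config -- match on the group rather than a single user
Match Group google-sudoers
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
```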
The tool that manages user accounts is the Google accounts daemon. You can turn it off temporarily, but that's not recommended. The tool syncs the SSH keys in the instance metadata with the Linux accounts on the VM. If you turn it off, account changes won't be picked up and SSH from the Cloud Console will probably stop working.
sudo systemctl stop google-accounts-daemon.service
That said, it may be what you want if you ultimately want to block SSH access to the VM.

Sharing data between several Google projects

A question about Google Storage:
Is it possible to give r/o access to a (not world-accessible) storage bucket to a user from another Google project?
If yes, how?
I want to back up data to another Google project, in case somebody accidentally deletes all storage buckets from our project.
Yes. Access to Google Cloud Storage buckets and objects is controlled by ACLs that allow you to specify individual users, service accounts, groups, or project roles.
You can add users to any existing object through the UI, the gsutil command-line utility, or via any of the APIs.
If you want to grant one specific user the ability to write objects into project X, you need only specify the user's email:
$> gsutil acl ch -u bob.smith@gmail.com:W gs://bucket-in-project-x
If you want to say that every editor of the project my-project is permitted to write into some bucket in a different project, you can do that as well:
$> gsutil acl ch -p editors-my-project:W gs://bucket-in-project-x
The "-u" flag means 'user' and "-p" means 'project'. User names are just email addresses. Project names are the strings "owners-", "viewers-", or "editors-" followed by the project's ID. The ":W" at the end means WRITE permission. You could also use O, R, or W, or spell out OWNER, READ, or WRITE.
You can find out more by reading the help page: $> gsutil help acl ch
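Since the question asks for read-only access specifically, the equivalent grant would use :R rather than :W (email and bucket name are placeholders):

```shell
# Grant a user from another project read-only access to the bucket
gsutil acl ch -u bob.smith@gmail.com:R gs://bucket-in-project-x
```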