Accessing a GCS bucket from GCE without credentials using an S3 library - amazon-s3

I am trying to migrate an existing application that was using IAM permissions to write to an S3 bucket from EC2. According to the Google documentation, there is a way to keep the same code and take advantage of the compatibility of the GCS API with S3. However, using the same code (I am just overriding the endpoint to use storage.googleapis.com instead), I hit the following exception:
com.amazonaws.SdkClientException: The requested metadata is not found at http://169.254.169.254/latest/meta-data/iam/security-credentials/
at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:115)
at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:77)
at
Is there a way to do that without having to pass an access key and a secret key to my app?

If you want to keep using your existing API, the only way is by using a Google developer key. A simple migration always requires these two steps:
Change the request endpoint to the Cloud Storage request endpoint. As you mentioned, you already completed this step by overriding the endpoint:
https://storage.googleapis.com/[BUCKET_NAME]/[OBJECT_NAME]
Replace the AWS access key and secret key with your Google developer key:
Because you will no longer be able to use the IAM permissions you previously set up on AWS, authorization must be done with an access key and a secret key. You will need to include an Authorization request header using your Google access key and create a signature using your Google secret key:
Authorization: AWS GOOG-ACCESS-KEY:signature
For further information, please check Authenticating in a simple migration scenario.
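For illustration, here is a minimal sketch with the AWS SDK for Java (v1), assuming you have generated an interoperability (HMAC) access key and secret in the Cloud Storage settings; the key values, bucket, and object names below are placeholders. Supplying static credentials this way also stops the SDK from querying the EC2 instance metadata endpoint, which is what produced the exception above:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class GcsViaS3Client {
    public static void main(String[] args) {
        // Google HMAC key pair from the Cloud Storage "Interoperability" settings (placeholders).
        BasicAWSCredentials googleHmacKey =
                new BasicAWSCredentials("GOOG-ACCESS-KEY", "GOOG-SECRET-KEY");

        AmazonS3 gcs = AmazonS3ClientBuilder.standard()
                // Point the S3 client at the Cloud Storage XML API endpoint.
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://storage.googleapis.com", "us-east-1")) // signing region placeholder
                .withCredentials(new AWSStaticCredentialsProvider(googleHmacKey))
                .build();

        gcs.putObject("my-bucket", "my-object.txt", "uploaded through the S3 API");
    }
}

Note that an HMAC key is tied to a Google user or service account you control, so access is governed by that account's Cloud Storage permissions rather than by your old AWS IAM roles.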

Related

Vault Hashicorp: Passing aws dynamic secret to a script

1/ Every day at 3am, we run a script alfa.sh on server A in order to send some backups to AWS (an S3 bucket).
As a requirement we had to configure AWS (aws configure) on the server, which means the Secret Key and Access Key are stored on this server. We would now like to use short-TTL credentials valid only from 3am to 3:15am. HashiCorp Vault does that very well.
2/ On server B we have HashiCorp Vault installed and we managed to generate short-TTL dynamic secrets for our S3 bucket (access key / secret key).
3/ We would now like to pass the daily generated dynamic secrets to our alfa.sh. Any idea how to achieve this?
4/ Since we are generating a new Secret Key and Access Key, I understand that a new AWS configuration ("aws configure") will have to be performed on server A in order to be able to perform the backup. Any experience with this?
DISCLAIMER: I have no experience with aws configure so someone else may have to answer this part of the question. But I believe it's not super relevant to the problem here, so I'll give my partial answer.
First things first - solve your "secret zero" problem. If you are using the AWS secrets engine, it seems unlikely that your server is running on AWS, as you could otherwise skip the middleman and just give your server an IAM policy that allowed direct access to the S3 resource. So find the best Vault auth method for your use case. If your server runs in a cloud like AWS, Azure, or GCP, on a container platform like Kubernetes or Cloud Foundry, or has a JWT delivered along with a JWKS endpoint Vault can trust, target one of those; if all else fails, use AppRole authentication, delivering a wrapped token via a trusted CI solution.
Then, log into Vault in your shell script using those credentials. The login will look different depending on the auth method chosen. You can also leverage Vault Agent to automatically handle the login for you, and cache secrets locally.
#!/usr/bin/env bash
## Dynamic Login
vault login -method="${DYNAMIC_AUTH_METHOD}" role=my-role
## OR AppRole Login
resp=$(vault write -format=json auth/approle/login role-id="${ROLE_ID}" secret-id="${SECRET_ID}")
VAULT_TOKEN=$(echo "${resp}" | jq -r .auth.client_token)
export VAULT_TOKEN
Then, pull down the AWS dynamic secret. Each time you read a creds endpoint you will get a new credential pair, so it is important not to make multiple API calls here, and instead cache the entire API response, then parse the response for each necessary field.
#!/usr/bin/env bash
resp=$(vault read -format=json aws/creds/my-role)
AWS_ACCESS_KEY_ID=$(echo "${resp}" | jq -r .data.access_key)
export AWS_ACCESS_KEY_ID
# The AWS CLI and SDKs expect AWS_SECRET_ACCESS_KEY, not AWS_SECRET_KEY_ID.
AWS_SECRET_ACCESS_KEY=$(echo "${resp}" | jq -r .data.secret_key)
export AWS_SECRET_ACCESS_KEY
This is a very general answer establishing a pattern; the particulars of your environment will determine the manner of execution. You can improve this pattern by leveraging features like CIDR binds, limiting the number of uses of auth credentials, token wrapping, and delivery via a CI solution.

Use Google Storage Transfer API to transfer data from external GCS into my GCS

I am working on a web application which comprises a ReactJS frontend and a Java Spring Boot backend. This application requires users to upload data from their own Google Cloud Storage into my Google Cloud Storage.
The application flow will be as follows -
The frontend requests the user for read access on their storage. For this I have used OAuth 2.0 access tokens, as described here.
The generated OAuth token will be passed to the backend.
The backend will also have credentials for my service account to allow it to access my Google Cloud APIs. I have created the service account with required permissions and generated the key using the instructions from here
The backend will use the generated access token and my service account credentials to transfer the data.
In the final step, I want to create a transfer job using the Google Storage Transfer API. I am using the Java API client provided here for this.
I am having difficulty providing the authentication credentials to the transfer api.
In my understanding, there are two different authentications required - one for reading the user's bucket and another for starting the transfer job and writing the data in my cloud storage. I haven't found any relevant documentation or working examples for my use-case. In all the given samples, it is always assumed that the same service account credentials will have access to both the source and sink buckets.
tl;dr
Does the Google Storage Transfer API allow setting different source and target credentials for GCS to GCS transfers? If yes, how does one provide these credentials in the transfer job specification?
Any help is appreciated. Thanks!
Unfortunately, this is not allowed by the GCS Transfer API; for this to work, the service account would need access to both the source and the sink buckets, as you mentioned.
You can open a feature request in Google's Issue Tracker if you'd like, so that Google's product team can consider such functionality for newer versions of the API. You could also mention that this subject is not covered in the documentation, so it can be improved.
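For reference, here is a rough sketch of a GCS-to-GCS job with the google-api-services-storagetransfer Java client (project, bucket names, and the schedule are hypothetical); note that the only credential involved is the one used to build the client, which is why that single service account must be able to read the source bucket and write to the sink bucket:

import java.util.Collections;

import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.gson.GsonFactory;
import com.google.api.services.storagetransfer.v1.Storagetransfer;
import com.google.api.services.storagetransfer.v1.model.Date;
import com.google.api.services.storagetransfer.v1.model.GcsData;
import com.google.api.services.storagetransfer.v1.model.Schedule;
import com.google.api.services.storagetransfer.v1.model.TransferJob;
import com.google.api.services.storagetransfer.v1.model.TransferSpec;
import com.google.auth.http.HttpCredentialsAdapter;
import com.google.auth.oauth2.GoogleCredentials;

public class GcsToGcsTransfer {
    public static void main(String[] args) throws Exception {
        // The single credential used for the whole job: it must be able to
        // read the source bucket AND write to the sink bucket.
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault()
                .createScoped(Collections.singletonList("https://www.googleapis.com/auth/cloud-platform"));

        Storagetransfer client = new Storagetransfer.Builder(
                GoogleNetHttpTransport.newTrustedTransport(),
                GsonFactory.getDefaultInstance(),
                new HttpCredentialsAdapter(credentials))
                .setApplicationName("gcs-to-gcs-transfer")
                .build();

        TransferJob job = new TransferJob()
                .setProjectId("my-project-id")  // hypothetical project
                .setTransferSpec(new TransferSpec()
                        .setGcsDataSource(new GcsData().setBucketName("users-source-bucket"))
                        .setGcsDataSink(new GcsData().setBucketName("my-sink-bucket")))
                .setSchedule(new Schedule()  // start date == end date means a one-time run
                        .setScheduleStartDate(new Date().setYear(2023).setMonth(1).setDay(1))
                        .setScheduleEndDate(new Date().setYear(2023).setMonth(1).setDay(1)))
                .setStatus("ENABLED");

        TransferJob created = client.transferJobs().create(job).execute();
        System.out.println("Created transfer job: " + created.getName());
    }
}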

How to restrict public user access to s3 buckets or minIO?

I have got a question about MinIO or S3 policies. I am using a stand-alone MinIO server for my project. Here is the situation:
There is only one admin account that receives files and uploads them to the MinIO server.
My users need to access just their own uploaded objects. I mean one user is not supposed to see other people's objects publicly (e.g. by visiting a direct URL).
Admin users are allowed to see all objects in any circumstances.
1. How can I implement such policies for my project, considering I have my own database for user authentication, and how can I combine them to authenticate the user?
2. If not, what other options do I have here to ease the process?
Communicate with your storage through the application. Do policy checks, authentication, and authorization in the app, store/fetch files to/from storage, and return the proper response. I guess this is the only way you can restrict uploading/downloading files with MinIO.
If you're using a framework like Laravel, its built-in S3 driver works perfectly with MinIO; otherwise it's just a matter of an HTTP call, since MinIO provides S3-compatible HTTP APIs.
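For example, here is a minimal sketch in Java with the AWS S3 SDK pointed at a MinIO endpoint (the endpoint, credentials, bucket, and ownership rule are all hypothetical): the application authorizes the user against its own database first, and only then hands out a short-lived presigned URL, so the bucket itself never has to be public.

import java.net.URL;
import java.util.Date;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class MinioDownloadService {

    private final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
            // MinIO speaks the S3 API; point the client at your MinIO server.
            .withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration("http://localhost:9000", "us-east-1"))
            .withPathStyleAccessEnabled(true)
            .withCredentials(new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials("MINIO_ACCESS_KEY", "MINIO_SECRET_KEY")))
            .build();

    /** Returns a short-lived download link, but only for the object's owner or an admin. */
    public URL getDownloadUrl(String userId, boolean isAdmin, String objectKey) {
        // Hypothetical ownership check against your own database/auth layer.
        if (!isAdmin && !objectKey.startsWith("uploads/" + userId + "/")) {
            throw new SecurityException("User " + userId + " may not access " + objectKey);
        }
        Date expiry = new Date(System.currentTimeMillis() + 10 * 60 * 1000); // 10 minutes
        return s3.generatePresignedUrl("user-uploads", objectKey, expiry);
    }
}

Instead of a presigned URL, the app could also stream the object back itself with getObject, which keeps the storage endpoint completely hidden from users.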

Getting 403 Forbidden on Google Cloud Run with API key

I have set up a very simple Node application with Express on Google Cloud Run.
It works great, but when I set it up with "Allow unauthenticated invocations to [service] (y/N)?" set to No, I get a 403 Forbidden even though I created an API key and I'm making the calls with key=[My API key] added to the query string, as described in the documentation. My URL ends up looking like
https://service-wodkdj77sba-ew.a.run.app?key=[My API key].
I've tried with restricted (for Google Cloud Run) and unrestricted API keys.
Is there anything I'm missing?
Cloud Run, like many products in GCP, doesn't support API key authorization. As detailed in the link you provided, only a subset of services use API keys.
It's also mentioned:
API keys do not identify the user or the application making the API request, so you can't restrict access to specific users or service accounts.
whereas the Cloud Run authentication section specifies this:
All Cloud Run services are deployed privately by default, which means that they can't be accessed without providing authentication credentials in the request.
So the Cloud Run expectation and the API key capabilities aren't compatible.
However, if you want to access your private Cloud Run service with an API key, a workaround exists. You can deploy an Extensible Service Proxy (ESP) on another Cloud Run service. In it, authenticate the API key and, if it's valid, call the private Cloud Run service with the service account of your ESP (which must have the roles/run.invoker role).
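To illustrate that last hop (a sketch only, not the ESP configuration itself; it uses google-auth-library-java and the service URL from the question), the caller's service account mints an ID token whose audience is the private service's URL and sends it as a Bearer token, which is what Cloud Run accepts instead of an API key:

import com.google.api.client.http.GenericUrl;
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestFactory;
import com.google.api.client.http.HttpResponse;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.auth.http.HttpCredentialsAdapter;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.auth.oauth2.IdTokenCredentials;
import com.google.auth.oauth2.IdTokenProvider;

public class PrivateCloudRunCaller {
    public static void main(String[] args) throws Exception {
        // The private Cloud Run service URL from the question.
        String serviceUrl = "https://service-wodkdj77sba-ew.a.run.app";

        // The caller's identity, e.g. the ESP's service account (needs roles/run.invoker).
        // The default credentials must be able to mint ID tokens (service account or metadata server).
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();
        IdTokenCredentials idTokenCredentials = IdTokenCredentials.newBuilder()
                .setIdTokenProvider((IdTokenProvider) credentials)
                .setTargetAudience(serviceUrl)   // the audience must be the service URL
                .build();

        // The adapter attaches "Authorization: Bearer <ID token>" to each request.
        HttpRequestFactory factory = new NetHttpTransport()
                .createRequestFactory(new HttpCredentialsAdapter(idTokenCredentials));
        HttpRequest request = factory.buildGetRequest(new GenericUrl(serviceUrl));
        HttpResponse response = request.execute();
        System.out.println(response.parseAsString());
    }
}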

How to pass access credentials in request with AWS API

I'm trying to make a request to the AWS API but I get this error:
AWS was not able to validate the provided access credentials
My request is:
https://ec2.amazonaws.com/?Action=RunInstances&ImageId=ami-6df1e514&KeyName=key1&InstanceType=t2.micro&Placement.AvailabilityZone=us-west-2&AWSSecretAccessKey=**********************&AWSAccessKeyId=******************
My access credentials are right, there is no doubt about that.
I found that the problem could be caused by clock skew, but my PC's clock is correct.
Could someone help me? I did not find a solution.
You never send your secret access key to AWS when making an API call. Instead, you sign your request with the access/secret keys (as described here). You can read more about signing here.
Ideally, use the JavaScript SDK rather than manually generating query requests.
You should also rotate the credentials that you were using because you have exposed the secret key.
Where did you find this way of authenticating by passing the access key and secret key as parameters?
AWS SDKs like the Java SDK or boto3 provide easy-to-use APIs to make such calls, so I would recommend using them.
Also look at the authentication mechanism required by AWS API calls:
http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
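For example, here is a minimal sketch with the AWS SDK for Java (v1), reusing the parameters from the question; the SDK picks up credentials from its default provider chain (environment variables, ~/.aws/credentials, or an instance profile) and signs every request with Signature Version 4, so the keys never appear in the URL:

import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.Placement;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class RunInstanceExample {
    public static void main(String[] args) {
        // Credentials come from the default provider chain; they are never put in the request URL.
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.standard()
                .withRegion(Regions.US_WEST_2)
                .build();

        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-6df1e514")
                .withInstanceType("t2.micro")
                .withKeyName("key1")
                // Note: "us-west-2" is a region; an Availability Zone looks like "us-west-2a".
                .withPlacement(new Placement().withAvailabilityZone("us-west-2a"))
                .withMinCount(1)
                .withMaxCount(1);

        RunInstancesResult result = ec2.runInstances(request);
        System.out.println("Launched instance: "
                + result.getReservation().getInstances().get(0).getInstanceId());
    }
}

The same pattern applies in boto3 or the JavaScript SDK; the point is that the SDK handles the Signature Version 4 signing for you.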