HashiCorp Vault: Passing an AWS dynamic secret to a script - amazon-s3

1/ Every day at 3am, we run a script alpha.sh on server A to send some backups to AWS (an S3 bucket).
As a requirement, we had to configure AWS (aws configure) on the server, which means the Secret Key and Access Key are stored on this server. We would now like to use short-TTL credentials valid only from 3am to 3:15am. HashiCorp Vault does that very well.
2/ On server B we have HashiCorp Vault installed, and we managed to generate short-TTL dynamic secrets (access key / secret key) for our S3 bucket.
3/ We would now like to pass the daily generated dynamic secrets to our alpha.sh. Any idea how to achieve this?
4/ Since we are generating a new Secret Key and Access Key each time, I understand that a new AWS configuration ("aws configure") will have to be performed on server A in order to be able to perform the backup. Any experience with this?

DISCLAIMER: I have no experience with aws configure, so someone else may have to answer that part of the question. But I believe it's not central to the problem here, so I'll give my partial answer.
First things first - solve your "secret zero" problem. If you are using the AWS secrets engine, it seems unlikely that your server is running on AWS; if it were, you could skip the middleman and give your server an IAM policy that allowed direct access to the S3 resource. So find the best Vault auth method for your use case. If your server runs in a cloud like AWS, Azure, or GCP, on a platform like Kubernetes or Cloud Foundry, or has a JWT delivered along with a JWKS endpoint Vault can trust, target one of those auth methods; if all else fails, use AppRole authentication, delivering a wrapped token via a trusted CI solution.
Then, log into Vault in your shell script using those credentials. The login will look different depending on the auth method chosen. You can also leverage Vault Agent to handle the login automatically and cache secrets locally.
#!/usr/bin/env bash
## Dynamic Login
vault login -method="${DYNAMIC_AUTH_METHOD}" role=my-role
## OR AppRole Login
resp=$(vault write -format=json auth/approle/login role-id="${ROLE_ID}" secret-id="${SECRET_ID}")
VAULT_TOKEN=$(echo "${resp}" | jq -r .auth.client_token)
export VAULT_TOKEN
Then, pull down the AWS dynamic secret. Each time you read a creds endpoint you will get a new credential pair, so it is important not to make multiple API calls here, and instead cache the entire API response, then parse the response for each necessary field.
#!/usr/bin/env bash
resp=$(vault read -format=json aws/creds/my-role)
AWS_ACCESS_KEY_ID=$(echo "${resp}" | jq -r .data.access_key)
export AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=$(echo "${resp}" | jq -r .data.secret_key)
export AWS_SECRET_ACCESS_KEY
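To answer point 4: no new "aws configure" run is needed on server A. The AWS CLI and SDKs read AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and AWS_SESSION_TOKEN, if present) straight from the environment, which is also why the variable name above must be AWS_SECRET_ACCESS_KEY. As a minimal sketch of the backup step in alpha.sh, assuming a plain aws s3 cp and a hypothetical bucket name:
#!/usr/bin/env bash
# The AWS CLI picks up the exported variables automatically - no "aws configure" needed.
# If your Vault role is STS-based, also export the session token:
#   AWS_SESSION_TOKEN=$(echo "${resp}" | jq -r .data.security_token)
#   export AWS_SESSION_TOKEN
aws s3 cp /var/backups/backup.tar.gz "s3://my-backup-bucket/$(date +%F)/backup.tar.gz"
Keep in mind that freshly created dynamic IAM credentials can take a few seconds to propagate on the AWS side, so a short sleep or retry before the first call may be needed.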
This is a very general answer establishing a pattern. The particulars of your environment will determine how you execute it. You can improve this pattern by leveraging features like CIDR binds, limits on the number of uses of auth credentials, token wrapping, and delivery via a CI solution.

Related

Add Github Identity Provider to AWS Cognito

I have created a Github OAuth app and I am trying to add the app as an OIDC application to AWS Cognito.
However, I cannot find a proper overview about the endpoints and data to fill in anywhere in the Github Docs.
The following fields are required:
Issuer => ?
Authorization endpoint => https://github.com/login/oauth/authorize (?)
Token endpoint => https://github.com/login/oauth/access_token (?)
Userinfo endpoint => https://api.github.com/user (?)
Jwks uri => ?
I couldn't find the Jwks uri anywhere. Any help would be highly appreciated.
Seems like there is no way to get this working out of the box.
https://github.com/TimothyJones/github-cognito-openid-wrapper seems to be a way to get this working.
If any Cognito dev sees this, please add Github/Gitlab/Bitbucket support.
GitLab 14.7 (January 2022) might help:
OpenID Connect support for GitLab CI/CD
Connecting GitLab CI/CD to cloud providers using environment variables works fine for many use cases.
However, it doesn’t scale well if you need advanced permissions management or would prefer a signed, short-lived, contextualized connection to your cloud provider.
GitLab 12.10 shipped initial support for JWT token-based connection (CI_JOB_JWT) to enable HashiCorp Vault users to safely retrieve secrets. That implementation was restricted to Vault, while the logic we built JWT upon opened up the possibility to connect to other providers as well.
In GitLab 14.7, we are introducing a CI_JOB_JWT_V2 environment variable that can be used to connect to AWS, GCP, Vault, and likely many other cloud services.
Please note that this is an alpha feature and not ready for production use. Your feedback is welcomed in this epic.
For AWS specifically, with the new CI_JOB_JWT_V2 variable, you can connect to AWS to retrieve secrets, or to deploy within your account. You can also manage access rights to your cluster using AWS IAM roles.
You can read more on setting up OIDC connection with AWS.
The new variable is automatically injected into your pipeline but is not backward compatible with the current CI_JOB_JWT.
Until GitLab 15.0, the CI_JOB_JWT will continue to work normally but this will change in a future release. We will notify you about the change in time.
The secrets stanza today uses the CI_JOB_JWT_V1 variable. If you use the secrets stanza, you don’t have to make any changes yet.
See Documentation and Issue.
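On the AWS side, the usual pattern behind that documentation is to register the GitLab instance as an IAM OIDC identity provider, create a role that trusts it, and then exchange the job's JWT for temporary credentials with STS. A hedged sketch of what the job script might run (the role ARN is a placeholder):
#!/usr/bin/env bash
# Assumes an IAM OIDC provider for your GitLab instance and a role trusting it already exist.
creds=$(aws sts assume-role-with-web-identity \
  --role-arn "arn:aws:iam::123456789012:role/gitlab-ci-role" \
  --role-session-name "gitlab-${CI_JOB_ID}" \
  --web-identity-token "${CI_JOB_JWT_V2}" \
  --duration-seconds 900 \
  --query Credentials --output json)
export AWS_ACCESS_KEY_ID=$(echo "${creds}" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "${creds}" | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "${creds}" | jq -r .SessionToken)
Because assume-role-with-web-identity is an unsigned STS call, the job needs no pre-existing AWS credentials at all.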

Using Hashicorp Vault and Spring Cloud in different environments

We are using Vault to get rid of all secrets in our codebase and config servers. Vault's AWS auth, with its secure introduction, seems like the perfect fit for this. However, our dev environment is not on AWS, and Vault cannot work with the config server to fetch different configurations per environment.
Do you see a way where I could still use the AWS auth for staging and prod and a different auth method for dev?
Thanks,
Chris.
Authentication methods can be configured via bootstrap.properties, so ideally you would have multiple Spring profiles: one that uses AWS authentication and another for dev.
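As an illustration (file names and values below are placeholders), you can keep the auth method out of code entirely with profile-specific bootstrap files and pick the profile at startup:
#!/usr/bin/env bash
# Sketch: two profile-specific bootstrap files; property names are from Spring Cloud Vault.
cat > src/main/resources/bootstrap-dev.properties <<'EOF'
spring.cloud.vault.authentication=TOKEN
spring.cloud.vault.token=s.xxxxxxxx
EOF
cat > src/main/resources/bootstrap-prod.properties <<'EOF'
spring.cloud.vault.authentication=AWS_IAM
spring.cloud.vault.aws-iam.role=my-app-role
EOF
# Start the app with the matching profile, e.g.:
java -jar my-app.jar --spring.profiles.active=prod
In dev you could also use APPROLE instead of TOKEN; the point is that only the active profile decides how the application authenticates to Vault.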

Long lived key/token based way to download google storage bucket objects with curl?

O.k. my fellow devops and coders. I have spent the last week trying to figure this out with Google (GCP) Cloud Storage objects. Here is my objective.
The solution needs to be light weight as it will be used to download images inside a docker image, hence the curl requirement.
The GCP bucket and object needs to be secure and not public.
I need a "long" lived ticket/key/client_ID.
I have tried the OAuth 2.0 setup that Google's documentation mentions, but every time I want to set up an OAuth 2.0 key I do not get the option for "offline" access. And to top it off, it requires you to put in the source URLs that will be accessing the auth request.
Also, Google Cloud Storage does not support the key= parameter like some of their other services. So here I have an API key for my project as well as an OAuth JSON file for my service user, and they are useless.
I can get a curl command to work with a temporary OAuth bearer token, but I need a long-term solution for this.
RUN curl -X GET \
-H "Authorization: Bearer ya29.GlsoB-ck37IIrXkvYVZLIr3u_oGB8e60UyUgiP74l4UZ4UkT2aki2TI1ZtROKs6GKB6ZMeYSZWRTjoHQSMA1R0Q9wW9ZSP003MsAnFSVx5FkRd9-XhCu4MIWYTHX" \
-o "/home/shmac/test.tar.gz" \
"https://www.googleapis.com/storage/v1/b/mybucket/o/my.tar.gz?alt=media"
A long term key/ID/secret that will allow me to download a GCP bucket object from any location.
The solution needs to be lightweight as it will be used to download images inside a docker image, hence the curl requirement.
This is a vague requirement. What is lightweight? No external libraries, everything written in assembly language, must fit in 1 KB, etc.
The GCP bucket and object needs to be secure and not public.
This is a normal requirement. With some exceptions (static file storage for websites, etc.) you want your buckets to be private.
I need a "long" lived ticket/key/client_ID.
My advice is to stop thinking in terms of "long-term keys". The trend in security is to implement short-term keys. In Google Cloud Storage, seven days is considered long-term; 3600 seconds (one hour) is the norm almost everywhere in Google Cloud.
For Google Cloud Storage you have several options. You did not specify the environment, so I will cover user credentials, service account credentials, and presigned-URL based access.
User Credentials
You can authenticate with User Credentials (e.g. username@gmail.com) and save the Refresh Token. Then, when an Access Token is required, you can generate one from the Refresh Token. In my website article about learning the Go language, I wrote a program on Day #8 which implements Google OAuth, saves the necessary credentials, and creates Access Tokens and ID Tokens as required with no further "login" needed. The comments in the source code should help you understand how this is done. https://www.jhanley.com/google-cloud-and-go-my-journey-to-learn-a-new-language-in-30-days/#day_08
This is the choice if you need to use User Credentials. This technique is more complicated and requires protecting the secrets file, but it will give you refreshable long-term tokens.
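For completeness, the refresh itself is a single call to Google's OAuth token endpoint; a hedged curl sketch (the client ID/secret and refresh token are placeholders obtained from the one-time consent flow):
#!/usr/bin/env bash
# Exchange the stored refresh token for a short-lived access token, then download with curl.
ACCESS_TOKEN=$(curl -s https://oauth2.googleapis.com/token \
  -d client_id="${CLIENT_ID}" \
  -d client_secret="${CLIENT_SECRET}" \
  -d refresh_token="${REFRESH_TOKEN}" \
  -d grant_type=refresh_token | jq -r .access_token)
curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -o "my.tar.gz" \
  "https://www.googleapis.com/storage/v1/b/mybucket/o/my.tar.gz?alt=media"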
Service Account Credentials
Service Account JSON key files are the standard method for service-to-service authentication and authorization. Using these keys, Access Tokens valid for one hour are generated. When they expire new ones are created. The max time is 3600 seconds.
This is the choice if you are programmatically accessing Cloud Storage with programs under your control (the service account JSON file must be protected).
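In curl-only terms, the common pattern with a service account key is to let gcloud mint the hour-long token and pass it as a bearer header; a hedged sketch (the key file path and object names are placeholders):
#!/usr/bin/env bash
# Activate the service account once, then mint short-lived access tokens on demand.
gcloud auth activate-service-account --key-file=/secrets/sa-key.json
ACCESS_TOKEN=$(gcloud auth print-access-token)
curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -o "my.tar.gz" \
  "https://www.googleapis.com/storage/v1/b/mybucket/o/my.tar.gz?alt=media"
If pulling gcloud into the Docker image is too heavy, the pure-curl alternative is to sign a JWT with the service account key and exchange it at the token endpoint yourself, which is more involved.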
Presigned-URLs
This is the standard method of providing access to private Google Cloud Storage objects. This method requires the URL and generates a signature with an expiration so that objects can be accessed for a defined period of time. One of your requirements (which is unrealistic) is that you don't want to use source URLs. The max time is seven-days.
This is the choice if you need to provide access to third-parties to access your Cloud Storage Objects.
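A hedged example of that flow with gsutil signurl (the key file and object are placeholders; signing requires the pyopenssl dependency):
#!/usr/bin/env bash
# Generate a URL valid for 7 days (the maximum), then fetch it with plain curl.
gsutil signurl -d 7d /secrets/sa-key.json gs://mybucket/my.tar.gz
# The command prints a signed URL; pass that URL to curl, e.g.:
#   curl -o my.tar.gz "https://storage.googleapis.com/mybucket/my.tar.gz?...signature..."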
IAM Based Access
This method does not use Access Tokens; instead, it uses Identity Tokens. Permissions are assigned to Cloud Storage buckets and objects and not to the IAM member account. This method requires a solid understanding of how identities work in Google Cloud Storage and is the future direction for Google security - meaning that for many services, access will be controlled on a service/object basis and not via roles that grant wide access to an entire service in a project. I talk about this in my article on Identity Based Access Control.
Summary
You have not clearly defined what will be accessing Cloud Storage, how secrets are stored, if the secrets need to be protected from users (public URL access), etc. The choice depends on a number of factors.
If you read the latest articles on my website I discuss a number of advanced techniques on Identity Based Access Control. These features are starting to appear on a number of Google Services in the beta level commands. This includes Cloud Scheduler, Cloud Pub/Sub, Cloud Functions, Cloud Run, Cloud KMS and soon more. Cloud Storage supports Identity Based Access which requires no permissions at all - the identity is used to control access.

Accessing a GCS bucket from GCE without credentials using a S3 library

I am trying to migrate an existing application that was using IAM permissions to write to an S3 bucket from EC2. According to Google documentation, there is a way to keep the same code and take advantage of the compatibility of the GCS APIs with S3. However, using the same code (I am just overriding the endpoint to use storage.googleapis.com instead), I hit the following exception:
com.amazonaws.SdkClientException: The requested metadata is not found at http://169.254.169.254/latest/meta-data/iam/security-credentials/
at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:115)
at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:77)
at
Is there a way to do that without having to pass an access key and a secret key to my app?
If you want to keep using your existing API, the only way is by using a Google developer key. A simple migration always requires these two steps:
Change the request endpoint to the Cloud Storage request endpoint: as you mentioned, you already completed this step by overriding the endpoint:
https://storage.googleapis.com/[BUCKET_NAME]/[OBJECT_NAME]
Replace the AWS access key and secret key with your Google developer key:
Because you are no longer able to keep using the same IAM permissions you previously set on AWS, authorization must be done using an access key and a secret key. You will need to include an Authorization request header using your Google access key and create a signature using your Google secret key:
Authorization: AWS GOOG-ACCESS-KEY:signature
For further information, please check Authenticating in a simple migration scenario.
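As a quick way to validate that simple-migration setup, you can point any S3-compatible client at the interoperability endpoint using a GCS HMAC key pair (created under Cloud Storage's Interoperability settings); a hedged sketch with the AWS CLI standing in for your application (key values are placeholders):
#!/usr/bin/env bash
# GCS HMAC (interoperability) credentials take the place of the AWS key pair.
export AWS_ACCESS_KEY_ID="GOOG1EXAMPLEACCESSID"        # placeholder HMAC access ID
export AWS_SECRET_ACCESS_KEY="exampleSecretKey"        # placeholder HMAC secret
aws s3 ls "s3://my-gcs-bucket/" --endpoint-url https://storage.googleapis.com
If this works, the remaining step is wiring the same HMAC pair and endpoint into your application's AWS SDK credentials provider instead of relying on the EC2 instance metadata service.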

Multiple Gateways to handle production and sandbox requests separately

I need to configure the gateway in separate environments (production and sandbox) and I have a doubt:
https://docs.wso2.com/display/AM200/Maintaining+Separate+Production+and+Sandbox+Gateways#MaintainingSeparateProductionandSandboxGateways-MultipleGatewaystohandleproductionandsandboxrequestsseparately
In the store and publisher configuration I need to configure the <RevokeAPIURL>
In the document https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0#ClusteringAPIManager2.0.0-ConfiguringtheAPIPublisher
<RevokeAPIURL>https://<IP of the Gateway>:8243/revoke</RevokeAPIURL>
Since I have the production and sandbox gateways separated, which gateway address should I use in this configuration?
Thanks a lot.
The <RevokeAPIURL> is used by the store node to call the revoke and token APIs of the gateway node when you (re)generate tokens (with the client credentials grant type) from the Store UI.
But in this deployment pattern there is a limitation: you have to pick one gateway node and configure that one for <RevokeAPIURL> in the store node's api-manager.xml.
For example, let's say you configured the prod gateway there. When you generate keys (either prod or sandbox) from the Store UI, it will call the prod gateway's revoke and token APIs. Since both gateways point to the same Key Manager (or Key Manager cluster), token generation should work without a problem.
The only downside is caching. When you regenerate sandbox keys from the Store UI, it calls the prod gateway and clears the key cache of that gateway only. Therefore, the sandbox gateway's key cache won't be invalidated, and you will be able to call the sandbox gateway's APIs with the old, revoked token for about 15 more minutes until the cache expires.
But if you don't use the Store UI to generate keys with the client credentials grant type - which is rare in a typical production environment, where the password grant type is usually used and the gateway's token API is called directly - you won't experience this limitation.