gcloud: Authenticating with a Service Account

Authenticating with a service account using gcloud
We are using the command below to activate a service account with a .json key file:
gcloud auth activate-service-account <service_account> --key-file <file_name>
After doing this we are able to deploy templates.
But we are not supposed to keep a JSON key file on the server for authentication purposes.
Is there another way to authenticate for deploying templates?
Is there a way to deploy templates using a client ID and client secret, without a JSON key file?

To authorize Cloud SDK tools without storing a private key, you can alternatively use OAuth tokens:
gcloud init on your local terminal; see the Run gcloud init documentation
gcloud init on a Compute Engine VM instance; see the Set up gcloud compute documentation
To avoid prompts, provide the parameters for gcloud init on the command line (this works only while prompts are enabled, i.e. $ gcloud config set disable_prompts false):
$ gcloud init --account=[account-name] --configuration=[config-name] --project=[prj-name] --console-only
For more details, see the Managing SDK Configurations and Managing SDK Properties documentation.
There is also Google Cloud Shell, which comes with 5 GB of persistent disk storage and requires no additional authorization to use the Cloud SDK; see Starting Cloud Shell.
To provide authorization you can also use the Cloud Identity and Access Management API. You may also find the answer to a similar question on Stack Overflow helpful.
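As an illustration of the Compute Engine option: when gcloud runs on a GCE VM, it picks up the VM's attached service account from the metadata server automatically, so templates can be deployed with no key file on the machine. A minimal sketch (the deployment name and config file are placeholders):
$ gcloud auth list   # the VM's service account shows as the active credential
$ gcloud deployment-manager deployments create my-deployment --config config.yaml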

Related

Vertex AI Managed Notebooks Authentication during Execution

I created a Managed Notebooks instance using the Compute Engine service account. I have some Python code which reads from a BigQuery table and does some processing. I ran gcloud auth application-default login, logged into my Google account, and was then able to access that BQ table (which otherwise gave an access-denied error).
Now I want to run this notebook using the Executor. However, I get access-denied errors, since the Executor runs the notebook in a tenant project. This page mentions:
Also, the executor cannot use end-user credentials to authenticate access to resources, for example, the gcloud auth login command.
To resolve these issues, in your notebook file's code, authenticate access to resources through a service account.
Then when you create an execution or schedule, specify the service account.
How do I authenticate access to resources through a service account? I tried setting the Compute Engine service account as the service account to be used in the Executor settings, but it still gives me an access-denied error for that BQ table. What can I do within my code that is similar to running gcloud auth application-default login?

HashiCorp Vault: Passing an AWS dynamic secret to a script

1/ Every day at 3am, we run a script alfa.sh on server A in order to send some backups to AWS (an S3 bucket).
As a requirement we had to configure AWS (aws configure) on the server, which means the secret key and access key are stored on this server. We would now like to use short-TTL credentials valid only from 3:00am to 3:15am. HashiCorp Vault does that very well.
2/ On server B we have HashiCorp Vault installed, and we managed to generate short-TTL dynamic secrets (access key / secret key) for our S3 bucket.
3/ We would now like to pass the daily generated dynamic secrets to our alfa.sh. Any idea how to achieve this?
4/ Since we are generating a new secret key and access key, I understand that a new AWS configuration (aws configure) will have to be performed on server A in order to be able to perform the backup. Any experience with this?
DISCLAIMER: I have no experience with aws configure, so someone else may have to answer that part of the question. But I believe it's not super relevant to the problem here, so I'll give my partial answer.
First things first - solve your "secret zero" problem. If you are using the AWS secrets engine, it seems unlikely that your server is running on AWS, as you could otherwise skip the middleman and just give your server an IAM policy that allows direct access to the S3 resource. So find the best Vault auth method for your use case. If your server runs in a cloud like AWS, Azure, or GCP, in a container platform like Kubernetes or Cloud Foundry, or has a JWT delivered along with a JWKS endpoint Vault can trust, target one of those; if all else fails, use AppRole authentication, delivering a wrapped token via a trusted CI solution.
Then, log into Vault in your shell script using those credentials. The login will look different depending on the auth method chosen. You can also leverage Vault Agent to handle the login automatically for you and cache secrets locally.
#!/usr/bin/env bash
## Dynamic Login
vault login -method="${DYNAMIC_AUTH_METHOD}" role=my-role
## OR AppRole Login
resp=$(vault write -format=json auth/approle/login role-id="${ROLE_ID}" secret-id="${SECRET_ID}")
VAULT_TOKEN=$(echo "${resp}" | jq -r .auth.client_token)
export VAULT_TOKEN
Then, pull down the AWS dynamic secret. Each time you read a creds endpoint you will get a new credential pair, so it is important not to make multiple API calls here; instead, cache the entire API response, then parse each necessary field out of that one response.
#!/usr/bin/env bash
resp=$(vault read -format=json aws/creds/my-role)
AWS_ACCESS_KEY_ID=$(echo "${resp}" | jq -r .data.access_key)
export AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=$(echo "${resp}" | jq -r .data.secret_key)   ## AWS tools read AWS_SECRET_ACCESS_KEY, not AWS_SECRET_KEY_ID
export AWS_SECRET_ACCESS_KEY
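This also addresses point 4/: the AWS CLI and SDKs read AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY straight from the environment, so no new aws configure run (and no credentials file on disk) is needed on server A. A minimal sketch of the backup step under that assumption, with the bucket name and archive path as placeholders:
#!/usr/bin/env bash
## The exported dynamic credentials are picked up automatically;
## no "aws configure" step is required.
## If your Vault role issues STS credentials, also export
## AWS_SESSION_TOKEN from the .data.security_token field.
aws s3 cp /var/backups/backup.tar.gz "s3://my-backup-bucket/"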
This is a very general answer establishing a pattern; the particulars of your environment will determine the manner of execution. You can improve this pattern by leveraging features like CIDR binds, limited numbers of uses for auth credentials, token wrapping, and delivery via a CI solution.

How to know my Google Cloud Identity from the command line?

I'm running Python code on my computer that makes calls to Google Cloud Platform. I'm trying to find out whether my application is using my own credentials or a service account key to get authorized on GCP.
On AWS, I could use aws sts get-caller-identity to know who the caller is (IAM user or IAM role).
Is there a GCP equivalent, something like gcloud whoami, that I could run from the command line or from my Python code itself to know the identity used by my application?
Use the command gcloud auth list in your CLI to view the active credentialed account.
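For example (the accounts shown are illustrative placeholders; the asterisk marks the active one):
$ gcloud auth list
       Credentialed Accounts
ACTIVE  ACCOUNT
*       my-user@example.com
        my-sa@my-project.iam.gserviceaccount.com
$ gcloud config get-value account   # prints only the active account
my-user@example.com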

How to authenticate Google service account credentials

I have a Google service account private key (JSON format) from the Google console.
How do I create a new client in Golang (which Google API do I use), and how do I authenticate the credentials I got without setting an environment variable?
I would like to provide the Google service account credentials manually in Golang.
I started by passing the JSON file as a byte array:
creds, err := google.CredentialsFromJSON(ctx, blob), where blob is the byte array of the JSON file.
I can create a client successfully with cloud.google.com/go/secretmanager/apiv1 even after I changed the private key (ouch). So I wonder: at what point, and how, do the creds get authenticated?
Thanks.
The question is unclear; allow me to rephrase it as "How do you create a valid credential from a service account's key file?"
Google indeed implements a strategy for giving an authenticated identity to the calling process.
At a coarse grain, it first looks for an environment variable called GOOGLE_APPLICATION_CREDENTIALS whose value contains the path to a valid service account key file. Otherwise it uses the default service account of the piece of compute it runs on (GCE, a GKE pod, App Engine, a Cloud Function, ...).
Failing all of that, you get an error.
Working locally, a good practice for setting this default credential is to use the Cloud SDK command gcloud auth application-default login. That way the application acts on behalf of the logged-in person (you, for instance) and consumes with your permissions and quotas.
Otherwise you can set the environment variable manually to point to the service account key file you downloaded.
Now, if you run outside of a Google Cloud environment, you can manually build a credential from a key file, as you are doing; a sketch follows below. What you must understand is that the credential you forge is an argument to any Google API client constructor.
Once the API client is built with your credential, you just consume it by calling the methods it exposes. Authentication happens under the hood: every call to a Google API is authenticated with an OAuth2 token, whose retrieval flow is described here.
You can dig into the client API's source code if you need to be convinced, but the nice thing is that you don't have to.
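A minimal sketch of that pattern, assuming the golang.org/x/oauth2/google and google.golang.org/api/option packages; the key file path is a placeholder, and the Secret Manager client is just an example (any Google API client constructor accepts the credential the same way):
package main

import (
	"context"
	"log"
	"os"

	secretmanager "cloud.google.com/go/secretmanager/apiv1"
	"golang.org/x/oauth2/google"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// Read the service account key file yourself instead of relying on
	// the GOOGLE_APPLICATION_CREDENTIALS environment variable.
	blob, err := os.ReadFile("sa-keyfile.json") // placeholder path
	if err != nil {
		log.Fatal(err)
	}

	// CredentialsFromJSON only parses the key file and attaches a token
	// source; nothing is checked against Google yet, which is why a
	// corrupted private key does not fail here. The scope bounds what
	// the resulting tokens are allowed to do.
	creds, err := google.CredentialsFromJSON(ctx, blob,
		"https://www.googleapis.com/auth/cloud-platform")
	if err != nil {
		log.Fatal(err)
	}

	// The forged credential is just an argument to the client constructor.
	client, err := secretmanager.NewClient(ctx, option.WithCredentials(creds))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Authentication actually happens on the first API call: the client
	// exchanges a JWT signed with your private key for an OAuth2 access
	// token, so an invalid key fails here, not at construction time.
	_ = client // call e.g. client.AccessSecretVersion(ctx, ...) here
}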

Add GitHub Identity Provider to AWS Cognito

I have created a GitHub OAuth app and I am trying to add it as an OIDC application to AWS Cognito.
However, I cannot find a proper overview of the endpoints and data to fill in anywhere in the GitHub docs.
The following fields are required:
Issuer => ?
Authorization endpoint => https://github.com/login/oauth/authorize (?)
Token endpoint => https://github.com/login/oauth/access_token (?)
Userinfo endpoint => https://api.github.com/user (?)
Jwks uri => ?
I couldn't find the JWKS URI anywhere. Any help would be highly appreciated.
It seems there is no way to get this working out of the box: GitHub's OAuth endpoints implement plain OAuth 2.0, not OpenID Connect, so GitHub issues no ID tokens and publishes no issuer or JWKS URI for Cognito to consume.
https://github.com/TimothyJones/github-cognito-openid-wrapper seems to be a way to get it working.
If any Cognito dev sees this: please add GitHub/GitLab/Bitbucket support.
GitLab 14.7 (January 2022) might help:
OpenID Connect support for GitLab CI/CD
Connecting GitLab CI/CD to cloud providers using environment variables works fine for many use cases.
However, it doesn’t scale well if you need advanced permissions management or would prefer a signed, short-lived, contextualized connection to your cloud provider.
GitLab 12.10 shipped initial support for JWT token-based connection (CI_JOB_JWT) to enable HashiCorp Vault users to safely retrieve secrets. That implementation was restricted to Vault, while the logic we built JWT upon opened up the possibility to connect to other providers as well.
In GitLab 14.7, we are introducing a CI_JOB_JWT_V2 environment variable that can be used to connect to AWS, GCP, Vault, and likely many other cloud services.
Please note that this is an alpha feature and not ready for production use. Your feedback is welcomed in this epic.
For AWS specifically, with the new CI_JOB_JWT_V2 variable, you can connect to AWS to retrieve secrets, or to deploy within your account. You can also manage access rights to your cluster using AWS IAM roles.
You can read more on setting up OIDC connection with AWS.
The new variable is automatically injected into your pipeline but is not backward compatible with the current CI_JOB_JWT.
Until GitLab 15.0, the CI_JOB_JWT will continue to work normally but this will change in a future release. We will notify you about the change in time.
The secrets stanza today uses the CI_JOB_JWT_V1 variable. If you use the secrets stanza, you don’t have to make any changes yet.
See Documentation and Issue.