How to authenticate Google APIs with different service account credentials?

As anyone who has ever had the misfortune of interacting with the panoply of Google CLI binaries programmatically will have realised, authenticating with the likes of gcloud, gsutil, and bq is far from intuitive or trivial, especially when you need to work across different projects.
I am running various cron jobs that interact with Google Cloud Storage and BigQuery for different projects. Since the cron jobs may overlap, renaming config files is clearly not an option; nor would any sane person take that approach.
There must surely be some sort of method of passing a path to a service account's key pair file to these CLI binaries, but bq help yields nothing.
The Google documentation, while verbose, is largely useless, taking one on a tour of how OAuth2 works instead of explaining what must surely be a very common requirement: how to actually authenticate a service account without running commands that modify central config files.
Can any enlightened being tell me whether the engineers at Google decided to add a feature as simple as passing the path to a service account's key pair file to the likes of gsutil and bq? Or perhaps I could simply export some variable so they know which key pair file to use for authentication?
I realise these simplistic approaches may be an insult to the intelligence, but we aren't concerning ourselves with harnessing nuclear fusion, so we needn't even consider what Amazon got so right with their approach to authentication in comparison...

Configuration in the Cloud SDK is global for the user, but you can specify which aspects of that config to use on a per-command basis. To accomplish what you are trying to do, you can run:
gcloud auth activate-service-account foo@developer.gserviceaccount.com --key-file ...
gcloud auth activate-service-account bar@developer.gserviceaccount.com --key-file ...
At this point, both sets of credentials are in your global credentials store.
Now you can run:
gcloud --account foo@developer.gserviceaccount.com some-command
gcloud --account bar@developer.gserviceaccount.com some-command
in parallel, and each will use the given account without interfering.
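Since the original concern was overlapping cron jobs, here is a minimal sketch of that, assuming both accounts were activated as above (some-command is a stand-in for the real workload):
# both jobs run concurrently against different accounts, no config files touched
gcloud --account foo@developer.gserviceaccount.com some-command &
gcloud --account bar@developer.gserviceaccount.com some-command &
wait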
A larger extension of this is 'configurations' which do the same thing, but for your entire set of config (including settings like account and project).
# Create first configuration
gcloud config configurations create myconfig
gcloud config configurations activate myconfig
gcloud config set account foo@developer.gserviceaccount.com
gcloud config set project foo
# Create second configuration
gcloud config configurations create anotherconfig
gcloud config configurations activate anotherconfig
gcloud config set account bar@developer.gserviceaccount.com
gcloud config set project bar
And you can say which configuration to use on a per-command basis.
gcloud --configuration myconfig some-command
gcloud --configuration anotherconfig some-command
You can read more about configurations by running: gcloud topic configurations
All properties have corresponding environment variables that allow you to set that particular property for a single command invocation or for a terminal session. They take the form:
CLOUDSDK_<SECTION>_<PROPERTY>
for example: CLOUDSDK_CORE_ACCOUNT
You can see all the available config settings by running: gcloud help config
The equivalent of the --configuration flag is: CLOUDSDK_ACTIVE_CONFIG_NAME
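As a quick sketch, a per-invocation override therefore needs no flags at all (the account and configuration names are taken from the examples above):
# same effect as --account, for this invocation only
CLOUDSDK_CORE_ACCOUNT=foo@developer.gserviceaccount.com gcloud some-command
# same effect as --configuration
CLOUDSDK_ACTIVE_CONFIG_NAME=anotherconfig gcloud some-command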
If you really want complete isolation, you can also change the Cloud SDK's config directory by setting CLOUDSDK_CONFIG to a directory of your choosing. Note that if you do this, the config is completely separate including the credential store, all configurations, logs, etc.
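A minimal sketch of that full isolation, assuming one key file per job (directory and file paths are illustrative):
# each job gets a private config dir, credential store included
CLOUDSDK_CONFIG=/tmp/gcloud-foo gcloud auth activate-service-account --key-file=/path/to/foo-key.json
CLOUDSDK_CONFIG=/tmp/gcloud-bar gcloud auth activate-service-account --key-file=/path/to/bar-key.json
CLOUDSDK_CONFIG=/tmp/gcloud-foo gcloud some-command &
CLOUDSDK_CONFIG=/tmp/gcloud-bar gcloud some-command &
wait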

Related

Calling gcloud with different service accounts in parallel and automatically using its project ID

I know this has been asked many times because of the complete mess Google have made of authentication, but I can't find an answer. I'm trying to create a CI pipeline that can use service account credentials from a file, and I want to be able to run it locally or from a server. From what I've read, gcloud inexplicably ignores the GOOGLE_APPLICATION_CREDENTIALS env var, so I have to globally set my creds with the following, meaning I can kiss goodbye to any kind of parallelisation:
gcloud auth activate-service-account --key-file=$(GOOGLE_APPLICATION_CREDENTIALS)
Surely it must be possible to run multiple commands in parallel with different SA credentials?
Also, the above approach ignores the project ID specified in the key file, so gcloud tries to target the last project ID I personally set for myself.
Is there a solution to this ridiculousness? I'm looking for a non-interactive, non-destructive (i.e. won't trash my personal creds) way of calling gcloud in parallel with different service accounts and automatically using their project IDs. Is this possible?
Well it actually is possible with this:
CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=$(GOOGLE_CREDENTIALS_FILE) \
CLOUDSDK_CORE_PROJECT=$(GCP_PROJECT) \
gcloud run deploy --allow-unauthenticated $(CLOUD_RUN_CONFIG) --image $(GCR_DOCKER_IMAGE)
It's a shame the docs are so poor it's taken me forever to find this info. Why gcloud doesn't just use the same env vars as all the libraries will remain a mystery to everyone outside Google...
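For anyone else landing here, a hedged sketch of the parallel case this unlocks (the key file names and project IDs are placeholders, and some-command stands in for the real command):
# two service accounts, two projects, global config untouched
CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=./sa-foo.json CLOUDSDK_CORE_PROJECT=foo-project gcloud some-command &
CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=./sa-bar.json CLOUDSDK_CORE_PROJECT=bar-project gcloud some-command &
wait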

Serverless Framework deploy through CircleCI

I'm trying to integrate Serverless into my CircleCI workflow.
I first tried adding the key and secret to AWS Permissions, but that did not work.
Then I added the key and secret as environment variables, with this in my config file:
sudo npm install -g serverless
sls config credentials --provider aws --key $AWS_ACCESS_KEY_ID --secret $AWS_SECRET_ACCESS_KEY
sls deploy -v
But I see the same error:
Serverless Error ---------------------------------------
You are not currently logged in. Follow instructions in http://slss.io/run-in-cicd to setup env vars for authentication.
Anyone had this issue? I could not find an answer or hint online. Thanks.
This likely only applies to those trying to use Serverless Enterprise with the monitoring & dashboards they have set up. @wintvelt's answer wouldn't work for me because if I deleted the org variable, it would likely break the connection needed for Enterprise. So here are the steps for my CircleCI setup:
In CircleCI, create a Context for each environment with the AWS Key ID and Secret as environment variables (putting them in a Context is a nice-to-have; you could use other methods of making Circle inject environment variables into builds).
In your Serverless Framework dashboard, create a new access key which you will use in Circle.
Create a new environment variable SERVERLESS_ACCESS_KEY with the value from step 2.
I got this idea from reading how Seed.run has users integrate with Serverless. For more info read this link: https://seed.run/docs/integrating-with-serverless-enterprise.
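As a rough sketch, the deploy job itself then stays trivial, assuming the context from step 1 injects AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and the SERVERLESS_ACCESS_KEY from step 3 into the build:
# CI deploy step; all three variables come from the CircleCI context
npm install -g serverless
sls deploy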
I just checked: CircleCI has stopped supporting AWS Permissions as a configurable option on the settings page.
You need to set the credentials as environment variables for the project. The credentials must be named exactly AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
That's all you need to do; no additional steps are required. I tried this on my project and it worked.
Your deployment step should simply be
sls deploy
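To try the same thing locally, a minimal sketch (the key values are placeholders; in CI you set these in the project settings rather than exporting them in a shell):
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX     # placeholder
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxx # placeholder
sls deploy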
As a follow-up to the previous answer: I had exactly the same error.
I took the solution suggested in the chat as my starting point.
For me, the fixes I applied were:
In CircleCI project settings, under "AWS permissions" I added the AWS Access Key ID and Secret Access key
In CircleCI project settings, under "Environment variables", I also added the AWS Access Key ID and Secret Access key
From my serverless.yml file, I deleted the line with org variable
For me, the first two steps alone were not enough. I also had to remove the org line from my yml file to make deployment via CircleCI work.
For those landing here with the same issue, hope this helps!

Can I frontload user input, automating Google Cloud SDK gcloud init - interactive command?

I have a very similar question to this one. @cherba already gave a very rich and helpful dissection of the gcloud init command, which has been very helpful.
So what I really want to do in automating gcloud init is:
Front load my interactive input: I want the users to supply all input at the beginning and not be prompted again.
Request a token, before gcloud is even installed, probably from a static permalink; the resulting token should be usable only once, with a limited lifetime, maybe an hour. This is very similar to how gcloud init --console-only already works, except with an unchanging initial URL.
I specifically want this to be for a user account, not a service account.
This would allow me to prompt the user, upfront, for all configuration input, and build the fully configured system automatically, over lunch or a long coffee break; not needing additional babysitting.
The goal here is distinct development environments, not deploying to an array of boxes.
How can I accomplish this?
This is not supported officially and is not recommended. Service accounts are meant for this kind of thing. You should use service accounts as explained in the earlier answer.
What the SDK is essentially doing is submitting a token request to https://accounts.google.com/o/oauth2/auth with the following scopes:
'https://www.googleapis.com/auth/userinfo.email'
'https://www.googleapis.com/auth/cloud-platform'
'https://www.googleapis.com/auth/appengine.admin'
'https://www.googleapis.com/auth/compute'
'https://www.googleapis.com/auth/accounts.reauth'
For this to succeed you need to provide the regular OAuth parameters like client_id and client_secret. To generate these you will need to register your app as an OAuth app in the developer console.
This may not work if third-party authorizations are not supported. I have not tried it.
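For the curious, a hedged sketch of what such an authorization URL looks like (client_id and redirect_uri are placeholders you would obtain by registering your own OAuth app; this is not an officially supported flow):
CLIENT_ID="your-client-id.apps.googleusercontent.com"   # placeholder from the developer console
REDIRECT_URI="urn:ietf:wg:oauth:2.0:oob"                # out-of-band flow, like --console-only
SCOPE="https://www.googleapis.com/auth/userinfo.email%20https://www.googleapis.com/auth/cloud-platform"
echo "https://accounts.google.com/o/oauth2/auth?client_id=${CLIENT_ID}&redirect_uri=${REDIRECT_URI}&response_type=code&scope=${SCOPE}"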
You said "Front load my interactive input" and also "Request a token, before gcloud is even installed". The problem with that request is that you will need to install gcloud at some point, and gcloud uses its own authentication methods to connect; authentication therefore has to happen after gcloud is installed, because you will always be running some "gcloud ..." command to connect. The previous post that you linked explains this.
Because of this, I suspect you need a workflow where simultaneous gcloud commands run for multiple users/projects at the same time, by running gcloud many times in parallel. A shell session runs one foreground command at a time, so "front loading" the authentication (as you call it) means either using the screen command inside one SSH session or running multiple SSH sessions at the same time. If that's not what you need, then a simple shell script should do; it will run commands one after the other rather than in parallel.
For example, let's say you want to install a package that will take a long time while being able to run another command at the same time; then you could do the following:
$ screen
$ sudo apt-get install [package-name]
Press "Ctrl-A" and "d" to temporarily exit this session
$ … (do another process here)
$ screen -r (re-attaches the session to continue the apt-get process from the second step)
The example above is somewhat the equivalent of having multiple SSH sessions open at the same time. You could open multiple "screens" and launch multiple authentications at the same time, thereby also controlling when you want to stop a session. Keep in mind that if you run things in parallel, you will definitely need to load the authentication file as mentioned in the post you linked. Otherwise, you can use simple shell scripting and pass arguments. Since I'm not sure of the process that comes before/after your authentication, it's hard to provide a more precise example; there's a lot to consider and many unknowns about your workflow. I've included references below that show all the possibilities.
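As a minimal sketch of the shell-script-with-arguments variant (the key file and project are passed in; some-command is a placeholder for your real workload):
#!/bin/bash
# usage: ./run-for.sh /path/to/key.json my-project
gcloud auth activate-service-account --key-file="$1"
gcloud config set project "$2"
gcloud some-command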
References:
- https://www.linode.com/docs/networking/ssh/using-gnu-screen-to-manage-persistent-terminal-sessions/
- https://www.geeksforgeeks.org/screen-command-in-linux-with-examples/
- https://www.lifewire.com/pass-arguments-to-bash-script-2200571
- https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account
- https://cloud.google.com/sdk/gcloud/reference/auth/login
- https://cloud.google.com/sdk/docs/scripting-gcloud

Restart Kubernetes API server with different options

I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm, following this guide. I hit some issues, but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I went to read the authentication page of the documentation, and I decided I want to add authentication via a Static Password File. To do so, I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I run ps aux | grep kube-apiserver, this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
Couple of questions I have:
So where are all these options set?
Can I just kill this process and restart it with the options I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This JSON file contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after the change to the file, kubectl stopped working and the API was down).
After a reboot the whole system was working again.
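For reference, a hedged sketch of the edit itself (paths per the kubeadm defaults of that era; back the file up first):
# on the master node
sudo cp /etc/kubernetes/manifests/kube-apiserver.json /root/kube-apiserver.json.bak
sudo vi /etc/kubernetes/manifests/kube-apiserver.json   # append --basic-auth-file=SOMEFILE
A full reboot works, but shouldn't strictly be needed: the kubelet watches the manifests directory and restarts the static pod when the file changes.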
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the keys from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui
I just found this for a similar use case, where the API server was crashing after adding an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
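As a hedged sketch, the manifest then needs both the flag and a matching mount (the names and paths are illustrative; newer kubeadm versions use a YAML manifest instead of JSON):
# in /etc/kubernetes/manifests/kube-apiserver.yaml, roughly:
#   command:      ... --basic-auth-file=/etc/kubernetes/auth/basic_auth.csv
#   volumeMounts: - name: auth-file, mountPath: /etc/kubernetes/auth, readOnly: true
#   volumes:      - name: auth-file, hostPath: { path: /etc/kubernetes/auth }
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml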

(EC2) Launch Windows instance programmatically via command line

I'd like to launch a Windows 2008 (64-bit, base install) instance programmatically, kinda like clicking the Launch Instance link and following the "Create a New Instance" wizard.
I read about this command ec2-run-instances, and I tried running it in PuTTY using this syntax:
/opt/aws/bin/ec2-run-instances ami-e5784391 -n 1 \
--availability-zone eu-west-1a --region eu-west-1 --instance-type m1.small --private-key /full/path/MyPrivateKey.pem --group MyRDP
but it always complains that:
Required option '-C, --cert CERT' missing (-h for usage)
According to the documentation, this option isn't required!!
Can someone tell me what's wrong anyway? I'm just trying to programmatically launch a fresh Windows install, run some tests in the cloud and shut it down after that.
The error message is correct (just try adding --cert ;) - to what documentation are you referring here?
The requirement is clearly outlined in the Microsoft Windows Guide for Amazon EC2, specifically in Task 4: Set the EC2_PRIVATE_KEY and EC2_CERT Environment Variables:
The command line tools need access to an X.509 certificate and a
corresponding private key that are associated with your account. [...]
You can either specify your credentials with the --private-key and
--cert parameters every time you issue a command or you can create environment variables that point to the credential files on your local
system. If the environment variables are properly configured, you can
omit the parameters when you issue a command.
[emphasis mine]
Maybe the option of using environment variables has been misleading somehow somewhere?
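For completeness, a minimal sketch of the environment-variable route the guide describes (the credential file names are illustrative; with these set, the --private-key and --cert flags can be omitted):
export EC2_PRIVATE_KEY=/full/path/pk-XXXXXXXX.pem
export EC2_CERT=/full/path/cert-XXXXXXXX.pem
/opt/aws/bin/ec2-run-instances ami-e5784391 -n 1 --availability-zone eu-west-1a --region eu-west-1 --instance-type m1.small --group MyRDP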
Alternative
Please note that you can ease and speed up working with EC2 considerably by using alternate scripting environments covering the same ground, in particular the excellent boto, which is a Python package that provides interfaces to Amazon Web Services.
Boto uses the nowadays more common authentication scheme based on access keys only rather than X.509 certificates (e.g. an AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY pair), which furthermore can (and should) be managed via AWS Identity and Access Management (IAM) to avoid the risk of exposing your main AWS account credentials in the first place. See my answer to How to download an EC2 X.509 certificate with an IAM User account? for more details on this.
Good luck!