GCP external application to App Engine endpoint authentication

We are building a small web-UI using React that will be served up by GCP App-Engine (standard). The UI will display a carousel of images along with some image metadata to our client's employees when they click on a link inside of their internal GIS system. We are looking to authenticate these calls since the App-Engine endpoint will be exposed publicly, and are hoping to use a GCP Service Account private key that will be used by the client to create a time-limited JSON web-token that will give temporary access to the GIS user when they open the web-UI. We are following this GCP documentation. In summary:
We create a new service account with the necessary IAM permissions in GCP, along with a key
We share the private key with the client, which they then use to sign a JSON Web Token that is passed in the call to our endpoint when a user accesses our web-UI from their GIS system
The call is authenticated by the GCP backend (ESP/OpenAPI)
Question: is this the recommended approach for an external system accessing GCP resources, or is there a better pattern for this type of situation?

I believe this is the recommended approach for your use case; it matches the service-account JWT flow described in the official documentation you are following.
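As a rough illustration of the client side, here is a minimal sketch of the step where the client signs a short-lived JWT with the shared service-account key and sends it to the endpoint. It assumes the Node "jsonwebtoken" package, a hypothetical key file path, and a hypothetical Cloud Endpoints audience; adapt the claims to whatever your OpenAPI/ESP configuration expects.

// Minimal sketch, assuming the "jsonwebtoken" npm package and a hypothetical
// Endpoints service name configured as the audience in the OpenAPI spec.
import * as fs from "fs";
import * as jwt from "jsonwebtoken";

const key = JSON.parse(fs.readFileSync("service-account.json", "utf8")); // shared key file (hypothetical path)
const now = Math.floor(Date.now() / 1000);

const token = jwt.sign(
  {
    iss: key.client_email,                          // the shared service account
    sub: key.client_email,
    aud: "my-api.endpoints.my-project.cloud.goog",  // hypothetical; must match x-google-audiences
    iat: now,
    exp: now + 3600,                                // time-limited token
  },
  key.private_key,
  { algorithm: "RS256", keyid: key.private_key_id }
);

// The GIS system then calls the App Engine endpoint with:
//   Authorization: Bearer <token>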

Related

How to allow authenticated Identity Platform user to upload to Cloud Storage from web

I am not able to use Firebase Storage; however, I am using Identity Platform (Firebase Auth). Once a user is logged in to my web application, I would like them to be able to upload to a Cloud Storage bucket. The current way I am thinking about doing this is by having a Cloud Function which first uses the Firebase Admin library to verify the user's token and then generates a signed URL for the upload.
Is this the correct method for doing this?
Google Cloud Identity Platform uses the same SDKs and most of the same back-end as Firebase Authentication. The main difference is the set of features it supports, and its pricing model.
If your project is set up for Cloud Identity Platform, you can still use the Firebase SDKs for Cloud Storage to upload, and use Firebase's server-side security rules to control read/write access. A common security model to get started with is content-owner only access.
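If you go that route, here is a minimal sketch of the client-side upload, assuming the modular Firebase Web SDK (v9+), an already-initialized app, and an object path scoped per user so that content-owner only rules can protect it.

// Sketch, assuming the modular Firebase Web SDK and an app already initialized
// for your Identity Platform project.
import { getAuth } from "firebase/auth";
import { getStorage, ref, uploadBytes } from "firebase/storage";

async function uploadForCurrentUser(file: File): Promise<void> {
  const user = getAuth().currentUser;
  if (!user) throw new Error("User must be signed in first");

  // Path is scoped to the user's uid so "content-owner only" security rules
  // (allow read, write: if request.auth.uid == userId) can be applied.
  const objectRef = ref(getStorage(), `users/${user.uid}/${file.name}`);
  await uploadBytes(objectRef, file);
}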

Use Google Storage Transfer API to transfer data from external GCS into my GCS

I am working on a web application which comprises a ReactJS frontend and a Java Spring Boot backend. This application will require users to upload data from their own Google Cloud Storage into my Google Cloud Storage.
The application flow will be as follows -
The frontend asks the user for read access to their storage. For this I have used OAuth 2.0 access tokens, as described here
The generated OAuth token will be passed to the backend.
The backend will also have credentials for my service account to allow it to access my Google Cloud APIs. I have created the service account with required permissions and generated the key using the instructions from here
The backend will use the generated access token and my service account credentials to transfer the data.
In the final step, I want to create a transfer job using the Google Storage Transfer API. I am using the Java API client provided here for this.
I am having difficulty providing the authentication credentials to the transfer api.
In my understanding, there are two different authentications required - one for reading the user's bucket and another for starting the transfer job and writing the data in my cloud storage. I haven't found any relevant documentation or working examples for my use-case. In all the given samples, it is always assumed that the same service account credentials will have access to both the source and sink buckets.
tl;dr
Does the Google Storage Transfer API allow setting different source and target credentials for GCS-to-GCS transfers? If yes, how does one provide these credentials in the transfer job specification?
Any help is appreciated. Thanks!
Unfortunately, this is not allowed by the GCS Transfer API; for it to work, the service account would need access to both the source and the sink buckets, as you mentioned.
You can open a feature request in Google's Issue Tracker if you'd like, so that Google's product team can consider such functionality for newer versions of the API; you could also mention that this subject is not covered in the documentation, so it can be improved.

Getting 403 Forbidden on Google Cloud Run with API key

I have set up a very simple Node application with Express on Google Cloud Run.
It works great, but when I answer "Allow unauthenticated invocations to [service] (y/N)?" with No, I get a 403 Forbidden even though I created an API key and I'm making the calls with key=[My API key] added to the query string, as described in the documentation. My URL ends up looking like
https://service-wodkdj77sba-ew.a.run.app?key=[My API key].
I've tried with restricted (for Google Cloud Run) and unrestricted API keys.
Is there anything I'm missing?
Cloud Run, like many products in GCP, doesn't support API key authorization. As detailed in the link you provided, only a subset of services use API keys.
It's also mentioned there:
API keys do not identify the user or the application making the API request, so you can't restrict access to specific users or service accounts.
The Cloud Run authentication section, on the other hand, specifies this here:
All Cloud Run services are deployed privately by default, which means that they can't be accessed without providing authentication credentials in the request.
In other words, Cloud Run's expectations and the API key's capabilities aren't compatible.
However, if you want to access your private Cloud Run service with an API key, a workaround exists. You can deploy an Extensible Service Proxy (ESP) on another Cloud Run service. In it, authenticate the API key and, if it's valid, call the private Cloud Run service with the service account of your ESP (which must have the roles/run.invoker role).
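For reference, the authenticated hop behind such a proxy looks roughly like this. The sketch assumes the Node "google-auth-library", Application Default Credentials, and a service account that holds roles/run.invoker on the private service.

// Sketch of calling a private Cloud Run service with an ID token, assuming the
// Node "google-auth-library" and credentials for an account with roles/run.invoker.
import { GoogleAuth } from "google-auth-library";

async function callPrivateService(): Promise<void> {
  const url = "https://service-wodkdj77sba-ew.a.run.app/"; // the private service
  const auth = new GoogleAuth();
  const client = await auth.getIdTokenClient(url); // audience = service URL
  const res = await client.request({ url });       // adds Authorization: Bearer <ID token>
  console.log(res.data);
}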

Long lived key/token based way to download google storage bucket objects with curl?

O.k. my fellow devops and coders. I have spent the last week trying to figure this out with Google (GCP) Cloud Storage objects. Here is my objective.
The solution needs to be lightweight as it will be used to download images inside a Docker image, hence the curl requirement.
The GCP bucket and object need to be secure and not public.
I need a "long" lived ticket/key/client_ID.
I have tried the OAuth 2.0 setup that Google's documentation mentions, but every time I want to set up an OAuth 2.0 key I do not get the option for "offline" access. AND to top it off, it requires you to put in the source URLs that will be accessing the auth request.
Also, Google Cloud Storage does not support the key= parameter like some of Google's other services. So here I have an API key for my project as well as an OAuth JSON file for my service user, and they are useless.
I can get a curl command to work with the temporary OAuth bearer token, but I need a long-term solution for this.
RUN curl -X GET \
-H "Authorization: Bearer ya29.GlsoB-ck37IIrXkvYVZLIr3u_oGB8e60UyUgiP74l4UZ4UkT2aki2TI1ZtROKs6GKB6ZMeYSZWRTjoHQSMA1R0Q9wW9ZSP003MsAnFSVx5FkRd9-XhCu4MIWYTHX" \
-o "/home/shmac/test.tar.gz" \
"https://www.googleapis.com/storage/v1/b/mybucket/o/my.tar.gz?alt=media"
What I need is a long-term key/ID/secret that will allow me to download a GCP bucket object from any location.
The solution needs to be lightweight as it will be used to download images inside a Docker image, hence the curl requirement.
This is a vague requirement. What is lightweight? No external libraries, everything written in assembly language, must fit in 1 KB, etc.
The GCP bucket and object need to be secure and not public.
This is a normal requirement. With some exceptions (static file storage for websites, etc.) you want your buckets to be private.
I need a "long" lived ticket/key/client_ID.
My advice is to stop thinking in terms of "long-term keys". The trend in security is to implement short-term keys. In Google Cloud Storage, seven days is considered long-term. 3600 seconds (one hour) is the norm almost everywhere in Google Cloud.
For Google Cloud Storage you have several options. You did not specify the environment, so I will cover user credentials, service account credentials, presigned URLs, and IAM-based access.
User Credentials
You can authenticate with User Credentials (e.g. username@gmail.com) and save the Refresh Token. Then, when an Access Token is required, you can generate one from the Refresh Token. In my website article about learning the Go language, I wrote a program on Day #8 which implements Google OAuth, saves the necessary credentials and creates Access Tokens and ID Tokens as required with no further "login" required. The comments in the source code should help you understand how this is done. https://www.jhanley.com/google-cloud-and-go-my-journey-to-learn-a-new-language-in-30-days/#day_08
This is the choice if you need to use User Credentials. This technique is more complicated and requires protecting the secrets file, but it will give you refreshable long-term tokens.
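A rough sketch of that refresh-token flow, assuming the Node "google-auth-library" and hypothetical environment variables holding the OAuth client ID, client secret, and the refresh token saved during the initial consent:

// Sketch: mint a fresh access token from a stored refresh token. The
// environment variable names are hypothetical.
import { OAuth2Client } from "google-auth-library";

async function accessTokenFromRefreshToken(): Promise<string> {
  const client = new OAuth2Client(
    process.env.OAUTH_CLIENT_ID,
    process.env.OAUTH_CLIENT_SECRET
  );
  client.setCredentials({ refresh_token: process.env.OAUTH_REFRESH_TOKEN });

  const { token } = await client.getAccessToken(); // refreshed on demand
  if (!token) throw new Error("Could not obtain access token");
  return token; // use as "Authorization: Bearer <token>" in the curl call
}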
Service Account Credentials
Service Account JSON key files are the standard method for service-to-service authentication and authorization. Using these keys, Access Tokens valid for one hour are generated. When they expire new ones are created. The max time is 3600 seconds.
This is the choice if you are programmatically accessing Cloud Storage with programs under your control (the service account JSON file must be protected).
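As an illustration, assuming the Node "google-auth-library" and the downloaded service account JSON key (hypothetical file name), generating a one-hour access token with read-only Storage scope looks roughly like this:

// Sketch: obtain a short-lived access token from a service account key file.
import { GoogleAuth } from "google-auth-library";

async function storageAccessToken(): Promise<string> {
  const auth = new GoogleAuth({
    keyFilename: "service-account.json", // hypothetical path to the JSON key
    scopes: ["https://www.googleapis.com/auth/devstorage.read_only"],
  });
  const token = await auth.getAccessToken();
  if (!token) throw new Error("Could not obtain access token");
  return token; // substitute into the Bearer header of the curl command above
}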
Presigned-URLs
This is the standard method of providing access to private Google Cloud Storage objects. This method requires the URL and generates a signature with an expiration so that objects can be accessed for a defined period of time. One of your requirements (which is unrealistic) is that you don't want to use source URLs. The max time is seven days.
This is the choice if you need to provide access to third-parties to access your Cloud Storage Objects.
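For example, assuming the Node "@google-cloud/storage" client, a service account key, and the bucket/object names from the question, a seven-day V4 signed URL can be generated like this and then downloaded with plain curl:

// Sketch: generate a V4 signed URL valid for seven days (the maximum).
import { Storage } from "@google-cloud/storage";

async function signedDownloadUrl(): Promise<string> {
  const storage = new Storage({ keyFilename: "service-account.json" }); // hypothetical key path
  const [url] = await storage
    .bucket("mybucket")
    .file("my.tar.gz")
    .getSignedUrl({
      version: "v4",
      action: "read",
      expires: Date.now() + 7 * 24 * 60 * 60 * 1000,
    });
  return url; // curl -o my.tar.gz "<url>" then works with no further credentials
}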
IAM Based Access
This method does not use Access Tokens; instead, it uses Identity Tokens. Permissions are assigned to Cloud Storage buckets and objects and not to the IAM member account. This method requires a solid understanding of how identities work in Google Cloud Storage and is the future direction for Google security - meaning that for many services, access will be controlled on a service/object basis and not via roles that grant wide access to an entire service in a project. I talk about this in my article on Identity Based Access Control.
Summary
You have not clearly defined what will be accessing Cloud Storage, how secrets are stored, if the secrets need to be protected from users (public URL access), etc. The choice depends on a number of factors.
If you read the latest articles on my website I discuss a number of advanced techniques on Identity Based Access Control. These features are starting to appear on a number of Google Services in the beta level commands. This includes Cloud Scheduler, Cloud Pub/Sub, Cloud Functions, Cloud Run, Cloud KMS and soon more. Cloud Storage supports Identity Based Access which requires no permissions at all - the identity is used to control access.

Limiting Access to API Gateway (and AWS Lambda) in a package

We have a package that we share with our customers. In the package, we have a chunk of code that makes HTTP request callouts to our central API Gateway. As of now, our API Gateway is open and accepts requests from everywhere, which is not good. I want to limit access to users who are using our software. The only solution I have found is using IAM and providing authorization, which would require us to include our access keys in the package. Our users can install our package in any environment they want, and we have no control over that environment. So I think a viable option is to create a generic user policy with minimal access to allow our users to call our API Gateway. However, putting an access key in the code doesn't seem like a good idea. Another option is to provide our customers with access keys, but that also has overhead. What is a better alternative that is more secure and easy to maintain?
You can use built-in API Gateway API Key functionality when IAM policies aren't possible.
Since your clients could be on any infrastructure rather than being limited to AWS, the API Gateway service provides a generic API key solution, which allows you to restrict client traffic to your API Gateway by enforcing that client requests include API keys. This API key interface is part of the "API Usage Plan" feature (a minimal client-side sketch follows the setup steps below).
This document explains how to use the console to set up an API Gateway to enforce that client traffic bears an API key:
To set up API keys, do the following:
Configure API methods to require an API key.
Create or import an API key for the API in a region.
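Once a usage plan and key are in place, the client side is just an extra header. A minimal sketch, assuming a hypothetical invoke URL and an API key kept outside the source code in an environment variable:

// Sketch of a client call to an API Gateway stage that requires an API key.
// The invoke URL and environment variable name are hypothetical.
async function callApi(): Promise<unknown> {
  const res = await fetch(
    "https://abc123.execute-api.us-east-1.amazonaws.com/prod/resource",
    { headers: { "x-api-key": process.env.API_GATEWAY_KEY ?? "" } }
  );
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}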
Your clients can implement a "secret storage" solution, in order to avoid putting their API keys into their source code.
It certainly isn't wise for your clients to store their API keys in plain text inside their source code. Instead, they could use a secret storage solution to store the API keys outside of their codebase while still giving their applications access to the secret.
This article describes an example solution for secure secret storage (e.g. secure API key storage) which grants an application access to the application secret without putting the unencrypted secret into the source code. It uses Amazon KMS + Cryptex, but the same principle can be applied with other technologies: http://technologyadvice.github.io/lock-up-your-customer-accounts-give-away-the-key/
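The same idea can be sketched without Cryptex using the AWS SDK v3 KMS client: ship only the KMS-encrypted ciphertext with the application and decrypt it at runtime. The environment variable name below is hypothetical.

// Sketch: keep only the KMS-encrypted API key with the application and decrypt
// it at runtime. Assumes "@aws-sdk/client-kms" and a hypothetical
// ENCRYPTED_API_KEY environment variable holding base64 ciphertext.
import { KMSClient, DecryptCommand } from "@aws-sdk/client-kms";

async function loadApiKey(): Promise<string> {
  const kms = new KMSClient({});
  const { Plaintext } = await kms.send(
    new DecryptCommand({
      CiphertextBlob: Buffer.from(process.env.ENCRYPTED_API_KEY ?? "", "base64"),
    })
  );
  if (!Plaintext) throw new Error("KMS decryption returned no plaintext");
  return Buffer.from(Plaintext).toString("utf8");
}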