I'm hosting a static website on S3 that uses an API. My auth token for the API is stored in a JS file, and I want to keep it obscured from public users, but NOT from my application.
At the moment, it looks like S3 buckets (and all of their files) have to be publicly accessible to everyone, but I want to mask my config file. Is this possible, and if so, what is the best way to do it?
Thanks!
Amazon provides a service called Lambda, a serverless computing platform. Your use case can be solved using it.
You can write an auth function in Lambda where you can place the API auth token.
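A minimal sketch of that idea in Node.js, assuming the token is stored in a Lambda environment variable named API_TOKEN and the function is exposed to the browser (e.g. through API Gateway); the API hostname is a placeholder:

```js
// Hypothetical Lambda proxy: the browser calls this function instead of the
// third-party API directly, so the token never ships to the client.
const https = require('https');

exports.handler = async (event) => {
  const token = process.env.API_TOKEN; // lives in Lambda config, not in public JS

  const body = await new Promise((resolve, reject) => {
    https.get({
      hostname: 'api.example.com',            // placeholder third-party API
      path: event.path || '/data',
      headers: { Authorization: `Bearer ${token}` },
    }, (res) => {
      let data = '';
      res.on('data', (chunk) => (data += chunk));
      res.on('end', () => resolve(data));
    }).on('error', reject);
  });

  return { statusCode: 200, body };
};
```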
You are not really going to be able to completely hide your token, no matter how you mask it; ultimately your browser is issuing an API call and passing along the credentials, which anyone who cares to look can see.
What you want to do is use something like AWS Cognito to generate temporary, restricted tokens for each user, even anonymous users.
Cognito Identity supports the creation and token vending process for
unauthenticated users as well as authenticated users. This removes the
friction of an additional login screen in your app, but still enables
you to use temporary, limited privilege credentials to access AWS
resources.
https://aws.amazon.com/cognito/faqs/
If you do this, someone can still see the token being used, but it is time and permission limited - not the keys to the kingdom, so they can't do much with it.
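For example, with the AWS SDK for JavaScript (v2) in the browser, obtaining those temporary credentials for an anonymous user might look like this; the region and identity pool ID are placeholders:

```js
// Browser-side sketch: exchange an (optionally anonymous) Cognito identity
// for temporary, scoped AWS credentials.
AWS.config.region = 'us-east-1'; // placeholder region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000', // placeholder
});

// Refresh to actually fetch the temporary credentials before calling AWS.
AWS.config.credentials.get((err) => {
  if (err) return console.error(err);
  // These credentials are time- and permission-limited by the pool's IAM role.
  console.log('Temporary access key:', AWS.config.credentials.accessKeyId);
});
```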
Related
I'm contributing to developing a web (front+back) application, which uses OpenID Connect (with auth0) for authentication & authorization.
The web app needs authentication to access some public & some restricted information (restrictions are per-user or depend on certain group-related rules).
We want to provide upload/download features for documents such as .pdf, and we have implemented minIO (pretty similar to AWS S3) for public documents.
However, we can't wrap our heads around restricted-access files:
should we implement OIDC on minIO for users to access the buckets directly with temporary access tokens, allowing for a fine-grained authorization policy,
or should the back-office be the only one to have keys to minIO and act as the intermediary between the object storage and users?
Looking for good practices here, thanks in advance for your help.
Interesting question, since PDF docs are web static content unless they contain sensitive data. I would aim to separate secured (API) and non-secured (web) concerns on this one.
UNSECURED RESOURCES
If there is no security involved, connecting to a bucket from the front end makes sense. The bucket contents can also be distributed to a content delivery network, for best global performance. The PDF can be considered a web resource.
SECURED RESOURCES
If a PDF doc contains sensitive data, requests for it need to be treated as API requests. APIs should receive an access token and enforce access to documents via scopes and claims.
You might use a Documents API for this. The implementation might still connect to a bucket, but this might be a different bucket that the browser does not have access to.
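A rough sketch of such a Documents API in Node/Express, under the assumption that an OIDC middleware validates the access token and that a MinIO client with server-only keys fronts the restricted bucket (all names here are hypothetical):

```js
// Hypothetical Documents API endpoint: enforce scopes/claims from the access
// token, then stream the PDF from a bucket the browser cannot reach directly.
const express = require('express');
const Minio = require('minio');

const app = express();

// Server-only MinIO client; these keys never reach the front end.
const privateBucket = new Minio.Client({
  endPoint: 'minio.internal',               // placeholder private endpoint
  accessKey: process.env.MINIO_ACCESS_KEY,
  secretKey: process.env.MINIO_SECRET_KEY,
});

// Placeholder middleware: a real one would validate the JWT and set req.user.
const verifyToken = (req, res, next) => {
  req.user = { scopes: ['documents:read'] };
  next();
};

app.get('/secureDocs/:id', verifyToken, async (req, res) => {
  // Example claim check: require a scope before serving the document.
  if (!req.user.scopes.includes('documents:read')) {
    return res.status(403).json({ error: 'insufficient_scope' });
  }
  const stream = await privateBucket.getObject('secure-docs', req.params.id);
  res.setHeader('Content-Type', 'application/pdf');
  stream.pipe(res);
});
```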
SUMMARY
This type of solution is often clearer if you think in terms of URL design. E.g. the front end might have two document URLs:
publicDocs
secureDocs
By default I would treat docs that users upload as secure, unless they select an upload option such as make public.
I have a mobile app which authenticates users on my server. I'd like to store images of authenticated users in a Google Cloud Storage bucket, but I'd like to avoid uploading images via my server to the Google bucket; they should be uploaded (or downloaded) directly from the bucket.
(I also don't want to display another Google login to users to grant access to their bucket)
So my best-case scenario would be that when a user authenticates to my server, my server also generates a short-lived access token for a specific Google Storage bucket with read and write access.
I know that service accounts can generate accessTokens, but I couldn't find any documentation on whether it is good practice to pass these access tokens from the server to the client app, or whether it is possible to limit the scope of the access token to a specific bucket.
I found the authorization documentation quite confusing, so I'm asking here: what would be the best-practice approach to get access to Cloud Storage in my case?
I think you are looking for signed urls.
A signed URL is a URL that provides limited permission and time to
make a request. Signed URLs contain authentication information in
their query string, allowing users without credentials to perform
specific actions on a resource.
Here you can see more about them in GCP. Here you have an explanation of how you can adapt them for your program.
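For instance, with the official @google-cloud/storage Node.js client, the server could mint a short-lived upload URL after authenticating the user; the bucket and object names are placeholders:

```js
// Server-side sketch: generate a V4 signed URL that lets the mobile app
// upload one object directly to the bucket for the next 15 minutes.
const { Storage } = require('@google-cloud/storage');

// The service account key stays on the server; only the signed URL
// is handed to the client.
const storage = new Storage({ keyFilename: 'service-account.json' });

async function getUploadUrl(userId) {
  const [url] = await storage
    .bucket('my-user-images')                // placeholder bucket name
    .file(`avatars/${userId}.png`)           // URL is scoped to this one object
    .getSignedUrl({
      version: 'v4',
      action: 'write',
      expires: Date.now() + 15 * 60 * 1000,  // 15 minutes
      contentType: 'image/png',
    });
  return url; // the client PUTs the image bytes to this URL
}
```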
Suppose I have a simple Node backend application which, when run, needs to connect to a specific GSuite instance, query some things (users, groups, etc.), and then close and not run again until needed, which can mean either a very long time or a few seconds. From what I gathered from Google's documentation there may be multiple ways of doing this, including having an OAuth client and following the whole flow of setting it up, managing token lifecycle, etc.
However, I do NOT want to go with this option for now, for various reasons, and I am wondering if there is any way of getting access by means of an API key/secret, like many other 3rd-party services allow nowadays. Simply put, I would like to generate a key pair somewhere on GSuite, no idea where, and use those keys for auth instead of OAuth, something Google suggests is possible, both on the GSuite Admin app (with a broken link that leads nowhere - not surprising) and on the GCloud API and Credentials subpage where you set up credentials (however, there it says that API keys can only be used for very limited resources, none of which have anything to do with GSuite).
I think your best option is to see if what you want to do can be done by a service account. You can create a service account, grant administrator privileges to it in GSuite, enable some APIs, and then that account can do a lot of things without using OAuth directly. The credentials for the service account can then be provided to your application as a JSON key file, which it can use to authenticate to GSuite. You can also grant service accounts permissions to specific objects like files in Drive, but it doesn't sound like that would be sufficient for your needs.
A guide that may be helpful in the details of how to do this is https://m.fin.com/2017/10/04/navigating-the-google-suite-directory-api/
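As a sketch with the googleapis Node.js client, assuming the service account's JSON key file and domain-wide delegation to an admin user (the admin address is a placeholder):

```js
// Sketch: authenticate as a service account with domain-wide delegation and
// query the GSuite Directory API -- no interactive OAuth flow involved.
const { google } = require('googleapis');

const auth = new google.auth.JWT({
  keyFile: 'service-account.json',  // the JSON key file mentioned above
  scopes: ['https://www.googleapis.com/auth/admin.directory.user.readonly'],
  subject: 'admin@yourdomain.com',  // placeholder: admin user to impersonate
});

async function listUsers() {
  const admin = google.admin({ version: 'directory_v1', auth });
  const res = await admin.users.list({
    customer: 'my_customer',        // shorthand for the account's own domain
    maxResults: 10,
  });
  return res.data.users;
}
```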
In development, we have successfully written an image to an S3 bucket and then retrieved the URL so we can store it.
Now that we're moving into production, we need to not include the access and secret keys.
Everything is saying to use Cognito, but we don't want to authenticate users. We just want images that are stored in the app to be backed up online, and to store the URL. Every user can dump images into the same bucket because they will never access the images directly, just download them via URL.
Does anyone know if there is an invisible way to establish this connection securely, to only read and write from the app, without forcing users to log in?
You may want to check out the Cognito Identity service. Cognito Identity allows developers to get temporary credentials to call other AWS services, so developers need not put access and secret keys within the application. They can simply use the credentials provided by Cognito.
With Cognito, developers can configure whether they wish users to be authenticated or not; with Cognito Identity, authentication is optional. If the user is not authenticated, they will be given a new identityId every time; for authenticated users, the identityId remains the same. Either way, it can easily be used to get temporary credentials.
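Building on that, an upload from the app with temporary credentials from an unauthenticated identity might look like this with the AWS SDK for JavaScript (v2); the region, identity pool ID, and bucket name are placeholders:

```js
// Sketch: upload an image to S3 using temporary credentials from an
// unauthenticated Cognito identity -- no access/secret keys ship in the app.
AWS.config.region = 'us-east-1'; // placeholder
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
  IdentityPoolId: 'us-east-1:00000000-0000-0000-0000-000000000000', // placeholder
});

const s3 = new AWS.S3({ params: { Bucket: 'my-image-backups' } }); // placeholder

function backupImage(key, blob) {
  // The identity pool's unauthenticated IAM role should allow only
  // s3:PutObject (and s3:GetObject if downloads also go through the SDK).
  return s3.upload({ Key: key, Body: blob, ContentType: 'image/jpeg' }).promise();
}
```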
I have been studying the documentation for the Dropbox API but I couldn't find a way to directly access an account without going to the OAuth process. Is there a way to achieve that?
My final goal is to have a webpage with a list of files and folders from a specific Dropbox account (my own), which can be viewed and downloaded by anyone.
To access a user's Dropbox account via the API, your app will need to be authorized by the user. The Dropbox API currently requires that this authorization be done via the OAuth flow. You only need to perform this step once per user though, as you can store and reuse the access token for each user.
It sounds like you intend to use only one account though (your own), so you can just process this flow once manually yourself, and save and reuse the access token programmatically.
A new answer, 9 years later, so things have probably changed.
When you create an app in Dropbox, the settings page has a button to Generate Access Token. This will create a permanent token to access your own account without going through the OAuth flow.
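With such a generated token, listing your account's files via the official Dropbox JavaScript SDK might look like this; the token value is a placeholder:

```js
// Sketch: use a generated access token for your own account with the
// official Dropbox SDK -- no OAuth redirect flow needed.
const { Dropbox } = require('dropbox');

const dbx = new Dropbox({ accessToken: 'GENERATED_ACCESS_TOKEN' }); // placeholder

async function listRootFolder() {
  const res = await dbx.filesListFolder({ path: '' }); // '' is the app root
  return res.result.entries.map((e) => e.name);
}
```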