How to restrict public user access to S3 buckets or MinIO? - amazon-s3

I have a question about MinIO/S3 policies. I am using a standalone MinIO server for my project. Here is the situation:
There is only one admin account, which receives files and uploads them to the MinIO server.
Users need to access just their own uploaded objects; one user is not supposed to see other people's objects publicly (e.g. by visiting a direct link).
Admin users are allowed to see all objects under any circumstances.
1. How can I implement such policies for my project, given that I have my own database for user authentication, and how can I combine the two to authenticate users?
2. If that is not possible, what other options do I have to ease the process?

Communicate with your storage through the application: do policy checks, authentication, and authorization in the app, store and fetch files to and from storage there, and build the proper response. This is essentially the only way to enforce per-user limits on uploading/downloading files with MinIO.
If you're using a framework like Laravel, the built-in S3 driver works perfectly with MinIO; otherwise it's just a matter of an HTTP call, since MinIO exposes an S3-compatible HTTP API.
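A minimal sketch of that proxy pattern in Python, assuming a Flask app, the minio client library, and hypothetical helpers current_user() and user_owns() backed by your own user database:

```python
from datetime import timedelta

from flask import Flask, abort
from minio import Minio  # pip install minio

app = Flask(__name__)

# Credentials of the single admin account; users never see these.
store = Minio("minio.example.com:9000",
              access_key="ADMIN_KEY", secret_key="ADMIN_SECRET",
              secure=True)

@app.route("/files/<path:key>")
def download(key):
    user = current_user()  # hypothetical: your own session/auth check
    if user is None:
        abort(401)
    if not (user.is_admin or user_owns(user, key)):  # hypothetical DB lookup
        abort(403)
    # Hand back a short-lived presigned URL scoped to this one object,
    # instead of streaming the bytes through the app server.
    url = store.presigned_get_object("uploads", key,
                                     expires=timedelta(minutes=10))
    return {"url": url}
```

The bucket itself stays private; a user only ever receives a URL for an object your database says is theirs, and that URL stops working after ten minutes.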

Related

Using object storage with authorization for a web app

I'm contributing to the development of a web (front+back) application which uses OpenID Connect (with auth0) for authentication & authorization.
The web app needs authentication to access some public & some restricted information (restrictions are per-user or depend on certain group-related rules).
We want to provide upload/download features for documents such as .pdf, and we have implemented MinIO (pretty similar to AWS S3) for public documents.
However, we can't wrap our heads around restricted-access files:
should we implement OIDC on MinIO so that users access the buckets directly but with temporary access tokens, allowing for a fine-grained authorization policy,
or should the back office be the only one to hold keys to MinIO and act as the intermediary between the object storage and the users?
Looking for good practices here; thanks in advance for your help.
Interesting question, since PDF docs are web static content unless they contain sensitive data. I would aim to separate secured (API) and non-secured (web) concerns on this one.
UNSECURED RESOURCES
If there is no security involved, connecting to a bucket from the front end makes sense. The bucket contents can also be distributed to a content delivery network, for best global performance. The PDF can be considered a web resource.
SECURED RESOURCES
Requests for these need to be treated as API requests if a PDF doc contains sensitive data. APIs should receive an access token and enforce access to documents via scopes and claims.
You might use a Documents API for this. The implementation might still connect to a bucket, but this might be a different bucket that the browser does not have access to.
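A rough sketch of such a Documents API endpoint in Python, assuming RS256 JWTs from your auth0 tenant, the PyJWT and minio libraries, and a hypothetical docs-secure bucket that the browser has no credentials for:

```python
import jwt  # pip install PyJWT
from flask import Flask, abort, request, send_file
from minio import Minio  # pip install minio

app = Flask(__name__)
store = Minio("minio.internal:9000", access_key="API_KEY",
              secret_key="API_SECRET", secure=True)

# Assumption: your auth0 tenant's signing key, exported to a local file.
PUBLIC_KEY = open("auth0_public.pem").read()

def claims_from(req):
    """Validate the bearer token and return its claims, or fail the request."""
    token = req.headers.get("Authorization", "").removeprefix("Bearer ")
    try:
        return jwt.decode(token, PUBLIC_KEY, algorithms=["RS256"],
                          audience="documents-api")  # hypothetical audience
    except jwt.PyJWTError:
        abort(401)

@app.route("/secureDocs/<path:name>")
def secure_doc(name):
    claims = claims_from(request)
    if "read:docs" not in claims.get("scope", "").split():  # hypothetical scope
        abort(403)
    obj = store.get_object("docs-secure", name)  # file-like HTTP response
    return send_file(obj, download_name=name, mimetype="application/pdf")
```

The front end then talks only to the public bucket (or a CDN in front of it) for open documents, and to this API for secure ones; only the API process holds keys for the secure bucket.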
SUMMARY
This type of solution is often clearer if you think in terms of URL design. E.g. the front end might have two document URLs:
publicDocs
secureDocs
By default I would treat docs that users upload as secure, unless they select an upload option such as "make public".

How to get short lived access to specific Google Cloud Storage bucket from client mobile app?

I have a mobile app which authenticates users on my server. I'd like to store images of authenticated users in a Google Cloud Storage bucket, but I'd like to avoid uploading images to the bucket via my server; they should be uploaded (or downloaded) directly from the bucket.
(I also don't want to show users another Google login to grant access to their bucket.)
So my best-case scenario would be that when a user authenticates to my server, my server also generates a short-lived access token for a specific Google Storage bucket with read and write access.
I know that service accounts can generate access tokens, but I couldn't find any documentation on whether it is good practice to pass these access tokens from the server to the client app, or whether it is possible to limit the scope of the token to a specific bucket.
I found the authorization documentation quite confusing, so I'm asking here: what would be the best-practice approach to grant access to Cloud Storage in my case?
I think you are looking for signed URLs.
A signed URL is a URL that provides limited permission and time to make a request. Signed URLs contain authentication information in their query string, allowing users without credentials to perform specific actions on a resource.
Here you can see more about them in GCP, and here is an explanation of how you can adapt them to your program.
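A minimal server-side sketch with the google-cloud-storage Python client, assuming a service account credential on the server and a hypothetical user-images bucket; V4 signing caps each URL's lifetime at 7 days:

```python
from datetime import timedelta

from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()              # picks up the service account credentials
bucket = client.bucket("user-images")  # hypothetical bucket name

def upload_url(user_id: str, filename: str) -> str:
    """Short-lived URL the mobile app can PUT an image to directly."""
    blob = bucket.blob(f"{user_id}/{filename}")
    return blob.generate_signed_url(version="v4",
                                    expiration=timedelta(minutes=15),
                                    method="PUT",
                                    content_type="image/jpeg")

def download_url(user_id: str, filename: str) -> str:
    """Short-lived URL for reading the same object back."""
    blob = bucket.blob(f"{user_id}/{filename}")
    return blob.generate_signed_url(version="v4",
                                    expiration=timedelta(minutes=15),
                                    method="GET")
```

Your server hands one URL per object to the client after its own authentication check, so the app never holds Google credentials and no second Google login is shown.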

Expose expiring URL with compressed file

The requirements are:
A technical user creates a DB backup from PostgreSQL (pg_dump).
The technical user uploads the file to a bucket in the closest AWS region.
The technical user gets a URL that should expire every week.
The technical user sends the URL to 2-4 people with little IT knowledge: the non-technical users.
A non-technical user downloads the file via the temporary URL and drops it into a Docker container bind-mount location.
Constraints:
The AWS technical user doesn't have permission to generate IAM access keys or secret keys.
AWS S3 must be used, as the organization uses AWS and the strategic goal is to centralize everything in AWS infrastructure.
I am following this documentation about presigned object URLs.
What do you suggest?
I suggest creating an IAM user and consuming its credentials from a small server-side application. AWS already provides SDKs for virtually any programming language. Personally I use Symfony, which has bundles to connect to S3 directly. From my perspective, I recommend building a simple interface to upload the backup and grant access to people with roles according to your needs.
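For the expiring link itself, a sketch with boto3, assuming a hypothetical db-backups bucket and an IAM user whose credentials live in that small server-side app; note that SigV4 presigned URLs max out at 7 days (604800 seconds), which happens to match the weekly-expiry requirement exactly:

```python
import boto3  # pip install boto3

# Credentials belong to the app's IAM user (from the environment or config),
# not to the technical user, who cannot create keys.
s3 = boto3.client("s3", region_name="eu-west-1")  # "closest region": assumption

def weekly_backup_link(key: str) -> str:
    """Presigned GET URL that stops working after 7 days."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "db-backups", "Key": key},
        ExpiresIn=7 * 24 * 3600,  # 604800 s, the SigV4 maximum
    )

# e.g. mail this to the non-technical users:
print(weekly_backup_link("dumps/2024-05-01.sql.gz"))
```

The non-technical users only ever see a plain HTTPS link they can open in a browser; no AWS credentials or console access are involved on their side.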

Amazon S3 API OAuth-style access to third-party buckets

I'm a newbie to AWS infrastructure, and I can't figure out how to build the auth process I want.
I want something similar to what other cloud storage services, like Box, Dropbox, and OneDrive, have:
a developer registers an OAuth app with a set of permissions
a client can, with one click, give consent for this app to have the listed permissions on his own account and its content, eternally, until the consent is deliberately withdrawn
Now, as far as I understand, the client would have to go to the console, create a user, create a role for them, and then send that user's ID and key to my app, which is not that convenient. I'm looking for the easiest and simplest way to do this.
I've tested "Login with Amazon" + "Amazon Cognito", but it turned out to be the opposite mechanism: the client has to set up Login and link it to Cognito in order to give me one-click access.
So, is it even possible? What is the best way to implement such an auth process?
There isn't a way to do what you're trying to do, and I would suggest that there's a conceptual problem with comparing Amazon S3 to Dropbox, Box, or Onedrive -- it's not the same kind of service.
S3 is a service that you could use to build a service like those others (among other purposes, of course).
Amazon Simple Storage Service (Amazon S3), provides developers and IT teams with secure, durable, highly-scalable cloud storage.
https://aws.amazon.com/s3/
Note the target audience -- "developers and IT teams" -- not end-users.
Contrast that with Amazon Cloud Drive, another service from Amazon -- but not part of AWS.
The Amazon Cloud Drive API and SDKs for Android and iOS enable your users to access the photos, videos, and documents that they have saved in the Amazon Cloud Drive, and provides you the ability to interact with millions of Amazon customers. Access to the free Amazon Cloud Drive API and SDKs for Android and iOS enable you to place your own creative spin on how users upload, view, edit, download, and organize their digital content using your app.
https://developer.amazon.com/public/apis/experience/cloud-drive/
The only ways for your app to access a user's bucket would be for the user to configure and provide your app with a key and secret, to configure their bucket policy to allow the operation by your app's credentials, or to create an IAM role and allow your app to assume it on their behalf (or something similar within AWS's authentication and authorization mechanisms)... none of which sounds like a good idea.
There's no OAuth mechanism for allowing access to resources in an AWS account.
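For completeness, the closest AWS-native analogue to an OAuth consent is the cross-account IAM role mentioned above: the bucket owner creates a role that trusts your app's AWS account, and your app assumes it on their behalf. A sketch with boto3, where the role ARN and external ID are values the client would have to set up and hand to you (all names hypothetical):

```python
import boto3  # pip install boto3

def client_s3(role_arn: str, external_id: str):
    """Assume the client's role and return an S3 client scoped to it."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=role_arn,                # e.g. arn:aws:iam::111122223333:role/MyAppAccess
        RoleSessionName="my-app-session",
        ExternalId=external_id,          # guards against the confused-deputy problem
    )["Credentials"]
    return boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

s3 = client_s3("arn:aws:iam::111122223333:role/MyAppAccess", "client-42")
print(s3.list_objects_v2(Bucket="client-bucket").get("KeyCount"))
```

As the answer says, this still requires console work on the client's side; there is no one-click consent screen.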

When using S3 in AWS, how do you manage access to specific images?

I am developing an image server using S3 on AWS (Amazon Web Services), but I need to solve a management issue.
What I mean is that end users should only be able to access specific images in S3.
For that, I am thinking about IAM (Identity and Access Management) to allow some users to access specific images.
What I want to know is whether there are other solutions.
Actually, I have found Cognito, but unfortunately Cognito is supported in only 2 regions...
If you have a good idea, please give me an explanation. Thank you.
Unfortunately there is nothing in the suite of AWS services that fits your use case 100%.
While Amazon Cognito is only available in 2 regions, this does not restrict you to accessing S3 from only those 2 regions with credentials vended by the service. You could use Amazon Cognito and IAM roles to define a policy that allows limited permissions to a set of files based on the key prefix. However, at the current time, role policies would only let you restrict access to 2 classes of files (a sketch of such a policy follows the list):
"Public files" - files accessible via all identities in your pool.
"Private files" - files accessible only to a specific identity in your pool.
If you wanted to restrict access to specific files for specific users in your application, you would need to handle this through a backend application that proxies access to the files in S3.