Expose an expiring URL for a compressed file - amazon-s3

The requirement is:
A technical user creates a DB backup from PostgreSQL (pg_dump).
The technical user uploads the file to a bucket in the closest AWS region.
The technical user gets a URL that expires after one week.
The technical user sends the URL to 2-4 people with little IT knowledge: the non-technical users.
A non-technical user downloads the file via the temporary URL and places it at the local path that is bind-mounted into a Docker container.
Constraints:
The AWS technical user doesn't have permission to generate IAM access keys or secret keys.
AWS S3 must be used, as the organization uses AWS and the strategic goal is to have everything centralized in AWS infrastructure.
I am following this documentation about presigned object URLs.
What do you suggest?

I suggest creating an IAM user and consuming its credentials from a small server-side application. AWS already provides SDKs to connect from any programming language. Personally I use Symfony, and there are bundles to connect to S3 directly. From my perspective, I recommend creating a simple interface to upload the backup and granting access to people with roles according to your needs.
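As a rough illustration (not the only way to do it), here is a minimal sketch of how that server-side piece could hand out the expiring link, in Python with boto3; the same operation exists in the AWS SDK for PHP if you stay in Symfony. The region, bucket, and key names are assumptions. Note that a presigned URL signed with Signature Version 4 can live at most 7 days, which happens to match the weekly expiry in the requirement.

    # Minimal sketch: generate a presigned download URL for the uploaded backup.
    # Bucket, key, and region are assumptions; adjust to your setup.
    import boto3

    s3 = boto3.client("s3", region_name="eu-west-1")  # assumed region

    url = s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "my-backup-bucket", "Key": "backups/db.dump.gz"},
        ExpiresIn=7 * 24 * 3600,  # one week, the maximum allowed for SigV4
    )
    print(url)  # send this link to the non-technical users

The non-technical users only need the link itself; no AWS credentials or console access are involved on their side.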

Related

How to restrict public user access to s3 buckets or minIO?

I have a question about MinIO/S3 policies. I am using a standalone MinIO server for my project. Here is the situation:
There is only one admin account that receives files and uploads them to the MinIO server.
My users need to access only their own uploaded objects; one user is not supposed to see another user's objects publicly (e.g. by visiting a direct URL).
Admin users are allowed to see all objects under any circumstances.
1. How can I implement such policies for my project, considering I have my own database for user authentication, and how can I combine them to authenticate the user?
2. If not, what other options do I have to ease the process?
Communicate with your storage through the application. Do policy checks, authentication, and authorization in the app, store/fetch files to/from storage, and return the proper response. I guess this is the only way you can restrict uploading/downloading of files with MinIO.
If you're using a framework like Laravel, its built-in S3 driver works perfectly with MinIO; otherwise it's just a matter of an HTTP call, since MinIO provides S3-compatible HTTP APIs.
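As a sketch of that proxy approach, assuming Python with Flask and boto3 pointed at MinIO's S3-compatible endpoint; the endpoint, credentials, bucket name, and the current_user/owns helpers are placeholders for your own authentication database and ownership rules:

    import boto3
    from flask import Flask, Response, abort

    app = Flask(__name__)
    s3 = boto3.client(
        "s3",
        endpoint_url="http://minio.local:9000",   # assumed MinIO endpoint
        aws_access_key_id="MINIO_ACCESS_KEY",
        aws_secret_access_key="MINIO_SECRET_KEY",
    )

    def current_user():
        # Placeholder: replace with your real session/auth lookup
        return {"id": "alice", "is_admin": False}

    def owns(user, key):
        # Placeholder ownership rule: objects are stored under "<user id>/..."
        return key.startswith(user["id"] + "/")

    @app.route("/files/<path:key>")
    def download(key):
        user = current_user()
        if not (user["is_admin"] or owns(user, key)):
            abort(403)                             # only owners and admins pass
        obj = s3.get_object(Bucket="uploads", Key=key)
        return Response(obj["Body"].read(), mimetype=obj["ContentType"])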

How to get short lived access to specific Google Cloud Storage bucket from client mobile app?

I have a mobile app which authenticates users against my server. I'd like to store images of authenticated users in a Google Cloud Storage bucket, but I'd like to avoid uploading images via my server to the bucket; they should be uploaded to (or downloaded from) the bucket directly.
(I also don't want to show users another Google login to grant access to their bucket.)
So my best-case scenario would be that when a user authenticates to my server, my server also generates a short-lived access token for a specific Google Cloud Storage bucket with read and write access.
I know that service accounts can generate access tokens, but I couldn't find any documentation on whether it is good practice to pass these access tokens from the server to the client app, and whether it is possible to limit the scope of the access token to a specific bucket.
I found the authorization documentation quite confusing, so I'm asking here: what would be the best-practice approach to grant access to Cloud Storage in my case?
I think you are looking for signed URLs.
A signed URL is a URL that provides limited permission and time to
make a request. Signed URLs contain authentication information in
their query string, allowing users without credentials to perform
specific actions on a resource.
Here you can see more about them in GCP. Here you have an explanation of how you can adapt them for your program.
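For illustration, a minimal sketch of the server-side piece, assuming Python with the google-cloud-storage library and a service-account key file; the bucket and object names are made up. The mobile app then uploads directly to the returned URL, so the image bytes never pass through your server:

    from datetime import timedelta
    from google.cloud import storage

    client = storage.Client.from_service_account_json("service-account.json")
    blob = client.bucket("user-images-bucket").blob("users/42/avatar.png")

    # Signed URL the client app can PUT the image to for the next 15 minutes,
    # without any Google login on the user's side.
    upload_url = blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=15),
        method="PUT",
        content_type="image/png",
    )
    print(upload_url)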

Amazon S3 API OAuth-style access to 3-rd party buckets

I'm a newbie to AWS infrastructure, and I can't figure out how to build the auth process I want.
I want something similar to what other cloud storage providers, like Box, Dropbox, and OneDrive, have:
the developer registers an OAuth app with a set of permissions
the client can, with one click, give consent for this app to have the listed permissions on his own account and its content, indefinitely, until the consent is deliberately withdrawn
Now, as far as I understand, the client would have to go to the console, create a user, create a role for it, and then send this user's ID and key to my app, which is not that convenient. I'm looking for the easiest and simplest way to do that.
I've tested "Login with Amazon" + "Amazon Cognito", but it turned out to be the completely opposite mechanism: the client has to set up Login with Amazon and link it to Cognito to give me one-click access.
So, is it even possible? What is the best way to implement such an auth process?
There isn't a way to do what you're trying to do, and I would suggest that there's a conceptual problem with comparing Amazon S3 to Dropbox, Box, or Onedrive -- it's not the same kind of service.
S3 is a service that you could use to build a service like those others (among other purposes, of course).
Amazon Simple Storage Service (Amazon S3), provides developers and IT teams with secure, durable, highly-scalable cloud storage.
https://aws.amazon.com/s3/
Note the target audience -- "developers and IT teams" -- not end-users.
Contrast that with Amazon Cloud Drive, another service from Amazon -- but not part of AWS.
The Amazon Cloud Drive API and SDKs for Android and iOS enable your users to access the photos, videos, and documents that they have saved in the Amazon Cloud Drive, and provides you the ability to interact with millions of Amazon customers. Access to the free Amazon Cloud Drive API and SDKs for Android and iOS enable you to place your own creative spin on how users upload, view, edit, download, and organize their digital content using your app.
https://developer.amazon.com/public/apis/experience/cloud-drive/
The only way for your app to access a user's bucket would be for the user to configure and provide your app with a key and secret, or to configure their bucket policy to allow the operation by your app's credentials, or to create an IAM role and allow your app to assume it on their behalf, or something similar within the authentication and authorization mechanisms in AWS... none of which sound like a good idea.
There's no OAuth mechanism for allowing access to resources in an AWS account.
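For completeness, here is a minimal sketch of the cross-account role option mentioned above (again, not recommended as a product flow), assuming Python with boto3; the role ARN and bucket name are made-up examples, and the user would first have to create that role and trust your AWS account:

    import boto3

    # Assume a role the user created in their own account (made-up ARN).
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/ThirdPartyAppAccess",
        RoleSessionName="my-app",
    )["Credentials"]

    # Use the temporary credentials to act on their bucket.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(s3.list_objects_v2(Bucket="users-bucket").get("Contents", []))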

When using S3 in AWS, how do you manage access to specific images?

I am developing an image server using S3 in AWS (Amazon Web Services), but I need to solve an access-management issue.
What I mean is that an end user should only be able to access specific images in S3.
For that, I am thinking about IAM (Identity and Access Management) to allow some users to access specific images.
What I want to know is whether there are other solutions or not.
Actually, I have found Cognito, but unfortunately Cognito is supported in only 2 regions...
If you have a good idea, please give me an explanation. Thank you.
Unfortunately there is nothing in the suite of AWS services that fits your use case 100%.
While Amazon Cognito is only available in 2 regions, this does not restrict you to accessing S3 from only those 2 regions with credentials vended by the service. You could use Amazon Cognito and IAM roles to define a policy that would allow limited permissions to a set of files based on their prefix (illustrated in the sketch below). However, at the current time, role policies would allow you to restrict access to 2 classes of files:
"Public files" - files accessible via all identities in your pool.
"Private files" - files accessible only to a specific identity in your pool.
If you wanted to restrict access to specific files for specific users in your application, you would need to handle this through a backend application that proxies access to the files in S3.
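To make the "private files" pattern above concrete, here is a minimal sketch of what such a role policy could look like, attached with boto3; the bucket, role, and policy names are made-up examples, and AWS substitutes the Cognito identity variable at request time:

    import json
    import boto3

    # Each authenticated Cognito identity may only touch objects under its own prefix.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::my-image-bucket/private/${cognito-identity.amazonaws.com:sub}/*"
            ],
        }],
    }

    iam = boto3.client("iam")
    iam.put_role_policy(
        RoleName="CognitoAuthenticatedRole",
        PolicyName="PerIdentityPrefixAccess",
        PolicyDocument=json.dumps(policy),
    )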

Google Cloud Storage: How can I grant an installed application access to only one bucket?

I'm developing an application that manipulates data in Google Cloud Storage
buckets owned by the user. I would like to set it up so the user can arrange to
grant the application access to only one of his or her buckets, for the sake of
compartmentalization of damage if the app somehow runs amok (or it is
impersonated by a bad actor or whatever).
But I'm more than a bit confused by the documentation around GCS authorization.
The docs on OAuth 2.0 authentication show that there are only three
choices for scopes: read-only, read-write, and full-control. Does this
mean that what I want is impossible, and if I grant access to read/write one
bucket I'm granting access to read/write all of my buckets?
What is extra confusing to me is that I don't understand how this all plays in
with GCS's notion of projects. It seems like I have to create a project to get
a client ID for my app, and the N users also have to create N projects for
their buckets. But then it doesn't seem to matter -- the client ID from project
A can access the buckets from project B. What are project IDs actually for?
So my questions, in summary:
Can I have my installed app request an access token that is good for only a
single bucket?
If not, are there any other ways that developers and/or careful users
typically limit access?
If I can't do this, it means the access token has serious security
implications. But I don't want to have to ask the user to go generate a new one
every time they run the app. What is the typical story for caching the token?
What exactly are project IDs for? Are they relevant to authorization in any
way?
I apologize for the scatter-brained question; it reflects what appears to be
scatter-brained documentation to me. (Or at least documentation that isn't
geared toward the installed application use case.)
I had the same problem as you.
Go to: https://console.developers.google.com
Go to Credentials and create a new Client ID.
You have to delete the email* in the "permissions" of your project.
And add it manually in the ACL of your bucket.
*= the email of the Service Account, xxxxxxxxxxxx-xxxxxxxxx@developer.gserviceaccount.com
If you are building an app, it's server-to-server OAuth.
https://developers.google.com/accounts/docs/OAuth2ServiceAccount
"Can you be clearer about which project I create the client ID on (the developer's project that owns the installed application, or the user's project that owns the bucket)?"
The user's project that owns the bucket.
It's the user that owns the bucket who grants access.
It turns out I'm using the wrong OAuth flow if I want to do this. Thanks to Euca
for the inspiration to figure this out.
At the time I asked the question, I was assuming there were multiple projects
involved in the Google Developers Console:
One project for me, the developer, that contained generated credentials for
an "installed application", with the client ID and (supposed) secret baked into
my source code.
One project for each of my users, owning and being billed for a bucket that
they were using the application to access.
Instead of using "installed application" credentials, what I did was switch to
"service account" credentials, generated by the user in the project that owns
their bucket. That allows them to create and download a JSON key file that they
can feed to my application, which then uses the JSON Web Tokens flow of OAuth
2.0 (aka "two-legged OAuth") to obtain authorization. The benefits of this are:
There is no longer a need for me to have my own project, which was a weird
wart in the process.
By default, the service account credentials allow my application to access
only the buckets owned by the project for which they were generated. If the
user has other projects with other buckets, the app cannot access them.
But, the service account has an "email address" just like any other user, and
can be added to the ACLs for any bucket regardless of project, granting
access to that bucket.
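For reference, a minimal sketch of that flow, assuming Python with the google-cloud-storage library; the key file and bucket names are made up, and the JSON key is the one the user generated in the project that owns the bucket:

    from google.cloud import storage

    # The user-provided service-account key grants access only to buckets in
    # (or explicitly shared with) the project where it was generated.
    client = storage.Client.from_service_account_json("user-provided-key.json")
    bucket = client.bucket("users-project-bucket")

    for blob in bucket.list_blobs():
        print(blob.name)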
About your answer.
Glad you solved your problem.
You can also reduce the access to only ONE bucket of the project, for example if you have several buckets and the application does not need access to all of them.
By default, the service account has FULL access (read, write, and ACL) to all of the project's buckets. I usually limit it to the needed bucket.
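For example, a minimal sketch of granting the service account access to just the needed bucket via its ACL, assuming the google-cloud-storage library and made-up names; this would be run with the bucket owner's own credentials after removing the project-level permission as described above:

    from google.cloud import storage

    client = storage.Client()  # the bucket owner's own credentials
    bucket = client.bucket("the-one-needed-bucket")

    # Grant the application's service account read access on just this bucket.
    acl = bucket.acl
    acl.user("my-app@my-project.iam.gserviceaccount.com").grant_read()
    acl.save()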