How to restrict Amazon S3 API access? - amazon-s3

Is there a way to create a different identity (access key / secret key) to access Amazon S3 buckets via the REST API where I can restrict access (read only, for example)?

The recommended way is to use IAM to create a new user, then apply a policy to that user.
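For example, a rough boto3 sketch of creating such a read-only identity (the user name, policy name, and bucket name below are placeholders, not anything from the question):

```python
# Sketch: create an IAM user limited to read-only access on one bucket.
# "my-readonly-user" and "example-bucket" are hypothetical names.
import json
import boto3

iam = boto3.client("iam")

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

iam.create_user(UserName="my-readonly-user")
iam.put_user_policy(
    UserName="my-readonly-user",
    PolicyName="s3-read-only",
    PolicyDocument=json.dumps(read_only_policy),
)

# Generate an access key / secret key pair for the new user.
keys = iam.create_access_key(UserName="my-readonly-user")
print(keys["AccessKey"]["AccessKeyId"])
```

The new key pair can then be used with the REST API and will only be able to list and read objects in that one bucket.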

Yes, you can. The S3 API documentation describes the Authentication and Access Control services available to you. You can set up a bucket so that another Amazon S3 account can read but not modify items in the bucket.

Check out the details at http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?UsingAuthAccess.html (follow the link to "Using Query String Authentication") - this is a subdocument to the one Greg posted, and describes how to generate access URLs on the fly.
This uses a hashed form of the private key and allows expiration, so you can give brief access to files in a bucket without allowing unfettered access to the rest of the S3 store.
Constructing the REST URL by hand is quite difficult; it took me about three hours of coding to get it right, but it is a very powerful access technique.
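For reference, the current SDKs will construct these expiring URLs for you; a minimal boto3 sketch (bucket and key names are placeholders) looks like this:

```python
# Sketch: generate a time-limited (expiring) URL for a single object.
# "example-bucket" and "private/report.pdf" are hypothetical names.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-bucket", "Key": "private/report.pdf"},
    ExpiresIn=900,  # the link stops working after 15 minutes
)
print(url)
```

Anyone holding the printed URL can fetch that one object until it expires, without ever seeing your secret key.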

Related

s3:GetBucketLocation IAM permission for data transfer API on GCP

I want to use GCP's data transfer API to grab data from S3 into my GCS bucket. The S3 bucket is controlled by our client and we have zero control over it.
We learned that to use this API, the AWS IAM user has to have these permissions:
s3:ListBucket
s3:GetObject
s3:GetBucketLocation
https://cloud.google.com/st...
When I asked them, they said the permissions are granted at the prefix level and not the bucket level, since the bucket has data for many clients and not just us. They do not want to grant any permission that might give us access to the whole bucket's data; we should be limited to our prefix.
I want to know: will asking for this permission (s3:GetBucketLocation) at the prefix level give us access to ALL the data present in the bucket, or does it just allow the transfer API to locate the data?
I did check the AWS documentation and the closest answer was about the GetBucketLocation API, which stated:
" Returns the Region the bucket resides in. You set the bucket's Region using the LocationConstraint request parameter in a CreateBucket request. For more information, see CreateBucket."
So it does seem it only returns the region of the bucket, BUT there is no documentation to be found specific to the permission itself.
The Google documentation does say that this API only needs the region; however, we need to make sure it does not open a way for us to read all the data in the bucket, if that makes sense.
Please let me know if you have any knowledge on this.
Firstly, I don't think you need to get the bucket's region programmatically. If you're interested in a specific bucket, perhaps the client could just tell you its region?
The action-level detail is in the Service Authorization Reference (SAR) for S3, which just says:
Grants permission to return the Region that an Amazon S3 bucket resides in
So there's nothing about object-level (i.e. data) access granted by it.
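If it helps to make that concrete when talking to the client, a hypothetical policy along these lines (bucket name and prefix are made up) keeps object reads confined to your prefix while still allowing the region lookup; s3:GetBucketLocation is a bucket-level action, so it is granted on the bucket ARN, but it returns nothing except the region:

```python
# Sketch of the kind of policy the client might attach (names are hypothetical).
# Object/list access is scoped to one prefix; GetBucketLocation only reveals
# which region the bucket lives in, not any object data.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetBucketLocation",
            "Resource": "arn:aws:s3:::client-bucket",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::client-bucket",
            "Condition": {"StringLike": {"s3:prefix": "our-prefix/*"}},
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::client-bucket/our-prefix/*",
        },
    ],
}
```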

Copy between S3 buckets using signed URL with boto? [duplicate]

I'm using a service that puts the data I need on S3 and gives me a list of presigned URLs to download (http://.s3.amazonaws.com/?AWSAccessKeyID=...&Signature=...&Expires=...).
I want to copy those files into my S3 bucket without having to download them and upload again.
I'm using the Ruby SDK (but willing to try something else if it works) and couldn't find a way to write anything like this.
I was able to initialize the S3 object with my credentials (access_key and secret) that grants me access to my bucket, but how do I pass the "source-side" access_key_id, signature and expires parameters?
To make the problem a bit simpler: I can't even do a GET request to the object using the presigned parameters (not with plain HTTP; I want to do it through the SDK API).
I found a lot of examples of how to create a presigned URL, but nothing about how to authenticate using parameters that were already given to me (I obviously don't have the secret_key of my data provider).
Thanks!
You can't do this with a signed URL, but as has been mentioned, if you fetch and upload within EC2 in an appropriate region for the buckets in question, there's essentially no additional cost.
Also worth noting: the two buckets do not have to be in the same account, but the AWS key that you use to make the request has to have permission to put the target object and get the source object. Permissions can be granted across accounts... though in many cases, that's unlikely to be granted.
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html
You actually can do a copy with a presigned URL. To do this, you need to create a presigned PUT request that also includes a header like x-amz-copy-source: /sourceBucket/sourceObject in order to specify where you are copying from. In addition, if you want the copied object to have new metadata, you will also need to add the header x-amz-metadata-directive: REPLACE. See the REST API documentation for more details.
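As a rough, unverified sketch of that idea with boto3 (bucket and key names are made up, and the credentials used to presign must themselves be allowed to read the source object, so it does not help when only the provider's presigned URLs are available):

```python
# Sketch: presign a copy_object call, then trigger it with a plain PUT.
# "target-bucket", "source-bucket" and the keys are hypothetical names;
# the presigning credentials need GetObject on the source and PutObject
# on the target.
import boto3
import requests

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    ClientMethod="copy_object",
    Params={
        "Bucket": "target-bucket",
        "Key": "copies/object.dat",
        "CopySource": {"Bucket": "source-bucket", "Key": "object.dat"},
    },
    ExpiresIn=3600,
)

# The x-amz-copy-source header has to match what was signed into the URL.
resp = requests.put(url, headers={"x-amz-copy-source": "/source-bucket/object.dat"})
print(resp.status_code)
```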

Google Cloud equivalent of Amazon STS

Amazon STS offers the ability to take an IAM token and create a limited subset of the abilities of that token for other use. The subset of abilities can be by time (expiring in N hours) and by allowed operations (e.g. read one S3 bucket but not all the S3 buckets the original token can read).
Because this is done using the S3 ARN format, which supports wildcards in the S3 key name, it's possible to create a sub-token that can read part of an S3 bucket.
Looking through Google Cloud Storage's access control docs, I couldn't find the equivalent of this functionality in GCS.
To be more specific, I'd like to create a bucket with these four objects:
/folder1/file1
/folder1/file2
/folder2/file3
/folder2/file4
And, given a token with permissions to access all files indefinitely, produce a limited subset of the token with permissions to view just the objects in /folder2/* (so /folder2/file3 and /folder2/file4) for N hours.
Is this possible in GCS like it is in S3/STS?
Currently, in GCP there are no tokens with a limited subset of the abilities of another token.
The most similar thing to what you are asking for is Signed URLs, since they allow time-limited access to Cloud Storage objects.
I don't know why you need them to have abilities that are a subset of the abilities of another token, but in your case you could just create Signed URLs with permission to view the objects in /folder2/*.
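A minimal sketch with the google-cloud-storage Python client (the bucket name and the 4-hour expiry stand in for your own values) would be:

```python
# Sketch: generate V4 signed URLs for the objects under folder2/,
# each valid for N hours. "example-bucket" is a hypothetical name.
from datetime import timedelta
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-bucket")

for name in ["folder2/file3", "folder2/file4"]:
    blob = bucket.blob(name)
    url = blob.generate_signed_url(
        version="v4",
        expiration=timedelta(hours=4),  # "N hours"
        method="GET",
    )
    print(url)
```

The recipient of those URLs can read exactly those two objects until the expiry passes, and nothing else in the bucket.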

Can I run my static website from an S3 Bucket, and add password protection?

I'm running a static website completely from an Amazon S3 bucket, but I want to password protect my content. Is this possible? The type of authentication doesn't bother me, it just needs to be there, so that people can't just 'discover' my website.
At the moment, I don't have a domain name set up, which I believe rules out http://www.s3auth.com/ as a possible solution. Are there any others?
AWS doesn't provide a way to do this directly right now. The S3auth solution you mentioned is nice in that your bucket/objects remain private so that a direct access to the bucket does not allow objects to be read without your private credentials. The disadvantage of the s3auth approach is that it relies on you trusting s3auth with your private credentials. If your credentials are compromised at any stage, it could be costly depending on how someone might abuse your access rights.
If you make your objects publicly readable (as you do when you create a website), anyone who learns/guesses/knows your objects names etc can access them. Or indeed if the bucket is readable, then all they need is the bucket name. There is no real way around this except by tightening the S3 access permissions.
If you only access your website from certain IP addresses, perhaps looking at Bucket Policies may help. Scroll down to Restricting Access to Specific IP Addresses. This is not a password but it does allow you to restrict where accesses can come from at least.
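A sketch of that kind of bucket policy, applied with boto3 (the bucket name and CIDR range below are placeholders):

```python
# Sketch: allow public-style reads of the website objects only from one
# IP range. "example-site-bucket" and 203.0.113.0/24 are hypothetical.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-site-bucket/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-site-bucket", Policy=json.dumps(policy)
)
```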
Another common technique for providing temporary access to objects is Query String Request Authentication. This does not however match your original requirement of password protecting your S3 bucket website.
This is possible using CloudFront and Lambda#Edge. See the answer here: https://stackoverflow.com/a/45971193/4550880
I think the AWS SDK for Javascript is what you're looking for. To be fair, it wasn't available when you posted this question 2 years ago. It allows you to login with Facebook, Google or Amazon. Here's another resource using AWS login.

How do you let only authorized users have access to contents stored in Amazon's S3?

Once you store content in S3 and make it public, everyone has access to it. Is there a way to let only authorized users have access to the content stored in S3? For example, I have a site that lets people store their documents. The server stores these documents in S3, and I would like only the user who uploaded a document to have access to it.
I know I can copy the S3 contents to my server and let only authorized users have access, but this will make the server slow. I would like to be able to serve the contents directly to the client's browser from S3.
Thanks.
The link given in the above answer is no longer correct -- Amazon has reorganized its documentation. I think these are the correct pages to read:
http://docs.amazonwebservices.com/AmazonS3/2006-03-01/dev/index.html?RESTAuthentication.html#RESTAuthenticationQueryStringAuth
http://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html
You want to read the section called 'Query String Request Authentication Alternative' found here.
It explains how to create a time-based expiring link to an S3 object.
You would then have to write the code that manages the users (the 'who owns which object' part of your question).
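That ownership-management part might look something like the sketch below, where get_owner() stands in for whatever user/document store your app already has; only the presigned-URL call is S3-specific, and the names are hypothetical:

```python
# Sketch: serve a document straight from S3, but only to its owner.
# get_owner(key) is a placeholder for the app's own ownership lookup.
import boto3

s3 = boto3.client("s3")

def document_url_for(user_id, bucket, key, get_owner):
    if get_owner(key) != user_id:  # app-level authorization check
        raise PermissionError("not your document")
    # Short-lived query-string-authenticated link the browser can follow.
    return s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=300,
    )
```

The browser downloads directly from S3 via the returned link, so your server never proxies the file; it only decides who gets a link.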