How do I stop people from seeing every file in my Amazon S3 bucket?

If someone goes to the URL of my bucket, they can see every single file listed.
Although I want the files in my bucket to be viewable by the public, I'd prefer not to have this list view available. Is there a way to prevent "directory listings" like this?

You should remove read access for the "All Users" built-in group from the bucket's ACL. You can do that with a tool like the free CloudBerry Explorer.
Make sure you keep read access on the files you want to serve from S3.
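If you would rather script it, here is a minimal sketch using the AWS SDK for Node.js (the bucket name is a placeholder). Setting the canned ACL to "private" drops the public read grant that enables the listing:

    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    // The "private" canned ACL removes the "All Users" read grant,
    // so anonymous visitors can no longer list the bucket's contents.
    s3.putBucketAcl({ Bucket: 'my-bucket', ACL: 'private' }, (err) => {
      if (err) console.error(err);
      else console.log('Bucket listing is no longer public');
    });

Individual objects keep their own ACLs, so files you have made publicly readable remain accessible at their direct URLs.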
Thanks
Andy

Related

Can a developer have access to a limited S3 console?

A developer of mine wants to be able to see the entire contents of the S3 bucket that I've given him to develop with. It seems the only way to do this is to give him a limited version of the AWS console so he can watch objects enter the bucket.
Is this even possible? Is there any other way to allow him to see objects as they populate the bucket?
You can use IAM users and policies to control access to resources at a granular level, even down to individual objects contained in an S3 bucket.
You can read more about IAM here: https://aws.amazon.com/iam/
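As an illustrative sketch (the bucket name is a placeholder), an IAM policy granting read-only access to a single bucket could look roughly like this:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::example-dev-bucket"
        },
        {
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::example-dev-bucket/*"
        }
      ]
    }

Attach this to the developer's IAM user. Note that without s3:ListAllMyBuckets the console's bucket list page will show an error, but he can still open the bucket directly by its URL and watch objects arrive.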

S3 — Auto generate folder structure?

I need to store user-uploaded files in Amazon S3. I'm new to S3, but as I understand from the docs, S3 requires me to specify the file's upload path in the PUT method.
I'm wondering if there is a way to send a file to S3 and simply get a link for http(s) access? I wish Amazon would handle all the headache related to the file/folder structure itself. For example, I'd just pipe a file from node.js to S3, and on callback get an http link with no expiration date. Amazon itself would create something like /2014/12/01/.../$hash.jpg and just return me the final link. Such a use case seems quite common.
Is it possible? If not, could you suggest any options to simplify the file storage/filesystem tree structure in S3?
Many thanks.
S3 doesn't actually have folders. In a normal filesystem, 2014/12/01/blah.jpg would mean you've got a 2014 folder with a folder called 12 inside it and so on, but in S3 the entire 2014/12/01/blah.jpg is the key - essentially a single long filename. You don't have to create any folders.
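Since a key is just a string, you can generate the date/hash path yourself at upload time. A minimal sketch with the AWS SDK for Node.js (the bucket name and hashing scheme are illustrative, not something S3 does for you):

    const AWS = require('aws-sdk');
    const crypto = require('crypto');
    const fs = require('fs');

    const s3 = new AWS.S3();

    function uploadFile(localPath, callback) {
      const body = fs.readFileSync(localPath);
      // Build a key like 2014/12/1/<hash>.jpg -- no folders need to exist first.
      const now = new Date();
      const hash = crypto.createHash('md5').update(body).digest('hex');
      const key = now.getFullYear() + '/' + (now.getMonth() + 1) + '/' +
                  now.getDate() + '/' + hash + '.jpg';

      s3.upload({ Bucket: 'my-bucket', Key: key, Body: body }, (err, data) => {
        if (err) return callback(err);
        callback(null, data.Location); // URL of the uploaded object
      });
    }

The returned URL never expires, but it only works without signing if the object itself is publicly readable.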

AWS: Append only mode for S3 bucket

Context
I want to have a machine upload a file dump.rdb to s3/blahblahblah/YEAR-MONTH-DAY-HOUR.rdb on the hour.
Thus, I need this machine to have the ability to upload new files to S3.
However, I don't want this machine to have the ability to (1) delete existing files or (2) overwrite existing files.
In a certain sense, it can only "append" -- it can only add new objects.
Question:
Is there a way to configure an S3 setup like this?
Thanks!
I cannot comment yet, so here is a refinement to @Viccari's answer...
The answer is misleading because it only addresses (1) of your requirements, not (2). In fact, it appears that it is not possible to prevent overwriting existing files using either method, although you can enable versioning. See here: Amazon S3 ACL for read-only and write-once access.
Because you add a timestamp to your file names, you have more or less worked around the problem. (Same would be true of other schemes to encode the "version" of each file in the file name: timestamps, UUIDs, hashes.) However, note that you are not truly protected. A bug in your code, or two uploads in the same hour, would result in an overwritten file.
Yes, it is possible.
There are two ways to add permissions to a bucket and its contents: Bucket policies and Bucket ACLs. You can achieve what you want by using bucket policies. On the other hand, Bucket ACLs do not allow you to give "create" permission without giving "delete" permission as well.
1. Bucket Policies:
You can create a bucket policy (see some common examples here) allowing, for example, a specific IP address to have specific permissions.
For example, you can allow: s3:PutObject and not allow s3:DeleteObject.
More on S3 actions in bucket policies can be found here.
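As a sketch (the account ID, user and bucket names are placeholders), such a policy might look like this:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::123456789012:user/uploader" },
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::my-dump-bucket/*"
        }
      ]
    }

Because s3:DeleteObject is never granted, the uploading machine can add objects but cannot remove them.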
2. Bucket ACLs:
Using Bucket ACLs, you can only grant the complete "write" permission, i.e., if a given user is able to add a file, he is also able to delete files.
This is NOT possible! S3 is a key/value store and thus inherently doesn't support append-only access. A PUT (or cp) to S3 can always overwrite a file. By enabling versioning on your bucket, however, you are still safe in case the account uploading the files gets compromised.
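Enabling versioning is a one-time call; here is a minimal sketch with the AWS SDK for Node.js (the bucket name is a placeholder):

    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    // With versioning on, an overwriting PUT creates a new object version
    // instead of destroying the old data, and deletes leave a recoverable
    // delete marker rather than erasing the object.
    s3.putBucketVersioning({
      Bucket: 'my-bucket',
      VersioningConfiguration: { Status: 'Enabled' }
    }, (err) => {
      if (err) console.error(err);
    });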

Differences in some filenames case after uploading to Amazon S3

I uploaded a lot of files (about 5,800) to Amazon S3, which seemed to work perfectly well, but a few of them (about 30) had their filenames converted to lowercase.
The first time, I uploaded with Cyberduck. When I saw this problem, I deleted them all and re-uploaded with Transmit. Same result.
I see absolutely no pattern that would link the files that got their names changed, it seems very random.
Has anyone had this happen to them?
Any idea what could be going on?
Thank you!
Daniel
First, note that Amazon S3 object URLs are case sensitive. When you uploaded a file with uppercase characters and accessed it with a matching URL, it worked. But if the object names were converted to lowercase and you are still using the old URLs, you may get an Access Denied/NoSuchKey error message.
Can you try Bucket Explorer to generate the URL for the Amazon S3 object and then access the file that way?
Disclosure: I work for Bucket Explorer.
When I upload to Amazon servers, I always use FileZilla and SFTP, and I've never had such a problem. I'd guess (and honestly, this is just a guess since I haven't used Cyberduck or Transmit) that the utilities you're using are doing the filename changing. Try it with FileZilla and see what the result is.

jets3t and Downloading Files from AmazonS3 with Different Name

We're using Amazon S3 for file storage and recently found out that we need to keep some sort of directory structure. Since S3 doesn't allow that, we know we can name the files according to their structure for storage. For example...
abc/123/draft.doc
What I want to know is: if I want to provide a public link to this particular file, is there any way that the file can simply be draft.doc instead of abc/123/draft.doc?
I feel stupid. After some more investigation, I realized that by creating a GET URL to the resource, I get exactly what I need.
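One way to do this (shown here with the AWS SDK for Node.js rather than jets3t; the bucket name is a placeholder) is a signed GET URL that overrides the download filename via response-content-disposition:

    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    // The signed URL serves the object stored at abc/123/draft.doc but
    // tells the browser to save it as plain draft.doc.
    const url = s3.getSignedUrl('getObject', {
      Bucket: 'my-bucket',
      Key: 'abc/123/draft.doc',
      Expires: 3600, // link validity in seconds
      ResponseContentDisposition: 'attachment; filename="draft.doc"'
    });
    console.log(url);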