I have set up an S3 bucket to host static files.
When using the website endpoint (http://<bucket-name>.s3-website-us-east-1.amazonaws.com/), it forces me to set an index file, and when the file isn't found it throws an error instead of listing directory contents.
When using the S3 REST endpoint (<bucket-name>.s3.amazonaws.com), I get an XML listing of the files, but I need an HTML listing that users can click through to reach each file.
I have tried setting the permissions of all files and the bucket itself to "List" for "Everyone" in the AWS Console, but still no luck.
I have also tried some of the JavaScript alternatives, but they either don't work under the website URL (which redirects to the index file) or just don't work at all. As a last resort, a collapsible JavaScript listing would be better than nothing, but I haven't found a good one.
Is this possible? If so, do I need to change permissions, ACL or something else?
I've created a simple bit of JavaScript that generates the HTML-style directory index you are looking for: https://github.com/rgrp/s3-bucket-listing
The README has specific instructions for handling Amazon S3 "website" buckets: https://github.com/rgrp/s3-bucket-listing#website-buckets
You can see a live example of the script in action on this s3 bucket (in website mode): http://data.openspending.org/
There is also this solution: https://github.com/caussourd/aws-s3-bucket-listing
It is similar to https://github.com/rgrp/s3-bucket-listing, but I couldn't make that one work with Internet Explorer. https://github.com/caussourd/aws-s3-bucket-listing works with IE and also adds the ability to sort the files by name, size and date. On the downside, it doesn't follow folders: only the files at one level are displayed.
This might solve your problem. Security settings for the Everyone group:
(you need the bucketexplorer.com software for this)
If you are sharing files over HTTP, you may or may not want people to be able to list the contents of a bucket (folder). If you want the bucket contents to be listed when someone enters the bucket name (http://s3.amazonaws.com/bucket_name/), then edit the Access Control List and give the Everyone group the access level of Read (and do likewise with the contents of the bucket). If you don't want the bucket contents listable but do want to share the files within it, disable Read access for the Everyone group on the bucket itself, and then enable Read access for the individual files within the bucket.
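If you'd rather script this than click through Bucket Explorer, here is a minimal boto3 sketch granting the same Everyone/Read access (the bucket name is hypothetical; note this makes the bucket publicly listable):

import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # hypothetical bucket name

# Everyone -> Read on the bucket, so its contents can be listed.
s3.put_bucket_acl(Bucket=BUCKET, ACL="public-read")

# Everyone -> Read on each object, so the files themselves can be fetched.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        s3.put_object_acl(Bucket=BUCKET, Key=obj["Key"], ACL="public-read")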
I created a much simpler solution. Just place the index.html file in the root of your folder and it will do the job. No configuration required. https://github.com/prabhatsharma/s3-directorylisting
I had a similar problem and created a JavaScript-and-iframe solution that works pretty well for listing directories on S3 website buckets. You just have to drop a couple of .html files into the directory you want to list. You can find it here:
https://github.com/adam-p/s3-file-list-page
I found s3browser, which allowed me to set up a directory on the main web site that allowed browsing of the s3 bucket. It worked very well and was very easy to set up.
Here is another approach, based on pure JavaScript and the AWS SDK for JavaScript. It needs no PHP or other server-side engine, just a plain website (Apache or even IIS).
https://github.com/juvs/s3-bucket-browser
It is not intended to be deployed in your own bucket (to me, that makes no sense).
Using the new IAM users from AWS, you can provide more specific and secure access to your buckets. There is no need to publish your bucket as a website and make everything public.
If you want to secure the access, you can use conventional methods to authenticate users for your existing website.
Hope this helps too!
Current Situation
I have a project on GitHub that builds after every commit on Travis-CI. After each successful build Travis uploads the artifacts to an S3 bucket. Is there some way for me to easily let anyone access the files in the bucket? I know I could generate a read-only access key, but it'd be easier for the user to access the files through their web browser.
I have website hosting enabled with the root document of "." set.
However, I still get a 403 Forbidden when trying to go to the bucket's endpoint.
The Question
How can I let users easily browse and download artifacts stored on Amazon S3 from their web browser? Preferably without a third-party client.
I found this related question: Directory Listing in S3 Static Website
As it turns out, if you enable public read for the whole bucket, S3 can serve directory listings. The problem is that they are in XML instead of HTML, so they are not very user-friendly.
There are three ways you could go about generating listings:
Generate index.html files for each directory on your own computer, upload them to S3, and update them whenever you add new files to a directory. Very low-tech. Since you're uploading build files straight from Travis, this may not be that practical, as it would require doing extra work there. (One way to automate the generation step is sketched after this list.)
Use a client-side S3 browser tool.
s3-bucket-listing by Rufus Pollock
s3-file-list-page by Adam Pritchard
Use a server-side browser tool.
s3browser (PHP)
s3index (Scala). Going by the existence of a Procfile, it may be readily deployable to Heroku. I'm not sure, since I don't have any experience with Scala.
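For option 1, here is a minimal Python sketch of the generation step (the build directory and bucket name are hypothetical; it writes an index.html into every local directory, which you would then upload, e.g. with aws s3 sync):

import os
from pathlib import Path

BUILD_DIR = Path("build")  # hypothetical local directory mirroring the bucket

# Write an index.html into every directory, linking files and subdirectories.
for root, dirs, files in os.walk(BUILD_DIR):
    entries = [d + "/" for d in sorted(dirs)] + sorted(
        f for f in files if f != "index.html"
    )
    links = "\n".join(
        '<li><a href="{0}">{0}</a></li>'.format(name) for name in entries
    )
    html = "<html><body><h1>{}</h1><ul>\n{}\n</ul></body></html>".format(
        os.path.relpath(root, BUILD_DIR), links
    )
    Path(root, "index.html").write_text(html)

# Then upload the result, e.g.: aws s3 sync build/ s3://my-artifacts-bucket/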
Filestash is the perfect tool for that:
- Log in to your bucket from https://www.filestash.app/s3-browser.html
- Create a shared link
- Share it with the world
Filestash is also open source. (Disclaimer: I am the author.)
I had the same problem and fixed it by using the new "Make Public" context menu. Go to https://console.aws.amazon.com/s3/home, select the bucket, and then for each folder or file (or multiple selections) right-click and choose "Make Public".
You can use a bucket policy to give anonymous users full read access to your objects. Depending on whether you need them to LIST or just perform a GET, you'll want to tweak this (i.e. permissions for listing the contents of a bucket have the action set to "s3:ListBucket"; a listing-enabled variant is sketched after the policy below).
http://docs.aws.amazon.com/AmazonS3/latest/dev/AccessPolicyLanguage_UseCases_s3_a.html
Your policy will look something like the following. You can use the S3 console at http://aws.amazon.com/console to upload it.
{
  "Version": "2008-10-17",
  "Statement": [{
    "Sid": "AddPerm",
    "Effect": "Allow",
    "Principal": {
      "AWS": "*"
    },
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::bucket/*"]
  }]
}
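If you also need anonymous LIST access, note that s3:ListBucket must be granted on the bucket ARN itself, not on bucket/*. A hedged boto3 sketch that applies such a policy (bucket name hypothetical):

import json
import boto3

BUCKET = "my-example-bucket"  # hypothetical bucket name

policy = {
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "PublicGet",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::{}/*".format(BUCKET)],
        },
        {
            "Sid": "PublicList",
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": ["s3:ListBucket"],
            # ListBucket applies to the bucket itself, not to its objects.
            "Resource": ["arn:aws:s3:::{}".format(BUCKET)],
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))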
If you're truly opening up your objects to the world, you'll want to look into setting up CloudWatch billing alarms so you can shut off permissions to your objects if they become too popular.
https://github.com/jupierce/aws-s3-web-browser-file-listing is a solution I developed for this use case. It leverages AWS CloudFront and Lambda#Edge functions to dynamically render and deliver file listings to a client's browser.
To use it, a simple CloudFormation template will create an S3 bucket and have your file server interface up and running in just a few minutes.
There are many viable alternatives, as already suggested by other posters, but I believe this approach has a unique range of benefits:
Completely serverless and built for web-scale.
Open source and free to use (though, of course, you must pay AWS for resource utilization, such as S3 storage costs).
Simple / static client browser content:
No Ajax or third party libraries to worry about.
No browser compatibility worries.
All backing systems are native AWS components.
You never share account credentials or rely on 3rd party services.
The S3 bucket remains private, allowing you to expose only parts of the bucket.
A custom hostname / SSL certificate can be established for your file server interface.
Some or all of the hosted files can be protected behind a Basic Auth username/password.
An AWS WebACL can be configured to prevent abusive access to the service.
I've been trying to set up a static website on Amazon S3. I've got things set up to use my personal domain, and so far I've been able to access the content just fine. All the links work, both for pages in the "root" directory and pages in subfolders, so it seems that S3 can follow the paths I'm using.
The problem is that none of the CSS styling is being applied to the pages (it works fine on the development server on my local machine). I've tried using relative and absolute paths, but this doesn't seem to be the problem. I can see the content just as it should be, and I can navigate the site normally, but there's just no styling.
I've tried messing with permissions on the folders, but I'm clearly not getting something right. Am I missing something obvious? Surely S3 can use individual stylesheets?
Thanks in advance for any thoughts.
The reason is that Amazon S3 sets the Content-Type of CSS files to binary/octet-stream; follow this tutorial to solve the issue.
You need to select your CSS file and then, from the Properties tab, click on Metadata. This assigns optional metadata to the object as a name-value pair. The Content-Type key must have the value text/css.
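If you have many stylesheets, fixing them one by one in the console gets tedious. A minimal boto3 sketch that rewrites the Content-Type in place via a self-copy (bucket and key are hypothetical):

import boto3

s3 = boto3.client("s3")
BUCKET = "my-website-bucket"  # hypothetical bucket name
KEY = "css/style.css"         # hypothetical object key

# Copy the object onto itself, replacing its metadata with the correct type.
s3.copy_object(
    Bucket=BUCKET,
    Key=KEY,
    CopySource={"Bucket": BUCKET, "Key": KEY},
    ContentType="text/css",
    MetadataDirective="REPLACE",
)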
I have tried looking for an answer to this but I think I am perhaps using the wrong terminology so I figure I will give this a shot.
I have a Rails app where a company can have an account with multiple users each with various permissions etc. Part of the system will be the ability to upload files and I am looking at S3 for storage. What I want is the ability to say that users from Company A can only download the files associated with that company?
I get the impression I can't, unless I restrict downloads to my deployment server's IP range (which will be Heroku) and then feed the files through a controller with a send_file() call. That would work, but then I'm reading data from S3 to Heroku and back out to the user, vs. directly from S3 to the user.
If I went with the send_file method can I close off my S3 server to the outside world and have my Heroku app send the file direct?
A less secure idea I had was to create a unique slug for each file and store it under that name to prevent random guessing of files i.e. http://mys3server/W4YIU5YIU6YIBKKD.jpg etc. This would be quick and dirty but not 100% secure.
Amazon S3 buckets support policies for granting or denying access based on different conditions. You could probably use those to protect your files from different user groups. Have a look at the policy documentation to get an idea of what is possible. After that you can switch over to the AWS policy generator to generate a valid policy depending on your needs.
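As an illustration only (the bucket name, account ID, IAM user, and prefix are all hypothetical), a policy letting one company's IAM user read just that company's prefix could be applied like this:

import json
import boto3

BUCKET = "my-uploads-bucket"  # hypothetical bucket name

# Grant Company A's IAM user read access to the company-a/ prefix only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CompanyAReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/company-a-app"},
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::{}/company-a/*".format(BUCKET)],
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))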
I tried to use the MediaFire API, but when I use folder/get_info, it doesn't return the file & folder array like in the example.
Full url I used: http://www.mediafire.com/api/folder/get_info.php?folder_key=l461cm2d8hfxd
What's wrong with my attempt? Thank you.
You can try the MediaFire API PHP Library. This class currently implements all the functions of the MediaFire API.
OK, I just took a look at their API documentation. They've updated the get_info function for folders. They've taken out the file tree...
So if you are uploading via the dropbox (which doesn't return the quickkey associated with the file), you can NEVER do a dropbox upload and then use the API to find the file and download it. This renders their API as useless as tits on a boar hog.
The point of a dropbox is to allow remote uploads to a folder; you then know the folder key, so you can query the API and get back the quickkeys of the documents in the folder, which lets you manage those files remotely: move, delete, download, etc. Now you cannot do this. FAIL FAIL FAIL.
Despite the get_info functionality not working, folder search can resolve at least some issues with retrieving quickkeys. In my case I searched for ".mp3" and was handed all the MP3s in my folder.
Is there a way to make S3 default to an index.html page? E.g.: My bucket object listing:
/index.html
/favicon.ico
/images/logo.gif
A call to www.example.com/index.html works great! But a call to www.example.com/ would get either a 403 or a REST object-listing XML document, depending on how the bucket-level ACL is configured.
So, the question: Is there a way to have index.html functionality with content hosted on S3?
For people still struggling against this after 3 years, let me add some important information:
The URL for your website (and to which you have to point your DNS) is not
<bucket_name>.s3-us-west-2.amazonaws.com, but
<bucket_name>.s3-website-us-west-2.amazonaws.com.
If you use the first, it will not work as intended, no matter how you configure the Index document.
For a specific example, consider:
http://www-example-com.s3.amazonaws.com/index.html works.
http://www-example-com.s3.amazonaws.com/ fails with AccessDenied.
http://www-example-com.s3-website-us-west-2.amazonaws.com/ works!
To get your true website address, go to your S3 Management Console, select the target bucket, then Properties, then Static Website Hosting. It will show the website URL that will work.
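If you'd rather look the endpoint up programmatically, here is a small boto3 sketch (the bucket name is hypothetical; us-east-1 is reported as a null LocationConstraint, and a few regions use a dot rather than a dash after s3-website):

import boto3

BUCKET = "www-example-com"  # hypothetical bucket name

# Look up the bucket's region; us-east-1 comes back as None.
region = boto3.client("s3").get_bucket_location(Bucket=BUCKET)[
    "LocationConstraint"
] or "us-east-1"

# Classic website-endpoint form; some newer regions use "s3-website." instead.
print("http://{}.s3-website-{}.amazonaws.com/".format(BUCKET, region))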
Amazon S3 now supports Index Documents
The index document for a bucket can be set to something like index.html. When accessing the root of the site, or a sub-directory containing a document of that name, that document is returned.
It is extremely easy to do using the aws cli:
aws s3 website s3://$MY_BUCKET_NAME --index-document index.html
You can also set the index document from the AWS Management Console.
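Equivalently, a boto3 sketch (bucket name and error document are hypothetical):

import boto3

boto3.client("s3").put_bucket_website(
    Bucket="my-example-bucket",  # hypothetical bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)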
You can solve it easily with an Amazon CloudFront link; CloudFront lets you modify the default root object. You can download the manager here: m1.mycloudbuddy.com/downloads.html.
Since it's been a long time since this question was asked, and Amazon S3 has changed its interface, I would like to answer with updated screenshots.
We need to enable 'static website hosting' for S3 to serve as web hosting.
- Go to Properties -> click on Static website hosting -> select 'Use this bucket to host a website'.
- Enter the index document (index.html by default), the error document and redirection rules, if any.
As answered in this answer on Stack Overflow, the web hosting link would be: http://bucket-name.s3-website-region.amazonaws.com
I would suggest reading this thread from 2006 (on the Amazon Web Services developer connection). It seems there's no easy solution to this.
Yes, using AWS CloudFront lets you assign a default file.
You can do it using DNS web forwards and cloaking. Just forward to the complete path of the index.html:
www.example.com forwards to http://www.example.com.s3.amazonaws.com, and make sure you cloak the output.