I've been trying to set up a static website on Amazon S3. I've got things set up to use my personal domain, and so far I've been able to access the content just fine. All the links work, both for pages in the "root" directory and pages in subfolders, so it seems that S3 can follow the paths I'm using.
The problem is that none of the CSS styling is being applied to the pages (it works fine on the development server on my local machine). I've tried both relative and absolute paths, but that doesn't seem to be the problem. I can see the content just as it should be, and I can navigate the site normally; there's just no styling.
I've tried messing with permissions on the folders, but I'm clearly not getting something right. Am I missing something obvious? Surely S3 can use individual stylesheets?
Thanks in advance for any thoughts.
The reason is that Amazon S3 sets the Content-Type of CSS files to binary/octet-stream when they're uploaded without an explicit type, so browsers refuse to apply them as stylesheets. Here's how to fix it:
You need to select your CSS file and then, from the Properties tab, click on Metadata. This lets you assign optional metadata to the object as name-value pairs. The Content-Type key must have the value text/css.
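If you'd rather fix it from the command line, you can copy the object over itself with a replaced Content-Type. This is a minimal sketch using the AWS CLI; the bucket and file names are placeholders:

aws s3 cp s3://my-bucket/css/style.css s3://my-bucket/css/style.css --content-type text/css --metadata-directive REPLACE

The --metadata-directive REPLACE flag tells S3 to overwrite the stored metadata rather than copy it, so the new Content-Type sticks.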
I want to archive an old website which was built with PHP. Its URLs are full of .phps and query strings.
I don't want anything to actually change from the perspective of the visitor -- the URLs should remain the same. The only actual difference is that it will no longer be interactive or dynamic.
I ran wget --recursive to spider the site and grab all the static content. So now I have thousands of files such as page.php?param1=a&param2=b. I want to serve them up as they were before, so they'll mostly need Content-Type: text/html, and the webserver needs to treat ? and & in the URL as literal ? and & in the files it looks up on disk; in other words, it needs to not support query strings.
And ideally I'd like to host it for free.
My first thought was Netlify, but deployment on Netlify fails if any files have ? in their filename. I'm also concerned that I may not be able to tell it that most of these files are to be served as text/html (and one as application/rss+xml) even though there's no clue about that in their filenames.
I then considered https://surge.sh/, but hit exactly the same problems.
I then tried AWS S3. It's not free but it's pretty close. I got further here: I was able to attach metadata to the files I was uploading so each would have the correct content type, and it doesn't mind the files having ? and & in their filenames. However, its webserver interprets ?... as a query string, and it looks up and serves the file without that suffix. I can't find any way to disable query strings.
Did I miss anything -- is there a way to make any of the above hosts act the way I want them to?
Is there another host which will fit the bill?
If all else fails, I'll find a way to transform all the filenames and all the links between the files. I found how to get wget to transform ? to @, which may be good enough. It would be a shame to go this route, however, since the URLs would all change.
I found a solution with Netlify.
I added the wget options --adjust-extension and --restrict-file-names=windows.
The --adjust-extension part adds .html at the end of filenames which were served as HTML but didn't already have that extension, so now we have for example index.php.html. This was the simplest way to get Netlify to serve these files as HTML. It may be possible to skip this and manually specify the content types of these files.
The --restrict-file-names=windows option alters filenames in a few ways, the most important of which is that it replaces ? with @. This is needed since Netlify doesn't let us deploy files with ? in the name. It's a bit of a hack; this is not really what the option is meant for.
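Putting it together, the crawl looks something like this (the site URL is a placeholder):

wget --recursive --adjust-extension --restrict-file-names=windows https://old-site.example.com/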
This gives static files with names like myfile.php@param1=value1&param2=value2.html and myfile.php.html.
I did some cleanup. For example, I needed to adjust a few link and resource paths to be absolute rather than relative, because of how Netlify handles the presence or absence of trailing slashes.
I wrote a _redirects file to define URL rewriting rules. As the Netlify redirect options documentation shows, we can test for specific query parameters and capture their values, then use those values in the destinations. Specifying a 200 code makes Netlify handle the rule as a rewrite rather than a redirection (i.e. the visitor still sees the original URL). An exclamation mark is needed after the 200 code if a "query-string-less" version (such as mypage.php.html) exists, to tell Netlify we are intentionally shadowing it.
/mypage.php param1=:param1 param2=:param2 /mypage.php@param1=:param1&param2=:param2.html 200!
/mypage.php param1=:param1 /mypage.php@param1=:param1.html 200!
/mypage.php param2=:param2 /mypage.php@param2=:param2.html 200!
If not all query parameter combinations are actually used in the dumped files, not all of the redirect lines need to be included of course.
There's no need for a final /mypage.php /mypage.php.html 200 line, since Netlify automatically looks for a file with a .html extension added to the requested URL and serves it if found.
I wrote a _headers file to set the content type of my RSS file:
/rss.php
  Content-Type: application/rss+xml
I hope this helps somebody.
I'm working with MediaWiki.
I want to change the upload directory path to AWS S3. I tried these two extensions, but I'm getting some warning messages.
I don't know whether these extensions are working correctly.
https://www.mediawiki.org/wiki/Extension:LocalS3Repo and
https://www.mediawiki.org/wiki/Extension:AWS
If anybody is working with these extensions, or has achieved this in some other way, please explain how.
I have been successfully using the method described here, though in step 6, rather than using an Apache rewrite, I changed the image paths in LocalSettings.php.
(It was quite a lot of work though, and I never figured out a way to set the Cache-Control and Expires headers on the files, which was the real reason I wanted to do this to begin with.)
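For what it's worth, those headers can be set after the fact by copying the objects over themselves with replaced metadata. A minimal sketch with the AWS CLI, assuming a bucket named wiki-uploads (note that REPLACE discards the existing metadata, so re-specify anything you want to keep):

aws s3 cp s3://wiki-uploads/images/ s3://wiki-uploads/images/ --recursive --metadata-directive REPLACE --cache-control "public, max-age=31536000"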
I have a website on SiteFinity 4.4. I need to make a document available on a very specific URL, i.e.
http://www.example.com/reports/the-report.pdf
If I just create a directory in the root of the site, it doesn't work (503 error). And when I try to use the 302Redirect.xml file to redirect the URL to the PDF, it doesn't work either (same error). The link has already been published and has to be exactly as specified. How do I solve this?
Any help would be greatly appreciated.
Sitefinity wouldn't block a folder. Adding a physical folder and dropping the report in the proper place should work, so you'll probably have to check your server configuration.
Anyway, the fastest way outside Sitefinity would be to create an IIS rewrite rule. Make http://www.example.com/reports/the-report.pdf the pattern and rewrite it to the URL of the document in the Sitefinity library.
When you upload a document to the library in Sitefinity it gives you a direct URL, something like /docs/defaultlibrary/document. You can verify the URL by going to Content >> Documents and Files and choosing "Embed link to this file". That gives you a pop-up with the URL.
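For illustration, a rule like this in web.config would do it, assuming the IIS URL Rewrite module is installed. The destination path here is hypothetical; substitute the URL from the "Embed link to this file" pop-up:

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Serve the published PDF URL from the Sitefinity document library -->
        <rule name="report-pdf" stopProcessing="true">
          <match url="^reports/the-report\.pdf$" />
          <action type="Rewrite" url="/docs/defaultlibrary/the-report" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>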
I have set up an S3 bucket to host static files.
When using the website endpoint (http://<bucket-name>.s3-website-us-east-1.amazonaws.com/): it forces me to set an index file, and when the file isn't found, it throws an error instead of listing directory contents.
When using the S3 endpoint (http://<bucket-name>.s3.amazonaws.com/): I get an XML listing of the files, but I need an HTML listing where users can click a link to each file.
I have tried setting the permissions of all files and the bucket itself to "List" for "Everyone" in the AWS Console, but still no luck.
I have also tried some of the JavaScript alternatives, but they either don't work under the website URL (which redirects to the index file) or just don't work at all. As a last resort, a collapsible JavaScript listing would be better than nothing, but I haven't found a good one.
Is this possible? If so, do I need to change permissions, ACL or something else?
I've created a simple bit of JS that creates a directory index in the HTML style you're looking for: https://github.com/rgrp/s3-bucket-listing
The README has specific instructions for handling Amazon S3 "website" buckets: https://github.com/rgrp/s3-bucket-listing#website-buckets
You can see a live example of the script in action on this s3 bucket (in website mode): http://data.openspending.org/
There is also this solution: https://github.com/caussourd/aws-s3-bucket-listing
It's similar to https://github.com/rgrp/s3-bucket-listing, but I couldn't make that one work with Internet Explorer. https://github.com/caussourd/aws-s3-bucket-listing works with IE and also adds the ability to order the files by name, size and date. On the downside, it doesn't follow folders: only the files at one level are displayed.
This might solve your problem. Security settings for the Everyone group (you need the bucketexplorer.com software for this):
If you are sharing files over HTTP, you may or may not want people to be able to list the contents of a bucket (folder). If you want the bucket contents to be listed when someone enters the bucket name (http://s3.amazonaws.com/bucket_name/), edit the Access Control List and give the Everyone group the access level of Read (and do likewise for the contents of the bucket). If you don't want the bucket contents listable but do want to share the files within it, disable Read access for the Everyone group on the bucket itself, and then enable Read access on the individual files within the bucket.
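The same ACLs can be applied without Bucket Explorer; here's a minimal sketch with the AWS CLI, where the bucket and key names are placeholders:

aws s3api put-bucket-acl --bucket bucket_name --acl public-read
aws s3api put-object-acl --bucket bucket_name --key docs/report.html --acl public-read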
I created a much simpler solution. Just place the index.html file in the root of your folder and it will do the job. No configuration required. https://github.com/prabhatsharma/s3-directorylisting
I had a similar problem and created a JavaScript-and-iframe solution that works pretty well for listing directories on S3-hosted websites. You just have to drop a couple of .html files into the directory you want to list. You can find it here:
https://github.com/adam-p/s3-file-list-page
I found s3browser, which allowed me to set up a directory on the main website that allows browsing of the S3 bucket. It worked very well and was very easy to set up.
Here's another approach, based on pure JavaScript and the AWS SDK for JavaScript. It doesn't need PHP or any other engine, just a plain web server (Apache or even IIS).
https://github.com/juvs/s3-bucket-browser
It's not intended to be deployed in the bucket itself (to me, that makes no sense).
Using IAM users, you can provide more specific and secure access to your buckets. There's no need to publish your bucket as a website and make everything public.
If you want to secure access, you can use conventional methods to authenticate users on your existing website.
Hope this helps too!
Is there a way to make S3 default to an index.html page? E.g.: My bucket object listing:
/index.html
/favicon.ico
/images/logo.gif
A call to www.example.com/index.html works great! But a call to www.example.com/ returns either a 403 or a REST object-listing XML document, depending on how the bucket-level ACL is configured.
So, the question: Is there a way to have index.html functionality with content hosted on S3?
For people still struggling with this after 3 years, let me add some important information:
The URL for your website (and to which you have to point your DNS) is not
<bucket_name>.s3-us-west-2.amazonaws.com, but
<bucket_name>.s3-website-us-west-2.amazonaws.com.
If you use the first, it will not work as intended, no matter how you configure the Index Document.
For a specific example, consider:
http://www-example-com.s3.amazonaws.com/index.html works.
http://www-example-com.s3.amazonaws.com/ fails with AccessDenied.
http://www-example-com.s3-website-us-west-2.amazonaws.com/ works!
To get your true website address, go to your S3 Management Console, select the target bucket, then Properties, then Static Website Hosting. It will show the website URL that will work.
Amazon S3 now supports Index Documents
The index document for a bucket can be set to something like index.html. When you access the root of the site, or a subdirectory containing a document of that name, that document is returned.
It is extremely easy to do using the AWS CLI:
aws s3 website s3://$MY_BUCKET_NAME --index-document index.html
You can also set the index document from the AWS Management Console, under the bucket's Properties, in the Static Website Hosting section.
You can also solve this with an Amazon CloudFront link: CloudFront lets you modify the root object. You can download the manager here: m1.mycloudbuddy.com/downloads.html.
Since it's been a long time since this question was asked, and Amazon S3 has changed its interface since then, I'd like to answer with the updated steps.
We need to enable 'static website hosting' for the bucket so S3 can serve it as a website.
- Go to Properties -> click on Static website hosting -> select 'Use this bucket to host a website'.
- Enter the index document (index.html by default), the error document and redirection rules, if any.
As noted in this answer on Stack Overflow, the website link would be: http://bucket-name.s3-website-region.amazonaws.com
I would suggest reading this thread from 2006 (on the Amazon Web Services Developer Connection). It seems there's no easy solution to this.
Yes. Using AWS CloudFront lets you assign a default root object.
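If you go the CloudFront route, a minimal sketch with the AWS CLI, assuming a bucket named www-example-com (this creates a new distribution in front of the bucket; the shorthand options generate a basic distribution config):

aws cloudfront create-distribution --origin-domain-name www-example-com.s3.amazonaws.com --default-root-object index.html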
You can do it using DNS web forwarding and cloaking. Just forward to the complete path of the index.html:
www.example.com forwards to http://www.example.com.s3.amazonaws.com/index.html, and make sure you cloak the output.