I am considering moving one of my very static sites to use Amazon's Simple Storage Service. I have read a few articles describing how I can load the files in and set things up so that http://www.example.com/ is directed at those files, but is there a way I can ensure that people who go to http://example.com/ get 301'd to http://www.example.com/?
Just a heads up for those who find this: S3 now supports serving the root (apex) domain directly. You no longer have to do a redirect.
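If you still want requests for the bare domain to 301 to the www host, a second bucket named for the bare domain can be configured to redirect everything. A minimal sketch using the AWS CLI (bucket and host names are placeholders):

# make the bare-domain bucket answer every request with a redirect to the www host
aws s3api put-bucket-website --bucket example.com \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"www.example.com"}}'

Then point example.com's DNS at that bucket's website endpoint (with Route 53, an alias record works for the apex).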
Related
I'm trying to migrate a CDN from Rackspace to AWS.
In the former, everything is mapped to individual containers via CNAME records like so:
container1 = CNAME = container1.cdndomain
container2 = CNAME = container2.cdndomain
container3 = CNAME = container3.cdndomain
When we set up AWS, everything I read said to set up one (and only one) CloudFront distribution, with various buckets. So that's what I did.
Now I'm trying to somehow remap all of those containers into their new AWS home and corresponding 'buckets', but the single CloudFront distribution is making it hard for me.
I'd rather not go through thousands of lines of code and config files and change all the current URLs (e.g. manually change container1.cdndomain to cdndomain/container1).
But I can't find a way to remap
this: http://bgimgs.cdndomain/image
to its AWS counterpart
here: http://cdndomain/bucket/image
We use Zerigo for DNS and the interface will accept this CNAME path:
container.domain = CNAME = cdndomain/bucket
but AWS doesn't route that to the correct bucket.
I've tried an .htaccess solution
RewriteEngine On
RewriteCond %{HTTP_HOST} ^container1\.cdndomain(.*)$ [NC]
RewriteRule ^(.*)$ http://cdndomain/container1/$1 [L,R=301]
But that's not working either.
Any ideas?
I don't know where the advice came from to only use one CloudFront distribution with multiple S3 buckets. Sure, you can, but if it doesn't match your needs, there is no reason why you should.
Just create multiple distributions.
If you are already accessing them with different hostnames, then they're different collections of objects, and I can't actually think of any reason such advice would be applicable in your case. The only reason that comes to mind for using just one CloudFront distro with multiple buckets would be to store all the content (from the different buckets) behind one hostname, or behind multiple hostnames with the same paths used at each hostname (a hack to get browsers to load assets faster is to convince the browser that the assets are coming from different hosts, so the browser will open more parallel connections).
In your case, this doesn't seem like what you need at all. Distributions, themselves, are free. Problem solved.
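As a rough sketch of that approach (bucket names and the distribution domain below are placeholders), each bucket gets its own distribution, and each old hostname then becomes a CNAME to that distribution:

# one distribution per bucket (shorthand flags; attaching the
# container1.cdndomain alternate domain name requires the full JSON config)
aws cloudfront create-distribution \
  --origin-domain-name container1-bucket.s3.amazonaws.com

# then, in DNS:
# container1.cdndomain  CNAME  d111111abcdef8.cloudfront.net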
Regarding DNS, the other vendor may accept that configuration, but that doesn't mean it's a valid configuration. It isn't. You can't change paths with DNS.
Regarding .htaccess, S3 doesn't process .htaccess files. If you were creating them on your web server, that would be a no-go, too, since the web server would not see the requests in order to redirect them.
I have an object which I would like to address using different keys without actually copying the object itself, like a symlink in Linux. Does Amazon S3 provide such a thing?
S3 does not support the notion of a symlink, where one object key is treated as an alias for a different object key. (You've probably heard this before: S3 is not a filesystem. It's an object store).
If you are using the static web site hosting feature, there is a partial emulation of this capability, with object-level redirects:
http://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html
This causes requests for "object-a" to be greeted with a 301 Moved Permanently response, with the URL for "object-b" in the Location: header, which serves a similar purpose, but is of course still quite different. It only works if the request arrives at the website endpoint (not the REST endpoint).
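For example, the redirect can be attached when the placeholder object is uploaded (a sketch; bucket and key names are hypothetical):

# upload "object-a" so that the website endpoint 301s requests for it to /object-b
aws s3api put-object --bucket my-bucket --key object-a \
  --website-redirect-location /object-b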
If you use a reverse proxy (haproxy, nginx, etc.) in EC2 to handle incoming requests and forward them to the bucket, then of course you have the option at the proxy layer of rewriting the request URL before forwarding to S3, so you could translate the incoming request path to whatever you needed to present to S3. How practical this is depends on your application and motivation, but this is one of the strategies I use to modify where, in a particular bucket, an object appears, compared to where it is actually stored, allowing me to rewrite paths based on other attributes in the request.
I had a similar question and needed a solution, which I describe below. While S3 does not support symlinks, you can do this in a way with the following:
echo "https://s3.amazonaws.com/my.bucket.name/path/to/a/targetfile" > file
aws s3 cp file s3://my.bucket.name/file
wget $(curl https://s3.amazonaws.com/my.bucket.name/file)
What this is actually doing is fetching the contents of the file, which is really just a pointer to the target file, then passing that URL to wget (curl, with its output redirected to a file, can be used instead of wget).
This is really just a workaround, though: it's not a true symlink, but rather a creative way to simulate one.
Symlinks no, but same object to multiple keys, maybe.
Please refer to Rodrigo's answer at Amazon S3 - Multiple keys to one object
If you're using the website serving on S3, you can do it via the x-amz-website-redirect-location header.
If you're not using the website serving, you can create a custom metadata header (x-amz-meta-KeyAlias) and handle it manually.
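A sketch of that manual approach (bucket and key names are placeholders): the alias object carries the real key in its metadata, and the client reads it and issues a second request itself.

# create an "alias" object whose metadata points at the real key
aws s3api put-object --bucket my-bucket --key alias-key \
  --metadata KeyAlias=real/target-key

# a client reads the alias, then fetches the real key on its own
aws s3api head-object --bucket my-bucket --key alias-key \
  --query 'Metadata.keyalias' --output text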
I've read a lot of articles stating that I should be using Amazon S3 in conjunction with the CDN Cloudfront. I'm currently not doing this. I'm simply using Cloudfront with my standard shared hosting package.
Is it OK to use Cloudfront on its own with my standard shared hosting package? Surely there is no added benefit to using S3 also as the files are already located within Cloudfront.
Any enlightenment on this is much appreciated.
Leigh
S3 allows you to do things like static web hosting, with logging and redirection, e.g. www.example.com redirects to example.com. You can then use CloudFront to place your assets as close to the end user as possible ("nearest edge location"). An excellent guide on how to do this is in the AWS docs. Two main points are that S3 supports HTTPS, and that changes to files in S3 are reflected instantly. Because CloudFront is a CDN, you have to manually expire files if you change them, otherwise it could take up to 24 hours for your changes to show up.
http://docs.aws.amazon.com/gettingstarted/latest/swh/website-hosting-intro.html
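If you do change an object that CloudFront has already cached and don't want to wait for it to expire, you can issue an invalidation (a sketch; the distribution ID and path are placeholders):

# force CloudFront to drop its cached copy of /index.html
aws cloudfront create-invalidation --distribution-id E1EXAMPLE12345 \
  --paths "/index.html"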
A quick comparison between the two is given here:
http://www.bucketexplorer.com/documentation/cloudfront--amazon-s3-vs-amazon-cloudfront.html
There is no problem using CloudFront with your own origin server rather than an S3 origin.
There are some benefits of using S3:
Data transfer is faster between S3 and CloudFront
No need to worry about the stability and maintenance of the origin server
Multiple origin regions
There are also benefits if you use your own server:
Cost savings compared to S3 hosting (this depends on whether you need to pay for your own server)
Easy for customization should you need it
Data storage location for company/country regulation
So it all depends on your specific circumstances, such as how much you pay for your hosting package, whether you need low-level configuration of your origin server, and how sensitive your data is.
I would say that for the majority of small/medium projects, S3 is a perfect place to store data.
I would like to redirect a link like www.example.com/test to a bucket called test.example.com on S3. As far as I know, Apache has a file called .htaccess that does the trick. I'm able to redirect from test.example.com to a bucket on S3, but I don't know how to do it with the deep link thing. Is that possible?
It seems there is no way to accomplish that. I have seen many posts describing Amazon S3 as a "very simple container". I solved my problem by creating a folder named test in the main bucket (www.example.com) with an HTML page redirecting to the actual bucket test.example.com. The best solution is to move to an Apache-based hosting service, but for the moment I want to keep it simple.
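A sketch of that workaround (bucket names are placeholders, and the page body is just a meta refresh):

# hypothetical redirect page pointing at the other bucket's website address
cat > index.html <<'EOF'
<meta http-equiv="refresh" content="0; url=http://test.example.com/">
EOF

# publish it under the test/ prefix of the main bucket
aws s3 cp index.html s3://www.example.com/test/index.html --content-type text/html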
Is there a way to make S3 default to an index.html page? E.g.: My bucket object listing:
/index.html
/favicon.ico
/images/logo.gif
A call to www.example.com/index.html works great! But if one were to call www.example.com/ we'd either get a 403 or a REST object listing XML document depending on how bucket-level ACL was configured.
So, the question: Is there a way to have index.html functionality with content hosted on S3?
For people still struggling with this after 3 years, let me add some important information:
The URL for your website (and to which you have to point your DNS) is not
<bucket_name>.s3-us-west-2.amazonaws.com, but
<bucket_name>.s3-website-us-west-2.amazonaws.com.
If you use the first, it will not work as intended, no matter how you configure the index document.
For a specific example, consider:
http://www-example-com.s3.amazonaws.com/index.html works.
http://www-example-com.s3.amazonaws.com/ fails with AccessDenied.
http://www-example-com.s3-website-us-west-2.amazonaws.com/ works!
To get your true website address, go to your S3 Management Console, select the target bucket, then Properties, then Static Website Hosting. It will show the website URL that will work.
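You can see the difference from the command line (a sketch; the bucket name and region are from the example above):

# REST endpoint: the bucket root returns 403 (or an XML key listing, depending on ACLs)
curl -I http://www-example-com.s3.amazonaws.com/

# website endpoint: the root serves the configured index document
curl -I http://www-example-com.s3-website-us-west-2.amazonaws.com/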
Amazon S3 now supports Index Documents
The index document for a bucket can be set to something like index.html. When accessing the root of the site, or a sub-directory containing a document of that name, that document is returned.
It is extremely easy to do using the AWS CLI:
aws s3 website s3://$MY_BUCKET_NAME --index-document index.html
You can also set the index document from the AWS Management Console, under the bucket's Properties.
You can also solve it with Amazon CloudFront, which lets you set a default root object for the distribution. A download manager is available here: m1.mycloudbuddy.com/downloads.html.
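For reference, the default root object can also be set when the distribution is created from the CLI (a sketch; the bucket name is a placeholder):

aws cloudfront create-distribution \
  --origin-domain-name my-bucket.s3.amazonaws.com \
  --default-root-object index.html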
Since it's been a long time since this question was asked, and Amazon S3 has changed its interface, I would like to answer with updated screenshots.
We need to enable 'static website hosting' for S3 to serve the bucket as a website.
- Go to Properties -> click on static web hosting -> Select 'use this bucket to host a website'
- Enter the index document (index.html by default), error document and redirection rules, if any.
As noted in this answer on Stack Overflow, the web hosting link would be: http://bucket-name.s3-website-region.amazonaws.com
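The same settings can also be applied without the console (a sketch; bucket and document names are placeholders):

aws s3api put-bucket-website --bucket my-bucket-name \
  --website-configuration '{"IndexDocument":{"Suffix":"index.html"},"ErrorDocument":{"Key":"error.html"}}'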
I would suggest reading this thread from 2006 (On Amazon web services developers connection). It seems there's no easy solution to this.
Yes. Using AWS CloudFront lets you assign a default file.
You can do it using DNS web forwarding and cloaking. Just forward to the complete path of the index.html:
www.example.com forwards to http://www.example.com.s3.amazonaws.com, and make sure you cloak the output.