Hosting multiple sites in one S3 bucket, serving index.html from a directory path - amazon-s3

I'm new to AWS S3. I want to know if it's possible to host multiple static websites in one bucket using the website routing configuration. I am planning to have multiple folders, each with its own index.html, but how can I configure the bucket settings to route to each individual site when a user types the address?
For example, typing
http://<bucket-name>.s3-website-<AWS-region>.amazonaws.com/folder1
would take them to website 1, and
http://<bucket-name>.s3-website-<AWS-region>.amazonaws.com/folder2
would take them to website 2.
If this is possible, is there any way to also achieve the configuration using the AWS CLI?

This is possible with a slight modification to the URL. You need to use the URLs below, with a trailing slash, to serve the index.html document inside folder1 and folder2:
http://<bucket-name>.s3-website-<AWS-region>.amazonaws.com/folder1/
http://<bucket-name>.s3-website-<AWS-region>.amazonaws.com/folder2/
If you create such a folder structure in your bucket, you must have an index document at each level. When a user specifies a URL that resembles a folder lookup, the presence or absence of a trailing slash determines the behavior of the website. For example, the following URL, with a trailing slash, returns the photos/index.html index document.
Reference: Index Document Support
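To answer the CLI part of the question: the same setup can be sketched with the AWS CLI. The bucket name and local folder names below are placeholders, not from the question.

```shell
# Enable static website hosting with index.html as the index document.
# This applies bucket-wide, so folder1/ and folder2/ each get served
# their own index.html when requested with a trailing slash.
aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html

# Upload each site under its own prefix
aws s3 sync ./site1 s3://my-bucket/folder1/
aws s3 sync ./site2 s3://my-bucket/folder2/
```

You would still need a bucket policy granting public read on the objects for the website endpoint to serve them.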

Related

Path based routing between two static websites S3

I have two different react applications in two separate buckets (app1, app2). I would like to route my traffic like this:
www.example.com -> app1 (hosted in bucket app1)
www.example.com/app2 -> app2 (hosted in bucket app2)
I tried to use CloudFront with two origins and two behaviors, but it looks like www.example.com/app2 doesn't work as expected: it looks for a folder "app2" inside my bucket app2 and doesn't fall back to my index.html. I just want to route traffic to different static websites according to the path. Any idea how to do that?
Thanks!
CloudFront can't strip portions of the path by itself (app2 in your case). See the AWS docs.
One option would be to put your app2 files in a folder named app2; that way the app2 behavior will find them.
Another option is to use a CloudFront Function (or Lambda@Edge) to rewrite the URL.
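A rough sketch of the second option, assuming a viewer-request CloudFront Function (the function name and file name are illustrative):

```shell
# Sketch: a viewer-request CloudFront Function that strips the /app2
# prefix before the request is forwarded to the app2 bucket origin.
cat > app2-rewrite.js <<'EOF'
function handler(event) {
    var request = event.request;
    // /app2/static/main.js -> /static/main.js
    request.uri = request.uri.replace(/^\/app2/, '');
    // /app2 or /app2/ -> the SPA entry point
    if (request.uri === '' || request.uri === '/') {
        request.uri = '/index.html';
    }
    return request;
}
EOF

aws cloudfront create-function \
    --name app2-rewrite \
    --function-config Comment="Strip /app2 prefix",Runtime="cloudfront-js-2.0" \
    --function-code fileb://app2-rewrite.js
```

After creating it, you publish the function and attach it to the /app2/* cache behavior as a viewer-request association (via the console or an update-distribution call).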

How to serve static files from S3 using CloudFront

I have a CloudFront distribution in front of 2 origins:
S3
API Gateway (Lambda)
I want all the static files to be served from S3, and the rest from API Gateway.
FYI I'm trying to reproduce a classic PHP setup with static files served by Nginx and the rest served by PHP through PHP-FPM.
How can I achieve that?
What I am currently doing is this:
It works, but it clearly sucks because I have to add all the static file extensions manually. Is there a way to match all static files? Or to check if a file exists in S3 and serve it from there?
Option 1. Let the default pattern be the bucket, and create a cache behavior with the path pattern for the API, like /api/*. Possibly not practical here.
Option 2. Match the dot before the extension to send requests for files to S3, using patterns like /*.?? and /*.???. The ? placeholder matches exactly one character, and without a * at the end there must be a dot within that many characters of the end of the path.
Option 3. Match a prefix like /assets/* and send all of these requests to the bucket. Store all your objects with assets/ at the beginning of the object key.
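Option 3 can be sketched with the AWS CLI. The bucket name and local path below are placeholders.

```shell
# Keep every static file under the assets/ prefix so a single
# /assets/* cache behavior can route all of them to the S3 origin.
aws s3 sync ./build/static s3://my-bucket/assets/

# Everything else (no /assets/ prefix) falls through to the default
# cache behavior, which points at the API Gateway origin.
```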

Restrict direct folder access via .htaccess except via specific links

I want to restrict access to a folder on my server so that visitors may only access the contents (a web application) via links in the same domain. Can I do this using .htaccess? To be clear, I simply want to prevent direct access to the contents so that visitors are routed through other pages on my website in order to get there.
It sounds to me like you want URL rewriting. Rewrite all URLs to point to a single entry point (i.e. the index page), and pass the route as a GET variable (e.g. index.php?r=css/file.css).
This way, you have complete control over what goes where, and you can include or redirect your users accordingly.
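A rough sketch of that approach in .htaccess, assuming index.php is the front controller (the file names are illustrative, not from the question):

```apache
# Route every request except the front controller itself through
# index.php, passing the original path as the GET variable "r".
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/index\.php
RewriteRule ^(.*)$ index.php?r=$1 [L,QSA]
```

index.php can then inspect $_GET['r'] and decide whether to serve the resource, redirect, or deny.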

Cannot seem to write to S3 bucket set up as website

I'm trying to do the following scenario:
bucket set up called share.example.com.
My website will be writing to this bucket - dynamically (User generated content).
I want the bucket to be a website, with index.html as the default document.
So a user does something on my site, app then writes to
share.example.com/foo/bar1/index.html
I then want the user to be able to browse to
http://share.example.com/foo/bar1/ (<-- note, no index.html)
I thought this would be trivial:
set up bucket as website with index.html as default doc
only I have permissions to write to the bucket
set up a bucket policy to allow anonymous read
create CNAME for share.example.com to share.example.com.s3-website-us-east-1.amazonaws.com
However, a problem.
In the above configuration, when I try the write, I get a 405.
If I change the CNAME to point to share.example.com.s3.amazonaws.com, the write succeeds, but now the website won't work as required.
What is the solution here?
Thanks a stack for any help.
The trick was to not use the vanity domain for the writing aspect.
That is, instead of trying to do the PUT to share.example.com/foo/bar1, do it to s3.amazonaws.com/share.example.com/foo/bar1.
The CNAME to share.example.com can then point to share.example.com.s3-website-us-east-1.amazonaws.com and it all works.
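In CLI terms (the paths come from the question; the command itself is a sketch):

```shell
# The AWS CLI signs requests against the S3 REST endpoint
# (s3.amazonaws.com), not the website endpoint behind the CNAME,
# so the PUT succeeds even though the website endpoint is read-only.
aws s3 cp index.html s3://share.example.com/foo/bar1/index.html

# Users can then browse the website endpoint via the CNAME:
#   http://share.example.com/foo/bar1/
```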

Map multiple subdomains to same S3-bucket

Is there some way to map multiple (thousands) of subdomains to one s3-bucket?
If so is it also possible to map it to a specific path in the bucket for each subdomain?
I want test1.example.com to map to mybucket/test1 and test2.example.com to map to mybucket/test2.
I know the last part isn't possible with normal dns-records but maybe there is some nifty Route 53 feature?
It's not possible with S3 directly: you can only map one domain name to an S3 bucket, because the bucket name has to match the host name.
However you can map multiple subdomains to a Cloudfront distribution.
Update (thanks to @SimonHutchison's comment below)
You can now map up to 100 alternate domains to a CloudFront distribution - see http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_cloudfront
You can also use a wildcard to map any subdomain to your distribution:
Using the * Wildcard in Alternate Domain Names
When you add alternate domain names, you can use the * wildcard at the beginning of a domain name instead of specifying subdomains individually. For example, with an alternate domain name of *.example.com, you can use any domain name that ends with example.com in your object URLs, such as www.example.com, product-name.example.com, and marketing.product-name.example.com. The name of an object is the same regardless of the domain name, for example:
www.example.com/images/image.jpg
product-name.example.com/images/image.jpg
marketing.product-name.example.com/images/image.jpg
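Pointing all those subdomains at the distribution takes a single wildcard record in Route 53. A sketch with the AWS CLI, where the hosted-zone ID and distribution domain are placeholders:

```shell
# One wildcard CNAME maps every subdomain of example.com
# to the CloudFront distribution.
aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "*.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{"Value": "d1234abcd.cloudfront.net"}]
        }
      }]
    }'
```

Mapping each subdomain to a different path inside the bucket still needs request-time logic (e.g. a CloudFront edge function that turns the Host header into a key prefix), since DNS alone can't do it.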
In October 2012 Amazon introduced a feature to handle redirects (HTTP 301) for S3 buckets. You can read the release notes here and refer to this link for configuration via the Console / API.
From the AWS S3 docs:
Redirect all requests: If your root domain is example.com and you want to serve requests for both http://example.com and http://www.example.com, you can create two buckets named example.com and www.example.com, maintain website content in only one bucket, say, example.com, and configure the other bucket to redirect all requests to the example.com bucket.
Advanced conditional redirects: You can conditionally route requests according to specific object key names or prefixes in the request, or according to the response code. For example, suppose that you delete or rename an object in your bucket. You can add a routing rule that redirects the request to another object. Suppose that you want to make a folder unavailable. You can add a routing rule to redirect the request to another page, which explains why the folder is no longer available. You can also add a routing rule to handle an error condition by routing requests that return the error to another domain, where the error will be processed.
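The conditional-redirect configuration can also be applied from the CLI. A sketch, where the bucket name and key prefixes are illustrative:

```shell
# Redirect any request under the retired docs/ prefix to the
# documents/ prefix via a website-configuration routing rule.
aws s3api put-bucket-website --bucket example.com --website-configuration '{
  "IndexDocument": {"Suffix": "index.html"},
  "RoutingRules": [{
    "Condition": {"KeyPrefixEquals": "docs/"},
    "Redirect": {"ReplaceKeyPrefixWith": "documents/"}
  }]
}'
```

The website endpoint then answers requests for docs/page.html with a 301 to documents/page.html.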