Security group for S3-hosted website making HTTP requests - amazon-s3

Let’s assume that I have an S3-hosted website. Aside from that, I have an EC2 instance that is supposed to receive HTTP requests from that website. Is there a way to set up a security group so that the EC2 instance can only receive HTTP requests from that website? I know that if the website were hosted on another EC2 instance I could do this via the IP address or a load balancer; I’m just not sure how to go about it in the S3 website case.

When you launch a website on S3, all you have is static front-end content being served (just like having a pure HTML/CSS/JavaScript website with no web server on your local machine). That means all the calls, XHRs or embedded resources pointing to your EC2 instance are requests generated by the visitors' browsers, with their own IPs as the network source (and "S3", or CloudFront if you place CloudFront in front of S3, as the Origin in the HTTP headers), and with your EC2 instance as the destination (where your web server listens on port 80 or 443). There is no security group that can be applied to the bucket. However, S3 buckets can be configured with a bucket policy to whitelist certain IP addresses for access to the bucket content, and consequently to the static web content hosted on it. You can also enforce a CORS policy and use conditions to check Referers and Origins.
Putting aside the bucket-level policy, IP whitelisting, CORS and condition restrictions, if you serve your S3 web bucket through a CloudFront distribution you can apply geo-restriction rules at the CloudFront level as well.
In case you have, say, an API server on EC2 which is going to be called from pages served under your CloudFront domain, you can apply access control at both the CloudFront and the EC2 web-server level to enforce tightened CORS policies, i.e. other websites on the internet cannot hijack your API service or mount CSRF attacks (again, as a browser-level protection only).
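For the bucket policy / IP whitelisting option, a minimal sketch with boto3 might look like the following (the bucket name and CIDR range are hypothetical). Note that this restricts who can download the static site from S3, not the requests visitors' browsers later make to the EC2 instance:

```python
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket name and CIDR range - replace with your own values.
bucket = "my-static-site-bucket"
allowed_cidrs = ["203.0.113.0/24"]

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadFromListedIPsOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # aws:SourceIp limits who can fetch the static content from the
            # bucket; it cannot limit the browser-to-EC2 traffic described above.
            "Condition": {"IpAddress": {"aws:SourceIp": allowed_cidrs}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```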

Related

Google Cloud Platform serving static sites over HTTPS?

I have multiple one-page apps (static sites) in buckets on Google Cloud Storage.
Each app can access the information it needs from one API running on Google App Engine.
I can serve the one-page apps by pointing the CNAME of each domain to c.storage.googleapis.com, but that only serves them over HTTP, not HTTPS.
My question is:
1) Why does Google Cloud Storage not serve the contents of buckets via HTTPS if I use a custom domain?
2) How can I serve content on Google Cloud Storage via HTTPS?
NOTE: From my basic understanding of Google load balancers, I can serve the content of buckets via HTTPS if I point the domain to the load balancer, but then I would need a load balancer for each app. Those load balancers are too expensive. Is it possible to have one load balancer for all apps?
You don't need a load balancer for each app. You can add multiple backends to a single load balancer, and each backend can be connected to a separate storage bucket (one per app). You can then add a hostname mapping on the load balancer per application, which will proxy requests to the correct backend bucket based on the Host header of the request. You can also add path mappings to these rules if necessary.
You can achieve this with only one HTTPS load balancer. Create the LB and add each storage bucket as a backend bucket in the load balancer. Don't forget to create your buckets with DNS names (e.g. bucket1.mycompany.com, bucket2.mycompany.com, etc.). Add a wildcard A record in your DNS zone pointing to the external IP of the LB.
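As a rough sketch of that setup using the google-cloud-compute Python client (the project, bucket and resource names are hypothetical, and the SSL certificate, target HTTPS proxy, forwarding rule and static IP that complete the HTTPS load balancer are omitted):

```python
from google.cloud import compute_v1

# Hypothetical project and app-to-bucket mapping.
project = "my-gcp-project"
apps = {"app1.mycompany.com": "bucket-app1", "app2.mycompany.com": "bucket-app2"}

backend_client = compute_v1.BackendBucketsClient()
urlmap_client = compute_v1.UrlMapsClient()

host_rules, path_matchers = [], []
for i, (hostname, bucket) in enumerate(apps.items()):
    # One backend bucket per app, all attached to the same load balancer.
    backend = compute_v1.BackendBucket(name=f"be-{i}", bucket_name=bucket)
    backend_client.insert(project=project, backend_bucket_resource=backend).result()

    backend_link = f"projects/{project}/global/backendBuckets/be-{i}"
    path_matchers.append(
        compute_v1.PathMatcher(name=f"pm-{i}", default_service=backend_link)
    )
    # Host-based rule: requests for this hostname go to this app's bucket.
    host_rules.append(compute_v1.HostRule(hosts=[hostname], path_matcher=f"pm-{i}"))

url_map = compute_v1.UrlMap(
    name="static-sites-url-map",
    default_service=f"projects/{project}/global/backendBuckets/be-0",
    host_rules=host_rules,
    path_matchers=path_matchers,
)
urlmap_client.insert(project=project, url_map_resource=url_map).result()
```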
This maybe isn't the answer you are looking for, but I recommend Firebase Hosting (https://firebase.google.com/docs/hosting) for hosting single-page sites (React, Vue, etc.) on GCP.

How do I limit the inbound security group on a Rails app on EBS so that requests only come from my app (on S3), and prevent other servers from hitting my APIs?

My Rails backend (api.mydomain.com) is hosted in EBS. The EC2 hosts have a VPC security group. The VPC security group's inbound rules only allow the corresponding load balancer security group on HTTP. The load balancer security group allows 0.0.0.0/0 on both HTTP and HTTPS. I would like to restrict API calls that hit my Rails backend to only come from my Angular app hosted in S3 (mydomain.com). Is this possible?
I want to prevent other servers from hitting my APIs.
It's not related to your AWS security settings; you should configure CORS for your backend API so that it only accepts requests from your domain.
Understanding CORS
The same-origin policy is an important security concept implemented by web browsers to prevent Javascript code from making requests against a different origin (e.g., different domain) than the one from which it was served. Although the same-origin policy is effective in preventing resources from different origins, it also prevents legitimate interactions between a server and clients of a known and trusted origin.
Cross-Origin Resource Sharing (CORS) is a technique for relaxing the same-origin policy, allowing Javascript on a web page to consume a REST API served from a different origin.
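To make the mechanics concrete, here is a minimal, hypothetical sketch of the response headers a backend must send for the browser to let a page from another origin read the response. It uses Python/Flask purely for illustration; in a Rails API the same rule is normally expressed with the rack-cors gem covered in the links below:

```python
from flask import Flask, request

app = Flask(__name__)

# Hypothetical allowed origin - the S3-hosted app at https://mydomain.com.
ALLOWED_ORIGIN = "https://mydomain.com"


@app.after_request
def add_cors_headers(response):
    origin = request.headers.get("Origin")
    if origin == ALLOWED_ORIGIN:
        # The browser only exposes the response to the calling page when this
        # header matches the page's origin. Other servers are not affected:
        # CORS is enforced by browsers, not by the network.
        response.headers["Access-Control-Allow-Origin"] = origin
        response.headers["Vary"] = "Origin"
        response.headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
        response.headers["Access-Control-Allow-Headers"] = "Content-Type, Authorization"
    return response


@app.route("/api/items")
def items():
    return {"items": []}
```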
For more information read the following documents:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
https://www.html5rocks.com/en/tutorials/cors/
https://en.wikipedia.org/wiki/Cross-origin_resource_sharing
and these docs about CORS and Rails
https://demisx.github.io/rails-api/2014/02/18/configure-accept-headers-cors.html
https://til.hashrocket.com/posts/4d7f12b213-rails-5-api-and-cors
How to enable CORS in Rails 4 App

S3 hosted website Cloudfront distribution and API Gateway custom domain pointing to the same subdomain

I have a subdomain for my website named api.example.com and I want to achieve the following:
have 1 CloudFront distribution for an S3 static website mapped to api.example.com
have the API Gateway custom domain name mapped to the same subdomain api.example.com
The steps I did to achieve this setup are:
Create an API Gateway custom domain api.example.com and set the base mappings for the APIs I want to expose as v1 (version 1 for now)
In Route 53 I created a CNAME record api.example.com pointing to the edge-optimized target domain name of the API Gateway from step 1
Note: at this point I get, as expected, the 200 response from https://api.example.com/v1
I created an S3 bucket and set it up for Static website hosting. All files uploaded successfully and working.
I created a new CloudFront distribution with the origin set to the S3 bucket. At this point, for this CloudFront distribution, I cannot set the CNAME to api.example.com because it is already used by the custom domain name set in API Gateway in step 1, and AWS gives a CNAMEAlreadyExistsException - so I leave this field empty. Accessing the CloudFront distribution for the S3 bucket works as expected.
Under the CloudFront distribution generated for the S3 bucket I add another origin (the API Gateway custom domain name) and create the behavior rule to route the v1/* calls to the API Gateway custom domain name.
At this point, things are not falling into place anymore:
- when accessing https://api.example.com I get {"message": "Forbidden"} from the API Gateway distribution. However, the URL https://api.example.com/v1 still returns the expected result.
Question: Is there anything I missed setting so that the URL https://api.example.com returns the content of the S3 static website?
Note: also, because I have an empty CNAME field on the S3 bucket's CloudFront distribution while a CNAME is defined in Route 53 for the same CloudFront distribution, I get a warning message saying that this situation may expose me to a vulnerability.
For the use case mentioned, you only need one CloudFront distribution (mapped to api.example.com), which can forward traffic to both S3 and API Gateway (added as two origins of the same distribution) using different behavior configurations. You can configure the behaviors so that /v1/* traffic is routed to API Gateway and all other traffic to S3.
When setting up the origins and behaviors, there are a few configurations you need to follow (a sketch of the API Gateway behavior follows this list).
Make sure both the S3 and API Gateway behaviors redirect HTTP to HTTPS.
When adding the API Gateway origin, set it to forward HTTPS traffic only.
In the API Gateway behavior, whitelist the Accept-* headers, Authorization, Origin and Referer, and make sure you do not whitelist the Host header.
In both origins, don't add any origin path.
For the API Gateway behavior, set the TTLs to 0 and allow all the methods (GET, POST, etc.)
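As a rough illustration of those behavior settings, this is roughly what the API Gateway cache behavior could look like as a fragment of a boto3 DistributionConfig (the origin ID "apigw-v1" is a made-up name, and the fragment would still have to be merged into a complete distribution configuration passed to cloudfront.create_distribution):

```python
# Sketch of the /v1/* cache behavior only, not a full DistributionConfig.
api_gateway_behavior = {
    "PathPattern": "v1/*",
    "TargetOriginId": "apigw-v1",  # hypothetical ID of the API Gateway origin
    # Redirect plain HTTP viewers to HTTPS.
    "ViewerProtocolPolicy": "redirect-to-https",
    # Allow all methods so POST/PUT/DELETE API calls pass through.
    "AllowedMethods": {
        "Quantity": 7,
        "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
        "CachedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
    },
    # Effectively disable caching for the API.
    "MinTTL": 0,
    "DefaultTTL": 0,
    "MaxTTL": 0,
    "ForwardedValues": {
        "QueryString": True,
        "Cookies": {"Forward": "all"},
        # Whitelisted headers; Host is deliberately not included.
        "Headers": {
            "Quantity": 4,
            "Items": ["Accept", "Authorization", "Origin", "Referer"],
        },
    },
}
```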

Amazon ACM certificate issued and in use, but website still serves HTTP

I have hosted my static website (built with Angular 5) in an S3 bucket and mapped it to a custom domain using Route 53. I want SSL/TLS (HTTPS) for my site, so I used ACM to generate the certificate and attached it to my site using CloudFront. The ACM certificate's status is "Issued" and it says it's in use, but my website is still not served over HTTPS.
Everything is hosted in us-east-1, and I am accessing my site from East Asia. Is this an issue?
Am I missing something?
The ACM certificate for CloudFront should have been generated in the N. Virginia (us-east-1) region. Then you should be able to assign it to your CloudFront distribution.
In your CloudFront distribution Origin, you should set the "Origin Protocol Policy" parameter to "HTTPS Only" if you want to use HTTPS between CloudFront and your S3 bucket.
In your CloudFront distribution Cache Behavior, you should set the "Viewer Protocol Policy" parameter to "Redirect HTTP to HTTPS" so that every HTTP communication between the clients and your CloudFront distribution is redirected to use HTTPS.
Then you would have to change your DNS record to point to the CloudFront distribution CNAME.
Additionally, you could configure your CloudFront distribution and your S3 bucket to restrict direct client access to the S3 bucket, so that every request goes through your CloudFront distribution.
Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content
Typically, if you're using an Amazon S3 bucket as the origin for a CloudFront distribution, you grant everyone permission to read the objects in your bucket. This allows anyone to access your objects either through CloudFront or using the Amazon S3 URL. CloudFront doesn't expose Amazon S3 URLs, but your users might have those URLs if your application serves any objects directly from Amazon S3 or if anyone gives out direct links to specific objects in Amazon S3.
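As a rough sketch of that restriction, this is the kind of bucket policy that grants read access only to a CloudFront origin access identity (the bucket name and OAI ID are placeholders):

```python
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical values - the OAI is created on the CloudFront side and attached
# to the S3 origin of the distribution.
bucket = "my-angular-site-bucket"
oai_id = "E1EXAMPLE23456"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

# With this policy in place (and public access otherwise blocked), objects can
# only be fetched through the CloudFront distribution, not via the raw S3 URL.
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```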

HTTPS for GoDaddy - AWS S3 static website

I want to host an HTTPS-only static website using Amazon S3 and GoDaddy. Here's what I've done so far:
a) Created S3 static website at Amazon
b) Added the S3 static website URL to GoDaddy's CNAME records:
CNAME **Name**: *www*, **Host**: *mywebsite.com.s3-website.ap-south-1.amazonaws.com*
c) Then in the domain **Forwarding** section I added http://mywebsite.com
Now I can access my website at http://mywebsite.com. However, I want the site to be available via HTTPS only, for which I bought an SSL certificate from GoDaddy and set it up on GoDaddy.
Now, the question is:
Is there a way to have an automatic http to https redirect with this setup?
You need to use CloudFront to serve HTTPS requests for an Amazon S3 bucket. Amazon published Key Differences Between the Amazon Website and the REST API Endpoint to highlight that website hosting on S3 does not support SSL connections.
However, this tutorial by Amazon will walk you through creating a CloudFront distribution to serve S3 content using HTTPS.
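For illustration, here is a hedged boto3 sketch of such a distribution: the S3 website endpoint as a custom origin, the viewer protocol policy set to redirect HTTP to HTTPS, and an ACM certificate in us-east-1 (either issued by ACM or your GoDaddy certificate imported into ACM). The domain names and ARN are placeholders; after creating the distribution you would point the GoDaddy CNAME at its *.cloudfront.net domain name instead of the S3 website endpoint:

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# Hypothetical values - replace with your own website endpoint, domain and ARN.
website_endpoint = "mywebsite.com.s3-website.ap-south-1.amazonaws.com"
acm_certificate_arn = "arn:aws:acm:us-east-1:123456789012:certificate/example"

distribution_config = {
    "CallerReference": str(time.time()),
    "Comment": "HTTPS front for the S3 website endpoint",
    "Enabled": True,
    "Aliases": {"Quantity": 1, "Items": ["www.mywebsite.com"]},
    "Origins": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "s3-website",
                "DomainName": website_endpoint,
                # S3 website endpoints only speak HTTP, so CloudFront must
                # connect to the origin over HTTP.
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "http-only",
                },
            }
        ],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-website",
        # This is what gives you the automatic HTTP-to-HTTPS redirect.
        "ViewerProtocolPolicy": "redirect-to-https",
        "MinTTL": 0,
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
    },
    "ViewerCertificate": {
        "ACMCertificateArn": acm_certificate_arn,
        "SSLSupportMethod": "sni-only",
        "MinimumProtocolVersion": "TLSv1.2_2021",
    },
}

response = cloudfront.create_distribution(DistributionConfig=distribution_config)
print(response["Distribution"]["DomainName"])  # point the GoDaddy CNAME here
```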