I have a problem with how S3 and CloudFront handle cookies.
My client web app is hosted on S3 behind CloudFront, and the server runs on Elastic Beanstalk (Node.js + Express).
I need to send a JWT from the server to the client through a cookie (code: res.cookie(token)).
It all works locally, but after I deploy to AWS I cannot get any cookie from the server. The AWS documentation says:
"Amazon S3 and some HTTP servers don’t process cookies. Don’t
configure CloudFront to forward cookies to an origin that doesn’t
process cookies, or you’ll adversely affect cacheability and,
therefore, performance."
Is there a possible solution by revising the CloudFront cookie settings, or do I need to change my server code so that it doesn't send the token in a cookie?
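For context, here is a minimal sketch of the kind of Express code the question describes. The /login route, the cookie name, and the use of the jsonwebtoken package are assumptions for illustration; the cookie attributes shown are the ones that usually matter when the front end (S3/CloudFront) and the API (Beanstalk) live on different domains.

    const express = require("express");
    const jwt = require("jsonwebtoken"); // hypothetical choice of JWT library

    const app = express();

    app.post("/login", (req, res) => {
      // Placeholder payload and secret, not the asker's actual values.
      const token = jwt.sign({ sub: "user-id" }, process.env.JWT_SECRET);

      // res.cookie() takes a name, a value, and options. For a cross-site setup
      // the cookie generally needs SameSite=None and Secure, the API's CORS
      // configuration must allow credentials from the front end's origin, and
      // the browser request must be sent with credentials included.
      res.cookie("token", token, {
        httpOnly: true,
        secure: true,
        sameSite: "none",
      });
      res.sendStatus(200);
    });

    app.listen(3000);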
Related
My Rails backend (api.mydomain.com) is hosted in EBS. The EC2 hosts have a VPC security group. The VPC security group's inbound rules only allow the corresponding load balancer security group on HTTP. The load balancer security group allows 0.0.0.0/0 on both HTTP and HTTPS. I would like to restrict API calls that hit my Rails backend to only come from my Angular app hosted in S3 (mydomain.com). Is this possible?
I want to prevent other servers from hitting my APIs.
It's not really about your AWS security settings; you should configure CORS for your backend API so that it only accepts requests from your domain (see the sketch after the links below).
Understanding CORS
The same-origin policy is an important security concept implemented by web browsers to prevent Javascript code from making requests against a different origin (e.g., different domain) than the one from which it was served. Although the same-origin policy is effective in preventing resources from different origins, it also prevents legitimate interactions between a server and clients of a known and trusted origin.
Cross-Origin Resource Sharing (CORS) is a technique for relaxing the same-origin policy, allowing Javascript on a web page to consume a REST API served from a different origin.
For more information read the following documents:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
https://www.html5rocks.com/en/tutorials/cors/
https://en.wikipedia.org/wiki/Cross-origin_resource_sharing
and these docs about CORS and Rails
https://demisx.github.io/rails-api/2014/02/18/configure-accept-headers-cors.html
https://til.hashrocket.com/posts/4d7f12b213-rails-5-api-and-cors
How to enable CORS in Rails 4 App
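As a rough illustration of the response headers that CORS relies on, here is a hand-rolled Express middleware; the allowed origin is a placeholder, and a Rails API would typically set the same headers through the rack-cors gem covered in the links above.

    const express = require("express");
    const app = express();

    const ALLOWED_ORIGIN = "https://mydomain.com"; // the S3/CloudFront front end

    app.use((req, res, next) => {
      const origin = req.get("Origin");
      if (origin === ALLOWED_ORIGIN) {
        // Only echo the origin back when it is the one we trust.
        res.set("Access-Control-Allow-Origin", origin);
        res.set("Access-Control-Allow-Methods", "GET,POST,PUT,PATCH,DELETE,OPTIONS");
        res.set("Access-Control-Allow-Headers", "Content-Type,Authorization");
      }
      if (req.method === "OPTIONS") {
        return res.sendStatus(204); // answer the preflight request
      }
      next();
    });

    app.get("/api/ping", (req, res) => res.json({ ok: true }));
    app.listen(3000);

Keep in mind that these headers are enforced by browsers; as a later answer points out, they do not stop non-browser clients from calling the API directly.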
Let's assume that I have an S3-hosted website. Aside from that, I have an EC2 instance that receives HTTP requests from that website. Is there a way to set up a security group so that the EC2 instance can only receive HTTP requests from that website? I know that if the website were hosted on another EC2 instance I could do this via the IP address or a load balancer; I'm just not sure how to go about it in the S3 website case.
When you launch a website on S3, you are serving only static front-end content (just like a pure HTML/CSS/JavaScript website with no web server on your local machine). That means all the calls, XHRs, and embedded resources pointing to your EC2 instance are requests generated by the visitors' browsers, with the visitor's IP as the network source, S3 (or CloudFront, if you put CloudFront in front of S3) as the Origin in the HTTP headers, and your EC2 instance (where your web server listens on port 80 or 443) as the destination. There is no security group that can be applied to a bucket. However, an S3 bucket can be configured with a policy that whitelists certain IP addresses for access to the bucket content, and therefore to the static web content hosted on it. You can also enforce a CORS policy and add conditions that check Referers and Origins.
Putting aside bucket-level policies, IP whitelisting, CORS, and condition restrictions: if you serve your S3 website bucket from a CloudFront distribution, you can apply geo-restriction rules at the CloudFront level as well.
If, say, you have an API server on EC2 that is called from your CloudFront domain, you can apply access control at both the CloudFront and the EC2 web-server level to enforce a tightened CORS policy, i.e. so that other websites on the internet cannot hijack your API service or mount CSRF attacks (again, as browser-level protection only). A sketch of such a check follows.
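As a sketch of the "check Referers and Origins at the web-server level" idea, here is an Express middleware that rejects requests whose Origin/Referer header is not on a trusted list. The domain names are placeholders, and since these headers can be forged by non-browser clients, treat this as browser-level hygiene rather than authentication.

    const express = require("express");
    const app = express();

    // Placeholder front-end origins (the S3 website / CloudFront domain).
    const TRUSTED_ORIGINS = [
      "https://mydomain.com",
      "https://dxxxxxxxxxxxxx.cloudfront.net",
    ];

    app.use((req, res, next) => {
      const origin = req.get("Origin") || req.get("Referer") || "";
      if (!TRUSTED_ORIGINS.some((trusted) => origin.startsWith(trusted))) {
        return res.status(403).json({ error: "Forbidden" });
      }
      next();
    });

    app.get("/api/data", (req, res) => res.json({ ok: true }));
    app.listen(8080);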
I have hosted my static website (Angular 5) in an S3 bucket and mapped it to a custom domain using Route 53. I want SSL/TLS (HTTPS) for my site, so I used ACM to generate the certificate and attached it to my site using CloudFront. The ACM status is "Issued" and it says the certificate is in use, but my website is not HTTPS-enabled.
Everything is hosted in us-east-1, and I am accessing my site from East Asia. Is this an issue?
Am I missing something?
The ACM certificate for CloudFront should have been generated in the N. Virginia (us-east-1) region. Then you should be able to assign it to your CloudFront distribution.
In your CloudFront distribution Origin, you should set the "Origin Protocol Policy" parameter to "HTTPS Only" if you want to use HTTPS between CloudFront and your S3 bucket.
In your CloudFront distribution Cache Behavior, you should set the "Viewer Protocol Policy" parameter to "Redirect HTTP to HTTPS" so that every HTTP communication between the clients and your CloudFront distribution is redirected to use HTTPS.
Then you would have to change your DNS record to point to the CloudFront distribution CNAME.
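To make the settings above concrete, here is a rough sketch of the corresponding fields in a CloudFront DistributionConfig; the certificate ARN and domain name are placeholders, and this is not a complete or deployable configuration.

    // Only the fields discussed above are shown; a real DistributionConfig has many
    // more, and the origin-side setting ("Origin Protocol Policy") lives in each
    // origin's CustomOriginConfig rather than here.
    const relevantDistributionSettings = {
      Aliases: { Quantity: 1, Items: ["www.example.com"] },
      ViewerCertificate: {
        // The ACM certificate must be issued in us-east-1 (N. Virginia) to be usable with CloudFront.
        ACMCertificateArn:
          "arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE-CERT-ID",
        SSLSupportMethod: "sni-only",
      },
      DefaultCacheBehavior: {
        // "Redirect HTTP to HTTPS" in the console.
        ViewerProtocolPolicy: "redirect-to-https",
      },
    };

    console.log(JSON.stringify(relevantDistributionSettings, null, 2));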
Additionally, you could configure your CloudFront distribution and your S3 bucket to restrict direct client access to the bucket, so that every request goes through your CloudFront distribution; a sketch of the corresponding bucket policy follows the quoted documentation below.
Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content
Typically, if you're using an Amazon S3 bucket as the origin for a
CloudFront distribution, you grant everyone permission to read the
objects in your bucket. This allows anyone to access your objects
either through CloudFront or using the Amazon S3 URL. CloudFront
doesn't expose Amazon S3 URLs, but your users might have those URLs if
your application serves any objects directly from Amazon S3 or if
anyone gives out direct links to specific objects in Amazon S3
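As a concrete illustration of the quoted recommendation, the bucket policy for an Origin Access Identity usually looks roughly like the sketch below; the OAI ID and bucket name are placeholders.

    // Grant read access only to the CloudFront Origin Access Identity, so that
    // (with public access otherwise blocked) objects are reachable only through
    // the distribution. IDs below are placeholders.
    const oaiReadOnlyPolicy = {
      Version: "2012-10-17",
      Statement: [
        {
          Sid: "AllowCloudFrontOAIReadOnly",
          Effect: "Allow",
          Principal: {
            AWS: "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEOAIID",
          },
          Action: "s3:GetObject",
          Resource: "arn:aws:s3:::example-website-bucket/*",
        },
      ],
    };

    console.log(JSON.stringify(oaiReadOnlyPolicy, null, 2));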
If I'm using the AmazonS3Client to put and fetch files, is my connection encrypted? This seems basic, but my googling returns results about encrypting S3 storage rather than about whether the transmission from this client is secure. If it's not secure, is there a setting to make it secure?
Amazon S3 endpoints support both HTTP and HTTPS. It is recommended that you communicate via HTTPS to ensure your data is encrypted in transit.
You can also create a Bucket Policy that enforces communication via HTTPS. See:
Stackoverflow: Force SSL on Amazon S3
Sample policy: s3BucketPolicyEncryptionSSL.json
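For reference, here is a minimal sketch of such a policy, applied with the AWS SDK for JavaScript (v2); the bucket name is a placeholder and credentials are assumed to be configured in the environment.

    const AWS = require("aws-sdk");
    const s3 = new AWS.S3();

    // Deny every S3 action on the bucket and its objects when the request
    // is not made over TLS (aws:SecureTransport is "false" for plain HTTP).
    const forceTlsPolicy = {
      Version: "2012-10-17",
      Statement: [
        {
          Sid: "DenyInsecureTransport",
          Effect: "Deny",
          Principal: "*",
          Action: "s3:*",
          Resource: [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
          ],
          Condition: { Bool: { "aws:SecureTransport": "false" } },
        },
      ],
    };

    s3.putBucketPolicy({
      Bucket: "example-bucket",
      Policy: JSON.stringify(forceTlsPolicy),
    })
      .promise()
      .then(() => console.log("Bucket policy applied"))
      .catch(console.error);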
I have a simple static site which I serve through Amazon's CloudFront. There is nothing of importance on it, so it does not need HTTPS; furthermore, I don't want to go through the hassle and cost of setting up an SSL certificate for my site, and I'm happy if requests sent over HTTPS are met with a "service unavailable" or whatever other error message would be considered appropriate. Instead, CloudFront attempts to serve the HTTPS pages using its own certificate, and so the browser flags the site as 'untrusted' due to the certificate/domain-name mismatch.
Is there some way to disable HTTPS entirely in Cloudfront, or some other way of gracefully falling back to HTTP whilst still using Cloudfront?
I've had the same problem.
Amazon now offers SSL certificates free of charge, with the following restrictions:
You can only use them in CloudFront or ELB.
Browsers which don't have Server Name Indication support won't render your site correctly.
In my case, I just used one even though I never needed it. It is much better than having an "Untrusted Connection" warning in the browser.
I couldn't find any mechanism to graceful fail or to block HTTPS completely.
See : http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html#CNAMEsAndHTTPS
For those who are using AWS web console to setup their cloudfront, follow this path to change the "Viewer protocol policy":
AWS Web Console > Cloudfront > Cloudfront Distributions > [Select your distribution] > Behaviors tab > [Select your cache behavior] > Edit > Viewer Protocol Policy > Set "HTTP and HTTPS"
You can specify independently, for each CloudFront origin, whether it should use HTTP and HTTPS or only HTTP, using the Origin Protocol Policy setting.
Protocols
CloudFront forwards HTTP or HTTPS requests to the origin server based
on the following:
The protocol of the request that the end user sends to CloudFront,
either HTTP or HTTPS.
The value of the Origin Protocol Policy field in the CloudFront
console or, if you're using the CloudFront API, the
OriginProtocolPolicy element in the DistributionConfig complex type.
In the CloudFront console, the options are HTTP Only and Match Viewer.
If you specify HTTP Only, CloudFront forwards requests to the origin
server using only the HTTP protocol, regardless of the protocol in the
end-user request.
Source: AWS CloudFront documentation
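To inspect (or script) these two settings outside the console, here is a rough sketch using the AWS SDK for JavaScript (v2); the distribution ID is a placeholder, and the CustomOriginConfig lookup only applies to custom origins (S3 origins use S3OriginConfig and have no Origin Protocol Policy field).

    const AWS = require("aws-sdk");
    const cloudfront = new AWS.CloudFront();

    cloudfront
      .getDistributionConfig({ Id: "EDFDVBD6EXAMPLE" }) // placeholder distribution ID
      .promise()
      .then(({ DistributionConfig }) => {
        // "allow-all" corresponds to "HTTP and HTTPS" in the console.
        console.log(
          "Viewer Protocol Policy:",
          DistributionConfig.DefaultCacheBehavior.ViewerProtocolPolicy
        );
        // "http-only" / "match-viewer" / "https-only" for custom origins.
        console.log(
          "Origin Protocol Policy:",
          DistributionConfig.Origins.Items[0].CustomOriginConfig?.OriginProtocolPolicy
        );
      })
      .catch(console.error);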
Please note that if you wish to add an alternate domain name to a distribution:
It seems that since this release (Apr 8, 2019), when you add an alternate domain name to a distribution, you must also attach an SSL/TLS certificate to that distribution that covers the alternate domain name.
So in that case you can't disable HTTPS.
(*) Note: I personally don't see the mentioned option of HTTP Only for Origin Protocol Policy - although it is also mentioned here.