I've uploaded my static files to S3. To cache the files on a CDN (and reduce AWS costs, plus get better SEO results), I'm using Cloudflare.
My bucket name is cdn.mydomainname.com
My Cloudflare CNAME configuration is cdn (name) and cdn.mydomainname.com.region_code.s3.amazonaws.com (alias).
However, there's a problem. Whenever I browse my web pages, the static files do not load because of an HTTPS error stating "Your connection is not private". Upon accepting it, my image URL cdn.mydomainname.com/image.jpeg is redirected to https://region_code.amazonaws.com/cdn-mydomainname-com/image.jpeg. When I check my network logs, the image is not cached by Cloudflare, as I can see in the response headers below.
Server: AmazonS3
x-amz-id-2: some_id
x-amz-request-id: some_id
I've read through multiple blogs, SO questions, and documentation, but I'm not able to find a solution.
Some people recommend not using a bucket name like cdn.mydomainname.com, and instead using something like cdn-mydomainname-com.
With that change, my Cloudflare CNAME configuration is cdn (name) and cdn-mydomainname-com.region_code.s3.amazonaws.com (alias).
There are two problems with this:
1) My URLs will not be pretty (https://region_code.amazonaws.com/cdn-mydomainname-com/image.jpeg). This will negatively impact my SEO.
2) It again shows the same response headers as before.
Server: AmazonS3
x-amz-id-2: some_id
x-amz-request-id: some_id
What can be done to fix this? Where am I going wrong?
UPDATE
I tried hosting a static file on my server, and that file is served through Cloudflare, as confirmed by the response headers (CF-CACHE-STATUS: HIT)
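A quick way to check whether a given response was served from Cloudflare's cache (the URL here is a placeholder) is to inspect the headers with curl:
curl -sI https://cdn.mydomainname.com/image.jpeg | grep -i cf-cache-status
A CF-Cache-Status: HIT response means Cloudflare served the file from its cache; MISS or no such header at all means the request went through to the origin.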
Try pointing CloudFlare's CNAME to
cdn-mydomainname-com.region_code.s3.amazonaws.com
but leave your bucket name as
cdn.mydomainname.com
and access your image at
cdn.mydomainname.com/myimage
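In Cloudflare's DNS panel, that record would look roughly like this (names are the hypothetical ones from above; note the record must be Proxied, i.e. orange-clouded, or Cloudflare will not cache anything):
Type: CNAME
Name: cdn
Target: cdn-mydomainname-com.region_code.s3.amazonaws.com
Proxy status: Proxied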
S3 will use the hostname that CloudFlare sends when looking up the bucket, not the subdomain in the CNAME target. Indeed, you can put any subdomain in CloudFlare that you want. The important part is that the subdomain has no dots in it. The certificate S3 presents to CloudFlare is a wildcard certificate of the form
*.region_code.s3.amazonaws.com
so CloudFlare will accept it as valid for
cdn-mydomainname-com.region_code.s3.amazonaws.com
and the image will pass through CloudFlare as desired.
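If you want to confirm by hand which certificate S3 presents (the hostname is the hypothetical one from above), openssl can show it:
openssl s_client -connect cdn-mydomainname-com.region_code.s3.amazonaws.com:443 </dev/null 2>/dev/null | openssl x509 -noout -subject
The subject should be the *.region_code.s3.amazonaws.com wildcard. This is also why the dot-free subdomain matters: a TLS wildcard only matches a single label, so a name like cdn.mydomainname.com.region_code.s3.amazonaws.com would not be covered.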
Related
Let's say I have an S3 bucket named example.com and I want to serve its content through CloudFront using an alternate domain example.com.
I've added a CNAME record to direct example.com to the CloudFront endpoint, and secured the domain using an AWS SSL Certificate.
In CloudFront, when I go to select the Origin, it shows my bucket. For example: example.com.s3.amazonaws.com
If I choose this origin, and I browse to https://example.com/my-bucket-item.jpg, I get redirected to https://example.com.s3-us-east-2.amazonaws.com/my-bucket-item.jpg and a "Connection not secure" SSL error appears.
If I set the origin to just the domain example.com, then I get a 403 Bad Request error from CloudFront.
From what I understand, my bucket has to share the name of my domain, otherwise I get a "bucket does not exist" error.
I've followed the AWS documentation on this. What am I doing wrong here?
Update
I successfully got CloudFront to recognize my alternate domain by changing my origin policy to Managed-CORS-S3Origin.
New problem: even though I've selected 'Yes' to 'Restrict Bucket Access', I'm still able to access files via the S3 url. Do I need to turn off public access to my bucket? If I do this, it seems to override my CloudFront policy...
I had to change my origin request policy to Managed-CORS-S3Origin - this solved the general problem for me.
I want to display a PDF from S3 in the browser using pdf.js - https://mozilla.github.io/pdf.js/
Instead of using the S3 URL directly, I reverse proxy it like this:
URL: www.my-site-url.com/public/s3-presinged-url-bucket-part-with-sign-info
NGINX Block
location /public {
    proxy_pass https://XXXX.s3.us-west-2.amazonaws.com/;
}
But S3 throws this error:
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
How do I reverse proxy it correctly?
I ran into a similar situation when proxying to presigned S3 URLs. Everything worked on development machines but failed in production, because CloudFront added extra headers, which changed the signature. In my case, since the presigned URLs I already had were valid as long as the headers were left unmolested, I added proxy_pass_request_headers off; to make the proxy request roughly equivalent to a direct GET request.
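Applied to the config from the question, a minimal sketch would be:
location /public {
    # Don't forward the client's request headers: a presigned URL is
    # only valid if the signed parts of the request are not altered
    proxy_pass_request_headers off;
    proxy_pass https://XXXX.s3.us-west-2.amazonaws.com/;
}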
I have built a git-backed static site that lives in an S3 bucket and is updated with a CodePipeline. The site is fully hosted on AWS. The Route 53 name servers point to the S3 bucket, but I have recently created a CloudFront distribution that points to the S3 bucket so I am able to have an SSL certificate. The problem is that, I believe, when you go to the site's URL it still points to the S3 bucket and not the CloudFront distribution. Could this be due to a Route 53 config issue?
The SSL certificates in ACM are active, hosted in US East (N. Virginia), and have been added as the custom SSL certificate in the CloudFront distribution.
The CloudFront distribution origin is the S3 bucket, "domainname.s3.amazonaws.com" (there are two distributions, one for domainname.com and one for www.domainname.com, pointing to each bucket respectively).
I know a common fix for this is to wait for CloudFront to find the bucket, so I waited 24 hours before asking this question.
If there is any more information I need to provide, please let me know. I have tried to provide as much as possible, but there is probably something I am overlooking.
Seems like you have to update your Route53 configuration.
As the docs say:
If you want to use your own domain name, use Amazon Route 53 to create an alias record that points to your CloudFront distribution. An alias record is a Route 53 extension to DNS. It's similar to a CNAME record, but you can create an alias record both for the root domain, such as example.com, and for subdomains, such as www.example.com. (You can create CNAME records only for subdomains.) When Route 53 receives a DNS query that matches the name and type of an alias record, Route 53 responds with the domain name that is associated with your distribution.
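A minimal sketch of creating such an alias record with the AWS CLI (the hosted zone ID and CloudFront domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS assigns to all CloudFront alias targets):
aws route53 change-resource-record-sets \
  --hosted-zone-id ZEXAMPLE123 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "domainname.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1234abcd.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'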
You can also check your domain with nslookup or dig and see what the domain resolves to; that way you can verify whether it is pointing to your CloudFront distribution:
nslookup yourdomain.com
The result of the dig/nslookup should show you something like <hash>.cloudfront.net., which in turn resolves to multiple IP addresses.
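The dig equivalent, which prints just the resolution chain, is:
dig +short yourdomain.com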
I have uploaded my SSL certificates, purchased from Comodo, to IAM, and everything looks fine in Chrome and Opera. But Mozilla Firefox is giving an error: "Connection Partially Encrypted". I am not able to gauge why this is happening.
Link: https://www.advisorcircuit.com/
Please tell me: what is the possible culprit for this?
I also want to know how I can redirect my users to HTTPS even if they type http, because right now the website loads over plain HTTP when I type http.
I am using an AWS t2.medium instance. Is there any configuration I need to do in my console?
Redirection:
You have a few options:
Block HTTP traffic and allow only HTTPS at the Security Group level (not the nicest solution).
Use an Elastic Load Balancer, listening only on the HTTPS port (same as above).
Have the web server do the redirect (most of them, like Tomcat, IIS, etc., support this): it sends back "HTTP/1.1 301 Moved Permanently", and the client browser repeats the call over HTTPS.
If you use an Elastic Load Balancer with SSL termination (which is good practice: less load on your server and easier setup of the SSL certificate), then all your traffic inside your VPC goes over port 80. In this case you need to set up your web server to redirect differently: instead of the incoming port, the trigger for the redirection should be the "X-Forwarded-Proto" header value, which is the original protocol the client is using, as in the sketch below.
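For Apache, a minimal sketch of such a header-based redirect (assuming mod_rewrite is enabled) would be:
RewriteEngine On
# Redirect only when the load balancer reports the client used plain HTTP
RewriteCond %{HTTP:X-Forwarded-Proto} =http
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]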
For production environments, the last setup is an AWS best practice (of course, there are also other solutions).
Your site is running Apache/2.2.29. You can redirect your virtual host traffic from 80 to 443 in Apache itself. That way, if someone goes to http://www.yourdomain.com, they get redirected to https://www.yourdomain.com.
ServerFault has a post explaining how to use Apache mod_rewrite to accomplish this:
https://serverfault.com/a/554183/280448
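A minimal sketch of that idea (the domain is a placeholder; the linked answer uses mod_rewrite, while mod_alias's Redirect shown here is the simplest equivalent):
<VirtualHost *:80>
    ServerName www.yourdomain.com
    # Send everything arriving on port 80 to the HTTPS site
    Redirect permanent / https://www.yourdomain.com/
</VirtualHost>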
Also, you need to adjust the SSL cipher suites that your site accepts. Your ELB has an option to change cipher suites, and you can deselect some there. The two you definitely want deselected are RC4 and SSL3.
Here's the full report if you want to make more changes
https://www.ssllabs.com/ssltest/analyze.html?d=www.advisorcircuit.com&s=52.7.154.196&latest
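If you want to verify from the command line which protocols and cipher suites the site still accepts, one option (a sketch; it assumes nmap with its standard script set is installed) is:
nmap --script ssl-enum-ciphers -p 443 www.advisorcircuit.com
RC4 ciphers and SSLv3 should no longer appear in the output once they are deselected on the ELB.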
We have a website, e.g. http://www.abc.com, which points to a hardware load balancer that is supposed to load-balance two dedicated servers. Each server runs Apache as a frontend and uses mod_proxy to forward requests to Tomcat.
Some pages of our website require SSL like https://www.abc.com/login or https://www.abc.com/checkout
SSL is terminated at hardware load-balancer.
When I configured mod_pagespeed, it compressed, minified, and merged the CSS files, and rewrote them with an absolute URL, http://www.abc.com/css/merged.pagespeedxxx.css, instead of the relative URL /css/merged.pagespeedxxx.css.
It works fine for non-SSL pages, but when I navigate to an SSL page such as https://www.abc.com/login, all the CSS and JS files are blocked by browsers like Chrome because their absolute URLs do not use SSL.
How can I resolve this issue?
Check for the https string in this documentation and this one.
You should show us your current ModPagespeedMapOriginDomain and ModPagespeedDomain settings in your question.
From what I understand from these lines:
The origin_specified_in_html can specify https but the origin_to_fetch_from can only specify http, e.g.
ModPagespeedMapOriginDomain http://localhost https://www.example.com
This directive lets the server accept https requests for www.example.com without requiring a SSL certificate to fetch resources - in fact, this is the only way mod_pagespeed can service https requests as currently it cannot use https to fetch resources. For example, given the above mapping, and assuming Apache is configured for https support, mod_pagespeed will fetch and optimize resources accessed using https://www.example.com, fetching the resources from http://localhost, which can be the same Apache process or a different server process.
And these ones:
mod_pagespeed offers limited support for sites that serve content through https. There are two mechanisms through which mod_pagespeed can be configured to serve https requests:
Use ModPagespeedMapOriginDomain to map the https domain to an http domain.
Use ModPagespeedLoadFromFile to map a locally available directory to the https domain.
The solution would be something like this (or the one with ModPagespeedLoadFromFile):
ModPagespeedMapOriginDomain http://localhost https://www.example.com
BUT, the real problem for you is that Apache does not directly receive the HTTPS requests, as the hardware load balancer handles them on its own. So the mod_pagespeed output filter does not even know the request was made for an SSL domain, and when it modifies the HTML content, applying domain rewriting, it cannot handle the https case.
So... one solution (untested) would be to use another virtualhost on the Apache server, still plain HTTP if you want, dedicated to https handling. All https-related URLs (/login, /checkout, ...) would then be redirected to this specific domain name by the hardware load balancer. Let's say http://secure.abc.com. This name is only in use between the load balancer and the front Apaches (and quite certainly Apache should restrict access to this VH to the load balancer only).
Then in this http://secure.abc.com virtualhost, mod_pagespeed would be configured to externally rewrite domains to https://www.abc.com. Something like:
ModPagespeedMapOriginDomain http://secure.abc.com https://www.abc.com
Finally, the end user's request is https://www.abc.com/login; the load balancer manages HTTPS and talks to Apache via http://secure.abc.com, and the resulting page contains only references to https://www.abc.com/* assets. Now, when these assets are requested on the https domain, you still have the problem of serving them, so the hardware load balancer should accept all these asset URLs on the https domain and send them to the http://secure.abc.com virtualhost (or any other static VH).
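A minimal sketch of such a virtualhost, under the untested assumptions above (secure.abc.com is the internal-only name):
<VirtualHost *:80>
    ServerName secure.abc.com
    ModPagespeed on
    # Serve HTML whose rewritten resource URLs point at the https domain,
    # while mod_pagespeed fetches the resources over plain HTTP internally
    ModPagespeedMapOriginDomain http://secure.abc.com https://www.abc.com
</VirtualHost>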
This sounds like you configured the rewritten URL as http://www.abc.com/css/merged.pagespeedxxx.css yourself. Therefore: try a protocol-relative URL, i.e. remove http: and just state //www.abc.com/css/merged.pagespeedxxx.css - this will use the same protocol the embedding page was requested with.
One of the well standardized but relatively unknown features of URLs is exactly this: leaving the scheme off (//host/path instead of http://host/path) makes the browser reuse the scheme of the page that references the resource.
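For instance, reusing the stylesheet URL from the question, a protocol-relative reference would look like:
<link rel="stylesheet" href="//www.abc.com/css/merged.pagespeedxxx.css">
Served from an https page, the browser requests it over https; from an http page, over http.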