S3 presigned URL nginx reverse proxy error - SignatureDoesNotMatch - amazon-s3

I want to display a PDF from S3 in the browser using PDF.js - https://mozilla.github.io/pdf.js/
Instead of using the S3 URL directly, I reverse proxy it like this:
URL: www.my-site-url.com/public/s3-presigned-url-bucket-part-with-sign-info
NGINX Block
location /public {
    proxy_pass https://XXXX.s3.us-west-2.amazonaws.com/;
}
But S3 throws this error:
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
How do I reverse proxy it correctly?

I ran into a similar situation when proxying to presigned S3 URLs. Everything worked on development machines but failed in production, because CloudFront added additional headers which changed the signature. In my case, since I already had valid presigned URLs that worked as long as the headers were left untouched, I added proxy_pass_request_headers off; to make the proxy request roughly equivalent to a direct GET request.
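A minimal sketch of that change applied to the question's nginx block (the bucket hostname is the question's placeholder; the trailing slashes on both the location and proxy_pass keep /public/foo mapping cleanly onto /foo upstream):
location /public/ {
    # Don't forward the browser's request headers upstream: the presigned
    # URL's query string already carries the signature, and added or
    # altered headers can cause S3 to compute a different one.
    proxy_pass_request_headers off;
    proxy_pass https://XXXX.s3.us-west-2.amazonaws.com/;
}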

Related

How to use CloudFront and S3 with alternate domain?

Let's say I have an S3 bucket named example.com and I want to serve its content through CloudFront using an alternate domain example.com.
I've added a CNAME record to direct example.com to the CloudFront endpoint, and secured the domain using an AWS SSL Certificate.
In CloudFront, when I go to select the Origin, it shows my bucket. For example: example.com.s3.amazonaws.com
If I choose this origin and browse to https://example.com/my-bucket-item.jpg, I get redirected to https://example.com.s3-us-east-2.amazonaws.com/my-bucket-item.jpg and a "Connection not secure" SSL error appears.
If I set the origin to just the domain example.com, then I get a 403 error from CloudFront.
From what I understand, my bucket has to share the name of my domain; otherwise I get a "bucket does not exist" error.
I've followed the AWS documentation on this. What am I doing wrong here?
Update
I successfully got CloudFront to recognize my alternate domain by changing my origin policy to Managed-CORS-S3Origin.
New problem: even though I've selected 'Yes' for 'Restrict Bucket Access', I'm still able to access files via the S3 URL. Do I need to turn off public access to my bucket? If I do this, it seems to override my CloudFront policy...
I had to change my origin request policy to Managed-CORS-S3Origin - this solved the general problem for me.
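On the follow-up problem: the usual way to stop direct S3 access is to give the CloudFront distribution an origin access identity (OAI) and grant bucket reads only to that identity; public access can then be blocked without breaking CloudFront, because CloudFront authenticates as the OAI. A sketch of such a bucket policy, assuming a hypothetical OAI ID E1XXXXXXXXXX and the bucket name from the question:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOAIRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1XXXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example.com/*"
    }
  ]
}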

Cloudflare-S3 HTTPS handshake

I've uploaded my static files to S3. To cache my files on a CDN (and reduce AWS costs, plus get better SEO results), I'm using Cloudflare.
My bucket name is cdn.mydomainname.com
My Cloudflare CNAME configuration is cdn (name) and cdn.mydomainname.com.region_code.s3.amazonaws.com (alias).
However, there's a problem. Whenever I browse my webpages, the static files do not load because of an HTTPS error stating "Your connection is not private". Upon accepting it, my image URL cdn.mydomainname.com/image.jpeg is redirected to https://region_code.amazonaws.com/cdn-mydomainname-com/image.jpeg. When I check my network logs, the image is not cached by Cloudflare, as I can see below in my response headers:
Server: AmazonS3
x-amz-id-2: some_id
x-amz-request-id: some_id
I've read through multiple blogs, SO questions, and documentation, but I'm not able to find a solution.
Some people recommend not using a bucket name like cdn.mydomainname.com, and instead using something like cdn-mydomainname-com.
Now my Cloudflare CNAME configuration is cdn (name) and cdn-mydomainname-com.region_code.s3.amazonaws.com (alias).
There are two problems with this:
1) My URLs will not be pretty (https://region_code.amazonaws.com/cdn-mydomainname-com/image.jpeg). This will negatively impact my SEO.
2) It again shows the same response headers as shown previously:
Server: AmazonS3
x-amz-id-2: some_id
x-amz-request-id: some_id
What can be done to fix this? Where am I going wrong?
UPDATE
I tried hosting a static file on my own server, and that file is served from Cloudflare, as confirmed by the response headers (CF-Cache-Status: HIT).
Try pointing Cloudflare's CNAME to
cdn-mydomainname-com.region_code.s3.amazonaws.com
but leave your bucket name as
cdn.mydomainname.com
and access your image at
cdn.mydomainname.com/myimage
S3 will use the hostname that Cloudflare sends (the Host header) when looking up the bucket, not the subdomain in the CNAME target. Indeed, you can put any subdomain in Cloudflare that you want; the important part is that the subdomain has no dots in it. The certificate S3 presents to Cloudflare is a wildcard certificate of the form
*.region_code.s3.amazonaws.com
so Cloudflare will accept it as valid for
cdn-mydomainname-com.region_code.s3.amazonaws.com
and the image will pass through Cloudflare as desired.
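In DNS zone-file terms, the suggested setup would look something like the record below (region_code and mydomainname.com are the question's own placeholders):
cdn.mydomainname.com.    IN    CNAME    cdn-mydomainname-com.region_code.s3.amazonaws.com.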

Caddy + Organizr + Plex Media Server = Can't connect to PMS?

Ultimately my goal is to be able to load my PMS admin interface via Organizr. I had already tried simply using the URL https://app.plex.tv/desktop through Organizr, but that URL disallows loading the page in iframes, so now I'm trying to use the Caddy server to reverse proxy it to my local LAN IP instead ...
I have this code in my Caddyfile (note that my PMS is hosted on a different pc on my LAN):
proxy /pms https://192.168.234.234:32400 {
    websocket
    keepalive 12
    header_upstream Host {host}
    header_upstream X-Real-IP {remote}
    header_upstream X-Forwarded-For {remote}
    header_upstream X-Forwarded-Proto {scheme}
    transparent
}
Then when I try to visit the URL, it gives me a 502 Bad Gateway, and the Caddy log file says [ERROR 502 /pms] x509: cannot validate certificate for 192.168.234.234 because it doesn't contain any IP SANs
If I add the insecure_skip_verify directive, I get a 401 Unauthorized error instead.
I'm still pretty new to using Caddy, anyone know what's going on here?
Since you use Caddy, which will deal with the SSL, proxy to http instead of https.
To solve my particular problem, in Organizr I used the Plex web URL instead:
https://192.168.234.234:32400/web
Note the /web at the end.
Another option was to have Organizr open it using the PopOut option, which acts something like a regular bookmark and loads any URL in a new tab, and/or to add a line to the Caddyfile like this:
redir /pms https://app.plex.tv/desktop 301
Then in Organizr you could use either the /pms URL, or the direct Plex URL https://app.plex.tv/desktop, and it'd just load Plex in a new tab.
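If the reverse-proxy route is still wanted, here is a minimal Caddyfile sketch (v1 syntax, as in the question) applying the plain-http suggestion to the question's own upstream; note that in Caddy v1, transparent is shorthand for the four header_upstream lines, so they are dropped here:
proxy /pms http://192.168.234.234:32400 {
    websocket
    keepalive 12
    transparent
}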

Can the Host Header be different from the URL

We run a website which is hosted using WCF.
The website is hosted on https://foo.com and the SSL certificate is registered using the following command:
netsh http add sslcert hostnameport=foo.com:443
When we browse the website on the server, all is fine, and the certificate is valid.
There is a load balancer in front of the server which listens on bar.com and then forwards the request to our server.
The load balancer doesn't rewrite the request URL, only the Host header.
The rewritten request looks like this:
GET https://foo.com/ HTTP/1.1
Host: bar.com
Connection: keep-alive
Now we have some issues which indicate that the SSL certificate is invalid in this case.
The load balancer itself has a certificate registered, listening on https://bar.com.
Questions:
Is it OK/allowed for the request URL and the Host in the HTTP header to be different?
If it is OK to have different values, under which URL should we run the site: the request URL or the Host URL?
Well, referencing RFC 2616:
If Request-URI is an absolute URI, the host is part of the
Request-URI. Any Host header field value in the request MUST be
ignored.
So, back to your questions:
It is allowed, but a bad idea, as it will create confusion; it's better to use a relative path, i.e.
GET /path HTTP/1.1
instead of
GET https://foo.com/path HTTP/1.1
Modify the load balancer configuration to do so, or make both values the same.
If the Host header has a value different from the request URI, the URI takes priority over the Host header.
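Put together, a request the load balancer forwards would then look something like this (hostnames are the question's own):
GET /path HTTP/1.1
Host: bar.com
Connection: keep-alive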

How to change http response code on an object in Amazon S3

I have a webpage hosted on Amazon S3, but I don't want the HTTP response code to be 200. The page is a maintenance page that I'll redirect traffic to when I take our main website down for maintenance.
I want the Amazon S3 page to respond with:
HTTP/1.1 503 Service Unavailable
Amazon gives you the ability to add some metadata to an S3 object, but there is nothing for the HTTP status code.
Is it possible?
I'm not sure which browsers or crawlers support this, but you could potentially use the http-equiv status meta tag to accomplish it.
<meta http-equiv="status" content="503 Service Unavailable" />
The specification says to treat it in the same way as if 503 had been sent as the status code.
I believe you can get CloudFront to do this. I haven't tested it yet, but try this:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html
You cannot customize the status code for S3 responses.
You can use API Gateway as a proxy in front of your S3 website error page, where you can customize the status codes returned.
Until Amazon allows a custom status code from S3, here is a workaround using nginx.
We watch for the existence of a specific file that acts as an "ON switch" for maintenance mode. If found, we proxy_pass requests to S3. The trick is to return 503, but redirect processing of 503 status codes to an nginx "named location".
Example nginx conf file (just the relevant bits shown):
server {
    ...
    # Redirect processing of 503 status codes to an nginx "named location".
    error_page 503 @maintenance;

    # "Maintenance Mode" is off by default - use an nginx variable to track state.
    set $maintenance off;

    # Switch on "Maintenance Mode" if a certain file exists.
    if (-f /var/www/app/maintenanceON) {
        set $maintenance on;
    }

    if ($maintenance = on) {
        # For maintenance mode Google recommends using status code "503 Service unavailable".
        return 503;
    }
    ...
    location @maintenance {
        # Serve a static maintenance page hosted in Amazon S3.
        # Note: use rewrite ... break together with proxy_pass so we keep the
        # 503 code (a redirect would make nginx serve a 302 instead).
        rewrite ^(.*)$ /index.html break;
        proxy_pass http://bucketname.s3-website-us-east-1.amazonaws.com;
    }
}
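A small optional addition: guidance on maintenance pages (including Google's) often suggests pairing the 503 with a Retry-After header. Assuming a one-hour window (the value here is only an example), it could go inside the named location:
location @maintenance {
    # Hint to clients and crawlers when to come back (in seconds).
    add_header Retry-After 3600 always;
    rewrite ^(.*)$ /index.html break;
    proxy_pass http://bucketname.s3-website-us-east-1.amazonaws.com;
}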