I have two CloudFront distributions and one S3 bucket, and in both distributions I have added the S3 bucket as an origin. (I am using an origin access identity to serve the S3 content.)
I added the same behavior in both distributions.
My problem is that I can access the S3 content through only one of the CloudFront distributions; the other one throws a SignatureDoesNotMatch error.
For example:
https://cloudfront1url/images/a.jpg is working, but
https://cloudfront2url/images/a.jpg is not working.
I found the issue. In the behavior I was using "Cache Based on Selected Request Headers" (the whitelist option) for the S3 origin and whitelisting the Host header. When I chose "None" for "Cache Based on Selected Request Headers", the issue was resolved.
In my case it was the Origin Request Policy in CloudFront being set to forward all headers. It turns out the request is then signed over the forwarded headers, while S3 calculates its signature from a specific set of headers, so the signatures do not match.
The correct way to use an OAI is with the managed CORS-S3Origin origin request policy, or to cherry-pick the headers yourself.
I got a hint from this article. I had to edit the behavior, use “Legacy cache settings”, and select “All” for “Query strings” (keeping the default “None” for “Headers” and for “Cookies”). After that, the SignatureDoesNotMatch error was gone.
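For the newer (non-legacy) settings, the equivalent is a cache policy that keeps headers out of the cache key while still caching on query strings. Below is a minimal boto3 sketch under that assumption; the policy name is a placeholder, and you would attach the returned ID to the behavior as its CachePolicyId.

import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "s3-no-host-header",  # placeholder name
        "Comment": "Cache on query strings only; keep Host out of the origin signature",
        "MinTTL": 0,
        "DefaultTTL": 86400,
        "MaxTTL": 31536000,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "EnableAcceptEncodingBrotli": True,
            "HeadersConfig": {"HeaderBehavior": "none"},        # do not whitelist/forward Host
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {"QueryStringBehavior": "all"},
        },
    }
)
print(response["CachePolicy"]["Id"])  # use this as the behavior's CachePolicyId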
It may look a little strange that I want to upload a file to an S3 bucket through CloudFront and then access it with CloudFront.
AWS states that CloudFront supports the PutObject action, according to
https://aws.amazon.com/blogs/aws/amazon-cloudfront-content-uploads-post-put-other-methods/
We have configured the CloudFront settings (Origin/Behavior) and the S3 policy to make this work.
Only one blocking issue remains:
the file uploaded via CloudFront can't be accessed by any account or role. Its owner is named "cf-host-credentials-global".
I tried several ways to fix this, all based on a quite simple setup:
CloudFront can access the S3 bucket (the bucket is not publicly accessible) through an OAC that has PutObject and GetObject permissions on it.
We use a CloudFront URL mapped to the S3 bucket origin to upload a file.
Note: no signed CloudFront or signed S3 URLs, although I also tested those cases.
We still always get the AccessDenied issue; most of the time the file is uploaded with the expected size and file name,
but it can't be downloaded or accessed.
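For context, a sketch of the kind of bucket policy the documented OAC pattern implies for this setup, applied with boto3; the bucket name and distribution ARN below are placeholders, not the real resources.

import json
import boto3

s3 = boto3.client("s3")

bucket = "my-upload-bucket"  # placeholder
distribution_arn = "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAC",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"StringEquals": {"AWS:SourceArn": distribution_arn}},
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))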
I tried to fix this within this simple setup, but all of the attempts below failed:
Add the x-amz-acl header, according to the Stack Overflow answer to
"The file upload by CloudFront Origin Access Identity signed url can't be access by boto3 or IAM role?"
I added the x-amz-acl header, but the upload failed with this error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>There were headers present in the request which were not signed</Message>
<HeadersNotSigned>x-amz-acl</HeadersNotSigned>
<RequestId>NDOLGKOGSF883</RequestId>
<HostId>CD9cDmGbSuk34Gy3mK2Znfdd9klmfew0s2dsflks3</HostId>
</Error>
Even with a pre-signed S3 URL (putting the x-amz-acl header into boto3 generate_presigned_url), it is still the same error.
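For anyone reproducing this, a minimal sketch (not the original code) of generating such a pre-signed URL with boto3; the bucket and key are placeholders, and passing ACL in Params is what adds x-amz-acl to the signed headers.

import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "put_object",
    Params={
        "Bucket": "my-upload-bucket",        # placeholder
        "Key": "images/a.jpg",               # placeholder
        "ACL": "bucket-owner-full-control",  # surfaces as the x-amz-acl header
    },
    ExpiresIn=3600,
)
print(url)
# The client PUT must then send the same x-amz-acl header,
# otherwise S3 rejects the request as having an unsigned/mismatched header.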
Someone said x-amz-acl can be passed as a query parameter, so I tried it in the URL (with both signed and unsigned URLs), but it doesn't work either:
Pre-signed URLs and x-amz-acl
Someone said we need to add the x-amz-content-sha256 header to the client request, according to
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-origin-access-identity-signature-version-4
With the x-amz-content-sha256 header added, the upload succeeds, but the uploaded S3 object still returns AccessDenied when accessed.
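For anyone reproducing this step, a minimal sketch (not the original client code) of a PUT through the distribution with the payload hash in x-amz-content-sha256; the distribution domain and file name are placeholders.

import hashlib

import requests

url = "https://d111111abcdef8.cloudfront.net/images/a.jpg"  # placeholder distribution domain

with open("a.jpg", "rb") as f:
    body = f.read()

headers = {
    "Content-Type": "image/jpeg",
    # SigV4 signing via OAC/OAI requires the payload hash on PUT requests.
    "x-amz-content-sha256": hashlib.sha256(body).hexdigest(),
}

resp = requests.put(url, data=body, headers=headers)
print(resp.status_code, resp.text)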
Adding a Content-MD5 header hits the same "headers present but not signed" error as above, and the upload fails.
Does anyone have an idea about this? How can I fix this AccessDenied issue?
Thanks in advance.
It looks like the x-amz-acl header is not signed by OAC when the request is sent from CloudFront to the S3 bucket.
So if you insist on using OAC, there is only one way: change "Object Ownership" to "ACLs disabled" in the S3 bucket permissions.
That works for me.
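A minimal boto3 sketch of that change (the bucket name is a placeholder); "BucketOwnerEnforced" is the API value behind the "ACLs disabled" option, so the bucket owner owns every uploaded object regardless of who wrote it.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_ownership_controls(
    Bucket="my-upload-bucket",  # placeholder
    OwnershipControls={
        "Rules": [
            {"ObjectOwnership": "BucketOwnerEnforced"}  # "ACLs disabled" in the console
        ]
    },
)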
I have built a system where I have product templates. A brand will overwrite the template to create a product. Images can be uploaded to the template and be overwritten on the product. The product images are uploaded to the corresponding brand's S3 bucket, but the product template images are uploaded to a generic S3 bucket.
Is there a way to make the brand's bucket fall back to the generic bucket when a file URL returns a 404 or 403, similar to the hosted-website redirect rules? These are just buckets of images, so it wouldn't be a hosted website, and I was hoping to avoid turning that on.
There is not a way to do this with S3 alone, but it can be done with CloudFront, in conjunction with two S3 buckets, configured in an origin group with appropriate origin failover settings, so that 403/404 errors from the first bucket cause CloudFront to make a follow-up request from the second bucket.
After you configure origin failover for a cache behavior, CloudFront does the following for viewer requests:
When there’s a cache hit, CloudFront returns the requested file.
When there’s a cache miss, CloudFront routes the request to the primary origin that you identified in the origin group.
When a status code that has not been configured for failover is returned, such as an HTTP 2xx or HTTP 3xx status code, CloudFront serves the requested content.
When the primary origin returns an HTTP status code that you’ve configured for failover, or after a timeout, CloudFront routes the request to the backup origin in the origin group.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html
This seems to be the behavior you're describing. It means, of course, that cache misses that need to fall back to the second bucket will take additional time to serve, but for cache hits there won't be any delay, since CloudFront only goes through the "try one, then try the other" routine on cache misses. It also means that you'll be paying for some traffic on the primary bucket for objects that aren't present, so it makes the most sense if the primary bucket will have the object more often than not.
This solution does not redirect the browser -- CloudFront tries the second origin internally before returning a response -- so you'll want to be mindful of the Cache-Control settings you attach to the fallback objects when you upload them, since adding a (previously absent) primary object after a fallback object has already been fetched and cached (by either CloudFront or the browser) will not be visible until the cached copies expire.
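As an illustration only, the origin group portion of a DistributionConfig for this setup would look roughly like the sketch below; the origin IDs are placeholders and must match entries in the distribution's Origins, and the structure is what you would pass to create_distribution or update_distribution with boto3.

# Fragment of a CloudFront DistributionConfig: an origin group that fails over
# from the brand bucket to the generic bucket on 403/404 responses.
origin_groups = {
    "Quantity": 1,
    "Items": [
        {
            "Id": "brand-with-generic-fallback",  # placeholder
            "FailoverCriteria": {
                "StatusCodes": {"Quantity": 2, "Items": [403, 404]}
            },
            "Members": {
                "Quantity": 2,
                "Items": [
                    {"OriginId": "brand-bucket-origin"},    # primary (placeholder)
                    {"OriginId": "generic-bucket-origin"},  # fallback (placeholder)
                ],
            },
        }
    ],
}
# The cache behavior's TargetOriginId then points at "brand-with-generic-fallback".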
I am trying to set the following Origin Custom Header:
Header Name: Cache-Control
Value: max-age=31536000
But it gives com.amazonaws.services.cloudfront.model.InvalidArgumentException: The parameter HeaderName : Cache-Control is not allowed. (Service: AmazonCloudFront; Status Code: 400; Error Code: InvalidArgument).
I tried multiple ways, along with setting the Minimum TTL, Default TTL, and Maximum TTL, but nothing helped.
I assume you are trying to get a good rating on the GTmetrix page score by leveraging browser caching. If you are serving content from S3 through CloudFront, then you need to add the following header to your objects while uploading files to S3:
Expires: {some future date}
Bonus: you do not need to specify this header for every object individually. You can upload a bunch of files to S3 together, click Next, and then on the screen that asks for the S3 storage class, scroll down and add these headers. And don't forget to click Save!
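If you upload programmatically instead of through the console, the same caching headers can be set per object; a minimal boto3 sketch, with the bucket and key as placeholders.

from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

with open("a.jpg", "rb") as f:
    s3.put_object(
        Bucket="my-assets-bucket",  # placeholder
        Key="images/a.jpg",         # placeholder
        Body=f,
        ContentType="image/jpeg",
        CacheControl="public, max-age=31536000",
        Expires=datetime.now(timezone.utc) + timedelta(days=365),
    )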
I am using an S3 bucket to store a bunch of product images for a large web site. These images are being served through Cloudfront with the S3 bucket as the origin. I have noticed that Cloudfront does not put an expiration header on the image even though I have set the distribution behavior to customize the cache headers and set a long min, max, and default TTL in Cloudfront.
I understand that I can put an expiration on the S3 object, however this is going to be quite impractical as I have millions of images. I was hoping that cloudfront would do me the honors of adding this header for me, but it does not.
So my question is: is the only way to get this expiration header to set it on every S3 object, or am I missing something in CloudFront that will do it for me?
CloudFront's TTL configuration only controls the amount of time CloudFront keeps the object in the cache.
It doesn't add any headers.
So, yes, you'll need to set these on the objects in S3.
Note that Cache-Control: is usually considered a better choice than Expires:.
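If re-uploading millions of existing images is impractical, one option is an in-place copy that replaces the object metadata so the new header is stored with the object; a sketch with boto3 (bucket and key are placeholders), to be run over the keys you need to update.

import boto3

s3 = boto3.client("s3")

bucket = "my-assets-bucket"  # placeholder
key = "images/a.jpg"         # placeholder

# Copy the object onto itself, replacing its metadata so that
# Cache-Control is stored with the object from now on.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    MetadataDirective="REPLACE",
    ContentType="image/jpeg",  # restate it, or it resets to the default
    CacheControl="public, max-age=31536000",
)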
An alternative that avoids updating the objects is to configure a proxy server in EC2 in the same region as the bucket, and let the proxy add the headers as the responses pass through it.
Request: CloudFront >> Proxy >> S3
Response: S3 >> Proxy >> CloudFront
...for what it's worth.
We have an S3 bucket with website hosting enabled, and an error document set. We want to use it to serve images over https.
Over http, the 404 works fine: example. But for https, we need to use a different URL scheme, and the 404 no longer works: example. (That URL scheme also fails with http: example.)
Is there some way to do this? Have I misconfigured the S3 bucket, or something along those lines? (I've given 'list' permission to everyone, which turned the failure from a 403 to a 404, but not the 404 I want.)
We solved this by setting up a CloudFront distribution as an interface to the S3 bucket.
One tricky bit: the origin for the CloudFront distribution needs to have its origin protocol policy set to HTTP only. That means it can't directly be an S3 bucket origin, which always has the 'match viewer' policy. Instead, you can set it to the URL of the S3 bucket's static website hosting endpoint: instead of BUCKET.s3.amazonaws.com, use BUCKET.s3-website-REGION.amazonaws.com.
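For reference, a sketch of the relevant origin entry in a DistributionConfig (the bucket name and region are placeholders), treating the website endpoint as a custom origin with the protocol policy set to HTTP only.

# Fragment of a CloudFront DistributionConfig "Origins" entry.
origin = {
    "Id": "s3-website-origin",  # placeholder
    "DomainName": "BUCKET.s3-website-REGION.amazonaws.com",  # placeholder website endpoint
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "http-only",  # website endpoints only speak HTTP
    },
}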
This might have unintended side effects, and there might be better ways to do it, but it works.