Can we remove response headers when accessing images stored on Amazon S3?
By default, responses include the following headers:
x-amz-id-2:
x-amz-request-id:
Server:
All of them carry Amazon-related values. Is there any way to remove these headers?
Not without proxying the requests through some software you control that can strip the headers. Pretty sure Amazon has no user setting for that.
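As a rough sketch of that approach (assuming a Node.js/Express proxy; the bucket URL, route, and port here are hypothetical), the proxy fetches the object from S3 and forwards everything except the Amazon-specific headers:

const express = require("express");
const axios = require("axios");

const app = express();
const BUCKET_URL = "https://my-bucket.s3.amazonaws.com"; // hypothetical bucket

app.get("/images/:key", async (req, res) => {
  // Fetch the object from S3 as a stream.
  const upstream = await axios.get(`${BUCKET_URL}/${req.params.key}`, {
    responseType: "stream",
  });
  // Copy every response header except the Amazon-specific ones and Server.
  for (const [name, value] of Object.entries(upstream.headers)) {
    const lower = name.toLowerCase();
    if (!lower.startsWith("x-amz-") && lower !== "server") {
      res.setHeader(name, value);
    }
  }
  upstream.data.pipe(res);
});

app.listen(3000); // hypothetical port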
I am trying to upload a GIF to AWS S3 using a presigned URL. I generate the presigned URL with Vapor on the server; the image is sent from React.
The docs here say: https://soto.codes/2020/12/presigned-urls.html
If you want to include some header values with the URL you have to include the headers while signing it and the client will be required to include exactly the same headers when they use the URL.
image/gif is sent at presign time, and the presign response includes X-Amz-SignedHeaders: content-type%3Bhost%3Bx-amz-acl, so presigning seems to have done its part.
The content upload via PUT also carries Content-Type: image/gif.
So what is wrong? Why does S3 not have the type? The console shows no type for the object.
Just realised you are looking at the wrong thing. Scroll further down on the AWS console page until you find the metadata section. You can also test this by running a GET on the object and seeing what Content-Type is returned.
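For reference, the same signing requirement can be seen with the AWS SDK for JavaScript v3 (the question itself uses Vapor; the bucket, region, and expiry here are placeholder assumptions). Including ContentType at signing time is what puts content-type into X-Amz-SignedHeaders, and the PUT must then send that exact header:

const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const client = new S3Client({ region: "us-east-1" }); // placeholder region

async function presignGifUpload(key) {
  const command = new PutObjectCommand({
    Bucket: "my-bucket",      // placeholder bucket
    Key: key,
    ContentType: "image/gif", // becomes part of X-Amz-SignedHeaders
  });
  return getSignedUrl(client, command, { expiresIn: 3600 });
}

// The client must then send the exact same header when using the URL:
// axios.put(url, gifBytes, { headers: { "Content-Type": "image/gif" } });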
The Azure WAF can be configured to check the maximum size of a request via two settings: a max request body size limit and a separate max file upload limit.
Despite having this configuration, any time we upload a file the WAF treats it as a "not file upload" operation and returns 413 "Request entity too large" if the file exceeds 128 KB.
We are sending the POST request with what we think are the right headers:
Content-Disposition: attachment; filename="testImage.jpg"
Content-Length: 2456088
Content-Type: image/jpeg
But it does not make a difference. Any idea why the WAF does not see this as a file upload and apply the max file upload check instead of the max request body size limit?
After several conversations with Microsoft we found that the WAF only considers attachments to be file uploads if they are sent using multipart/form-data.
If you send files this way the WAF will understand they are files and will apply the limits configured for file uploads instead of the limits for request bodies.
There is no other way to send files supported by the WAF for now.
From documentation:
Only requests with Content-Type of multipart/form-data are considered
for file uploads. For content to be considered as a file upload, it
has to be a part of a multipart form with a filename header. For all
other content types, the request body size limit applies.
Please note that the filename header also needs to be present in the request for the WAF to consider it a file upload.
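As an illustration of such a request from Node.js (using axios and the form-data package; the endpoint URL is a placeholder), the filename option is what produces the filename= parameter the WAF looks for:

const axios = require("axios");
const FormData = require("form-data");
const fs = require("fs");

const form = new FormData();
// The filename option emits the part header:
// Content-Disposition: form-data; name="file"; filename="testImage.jpg"
form.append("file", fs.createReadStream("testImage.jpg"), {
  filename: "testImage.jpg",
  contentType: "image/jpeg",
});

axios.post("https://app.example.com/upload", form, { // placeholder endpoint
  headers: form.getHeaders(), // Content-Type: multipart/form-data; boundary=...
});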
Here's a simplified version of the issue I'm running into - basically I'm just trying to get data out of S3 which is in a .gz file (MOCK_DATA.json.gz)
I'm using axios to try to retrieve the data from the S3 URL. I've heard that, generally, there's a way to get the response automatically decompressed and decoded just by setting your headers to allow Content-Encoding: gzip.
At a high level, I have something like this:
axios.get("http://<my bucket>.s3.amazonaws.com/MOCK_DATA.json.gz", { headers: headers })
  .then(response => { /* do stuff with response */ });
When I try to log the response, it looks like it's still gzipped, and I'm not sure of the best way to approach this.
I've tried setting some headers on the request to specify the expected content type, but so far to no avail.
I could also try manually decoding the response once it has been received, but I've been told it should happen automatically. Does anyone have tips on how I should approach this, or might there be a misunderstanding on my part about how decoding on the client side works?
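For reference, the manual-decoding fallback mentioned above might look like this in Node.js (zlib is built in; the bucket URL is the question's placeholder):

const axios = require("axios");
const zlib = require("zlib");

axios.get("http://<my bucket>.s3.amazonaws.com/MOCK_DATA.json.gz", {
  responseType: "arraybuffer", // keep the body as raw bytes
  decompress: false,           // stop axios/Node from trying to gunzip for us
})
  .then((response) => {
    const text = zlib.gunzipSync(Buffer.from(response.data)).toString("utf8");
    console.log(JSON.parse(text));
  });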
I figured out the issue - rather than messing around with the frontend code (the consumer of the .gz file), I just had to add some metadata to the S3 object itself.
It had automatically set the content type on upload:
Content-Type: application/x-gzip
But I also had to set:
Content-Encoding: gzip
in the S3 object's properties in order for the value to be decoded properly when handled from the JS code.
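For scripted uploads, the same metadata can be set at upload time instead of through the console. A hedged sketch with the AWS SDK for JavaScript v3 (bucket name and region are placeholders):

const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const fs = require("fs");

const client = new S3Client({ region: "us-east-1" }); // placeholder region

async function uploadGzippedJson() {
  await client.send(new PutObjectCommand({
    Bucket: "my-bucket",               // placeholder bucket
    Key: "MOCK_DATA.json.gz",
    Body: fs.createReadStream("MOCK_DATA.json.gz"),
    ContentType: "application/x-gzip", // as auto-detected in the answer above
    ContentEncoding: "gzip",           // the header that triggers client-side decoding
  }));
}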
I am trying to set the following Origin Custom Headers:
Header Name: Cache-Control
Value: max-age=31536000
But it is giving: com.amazonaws.services.cloudfront.model.InvalidArgumentException: The parameter HeaderName : Cache-Control is not allowed. (Service: AmazonCloudFront; Status Code: 400; Error Code: InvalidArgument)
I tried multiple ways, along with setting the Minimum TTL, Default TTL, and Maximum TTL, but no help.
I assume you are trying to get a good rating on the GTmetrix page score by leveraging browser caching. If you are serving content from S3 through CloudFront, then you need to add the following header to objects while uploading files to S3:
Expires: {some future date}
Bonus: you do not need to specify this header for every object individually. You can upload a bunch of files together to S3, click Next, and then on the screen that asks for the S3 storage class, scroll down and add these headers. And don't forget to click Save!
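If uploads happen from code rather than the console, the same headers can be attached programmatically. Here is a hedged sketch with the AWS SDK for JavaScript v3 (bucket, key, file, and dates are placeholder assumptions):

const { S3Client, PutObjectCommand } = require("@aws-sdk/client-s3");
const fs = require("fs");

const client = new S3Client({ region: "us-east-1" }); // placeholder region

async function uploadWithCachingHeaders() {
  await client.send(new PutObjectCommand({
    Bucket: "my-bucket",                       // placeholder bucket
    Key: "images/logo.png",                    // placeholder key
    Body: fs.createReadStream("logo.png"),
    ContentType: "image/png",
    CacheControl: "max-age=31536000",          // the value from the question above
    Expires: new Date("2030-01-01T00:00:00Z"), // "some future date"
  }));
}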
How do I enable keep-alive connections on AWS S3 or CloudFront? I uploaded images to S3 and found that the URLs don't use keep-alive connections. They cannot be cached by the client application even though I added Cache-Control headers to each image file.
From the tag wiki for Keep-Alive:
A feature of HTTP where the same connection is used for multiple requests, speeding up downloading of web pages with multiple resources.
I'm not aware of any relation that this has to cache behavior. I usually see mentions of Keep-Alive headers in relation to long-polling, which wouldn't make any sense to enable on S3.
I think you are incorrectly linking keep-alive headers with your browser's ability to cache static content. The Cache-Control header should be all that is needed for caching static content in the browser.
Are you verifying that the response from CloudFront includes the Cache-Control headers you set on the S3 objects? Perhaps you need to invalidate the CloudFront cache after updating the headers.
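One quick way to check is to request the object through the distribution and inspect the headers that come back. A small sketch with axios (the distribution domain and path are placeholders):

const axios = require("axios");

axios.head("https://d111111abcdef8.cloudfront.net/images/logo.png") // placeholder URL
  .then((res) => {
    console.log("cache-control:", res.headers["cache-control"]);
    console.log("expires:", res.headers["expires"]);
    console.log("x-cache:", res.headers["x-cache"]); // "Hit from cloudfront" / "Miss from cloudfront"
  });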
Related to your question, I think the problem is in setting a correct TTL (>0) on your origins/behaviours in CloudFront.
Also, AWS CloudFront (as of 30 March 2017) lets you set up custom read and keep-alive timeouts for custom origins.