Can I prevent RestSharp from decoding a gzip response?

I'm trying to use RestSharp as a transparent proxy between a client and a resource that sends data gzipped. Obviously I don't want my proxy to waste resources on decoding and re-encoding the gzipped responses, but it seems like this is done automatically.
Is there some way to tell RestSharp not to decode any gzipped responses so that I could just read the raw gzip bytes and return these to the client?

Related

Azure Web Application Firewall (WAF) not differentiating file uploads from normal posts and returning 413

The Azure WAF can be configured to check the maximum size of a request.
Even with this configuration in place, any time we upload a file the WAF treats it as a "not file upload" operation and returns 413 "Request entity too large" if the file exceeds 128 KB.
We are sending the POST request with what we think are the right headers:
Content-disposition: attachment; filename="testImage.jpg"
Content-Length: 2456088
Content-Type: image/jpeg
But it does not make a difference. Any idea why the WAF does not see this as a file upload and apply the Max file upload check instead of the Max request body size limit?
After several conversations with Microsoft, we found that the WAF only treats content as a file upload if it is sent using multipart/form-data.
If you send it this way, the WAF will understand it is a file and will apply the limits configured for files rather than those for request bodies.
For now, the WAF does not support any other way of sending files.
From the documentation:
Only requests with Content-Type of multipart/form-data are considered for file uploads. For content to be considered as a file upload, it has to be a part of a multipart form with a filename header. For all other content types, the request body size limit applies.
Please note that the filename header also needs to be present in the request for the WAF to consider it a file upload.
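To make that requirement concrete, here is a minimal sketch of a browser-side upload in TypeScript (the endpoint URL and form field name are placeholders, not from the original post):

async function uploadImage(file: Blob) {
  const form = new FormData();
  // The third argument becomes filename="testImage.jpg" in the part's
  // Content-Disposition header, which is what the WAF looks for.
  form.append("file", file, "testImage.jpg");

  // Do not set Content-Type manually; the browser adds
  // "multipart/form-data; boundary=..." by itself.
  return fetch("https://example.com/upload", { method: "POST", body: form }); // placeholder endpoint
}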

How to get JSON out of gzip-ed .json file in S3 from frontend Javascript?

Here's a simplified version of the issue I'm running into - basically, I'm just trying to get data out of S3 that is in a .gz file (MOCK_DATA.json.gz).
I'm using axios to try and retrieve the data from the S3 URL. I've heard that generally, there's a way to get the response automatically decompressed + decoded by just setting your headers to allow content-encoding: gzip.
At a high level, I have something like this:
axios.get("http://<my bucket>.s3.amazonaws.com/MOCK_DATA.json.gz", { headers: headers })
  .then(response => { /* do stuff with response */ })
When I try to log the response, it looks like it's still gzipped and I'm not sure of the best way to approach this.
I've tried setting some headers on the request to specify the expected content type but so far to no avail.
I could also try just manually decoding the response once it has been received but I've been told that it should be happening automatically. Does anyone have tips on how I should be approaching this or if there might be a misunderstanding on how decoding on the client side works?
I figured out the issue - rather than messing around with the frontend code (the consumer of the .gz file), I just had to add some metadata to the S3 object itself.
It had automatically set the content type on upload:
Content-Type: application/x-gzip
But I had to also set:
Content-Encoding: gzip
in the S3 object properties in order to get the value decoded properly when dealing with it from the JS code.
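The same metadata can also be set at upload time instead of in the console. A minimal sketch with the AWS SDK for JavaScript v3 (bucket name, region and key are placeholders):

import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // assumed region

async function uploadGzippedJson() {
  await s3.send(new PutObjectCommand({
    Bucket: "my-bucket",                 // placeholder bucket name
    Key: "MOCK_DATA.json.gz",
    Body: await readFile("MOCK_DATA.json.gz"),
    ContentType: "application/x-gzip",   // the type S3 picked automatically, per the answer above
    ContentEncoding: "gzip",             // the extra property that makes the client decode it
  }));
}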

how does alamofire know when there is a change in a JSON online?

I'm working on an app which fetches JSON from a website. Everything is working properly and I'm using Alamofire.
But for some reason, when I post new content on the website and the JSON file changes, Alamofire doesn't get the new content; instead, it loads the old content from the cache rather than redownloading it.
The only workaround is to clear the cache, which I'd rather not do, since the user would then have to download all the content again on every view load.
So what I'm asking is: is there a way to tell Alamofire about the new content so that it loads it, instead of my having to implement a method that clears the cache?
Alamofire uses the Foundation URL loading system, which relies on NSURLCache. The cache behavior for HTTP requests is determined by the contents of your HTTP response's Cache-Control headers. For example, you may wish to configure your server to specify must-revalidate:
Cache-Control: max-age=3600, must-revalidate
You should also make sure your server is specifying ETag and Content-Length headers to make it easy to tell when content has changed.
NSHipster's writeup on NSURLCache has a few good examples. If you're totally new to web caching, I recommend you read the very helpful section 13 of the HTTP 1.1 spec, and possibly also this caching tutorial.
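For illustration only, here is a minimal sketch of a server response carrying those headers, written with Node's built-in http module in TypeScript (the payload and port are made up):

import { createServer } from "node:http";
import { createHash } from "node:crypto";

const server = createServer((req, res) => {
  const body = JSON.stringify({ posts: [] });   // placeholder payload
  const etag = '"' + createHash("sha1").update(body).digest("hex") + '"';

  // A client revalidating with the ETag it already has gets a 304 and no body.
  if (req.headers["if-none-match"] === etag) {
    res.writeHead(304);
    res.end();
    return;
  }

  res.writeHead(200, {
    "Content-Type": "application/json",
    "Content-Length": Buffer.byteLength(body),
    "Cache-Control": "max-age=3600, must-revalidate",
    "ETag": etag,
  });
  res.end(body);
});

server.listen(8080);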

Enabling Playframework response gzip compression in the reverse proxy

There has been a lot of discussion about gzipping Play's response (i.e. HTML rendered from a template). Apparently the developers don't care to add it; at least, Page Speed reports that the response in 2.0.3 is not compressed and that my app is generating 70% useless traffic.
I'm using lighttpd as a reverse proxy. mod_compress is enabled for both "text/html" and "text/html; charset=utf8", but Play's response is still not compressed.
Is there any way to do this via the reverse proxy?

Serving gzipped content directly — bad thing to do?

I have my website configured to serve static content using gzip compression, like so:
<link rel='stylesheet' href='http://cdn-domain.com/css/style.css.gzip?ver=0.9' type='text/css' media='all' />
I don't see any website doing anything similar. So, the question is, what's wrong with this? Am I to expect shortcomings?
More precisely, as I understand it, most websites are configured to serve normal static files (.css, .js, etc.) and gzipped content (.css.gz, .js.gz, etc.) only if the request comes with an Accept-Encoding: gzip header. Why should they be doing this when all browsers support gzip just the same?
PS: I am not seeing any performance issues at all because all the static content is gzipped prior to uploading it to the CDN which then simply serves the gzipped files. Therefore, there's no stress/strain on my server.
Just in case it's helpful, here's the HTTP Response Header information for the gzipped CSS file:
And this for gzipped favicon.ico file:
Supporting Content-Encoding: gzip isn't a requirement of any current HTTP specification; that's why there is a trigger in the form of the Accept-Encoding request header.
In practice? If your audience is using a web browser and you are only worried about legitimate users, then the chance that anyone will actually be affected by only having preprocessed gzipped versions available is very slim to none. It's a remnant of a bygone age. Browsers these days should handle being force-fed gzipped content even if they don't request it, as long as you also provide the correct headers for the content being given to them.
It's important to realise that an HTTP request/response is a conversation, and most of the headers in a request are just that: a request. For the most part, the server on the other end is under no obligation to honor any particular header, and as long as it returns a valid response that makes sense, the client should do its best to make sense of what was returned. This includes enabling gzip if the server responds that it has used it.
If your target is machine consumption however, then be a little wary. People still think that it's a smart idea to write their own HTTP/SMTP/etc parsers sometimes even though the topic has been done to death in multiple libraries for pretty much every language out there. All the libraries should support gzip just fine, but hand-rolled parsers usually won't.
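To make the "correct headers" point above concrete, here is a minimal sketch (Node/TypeScript; the URL path and file name are placeholders) of serving a pre-gzipped stylesheet without touching the bytes:

import { createServer } from "node:http";
import { createReadStream } from "node:fs";

createServer((req, res) => {
  if (req.url === "/css/style.css.gzip") {              // hypothetical path
    res.writeHead(200, {
      "Content-Type": "text/css",                       // what the content is
      "Content-Encoding": "gzip",                       // how the bytes are encoded
      "Cache-Control": "public, max-age=31536000",      // typical for versioned CDN assets
    });
    createReadStream("./css/style.css.gzip").pipe(res); // stream the stored gzip bytes as-is
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(8080);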