Any reason not to add "Cache-Control: no-transform" header to every page? - http-headers

We have recently fixed a nagging error on our website similar to the one described in How to stop javascript injection from vodafone proxy? - basically, the Vodafone mobile network was vandalizing our pages in transit, making edits to the JavaScript which broke viewmodels.
Adding a "Cache-Control: no-transform" header to the page that was experiencing the problem fixed it, which is great.
However, we are concerned that as we do more client-side development using JavaScript MVP techniques, we may see it again.
Is there any reason not to add this header to every page served up by our site?
Are there any useful transformations that this will prevent? Or is it basically just more of the same: carriers making ham-fisted attempts to minify things and potentially breaking them in the process?

The reasons not to add this header are performance and data transfer. Some proxy / CDN services transcode or recompress media in transit, so if your clients are behind such a proxy, or you are using a CDN service that does this, they may get faster loads and use less data. This header orders the proxy / CDN not to transform the media and to leave the data as-is.
So, if you don't care about this, or your app doesn't serve many files like images or music, or you don't want any intermediary touching your traffic, there is no reason not to do this (on the contrary, it is recommended).
See the RFC here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.5
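As a minimal sketch (assuming a plain Node.js server; adapt to your framework's middleware), adding the header to every response looks like this:

// Send Cache-Control: no-transform on every response. no-transform tells
// intermediaries (carrier proxies, CDNs) not to modify the payload;
// other caching directives can be appended as needed.
const http = require('http');

const server = http.createServer((req, res) => {
  res.setHeader('Cache-Control', 'no-transform');
  res.end('Hello');
});

server.listen(8080);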

Google has recently introduced the googleweblight service, so if your pages have the "Cache-Control: no-transform" header directive, you'll be opting out of having your pages transcoded when the connection comes from a mobile device with a slow internet connection.
More info here:
https://support.google.com/webmasters/answer/6211428?hl=en


Cloudflare fails to purge cached content for a blog post

I run my blog behind Cloudflare. When I create a new post, the change doesn't show up on my site. I tried many methods that I learned from various sites, like using a page rule to bypass the Cloudflare cache, but it didn't work. I also turned off Auto Minify for JS, CSS, and HTML, and it still doesn't work. My blog still shows the oldest posts, from five days ago. When you log in to the WordPress dashboard you see the current posts, but a normal visitor sees the cached posts, which stay static the whole time.
Here are my Cloudflare settings:
Page Rules Settings
Page Speed Settings
Page Caching Settings
I'd appreciate help from anyone who knows about this problem and how to solve it.
Thanks!
Looking at the site now, it looks like you have perhaps disabled Cloudflare? I'm seeing the latest posts, and there are no Cloudflare response headers coming back.
For troubleshooting caching issues, one of the most useful things you can do is inspect the response headers (using the Network tab in Chrome DevTools or similar). The first step is to identify which request is responsible for the cached content (the document itself, an AJAX call, etc.).
From there, you can look at the response headers to see why it is behaving this way; specifically, you'll want to check the Cache-Control and CF-Cache-Status headers. More info here - https://developers.cloudflare.com/cache/about/default-cache-behavior
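If you'd rather check from a script than DevTools, here is a quick sketch (Node 18+ with its built-in fetch; the URL is a placeholder for whichever page is serving stale content):

// Print the caching-related response headers for a page.
// CF-Cache-Status values like HIT, MISS, DYNAMIC, or BYPASS tell you
// whether Cloudflare served the response from its cache.
fetch('https://example.com/latest-post').then((res) => {
  console.log('cache-control:', res.headers.get('cache-control'));
  console.log('cf-cache-status:', res.headers.get('cf-cache-status'));
});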
I fixed this problem: all of my site's performance features are managed by Ezoic. I purged everything on Ezoic, and now everything works perfectly.

CSRF tokens - Do we need to use them in most cases?

So I essentially went on an epic voyage to figure out how to implement CSRF tokens. 20 years later - and now I feel like I just wasted my life. haha
So basically, after making malicious test clients and doing some re-reading, it looks like it's virtually not a problem if:
1) You don't allow outdated browsers (they don't enforce CORS).
2) You don't allow CORS (you don't set the "Access-Control-Allow-Origin" header on the resources).
3) You use a JSON API (all requests and responses send JSON).
4) You take care of XSS (attackers can inject code that will run from the same origin).
So as long as you take care of XSS (Reactjs! Holla) - all of the above (minus the old-browser part, I guess) is basically common practice and an out-of-the-box setup - so it seems like a waste of time to worry about CSRF tokens.
Question:
So in order to avoid throwing my laptop under a moving car - is there any reason that I did all that work adding CSRF tokens if I am already adhering to the 4 prevention strategies mentioned above?
Just fun info - wanted to share one juicy find my tests came across:
The only iffy thing I found in my tests is "GET" requests and an image tag,
e.g.
<img src="http://localhost:8080/posts" onload={this.doTheHackerDance} />
The above will pass your cookie, and therefore access the endpoint successfully, but apparently since it is expecting an image - it returns nothing - so you don't get to do the hacker dance. :)
BUUUUT if that endpoint does other things besides return data like a good little "GET" request(like update data) - a hacker can still hit a "dab!" on ya (sorry for viral dance move reference).
tl;dr - Requiring JSON requests mitigates CSRF, as long as this is checked server-side using the content-type header.
Do we need to use them in most cases?
In most other cases, yes, although there are workarounds for AJAX requests.
You don't allow outdated browsers(they don't enforce CORS)
You don't allow CORS by setting the "Access-Control-Allow-Origin" on the resources.
CORS is not required to exploit a CSRF vulnerability.
If Bob has cookies stored for your site, CORS allows your site to allow other sites to read from it, using Bob's browser and cookies.
CORS weakens the Same Origin Policy - it does not add additional security.
The Same Origin Policy (generally - see below for caveat) does not prevent the request from being made to the server, it just stops the response being read.
The Same Origin Policy does not restrict non-JavaScript requests in any way (e.g. POSTs made by <form> or <img> HTML elements).
Browsers that do not support CORS also do not support cross-origin AJAX requests at all.
Therefore while not outputting CORS headers from your site is good for other reasons (other sites cannot access Bob's session), it is not enough to prevent CSRF.
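To make that concrete, here is a hypothetical attacker page (the target URL and form fields are invented for the example) showing that a cross-site POST needs neither CORS nor any modern API:

<!-- attacker.html: the victim's cookies for your-site.example are sent automatically -->
<form action="https://your-site.example/api/transfer" method="POST">
  <input type="hidden" name="amount" value="1000">
  <input type="hidden" name="to" value="attacker">
</form>
<script>document.forms[0].submit();</script>

The attacker never reads the response (the Same Origin Policy blocks that); the damage is the state change on the server.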
You use a JSON API(all requests-responses is sending JSON).
Actually, if you are setting the content-type to application/json and verifying this server-side, you are mitigating CSRF (this is the caveat mentioned above).
Cross-origin AJAX requests can only use the following content-types:
application/x-www-form-urlencoded
multipart/form-data
text/plain
and these requests are the only ones that can be made using HTML (form tags or otherwise).
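As a sketch of that server-side check (plain Node.js; the 415 response and route handling are illustrative), rejecting anything not declared as JSON:

// HTML forms cannot send application/json, so requiring that
// content-type (and verifying it server-side) blocks form-based CSRF.
const http = require('http');

const server = http.createServer((req, res) => {
  const type = req.headers['content-type'] || '';
  if (req.method === 'POST' && !type.startsWith('application/json')) {
    res.statusCode = 415; // Unsupported Media Type
    return res.end('JSON required');
  }
  // ... normal request handling ...
  res.end('ok');
});

server.listen(8080);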
You take care of XSS (they can inject code that will run from same origin).
Definitely. XSS is almost always a worse vulnerability than CSRF. So if you're vulnerable to XSS you have other problems.
BUUUUT if that endpoint does other things besides return data like a good little "GET" request (like update data) - a hacker can still hit a "dab!" on ya (sorry for viral dance move reference).
This is why GET is designated as a safe method. It should not make changes to your application state. Either use POST as per the standard (recommended), or protect these GETs with CSRF tokens.
Please just follow OWASP's guidelines: "General Recommendation: Synchronizer Token Pattern". They know what they're doing.
CSRF countermeasures are not hard if you're using a framework. It's dead simple with Spring Security, for example. If you're not using a security framework, you're screwing up big time. Keep it simple: use one general method to protect against CSRF that you can reuse across many types of projects.
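A bare-bones sketch of the synchronizer token pattern (hand-rolled in Node.js for illustration only - the in-memory session store is a stand-in; in practice, use your framework's built-in support):

// Synchronizer token pattern, reduced to its core: generate a random
// token per session, embed it in each form, and verify it on every
// state-changing request.
const crypto = require('crypto');

const sessions = new Map(); // stand-in for a real session store

function issueToken(sessionId) {
  const token = crypto.randomBytes(32).toString('hex');
  sessions.set(sessionId, token);
  return token; // embed in a hidden form field or a response header
}

function verifyToken(sessionId, submittedToken) {
  const expected = sessions.get(sessionId);
  // timingSafeEqual avoids leaking the token through timing differences
  return Boolean(
    expected &&
    submittedToken &&
    expected.length === submittedToken.length &&
    crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(submittedToken))
  );
}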
CSRF is orthogonal to CORS. You are vulnerable even if you disallow CORS on your server and your users use the latest Chrome. You can CSRF with HTML forms and some JavaScript.
CSRF is orthogonal to XSS. You are vulnerable even if you have no XSS holes on your server and your users use the latest Chrome.
CSRF can happen against JSON APIs. Rely on Adobe keeping Flash secure at your own peril.
The new SameSite cookie attribute will help, but you need anti-CSRF tokens until then.
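For reference, a sketch of what opting in to SameSite looks like (the cookie name and values are illustrative):

// SameSite=Lax keeps the session cookie off cross-site POSTs,
// which is exactly what CSRF relies on.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Set-Cookie', 'session=abc123; HttpOnly; Secure; SameSite=Lax');
  res.end('ok');
}).listen(8080);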

Browser-side suggested HTTP/2 server push

Are there any specific spec'd processes that a browser client can use to dynamically encourage a server to push additional items into the browser cache using HTTP/2 server push, before the client actually needs them? (Not talking about server-sent events or WebSockets here, btw, but rather HTTP/2 server push.)
There is nothing (yet) specified formally for browsers to ask a server to push resources.
A browser could figure out which secondary resources are needed to render a primary resource, and could send this information to the server opportunistically on a subsequent request in an HTTP header, but as I said, this is not specified yet.
[Disclaimer, I am the Jetty HTTP/2 maintainer]
Servers, on the other hand, may learn about the resources that browsers ask for, and may build a cache of correlated resources that they can push to clients.
Jetty provides a configurable PushCacheFilter that implements the strategy above, and has implemented an HTTP/2 Push Demo.
The objective of server push is that the server sends additional files (e.g. JavaScript, CSS) along with the requested URL (e.g. an HTML page) to the browser before the browser knows which related files it needs, thus saving a round-trip and improving page load speed. If the browser already knows which resources it needs, it can request them with normal HTTP calls.
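To illustrate the server side of this, here is a sketch using Node.js's built-in http2 module (rather than the Jetty filter described above; the key/cert paths, file contents, and port are placeholders):

// When the browser asks for the page, proactively push the CSS it will
// need, saving the round-trip for the follow-up request.
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),   // placeholder paths
  cert: fs.readFileSync('server.crt'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (err) return; // the client may have disabled push
      pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
      pushStream.end('body { margin: 0; }');
    });
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<link rel="stylesheet" href="/style.css"><h1>Hello</h1>');
  } else {
    stream.respond({ ':status': 404 });
    stream.end();
  }
});

server.listen(8443);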

Can you modify http request headers in a Safari extension?

I can do this in FF and IE, and I know it doesn't exist in Chrome yet. Anybody know if you can do this in a Safari plugin? I can't find anything that says one way or another in the documentation.
Edit (November 2021): as pointed out in the comments, ParosProxy seems to no longer exist (and was last released ~2006 from what I can see). There are more modern options for debugging on Mac (outside of browser plugins on non-Safari browsers) like Proxyman. Rather than adding another list of links that might expire, I'll instead advise people to search for "debugging proxy" on their platform of choice instead.
Original Answer (2012):
The Safari "Develop" menu in advanced preferences allows you to partially customize headers (like the user agent), but it is quite limited.
However, if a particular browser or app does not allow you to alter the headers, just take it out of the equation. You can use things like Fiddler or ParosProxy (and many others) to alter the requests regardless of the application sending the request.
They also have the advantage of allowing you to make sure that you are sending the same headers regardless of the application in question and (depending on your requirements) potentially work across multiple browsers and apps without modification.
Safari has added extension support, but its APIs don't give you granular control over requests and responses compared to Chrome/Firefox/Edge.
To get granular control over your requests and responses, you need to set up a system-wide proxy instead.
The Requestly Desktop App automatically does this for you, and on top of that you can make various types of modifications too, like:
Modify Request/Response Headers
Redirect URLs
Modify Response
Delay Network request
Insert Custom Scripts
Change User-Agent
Here's an article about header modification using Requestly:
https://requestly.io/feature/modify-request-response-headers/
Disclaimer: I work at Requestly

Images on SSL enabled site with Internet explorer

I have a problem with my site after implementation of SSL that images do not appear. The scenario is that images come from images.domain.com (hosted on Amazon S3) and my certificate is for www.domain.com.
This problem only seems to happen in IE and not in any other browsers.
The issue is related to "mixed content" - HTTPS pages which have HTTP resources (images, scripts, etc) embedded.
The point of using HTTPS is to ensure that only the originating server and the client have access to the secured page. In theory, however, this security could be compromised if HTTP resources are embedded: a man-in-the-middle could intercept an unsecured JavaScript file and inject code that alters the secured page on load.
Most browsers will indicate that a secure page has mixed content by altering the "secure lock" icon, either by showing the lock as open or broken, or by making the icon red (Chrome displayed a skull and crossbones for a short time, but they realised that this was a bit much for the potential threat level).
Internet Explorer (depending on the version) will display a message either asking whether the insecure content should be shown (IE<=7), or whether only the secure content should be shown (IE>=8). It sounds like you have somehow disabled this message to always hide the insecure content, however that's not the default behaviour.
I think the best solution for you is to replace your S3 links with HTTPS versions.
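Concretely, that is just a scheme change in the markup (the hostname is from the question; the image path is made up), provided images.domain.com presents a valid certificate over HTTPS:

<!-- Before: loads over HTTP and triggers the mixed-content warning -->
<img src="http://images.domain.com/photo.jpg" />
<!-- After: the same resource over HTTPS, no warning -->
<img src="https://images.domain.com/photo.jpg" />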
I am not a web developer, but someone who often deals with the crap experience that is IE. I am not sure which version you are using, but you do not have a wildcard SSL cert (i.e. *.domain.com), so could it have something to do with an old-school limitation on third-party images?
See here for what I allude to above and a very good explanation of how IE caches cross-domain HTTPS content, specifically images. I am not sure what the solution is, but I was curious, so I researched a little myself, and this might help.