So I essentially went on an epic voyage to figure out how to implement CSRF tokens. 20 years later - and now I feel like I just wasted my life. haha
So basically, after making malicious test clients and doing some re-reading, it looks like CSRF is virtually not a problem if:
1) You don't allow outdated browsers (they don't enforce CORS).
2) You don't allow CORS, i.e. you don't set the "Access-Control-Allow-Origin" header on your resources.
3) You use a JSON API (all requests and responses send JSON).
4) You take care of XSS (attackers could otherwise inject code that runs from the same origin).
So as long as you take care of XSS (Reactjs! Holla) - all of the above (minus the old-browser part, I guess) is basically common practice and an out-of-the-box setup - so it seems like a waste of time to worry about CSRF tokens.
Question:
So in order to avoid throwing my laptop under a moving car - is there any reason that I did all that work adding CSRF tokens if I am already adhering to the 4 prevention strategies mentioned above?
Just Fun Info - wanted to share one juicy find my tests came across:
The only iffy thing I found with my tests is "GET" requests and an image tag,
e.g.
<img src="http://localhost:8080/posts" onload="doTheHackerDance()" />
The above will pass your cookie, and therefore hit the endpoint successfully, but apparently, since the browser is expecting an image, it returns nothing - so you don't get to do the hacker dance. :)
BUUUUT if that endpoint does other things besides return data like a good little "GET" request (like update data) - a hacker can still hit a "dab!" on ya (sorry for the viral dance move reference).
tl;dr - Requiring JSON requests mitigates CSRF, as long as this is checked server-side using the Content-Type header.
Do we need to use them in most cases?
In most other cases, yes, although there are workarounds for AJAX requests.
You don't allow outdated browsers (they don't enforce CORS)
You don't allow CORS, i.e. you don't set the "Access-Control-Allow-Origin" header on your resources.
CORS is not required to exploit a CSRF vulnerability.
If Bob has cookies stored for your site, CORS is what would allow your site to let other sites read from it, using Bob's browser and cookies.
CORS weakens the Same Origin Policy - it does not add additional security.
The Same Origin Policy (generally - see below for a caveat) does not prevent a request from being made to the server; it just stops the response from being read.
The Same Origin Policy does not restrict non-JavaScript requests in any way (e.g. POSTs made by <form>, or GETs made by <img> tags).
Browsers that do not support CORS also do not support cross-origin AJAX requests at all.
Therefore, while not outputting CORS headers from your site is good for other reasons (other sites cannot access Bob's session), it is not enough to prevent CSRF.
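To make this concrete, here's a rough sketch of the kind of forged request the Same Origin Policy does not block (the target URL and field names are made up for illustration):

// Script on a page the attacker controls. Bob's browser attaches his
// cookies for your site automatically; the attacker cannot read the
// response, but the state change on your server has already happened.
var form = document.createElement('form');
form.method = 'POST';
form.action = 'https://your-site.example/transfer'; // hypothetical endpoint
var field = document.createElement('input');
field.type = 'hidden';
field.name = 'amount'; // hypothetical parameter
field.value = '1000';
form.appendChild(field);
document.body.appendChild(form);
form.submit();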
You use a JSON API (all requests and responses send JSON).
Actually, if you are setting the Content-Type to application/json and verifying this server-side, you are mitigating CSRF (this is the caveat mentioned above).
Cross-origin AJAX requests can only use the following content-types:
application/x-www-form-urlencoded
multipart/form-data
text/plain
and requests with these content types are the only ones that can be made using HTML (form tags or otherwise).
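As a rough sketch of what that server-side check might look like (assuming an Express-style Node app - an assumption on my part, not something from the original answer):

var express = require('express'); // assumed dependency
var app = express();

// Reject any state-changing request whose Content-Type is not JSON.
// HTML forms and simple cross-origin AJAX can only send the three
// content types listed above, so they all fail this check.
app.use(function (req, res, next) {
  if (req.method === 'GET' || req.method === 'HEAD') return next();
  var type = req.headers['content-type'] || '';
  if (type.indexOf('application/json') !== 0) {
    return res.status(415).send('JSON requests only');
  }
  next();
});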
You take care of XSS (attackers could otherwise inject code that runs from the same origin).
Definitely. XSS is almost always a worse vulnerability than CSRF. So if you're vulnerable to XSS you have other problems.
BUUUUT if that endpoint does other things besides return data like a good little "GET" request (like update data) - a hacker can still hit a "dab!" on ya (sorry for the viral dance move reference).
This is why GET is designated as a safe method: it should not make changes to your application state. Either use POST as per the standard (recommended), or protect these GETs with CSRF tokens.
Please just follow OWASP's guidelines: "General Recommendation: Synchronizer Token Pattern". They know what they're doing.
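For reference, a minimal sketch of the synchronizer token pattern (assuming an Express app with session and body-parsing middleware already configured; route and field names are hypothetical):

var crypto = require('crypto');

// On render: issue one random token per session and embed it in the form.
app.get('/transfer', function (req, res) {
  if (!req.session.csrfToken) {
    req.session.csrfToken = crypto.randomBytes(32).toString('hex');
  }
  res.send('<form method="POST" action="/transfer">' +
           '<input type="hidden" name="csrf" value="' + req.session.csrfToken + '">' +
           '<button>Transfer</button></form>');
});

// On submit: reject the request unless the posted token matches the session's.
app.post('/transfer', function (req, res) {
  if (!req.body.csrf || req.body.csrf !== req.session.csrfToken) {
    return res.status(403).send('CSRF token mismatch');
  }
  // ...perform the state change...
  res.send('OK');
});

An attacker's page can trigger the POST, but it cannot read your form to learn the token, so the check fails.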
CSRF counter-measures are not hard if you're using a framework - it's dead simple with Spring Security, for example. If you're not using a security framework, you're screwing up big time. Keep it simple: use one general method to protect against CSRF that you can use on many types of projects.
CSRF is orthogonal to CORS. You are vulnerable even if you disallow CORS on your server and your users use the latest Chrome. You can CSRF with HTML forms and some JavaScript.
CSRF is orthogonal to XSS. You are vulnerable even if you have no XSS holes on your server and your users use the latest Chrome.
CSRF can happen against JSON APIs. Rely on Adobe keeping Flash secure at your own peril.
The new SameSite cookie attribute will help, but you need anti-CSRF tokens until then.
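For illustration, setting the attribute might look like this (a Node-style sketch; cookie name and value are made up):

// A SameSite cookie is not attached to requests initiated by other sites,
// which is what defeats cookie-based CSRF once browser support is universal.
res.setHeader('Set-Cookie', 'session=abc123; HttpOnly; Secure; SameSite=Lax');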
The AppScan report insists that my site has some problems with Cross-Site Request Forgery.
1) Using a token on forms is a good solution, but the report flags pages without forms, like the "Logout" page. It just kills the session and that's all; I don't understand how CSRF could be exploited there.
2) Is checking the "Referer" header a good solution? Everyone says no, and the Referer header is not always present.
3) SameSite cookies are shown as the ultimate solution, but I don't use cookies at all.
What needs to be done to mitigate CSRF?
I have seen in many places that one of the benefits of token-based authentication over cookie-based authentication is that it is better suited for CORS/cross-domain scenarios.
But why?
Here is a CORS scenario:
An HTML page served from http://domain-a.com makes an <img> src request for http://domain-b.com/image.jpg.
Even if there's a token on my machine, how could the mere <img> tag know where to find it and send it?
And according to here, it is suggested to store the JWT as a cookie, so how could that survive the CORS/cross-domain scenario?
ADD 1
Token-based authentication is easier to scale out than the session-cookie approach. See a related thread here: Stateless web application, an urban legend?
Just for clarification: requests to any subdomain you own are also considered cross-origin requests (e.g. you make a request from www.example.com to api.example.com).
A simple <img> GET request to another origin is indeed a cross-origin request as well, but browsers do not use preflighted (OPTIONS) requests if you only use GET, HEAD, or POST requests and your Content-Type header is one of the following:
application/x-www-form-urlencoded
multipart/form-data
text/plain
Therefore a simple <img> request to another origin won't be a problem (regardless of whether it goes to a subdomain or a totally different domain), as it won't go through preflight - unless it requires credentials, because when you add an Authorization header, the request needs to go through preflight.
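A quick illustration of the difference, using fetch (the URLs are hypothetical):

// "Simple" cross-origin request: GET with no custom headers - no preflight.
fetch('https://api.example.com/image.jpg');

// Adding an Authorization header makes the request non-simple, so the
// browser sends an OPTIONS preflight before the actual request.
fetch('https://api.example.com/data', {
  headers: { 'Authorization': 'Bearer my-token' }
});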
About storing in localStorage vs. in a cookie: localStorage follows a single-origin policy, meaning you cannot access data stored under another subdomain; i.e., example.com cannot access the localStorage of api.example.com. On the other hand, with cookies you can define which subdomains can access the cookie, so you can read the token stored in the cookie and send it to the server with your requests. Cookies also don't allow access to data across different domains.
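For example, a cookie set like this (hypothetical values) is visible to both www.example.com and api.example.com:

// The Domain attribute widens the cookie to example.com and all its subdomains.
document.cookie = 'token=abc123; Domain=example.com; Path=/; Secure';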
Hope this helps.
We are developing a RESTful API that handles various events. We ran a Nessus vulnerability scan to look for security leaks. It turned out that we have some leaks that lead to clickjacking, and we have found the solution: I added X-Frame-Options: SAMEORIGIN in order to handle the problem.
My question here is: since this is an API, do I need to handle clickjacking? I guess a 3rd-party user should be able to reach my API over an iframe, and I don't need to handle this.
Am I missing something? Could you please share your ideas?
Edit 2019-10-07: @Taytay's PR has been merged, so the OWASP recommendation now says that the server should send an X-Frame-Options header.
Original answer:
OWASP recommends that the client-facing site sends an X-Frame-Options header, but makes no mention of the API itself.
I see no scenario where it makes any sense for the API to return clickjacking security headers - there is nothing to be clicked in an iframe!
OWASP recommends that not only do you send an X-Frame-Options header but that it is set to DENY.
These are recommendations not for a web site but for a REST service.
The scenario where it makes sense to do this is exactly the one the OP mentioned - running a vulnerability scan.
If you do not return a correct X-Frame-Options header, the scan will fail. This matters when proving to customers that your endpoint is safe.
It is much easier to provide your customer a passing report than have to argue why a missing header does not matter.
Adding an X-Frame-Options header should not affect the endpoint consumer, as it is not a browser with an iframe.
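If you do add it just to satisfy the scan, it can be a one-line middleware (an Express-style sketch; assumes that's your stack):

// Sets X-Frame-Options: DENY on every response from the API.
app.use(function (req, res, next) {
  res.setHeader('X-Frame-Options', 'DENY');
  next();
});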
We have recently fixed a nagging error on our website similar to the one described in How to stop javascript injection from vodafone proxy? - basically, the Vodafone mobile network was vandalizing our pages in transit, making edits to the JavaScript which broke viewmodels.
Adding a "Cache-Control: no-transform" header to the page that was experiencing the problem fixed it, which is great.
However, we are concerned that as we do more client-side development using JavaScript MVP techniques, we may see it again.
Is there any reason not to add this header to every page served up by our site?
Are there any useful transformations that this will prevent? Or is it basically just similar examples of carriers making ham-fisted attempts to minify things and potentially breaking them in the process?
The reason not to add this header is performance and data transfer.
Some proxy/CDN services transcode media, so if your client is behind such a proxy or you are using a CDN service, the client may get higher speed and use less data. This header orders the proxy/CDN not to transform the media and to leave the data as-is.
So if you don't care about this, or your app doesn't serve many files like images or music, or you don't want any transformation of your traffic, there is no reason not to add it (on the contrary, it's recommended).
See the RFC here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.5
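Applying it site-wide can be as simple as the following (an Express-style sketch; assumes that's your stack):

// Tells intermediaries (carrier proxies, transcoding CDNs) to leave
// the response body untouched.
app.use(function (req, res, next) {
  res.setHeader('Cache-Control', 'no-transform');
  next();
});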
Google has recently introduced the googleweblight service, so if your pages have the "Cache-Control: no-transform" header directive, you'll be opting out of having your pages transcoded when the connection comes from a mobile device with a slow internet connection.
More info here:
https://support.google.com/webmasters/answer/6211428?hl=en
I'm very new to CSRF protection, so please excuse me if I make poor assumptions or if I'm missing something, but I'd like to make sure I'm doing all I can to prevent CSRF. From my research so far, I've found the following:
1) CSRF can be thwarted (and is probably best thwarted) by placing a nonce parameter within the request body of all HTML POSTs, only using POSTs to modify data, and verifying server-side that the token is valid before processing the request.
2) A malicious website can send requests to my site (thwarted by the nonce), but it can't read the responses because of the same-origin policy in place in browsers. Assuming someone is using a secure browser, a malicious site cannot GET a page from my site using AJAX, read the nonce, and use it for itself.
3) Script tags are not bound by the same-origin policy in most (possibly any) browsers and can therefore allow content to be read from other sites.
When I got to point 3, I decided to try to get at HTML content in Chrome using JSONP; I opened my console (on a page not from localhost) and ran the following code:
// jQuery injects a <script> tag pointing at the target URL; the "jsonp"
// option renames the callback query parameter to "alert".
$.ajax({
  url: "http://localhost:8080/my-app/",
  dataType: "jsonp",
  jsonp: "alert"
}).done(function(data) {
  alert(data);
});
What I received in the console window was this:
Resource interpreted as Script but transferred with MIME type text/html
As far as I can tell, the browser is essentially telling me that it received the content and proceeded to parse the response, but stopped because the type was not application/json. So finally, my question is: can this be compromised? Even though the browser failed to parse the response as JSON, it does have the response. Is there a way this response could be parsed as HTML, grab my CSRF nonce, and compromise the protections I'm trying to enforce?
It would seem to me (and I hope this is true) that browsers will not allow this, just like they don't allow cross-domain requests in the first place for basically all other communication, and that's what we as developers are relying upon (in addition to same-origin policies) to protect our sites. Is my thinking correct?
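For what it's worth, a common hardening step that addresses exactly this worry is to serve responses with the correct MIME type and tell the browser not to second-guess it (a Node-style sketch, offered as an assumption rather than part of the original question):

// With nosniff, the browser refuses to execute a response as script
// when it was served with a non-script type such as text/html or
// application/json.
res.setHeader('Content-Type', 'application/json');
res.setHeader('X-Content-Type-Options', 'nosniff');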