CORS complaint on Safari 5.1.7 (Windows 7)

I have a page at http://www4.example.com that tries to make an XHR connection to http://www6.example.com/.
The browser sends a GET request with this header:
Origin: http://www4.example.com
The www6.example.com server sends back:
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: http://www4.example.com
Connection: Keep-Alive
Content-Encoding: gzip
Content-Type: text/plain
Date: ...
Keep-Alive: timeout=5, max=100
Transfer-Encoding: Identity
Server: Apache/2.2.20 (Ubuntu)
Vary: Accept-Encoding
X-Powered-By: PHP/5.3.6-13ubuntu3.7
And yet I get:
XMLHttpRequest cannot load http://www6.example.com/myscript.php?xhr=1&t=1234333223. Origin http://www4.example.com is not allowed by Access-Control-Allow-Origin.
My code matches my understanding of the CORS standard and works fine in Chrome, Firefox, Opera, etc., so I'm going to assume this is a Safari 5.1 bug. My question is: what do I need to do to work around it?
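The request is made along these lines (a simplified sketch, not my exact code; withCredentials is assumed because the server replies with Access-Control-Allow-Credentials: true):

// Simplified sketch of the cross-origin GET; the t parameter is a cache-buster.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://www6.example.com/myscript.php?xhr=1&t=' + Date.now());
xhr.withCredentials = true; // assumed, to match Access-Control-Allow-Credentials: true
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send();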

After a lot of trial and error, and watching network traffic, I think I can self-answer.
The Safari bug is that it sends an OPTIONS pre-flight request first, even though it is a GET request.
To add some extra complexity, it appears to only send this on the 2nd request. (I think this is because my 2nd request sends an extra custom header... but I couldn't actually isolate that, so I think there is something else going on as well - perhaps cache interactions?)
Sending Access-Control-Allow-Headers in the main response does not fix the problem: it does the OPTIONS request first, so never gets that far.
The fix I did was to put this at the very top of the PHP script:
if ($_SERVER['REQUEST_METHOD'] == 'OPTIONS') {
    // Echo the caller's origin back; @ suppresses the notice if the header is absent
    header('Access-Control-Allow-Origin: ' . @$_SERVER['HTTP_ORIGIN']);
    header('Access-Control-Allow-Credentials: true');
    header('Access-Control-Allow-Headers: Last-Event-Id, Origin, X-Requested-With, Content-Type, Accept, Authorization');
    exit;
}
Sending back "Access-Control-Allow-Headers: *" did not work. You have to explicitly list the headers you want. I briefly experimented and it appears they are case-insensitive.
Sending back "Access-Control-Allow-Methods: POST, GET, OPTIONS" was not needed.
As an aside, Cookies are sent, but basic auth details are not sent (despite explicitly listing the Authorization header there). This might be a deliberate limitation of the CORS implementation, as of this version of WebKit (534.57.2), not a bug.

Related

Vue Firebase Verify ID Token CORS issue

I am trying to verify an ID Token using the Firebase Admin SDK as per instructions. My current auth code looks like this (in Vue):
// Auth.vue, inside the firebaseui config callback
signInSuccessWithAuthResult: function(authResult, redirectUrl) {
  authResult.user
    .getIdToken(/* forceRefresh */ true)
    .then(function(idToken) {
      // Send token to your backend via HTTPS
      // ...
      console.log(idToken);
    })
    .catch(function(error) {
      // Handle error
      console.log(error);
    });
}
The login works fine and I can get authResult perfectly. However, it seems the getIdToken call is the problem, as I get the following error in my console:
Cross-Origin Request Blocked:
The Same Origin Policy disallows reading the remote resource at
https://securetoken.googleapis.com/v1/token?key=AIzaSyApp5yu051vMJlNLoQ1ngVSd-f2k7Pdavc.
(Reason: CORS request did not succeed).
In my request list, the one hanging is an OPTIONS method, with the following headers:
OPTIONS /v1/token?key=AIzaSyApp5yu051vMJlNLoQ1ngVSd-f2k7Pdavc HTTP/1.1
Host: securetoken.googleapis.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:62.0) Gecko/20100101 Firefox/62.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.8,pt-BR;q=0.5,de;q=0.3
Accept-Encoding: gzip, deflate, br
Access-Control-Request-Method: POST
Access-Control-Request-Headers: x-client-version
Origin: http://localhost:8080
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
I am not even sure where the problem lies. Is it coming from the Vue side? I am running it on a dev server (a simple yarn serve, Vue CLI 3). Would the solution be to run Vue on a production server where I can actually configure CORS?
Any light on the matter is extremely welcome...
Thanks!!
Figured it out.
I was calling it in the wrong place. What helped was this thread, which pointed me to preflighted requests, which is what the OPTIONS request is:
"preflighted" requests first send an HTTP request by the OPTIONS method to the resource on the other domain, in order to determine whether the actual request is safe to send. Cross-site requests are preflighted like this since they may have implications to user data.
So I realized I should not be sending this request within my POST request, where I got the authorization in the first place. Moving it to another method made it work.
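In other words, something along these lines (a sketch of the fix; /api/verify-token is a made-up placeholder for your own backend endpoint):

// Fetch the ID token in its own step, then forward it to the backend.
function sendTokenToBackend(user) {
  return user
    .getIdToken(/* forceRefresh */ true)
    .then(function(idToken) {
      // Send the token to the (hypothetical) verification endpoint
      return fetch('/api/verify-token', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ token: idToken })
      });
    });
}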

How and where to set Access-Control-Expose-Headers for koa-cors

I am attempting to get some headers sent from my server to my front end via a fetch request.
In the controller function, I am explicitly sending some headers like this:
exports.getItems = async (ctx) => {
  ctx.set('Search-type', 'category');
};
In Postman, when I make a GET request to my server, I get these headers:
Connection: keep-alive
Content-Length: 6442
Content-Type: application/json; charset=utf-8
Date: Thu, 19 Apr 2018 16:10:54 GMT
Search-type: category
However, when I try to access the header in the fetch request from the front end, I can only log the Content-Type. How do I get Search-type from my fetch?
After some googling, I found this issue on GitHub, which seems very similar to mine. This led me to another GitHub issue page with the suggestion that I need to 'expose some explicitly needed headers'.
In the koa/cors documentation, there is an allowHeaders option (Access-Control-Allow-Headers). What I want to know is: how do I expose the headers so I can get them on my front end?
In the response to the GET, in addition to adding the Access-Control-Allow-Origin response header, you also need to include the Access-Control-Expose-Headers: <comma-separated-list-of-headers> response header.
If that header isn't returned by the server, even though the headers are sent by the server to the browser, the browser blocks any non-standard headers from being accessed by JavaScript. So you can see Content-Type (because it's a 'standard' response header), but not Search-type.
Basically, you need to ensure that the server responds with this
Access-Control-Expose-Headers: Search-type
(in addition to any other CORS response headers, like Access-Control-Allow-Origin, of course).
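With the @koa/cors middleware, that maps to the exposeHeaders option. A minimal sketch (the app setup around it is assumed):

const Koa = require('koa');
const cors = require('@koa/cors');

const app = new Koa();

// exposeHeaders sets Access-Control-Expose-Headers on responses,
// which is what lets the front end read the custom header.
app.use(cors({ exposeHeaders: ['Search-type'] }));

app.use(async (ctx) => {
  ctx.set('Search-type', 'category');
  ctx.body = { items: [] };
});

app.listen(3000);

On the front end, the header should then be readable via response.headers.get('Search-type') on the fetch response.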

Edge browser appears to discard response payload

I have a web app that returns a PDF to the browser. This works fine in Chrome and Firefox; however, it does not work in Edge (version 38.14393.0.0). The response headers look like this:
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Allow-Origin: *
Cache-Control: no-store, no-cache, must-revalidate
Content-Disposition: inline; filename="Invoice.PDF"
Content-Length: 9255
Content-Type: application/pdf
Date: Mon, 30 Jan 2017 04:38:25 GMT
Pragma: no-cache
Server: Microsoft-IIS/10.0
I've seen other questions suggesting that 2 requests are sent from Edge, one of which should be ignored, but in my case there is only one, and it appears that the payload is being ignored because Edge reports:
This resource has no response payload data
...which is in the Response Body section for the request in the Network analyzer in Developer Tools. Using attachment instead of inline also does not work, and I wouldn't expect it to if Edge thinks there's no content in the response.
Any clues?
Edit: when the Content-Type is changed to text/plain, the response body is no longer discarded (though that does not solve the problem), so I assume it's something specific to application/pdf.
In my case, it appears the payload data is being discarded because I was attempting to use createObjectURL on the result. This post:
Setting window.location or window.open in AngularJS gives "access is denied" in IE 11
has a solution that kind of works: Edge still does not actually open the PDF the way other browsers can, but it gives the user the option to save it.
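A sketch of that kind of fallback (assuming the PDF has already been fetched as a Blob; msSaveOrOpenBlob is the Edge/IE-only API, and the filename is arbitrary):

// Edge/IE expose msSaveOrOpenBlob, which prompts the user to save or
// open the file; other browsers can open the PDF via an object URL.
function showPdf(blob, filename) {
  if (window.navigator && window.navigator.msSaveOrOpenBlob) {
    window.navigator.msSaveOrOpenBlob(blob, filename);
  } else {
    window.open(URL.createObjectURL(blob));
  }
}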

Cannot generate an authorization code on API Explorer

I'm trying to collect and download my Lifelog user data. The first step in doing this is getting a user access token. I am encountering problems while requesting authorization.
From the Sony developer authentication page, I am told to input the following into my API Explorer:
https://platform.lifelog.sonymobile.com/oauth/2/authorize?client_id=YOUR_CLIENT_ID&scope=lifelog.profile.read+lifelog.activities.read+lifelog.locations.read
I am supposed to receive the authorization code like this:
https://YOUR_CALLBACK_URL?code=abcdef
However, this is what the current situation actually looks like. (I have replaced my actual client ID below with MY_CLIENT_ID for security reasons.)
INPUT:
GET /oauth/2/authorize?client_id=MY_CLIENT_ID&scope=lifelog.profile.read%2Blifelog.activities.read%2Blifelog.locations.read HTTP/1.1
Authorization: Bearer kN2Kj5BThn5ZvBnAAPM-8JU0TlU
Host: platform.lifelog.sonymobile.com
X-Target-URI: https://platform.lifelog.sonymobile.com
Connection: Keep-Alive
RESPONSE:
HTTP/1.1 302 Found
Content-Length: 196
Location: https://auth.lifelog.sonymobile.com/oauth/2/authorize?scope=lifelog.profile.read+lifelog.activities.read+lifelog.locations.read&client_id=MY_CLIENT_ID
Access-Control-Max-Age: 3628800
X-Amz-Cf-Id: HILH9w3eOm-6ebs_74ghegYQyWS4xyqA1l0gXPRJuuubsoZ6eiiS3g==
Access-Control-Allow-Methods: GET, PUT, POST, DELETE
X-Request-Id: 76caccfc976d40259ef30415d10980e9
Connection: keep-alive
Server: Apigee Router
X-Cache: Miss from cloudfront
X-Powered-By: Express
Access-Control-Allow-Headers: origin, x-requested-with, accept
Date: Sun, 22 Jan 2017 03:00:42 GMT
Access-Control-Allow-Origin: *
Vary: Accept
Via: 1.1 dc698cd00b7ec82887573cfaba9ecca6.cloudfront.net (CloudFront)
Content-Type: text/plain; charset=utf-8
Found. Redirecting to https://auth.lifelog.sonymobile.com/oauth/2/authorize?scope=lifelog.profile.read+lifelog.activities.read+lifelog.locations.read&client_id=MY_CLIENT_ID
Nowhere can I see the authorization code in the above response. I even tried copying and pasting the URL (on the last line) into my browser; it says "localhost.com took too long to respond".
This is where I input my request
I am not sure whether it is an issue with the callback URL. I don't have an actual website or app, so I just used the default localhost.
I am a beginner in this and would really appreciate all help.

Logout via Set-Cookie fails

We moved our website to a new hosting provider a while ago and sporadically experience issues where people cannot log out anymore. I'm not sure whether that has anything to do with the hosting environment or with a code change.
This is the Wireshark log of the relevant bit; it all happens in the same TCP stream.
Logout request from the browser (note the authentication cookie):
GET /cirrus/logout HTTP/1.1
Host: subdomain.domain.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:26.0) Gecko/20100101 Firefox/26.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://subdomain.domain.com/cirrus/CA/Admin/AccountSwitch
Cookie: USER.AUTH=AOvDEjH3w6xIxUC0sYNOAQR5BZ7pPmEF0RMxqohERN87Ti03Eqxd7rQC/BveqmaszmFg8QoSonP+Z+mtQQivKpvloFsQYretYKR8ENubj+moUBF479K5e4albKxS9mBEWT5Xy/XCnEyCPqLASGLY09ywkmIilNU1Ox4J3fCtYXHelE/hyzuKe9y3ui5AKEbbGs3sN9q1zYjVjHKKiNIGaHvjJ2zn7ZUs042B82Jc9RHzt0JW8dnnrl3mAkN1lJQogtlG+ynQSCyQD8YzgO8IpOnSXLJLaCMGMQcvSyX4YKJU/9sxgA5r5cZVCkHLsReS3eIJtXoxktMO6nxVZJY6MX1YwuJOgLRQvwBy9FFnQ6ye
X-LogDigger-CliVer: client-firefox 2.1.5
X-LogDigger: logme=0&reqid=fda96ee5-2db4-f543-81b5-64bdb022d358&
Connection: keep-alive
Server response, which clears the cookie value and redirects:
HTTP/1.1 302 Found
Server: nginx
Date: Fri, 22 Nov 2013 14:40:22 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 124
Connection: keep-alive
Cache-Control: private, no-cache="Set-Cookie"
Location: /cirrus
Set-Cookie: USER.AUTH=; expires=Fri, 22-Jul-2005 14:40:17 GMT; path=/cirrus
X-Powered-By: ASP.NET
X-UA-Compatible: chrome=IE8
<html><head><title>Object moved</title></head><body>
<h2>Object moved to here.</h2>
</body></html>
Browser follows the redirection, but with the old cookie value:
GET /cirrus HTTP/1.1
Host: subdomain.domain.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:26.0) Gecko/20100101 Firefox/26.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://subdomain.domain.com/cirrus/CA/Admin/AccountSwitch
Cookie: USER.AUTH=AOvDEjH3w6xIxUC0sYNOAQR5BZ7pPmEF0RMxqohERN87Ti03Eqxd7rQC/BveqmaszmFg8QoSonP+Z+mtQQivKpvloFsQYretYKR8ENubj+moUBF479K5e4albKxS9mBEWT5Xy/XCnEyCPqLASGLY09ywkmIilNU1Ox4J3fCtYXHelE/hyzuKe9y3ui5AKEbbGs3sN9q1zYjVjHKKiNIGaHvjJ2zn7ZUs042B82Jc9RHzt0JW8dnnrl3mAkN1lJQogtlG+ynQSCyQD8YzgO8IpOnSXLJLaCMGMQcvSyX4YKJU/9sxgA5r5cZVCkHLsReS3eIJtXoxktMO6nxVZJY6MX1YwuJOgLRQvwBy9FFnQ6ye
X-LogDigger-CliVer: client-firefox 2.1.5
X-LogDigger: logme=0&reqid=0052e1e1-2306-d64d-a308-20f9fce4702e&
Connection: keep-alive
Is there anything obvious missing in the Set-Cookie header which could prevent the browser from deleting the cookie?
To change the value for an existing cookie, the following cookie parameters must match:
name
path
domain
name and path are set explicitly; the domain is not. Could that be the problem?
Edit: As it has been asked why the expiration date is set in the past, a bit more background.
This is using a slight modification of the AppHarbor Security plug-in: https://github.com/appharbor/AppHarbor.Web.Security
The modification is to include the path to the cookie. Please find here the modified logout method:
public void SignOut(string path)
{
    _context.Response.Cookies.Remove(_configuration.CookieName);
    _context.Response.Cookies.Add(new HttpCookie(_configuration.CookieName, "")
    {
        Expires = DateTime.UtcNow.AddMonths(-100),
        Path = path
    });
}
Setting the expiration date in the past is done by the AppHarbor plug-in and is common practice. See http://msdn.microsoft.com/en-us/library/ms178195(v=vs.100).aspx
At a guess, I'd say the historical expiry date is causing the whole Set-Cookie line to be ignored (why set a cookie that expired 8 years ago?):
expires=Fri, 22-Jul-2005
We have had issues with deleting cookies in the past, and yes, the domain and path must match those of the cookie you are trying to delete. Try setting the correct domain and path in the HttpCookie.
Great question, and excellent notes. I've had this problem recently also.
There is one fail-safe approach to this, beyond what you ought to already be doing:
Set expiration in the past.
Set a path and domain.
Put bogus data in the cookie being removed!
Set-Cookie: USER.AUTH=invalid; expires=Fri, 22-Jul-2005 14:40:17 GMT; path=/cirrus; domain=subdomain.domain.com
The fail-safe approach goes like this:
Add a special string to all cookies, at the end. Unless that string exists, reject the cookie and forcibly reset it. For example, all new cookies must look like this:
Set-Cookie: USER.AUTH=AOvDEjH3w6xIxUC0sYNOAQR5BZ7pPmEF0RMxqohERN87Ti03Eqxd7rQC/BveqmaszmFg8QoSonP+Z+mtQQivKpvloFsQYretYKR8ENubj+moUBF479K5e4albKxS9mBEWT5Xy/XCnEyCPqLASGLY09ywkmIilNU1Ox4J3fCtYXHelE/hyzuKe9y3ui5AKEbbGs3sN9q1zYjVjHKKiNIGaHvjJ2zn7ZUs042B82Jc9RHzt0JW8dnnrl3mAkN1lJQogtlG+ynQSCyQD8YzgO8IpOnSXLJLaCMGMQcvSyX4YKJU/9sxgA5r5cZVCkHLsReS3eIJtXoxktMO6nxVZJY6MX1YwuJOgLRQvwBy9FFnQ6ye|1386510233; expires=Fri, 22-Jul-2005 14:40:17 GMT; path=/cirrus; domain=subdomain.domain.com
Notice the change: that extremely long string stored in USER.AUTH now ends with |1386510233, which is the Unix timestamp of the moment the cookie was set.
This adds a simple extra step to cookie parsing. Now you need to test for the presence of | and to discard the unix epoch unless you care to know when the cookie was set. To make it go faster, you can just check for string[length-10]==| rather than parsing the whole string. In the way I do it, I split the string at | and check for two values after the split. This bypasses a two-part parsing process, but this aspect is language specific and really just preferential when it comes to your choice of tactic. If you plan to discard the value, just check the specific index where you expect the | to be.
In the future if you change hosts again, you can test that unix epoch and reject cookies older than a certain point in time. This at the very most adds two extra processes to your cookie handler: removing the |unixepoch and if desired, checking when that time was to reject a cookie if you change hosts again. This adds about 0.001s to a pageload, or less. That is worth it compared to customer service failures and mass brain damage.
Your new cookie strategy allows you to easily reject all cookies without the |unixepoch immediately, because you know they are old cookies. And yes, people might complain about this approach, but it is the only way to truly know the cookie is valid, really. You cannot rely on the client side to provide you valid cookies. And you cannot keep a record of every single cookie out there, unless you want to warehouse a ton of data. If you warehouse every cookie and check it every time, that can add 0.01s to a pageload versus 0.001s in this strategy, so the warehousing route is not worth it.
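A sketch of that split-and-check (the names and the cutoff value here are illustrative):

// Illustrative sketch of the '|unixepoch' validation described above.
var CUTOFF = 1386510233; // e.g. the moment of the host migration

function parseAuthCookie(value) {
  var parts = value.split('|');
  if (parts.length !== 2) return null;             // no suffix: old cookie, reject
  var setAt = parseInt(parts[1], 10);
  if (isNaN(setAt) || setAt < CUTOFF) return null; // stale: reject and reset
  return parts[0];                                 // the real auth payload
}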
An alternative approach is to use USER.AUTHENTICATION rather than USER.AUTH as your new cookie name, but that is perhaps more invasive. And you don't gain the benefit of what I said above if/when you change hosts again.
Good luck with your transition. I hope you get this sorted out. Using the strategy above, I was able to.