Workaround for third-party cookies from the server side - Safari

We have a customer with their own client UI application (www.myclient.com). When it makes an API call to our server (www.iamserver.com), we set cookies via the Set-Cookie header in the response, and we expect those cookies to be sent on subsequent requests from the client (a third-party-cookie scenario).
Problem: recently, due to an organizational policy, our client's browsers have been blocked from using third-party cookies. As a result, calls to our server fail because the cookies are never stored or sent.
Is there any server-side workaround that doesn't require any change in the client application? I'm looking for answers only from the server side.
I tried changing the Domain attribute of the cookie we set to the client's domain. That still doesn't work, because the browser blocks the cookie, saying the "domain attribute was invalid with regards to the current host url".
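For illustration, the kind of Set-Cookie we are talking about looks roughly like the sketch below (Node.js/TypeScript; the endpoint, cookie name, and port are made up). As far as I understand, the Domain attribute may only name the responding server's own registrable domain, which is why pointing it at the client's domain is rejected, and a cross-site cookie additionally needs SameSite=None and Secure. Even with those attributes, a policy that blocks third-party cookies outright will still discard it.

```typescript
// Sketch only (Node.js + TypeScript): how a cross-site cookie is normally
// declared. Endpoint, cookie name, and port are made up; TLS termination is
// assumed to sit in front of this server, since Secure cookies require HTTPS.
import { createServer } from "http";

const server = createServer((req, res) => {
  if (req.url === "/api/session") {
    // Domain may only cover the responding server's own site (iamserver.com);
    // setting it to myclient.com is what the browser rejects as invalid.
    // SameSite=None; Secure is required for the cookie to be sent cross-site.
    res.setHeader(
      "Set-Cookie",
      "session=abc123; Domain=iamserver.com; Path=/; Secure; HttpOnly; SameSite=None"
    );
    res.end("ok");
    return;
  }
  res.statusCode = 404;
  res.end();
});

server.listen(8080);
```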
Browser: chrome
Any references/pointers are deeply appreciated.
Thanks in advance.

Related

Is it safe to redirect non-SSL requests to the SSL version of a site?

There is an API. Previously, all requests were made over a plain, non-SSL connection (no encryption), e.g. http://api.com/dosomething. The logic has changed now, and it is a bit of a problem to change the URL for all clients who are using this API. There is an HTTPS version of the API site. Is it safe to redirect all requests from http://api.com/dosomething to https://api.com/dosomething on the server side (Apache or nginx)? How does that work?
Your API consumer transmits everything in the clear: all its data, authentication, etc. And on your new server you're redirecting to the "same" URL, just over HTTPS? The HTTPS connection itself will be secure, but all of your data and authentication will already have leaked.
Since we don't know anything about your API consumer, it could technically be a web browser that honors "secure" cookies, i.e. it might not transmit the authentication in the clear. But even then, all of the data will already be out. Since you say you can't update the clients, I'm assuming you're not in that situation.
So: the answer is no, it's not secure. Retire the old API and keep track of anyone still accessing it. Once they're few enough, notify them that the HTTP service is being discontinued so they upgrade. Or stay unsafe; choose your poison.
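For completeness, the redirect itself is just a 3xx response on the plain-HTTP listener. A minimal Node.js/TypeScript sketch is below (host names are placeholders; in practice you would more likely do this in the Apache or nginx config). Note that it does nothing for the original request, which has already crossed the wire in clear text by the time it is issued.

```typescript
// Minimal sketch (Node.js + TypeScript) of the http -> https redirect the
// question describes. Host names are placeholders.
import { createServer } from "http";

const redirector = createServer((req, res) => {
  // 301 sends the client to the same path on the HTTPS endpoint.
  // The original request (including any credentials or data it carried)
  // has already crossed the network unencrypted by the time this runs.
  res.writeHead(301, { Location: `https://api.com${req.url ?? "/"}` });
  res.end();
});

redirector.listen(80);
```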

401 unauth response in Kerberos

I just noticed that with Kerberos authentication, the client browser always gets a 401 response first (with a WWW-Authenticate: Negotiate header), and in the next request the actual Kerberos token is sent for authentication (handled internally by the browser).
For the first request that's fine, but why is this process repeated for every subsequent request? Once the client knows that the server supports Kerberos, why doesn't the client store a cookie to remind itself that it needs to send the auth token every time?
I understand that the NTLM protocol is designed like this, but I want to understand why.
HTTP is stateless. Unless the server tells the client it should persist some state (via a server cookie), the client should never assume anything about the server's intent.
More to the point, it's wrong to assume that either party can always do Kerberos. The server originally said it wanted to Negotiate, and Negotiate advertises a set of available protocols in preferred order (Kerberos, NTLM, etc.). A client can do Kerberos when it has line of sight to a KDC, it can do NTLM in most circumstances, and it prefers Kerberos.
Additionally, once the client is authenticated, the server may respond with a session cookie. The browser doesn't understand its contents, so it has no idea what happened. The server must therefore always indicate to the browser that it needs to authenticate again (via 401 + WWW-Authenticate).
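To make the shape of that exchange concrete, here is a minimal Node.js/TypeScript sketch of the server side: any request that arrives without an Authorization header gets challenged with 401 plus WWW-Authenticate: Negotiate, which is exactly the step that repeats on every request unless the application issues its own session cookie. Real token validation goes through SSPI/GSSAPI; validateNegotiateToken below is only a stand-in.

```typescript
// Minimal sketch (Node.js + TypeScript) of the 401/Negotiate challenge shape.
// Real SPNEGO token validation happens via SSPI/GSSAPI (e.g. a native module);
// validateNegotiateToken is a placeholder.
import { createServer } from "http";

function validateNegotiateToken(token: string): boolean {
  // Placeholder: a real server hands the SPNEGO token to Kerberos/NTLM here.
  return token.length > 0;
}

const server = createServer((req, res) => {
  const auth = req.headers["authorization"];

  if (!auth || !auth.startsWith("Negotiate ")) {
    // No token yet: challenge the client. Without an application-issued
    // session cookie, this happens on every single request.
    res.writeHead(401, { "WWW-Authenticate": "Negotiate" });
    res.end();
    return;
  }

  const token = auth.slice("Negotiate ".length);
  if (!validateNegotiateToken(token)) {
    res.writeHead(401, { "WWW-Authenticate": "Negotiate" });
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("authenticated");
});

server.listen(8080);
```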

How can I authenticate a websocket connection where client and server reside on separate domains?

I'm currently playing around with SignalR and WebSockets. From my research it seems that, because WebSockets do not support custom headers, there are basically only two ways to authenticate a WebSocket connection with token-based authentication:
1) Passing the token in the query string
2) Storing the token in a cookie which then gets passed to the server when WithCredentials is set to true
The first method isn't great practice: even though WebSocket communication is encrypted, query strings may be logged by servers, etc.
I have the second method working on my local machine, but it doesn't work once deployed, because my client and server reside on different domains. So basically, I have an Angular site on one domain (e.g. client.com) and a Web API site that allows CORS on a completely different domain (e.g. server.com). In my browser, while I'm on client.com, I cannot set a cookie that gets sent to server.com on a request.
What is a good way to authenticate websockets when client and server sit on different domains?
The WebSocket protocol specification doesn't prescribe any particular way of authenticating. You need to perform the authentication during the handshake phase, and for that you can use any HTTP authentication mechanism such as Basic, Digest, etc.
Further, you could look into JWT-based token authentication. The Angular app can store the token in local storage and send it to the server during the handshake request. If the token is invalid, the server can reject the WebSocket upgrade request, and the Angular app can redirect the user to the login page.
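Roughly, the server side of that rejection could look like the sketch below (Node.js/TypeScript with the ws package). verifyJwt is a stand-in, and the token is assumed to arrive in an Authorization header, which a browser cannot set on a WebSocket handshake, so treat the carrier (cookie, subprotocol, one-time ticket, etc.) as an open design choice.

```typescript
// Sketch (Node.js + TypeScript, "ws" package): validate a token during the
// WebSocket handshake and refuse the upgrade otherwise. verifyJwt is a
// placeholder; how the client attaches the token is an assumption.
import { createServer } from "http";
import { WebSocketServer } from "ws";

function verifyJwt(token: string | undefined): boolean {
  // Placeholder: a real implementation checks signature, expiry, audience, etc.
  return !!token && token.split(".").length === 3;
}

const httpServer = createServer();
const wss = new WebSocketServer({ noServer: true });

httpServer.on("upgrade", (request, socket, head) => {
  // Assumed carrier: "Authorization: Bearer <jwt>" (possible for non-browser
  // clients or a proxy that injects it; browsers need another carrier).
  const auth = request.headers.authorization ?? "";
  const token = auth.startsWith("Bearer ") ? auth.slice("Bearer ".length) : undefined;

  if (!verifyJwt(token)) {
    // Terminate the upgrade: the client never gets a WebSocket connection.
    socket.write("HTTP/1.1 401 Unauthorized\r\n\r\n");
    socket.destroy();
    return;
  }

  wss.handleUpgrade(request, socket, head, (ws) => {
    wss.emit("connection", ws, request);
  });
});

httpServer.listen(8080);
```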

Identity cookie expiry

How would I redirect to the login page automatically once my Identity cookie has expired at its "ExpireTimeSpan" value? I understand there is an "OnRedirectToLogin" event, but that doesn't get triggered unless a request comes through. Is there a way I can redirect to login right after the cookie has expired, rather than keep sending requests to verify that it has timed out?
Unless I misunderstand, what you want is for the server to reach out to the client, but standard client/server HTTP works the other way around. The client is supposed to send requests to the server, and at some point it gets redirected if its authentication cookie has expired; but if the cookie expires and the client never asks the server for anything again, then it doesn't need to be told that anything has expired. Communication in the other direction, where the server notifies the client, can be achieved by several means, but it should be reserved for very particular needs. Are you sure you need that?
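That said, if all you want is for the page to land on the login screen once the known cookie lifetime has elapsed, a purely client-side timer approximates it without any server push. A rough sketch follows; the lifetime and login URL are assumptions that would have to mirror your server's configuration.

```typescript
// Client-side sketch (TypeScript, browser): approximate the redirect with a
// timer. The lifetime and login URL are assumptions; the server is never
// involved, so this is only as accurate as the client's clock and its
// knowledge of ExpireTimeSpan.
const expireTimeSpanMs = 20 * 60 * 1000; // must mirror the server's ExpireTimeSpan

function scheduleLoginRedirect(loginUrl: string): void {
  window.setTimeout(() => {
    // By now the Identity cookie should have lapsed on the server side too.
    window.location.assign(loginUrl);
  }, expireTimeSpanMs);
}

scheduleLoginRedirect("/Account/Login?expired=true");
```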

Is an HTTPS query string secure?

I am creating a secure, web-based API that uses HTTPS; however, if I allow users to configure it (including sending a password) using a query string, will this also be secure, or should I force it to be done via a POST?
Yes, it is. But using GET for sensitive data is a bad idea for several reasons:
Mostly HTTP referrer leakage (an external image in the target page might leak the password[1])
Password will be stored in server logs (which is obviously bad)
History caches in browsers
Therefore, even though the query string is encrypted, it's not recommended to transfer sensitive data in it.
[1] Although I should note that the RFC states browsers should not send referrers from HTTPS to HTTP, that doesn't mean a bad third-party browser toolbar or an external image/Flash object on an HTTPS site won't leak it.
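To make the server-log point concrete: typical access logs record the request line, which includes the full query string, but never the request body. A rough Node.js/TypeScript illustration (the log format is simplified):

```typescript
// Rough illustration (Node.js + TypeScript): access logs record the request
// line. A GET with ?password=... lands in the log verbatim; a POST body does not.
import { createServer } from "http";

const server = createServer((req, res) => {
  // Simplified access-log line: method, URL (including the query string), status.
  // e.g.  GET /login?username=alice&password=12345 200
  console.log(`${req.method} ${req.url} 200`);
  res.end("ok");
});

server.listen(8080);
```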
From a "sniff the network packet" point of view a GET request is safe, as the browser will first establish the secure connection and then send the request containing the GET parameters. But GET url's will be stored in the users browser history / autocomplete, which is not a good place to store e.g. password data in. Of course this only applies if you take the broader "Webservice" definition that might access the service from a browser, if you access it only from your custom application this should not be a problem.
So using post at least for password dialogs should be preferred. Also as pointed out in the link littlegeek posted a GET URL is more likely to be written to your server logs.
Yes, your query strings will be encrypted.
The reason is that query strings are part of HTTP, which is an application-layer protocol, while the security (SSL/TLS) comes from the transport layer. The SSL connection is established first, and only then are the query parameters (which belong to the HTTP request) sent to the server.
When establishing an SSL connection, your client will perform the following steps in order. Suppose you're trying to log in to a site named example.com and want to send your credentials using query parameters. Your complete URL may look like the following:
https://example.com/login?username=alice&password=12345
Your client (e.g., a browser or mobile app) will first resolve your domain name example.com to an IP address (124.21.12.31) using a DNS request. When querying that information, only domain-specific information is used, i.e., only example.com is sent.
Now, your client will try to connect to the server at IP address 124.21.12.31 on port 443 (the SSL service port, not the default HTTP port 80).
Now, the server at example.com will send its certificates to your client.
Your client will verify the certificates and start exchanging a shared secret key for your session.
After successfully establishing a secure connection, only then will your query parameters be sent via the secure connection.
Therefore, you won't expose sensitive data on the wire. However, sending your credentials over an HTTPS session this way is still not the best approach; you should use a different method.
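For instance, the usual alternative is to put the credentials in the body of a POST over the same HTTPS connection; a rough sketch (the endpoint and field names are made up):

```typescript
// Sketch (TypeScript, fetch API): sending credentials in a POST body over
// HTTPS instead of in the query string. Endpoint and field names are made up.
async function login(username: string, password: string): Promise<Response> {
  return fetch("https://example.com/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The body is encrypted by TLS just like a query string would be, but it
    // stays out of browser history, bookmarks, and typical server logs.
    body: JSON.stringify({ username, password }),
  });
}

// Usage: await login("alice", "12345");
```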
Yes. The entire text of an HTTPS session is secured by SSL. That includes the query and the headers. In that respect, a POST and a GET would be exactly the same.
As to the security of your method, there's no real way to say without proper inspection.
SSL first connects to the host, so the hostname and port number are transferred as clear text. When the host responds and the handshake succeeds, the client will encrypt the HTTP request with the actual URL (i.e. anything after the third slash) and send it to the server.
There are several ways to break this security.
It is possible to configure a proxy to act as a "man in the middle". Basically, the browser sends its request to connect to the real server to the proxy. If the proxy is configured this way, it connects via SSL to the real server, but the browser still talks to the proxy. So if an attacker can gain access to the proxy, he can see all the data that flows through it in clear text.
Your requests will also be visible in the browser history. Users might be tempted to bookmark the site. Some users have bookmark sync tools installed, so the password could end up on deli.ci.us or some other place.
Lastly, someone might have hacked your computer and installed a keyboard logger or a screen scraper (and a lot of Trojan Horse type viruses do). Since the password is visible directly on the screen (as opposed to "*" in a password dialog), this is another security hole.
Conclusion: When it comes to security, always rely on the beaten path. There is just too much that you don't know, won't think of and which will break your neck.
Yes, as long as no one is looking over your shoulder at the monitor.
I don't agree with the statement about [...] HTTP referrer leakage (an external image in the target page might leak the password) in Slough's response.
The HTTP 1.1 RFC explicitly states:
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
Anyway, server logs and browser history are more than sufficient reasons not to put sensitive data in the query string.
Yes, from the moment you establish an HTTPS connection everything is secure. The query string (GET), just like the POST body, is sent over SSL.
You can send the password as an MD5 hash parameter with some salt added, and compare it on the server side for authentication.