I am using embedded Jetty on the server side, which accepts both HTTP and WebSocket requests. I am using org.eclipse.jetty.security.authentication.FormAuthenticator for authenticating users. After the user logs in, my JavaScript code opens a WebSocket connection to the server. I want to know how the server can authenticate this WebSocket client, to avoid accepting WebSocket connections from unauthorized clients (say, a Java client).
Assuming you have set up embedded Jetty properly, it will use the same security-constraint system as a normal webapp to validate the user role assigned to the URL pattern for that WebSocket.
This prevents the WebSocket upgrade from even occurring (or being attempted on the server side) if the user isn't authenticated and set up with an authorized role for that WebSocket.
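For example, here is a minimal sketch of wiring a ConstraintSecurityHandler with a FormAuthenticator in embedded Jetty (Jetty 9.x-style APIs; the /ws/* path, "user" role, login pages, and realm.properties file are placeholders for illustration, not taken from your setup):

    import org.eclipse.jetty.security.ConstraintMapping;
    import org.eclipse.jetty.security.ConstraintSecurityHandler;
    import org.eclipse.jetty.security.HashLoginService;
    import org.eclipse.jetty.security.authentication.FormAuthenticator;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.util.security.Constraint;

    public class SecureWebSocketServer {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);

            // Require an authenticated user with the "user" role for the WebSocket path.
            Constraint constraint = new Constraint();
            constraint.setName(Constraint.__FORM_AUTH);
            constraint.setAuthenticate(true);
            constraint.setRoles(new String[] { "user" });

            ConstraintMapping mapping = new ConstraintMapping();
            mapping.setPathSpec("/ws/*");          // the WebSocket upgrade URL pattern
            mapping.setConstraint(constraint);

            ConstraintSecurityHandler security = new ConstraintSecurityHandler();
            security.setAuthenticator(new FormAuthenticator("/login.html", "/login-error.html", false));
            security.setLoginService(new HashLoginService("MyRealm", "realm.properties"));
            security.addConstraintMapping(mapping);

            ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
            context.setContextPath("/");
            context.setSecurityHandler(security);
            // ... register the WebSocket servlet/endpoint under /ws/* here ...

            server.setHandler(context);
            server.start();
            server.join();
        }
    }

With this in place, an upgrade request for /ws/* from a client that has no authenticated session is rejected before any WebSocket endpoint code runs.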
Be aware, however, that browsers behave very differently in this scenario. You will likely not get a meaningful error message out of the JavaScript WebSocket object if there is an authentication or authorization issue; it will just fail anonymously with a client-side-only close code (such as 1006) and no close reason.
Related
I'm currently playing around with SignalR and WebSockets. From my research, it seems that, since WebSockets do not support custom headers, there are basically only two ways to authenticate a WebSocket connection when using token-based authentication.
1) Passing the token in the query string
2) Storing the token in a cookie which then gets passed to the server when WithCredentials is set to true
The first method isn't great practice: even though WebSocket communication is encrypted, query strings may be logged by servers, etc.
The second method I have working on my local machine, but it doesn't work once deployed, because my client and server reside on different domains. So basically, I have an Angular site on one domain (e.g. client.com) and a Web API site that allows CORS on a completely different domain (e.g. server.com). In my browser, if I'm on client.com, I cannot set a cookie that gets sent to server.com with a request.
What is a good way to authenticate websockets when client and server sit on different domains?
The WebSocket Protocol specification doesn't specify any particular way to authenticate. You need to perform authentication during the handshake phase, and for that you can use any HTTP authentication mechanism, such as Basic or Digest.
Further, you could look into JWT token-based authentication. The Angular app can store the token in local storage and send it in a header during the handshake request to the server. If the token is invalid, the server can reject the WebSocket connection upgrade request, and the Angular app can redirect the user to the login page.
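As a rough illustration, a servlet filter mapped in front of the WebSocket upgrade URL could reject the handshake when the token is missing or invalid. This is a sketch only: TokenVerifier and the /ws/* mapping are hypothetical placeholders, and browser clients would typically have to fall back to a query parameter since they cannot set custom headers on the handshake.

    import java.io.IOException;

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Map this filter to the WebSocket upgrade path, e.g. /ws/*.
    public class TokenHandshakeFilter implements Filter {

        @Override
        public void init(FilterConfig filterConfig) {
        }

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            // Prefer an Authorization header; fall back to a query parameter for browser clients.
            String header = request.getHeader("Authorization");
            String token = (header != null && header.startsWith("Bearer "))
                    ? header.substring("Bearer ".length())
                    : request.getParameter("access_token");

            if (token == null || !TokenVerifier.isValid(token)) {   // TokenVerifier is hypothetical
                response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
                return;                                             // the upgrade never happens
            }
            chain.doFilter(req, res);
        }

        @Override
        public void destroy() {
        }
    }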
We have a couple of back-end web applications to which we want to provide access via the public internet. To that end, we are setting up a reverse proxy (IIS 7.5) from our DMZ. At the same time, we want these web applications to be claims-enabled through ADFS 2.0.
WEB1.MYCORP.COM/WFE1 is one back-end web application, on our internal network
WEB1.MYCORP.COM/WFE2 is the other back-end web application, on our internal network
ADFS.MYCORP.COM is the ADFS 2.0 server, on our internal network
FSPROXY.MYCORP.COM is the ADFS 2.0 proxy server, on our DMZ
RPROXY1.MYCORP.COM is the reverse proxy for WFE1, on our DMZ
RPROXY2.MYCORP.COM is the reverse proxy for WFE2, on our DMZ
In keeping with the proper configuration of ADFS, our internal DNS resolves ADFS.MYCORP.COM to the actual internal server, while external DNS points ADFS.MYCORP.COM to the ADFS proxy (FSPROXY).
So, here's the scenario:
End user browses to RPROXY.MYCORP.COM
Reverse proxy forwards request to WEB1.MYCORP.COM/WFE1
WFE1 redirects browser to ADFS.MYCORP.COM (actually FSPROXY)
ADFS Proxy prompts for credentials and authenticates against ADFS server
Upon successful authentication, browser redirected back to web app
I have a couple of questions. Do I need to configure something in the RP or the application to allow this? Also, the ADFS endpoint is the RP URL; is that an issue?
Do I need to set up something for the reverse proxy as well? (Should I/can I) set up a claims-enabled reverse proxy in IIS? How do I set up the reverse proxy rules to pass the ADFS request back unaltered? Currently, when I try to access the back-end application, it fails with a 401 authentication error. If I remove the proxy and just hit the app server, it works fine.
Further:
This fails:
The path is client --> rp --> app --> adfs --> rp --> app --> rp --> client machine
This works:
The path is client --> rp --> app --> adfs --> app --> rp --> client machine
Any suggestions would be greatly appreciated!
I'm not familiar with how you enabled the reverse proxy in IIS (ARR?). Something like this: http://blogs.iis.net/carlosag/setting-up-a-reverse-proxy-using-iis-url-rewrite-and-arr
One option for you is to use ADFS 2012 R2 (if possible), because its proxy, the Web Application Proxy, handles both ADFS authentication and app publishing for your claims-enabled application. There are two ways you can publish your app to the internet. One is pass-through, which is roughly what you are trying to do. But it also offers pre-authentication support for a claims-aware app. This way, you can have a separate policy that decides whether a request can get past your edge network before a packet reaches your internal application.
After doing lots of digging and Fiddler traces, I found the issue. In the test IdP setup, the token was different from the one in the staging environment. The Fiddler traces showed that the token was making it back to the app server, but the cookie also appeared to drop off for no reason. The root cause was that the old dev IdP value disagreed with the staging value... naturally. Once I cleared the old token from the database, everything worked.
I'm developing a client-server application which uses WebSocket. I have implemented token-based authentication with JWT. Once my client has a valid token, a WebSocket connection is opened indefinitely.
Is it a good idea to send the token within each request? Is there any chance of anyone to hijack the connection?
My question actually applies to any TCP-based connection which requires authentication.
Is there any chance of anyone to hijack the connection? ... My question actually applies to any TCP-based connection which requires authentication.
Yes, it is possible to hijack existing TCP connections, or simply to be the man in the middle when you start a new one. The protection against this is not to send the authentication within each message, because those messages could simply be replayed by the attacker. Instead, use encryption, i.e. wss:// in the case of WebSockets, or TLS, IPsec, or similar in other cases. These protect against both active man-in-the-middle attacks (hijacking) and passive sniffing.
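For instance, a non-browser Java client using the standard JSR-356 API can connect over wss:// and present its credentials once, during the handshake; the TLS layer then protects every subsequent frame. This is only a sketch, and the URL, token, and endpoint are placeholders:

    import java.net.URI;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;

    import javax.websocket.ClientEndpointConfig;
    import javax.websocket.ContainerProvider;
    import javax.websocket.Endpoint;
    import javax.websocket.EndpointConfig;
    import javax.websocket.Session;
    import javax.websocket.WebSocketContainer;

    public class SecureWsClient {
        public static void main(String[] args) throws Exception {
            String token = "eyJ...";  // the JWT obtained at login (placeholder)

            // Send the token once, in the handshake request; wss:// (TLS) protects
            // every subsequent frame, so there is no need to repeat the token.
            ClientEndpointConfig config = ClientEndpointConfig.Builder.create()
                    .configurator(new ClientEndpointConfig.Configurator() {
                        @Override
                        public void beforeRequest(Map<String, List<String>> headers) {
                            headers.put("Authorization",
                                    Collections.singletonList("Bearer " + token));
                        }
                    })
                    .build();

            WebSocketContainer container = ContainerProvider.getWebSocketContainer();
            Session session = container.connectToServer(new Endpoint() {
                @Override
                public void onOpen(Session s, EndpointConfig cfg) {
                    // handle the open connection (add message handlers here)
                }
            }, config, URI.create("wss://example.com/socket"));

            // ... use the session ...
            session.close();
        }
    }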
More of a theoretical question, but I'm really curious!
I have a two part application:
Apache server hosting my UI
Back-end that services all http requests from the UI
The Apache service proxies all HTTP requests from the UI to the server. So, if the user is reasonably adept, they can reverse-engineer our API by inspecting the calls in the browser's developer tools.
Thus, how do I prevent a user from using the server API directly and instead force them to use the UI?
The server can't determine whether a call came from the UI or not, because a user can make a call to myapp.com/apache-proxy/blah/blah/blah from outside of the UI; Apache will get the request and forward it to the server, which will have no idea it's not coming from the UI.
The option I see is to inject a header into requests from the UI that marks the request as originating from the UI. That seems ripe for exploitation, though.
To me, this is more of a networking question, since it's something I'd resolve at the network level. If you run your backend application on a private network (or on a public network with firewall rules), you can configure the backend host to only accept communication from your Apache server.
That way the end user can't connect directly to the API, since it's not accessible to the public. Only the allowed Apache server will be able to communicate with the backend API, so the Apache server acts as an intermediary between the end user (client side) and the backend API server.
An example diagram from AWS.
You could make the backend server require connections to be authenticated before it accepts any requests. Then make it so that only the Apache server can successfully authenticate, in a way that end users cannot replicate: for example, by using SSL/TLS between Apache and the backend, where the backend requires client certificates, and issuing Apache a client certificate that the backend will accept. End users will then not be able to authenticate with the backend directly.
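As one hedged illustration (assuming a Java/Jetty backend purely for the sake of example; the keystore paths and passwords are placeholders), the backend can refuse any connection that does not present a client certificate signed by a CA it trusts, and you would issue such a certificate only to the Apache proxy:

    import org.eclipse.jetty.server.HttpConfiguration;
    import org.eclipse.jetty.server.HttpConnectionFactory;
    import org.eclipse.jetty.server.SecureRequestCustomizer;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;
    import org.eclipse.jetty.server.SslConnectionFactory;
    import org.eclipse.jetty.util.ssl.SslContextFactory;

    public class MutualTlsBackend {
        public static void main(String[] args) throws Exception {
            Server server = new Server();

            SslContextFactory.Server ssl = new SslContextFactory.Server();
            ssl.setKeyStorePath("backend-keystore.jks");    // the backend's own server certificate
            ssl.setKeyStorePassword("changeit");
            ssl.setTrustStorePath("proxy-truststore.jks");  // contains the CA/cert issued to Apache
            ssl.setTrustStorePassword("changeit");
            ssl.setNeedClientAuth(true);                    // reject clients without a trusted cert

            HttpConfiguration httpsConfig = new HttpConfiguration();
            httpsConfig.addCustomizer(new SecureRequestCustomizer());

            ServerConnector connector = new ServerConnector(server,
                    new SslConnectionFactory(ssl, "http/1.1"),
                    new HttpConnectionFactory(httpsConfig));
            connector.setPort(8443);

            server.addConnector(connector);
            // ... set handlers for the API ...
            server.start();
            server.join();
        }
    }

On the Apache side you would then configure the proxy to present that client certificate (e.g. via mod_ssl's proxy client-certificate settings) when forwarding to the backend.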
I'm implementing certificate authentication in a servlet filter. I obtain the client cert using the httpRequest.getAttribute("javax.servlet.request.X509Certificate") API. When the user starts a browser and accesses the web server for the first time, the underlying SSL layer requests the client cert and the browser prompts for the cert selection. However, after the user logs out and I've invalidated the HTTP session, if the user does not close the browser and comes back to the web server, the underlying SSL layer does NOT trigger the browser to prompt for the cert selection again. I assume this is because the SSL session was not torn down after the user logged out from the web server at the HTTP layer.

My question is: is there a way to invalidate the underlying SSL session from the servlet? A more general question: how can I get the browser to re-prompt for the cert selection after the user logs out from the web server?
Thanks,
Gang
My question is that is there a way to invalidate the underlying SSL session from the servlet?
You would have to write a Tomcat Valve and probably alter some Tomcat source code as well. I've been several layers deep into the Tomcat HTTPS/authentication source code and I haven't seen a hook that would give you the SSLSession.
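For what it's worth, this is the rough shape such a Valve would take. As noted above, the request only exposes attributes like the SSL session id, not the SSLSession object itself, so actually invalidating the session would need changes deeper in the connector code; this sketch only shows where such logic would sit:

    import java.io.IOException;

    import javax.servlet.ServletException;

    import org.apache.catalina.connector.Request;
    import org.apache.catalina.connector.Response;
    import org.apache.catalina.valves.ValveBase;

    public class SslSessionInspectingValve extends ValveBase {

        @Override
        public void invoke(Request request, Response response) throws IOException, ServletException {
            // Tomcat exposes the SSL session id as a request attribute, but not the
            // underlying SSLSession object, so there is nothing here to invalidate.
            Object sslSessionId = request.getAttribute("javax.servlet.request.ssl_session_id");
            // ... inspect or log sslSessionId; tearing the SSL session down would require
            // changes deeper in the connector ...
            getNext().invoke(request, response);
        }
    }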
A more general question, how can I get the browser to re-prompt for the cert selection after the user logs out from Web server?
Invalidating the SSL session, if you can do it, may or may not have that effect. I would think it would be browser-specific.