How Sessions are maintained by Server - Apache

Does anyone know how a web server (Apache, Tomcat) maintains sessions?
I know how to create, handle, and destroy sessions. What I need to know is how the server maintains sessions internally.
I.e. if 10 users are connected to the server, how does the server identify which session belongs to a particular user?

Strictly speaking, your webserver (Apache) doesn't have a concept for "session"; it merely understands requests according to the HTTP protocol.
In fact, HTTP is famous for being a "stateless protocol" - there is no concept of "session". This is fundamental to the scalability of HTTP, but makes it hard to build web applications that need state.
So different web application frameworks have introduced the concept of "session".
Tomcat is not, strictly speaking, a web server; it's a servlet container.

Sessions are usually identified by a cookie with a unique ID for each user. The ID is generated and sent as a cookie when the session is first created (i.e. when the user doesn't already have a cookie).
Another approach that's sometimes seen is keeping the session ID in the URL, which is used when the client refuses to accept cookies for some reason. This has numerous drawbacks, though: security issues if the user sends their URL to someone else, having to add the ID to every link, and ugly URLs.
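As a sketch of what a server's session manager does internally (the names here are illustrative, not Tomcat's actual implementation), the core is just a map from an unguessable ID to per-user state:

```python
import secrets

# Minimal in-memory session store: a map from session ID to per-user state.
sessions = {}

def create_session():
    # A long random ID makes guessing another user's session infeasible.
    session_id = secrets.token_urlsafe(32)
    sessions[session_id] = {}          # this user's server-side state
    return session_id                  # sent back to the client as a cookie

def get_session(session_id):
    # On each request, the server looks up the cookie value to find
    # which user's state the request belongs to.
    return sessions.get(session_id)
```

So with 10 connected users, the server holds 10 entries keyed by 10 distinct IDs; each request's cookie selects the right entry.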

Related

Configure proxy in Apache to remove authentication

In the interest of avoiding yak-shaving, I'll try to provide as much context as possible.
We have an internal application that's also available on the public internet. This application runs on several instances of Apache on the IBM i - most of these instances require http basic authentication, except for one instance that acts as the 'welcome page' so to speak. This 'welcome page' has no authentication, but acts as a navigation hub with links for the user to go to other parts of the app (which DO have authentication and run on different instances of Apache).
We also have some documentation stored in Confluence (a wiki application) that runs on a separate server. This wiki application can display the documentation without requiring authentication, but if you authenticate, you then have the option to edit the documentation (assuming you're authorized to do so, of course). But the key is that the documentation is visible without requiring authentication.
My problem is: we want the documentation in Confluence to be accessible from within the main application (both when being accessed internally and over the internet) but, because the documentation is somewhat sensitive, we don't want it accessible to the internet at large.
The solution we came up with was to use a reverse proxy - we configure the Apache instances on the main application such that requests to /help/ on the main application are proxied to the confluence application. Thus, the Confluence application is not directly exposed to the Internet.
But this is where the problem starts.
If we just proxy /help/ through the main application Apache instance that doesn't require authentication, then the documentation is available from the main application without a problem - but since you don't require authentication, it's available to everyone on the Internet as well - so that's a no-go.
If we instead proxy '/help/' through the main application Apache instances that DO require authentication, it seems as though the basic authentication information is passed from the main application servers on to the Confluence server, and then we get an authentication failure, because not everyone who uses the main application has an account on the Confluence server. (For those who do, it works fine - but the majority of users won't have a Confluence account.)
(Possible yak shaving alert from this point forward)
So, it seems as though when dealing with HTTP Basic authentication, if you set up a proxy configuration from server A to server B, and set up the proxy on server A to require HTTP Basic authentication, then that authentication information is passed straight through to server B - and in this scenario, server B complains, since it doesn't expect authentication information.
My solution to that problem was to set up 2 levels of proxying - use the Apache instances requiring authentication to also require authentication for the proxy to /help/, but have /help/ proxy to a different server (Server C). This Server C doesn't require authentication but is not exposed to the internet. And Server C is configured to proxy /help/ to the actual Confluence server.
I did this on the basis of proxy-chain-auth - an environment variable which seems to indicate that by default, if you have a proxy chain, the authentication information is NOT automatically sent along the chain.
Alas, this did not work - I got an authentication error that seems to indicate that Server C did in fact proxy the authentication info onwards, even though I did not set proxy-chain-auth.
So, that's my yak-shaving journey.
I simply want to set up a configuration such that our documentation stored on Confluence requires some sort of authentication, but that authentication comes from the main application, not from Confluence.
(Without the requirement of having it accessible over the internet, none of this would've been an issue since the Confluence server can be viewed by anyone on its network without a problem).
I hope my question is clear enough - I honestly don't mind being pointed in a different direction to achieve the main goal, with the caveat that I can't change the main application (or Confluence for that matter) from using HTTP Basic Authentication.
Ideas, anyone?
PS. To retrieve the documentation from the Confluence server, I'm actually using their REST API to retrieve the page content - I don't know if that has any relevance, but I wanted to make that clear in case it does.
It turns out that the solution to the issue was pretty straightforward.
For my second proxy that does not require authentication, I had to change the Apache configuration to remove any authorization headers.
RequestHeader unset Authorization
This stops the authentication information from being passed from the second proxy onto Confluence.
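For illustration, the relevant part of the non-authenticating proxy's configuration might look something like this (hostnames, port, and paths are hypothetical):

```apache
# Hypothetical reverse-proxy stanza on the intermediate (Server C) instance.
<Location "/help/">
    # Strip the Basic credentials the user supplied upstream so they
    # are not forwarded to Confluence.
    RequestHeader unset Authorization
    ProxyPass        "http://confluence.internal:8090/help/"
    ProxyPassReverse "http://confluence.internal:8090/help/"
</Location>
```

RequestHeader here requires mod_headers to be loaded.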

Shibboleth Session Validation In Tomcat

I have an Apache/2.2.15 web server with the modules, mod_shib, mod_ssl, and mod_jk. I have a virtual host which is configured (attached below) with AuthType Shibboleth, SSLCertificates, and JKMount to direct requests using AJP to my Tomcat 8 server after a session is successfully established with the correct IDP. When my http request reaches my app server, I can see the various Shib-* headers, along with the attributes my SP requested from the IDP.
Is there a way my app server can validate the shibsession cookie or other headers? I am trying to protect against the scenario where my web server, which resides in the DMZ, is somehow compromised, and an attacker makes requests to my app server, which resides in an internal zone.
Is there a way I can validate a signature of something available in the headers, to guarantee that the contents did indeed originate from the IDP, and were not manufactured by an attacker who took control of my web server?
Is there something in the OpenSAML library I could use to achieve this?
Is there a way my app server can validate the shibsession cookie or other headers?
mod_shib has already done that difficult work for you. After validating the return of information from the Identity Provider (IdP), mod_shib then sets environment variables (cannot be set by the client) for your application to read and trust. Implementing OpenSAML in your application is unnecessary as mod_shib has done the validation work for you.
From the docs:
The safest mechanism, and the default for servers that allow for it, is the use of environment variables. The term is somewhat generic because environment variables don't necessarily always imply the actual process environment in the traditional sense, since there's often no separate process. It really refers to a set of controlled data elements that the web server supplies to applications and that cannot be manipulated in any way from outside the web server. Specifically, the client has no say in them.
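As an illustration, in a hypothetical WSGI application sitting behind mod_shib, those attributes simply appear in the request environment; a Java servlet behind AJP would typically read the same values via request.getAttribute() instead. The attribute names below are common ones, but the exact set depends on your SP's attribute-map configuration:

```python
# Sketch: reading mod_shib-supplied attributes in a WSGI application.
# These keys are set by the web server itself and cannot come from the
# client, so the application can trust them without re-validating SAML.
def shib_user(environ):
    session_id = environ.get("Shib-Session-ID")
    if not session_id:
        return None  # no authenticated Shibboleth session
    return {
        "session": session_id,
        "eppn": environ.get("eppn"),              # commonly released attribute
        "affiliation": environ.get("affiliation"),
    }
```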

What are the security ramifications of checking security with an HTTP call to an external server?

I was discussing development of an API with a colleague, and the following proposal has made me wonder about the securability of it.
Scenario:
There is a web application that is running on server A that (in addition to other functions) allows the admin user to specify arbitrary URLs, and security for the users within their account related to each URL. So basically:
URL:"/foo/bar", UserID:1234, AllowedToView:true
The admin user then has their own web application running on their own, separate server B. And they want to check if their end users that have logged in on that server B application have access to a particular URL on that server B application by checking against the API on server A.
This check could come in 2 forms:
A server-side HTTP call (from server B to server A) that happens within the context of a user requesting a URL from server B. This would look like:
1. The user requests "/foo/bar" with their client from server B
2. During the processing of that request on server B, it makes an HTTP call to server A to check if that user has access to the requested URL
3. With the response from server A, server B can then either allow the user access, or redirect, send 403 Access Denied, etc.
The second form is an AJAX request from the end user's client directly to server A, utilizing the response with JavaScript. Cross-domain scripting restrictions could be problematic here.
One challenge that comes to mind immediately is that there would have to be a way for the admin user to directly associate the end user that is accessing their web app on server B with the UserID that is associated with that user in the web app on server A. But let's assume that has been solved elegantly somehow :-D.
My question relates more to the inherent (in)security of a scenario like this. If the requests that are being made to server A (in both 1 and 2 above) are made over HTTPS, then can the integrity of the responses be counted on?
HTTPS makes sure the message can't be read or tampered with by any relaying parties (proxies, etc.), but it doesn't guarantee the source of the data is trusted. If another service can determine the other URL and wire format, it could spoof a request to it. This is generally where something like request signing comes into play, using a shared-secret signing mechanism. Twilio's API uses this method to prove to you that they're actually calling your servers. HTTP Signatures is a proposal for a standardized way of doing this.
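A sketch of such shared-secret request signing, assuming server A and server B both hold the secret (the payload shape, header name, and secret are illustrative, not a real API):

```python
import hashlib
import hmac
import json

SHARED_SECRET = b"example-shared-secret"  # illustrative; keep out of source control

def sign(payload: bytes) -> str:
    # HMAC over the body proves the request came from a holder of the secret.
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

def build_check_request(user_id: int, url: str):
    # Server B builds the access-check request and attaches the signature.
    body = json.dumps({"user_id": user_id, "url": url}).encode()
    return {"body": body, "headers": {"X-Signature": sign(body)}}

def verify_on_server_a(request) -> bool:
    # Server A recomputes the HMAC; compare_digest avoids timing leaks.
    expected = sign(request["body"])
    return hmac.compare_digest(expected, request["headers"]["X-Signature"])
```

Any party that can reach server A but lacks the secret can't produce a valid X-Signature, and any tampering with the body invalidates it.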
You can't rely on client side validation if you really want to secure your server B. That is, your second scenario - calling server A from the client side to see if it can access resources - is not a secure method. You need to count on the client to behave nicely, which of course leaves you open to attacks.
Your first scenario - a server-to-server call - is a secure and preferred method. You will still need to secure your call by signing it, or by just passing the shared secret itself, to validate the origin of the call (over HTTPS).
That said, there are ways to secure a flow that goes through the client, but it will usually involve signing the data on the server since you can't have your client sign it (you can't place your secret in the client).

Web security question regarding an API

I'm building an API with no server-side authentication. A unique key (assume the key is very long and impossible to guess) will be generated for the session, but no cookie will be set on the client. The client could be a web browser with AJAX, a PHP script using CURL, or a desktop application. The normal transaction process I'm imagining will be:
Initial encounter
The client makes an initial request, calling a start_session method
The server generates a key and returns it along with some initial data
The client stores the key for later use (e.g. JavaScript sets a cookie with the key)
Next request
The client requests the server again, calling some set_data method, providing the original session key, as well as loads of private data such as a credit card number, information about legal cases, etc.
The server processes the request and responds with a success message
Another request
The client requests the server again, providing the original session key, and calling some get_data method
The server responds with all of the private data in some format (e.g. XML, JSON, etc)
A session key expires, if not used, in 20 minutes, and all API URIs will require SSL.
My concern / question is: do I need to be worried about whether the client has leaked the session key? Without authentication, I'm trusting the original requester to keep the session key private. Is this common / safe practice?
Unless you use HTTPS throughout, you're vulnerable to HTTP sniffing, a la Firesheep.
Even if you do use SSL, if the client page isn't SSL or contains any non-SSL JavaScript (or non-SSL frames in the same domain), you're still vulnerable (and there's nothing you can do about it).
To answer your stated question, it completely depends on your situation.
EDIT: You should warn your clients (developers) in the documentation page to handle the key correctly.
Beyond that, it depends on the average skill level of the clients.
You should probably have a disclaimer of some sort (I am not a lawyer).
It's probably OK.

Using HTTPS for the client-server communication

I would like to use the HTTPS to secure the communication between my client and the server. The first encrypted communication will be used to authenticate the user - i.e. checking his/her user name and password.
After the user credentials are successfully checked by the server, I would like to start getting some data in subsequent requests. BUT how will the server determine that a subsequent request is sent by the user whose credentials were already checked?
Since the TCP connection might be closed between the login and subsequent HTTPS requests, (I think) this means that the SSL context must be released by the server, so for a new GET request a new TCP connection must be established and a new SSL (TLS) handshake must be done (i.e. a new shared secret for the encryption must be exchanged by both sides, etc.).
For this, I think the server needs to send back to the client, in the 200 OK response to the initial authentication request, some randomly generated nonce (valid for a certain time), which I will include in every subsequent request. The server will then be able to detect, based on this nonce, which user name is behind the request, and check that this user is already logged in. Is my understanding correct?
Thanks a lot for the reply
BR
STeN
The simplest method is to require all communication to go via HTTPS (so the data is confidential; nobody other than the client and the server can see it) and to use simple username and password on every request inside that secure connection. This is dead simple to do in practice (the username and password actually go over the connection as an HTTP header, which is OK here because we're using HTTPS) and the server can check every time that the user is allowed. You don't need to worry about the SSL handshakes; that's the SSL/HTTPS layer's responsibility (and that's why HTTPS/SSL is nice).
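A minimal sketch of that per-request check, assuming HTTPS everywhere; the credential store here is an illustrative plaintext dict, whereas real code should compare against salted password hashes:

```python
import base64

USERS = {"alice": "s3cret"}  # illustrative stand-in for a real credential store

def check_basic_auth(authorization_header):
    # The header looks like: "Basic " + base64("username:password").
    if not authorization_header or not authorization_header.startswith("Basic "):
        return None
    try:
        decoded = base64.b64decode(authorization_header[6:]).decode()
        username, _, password = decoded.partition(":")
    except Exception:
        return None  # malformed header
    if USERS.get(username) == password:
        return username  # the authenticated user for this request
    return None
```

Every request repeats this check, so there is no server-side session state to manage at all.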
Alternatively, the login can be done with any method and generate some kind of magic number (e.g., a UUID or a cryptographic hash of a random number and the user's name) that is stored in a session cookie. Subsequent requests can just check that the magic number is one that it recognizes from session start (and that not too much time has passed since it was issued); logout just becomes forgetting the magic number on the server side (and asking the client to forget too). It's a bit more work to implement this, but still isn't hard and there are libraries for server-side to handle the donkey work.
The first option is particularly good for where you're writing something to be used by other programs, as it is really easy to implement. The second option is better where the client is a web browser as it gives users more control over when their browser is authorized (program APIs don't tend to need that sort of thing). Whenever the client is going to be a browser, you need to take care to armor against other types of attack too (e.g., various types of request forgery) but that's pretty much independent of everything else.
Inventing a custom authentication mechanism in your case is very risky - it's easy to make a mistake that will allow a lot of wrongdoing. So the right approach, in my opinion, would be to use HTTPS and pass user credentials with each request.