Seeking input: maintaining a server session without any server state - authentication

I'm not a security expert, so I'm looking for people to poke gaping holes in an authentication scheme I've devised, or point me to a better, existing scheme that fulfills the same goals:
Overview of Problem
I have an interface in which the client maintains session lifecycle (it's an HTTP session on a web server, but it doesn't really matter).
The stateless server provides some services that require the caller to be authenticated (the server has the ability to perform this authentication).
However, it's desirable for the server not to have to authenticate the caller on each invocation, e.g., by passing credentials in each call. (The authentication process can be expensive.)
It's also desirable not to maintain session state on the server. For one thing, having independent session timeouts on both client and server (the client-side timeout can't be avoided) is just asking for a brittle solution, and a server-side timeout seems necessary in order to have a reliable session lifetime on the server (rather than relying on the client to explicitly end the session at an appropriate time). For another thing, the server isn't set up to store this sort of state.
The server has an explicit authenticate method. The problem is then: how does the server verify that, when another method is called, the caller has previously authenticated using the authenticate method, without storing any session state on the server?
Proposed Solution
Here's a scheme I've come up with:
The authenticate method accepts credentials as input parameters. Upon successful authentication, the server returns two things:
A timestamp indicating the time that authentication was performed.
An encrypted version of the tuple { username, timestamp }, encrypted with a private key.
On further method calls, the client passes both of these values back to the server. The server then decrypts the encrypted { username, timestamp } tuple. If the decrypted timestamp matches the unencrypted value that was also sent by the client, the server knows that the client has previously authenticated (as that's the only way to acquire a valid encrypted value). The decrypted username tells the server which user has been authenticated.
The validity period of the encrypted token can be enforced by only allowing timestamps that are within x hours of the current time. This isn't the same as a session timeout, but it limits the window within which a compromised token could be used by a malicious party.
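A minimal sketch of that scheme in Python, using symmetric Fernet encryption from the cryptography package as a stand-in for the unspecified "encrypted with a private key" step (the function names and the x-hour window are illustrative assumptions, not part of the question):

```python
import json
import time
from cryptography.fernet import Fernet, InvalidToken

SERVER_KEY = Fernet.generate_key()   # kept only on the server
MAX_AGE_SECONDS = 4 * 3600           # the "x hours" validity window (assumed value)

def authenticate(username, password):
    # ... verify credentials here (the expensive step) ...
    timestamp = int(time.time())
    token = Fernet(SERVER_KEY).encrypt(
        json.dumps({"username": username, "timestamp": timestamp}).encode())
    return timestamp, token            # both values are handed to the client

def verify(timestamp, token):
    """Return the authenticated username, or None if the token is invalid."""
    try:
        claims = json.loads(Fernet(SERVER_KEY).decrypt(token))
    except InvalidToken:
        return None
    if claims["timestamp"] != timestamp:                       # plaintext and encrypted values must match
        return None
    if time.time() - claims["timestamp"] > MAX_AGE_SECONDS:    # enforce the validity window
        return None
    return claims["username"]
```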
So
I fear that this scheme is naive in a dozen ways. What weaknesses or bad logic do you see?

In case anybody cares (which seems unlikely given the amount of attention this question has gotten!), we ended up implementing a scheme much as described above.
A few of the details vary, though:
The server creates a session token based upon the user name, the session-start timestamp (passed back to the user), and a salt.
The client does not pass this token back to the server. Instead, an MD5 hash is created from the entire request content concatenated with this token.
The MD5 hash is sent to the server along with the timestamp and the user name (and the request). The server then re-creates the session token and performs the same hashing algorithm. If the MD5 hashes match, the request is valid.
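A sketch of that variant in Python; the exact token construction isn't stated above, so hashing username + timestamp + salt together is an assumption, as are the names:

```python
import hashlib
import hmac

SERVER_SALT = b"server-side-secret-salt"   # never leaves the server

def make_session_token(username, timestamp):
    # Derived by the server at authentication time and re-derived on every request.
    return hashlib.md5(username.encode() + str(timestamp).encode() + SERVER_SALT).hexdigest()

def sign_request(request_body, session_token):
    # Client side: hash the entire request content concatenated with the token.
    return hashlib.md5(request_body + session_token.encode()).hexdigest()

def request_is_valid(username, timestamp, request_body, received_hash):
    # Server side: re-create the token, redo the hash, compare.
    expected = sign_request(request_body, make_session_token(username, timestamp))
    return hmac.compare_digest(expected, received_hash)
```

This is essentially a hand-rolled message authentication code; the standard construction for the same idea is an HMAC (e.g. hmac.new(key, message, hashlib.sha256)).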

Related

Use primary Id as idempotency token on API

I want to know whether a primary id (1, 2, 3, ...) can be used as an idempotency token on an API instead of a long UUID string.
Since the id is unique, I think it can be.
Is there any risk?
I would recommend always using random idempotency tokens rather than reusing other values you have (like primary keys or sequential numbers).
Reusing such values makes it difficult to replay sequences of commands during a debug session: the IDs used for idempotency may overlap when test data is reset, or in other situations where you want to send the same test data twice in a short period and not have the second request ignored.
An attacker may also be able to mount a denial of service against your use of the API if they know enough to make the web service believe that an idempotency token has already been used within its deduplication window. If you don't use random tokens, consider the situations in which someone could disrupt your use of the service by sending their own requests with tokens that you will later use (either guessed, or simply because they aren't generating random tokens themselves). Using either predictable or 'common' tokens in this situation will cause a problem. If the web service does not record tokens for calls that fail authentication, then this is less of an issue (or not an issue at all).
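For illustration, generating a fresh random token per logical operation is trivial; the endpoint URL and the Idempotency-Key header name below are just common conventions used as assumptions, not anything mandated by the question's API:

```python
import uuid
import requests

def create_order(payload):
    # A new random token per logical operation; retries of the SAME operation
    # must reuse the same token so the server can deduplicate them.
    idempotency_key = str(uuid.uuid4())
    return requests.post(
        "https://api.example.com/orders",          # hypothetical endpoint
        json=payload,
        headers={"Idempotency-Key": idempotency_key},
    )
```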

IBM MQ Authentication and Authorization

According to my current understanding, all client connections are authenticated at two levels: the channel level and the queue manager level.
At the queue manager level, the queue manager's CONNAUTH property names an AUTHINFO object that determines how the authentication is done (e.g., using the host OS user repository). If the AUTHINFO object specifies ADOPTCTX(YES), the user id contained in the MQCSP structure is used as the user id for the application context and is used for authorization; if ADOPTCTX(NO) is specified, the user id that the client application is running under is used for the application context, and that user id is used for authorization.
At the channel level, nothing regarding authorization is done; only authentication happens there, as configured. For more granular access control, a set of channel authentication records is applied to the channels. The queue manager's CONNAUTH property is still used to determine the user repository to authenticate against.
Questions:
Am I correct up to this point? (corrections/explanations are much appreciated.)
What does the MCAUSER attribute of the channel object do? What is the purpose of it? Why does it matter which user the message channel agent runs under?
Ultimately, how does channel-level authentication actually work with the MCAUSER?
In what order are these two authentication procedures done? Is channel authentication done first?
You are correct that one should think about security of client connected MQ applications in two phases. There is an authentication phase (who are you? prove it!), and an authorization phase (now that I know who you are, are you allowed to do what you are trying to do?).
Authentication of a client connected MQ application can be done by checking the user id and password provided by the application (in the MQCSP) or by something at the channel level. This is essentially authenticating the channel connection, but it is inextricably linked to the client application. This channel authentication can use TLS certificates or a security exit to interrogate the remote party any way you feel like. [There is also IP address filtering but I wouldn't call that authentication so much].
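For example, the user id and password that end up in the MQCSP are typically supplied by the client application at connect time. A minimal sketch assuming the pymqi Python client and illustrative connection details (queue manager name, channel, host, credentials are all placeholders):

```python
import pymqi

# Illustrative values: queue manager, SVRCONN channel, and listener address.
queue_manager = "QM1"
channel = "APP.SVRCONN"
conn_info = "mqhost(1414)"

# The user/password pair travels to the queue manager in the MQCSP structure;
# with ADOPTCTX(YES) the validated user id becomes the context used for
# authorization checks (and shows up in the channel's run-time MCAUSER).
qmgr = pymqi.connect(queue_manager, channel, conn_info, user="app1", password="app1pass")
qmgr.disconnect()
```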
The purpose of these authentications is to determine who the connecting party is (and reject them if necessary!) and to assign an appropriate user ID for the next step (the authorization checks). Assignment of this user ID can be done by accepting the password-validated user ID (ADOPTCTX(YES)); by mapping certificate DNs (or IP addresses) using CHLAUTH rules; by setting the MCAUSER via a security exit; or by simply hard-coding a user ID into the MCAUSER (not authentication, but still a way to assign a user id for the later authorization checks). All of these have one thing in common: what they do ends up in the running SVRCONN's MCAUSER field. You can display it using DISPLAY CHSTATUS.
Authorization of a client connected MQ application happens just as it does for a locally bound MQ application. The same operations are checked against the same rules. Is this user allowed to "Open this Queue for putting", or "Inquire this QMgr object", or "subscribe to this topic" etc. The difference is simply in how the user ID used in that authorization check is obtained - i.e. how it gets into the MCAUSER.
To wrap up (and check I have covered all your questions):
Sort of - read above text
The MCAUSER attribute at run-time holds the finally determined user ID for this client application. At definition time it can be hard-coded to a user id (some people use this to hard-code a rubbish user id as a belt-and-braces measure alongside the CHLAUTH backstop rule).
Channel level authentication essentially sets the run-time value of MCAUSER
Authentication happens before authorization.
Further Reading
CHLAUTH – the back-stop rule
All the ways to set MCAUSER
Interaction of CHLAUTH and CONNAUTH - previously a blog post now incorporated into IBM Knowledge Center

Clarification about TURN server authentication through REST api

I was going through this draft to understand the usage of a REST API to access TURN services. I am a bit confused after going through it.
Currently, I am authenticating against my TURN server using the Long-Term Credential Mechanism with a Redis database, but instead of using an actual username and password, I am using an authentication token (which expires in 8 hours) and a random string as the password.
My doubts about the draft are:
The ttl received in the response is never used (at least not as part of RTCPeerConnection), so how exactly does the TURN server know when to expire the user?
I see no option in the turnserver arguments to specify the timestamp format, so is it fixed as a UNIX timestamp?
Does the REST API implementation offer any advantage over my implementation (considering that mine doesn't depend on time synchronization between the WebRTC server and the TURN server)?
The timestamp generated by the REST endpoint as part of the username is ttl seconds in the future. So the TTL in the response is just informative.
The advantage of the overall approach is that (assuming time sync, which is a solved problem) it requires no communication between the entity that generates the token and the TURN server. When deploying multiple TURN servers around the globe (see later in this I/O 2015 presentation) this is somewhat easier than syncing a Redis database.
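A sketch of the credential generation the draft describes (shared secret, HMAC-SHA1, username of the form "expiry-timestamp:userid"); the secret value and ttl here are illustrative:

```python
import base64
import hashlib
import hmac
import time

SHARED_SECRET = b"turn-shared-secret"   # known to both the web server and the TURN server
TTL_SECONDS = 8 * 3600

def turn_credentials(user_id):
    # The expiry timestamp is embedded in the username, so the TURN server can
    # reject the credential after that time without any database lookup.
    expiry = int(time.time()) + TTL_SECONDS
    username = f"{expiry}:{user_id}"
    password = base64.b64encode(
        hmac.new(SHARED_SECRET, username.encode(), hashlib.sha1).digest()
    ).decode()
    return {"username": username, "credential": password, "ttl": TTL_SECONDS}
```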

Session cookies with load balancing (Not sticky sessions)

I have scanned RFC 6265 but did not find the answer to the following.
I want to put a naive round-robin load-balancer in front of multiple servers for a single webapp. The load-balancer does not provide sticky sessions, so a client will typically bounce from one appserver to another on successive requests.
On the first connection, the client has no SID and is randomly routed to, say, server A.
Server A responds with a session cookie, a nonce.
On the next connection, the client includes the SID from server A in the HTTP headers.
This time the client is randomly routed to, say, server B.
Server B sees the SID which (one hopes!) does not match any SID it has issued.
What happens? Does server B just ignore the "bad" SID, or complain, or ignore the request, or what?
The idea is, I don't want to use session cookies at all. I want to avoid all the complexities of stickiness. But I also know that my servers will probably generate -- and more to the point look for -- session cookies anyway.
How can I make sure that the servers just ignore (or better yet not set) session cookies?
I think the answer to this will vary greatly depending on the application that is running on the server. While any load balancer worth its salt has sticky sessions, operating without them can be done as long as all the servers in the pool can access the same session state via a centralized database.
Since you are talking about session IDs, I'm guessing that the application does rely on session state in order to function. In this case, if a request came in with a "bad" session ID, it would most likely be discarded and the user prompted to log in (again, the precise behavior depends on the app). If you were to disable session cookies entirely, the problem would likely get worse, since even the absence of an ID would likely result in a login prompt as well.
If you really want to avoid complexity at the load balancer, you will need to introduce some mechanism by which all servers can process requests from all sessions. Typically this takes the form of a centralized database or some other shared storage. This allows session state to be maintained regardless of the server handling that particular request.
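A minimal sketch of that shared-store idea, assuming Redis via the redis-py client (host, key names, and timeout are illustrative); any server in the pool can resolve any SID this way:

```python
import json
import redis

SESSION_TTL_SECONDS = 30 * 60
store = redis.Redis(host="sessions.internal", port=6379)   # shared by all app servers

def save_session(sid, data):
    # Any app server can write the session; the TTL gives a single, central timeout.
    store.setex(f"session:{sid}", SESSION_TTL_SECONDS, json.dumps(data))

def load_session(sid):
    # Any app server can read it back, so round-robin routing doesn't matter.
    raw = store.get(f"session:{sid}")
    return json.loads(raw) if raw else None
```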
Maintaining session state is one of the sticking points (pun intended) of load balancing, but simply ignoring or avoiding session cookies is not the solution.

Is this stateful web service/wcf service?

The service layer has a login method which accepts a username and password and returns a unique session id (a GUID) if the account is valid.
On subsequent requests the same session id is passed instead of the username and password. So is this stateful or stateless? I don't need any state information except the authentication of each request.
The client connects, exchanges data, stores it somewhere, and disconnects. Upon subsequent connections the SAME DATA must be passed back to the server. This is not stateful.
In a stateful connection, you would connect, authenticate, and then simply use the service. The server would "remember" you without having to constantly be reminded of your session ID. This is definitely stateless.
I would say it could be considered stateful. The server is storing information regarding your session, including client activity (timeout, etc.). I could also see the argument, especially in the Java world, where stateless and stateful Beans are much more clearly defined.
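For what it's worth, the distinction the two answers are circling can be shown concretely: whether the scheme counts as stateful hinges on whether the server must keep a lookup like the one sketched below between requests (a hypothetical illustration, not the poster's actual service):

```python
import uuid

# If the GUID can only be resolved by consulting storage the server keeps
# between requests, the server is holding session state.
sessions = {}   # session id -> username (could equally be a database table)

def login(username, password):
    # ... validate credentials ...
    sid = str(uuid.uuid4())
    sessions[sid] = username
    return sid

def handle_request(sid):
    username = sessions.get(sid)      # server-side lookup = state on the server
    if username is None:
        raise PermissionError("unknown or expired session id")
    return f"hello, {username}"
```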