I have scanned RFC 6265 but did not find the answer to the following.
I want to put a naive round-robin load-balancer in front of multiple servers for a single webapp. The load-balancer does not provide sticky sessions, so a client will typically bounce from one appserver to another on successive requests.
On the first connection, the client has no SID and is randomly routed to, say, server A.
Server A responds with a session cookie, a nonce.
On the next connection, the client includes the SID from server A in the HTTP headers.
This time the client is randomly routed to, say, server B.
Server B sees the SID which (one hopes!) does not match any SID it has issued.
What happens? Does server B just ignore the "bad" SID, or complain, or ignore the request, or what?
The idea is, I don't want to use session cookies at all. I want to avoid all the complexities of stickiness. But I also know that my servers will probably generate -- and more to the point look for -- session cookies anyway.
How can I make sure that the servers just ignore (or better yet not set) session cookies?
I think the answer to this will vary greatly depending on the application that is running on the server. While any load balancer worth its salt has sticky sessions, operating without them can be done as long as all the servers in the pool can access the same session state via a centralized database.
Since you are talking about session IDs, I'm guessing that the application does rely on session state in order to function. In this case, if a request came in with a "bad" session ID, it would most likely be discarded and the user prompted to log in; the precise behavior depends on the app. If you were to disable session cookies entirely, the problem would likely get worse, since even the absence of an ID would probably result in a login prompt as well.
If you really want to avoid complexity at the load balancer, you will need to introduce some mechanism by which all servers can process requests from all sessions. Typically this takes the form of a centralized database or some other shared storage. This allows session state to be maintained regardless of the server handling that particular request.
Maintaining session state is one of the sticking points (pun intended) of load balancing, but simply ignoring or avoiding session cookies is not the solution.
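To make the "centralized database" idea concrete, here is a minimal sketch in Python. SQLite stands in for whatever shared backend you would actually use (a database server, Redis, memcached), and the names `SessionStore` and `handle_request` are illustrative, not part of any real framework:

```python
import json
import sqlite3
import uuid

class SessionStore:
    """Centralized session storage shared by every app server.

    SQLite is a stand-in here for the shared backend (database,
    Redis, memcached, ...) that all servers in the pool can reach.
    """

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS sessions (sid TEXT PRIMARY KEY, data TEXT)"
        )

    def create(self):
        sid = uuid.uuid4().hex  # the SID handed back to the client in a cookie
        self.db.execute("INSERT INTO sessions VALUES (?, ?)", (sid, "{}"))
        return sid

    def load(self, sid):
        row = self.db.execute(
            "SELECT data FROM sessions WHERE sid = ?", (sid,)
        ).fetchone()
        return json.loads(row[0]) if row else None

    def save(self, sid, data):
        self.db.execute(
            "UPDATE sessions SET data = ? WHERE sid = ?", (json.dumps(data), sid)
        )

def handle_request(server_name, store, sid):
    """Any server can serve any request, because state lives in the shared store."""
    state = store.load(sid)
    if state is None:          # unknown or missing SID: start a fresh session
        sid = store.create()
        state = {}
    state["last_server"] = server_name
    store.save(sid, state)
    return sid, state
```

With this shape, the scenario from the question resolves itself: server B looks up the SID issued by server A in the shared store, finds it, and continues the session instead of treating it as "bad".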
Related
I have a page which issues multiple requests concurrently, so those requests are all in the very same session. I use IHttpContextAccessor everywhere to access the session.
My problem is that, regardless of timing, some requests do not see session state already set by other requests; instead they see some previous state (even though, chronologically, the set operation has already completed).
As far as I know, each request gets its own copy of the state, which is written back (but when?) to the single shared state. If that write-back is delayed until the request has been completely served, then the scenario I am experiencing can easily happen: the second concurrent request within the session got its copy after the first request modified the state, but before the first request finished completely.
If all of the above is right, it means that with concurrent requests within a session there is no way to maintain session integrity: the second request, not seeing the changes already made by the first, will write back something that is not consistent with the changes the first request already made.
Am I missing something?
Is there any workaround? (with some cost of course)
First, you may know this already, but it bears pointing out, just in case: session state is specific to one client. What you're talking about here, then, is the same client throwing multiple concurrent requests at the server at the same time, each of which touches the same piece of session state. That, in general, seems like a bad design. If there's some actual application reason to have multiple concurrent requests from the same client, then what those requests do should be idempotent, or at least should not step on each other's toes. If it's a situation where the client is just spamming the server, either out of impatience or malice, it's really not your concern whether their session state becomes corrupted as a result.
Second, for the reasons outlined above, concurrency is not really a concern for sessions. There's no use case I can imagine where the client would need to send multiple simultaneous requests that each modify the same session key. If there is, please elucidate by editing your question accordingly. Even then, I'd imagine it's something you likely shouldn't be persisting in the session in the first place.
That said, the session is thread-safe in the sense that multiple simultaneous writes/reads will not cause an exception, but no guarantee is or can be made about integrity. That's universal across all concurrency scenarios. It's on you, as the developer, to ensure data integrity if that's a concern. You do so by designing a concurrency strategy. That could be anything from locks/semaphores to gate access, to compensating for things happening out of band. For example, with EF you can employ concurrency tokens in your database tables to prevent one request overwriting another. The value of the token is modified with each successful update, and the application-known value is checked against the current database value before the update is made, to ensure it has not been modified since the application initiated the update. If it has, an exception is thrown, giving the application a chance to catch it and recover: cancel the update, fetch the fresh data and modify that, or just push through an overwrite. The point is that you would need to come up with some similar strategy if the integrity of the session data is important.
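The concurrency-token idea described above can be sketched in a few lines. This is a language-agnostic illustration in Python, not the actual ASP.NET or EF implementation; `VersionedSession` and its methods are hypothetical names:

```python
import threading

class VersionedSession:
    """Session entry guarded by an optimistic concurrency token.

    Each successful write bumps the version; a write that carries a
    stale version is rejected, mirroring EF-style concurrency tokens.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}
        self._version = 0

    def read(self):
        # return a copy of the data plus the token the caller must echo back
        with self._lock:
            return dict(self._data), self._version

    def write(self, data, expected_version):
        with self._lock:
            if expected_version != self._version:
                # someone else wrote in between: let the caller re-read and retry
                raise RuntimeError("concurrent update detected")
            self._data = dict(data)
            self._version += 1
            return self._version
```

A request that loses the race gets an exception instead of silently clobbering the other request's write, and can recover by re-reading and reapplying its change.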
In a few words, if I am not wrong, a session is used when I want to ensure that messages are sent in order, and to be able to use sessions a reliable connection is needed.
But my doubt is: what kind of applications need that? In my case it is a simple application in which a client requests data from a database through a service; the service gets the data from the database and sends the results back to the client. The client can also request to add, modify, or delete data in the database. In this case, do I need a reliable connection and sessions or not?
Thanks.
A session presumes that you want to retain some data for a period of time. As far as sessions are concerned, that period is tied to the client's lifecycle: when the client opens a proxy, both the service instance and the session are created; when the client closes the proxy, the service instance and the session terminate. There is an exception: closing the proxy does not actually take effect right away when you invoke a one-way operation. The service will keep working as long as the operation is performing its action, despite the fact that it previously received the order to get rid of the instance.
Based on the provided information, I assume the best choice would be PerCall. You do not store any data between calls, and every single call can be treated separately. Additionally, set ConcurrencyMode to Multiple so that requests can be served simultaneously.
Personally, I find sessions useful with MSMQ, whenever I want a specific number of messages to be wrapped into a single queue message. If an error occurs, regardless of which message caused it, the whole queue message is rolled back.
I set up a test version of a PHP-coded website which uses sessions to handle user logins. On the test server, the session would expire on browser close. Since copying everything to the "clean" live server, the session stays in place after the browser closes, and the user is still logged in even the next day after a full system reboot.
In php.ini
; Lifetime in seconds of cookie or, if 0, until browser is restarted.
; http://www.php.net/manual/en/session.configuration.php#ini.session.cookie-lifetime
session.cookie_lifetime = 0
Which implies that it should expire on browser restart.
I thought maybe it was being overridden somewhere, but if I print_r the session_get_cookie_params in PHP I get
Array
(
[lifetime] => 0
[path] => /
[domain] =>
[secure] =>
[httponly] =>
)
Is there something I am missing?
If you are using Google Chrome and you have set "Continue where you left off", Chrome will restore your browsing data and session cookies on restart. Even a Facebook login session (without "remember me") is retained.
For more info, see the Google Chrome settings.
The issue here is that Firefox has a feature called "Restore last session". If someone uses saving tabs on close, it behaves the same way: when the browser restores the last session, all session cookies are restored too :)
So your session cookie can live forever. You can read more in the Firefox documentation on session cookies.
I was going to add this as a comment on Alexander's excellent answer, but it's going to get a bit verbose.
How long the cookie is retained by the browser and how long the session data is retained by the server in the absence of a request are two separate and independent things. There is no way to avoid this due to the stateless nature of HTTP, although there are some things you can do to mitigate what you perceive as a security flaw.
For the browser to access the same session after being closed down and some delay it requires that both the session cookie be retained by the browser (which Alexander has already explained) and for the server to have retained the session data.
The behaviour you describe may be much more pronounced on systems handling a low volume of requests and where the session handler does not verify the TTL of the session data (I'm not sure if the default handlers do, or if they just assume that any undeleted session data is current).
You've not provided any details of how the 2 servers are configured, notably the session.gc_maxlifetime.
If the session.gc_maxlifetime has expired between requests but the session data is still accessible this implies that the session handler merely considers this as the time at which the session is considered eligible for garbage collection (which, semantically, is what the configuration option is for). However there is a strong case for treating this value as a TTL. To address this you could either force the garbage collection to run more frequently and delete the session data, or use a session handler which ignores session data older than the specified limit.
That you see a difference between the 2 systems may be due to differing values for session.gc_maxlifetime or differences in the frequency of garbage collection or even different session handlers.
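The "treat gc_maxlifetime as a hard TTL" option can be illustrated with a small sketch. This is not PHP's actual handler; `TtlSessionHandler` is a hypothetical in-memory stand-in in Python showing a handler that refuses stale data at read time instead of waiting for garbage collection to delete it:

```python
import time

GC_MAXLIFETIME = 1440  # seconds; PHP's default for session.gc_maxlifetime

class TtlSessionHandler:
    """In-memory stand-in for a session handler that enforces the
    max lifetime as a hard TTL on every read."""

    def __init__(self, max_lifetime=GC_MAXLIFETIME, clock=time.time):
        self.max_lifetime = max_lifetime
        self.clock = clock            # injectable clock, handy for testing
        self._store = {}              # sid -> (last_modified, data)

    def write(self, sid, data):
        self._store[sid] = (self.clock(), data)

    def read(self, sid):
        entry = self._store.get(sid)
        if entry is None:
            return None
        last_modified, data = entry
        if self.clock() - last_modified > self.max_lifetime:
            del self._store[sid]      # expired: behave as if GC had already run
            return None
        return data
```

With this behaviour, infrequent garbage collection on a low-traffic server can no longer resurrect a session that should have expired.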
Thanks for taking time to read my questions.
I am having some basic doubts about the load balanced servers.
I assume that one application is hosted on two servers, and when one server is heavily loaded, the load balancer switches responsibility for handling a particular request to the other server.
This is how I understood the load balancer.
What is it that manages and monitors the load and does all the transfers of requests?
How are static variables handled during processing? For example, I have a variable called 'totalNumberOfClick' which is incremented whenever we hit the page.
If a GET request is handled by a server, should its POST also be managed by that same server, right? For example, a user requests a page for editing; the ASP.NET runtime creates a set of viewstate (which has control IDs and their values), which is maintained on the server and client side. When we hit the post button, the server validates the viewstate, lets the request in, and does the rest of the processing.
If the POST gets transferred to another server, how can the runtime allow it to do the processing?
If you are using the load balancing built into Windows, then there are several options for how the load is distributed. The servers keep in communication with each other and organise the load between themselves.
The most scalable option is to evenly balance the requests across all of the servers. This means that each request could end up being processed by a different server so a common practice is to use "sticky sessions". These are tied to the user's IP address, and make sure that all requests from the same user go to the same server.
There is no way to share static variables across multiple servers so you will need to store the value in a database or on another server.
If you use an out-of-process host for session state (such as StateServer or SQL Server), then you can process any request on any server. Viewstate allows the server to recreate most of the data that generated the page.
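The "store the value in a database" advice for the 'totalNumberOfClick' example above can be sketched briefly. This is an illustration in Python with SQLite standing in for the shared database; in production each server would point at the same database server rather than a local file:

```python
import sqlite3

def make_counter_db(path=":memory:"):
    """Create (or open) the shared counter table all servers point at."""
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS counters (name TEXT PRIMARY KEY, value INTEGER)"
    )
    db.execute("INSERT OR IGNORE INTO counters VALUES ('totalNumberOfClick', 0)")
    return db

def record_click(db):
    # a single UPDATE is atomic on the database side, so increments from
    # concurrent servers are not lost, unlike a per-process static variable
    db.execute(
        "UPDATE counters SET value = value + 1 WHERE name = 'totalNumberOfClick'"
    )

def total_clicks(db):
    return db.execute(
        "SELECT value FROM counters WHERE name = 'totalNumberOfClick'"
    ).fetchone()[0]
```

A static variable incremented in-process would give each server its own independent count; pushing the increment into shared storage gives one global count regardless of which server handled the hit.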
I have some answers for you.
When it comes to web applications, load balancers need to provide what is called session stickiness. That means that once a server is elected to serve a client's request, all subsequent requests will be directed to the same node as long as the session is active. Of course this is not necessary if your web application does not rely on any state that has to be preserved (i.e. it is stateless/sessionless).
I think this can answer your third and maybe even your second question.
Your first question is about how load balancers work internally. Since I am not an expert in that, I can only guess that the load balancer each client talks to measures ping response times to derive an estimated load on each server. More sophisticated techniques could also be used.
Hey there guys, I am a recent grad, and looking at a couple of jobs I am applying for, I see that I need to know things like runtime complexity (straightforward enough), caching (memcached!), and load balancing issues (no idea on this!!).
So, what kind of load balancing issues and solutions should I try to learn about, or at least be vaguely familiar with for .net or java jobs ?
Googling around gives me things like network load balancing, but wouldn't that usually be administered by someone other than a software developer?
One thing I can think of is session management. By default, whenever you get a session ID, that session ID points to some in-memory data on the server. However, when you use load balancing, there are multiple servers. What happens when data is stored in the session on machine 1, but for the next request the user is routed to machine 2? His session data would be lost.
So, you'll have to make sure that either the user gets back to the same machine for every subsequent request (a 'sticky connection'), or you do not use in-proc session state but out-of-proc session state, where session data is stored in, for example, a database.
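The 'sticky connection' half of that choice is often implemented by hashing some client attribute so the same client deterministically lands on the same server. A minimal sketch in Python, assuming the balancer keys on client IP (the function name `pick_server` is illustrative):

```python
import hashlib

def pick_server(client_ip, servers):
    """Deterministically map a client IP to one server in the pool.

    The same IP always hashes to the same index, so the client keeps
    hitting the server that holds its in-memory session, with no
    shared state needed between servers.
    """
    digest = hashlib.sha256(client_ip.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]
```

The trade-off compared with an out-of-proc session store is that this scheme reshuffles clients whenever the pool changes size, and it misbehaves when many clients share one IP behind a proxy.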
There is a concept of load distribution, where requests are sprayed across a number of servers (usually with session affinity). Here there is no feedback on how busy any particular server may be; we just rely on statistical sharing of the load. You could view the WebSphere HTTP plugin in WAS ND as doing this. It actually works pretty well, even for substantial web sites.
Load balancing tries to be cleverer than that: some feedback on the relative load of the servers determines where new requests go (even then, session affinity tends to be treated as a higher priority than balancing load). The WebSphere On Demand Router, originally delivered in XD, does this. If you read this article you will see the kinds of algorithms used.
You can achieve balancing with network spraying devices; they can consult "agents" running on the servers, which give feedback to the sprayer as a basis for decisions about where a request should go. Hence even this hardware-based approach can have a software element. See Dynamic Feedback Protocol.
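The feedback-driven part of this reduces to a simple selection rule once the agents' reports are in hand. A sketch in Python, assuming each agent reports a single load number (e.g. active connections) per server:

```python
def least_loaded(load_reports):
    """Pick the server to receive the next request.

    load_reports maps server name -> load reported by that server's
    agent; the balancer routes the new request to the minimum.
    """
    if not load_reports:
        raise ValueError("no servers available")
    return min(load_reports, key=load_reports.get)
```

Real implementations weight these reports, smooth them over time, and still apply session affinity first, but the core decision is this comparison.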
Network combinatorics, max-flow min-cut theorems, and their use.