Sticky sessions with Load Balancer - load-balancing

It would be a great help if you could clarify this, please.
When I am using a load balancer and I bind a client to a server (with appsession or any other means), if that server goes down the load balancer redirects the client to another server, and in doing so the whole session is lost. So do I have to write my application in such a way that it stores session data externally, so that it can be shared?
And how good is a load balancer when a transaction fails halfway because the server becomes unresponsive?
Please let me know, thanks.

There is a difference between the two concepts: session stickiness and session replication.
Session stickiness gives you an assurance that once a request from a client reaches a healthy server, subsequent requests from the same client will be handled by that server. When your server goes down, the stickiness is lost and new requests go to a different healthy server. Session stickiness is usually offered by the load balancer, and your application servers generally do not need to do anything.
Session replication gives you the capability of recovering the session when a server goes down. In the case above, stickiness is lost, but the new server will be able to recover the previous session from an external session store, which you will have to implement.
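As a concrete illustration, here is a minimal sketch of such an external session store in Java, assuming a Redis server and the Jedis client library (the class and key names are made up for this example):

import redis.clients.jedis.Jedis;
import java.util.Map;

// Hypothetical helper: keeps session data in Redis so that any server
// behind the load balancer can recover it after a failover.
public class ExternalSessionStore {
    private final Jedis redis;
    private final int ttlSeconds;

    public ExternalSessionStore(String redisHost, int ttlSeconds) {
        this.redis = new Jedis(redisHost); // shared store outside any app server
        this.ttlSeconds = ttlSeconds;
    }

    // Persist one session attribute; call this on every write to the session.
    public void put(String sessionId, String key, String value) {
        redis.hset("session:" + sessionId, key, value);
        redis.expire("session:" + sessionId, ttlSeconds); // sliding expiry
    }

    // Load the whole session; called by whichever server receives the request.
    public Map<String, String> load(String sessionId) {
        return redis.hgetAll("session:" + sessionId);
    }
}

Whichever healthy server the load balancer picks after a failover can then call load(sessionId) and continue the session.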

Related

Load balancer confusion (load balancer mechanism)

Hi, I'm a little confused about the load balancer concept.
I've read some articles about load balancing in nginx, and from what I understand, the load balancer spreads requests across multiple servers.
But I thought that if one server goes down, another one takes over (not all servers running simultaneously).
Another thing: when requests are spread between servers, what happens to stateful data like sessions and in-memory databases like Redis?
I think I'm confused and have misunderstood the load balancer mechanism.
from what I understand, the load balancer spreads requests across multiple servers. But I thought that if one server goes down, another one takes over (not all servers running simultaneously)
As the name suggests, the goal of a load balancer (LB) is to balance the load. Per the Wikipedia definition, for example:
In computing, load balancing is the process of distributing a set of tasks over a set of resources (computing units), with the aim of making their overall processing more efficient. Load balancing can optimize the response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle.
To perform this task, the load balancer obviously needs to have some monitoring of the resources, including liveness checks (so it can take failing servers/nodes out of the rotation). Ideally an LB should work with stateless services (i.e. a request can be routed to any of the servers able to handle that request type), but that is not always the case, for multiple reasons; for example, in ASP.NET with a non-distributed session, requests have to be routed to the server which handled the previous request from that session, which is handled with a so-called sticky session/cookie.
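As a sketch of the sticky-cookie idea (purely illustrative, not ASP.NET's or any real balancer's implementation): the balancer tags the first response with a cookie naming the chosen backend and routes later requests by that cookie.

import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of cookie-based stickiness, not a real LB implementation.
public class StickyCookieBalancer {
    private final List<String> backends; // e.g. ["srv1", "srv2", "srv3"]
    private final AtomicInteger next = new AtomicInteger();

    public StickyCookieBalancer(List<String> backends) {
        this.backends = backends;
    }

    // cookies: the request's cookie map. Returns the backend to route to and
    // records the choice so the caller can echo it back as a Set-Cookie header.
    public String route(Map<String, String> cookies) {
        String pinned = cookies.get("STICKY");
        if (pinned != null && backends.contains(pinned)) {
            return pinned; // honour existing stickiness
        }
        // No cookie yet (or the pinned server is gone): pick the next backend.
        String chosen = backends.get(Math.floorMod(next.getAndIncrement(), backends.size()));
        cookies.put("STICKY", chosen);
        return chosen;
    }
}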
when requests are spread between servers, what happens to stateful data like sessions and in-memory databases like Redis?
It is not very clear what the question is here. As I mentioned before, ideally you want stateless services which use one or more shared datastores to handle the requests, so that whichever server/node a request reaches, it can load all the data needed to handle it.
So, in short: when a request comes to the LB, it selects one of the servers based on some algorithm (round robin, resource based, sharding, response time based, etc.) and sends the request to that server. Depending on the approach used, sequential requests from the same source can hit different nodes/servers; this is basically one of the ways to scale your application horizontally.
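A minimal sketch of that selection step, assuming a round-robin algorithm combined with the liveness checks mentioned earlier (all names are illustrative):

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Predicate;

// Round-robin selection that skips servers failing a liveness check.
public class RoundRobinSelector {
    private final List<String> servers;
    private final Predicate<String> isAlive; // liveness probe, e.g. an HTTP health check
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinSelector(List<String> servers, Predicate<String> isAlive) {
        this.servers = servers;
        this.isAlive = isAlive;
    }

    // Returns the next healthy server, or throws if none are up.
    public String next() {
        for (int i = 0; i < servers.size(); i++) {
            String candidate = servers.get(Math.floorMod(counter.getAndIncrement(), servers.size()));
            if (isAlive.test(candidate)) {
                return candidate; // healthy: route the request here
            }
            // Unhealthy: effectively out of rotation, try the next one.
        }
        throw new IllegalStateException("no healthy servers");
    }
}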
I actually found my answer on the nginx docs page.
The short answer is the ip-hash mechanism.
From the nginx docs:
Please note that with round-robin or least-connected load balancing, each subsequent client’s request can be potentially distributed to a different server. There is no guarantee that the same client will be always directed to the same server.
If there is the need to tie a client to a particular application server — in other words, make the client’s session “sticky” or “persistent” in terms of always trying to select a particular server — the ip-hash load balancing mechanism can be used.
With ip-hash, the client’s IP address is used as a hashing key to determine what server in a server group should be selected for the client’s requests. This method ensures that the requests from the same client will always be directed to the same server except when this server is unavailable.
To configure ip-hash load balancing, just add the ip_hash directive to the server (upstream) group configuration:
upstream myapp1 {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}
http://nginx.org/en/docs/http/load_balancing.html
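Conceptually, ip-hash reduces to hashing the client address onto the server list, so the same client keeps landing on the same server while the list is stable. The sketch below is illustrative only; nginx's real implementation differs (for IPv4 it hashes the first three octets, and it handles weights and unavailable servers):

import java.util.List;

// Illustrative ip-hash selection, not nginx's actual algorithm.
public class IpHashSelector {
    private final List<String> servers;

    public IpHashSelector(List<String> servers) {
        this.servers = servers;
    }

    public String select(String clientIp) {
        // Same IP -> same hash -> same server, as long as the list is unchanged.
        int bucket = Math.floorMod(clientIp.hashCode(), servers.size());
        return servers.get(bucket);
    }
}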

Azure application gateway - caching and routing traffic

Let's say there are 2 web services. The goal is that the app gateway routes requests to both of them. If one of them is down, it should cache all the requests. Once it is up again, which can happen hours later, all the requests cached in the meantime should be sent to it in the correct sequence. This is to keep both services in the same state. Is something like this possible with an application gateway? Or with any other web server/tool?
Thanks!
You can do that, but you need some configuration; see HTTP Load Balancing.
Load Balancer Overview
The capacity of a single server is limited. Once a website gains more and more traction, the instance serving the site comes to a point where it cannot handle any more users. The website starts to slow down or even becomes unavailable as the server goes down from the traffic.
This is the point where a load balancer enters the game. It allows the "load" that all those visitors and their requests create to be "balanced" over a series of different instances.
If the load on a setup increases, capacity can easily be added by attaching more instances to the load balancer's backend. This allows you to scale your infrastructure without any downtime or delays while waiting for DNS zones to be updated.
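Note that buffering requests for hours and replaying them in order is not a built-in Application Gateway feature; it is closer to a store-and-forward (message queue) pattern that you would implement yourself in front of the failing service. A rough in-memory sketch of the idea (all names are made up for illustration; a real version would need a durable queue, since it may hold hours of traffic):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Store-and-forward sketch: while a backend is down, requests are queued
// in arrival order; when it comes back, they are replayed in sequence.
public class ReplayBuffer {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    // Called instead of the real backend while it is marked down.
    public void buffer(String request) throws InterruptedException {
        pending.put(request); // FIFO preserves the original order
    }

    // Called once the backend is healthy again.
    public void replay(Consumer<String> send) {
        String request;
        while ((request = pending.poll()) != null) {
            send.accept(request); // deliver in the sequence received
        }
    }
}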

How to load balance url request to a dedicated weblogic node?

For performance reasons, I need to process one kind of request on a dedicated node. For example, I need to process all requests like http://hostname/report* on node1. So I added a rule in the load balancer to redirect http://hostname/report* to http://node1name/report*. But node1 asks me to log in again, even though I was already logged in at http://hostname/. How can I get direct access without logging in again?
As @JoseK mentioned, it looks like you don't have session replication and failover configured between the servers. You will need all of your application servers to be inside the same WebLogic cluster, and you will also have to pick the secondary session replication node to be the destination for in-memory replication. You can dictate this by assigning the dedicated node to a specific machine, which is then selected as the secondary replication target for all cluster members.
Also, for session replication to work, all objects within your session have to implement java.io.Serializable.
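For example, a session attribute only replicates if its class is serializable; a minimal sketch (the class name is illustrative):

import java.io.Serializable;

// Anything placed in a replicated HttpSession must implement Serializable,
// otherwise the container cannot copy it to the secondary server.
public class UserProfile implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String username;
    private final String role;

    public UserProfile(String username, String role) {
        this.username = username;
        this.role = role;
    }

    public String getUsername() { return username; }
    public String getRole() { return role; }
}

// Usage inside a servlet (sketch):
//   request.getSession().setAttribute("profile", new UserProfile("alice", "admin"));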

How to troubleshoot issues caused by clustering or load balancing?

Hi, I have an application that is deployed on two WebLogic app servers.
Recently we had an issue where, in certain cases, the user session returned is null. Developer feedback is that it could be caused by the session not replicating to the other server.
How do we prove whether this is really the case?
Are you using a single session store that both application servers can access via some communication protocol? If not, then that is definitely the case. Think about it: if your WebLogic servers are storing the session in memory, and users pass their session id via cookies, then one server has no way of accessing the memory on the other machine, unless you are using sticky load balancing. Are you?
There are two concepts to consider here: session stickiness and session replication.
Session stickiness is a mechanism whereby WebLogic Server ensures that if a request from a user with session A goes to server 1, then the next request from the user with session A will go to server 1 only.
This is achieved by configuring a hardware load balancer (like F5) which is capable of providing session stickiness, or by configuring the WebLogic proxy plugin installed on Apache/IIS/WebLogic.
The first time a request reaches a WLS managed server, it responds with a session id and appends to it the JVM id of the server (this is the primary id). If the managed server is part of a cluster, it also attaches a secondary server JVM id (the secondary server is the server where the session is being replicated).
The proxy maintains a table of all JVM ids and the corresponding IPs of the managed servers; it also checks periodically whether the servers are up and running.
The next time a request passes through the proxy with an existing session id and a primary JVM id, the proxy parses this and tries to send the request to that server; if it cannot within some time, it tries to send it to the secondary server.
Session replication: this is enabled by default when you configure a WLS cluster with 2 or more managed servers. Each time any data in a session is updated, that data is replicated to a secondary server too.
So in your case, if your application users are losing their session or getting redirected to the login page during normal usage, first check that the session did not get invalidated because of a timeout. If you have defined a cluster and are using the WLS proxy, check the proxy debug output to make sure the primary and secondary server ids are being appended to the session id.
Finally, there's a simple example among the WLS sample application deployments that you can use to test session replication and failover functionality.
So, to prove why the session is getting lost:
1) check the server log to see if the session got invalidated because of a timeout;
2) if using wlproxy, enable debug, and the next time the issue happens, check in the proxy log whether the request was sent to a different server, and whether that server was not the secondary.
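To see the primary and secondary ids described above, you can inspect the session cookie itself. WebLogic commonly encodes them as sessionid!primaryJvmId!secondaryJvmId; the exact layout can vary by version, so verify against your own cookies. A sketch that splits the value apart:

// Sketch: split a WebLogic-style session cookie into its parts.
// Assumes the common "sessionid!primary!secondary" layout; the example
// cookie value below is made up.
public class SessionCookieParser {
    public static void main(String[] args) {
        String cookie = "AxoJ31Ah...!-1038520127!1374639853";
        String[] parts = cookie.split("!");
        System.out.println("session id     : " + parts[0]);
        if (parts.length > 1) System.out.println("primary JVM id : " + parts[1]);
        if (parts.length > 2) System.out.println("secondary JVM  : " + parts[2]);
        // If the secondary id is missing, replication is likely not happening.
    }
}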

Glassfish failover without load balancer

I have a GlassFish v2u2 cluster with two instances and I want to fail over between them. Every document that I read on this subject says that I should use a load balancer in front of GlassFish, like Apache httpd. In this scenario failover works, but I again have a single point of failure.
Is GlassFish able to do that failover without a load balancer in front?
The way we solved this is that we have two IP addresses which both respond to the URL. The DNS provider (DNS Made Easy) will round-robin between the two. Setting the TTL low will ensure that if one server fails, the other will answer. When one server stops responding, DNS Made Easy will only send the other host as the server to respond to this URL. You will have to trust the DNS provider, but you can buy service with extremely high availability of the DNS lookup.
As for high availability, you can have a cluster setup which allows for session replication, so that the user won't lose more than potentially the one request which fails.
Hmm... JBoss can do failover without a load balancer, according to the docs (http://docs.jboss.org/jbossas/jboss4guide/r4/html/cluster.chapt.html), chapter 16.1.2.1, "Client-side interceptor".
As far as I know, the GlassFish cluster provides in-memory session replication between nodes. If I use Sun's GlassFish Enterprise Application Server, I can use HADB, which promises 99.999% availability.
No, you can't do it at the application level.
Your options are:
Round-robin DNS - expose both your servers to the internet and let the client do the load balancing. This is quite attractive as it will definitely enable failover (see the sketch after this list).
Use a different layer 3 load balancing system - such as "Windows network load balancing", "Linux network load balancing", or the one I wrote, called "Fluffy Linux cluster".
Use a separate load balancer that has a failover hot spare.
In any of these cases you still need to ensure that your database, session data, etc. are available and in sync between the members of your cluster, which in practice is much harder.
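As a sketch of the round-robin DNS option above: the client resolves all A records for the name and tries each address until one accepts a connection (hostname and port are made up for illustration):

import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

// Client-side failover over round-robin DNS: try every address the name
// resolves to until one answers.
public class DnsFailoverClient {
    public static Socket connect(String host, int port) throws IOException {
        InetAddress[] addresses = InetAddress.getAllByName(host); // all A records
        IOException last = null;
        for (InetAddress addr : addresses) {
            try {
                Socket s = new Socket();
                s.connect(new InetSocketAddress(addr, port), 2000); // short timeout
                return s; // first server that answers wins
            } catch (IOException e) {
                last = e; // this instance is down, try the next address
            }
        }
        throw last != null ? last : new IOException("no addresses for " + host);
    }

    public static void main(String[] args) throws IOException {
        try (Socket s = connect("app.example.com", 8080)) {
            System.out.println("connected to " + s.getInetAddress());
        }
    }
}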