I want to configure the IP Load Balancing service for our VPS. I found the documentation at http://docs.ovh.ca/en/products-iplb.html#presentation describing how to integrate it.
I want to limit the number of requests on each server (S1, S2). How can I achieve this?
Suppose I want S1 to handle all requests as long as the load balancer receives fewer than 3,500 requests per minute.
If the rate exceeds 3,500 requests per minute, the load balancer should forward the extra requests to S2.
Regards,
Just had a look, and I believe you won't be able to achieve exactly what you are looking for with the available load-balancing algorithms.
If you look at the available ones, you can see five load-balancing algorithms. From my experience with load balancers (not OVH's), they should do the following:
first: probably the first real server to reply (according to the health monitor) gets the query.
leastconn: distributes connections to the server that is currently managing the fewest open connections at the time the new connection request is received.
roundrobin: the next connection is given to the next real server in line.
source: not sure about this one, but I believe it load-balances by source IP, e.g. if a request comes from 143.32.Y.Z, send it to server A, and so on.
uri: I believe it load-balances by URI, which is typical if you are hosting different web servers.
I would advise checking with OVH what you can do. In such scenarios with an F5 load balancer, for example, you can configure a simple script for this, or use groups: if the first group fails, the traffic is sent to the second one.
A ratio (also called weighted) load-balancing algorithm can get close, though it is not exactly what you want; see the sketch below.
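To make that concrete, here is a minimal Python sketch of what a weighted round-robin does, not anything OVH ships; the server names and the 3:1 weighting are made up for illustration:

import itertools

# hypothetical weights: S1 receives 3 requests for every 1 sent to S2
servers = ["S1"] * 3 + ["S2"] * 1
rotation = itertools.cycle(servers)

def pick_server():
    # weighted round-robin: walk the expanded server list forever
    return next(rotation)

print([pick_server() for _ in range(8)])  # ['S1', 'S1', 'S1', 'S2', 'S1', 'S1', 'S1', 'S2']

Note that this spreads traffic by ratio from the very first request; it does not implement a hard cutoff such as "send everything above 3,500 requests per minute to S2", which is why it only approximates what was asked.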
Cheers
I am working on a video-conferencing application. We have a pool of servers where rooms are created, and a room can have any number of users. I was exploring HAProxy and several other load balancers, but couldn't find a solution for what I was looking for.
My requirements are as follows:
A room should be created on the server with the lowest load at the time of creation.
All users of that room should join on the same server.
I have tried url_param balance logic with consistent hashing, but it distributes load randomly. Is this even possible with modern L7 load balancers, or do I need to write some custom logic (in some load balancer) or a separate application for this scenario?
Is there any way of balancing load based on connections or CPU usage while maintaining session stickiness?
The balance documentation says you can choose an algorithm like leastconn, and that it only applies when no persistence information is available or when a connection is redispatched to another server.
So the second part of the answer is stick tables. Read the docs about stick match and the other stick keywords.
With a stick table it looks like this:
backend foo
    mode http
    balance leastconn
    # stick on src = stick match src + stick store-request src:
    # pin each source IP to the server it was first sent to
    stick on src
    stick-table type ip size 200k expire 30m
    server s1 192.168.1.1:8080 check
    server s2 192.168.1.2:8080 check
There are more examples in the docs.
What you need to figure out (or tell us) is how we can identify the room a client wants from the request, so that the stick table and rules can be built around it. If it is in the URL or in an HTTP header, it is perfectly doable in HAProxy; a sketch follows below.
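For example, assuming the room ID arrives as a hypothetical room URL parameter, a sticky backend might look roughly like this (the parameter name, addresses, and table sizing are placeholders to adapt):

backend rooms
    mode http
    balance leastconn
    # all requests carrying the same room parameter go to the same server;
    # leastconn only decides placement the first time a room is seen
    stick on url_param(room)
    stick-table type string len 64 size 200k expire 30m
    server s1 192.168.1.1:8080 check
    server s2 192.168.1.2:8080 check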
If leastconn is not good enough, there is also the option of dynamically adjusting server weights through HAProxy's Unix socket CLI while using the roundrobin algorithm. Agent checks can also be configured on the servers so that they set their own weights dynamically.
A server is listening on a UDP port; many clients can connect to it, and the clients are divided into many groups. Within a group, one client sends a message and the server needs to route that message to the rest of the group. Many such groups can run simultaneously. How can we test the maximum number of connections the server can handle without inducing a visible lag in the response time?
First, let me describe your network topology again: there is a server and many clients, and the clients are divided into several groups. A client sends a message to the server, and the server then sends something to the other clients in that group.
If the topology is as described above, is the connection limit you want to find about how many clients the server can send to at the same time, or about how many clients can send to the server at the same time?
Either case can be tested by spawning many simulated clients, for example with multiple threads, or with goroutines if you can write Go, but the two cases need different criteria for deciding when the limit has been reached; a rough sketch follows below.
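A minimal sketch of the second kind of test (many clients sending at once) in Python; the server address, payload, and client count are assumptions to replace with your own:

import socket
import statistics
import threading
import time

SERVER = ("127.0.0.1", 9999)   # assumed address of the UDP server under test
N_CLIENTS = 500                 # ramp this up between runs
latencies = []
lock = threading.Lock()

def client(idx):
    # each simulated client sends one datagram and waits for the routed reply
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    start = time.monotonic()
    sock.sendto(b"hello from client %d" % idx, SERVER)
    try:
        sock.recv(2048)
        with lock:
            latencies.append(time.monotonic() - start)
    except socket.timeout:
        pass  # a timeout counts as a failed response
    finally:
        sock.close()

threads = [threading.Thread(target=client, args=(i,)) for i in range(N_CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

if latencies:
    print("replies:", len(latencies), "median RTT (s):", statistics.median(latencies))

Increase N_CLIENTS between runs and watch the median (or 95th-percentile) round-trip time; the point where it starts to climb, or where replies start timing out, is the practical limit you are looking for.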
I know that ZMQ offers all the flexibility to do your own load balancing. However, I would expect the out-of-the-box broker, about four lines of code using the line
zmq_device (ZMQ_QUEUE, frontend, backend);
to load balance quite well, as the documentation says it does:
ZMQ_QUEUE creates a shared queue that collects requests from a set of clients, and distributes these fairly among a set of services. Requests are fair-queued from frontend connections and load-balanced between backend connections. Replies automatically return to the client that made the original request.
I have an army of back-end services and yet find that my front-end clients often have to wait several seconds for something that takes < 1/10 of a second in a 1:1 setting (there are the same number of client and service machines). I suspect that ZMQ is not load-balancing properly out of the box: it is sending too many requests to the same service even though that service has no spare capacity.
I think this is partly because the services are multithreaded in a way that lets them accept up to 10 concurrent requests, yet they slow down greatly near the 10th request even though they can still accept more. Random distribution would be ideal. Is there an out-of-the-box way to do this, can it be done in a few lines of code, or do I have to write my own broker from scratch?
FWIW, the issue was that the workers were taking on work when they didn't have room for it; the issue was not in the ZMQ layer per se. A readiness-signalling broker, as sketched below, avoids exactly that.
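For reference, the ZeroMQ Guide's load-balancing (LRU) pattern addresses this: workers announce when they are ready, and the broker only hands work to workers that have asked for it. A rough pyzmq sketch of such a broker; the endpoints are assumptions, and the framing follows the Guide's convention that a worker first sends a READY message:

import zmq

context = zmq.Context()
frontend = context.socket(zmq.ROUTER)   # clients connect here
frontend.bind("tcp://*:5555")
backend = context.socket(zmq.ROUTER)    # workers connect here
backend.bind("tcp://*:5556")

# poll the frontend only while at least one worker has said it is ready
poll_workers = zmq.Poller()
poll_workers.register(backend, zmq.POLLIN)
poll_both = zmq.Poller()
poll_both.register(frontend, zmq.POLLIN)
poll_both.register(backend, zmq.POLLIN)

ready_workers = []  # identities of workers waiting for work

while True:
    poller = poll_both if ready_workers else poll_workers
    events = dict(poller.poll())

    if backend in events:
        frames = backend.recv_multipart()
        ready_workers.append(frames[0])      # this worker can take more work
        if frames[2] != b"READY":            # it also carried a reply for a client
            client, reply = frames[2], frames[4]
            frontend.send_multipart([client, b"", reply])

    if frontend in events and ready_workers:
        client, _, request = frontend.recv_multipart()
        worker = ready_workers.pop(0)
        backend.send_multipart([worker, b"", client, b"", request])

The workers would be REQ sockets that send READY once at startup and then one reply per request; because a worker only rejoins the ready list after finishing a job, it never receives more work than it has room for.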
Thanks for taking the time to read my questions.
I have some basic questions about load-balanced servers.
I assume that one application is hosted on two servers, and when one server is heavily loaded, the load balancer shifts responsibility for handling a particular request to the other server.
That is how I have understood load balancers.
What is it that manages and monitors the load and performs all these transfers of requests?
How are static variables handled during processing? For example, I have a variable called 'totalNumberOfClick' which is incremented whenever the page is hit.
If a GET request is handled by one server, should its corresponding POST also be handled by that same server? For example, a user requests a page for editing; the ASP.NET runtime creates the viewstate (which holds control IDs and their values), which is maintained on both the server and the client side. When we hit the post button, the server validates the viewstate and then carries on with the rest of the processing.
If the POST is transferred to another server, how can the runtime allow it to do that processing?
If you are using the load balancing built into Windows, then there are several options for how the load is distributed. The servers keep in communication with each other and organise the load between themselves.
The most scalable option is to evenly balance the requests across all of the servers. This means that each request could end up being processed by a different server, so a common practice is to use "sticky sessions". These are tied to the user's IP address and make sure that all requests from the same user go to the same server.
There is no way to share static variables across multiple servers, so you will need to store the value in a database or on another server; a sketch of the idea follows below.
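For example, instead of an in-process static variable, the counter could live in a shared store that every server talks to. A minimal Python sketch using Redis; the key name and connection details are assumptions:

import redis

# a single Redis instance shared by every web server (hostname is an assumption)
store = redis.Redis(host="shared-redis.internal", port=6379)

def on_page_hit():
    # INCR is atomic, so concurrent hits from different servers never lose an update
    return store.incr("totalNumberOfClick")

A database table updated with an atomic increment would serve the same purpose.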
If you use an out-of-process way of hosting session state (such as StateServer or SQL Server), then you can process any request on any server. Viewstate allows the server to recreate most of the data needed to generate the page.
I have some answers for you.
When it comes to web applications, load balancers need to provide what is called session stickiness. That means that once a server has been elected to serve a client's request, all subsequent requests will be directed to the same node as long as the session is active. Of course this is not necessary if your web application does not rely on any state that has to be preserved (i.e. it is stateless/sessionless).
I think this can answer your third and maybe even your second question.
Your first question is about how load balancers work internally. Since I am not an expert in that area, I can only guess that the load balancer each client talks to measures ping response times to derive an estimated load on each server. More sophisticated techniques could certainly be used.
Hey there, I am a recent grad, and looking at a couple of jobs I am applying for I see that I need to know things like runtime complexity (straightforward enough), caching (memcached!), and load balancing issues (no idea on this!).
So, what kind of load balancing issues and solutions should I try to learn about, or at least be vaguely familiar with, for .NET or Java jobs?
Googling around gives me things like network load balancing, but wouldn't that usually not be administered by a software developer?
One thing I can think of is session management. By default, whenever you get a session ID, that session ID points to some in-memory data on the server. However, when you use load balancing, there are multiple servers. What happens when data is stored in the session on machine 1, but for the next request the user is redirected to machine 2? The session data would be lost.
So, you'll have to make sure that either the user gets back to the same machine for every subsequent request (a "sticky connection"), or you do not use in-proc session state but out-of-proc session state, where session data is stored in, for example, a database; a small sketch of the sticky approach follows below.
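To make the "sticky connection" idea concrete, here is a minimal Python sketch of how a balancer can pin a client to a server by hashing its IP address; the server list and hash choice are assumptions, and real products also handle servers joining and leaving:

import hashlib

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical web servers

def server_for(client_ip):
    # the same client IP always hashes to the same index, hence the same server,
    # so that server's in-memory session data keeps working for that user
    digest = hashlib.sha1(client_ip.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

print(server_for("203.0.113.7"))  # always the same member of servers

The drawback is that adding or removing a server remaps many clients to new machines, which is one more reason out-of-proc session state is the more robust choice.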
There is a concept of load distribution, where requests are sprayed across a number of servers (usually with session affinity). Here there is no feedback on how busy any particular server may be; we just rely on statistical sharing of the load. You could view the WebSphere HTTP plugin in WAS ND as doing this. It actually works pretty well, even for substantial web sites.
Load balancing tries to be cleverer than that: some feedback on the relative load of the servers determines where new requests go (even then, session affinity tends to be treated as a higher priority than balancing the load). The WebSphere On Demand Router, originally delivered in XD, does this. If you read this article you will see the kind of algorithms used.
You can achieve balancing with network spraying devices; they can consult "agents" running on the servers, which give feedback to the sprayer as a basis for deciding where a request should go. Hence even this hardware-based approach can have a software element. See the Dynamic Feedback Protocol. A toy sketch of the feedback idea follows below.
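A toy Python sketch of that feedback loop, not any particular product's algorithm: each server's agent reports a load figure, and the sprayer sends the next request to the least-loaded server (the report format and numbers are invented):

# latest load reports from per-server agents (lower means less busy); invented values
agent_reports = {
    "serverA": 0.35,
    "serverB": 0.80,
    "serverC": 0.10,
}

def pick_server(reports):
    # choose the server whose agent currently reports the lowest load
    return min(reports, key=reports.get)

print(pick_server(agent_reports))  # serverC

In practice session affinity is checked first, and only requests without an existing affinity are placed by the feedback-based choice.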
Network combinatorics, max-flow min-cut theorems, and their use.