I have several Asterisk boxes and two Kamailio servers (for failover) load balancing calls between the Asterisk boxes. The Kamailio servers receive calls from E1-to-SIP gateways and then forward them to the Asterisk cluster. There is no NAT, and the platform only processes inbound calls.
At this point, load balancing for the Asterisk servers is fine: the Asterisk cluster handles several thousand simultaneous calls without any problem, and if I want to handle more calls, I "just" need to set up a new Asterisk server and add its IP address to Kamailio's dispatcher.
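For reference, with the dispatcher module that "just" amounts to appending one line per Asterisk box to the destinations file and reloading; the path and set id below are illustrative and depend on your kamailio.cfg:

    # hypothetical destinations file; each line is "<set id> <destination URI>"
    cat >> /etc/kamailio/dispatcher.list <<'EOF'
    1 sip:10.0.0.13:5060
    EOF

    # reload the list without restarting Kamailio
    kamcmd dispatcher.reload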
Regarding Kamailio, the failover cluster (if we can call it a cluster, as there are only two servers) works perfectly.
But as with any high-tech solution, there are limits: we cannot grow the Asterisk cluster indefinitely, so at some point we'll need to add more Kamailio servers.
Knowing that the E1-to-SIP gateways redirect calls to only one IP address (the Kamailio cluster address), the question is:
How can we add any number of new Kamailio servers to the platform and load balance SIP requests across the Kamailio cluster?
Roughly speaking: how do you load balance the load balancers? :)
I thought about Kamailio + LVS integration. Any clues, anyone?
You have the following choices:
1) A "root" Kamailio with a 301 redirect setup, which simply redirects inbound calls to the set of Kamailio servers.
2) DNS that returns a different IP on each query (round-robin DNS); the clients must resolve through that DNS. See the sketch after this list.
3) http://www.lartc.org/autoloadbalance.html
4) A Cisco router, or an iptables setup similar to the LARTC one (just forward the port to different IPs in random order).
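For option 2, a rough sketch of what the zone could look like (placeholder names and addresses; the SRV variant only helps if the E1-to-SIP gateway resolves SRV records per RFC 3263):

    # round-robin A records: each query rotates through the Kamailio IPs
    dig +short sip.example.com A
    # 198.51.100.10
    # 198.51.100.11

    # or SRV records with equal priority and weight
    dig +short _sip._udp.sip.example.com SRV
    # 10 50 5060 kamailio1.example.com.
    # 10 50 5060 kamailio2.example.com.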
But please note the following: if your load is so heavy that a single(!) Kamailio server can't handle it, you are doing something wrong, or you need to hire an expert at this stage.
A single Kamailio server can easily serve up to 7,000 calls per second.
Suppose we have two servers serving requests through a load balancer. Is it necessary to have a web server on both of our servers to process the requests? Can the load balancer itself act as a web server? Suppose we are using the Apache web server and HAProxy. Does that mean the web server (Apache) should be installed on both servers and the load balancer on one of them? Why can't we have the load balancer on both of our server machines, receiving requests and talking to each other to process them?
At the most basic level, you want web servers to fulfil requests for static content, while application servers handle the business logic, i.e. requests for dynamic content.
But web servers can do many other things as well, such as authenticating and validating requests and logging metrics. An important part of the web server's job is also combining the content it gets from the application servers with a view to present to the client.
You want an LB sitting in front of both web and app servers if you have more than one server. Also, there's nothing preventing you from putting both the web and the app server on one machine.
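To make that split concrete, here is a hypothetical Apache vhost for the web tier (Debian-style paths; the /api/ prefix and port 8080 are assumptions): it serves static files itself and forwards dynamic requests to an app server which, in this case, lives on the same box.

    # requires mod_proxy: a2enmod proxy proxy_http
    cat > /etc/apache2/sites-available/webtier.conf <<'EOF'
    <VirtualHost *:80>
        ServerName www.example.com
        # static content is served directly from disk
        DocumentRoot /var/www/static
        # dynamic requests are forwarded to the app server
        ProxyPass        /api/ http://127.0.0.1:8080/api/
        ProxyPassReverse /api/ http://127.0.0.1:8080/api/
    </VirtualHost>
    EOF
    a2ensite webtier && systemctl reload apache2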
The load balancer sits in front of your web server(s) to distribute requests according to the number of sessions, a hash of source and destination IP, the requested URL, or other criteria. Additionally, it checks the availability of the backend servers to ensure requests get answered even if one server fails.
It's not installed on every web server; you only need one instance. It could be a hardware appliance, or software (like HAProxy) which may or may not be installed on one of the web servers, although that would not be prudent: if that web server failed, the proxy would no longer be able to redirect traffic to the remaining server.
There are several different scenarios for this. One is load balancing requests across two web servers which serve the same HTML content, to provide redundancy.
Another is to provide multiple websites using just one public address, i.e. applying destination NAT according to the requested URL. For this, the software has to determine the URL in the HTTP request and redirect traffic to the backend web server servicing that site. This is sometimes called a 'reverse proxy', as it hides the internal server addresses from the outside.
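A minimal, hypothetical haproxy.cfg illustrating both scenarios, round-robin across two identical web servers plus host-based routing to a separate site, with placeholder names and addresses:

    cat > /etc/haproxy/haproxy.cfg <<'EOF'
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend public
        bind *:80
        # destination routing by requested host (the 'reverse proxy' scenario)
        acl is_siteb hdr(host) -i siteb.example.com
        use_backend siteb if is_siteb
        default_backend sitea

    backend sitea
        # redundancy scenario: two servers with the same content
        balance roundrobin
        server web1 10.0.0.11:80 check
        server web2 10.0.0.12:80 check

    backend siteb
        server web3 10.0.0.13:80 check
    EOF

    # validate the config, then reload
    haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy

The 'check' keyword enables the availability probing mentioned above.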
I have an AWS EC2 server that hosts 3 domains with Apache 2. This server sits behind an AWS ELB load balancer, which sends it requests. If I want to update this server, instead of taking it down I can create a new, identical EC2 server, install all the software using the same scripts that built the first server, and, when it is ready, add the new server to the ELB and then remove the old one. This gives me zero downtime, which is great.
But before I remove the old server, how do I test the new server to prove everything is working and that it is serving those 3 domains? DNS points to the ELB for these domains, the ELB sends the requests to the server, and the Apache install on the server routes the traffic to the appropriate site depending on which subdomain was requested. Is there a way to make a request to the new server via its IP address (since that is the only way to address it before it is behind the ELB) but tell it I want a specific subdomain? If not, how else can I prove all 3 sites are running and working properly without just adding it to the ELB, removing the old server, and crossing my fingers?
P.S. Sorry for the poor title. Please edit it if you can think of a better one that better represents what I am asking.
Use the ELB health check to perform the check. I recommend you enable Apache's server-status module (mod_status). Point the health check at /server-status; if it returns 200 for a certain period of time, the ELB will mark the instance as active and healthy.
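For the other half of the question, checking each vhost before the swap: you can address the new instance by IP while presenting the subdomain's hostname. A sketch with placeholder addresses and domains:

    NEWIP=203.0.113.25   # hypothetical IP of the new, not-yet-enrolled instance

    # plain HTTP: connect to the IP but set the Host header explicitly
    curl -sI -H "Host: app1.example.com" "http://$NEWIP/"

    # HTTPS: --resolve makes curl connect to $NEWIP while keeping the real
    # hostname for SNI and certificate validation
    curl -sI --resolve "app1.example.com:443:$NEWIP" "https://app1.example.com/"

Repeat for each of the three subdomains before removing the old server.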
I would like to find a nice way to temporarily route all my production HTTP traffic to a staging server (equivalent to production) to be able to monitor it.
How can I do that? We use Apache / Tomcat 7, but any solution might be helpful as a starting point.
You really want to solve this at the network layer, by having a low-level load balancer / IP sprayer in front of these instances and being able to cut over to one of them in an active/passive manner.
DNS is an alternative way to divert some or all of the traffic after a period of time.
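Since Apache is already in the mix, a cruder but quick starting point is a temporary vhost on the production box that reverse-proxies everything to staging; the hostnames and Debian-style paths below are placeholders:

    # requires mod_proxy: a2enmod proxy proxy_http
    cat > /etc/apache2/sites-available/to-staging.conf <<'EOF'
    <VirtualHost *:80>
        ServerName www.example.com
        # keep the original Host header so staging sees the production vhost
        ProxyPreserveHost On
        ProxyPass        / http://staging.example.com/
        ProxyPassReverse / http://staging.example.com/
    </VirtualHost>
    EOF
    a2ensite to-staging && systemctl reload apache2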
When a reverse proxy is used primarily for load balancing, it is obvious why the routing of requests to a pool of N proxied servers should help balance the load.
However, once the server-side computations for the requests are complete and it's time to dispatch the responses back to their clients, how come the single reverse proxy server never becomes a bottleneck?
My intuitive understanding of the reverse proxy concept tells me:
1. that the reverse proxy server proxying N origin servers behind it would obviously NOT become a bottleneck as easily or as early as a single-server equivalent of the N proxied servers would, BUT it too would become a bottleneck at some point, because all N proxied servers' responses pass through it;
2. that, to push that bottleneck point even further out, the N proxied servers should really dispatch their responses directly to the client 'somehow', instead of doing it via the single reverse proxy sitting in front of them.
Where am I amiss in my understanding of the reverse proxy concept? Maybe point #2 is by definition NOT a reverse proxy setup, but, definitions aside, why is #2 not popular relative to the reverse proxy option?
A reverse proxy, when used for load-balancing, will proxy all traffic to the pool of origin servers.
This means that the client's TCP connection terminates at the LB (the reverse proxy), and the LB initiates a new TCP connection to one of the origin nodes on behalf of the client. The node, after having processed the request, cannot communicate with the client directly, because the client's TCP connection is open to the load balancer's IP; the client is expecting a response from the LB, not from some random dude, or a random IP (-: of some node. Thus, the response usually flows the same way as the request, via the LB. Also, you do not want to expose a node's IP to the client. All of this usually scales very well for request-response systems. So my answer to #1 is: the LB usually scales well for request-response systems, and if required, more LBs can be added behind a VIP for redundancy.
Now, having said this, it still makes sense to bypass the LB for writing responses if your responses are huge. For example, if you are streaming video in responses, you probably don't want to choke your LB with humongous responses. In such a scenario, one would configure a Direct Server Return (DSR) LB. This is essentially what you are thinking of in #2. It allows responses to flow directly from the origin servers, bypassing the LB, while still hiding the IPs of the origin nodes from clients. This is achieved by configuring ARP in a special way, so that the responses written by the origin nodes carry the IP of the LB. It is not straightforward to set up, and the usual proxy mode of the LB is fine for most use cases.
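To make the DSR idea concrete, here is a minimal sketch of the Linux (LVS-DR) flavour; the VIP and node addresses are placeholders, and a real deployment would add health checks:

    VIP=203.0.113.10

    # on the director: balance TCP port 80 in direct-routing (DSR) mode
    ipvsadm -A -t "$VIP:80" -s rr
    ipvsadm -a -t "$VIP:80" -r 10.0.0.11:80 -g
    ipvsadm -a -t "$VIP:80" -r 10.0.0.12:80 -g

    # on each origin node: hold the VIP on loopback so replies carry it as
    # their source address, but never answer ARP for it (the "special ARP"
    # configuration mentioned above)
    ip addr add "$VIP/32" dev lo
    sysctl -w net.ipv4.conf.all.arp_ignore=1
    sysctl -w net.ipv4.conf.all.arp_announce=2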
I set up an NLB cluster of two servers (WS 2008 R2). Each server has one NIC, which I configured with a static IP address. I assigned the cluster an internet name (MyCluster) and a static IP address. A third box acts as a client, sending TCP data (over WCF) to the cluster IP I configured. I am observing the NLB cluster from the NLB Manager on one of the nodes; both nodes are green, i.e. started. However, I only see traffic coming in to one of the NLB servers. When I suspend it, I see traffic going to the other NLB server, and so on. I was expecting traffic to be split equally between them. I can't figure out what I missed; any tips, please?
If you need more detailed information, please ask; I'm not sure how much detail to put in here.
Thanks.
By default, a port rule created with a filtering mode of 'Multiple host' will use 'Single' affinity. In other words, multiple requests from the same client will get directed to the same host. To see traffic going to both hosts, try accessing the cluster from multiple clients. You could also set the affinity to 'None', but this can lead to other problems.
There's good information on the affinity parameter and how to use it in the NLB help file.