Maybe the title is not as descriptive as the question, but hopefully someone can help me with this.
I am setting up a new SOA for hosting our website (an e-commerce platform). I have 3 servers available for this (bare metal at the moment) and wish to balance the load over them. Each server contains a full stack of the application, and the APIs are connected across servers via a local network (10.x.x.x).
Each server has a couple of public IP addresses to accept requests on.
What I would like is for server 1 to accept all requests and balance them across the other servers depending on the load. This part I've already set up using HAProxy instances. The part I'm missing is what should happen when server 1 fails and every connection should be redirected through server 2 or 3.
I was looking at VRRP (keepalived), but I'm only seeing setups with 2 servers, not 3 servers or more.
Does anyone have a suggestion for my kind of setup?
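For what it's worth, keepalived is not limited to two nodes: you can run one MASTER and any number of BACKUPs for the same virtual IP, differing only in priority. A minimal sketch for server 1 (the interface name, virtual router ID, and addresses are assumptions; adjust them to your network):

```
# /etc/keepalived/keepalived.conf on server 1
vrrp_instance VI_1 {
    state MASTER          # use BACKUP on servers 2 and 3
    interface eth0
    virtual_router_id 51
    priority 150          # e.g. 100 on server 2, 50 on server 3
    advert_int 1
    virtual_ipaddress {
        203.0.113.10      # public VIP your clients connect to
    }
}
```

Servers 2 and 3 get the same block with `state BACKUP` and descending priorities; whichever surviving node has the highest priority takes over the VIP, and its HAProxy instance keeps balancing to the remaining backends.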
Related
We have configured a new web farm using IIS 10, with 3 hosts serving the web traffic and a load-balancing IIS ARR 3.0 server sitting in front to distribute incoming requests between the nodes.
During initial testing (basic HTML pages), the round-robin setup (33.33% distribution between each node) was working well, but we had to enable server/client affinity so that our applications kept a consistent connection between the client session and the application. Since then, we are finding that all traffic going to these applications, originating from different machines on different networks, is being forwarded to the same application server. If you take that server offline, the application seamlessly starts running on the next server in the list (the client obviously must sign in again).
While one server is fine at this time for the two applications we have running, when we ramp up our migration and have all 140 of our applications running, I don't think one server will be too happy with the load.
ADDITIONAL INFORMATION
Load balancers / ARR servers: LB-01 (LB-02 is a duplicate server for redundancy). Default ARR URL Rewrite rule with a Route to Server Farm action (image of the LB/ARR URL Rewrite rule). Server affinity enabled, client affinity enabled with "use hostname" selected; no advanced settings, no routing rules. Default ARR proxy settings (image of the proxy settings).
Web/application servers: WEB-01, WEB-02, WEB-03. File system shared using DFS; all running on shared configurations.
The applications would be as follows:
https://www.domainname.com/application-name1
https://www.domainname.com/application-name2
...
where the application launch page changes but the domain name stays the same.
(Image of the IIS Monitoring and Management window showing the distribution.)
If there is a setting you wish to verify, please ask. I know people aren't psychic, but huge paragraphs of information never really help.
My hunch is that it is something to do with the URL rewrite; I have tried the settings in the post below to no avail.
IIS ARR & load balancing
Uncheck 'Host Name Affinity' to dispatch requests to all your hosts.
I have an AWS EC2 server that hosts 3 domains with Apache 2. This server sits behind an AWS ELB load balancer, which sends it requests. If I want to update this server, instead of taking it down I can create a new, identical EC2 instance, install all the software using the same scripts that built the first server, and when it is ready add the new server to the ELB and then remove the old one. This gives me zero downtime, which is great.
But before I remove the old server, how do I test the new server to prove everything is working and that it is serving those 3 domains? DNS points to the ELB for these domains, the ELB sends the requests to the server, and the Apache install on the server routes the traffic to the appropriate site depending on which subdomain was requested. Is there a way to make a request to the new server via its IP address (since that is the only way to address it before it is behind the ELB) but tell it I want to make a request to a specific subdomain? If not, how else can I prove all 3 sites are running and working properly without just adding it to the ELB, removing the old server, and crossing my fingers?
P.S. Sorry for the poor title. Please edit it if you can think of a better one that better represents what I am asking.
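The mechanism in play here is name-based virtual hosting: Apache picks the site from the Host header (or SNI for HTTPS), so you can reach any vhost via the bare IP as long as you supply the right hostname; with curl, for example, this is what the `--resolve` option is for. A toy sketch of the selection logic (the domain names and paths are placeholders):

```java
import java.util.Map;

public class VhostRouterSketch {
    // Maps the Host header to a document root, the way Apache
    // name-based virtual hosting does (all names are made up).
    private static final Map<String, String> VHOSTS = Map.of(
        "sub1.mydomain.com", "/var/www/site1",
        "sub2.mydomain.com", "/var/www/site2",
        "sub3.mydomain.com", "/var/www/site3"
    );

    static String docRootFor(String hostHeader) {
        return VHOSTS.getOrDefault(hostHeader, "/var/www/default");
    }

    public static void main(String[] args) {
        // A request sent straight to the server's IP still reaches
        // site2 as long as the Host header names it:
        System.out.println(docRootFor("sub2.mydomain.com"));
    }
}
```

So the test plan before the swap would be: hit the new instance's IP once per hostname, forcing each Host value in turn, and confirm each site responds correctly.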
Use the ELB health check to perform the check. I recommend enabling Apache's server-status module and pointing the health check at /server-status; if it returns 200 for a certain period of time, the ELB will mark the instance as active and healthy.
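A minimal sketch of exposing server-status for the health check (the allowed address range is an assumption; restrict it to wherever your ELB probes originate):

```
# Apache 2.4: expose mod_status so the ELB health check can hit it
<IfModule status_module>
    <Location "/server-status">
        SetHandler server-status
        Require ip 10.0.0.0/8   # assumption: your VPC/ELB range
    </Location>
</IfModule>
```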
Here is my situation. Recently, my production environment has been burned by a few Windows updates that caused some production servers to stop responding. While we have since resolved the issue of both of the servers (which are in a load-balancing configuration) getting updates on the same day, the question arose: how do we check that the application running on each server is still working? If we call the load-balanced IP, we may or may not hit a server that is working. So if the update takes out the application on one server, how do we know that this has happened?
The only idea I have for this is to purchase 2 more SSL certificates, allocate 2 more IP addresses, and assign one to each server. That way I would be guaranteed to know each server is up (we have a third-party service pinging our servers). But I have to believe that there is a better way to do this?
Please note that I am a .Net developer by trade with only an extremely small smattering of networking and IIS experience, but I'm what my small company has. So please assume I don't know where a lot of stuff is and dumb down the answer.
A load balancer maintains the live status of the servers (based on timeouts or HTTP health checks) and uses this status to route traffic only to active servers.
Generally, load balancers have a dashboard through which you can check this status. If not, you can check the logs.
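Using HAProxy as an example (the principle applies to most load balancers, including ARR; the path and addresses here are assumptions), an active HTTP health check looks like this:

```
backend app_servers
    option httpchk GET /health
    http-check expect status 200
    server web1 10.0.0.1:80 check inter 2s fall 3 rise 2
    server web2 10.0.0.2:80 check inter 2s fall 3 rise 2
```

The key point for the question above: make the checked URL exercise the application itself, not just the web server, so a broken app takes its server out of rotation and shows up on the dashboard.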
I have three different web servers in my internal lab, and each of them has its own database and web content. How can I access each of them from the internet without mapping a different port on my firewall for each one?
In other words, can I set up a fourth Apache server that, when a connection comes in to http://mydomain.com/WebServer1, sends the user to server 1 on my internal network, and for http://mydomain.com/WebServer2 redirects him to server 2, without having to open ports for the other 3 servers, only for the main web server? I have attached a diagram to show my setup.
Thanks in advance
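For what it's worth, what's being described is a reverse proxy, which Apache supports via mod_proxy. A sketch for the fourth server (the internal addresses are assumptions):

```
# Requires mod_proxy and mod_proxy_http to be enabled
<VirtualHost *:80>
    ServerName mydomain.com
    ProxyPass        "/WebServer1/" "http://192.168.1.11/"
    ProxyPassReverse "/WebServer1/" "http://192.168.1.11/"
    ProxyPass        "/WebServer2/" "http://192.168.1.12/"
    ProxyPassReverse "/WebServer2/" "http://192.168.1.12/"
</VirtualHost>
```

Only the proxy's port needs to be opened on the firewall; the three backend servers stay reachable solely over the internal network.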
I want to send all requests for the same content to the same backend server. How can I do this? Are there any open-source solutions like HAProxy which can do this?
For example: client 1 requests content A, and my load balancer directs that request to one of the backend servers, say X, on a round-robin basis. Now if I receive a request from a different client 2 for the same content A, this request should be directed to the same backend server X. Are there any open-source solutions that can do this?
Any help/pointers would be appreciated.
Thanks, Nikhil
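The requirement above amounts to deterministic, content-keyed routing rather than round robin: pick the backend by hashing the content identifier, so the same content always maps to the same server regardless of which client asks. A toy sketch of the idea (backend names are placeholders):

```java
import java.util.List;

public class ContentAffinityRouter {
    private final List<String> backends;

    ContentAffinityRouter(List<String> backends) {
        this.backends = backends;
    }

    // Deterministically map a content URI to one backend, so every
    // request for the same content lands on the same server.
    String route(String contentUri) {
        int idx = Math.floorMod(contentUri.hashCode(), backends.size());
        return backends.get(idx);
    }

    public static void main(String[] args) {
        ContentAffinityRouter r =
            new ContentAffinityRouter(List.of("X", "Y", "Z"));
        // Two requests for the same content pick the same backend:
        System.out.println(r.route("/content/A").equals(r.route("/content/A")));
    }
}
```

Real load balancers refine this with consistent hashing so that adding or removing a backend remaps only a fraction of the content keys.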
HAProxy can do what you want and more. It has many ACL options available to suit most requirements. Varnish is another option that has a robust configuration language.
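In HAProxy specifically, URI hashing gives exactly this behavior; a sketch (server names and addresses are assumptions):

```
backend content_servers
    balance uri           # hash the request URI to pick the server
    hash-type consistent  # keep the mapping stable when a server drops out
    server x 10.0.0.1:80 check
    server y 10.0.0.2:80 check
```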
Interesting question!
I'm afraid it can depend on the technology. As long as you're in the HTTP domain, maybe you can somehow configure your load balancer.
I'm a Java guy, so in Java you can have, say, EJBs. These are distributed components installed on a server that can be invoked remotely. Their communication protocol is binary, and I doubt a load balancer can read it.
So in JBoss, for example, you can create a cluster of servers and deploy different EJBs on different servers.
For example, let's assume there are two EJBs in the system: one that allows you to buy milk, and one for pizza.
So you deploy the milk EJB on server 1 and the pizza EJB on server 2.
Now you have a naming-resolution service (in Java/JBoss it's called HA-JNDI).
Its basic idea is to provide a remote stub based on a name:
PizzaEJB pizzaEjb = NamingService.getMyStub(PizzaEJB.class);
It's not real working code, of course, but it demonstrates the idea.
The trick is that this naming server knows where each EJB is deployed, so if you have the pizza EJB only on server 2, it will always return a stub that goes to server 2 to buy the pizza :)
Java programmers, then, don't really care how it's implemented under the hood. Just to give an idea: the naming service has some form of agent deployed on each server, and the agents talk to each other...
This is how Java can work here.
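To make the idea concrete, here is a toy registry in plain Java (all the EJB names and addresses are made up; real HA-JNDI does this over the network with replicated state):

```java
import java.util.Map;

// Toy sketch of the naming-service idea: the registry knows which
// server hosts each EJB and hands back the address a stub would use.
public class NamingServiceSketch {
    private static final Map<String, String> DEPLOYMENTS = Map.of(
        "MilkEJB",  "server1:1100",
        "PizzaEJB", "server2:1100"
    );

    static String getStubAddress(String ejbName) {
        return DEPLOYMENTS.get(ejbName);
    }

    public static void main(String[] args) {
        // A lookup for PizzaEJB always resolves to server 2:
        System.out.println(getStubAddress("PizzaEJB"));
    }
}
```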
Now, what I think: maybe you can base your API on RESTful web services. In that case it's an easily parsable HTTP request, so the implementation can be relatively easy (again, if your load balancer supports this kind of processing).
Hope this helps somehow.
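If the API is HTTP/REST as suggested, most load balancers can route on the URL path; an HAProxy sketch of the milk/pizza split (backend names and paths are assumptions):

```
frontend api
    bind *:80
    acl is_pizza path_beg /pizza
    use_backend pizza_servers if is_pizza
    default_backend milk_servers
```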