Apache load balancer just uses first entry

I have configured a load balancer in Apache as follows:
<Proxy "balancer://mycluster">
    BalancerMember "http://127.0.0.1:8081/" loadfactor=1
    BalancerMember "http://127.0.0.1:8082/" loadfactor=1
    ProxySet lbmethod=byrequests
</Proxy>
<VirtualHost *:80>
    ServerName subdomain.example.com
    ProxyPass / "balancer://mycluster"
    ProxyPassReverse / "balancer://mycluster"
</VirtualHost>
If I fire some requests at it, as in:
for i in `seq 1 100`; do
    curl http://subdomain.example.com/ping &
done
I get my pong (the balancer's response) 100 times, but the logs show all of those requests hitting only the first BalancerMember, on port 8081.
How can I debug this or change it to round-robin behaviour?
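One way to see which member each request actually hits (a sketch, assuming Apache 2.4 with mod_proxy_balancer and mod_status loaded; the /balancer-manager path is my choice, not part of the question) is to raise the proxy modules' log level and enable the balancer manager:
# Apache 2.4 per-module log levels: trace worker selection in the error log
LogLevel info proxy:trace2 proxy_balancer:trace2
# Live per-member request counters, plus enable/disable controls
<Location "/balancer-manager">
    SetHandler balancer-manager
</Location>
The error log should then record which BalancerMember each request was routed to, and the manager page shows the per-member election counts.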

Related

Apache 2.4 - configure ProxyPass based on full URL instead of trailing path

Currently I have the following ProxyPass directives configured in my Apache httpd.conf file.
The goal is to have one ProxyPass map http://myurl.com:port1/mypath to one balancer group, and any additional ProxyPass map http://myurl.com:port2/mypath to additional balancer groups.
Here is my configuration as it stands. It apparently matches only on /mypath and cannot take the preceding URL into account; the problem is that my two /mypath paths are identical and differ only by port1 and port2 in the URL.
Apache is already listening on port1 and port2, but I currently have no way to ensure that a request arriving on myurl.com:port1 is directed to Group1 rather than Group2 in the balancer manager, because /mypath is the same for both.
<IfModule proxy_module>
    ProxyPass /mypath balancer://Group1/ stickysession=JSESSIONID|jsessionid
    ProxyPass /mypath balancer://Group2/ stickysession=JSESSIONID|jsessionid
    <Proxy balancer://Group1>
        BalancerMember ajp://myurl.com:portX/mypath route=TC01
    </Proxy>
    <Proxy balancer://Group2>
        BalancerMember ajp://myurl.com:portY/mypath route=TC01
    </Proxy>
</IfModule>
The following does not work, but it is essentially what I am trying to do:
<IfModule proxy_module>
    ProxyPass http://myurl.com:port1/mypath balancer://Group1/ stickysession=JSESSIONID|jsessionid
    ProxyPass http://myurl.com:port2/mypath balancer://Group2/ stickysession=JSESSIONID|jsessionid
    <Proxy balancer://Group1>
        BalancerMember ajp://myurl.com:portX/mypath route=TC01
    </Proxy>
    <Proxy balancer://Group2>
        BalancerMember ajp://myurl.com:portY/mypath route=TC01
    </Proxy>
</IfModule>
Since ProxyPass cannot occur within an <If> section, it seems you are left with splitting your configuration into two VirtualHosts:
<VirtualHost *:port1>
    ServerName myurl.com
    <Proxy balancer://Group1>
        BalancerMember ajp://myurl.com:portX/mypath route=TC01
    </Proxy>
    ProxyPass /mypath balancer://Group1/ stickysession=JSESSIONID|jsessionid
</VirtualHost>
<VirtualHost *:port2>
    ServerName myurl.com
    <Proxy balancer://Group2>
        BalancerMember ajp://myurl.com:portY/mypath route=TC01
    </Proxy>
    ProxyPass /mypath balancer://Group2/ stickysession=JSESSIONID|jsessionid
</VirtualHost>
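For these VirtualHosts to match, Apache must also be listening on both ports; the question says this is already set up, but for completeness it is just (port1/port2 remaining the question's placeholders):
Listen port1
Listen port2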

Apache 2.2 mod_proxy failover/load balancing

I am using mod_proxy_balancer to manage failover of backend servers (tcServers). The backend servers may return an error code instead of timing out when some other backend service, such as NFS, fails, and we want such servers to be marked as failed nodes as well. Hence we are using the failonstatus parameter.
<Proxy balancer://awstestbalancer>
    ProxySet failonstatus=503
    BalancerMember https://host:port/context/ retry=30
    # the hot standby
    BalancerMember https://host:port/context/ status=+H retry=0
</Proxy>
ProxyPass /context balancer://awstestbalancer
ProxyPassReverse /context balancer://awstestbalancer
Currently the failover works, with one glitch: when the active node fails, the user gets a 503 error, and only from the next request onward does the standby server take over.
I don't want even a single request to fail, though. Can't mod_proxy fail over without ever returning an error to the client? If the active node fails, I want mod_proxy to retry the same request on the standby, not just subsequent requests.
I have also tried the following settings, but they did not work. I am using Apache 2.2.59.
<Proxy balancer://awstestbalancer>
    BalancerMember https://host:port/context route=tcserver1 loadfactor=1
    BalancerMember https://host:port/context route=tcserver2 loadfactor=1
    ProxySet lbmethod=bybusyness
    ProxySet nofailover=Off
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPass /context balancer://awstestbalancer
ProxyPassReverse /context balancer://awstestbalancer
and:
<Proxy balancer://awstestbalancer>
    BalancerMember https://host:port/context route=tcserver1 loadfactor=1 ping=5
    BalancerMember https://host:port/context route=tcserver2 loadfactor=1 ping=5
    ProxySet lbmethod=bytraffic
    ProxySet nofailover=On
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPass /context balancer://awstestbalancer
ProxyPassReverse /context balancer://awstestbalancer
The following configuration should work. If the backend is on AWS and its status changes frequently, you could try decreasing the connectiontimeout.
<Proxy balancer://awstestbalancer>
    BalancerMember https://host:port/context/ connectiontimeout=5
    BalancerMember https://host:port/context/ connectiontimeout=5
</Proxy>
ProxyPass /context balancer://awstestbalancer failonstatus=500,501,502,503,504
ProxyPassReverse /context balancer://awstestbalancer
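If you want to keep the hot-standby layout from the question, the same parameters can be combined with it. A sketch reusing the question's host:port placeholders, not a tested configuration:
<Proxy balancer://awstestbalancer>
    # regular member: short connect timeout so a dead node is detected quickly
    BalancerMember https://host:port/context/ connectiontimeout=5 retry=30
    # hot standby: only used while all regular members are in error state
    BalancerMember https://host:port/context/ status=+H retry=0
</Proxy>
ProxyPass /context balancer://awstestbalancer failonstatus=500,501,502,503,504
ProxyPassReverse /context balancer://awstestbalancer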

Apache mod_proxy cluster with WebSockets and HTTP

I have:
Apache 2.4.10 - 192.168.0.10
JBoss 8 node1 - 192.168.0.20 - in domain mode
JBoss 8 node2 - 192.168.0.21 - in slave mode
I am trying to create a cluster via mod_proxy at http://192.168.0.10/myapp with HTTP and WebSocket connections:
<VirtualHost *:80>
    ServerAdmin webmaster@dummy-host.example.com
    DocumentRoot /var/www/html/cluster1
    ServerName 192.168.0.10
    ErrorLog logs/cluster1_log_error
    CustomLog logs/cluster1_log_comm common
    TransferLog logs/cluster1_log_trans
    Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
    <Proxy balancer://jboss>
        BalancerMember ws://192.168.0.21:8080 route=2
        BalancerMember http://192.168.0.21:8080 route=2
        BalancerMember ws://192.168.0.20:8080 route=1
        BalancerMember http://192.168.0.20:8080 route=1
        ProxySet stickysession=ROUTEID
        ProxySet nofailover=off
    </Proxy>
    ProxyPreserveHost On
    ProxyRequests Off
    ProxyPass /myapp balancer://jboss/myapp
    ProxyPassReverse /myapp balancer://jboss/myapp
    <Location /mcm>
        SetHandler balancer-manager
    </Location>
</VirtualHost>
But if I disable both the ws and http workers for the second node (jboss2) via the balancer manager, traffic is still being sent to that node.
How do I do this right? Two balancers? One balancer? I need help.
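On the one-balancer-versus-two question, a common pattern (a sketch, not a tested answer: it assumes mod_proxy_wstunnel is loaded and that the application upgrades WebSocket connections under a dedicated path such as /myapp/ws, which is my assumption, not something stated above) is to keep the ws:// and http:// members in separate balancers and route by path, most specific first:
<Proxy balancer://jboss-ws>
    BalancerMember ws://192.168.0.20:8080 route=1
    BalancerMember ws://192.168.0.21:8080 route=2
    ProxySet stickysession=ROUTEID
</Proxy>
<Proxy balancer://jboss-http>
    BalancerMember http://192.168.0.20:8080 route=1
    BalancerMember http://192.168.0.21:8080 route=2
    ProxySet stickysession=ROUTEID
</Proxy>
# WebSocket upgrades first, then everything else over HTTP
ProxyPass /myapp/ws balancer://jboss-ws/myapp/ws
ProxyPass /myapp balancer://jboss-http/myapp
ProxyPassReverse /myapp balancer://jboss-http/myapp
With this split each node appears once per balancer, so taking a node out of rotation means disabling its worker in both balancers in the manager.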

Load Balancer with Apache HTTPD

I'm struggling to set up an Apache httpd load balancer in front of a couple of application servers. This is my configuration:
ProxyRequests off
<Proxy balancer://mycluster>
    BalancerMember http://127.0.0.1:8080
    BalancerMember http://remote-svr:8080
    ProxySet lbmethod=bybusyness
    ProxySet stickysession=JESSIONIDSSO
</Proxy>
<Location /balancer-manager>
    SetHandler balancer-manager
</Location>
ProxyPass /balancer-manager !
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
ProxyPassReverseCookieDomain http://127.0.0.1:8080 localhost
ProxyPassReverseCookieDomain http://remote-svr:8080 localhost
I'm not sure the last two lines do anything; one of the many examples I've looked at online used them, so I added them to see if they fixed my problem (they didn't).
The issue is that if I comment out either of the BalancerMember lines, e.g.:
#BalancerMember http://127.0.0.1:8080
BalancerMember http://remote-svr:8080
then the behaviour from a user perspective is fine; however, when both members are active, the behaviour is wrong.
The application initially displays a login screen, but with both members active, the user, on submitting their username and password, just gets redirected back to the login screen again; maybe the session is being lost somewhere. Does anyone have any idea what the issue might be?
EDIT - NOW WORKING
For reference, this setup now seems to work:
ProxyRequests off
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://mycluster>
    BalancerMember http://127.0.0.1:8080 route=localServer
    BalancerMember http://remote-svr:8080 route=remoteServer
    ProxySet lbmethod=bybusyness
    ProxySet stickysession=ROUTEID
</Proxy>
<Location /balancer-manager>
    SetHandler balancer-manager
</Location>
ProxyPass /balancer-manager !
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
Note that the 'route' attribute for the individual nodes needs to be set on the nodes themselves (in server.xml in this case, as the servers run JBoss). JSESSIONID worked fine as the sticky-session cookie for individual applications, but there is more than one application on each server, and the user needs to stay on the same node for all of them.
If I were to guess, you probably lose the session due to a typo in this section:
<Proxy balancer://mycluster>
    BalancerMember http://127.0.0.1:8080
    BalancerMember http://remote-svr:8080
    ProxySet lbmethod=bybusyness
    ProxySet stickysession=JESSIONIDSSO
</Proxy>
ProxySet stickysession=JESSIONIDSSO should probably say ProxySet stickysession=JSESSIONIDSSO, or maybe even JSESSIONID?

mod_proxy: sticky session does not work

I have two JBoss AS 7 servers and I'm doing load balancing using mod_proxy. Almost everything works fine except sticky sessions. The session id is in a cookie named JSESSIONID, not in the URL.
Here is my apache configuration:
NameVirtualHost *:80
<VirtualHost *:80>
    ProxyPass / balancer://mycluster/ stickysession=JSESSIONID|jsessionid nofailover=Off
    ProxyPassReverse / balancer://mycluster/
    ProxyPassReverse / http://server1:8080/
    ProxyPassReverse / http://server2:8080/
    ProxyPreserveHost On
    ProxyRequests Off
    <Location />
        Order deny,allow
        Allow from All
    </Location>
    <Proxy balancer://mycluster>
        BalancerMember http://server1:8080 route=jbossWeb1 retry=60
        BalancerMember http://server2:8080 route=jbossWeb2 retry=60
    </Proxy>
</VirtualHost>
OK, I've found it. There were two problems. First, I forgot to set the jvmRoute property in the JBoss configuration, so I set:
<system-properties>
    <property name="jvmRoute" value="nodeX"/>
</system-properties>
and changed the workers configuration to:
BalancerMember http://server1:8080 route=nodeX retry=60
The second problem was nofailover=Off, which probably caused some parts of the static content to be loaded from one server and other parts from the other.
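Put together, the corrected balancer looks roughly like this (a sketch: the route values must match each server's jvmRoute, and node1/node2 stand in for whatever you set there):
<Proxy balancer://mycluster>
    BalancerMember http://server1:8080 route=node1 retry=60
    BalancerMember http://server2:8080 route=node2 retry=60
</Proxy>
ProxyPass / balancer://mycluster/ stickysession=JSESSIONID|jsessionid
ProxyPassReverse / balancer://mycluster/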