Load balancing between servers using Apache and JBoss - apache

I am facing the following scenario: I have three servers, each running an instance of my application in a standalone JBoss, and I am trying to use a separate machine to load balance between them using Apache's mod_proxy_balancer module (or at least trying to). The balancing itself was easy to set up and works correctly, but I am having trouble keeping user sessions and cookies: whenever a new request comes in, the balancer may send it to a different server, so the user loses their session. I would like requests from a user who already has a session on one of the servers to keep going to that same server, or something along those lines.
Is it possible to achieve this with these tools? If so, how should I set it up? If not, what other tool or feature should I use?
Here's the virtual host configuration:
<VirtualHost *:80>
    ServerName server.int

    ProxyPass /balancer-manager !
    ProxyPass / balancer://balance/ stickysession=JSESSIONID|jsessionid scolonpathdelim=On
    ProxyPassReverse / balancer://balance/ stickysession=JSESSIONID|jsessionid scolonpathdelim=On
    ProxyPassReverseCookiePath / /

    <Proxy balancer://balance/>
        BalancerMember "http://server1.int" loadfactor=50
        BalancerMember "http://server2.int" loadfactor=25
        BalancerMember "http://server3.int" loadfactor=25
        ProxySet lbmethod=byrequests
    </Proxy>

    <Location /balancer-manager>
        SetHandler balancer-manager
    </Location>
</VirtualHost>
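A side note on the configuration above: mod_proxy_balancer's stickysession only works when the JSESSIONID value carries a route suffix (e.g. ABC123.node1) that Apache can map back to a specific BalancerMember. A minimal sketch of that variant, assuming each standalone JBoss instance is configured to append a matching route to its session IDs (via jvmRoute / instance-id, depending on the JBoss version; node1-node3 are illustrative names):
<Proxy balancer://balance/>
    # route= must match the suffix each JBoss instance appends to JSESSIONID
    BalancerMember "http://server1.int" loadfactor=50 route=node1
    BalancerMember "http://server2.int" loadfactor=25 route=node2
    BalancerMember "http://server3.int" loadfactor=25 route=node3
    ProxySet lbmethod=byrequests stickysession=JSESSIONID|jsessionid
</Proxy>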

Although no one has answered, I will leave the solution to my problem here in case it helps anyone in the future. I ended up using HAProxy, which can do exactly what I needed in a very simple way.
frontend app
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/cert.pem
    redirect scheme https if !{ ssl_fc }
    mode http
    default_backend app

backend app
    balance leastconn
    mode http
    option httpchk HEAD / HTTP/1.0
    cookie SERVERID insert indirect nocache
    server server1 server1.test.com:80 check weight 50 fall 3 rise 2 cookie server1
    server server2 server2.test.com:80 check weight 50 fall 3 rise 2 cookie server2
    server server3 server3.test.com:80 check weight 50 fall 3 rise 2 cookie server3
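With the cookie SERVERID insert line, HAProxy sets its own stickiness cookie, so persistence no longer depends on the application's JSESSIONID at all. A quick way to verify it from a client (the balancer hostname below is illustrative):
# HEAD request through the balancer; the first response should carry the stickiness cookie
curl -skI https://balancer.test.com/ | grep -i '^set-cookie'
# Expected shape: Set-Cookie: SERVERID=server1; path=/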

Related

Apache load balancer dropping the HTTP request body

I have configured an Apache HTTP server with mod_proxy to load balance between two Jetty servers (sticky sessions).
Everything works fine and as expected while the two servers are up and running. But if I take one of the servers down and then attempt an HTTP POST routed to that server, the Apache balancer redirects the POST to the running server but with an empty body, losing the original request.
After the request that triggered the redirect to the running server, all subsequent requests work fine.
Apache configuration:
<Proxy balancer://cluster>
    BalancerMember http://localhost:9090 route=node1
    BalancerMember http://localhost:9091 route=node2
    ProxySet stickysession=JSESSIONID
</Proxy>
ProxyPreserveHost On
ProxyPass "/" "balancer://cluster/"
ProxyPassReverse "/" "balancer://cluster/"
I'm using Apache HTTP Server 2.4 and Jetty 9.4.22.
Any ideas on why this is happening?
Thanks.
It looks like you hit the bug introduced as a regression in 2.4.41. You can check out the details here: https://bz.apache.org/bugzilla/show_bug.cgi?id=63891
To remedy it, you will need to upgrade to 2.4.42 or later.
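If you are unsure which exact 2.4.x release the proxy is running, you can check on the server itself (the command may be httpd -v or apache2 -v depending on the distribution):
apachectl -v
# e.g. Server version: Apache/2.4.41 (Unix) -> affected by the regression; 2.4.42 and later contain the fix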

JSessionID is not persistent on Apache web server

Below is my httpd.conf configuration. I have a Tomcat server as the backend and I am using the Apache web server as a proxy in front of it.
The configuration below works fine for all web pages where a session is not required.
When I investigated further, I observed that JSESSIONID changes on every request, meaning the ID is not persisted when the request and response go through the Apache HTTP server.
Please note that when I exposed the Tomcat server directly to the web, JSESSIONID was persistent and worked as expected. However, as a security requirement, we need to keep the Tomcat server as an internal backend only.
So I am not sure why the Apache HTTP server does not handle JSESSIONID properly. Please help me figure out what I am missing in my configuration.
Note: we don't need a load balancer setup, so I am not considering the mod_proxy_balancer module at this moment.
<VirtualHost *:443>
    ServerName www.external.com

    ProxyRequests Off
    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    AddOutputFilterByType SUBSTITUTE text/html
    ProxyPreserveHost off
    ProxyPass / http://localhost:8080/internal/
    ProxyPassReverse / http://localhost:8080/internal/
    Substitute "s|http://localhost:8080/internal|https://www.external.com|i"

    SSLProtocol all
    SSLEngine on
    SSLCertificateFile C:/keys/site/external_cert.cer
    SSLCertificateKeyFile C:/keys/site/www_internal_private.p12.pri.pem
    SSLCertificateChainFile C:/keys/site/Intermediate_CA.cer
</VirtualHost>
The Apache web server is Apache 2.4 and the Tomcat engine is Tomcat 8.5.
Follow this Server Fault answer by adding a Set-Cookie header edit:
In the end I just had to add the following line to my VirtualHost configuration, which changes all cookie paths from /WEBAPP_NAME to / (root):
Header edit Set-Cookie "^(.*; Path=)/WEBAPP_NAME/?(.*)" $1/$2
Alternatively, it is enough to set a fixed cookie path in web.xml:
<session-config>
    <cookie-config>
        <path>/</path>
    </cookie-config>
</session-config>
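Since the application is mounted under /internal on the backend but served from / externally, the cookie path can also be corrected in mod_proxy itself, without a Header edit or a web.xml change. A minimal sketch against the configuration above:
ProxyPass / http://localhost:8080/internal/
ProxyPassReverse / http://localhost:8080/internal/
# Rewrite the Path attribute of backend Set-Cookie headers from /internal to /
ProxyPassReverseCookiePath /internal /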

Apache HTTPS to HTTP Nginx

My configuration is as follows: one Unix server with two HTTP servers running at the same time:
Apache server on ports 80 and 443
Nginx server on port 8200 (www.myserver.com:8200)
The problem is that when I log in to the Nginx site I need to authenticate there, and doing this over the internet with no SSL is not wise. I would like to connect to my Apache server over SSL and be transparently proxied to the other site, so that I authenticate over an encrypted connection.
Nginx works over plain HTTP, so no SSL there. I would like the URL
https://www.myserver.com/duplicati to be proxied to http://www.myserver.com:8200
Effectively I want to have:
an encrypted connection from the web client to www.myserver.com
a proxied connection from https://www.myserver.com/duplicati to http://www.myserver.com:8200 (unencrypted); since it stays on one physical machine, I don't care much about the lack of encryption there
What I did was the following Apache config:
ProxyRequests Off
<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>

ProxyPass /duplicati/ http://127.0.0.1:8200/ngax/
ProxyPassReverse /duplicati/ http://127.0.0.1:8200/ngax/

<Location /duplicati/>
    ProxyPassReverse /
    Order deny,allow
    Allow from all
</Location>

Header edit Location ^http://127.0.0.1:8200/ngax/ https://127.0.0.1:8200/ngax/
Still no luck with that config.
It looks like a simple thing to do, but after five hours of struggle I need to send my very first post to the Stack Overflow community ;-)
Could you kindly help me with it?
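For comparison, a minimal proxy stanza for this case, assuming the backend really serves its UI under /ngax/, would typically look like the sketch below. ProxyPassReverse already rewrites backend Location headers back to the public path, so a separate Header edit pointing at 127.0.0.1 should not be needed:
# Inside the *:443 virtual host that already has SSLEngine on
ProxyRequests Off
ProxyPass /duplicati/ http://127.0.0.1:8200/ngax/
ProxyPassReverse /duplicati/ http://127.0.0.1:8200/ngax/
# Optional: send the bare path to the trailing-slash form so relative links resolve
Redirect permanent /duplicati /duplicati/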

Endeca cluster setup with Apache load balancing

I configured an Endeca cluster with Apache load balancing across two Dgraphs. The Dgraphs run on different machines, and Apache port 5555 is used for load balancing. I have two application servers. I am getting the Endeca response from only one Dgraph; I cannot get a response from the other one, and it returns no records. On which machine must port 5555 be running? Should it run on both Dgraph machines or on the web server machine? Can you help me get responses from both Dgraphs? I need to finish this quickly.
Thanks in advance.
DgraphA1 - running on machine A
DgraphB1 - running on machine B (the ITL host)
App server 1 points to DgraphA1 and app server 2 points to DgraphB1.
The following is configured in Apache for Endeca load balancing; I configured listen port 5555 in the Apache instance on machine A.
For the app servers, Apache is configured in machine A's httpd.conf file.
NameVirtualHost *:5555
<VirtualHost *:5555>
    ServerName MachineA

    ProxyPass / balancer://dgraphs/
    ProxyPassReverse / balancer://dgraphs/

    <Proxy balancer://dgraphs>
        BalancerMember http://MachineA:15000 loadfactor=1 retry=0
        BalancerMember http://MachineB:15000 loadfactor=1 retry=0
    </Proxy>
</VirtualHost>

<Location /balancer-manager>
    SetHandler balancer-manager
</Location>
Figured it out myself.
The Apache instance on port 5555 needs to run on the ITL host (DgraphB1).
Both app servers need to point to the ITL host (DgraphB1) on port 5555.
Everything is working fine now.
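If load balancing still returns results from only one side, a quick sanity check is to query each Dgraph's admin ping endpoint directly from the machine running Apache (assuming the standard MDEX admin interface on the Dgraph port):
# Both commands should come back with a ping/alive response
curl "http://MachineA:15000/admin?op=ping"
curl "http://MachineB:15000/admin?op=ping"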

Apache mod_proxy on Azure

I keep running into an issue with Apache's mod_proxy where it won't forward any traffic. I'm using a Windows Azure virtual machine running Ubuntu 13.04 and have configured the proper HTTPS endpoint (port 443) for it. The proper Apache modules (proxy, ssl, etc.) are all installed, and the error logs show nothing, not even a warning to explain why this is happening. My VirtualHost setup is as follows:
<VirtualHost *:443>
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPreserveHost On
    ServerName www.example.com

    SSLEngine On
    #SSLProxyEngine On
    SSLCertificateFile /ssl/my.com.crt
    SSLCertificateKeyFile /ssl/my.key

    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    <Location />
        SSLRequireSSL
        Order deny,allow
        Allow from all
    </Location>

    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
I have Listen 443 and NameVirtualHost *:443 set as well. The service on the other port is running fine: a wget returns an HTTP 200 OK response, and I can reach it by manually entering the port number. I have also disabled all firewalls (for testing) to no avail. However, whenever I try to reach the service from the outside world through mod_proxy (port 443), the request times out and I get the usual "website not available" browser error.
If it means anything, the app on the other port that I need to forward HTTPS traffic to is a Play Framework 2.1 application. I set the server up exactly as in their documentation and still have these problems, so I'm assuming it may have something to do with Azure.
Any ideas? Is there some other type of endpoint configuration that I need to do specific for Windows Azure virtual machines to support SSL/TLS?
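One way to separate an Apache configuration problem from an Azure endpoint or networking problem is to test the HTTPS virtual host from the VM itself, bypassing the Azure endpoint entirely (www.example.com matches the ServerName above):
# Run on the VM: --resolve pins the vhost name to localhost, -k skips certificate name checks
curl -vk --resolve www.example.com:443:127.0.0.1 https://www.example.com/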
So, apparently, I have no idea how or why, but the Azure gods decided to shine upon my setup all of a sudden. Overnight, without so much as a reboot, mod_proxy on Azure just started working. I have no idea what the issue was, or even whether there was one in the first place, but apparently the problem lay somewhere in the Azure infrastructure.
Sorry I couldn't be of more help for others encountering similar issues, but just giving it time worked for some unknown reason.