Clustering (Apache 2.2.25 mod_jk + JBoss EAP 6.2) with Session Replication OFF and stickiness ON does not work - apache

I am using JBoss EAP 6.2 and Apache 2.2.25-no-ssl for load balancing and clustered deployment of my application.
I want session replication off and sticky sessions on.
But after trying all sorts of configurations, I noticed that my load balancer does not stick a user's requests to one particular node based on the session id; instead it forwards requests to another node.
Below are my cluster configurations.
Number of cluster nodes = 2
Apache Load balancer = Apache 2.2.25-no-ssl
App server = JBoss EAP 6.2.0
Apache Load Balancer configuration
workers.properties
# Define list of workers that will be used
# for mapping requests
worker.list=loadbalancer,status
# Define Node1
# modify the host as your host IP or DNS name.
worker.node1.port=8009
worker.node1.host=172.20.150.33
worker.node1.type=ajp13
worker.node1.ping_mode=A
worker.node1.lbfactor=1
# Define Node2
# modify the host as your host IP or DNS name.
worker.node2.port=8209
worker.node2.host=172.20.150.33
worker.node2.type=ajp13
worker.node2.ping_mode=A
worker.node2.lbfactor=1
# Load-balancing behavior
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1
# Status worker for managing load balancer
worker.status.type=status
uriworkermap.properties
# Simple worker configuration file
# Mount the Servlet context to the ajp13 worker
/*=loadbalancer
mod-jk.conf
# Load mod_jk module
# Specify the filename of the mod_jk lib
LoadModule jk_module modules/mod_jk.so
# Where to find workers.properties
JkWorkersFile conf/workers.properties
# Where to put jk logs
JkLogFile logs/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel debug
# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
# JkOptions indicate to send SSL KEY SIZE
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
# JkRequestLogFormat
JkRequestLogFormat "%w %V %T"
# Mount your applications
# The default setting only sends Java application data to mod_jk;
# here all URLs are sent through mod_jk to the load balancer.
JkMount /* loadbalancer
# Add shared memory.
# This directive is present with 1.2.10 and
# later versions of mod_jk, and is needed
# for load balancing to work properly
JkShmFile logs/jk.shm
# You can use external file for mount points.
# It will be checked for updates each 60 seconds.
# The format of the file is: /url=worker
# /examples/*=loadbalancer
JkMountFile conf/uriworkermap.properties
# Add jkstatus for managing runtime data
<Location /jkstatus/>
JkMount status
Order deny,allow
Deny from all
Allow from 127.0.0.1
</Location>
mod-jk.conf is included from httpd.conf, and Apache runs on port 80.
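For completeness, including the file from httpd.conf is typically a single line (the exact path is an assumption; adjust it to wherever mod-jk.conf actually lives):

```apache
# at the bottom of httpd.conf
Include conf/mod-jk.conf
```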
In JBoss EAP, I created two replicas of the standalone folder, named node1 and node2, on the same machine, as shown below.
Commands to start the servers:
node1
standalone.bat -c standalone-ha.xml -b 172.20.150.33 -u 230.0.10.0 -Djboss.server.base.dir=../node1 -Djboss.node.name=node1 -Dlogging.configuration=file:/${JBOSS_HOME}/node1/configuration/logging.properties
node2
standalone.bat -c standalone-ha.xml -b 172.20.150.33 -u 230.0.10.0 -Djboss.server.base.dir=../node2 -Djboss.node.name=node2 -Dlogging.configuration=file:/${JBOSS_HOME}/node2/configuration/logging.properties -Djboss.socket.binding.port-offset=200
I tried with session replication on (by adding it in web.xml), but the same problem still exists.
Below are my JSESSIONID observations.
on first request
JSESSIONID = SY1d0wVTmX2b-czp50whdmCW.61423f3f-b623-3da4-bd2f-69ba448af636, where 61423f3f-b623-3da4-bd2f-69ba448af636 is the jvm-route for node2.
on second request
JSESSIONID = QMTCTAzt2u-ANTidqZdBIzxO.f742b8d4-46f7-3914-86bb-1044d0a1bfce, where f742b8d4-46f7-3914-86bb-1044d0a1bfce is the jvm-route for node1.
It seems that even though the jvm-route is appended to the primary session id, the load balancer (Apache mod_jk) still sends requests to the other node instead of sticking to the one on which the session was established.
Please help.
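For reference, mod_jk takes the route from the part of JSESSIONID after the last dot, and it sticks only when that suffix exactly matches a worker name from workers.properties (node1/node2 here). A minimal shell check of the first cookie value quoted above:

```shell
#!/bin/sh
# mod_jk reads the route as everything after the last "." in JSESSIONID
# and sticks only if it matches a worker name (node1 or node2 here).
JSESSIONID='SY1d0wVTmX2b-czp50whdmCW.61423f3f-b623-3da4-bd2f-69ba448af636'
route="${JSESSIONID##*.}"   # strip everything up to and including the last dot
echo "route=$route"
case "$route" in
  node1|node2) echo "matches a worker name: stickiness can work" ;;
  *)           echo "no matching worker: mod_jk falls back to balancing" ;;
esac
```

The suffixes observed above are generated hashes rather than node1/node2, so mod_jk cannot map them to a worker. In EAP 6 the appended route comes from the web subsystem's instance-id attribute; setting it to ${jboss.node.name} is the usual way to make it match the worker names, though that is an assumption about this particular setup.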

For this kind of scenario you need to implement the following architecture. We are using the same architecture to host WildFly.
Note: please do not forget to enable session stickiness/connection persistency on both the load balancer (LB) and Apache mod_jk.
In this architecture:
For:
WEB1 -> APP1 is the active app node
WEB2 -> APP2 is the active app node
So if a request/connection arrives at the LB, it is redirected to WEB1. Since session stickiness/connection persistency is enabled on the LB, all requests coming from the same client are redirected to WEB1 only.
Here is my Apache Load Balancer configuration:
workers.properties for node1
# Define list of workers that will be used
# for mapping requests
# The configuration directives are valid
# for the mod_jk version 1.2.18 and later
#
worker.list=loadbalancer,status
# Define Node1
# modify the host as your host IP or DNS name.
worker.node1.port=8009
worker.node1.host=10.62.147.58
worker.node1.type=ajp13
worker.node1.lbfactor=1
#worker.node1.socket_timeout=600
#worker.node1.ping_timeout=1000
worker.node1.ping_mode=A
#worker.node1.connection_pool_timeout=600
worker.node1.redirect=node2
# Define Node2
# modify the host as your host IP or DNS name.
worker.node2.port=8009
worker.node2.host=10.62.147.59
worker.node2.type=ajp13
worker.node2.lbfactor=1
#worker.node2.socket_timeout=600
#worker.node2.ping_timeout=1000
worker.node2.ping_mode=A
#worker.node2.connection_pool_timeout=600
worker.node2.activation=disabled
# Load-balancing behavior
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1
worker.loadbalancer.retry_interval=30
worker.loadbalancer.recover_time=20
#worker.loadbalancer.sticky_session_force=1
# Status worker for managing load balancer
worker.status.type=status
workers.properties for node2
# Define list of workers that will be used
# for mapping requests
# The configuration directives are valid
# for the mod_jk version 1.2.18 and later
#
worker.list=loadbalancer,status
# Define Node1
# modify the host as your host IP or DNS name.
worker.node1.port=8009
worker.node1.host=10.62.147.58
worker.node1.type=ajp13
worker.node1.lbfactor=1
#worker.node1.socket_timeout=600
#worker.node1.ping_timeout=1000
worker.node1.ping_mode=A
#worker.node1.connection_pool_timeout=600
worker.node1.activation=disabled
# Define Node2
# modify the host as your host IP or DNS name.
worker.node2.port=8009
worker.node2.host=10.62.147.59
worker.node2.type=ajp13
worker.node2.lbfactor=1
#worker.node2.socket_timeout=600
#worker.node2.ping_timeout=1000
worker.node2.ping_mode=A
#worker.node2.connection_pool_timeout=600
worker.node2.redirect=node1
# Load-balancing behavior
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1
worker.loadbalancer.retry_interval=30
worker.loadbalancer.recover_time=20
#worker.loadbalancer.sticky_session_force=1
# Status worker for managing load balancer
worker.status.type=status
mod-jk.conf
# Load mod_jk module
# Specify the filename of the mod_jk lib
LoadModule jk_module modules/mod_jk.so
# Where to find workers.properties
JkWorkersFile conf/workers.properties
# Where to put jk logs
JkLogFile logs/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel info
# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
# JkOptions indicate to send SSL KEY SIZE
# Notes:
# 1) Changed from +ForwardURICompat.
# 2) For mod_rewrite compatibility, use +ForwardURIProxy (default since 1.2.24)
# See http://tomcat.apache.org/security-jk.html
JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories +ForwardURIProxy +ForwardURICompat
# JkRequestLogFormat
JkRequestLogFormat "%w %V %T"
# Mount your applications
#JkMount /__application__/* loadbalancer
# Let Apache serve the images
#JkUnMount /__application__/images/* loadbalancer
# You can use external file for mount points.
# It will be checked for updates each 60 seconds.
# The format of the file is: /url=worker
# /examples/*=loadbalancer
JkMountFile conf/uriworkermap.properties
# Add shared memory.
# This directive is present with 1.2.10 and
# later versions of mod_jk, and is needed
# for load balancing to work properly
# Note: Replaced JkShmFile logs/jk.shm due to SELinux issues. Refer to
# https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=225452
JkShmFile run/jk.shm
JkMount /* loadbalancer
# Add jkstatus for managing runtime data
<Location /jkstatus>
JkMount status
Order deny,allow
Deny from none
Allow from All
</Location>

Related

Apache HTTP / mod_jk only working when one worker is active

I added the following mod-jk.conf file and included it in httpd.conf:
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"
JkMount /MyApp/* loadbalancer
JkShmFile logs/jk.shm
JkMount /status status
I also added the following workers.properties file:
worker.list=loadbalancer,status
worker.node1.port=8009
worker.node1.host=10.1.4.49
worker.node1.type=ajp13
worker.node1.lbfactor=1
worker.node2.port=8009
worker.node2.host=10.1.4.51
worker.node2.type=ajp13
worker.node2.lbfactor=1
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.status.type=status
When I have two workers running, the HTTP server connects to the Tomcat server/app, but it says the connection to the server is closed. However, if I take out one of the workers (it doesn't matter which one), I can connect to the Tomcat server/app just fine.
For whatever reason, my load balancer only works when there is one active worker.
When using a load balancer with sticky sessions, you need to set up each Tomcat with the correct jvmRoute matching what you have defined in the workers.properties file. In my case I have workers named node1 and node2, so I should expect to find
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
on the tomcat at address 10.1.4.49 and
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node2">
on the tomcat at address 10.1.4.51

JK_Mount apache forbidden without trailing slash

I'm running Apache 2.4.6 on CentOS 7 and got Apache Tomcat 7 working with Apache through mod_jk. I'm using Tomcat to serve Apache Solr. The Solr instance works great both on port 8080 through Tomcat directly and now on port 80 using the mod_jk connector.
I notice that the Solr page works fine if I put this URL into my browser:
http://solr1.mydomain.com/solr/
However, if I give the URL without a trailing slash, like so:
http://solr1.mydomain.com/solr
I get the following response from Apache:
Forbidden
You don't have permission to access /solr on this server.
This is how I have everything set up in my Apache VHOST:
# Update this path to match your modules location
LoadModule jk_module modules/mod_jk.so
# Where to find workers.properties
# Update this path to match your conf directory location (put workers.properties next to httpd.conf)
JkWorkersFile /etc/httpd/conf/workers.properties
# Where to put jk shared memory
# Update this path to match your local state directory or logs directory
JkShmFile /var/log/httpd/mod_jk.shm
# Where to put jk logs
# Update this path to match your logs directory location (put mod_jk.log next to access_log)
JkLogFile /var/log/httpd/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel info
# Select the timestamp log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
# Send everything for context /examples to worker named worker1 (ajp13)
<VirtualHost *:80>
ServerName solr1.mydomain.com
# Select the timestamp log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
# Send everything for context /examples to worker named worker1 (ajp13)
Alias /solr /usr/share/tomcat/webapps/solr
JkMount /test/* worker1
JkMount /solr/* worker1
</VirtualHost>
This is the response that I am getting in the error logs:
[Sun Nov 02 15:53:22.289517 2014] [authz_core:error] [pid 22386] [client 47.18.111.100:40247] AH01630: client denied by server configuration: /usr/share/tomcat/webapps/solr
I'd really appreciate your feedback on how I could make this work without the trailing slash!
Thanks
Later, but maybe useful for someone else:
JkMount /solr* worker1
The solution is to remove the slash "/" after the URL prefix in the JkMount command; it's working for me.
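In other words, the glob without the slash matches both forms of the URL. An equivalent, more explicit spelling mounts both patterns separately (worker name taken from the question):

```apache
# /solr* matches /solr as well as /solr/whatever
JkMount /solr* worker1

# more explicit alternative:
# JkMount /solr   worker1
# JkMount /solr/* worker1
```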

Nexus kills forwarding from apache to tomcat

I'm trying to run Nexus as a WAR in Tomcat 6. It deploys, starts, and initializes without showing any errors in the logs, but it kills forwarding from Apache to Tomcat.
We are using libapache2-mod-jk, and it should be correctly configured. Hudson is also running as a WAR and no longer works either. If I remove Nexus from Tomcat, everything works fine again.
The error I found was in /var/log/apache2/mod_jk.log:
[error] ajp_send_request::jk_ajp_common.c (1630): (ajp13_worker) connecting to backend failed. Tomcat is probably not started or is listening on the wrong port (errno=111)
Config looks like this /etc/libapache2-mod-jk/workers.properties
#------ worker list ------------------------------------------
#---------------------------------------------------------------------
#
#
# The workers that your plugins should create and work with
#
worker.list=ajp13_worker
#
#------ ajp13_worker WORKER DEFINITION ------------------------------
#---------------------------------------------------------------------
#
#
# Defining a worker named ajp13_worker and of type ajp13
# Note that the name and the type do not have to match.
#
worker.ajp13_worker.port=8009
worker.ajp13_worker.host=localhost
worker.ajp13_worker.type=ajp13
#
# Specifies the load balance factor when used with
# a load balancing worker.
# Note:
# ----> lbfactor must be > 0
# ----> Low lbfactor means less work done by the worker.
worker.ajp13_worker.lbfactor=1
#
# Specify the size of the open connection cache.
#worker.ajp13_worker.cachesize
#
#------ DEFAULT LOAD BALANCER WORKER DEFINITION ----------------------
#---------------------------------------------------------------------
#
#
# The loadbalancer (type lb) workers perform weighted round-robin
# load balancing with sticky sessions.
# Note:
# ----> If a worker dies, the load balancer will check its state
# once in a while. Until then all work is redirected to peer
# workers.
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=ajp13_worker
And in /etc/apache2/sites-available/default-ssl
######## Tomcat
JkMount /agilefant/* ajp13_worker
JkMount /hudson/* ajp13_worker
JKMount /nexus/* ajp13_worker
Any idea?
You should not run Nexus as a WAR. This is not recommended and is about to be completely deprecated. Run it from the bundle download, which has a preconfigured Eclipse Jetty instance in it.

apache mod_jk loadbalancing glassfish cluster instances issue

I have a Java EE 6 EAR application deployed on an open-source GlassFish v3.1 server running on Windows 2003 R2 Server, on 2 clusters with 2 instances each. To balance the workload, I am using an Apache HTTP server with mod_jk. When I look at the jk status page, however, I see that the work is being distributed to only one of the instances in each cluster, even though all four have the same lbfactor of 1. Any ideas?
Here is my workers.properties config:
worker.list=loadbalancerLocal,status
worker.status.type=status
worker.ViewerLocalInstance1.type=ajp13
worker.ViewerLocalInstance1.host=localhost
worker.ViewerLocalInstance1.port=8109
worker.ViewerLocalInstance1.lbfactor=1
worker.ViewerLocalInstance1.socket_keepalive=1
worker.ViewerLocalInstance1.socket_timeout=1000
worker.ViewerLocalInstance2.type=ajp13
worker.ViewerLocalInstance2.host=localhost
worker.ViewerLocalInstance2.port=8209
worker.ViewerLocalInstance2.lbfactor=1
worker.ViewerLocalInstance2.socket_keepalive=1
worker.ViewerLocalInstance2.socket_timeout=1000
worker.ViewerLocalInstance3.type=ajp13
worker.ViewerLocalInstance3.host=localhost
worker.ViewerLocalInstance3.port=8309
worker.ViewerLocalInstance3.lbfactor=1
worker.ViewerLocalInstance3.socket_keepalive=1
worker.ViewerLocalInstance3.socket_timeout=1000
worker.ViewerLocalInstance4.type=ajp13
worker.ViewerLocalInstance4.host=localhost
worker.ViewerLocalInstance4.port=8409
worker.ViewerLocalInstance4.lbfactor=1
worker.ViewerLocalInstance4.socket_keepalive=1
worker.ViewerLocalInstance4.socket_timeout=1000
worker.loadbalancerLocal.type=lb
worker.loadbalancerLocal.sticky_session=True
worker.loadbalancerLocal.balance_workers=ViewerLocalInstance1,ViewerLocalInstance2,ViewerLocalInstance3,ViewerLocalInstance4
Here is my httpd config for mod_jk
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
# Where to put jk logs
JkLogFile logs/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel info
# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
# JkOptions indicate to send SSL KEY SIZE,
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
# JkRequestLogFormat set the request format
JkRequestLogFormat "%w %V %T"
JkMount /Viewer/* loadbalancerLocal
JkMount /Viewer loadbalancerLocal
JkMount /jkstatus/* status
You have the sticky session turned on. Try turning it off. You may need to unset jvmRoute too if you have set it.
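In the workers.properties from the question, that change is a single line (a sketch):

```properties
# disable stickiness so requests are balanced regardless of jvmRoute
worker.loadbalancerLocal.sticky_session=False
```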

Spring Session Replication Problem

I'm currently researching load balancing for my Spring project. I've used the Apache web server as a front end to multiple Tomcat instances, with mod_jk for the load balancing. When I run it and shut down one server, I have to log in to the system again. Previously I tried a simpler example with Tomcat's session example program, and the session replication worked fine.
This is my configuration for the Apache's httpd.conf mod_jk part:
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/jk.log
JkLogLevel debug
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"
JkMount /test balancer <-- this is the spring program
JkMount /test/* balancer <-- this is the spring program
JkMount /jk_status status
And this is my workers.properties setting:
workers.tomcat_home=/worker1
workers.java_home=$JAVA_HOME
ps=/
worker.list=balancer,status
worker.worker1.port=8009
worker.worker1.host=localhost
worker.worker1.type=ajp13
worker.worker1.lbfactor=1
worker.worker2.port=8109
worker.worker2.host=localhost
worker.worker2.type=ajp13
worker.worker2.lbfactor=1
worker.balancer.type=lb
worker.balancer.balance_workers=worker1,worker2
worker.balancer.method=B
worker.balancer.sticky_session=1
worker.status.type=status
And I've put a sample of one of my tomcat's server.xml here: http://pastebin.com/0j0ta2WA
I've also added the <distributable/> tag to my application's web.xml. Is there something I missed here that makes the session replication not work?
Tomcat 5.5
Apache 2.2
mod_jk
Spring 2.5.6
JDK 1.6.0_01
Do you have a jvmRoute defined in your server.xml?
Here are the docs:
http://tomcat.apache.org/tomcat-5.5-doc/config/engine.html
I would have looked at your server.xml but the link is wrong.
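For the workers named worker1 and worker2 in the question's workers.properties, the matching Engine elements would look like this sketch (one per Tomcat instance; the port-to-instance mapping is taken from the question):

```xml
<!-- in the server.xml of the Tomcat behind worker.worker1 (AJP port 8009) -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">

<!-- in the server.xml of the Tomcat behind worker.worker2 (AJP port 8109) -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker2">
```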