I followed the instructions from Stuart Douglas's video to enable WildFly to balance requests without needing Apache + mod_cluster, a feature that has been available since WildFly 9.
It worked just like in the video. But then, instead of adding the third backend server to the same host, I created another host and added the backend3 server to it; backend3 was also added to the backend-servers group.
So I had the following layout:
Server one (host controller and load balancer):
Backend1
Backend2
Server two (slave):
Backend3
I started the second host as a slave and I could access clustering-demo using its IP and the backend3 port. In addition, the host controller was able to register the slave:
[Host Controller] 10:05:52,198 INFO [org.jboss.as.domain.controller] (Host Controller Service Threads - 56) WFLYHC0019: Registered remote slave host "srv217", JBoss WildFly Full 10.0.0.Final (WildFly 2.0.10.Final)
However, when I accessed the main server, the load was still being balanced only to backend1 and backend2.
I tried stopping both and leaving only backend3 running, but then I couldn't access clustering-demo through the load balancer anymore.
Does anyone know if additional configuration is required for the load balancer to work with a slave host?
EDIT:
I'm adding my host controller and slave logs.
Host controller: http://pastebin.com/nyaDiPzS
Slave: http://pastebin.com/kMS72E4U
These lines caught my attention:
[Server:backend2] 08:56:58,956 INFO [org.infinispan.CLUSTER] (remote-thread--p7-t1) ISPN000310: Starting cluster-wide rebalance for cache clustering-demo.war, topology CacheTopology{id=1, rebalanceId=1, currentCH=DefaultConsistentHash{ns=80, owners = (1)[master:backend2: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[master:backend2: 40+40, master:backend1: 40+40]}, unionCH=null, actualMembers=[master:backend2, master:backend1]}
[Server:backend2] 08:56:59,023 INFO [org.infinispan.CLUSTER] (remote-thread--p7-t1) ISPN000310: Starting cluster-wide rebalance for cache routing, topology CacheTopology{id=1, rebalanceId=1, currentCH=DefaultConsistentHash{ns=80, owners = (1)[master:backend2: 80+0]}, pendingCH=DefaultConsistentHash{ns=80, owners = (2)[master:backend2: 40+40, master:backend1: 40+40]}, unionCH=null, actualMembers=[master:backend2, master:backend1]}
[Server:backend2] 08:56:59,376 INFO [org.infinispan.CLUSTER] (remote-thread--p7-t2) ISPN000336: Finished cluster-wide rebalance for cache clustering-demo.war, topology id = 1
This seems to confirm that slave:backend3 is not detected.
On your slave host, change the default interface addresses so that they are visible to the master, e.g.:
<interfaces>
<interface name="management">
<inet-address value="${jboss.bind.address}"/>
</interface>
<interface name="public">
<inet-address value="${jboss.bind.address}"/>
</interface>
<interface name="private">
<inet-address value="${jboss.bind.address}"/>
</interface>
<interface name="unsecure">
<inet-address value="${jboss.bind.address}"/>
</interface>
</interfaces>
where jboss.bind.address is the real IP address of the slave host.
Do the same on the master host.
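For example, a hedged sketch of how the slave could be started so that ${jboss.bind.address} resolves to its real address (the IP addresses below are placeholders for your own hosts):
# on the slave host: replace the addresses with the slave's and the master's real IPs
./domain.sh --host-config=host-slave.xml -Djboss.bind.address=192.168.1.20 -Djboss.bind.address.management=192.168.1.20 -Djboss.domain.master.address=192.168.1.10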
I am having the same issue. My master and slave log output looks exactly the same.
I followed the video tutorial and the related project on GitHub: https://github.com/stuartwdouglas/modcluster-example
I have set all public IP addresses on the master and slave servers. The host controller registers the slaves, and Infinispan logs cluster-wide rebalance messages listing the two slaves. But load balancing does not work. Session replication also does not appear to work, despite the fact that Infinispan shows all slaves as members of the cluster.
If I repeat the steps described in the GitHub project above on a single machine (that is, all servers on the same IP but on different ports), everything works.
I also found this: http://wildfly9.blogspot.bg/2015/10/wildfly-9-reverse-proxy-config-with.html?m=1, where the author mentions adding the public IP addresses as default-host aliases. I did that too, and it is still not working.
Can someone point us in the right direction? I presume it is something small but essential that is missing from the documentation, and there is no adequate demo/tutorial on the net that demonstrates the setup when all the slaves and the load balancer reside on different hosts (different IP addresses).
I am trying to send an OMNeT++ ad hoc wireless UDP message to all nodes. My configuration file is:
<config>
<interface hosts='host*' address='192.168.0.x' netmask='255.255.255.x'/>
<interface hosts='*' address='192.x.x.x' netmask='255.255.255.x'/>
</config>
and in the ini file:
*.host*.app[0].destAddresses = "255.255.255.255"
but this is not working. destAddresses can be set as
*.host*.app[0].destAddresses=moduleListByNedType("inet.node.inet.AdhocHost")
but this still randomly chooses one host at a time. How can I send packets to all nodes/hosts?
Indeed, 255.255.255.255 is a broadcast address, but if you are using ad hoc routing, the various nodes are routers and the broadcast is not forwarded by them. If you insist on using UDP, you MUST modify the INET sources to support sending to multiple destinations instead of randomly choosing one. You can also use PingApp, which does behave as you expect (i.e. if you specify * as the destination address, it pings ALL node interfaces in the simulation).
If you need UDP, you should take a look at the PingApp sources and draw inspiration from them to modify UDPBasicApp.
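If you go the PingApp route, a hedged ini sketch could look like this (the numApps/typename/destAddr parameter names follow INET's app[] vector convention and may need adjusting for your INET version):
*.host*.numApps = 1
*.host*.app[0].typename = "PingApp"
*.host*.app[0].destAddr = "*"   # "*" pings every interface of every node in the simulation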
I have a single machine with a single IP address (192.168.1.3). I copied the domain directory as host1.
I changed domain.xml and host.xml to differentiate between the domain controller and the host controller. Now I have to run both the domain controller and the host controller on a single machine with a single IP address. How can I make this configuration? Could you suggest what other changes I have to make?
Download the EAP installer and extract it. Make two copies of the domain directory, node1 and node2. Then execute these commands:
cd $JBOSS_HOME
cp -r ./domain ./node1
cp -r ./domain ./node2
To start these domain instances, you just have to change the native port and the management port in host.xml.
<management-interfaces>
<native-interface security-realm="ManagementRealm">
<socket interface="management" port="${jboss.management.native.port:10999}"/>
</native-interface>
<http-interface security-realm="ManagementRealm" http-upgrade-enabled="true">
<socket interface="management" port="${jboss.management.http.port:10990}"/>
</http-interface>
</management-interfaces>
or you can specify them at runtime, like:
./bin/domain.sh -Djboss.domain.base.dir=./node1/ -Djboss.bind.address=192.168.1.3 -Djboss.bind.address.management=192.168.1.3 <REST_OF_PARAMATERS>
./bin/domain.sh -Djboss.domain.base.dir=./node2/ -Djboss.bind.address=192.168.1.3 -Djboss.bind.address.management=192.168.1.3 -Djboss.management.native.port=10999 -Djboss.management.http.port=10990 <REST_OF_PARAMATERS>
Also, you need to make sure that the servers defined in host.xml have different port offsets for the node1 domain and the node2 domain.
Otherwise you would get a
java.net.BindException: Address already in use
error.
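For instance, a hedged host.xml sketch for the node2 copy that offsets its server's socket bindings so they do not clash with node1 (the server name, group, and offset value are placeholders):
<servers>
    <server name="server-two" group="main-server-group" auto-start="true">
        <!-- shift every socket binding by 150 so node2 does not collide with node1 -->
        <socket-bindings port-offset="150"/>
    </server>
</servers>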
Question-1
I am trying to run an Infinispan cluster on OpenShift Tomcat gears, with nodes sitting on two different hosts. I am using TCP as the data transfer protocol and MPING as the discovery protocol.
If I try to use any of the JGroups-provided keywords for the bind address, such as GLOBAL, SITE_LOCAL, LINK_LOCAL, NON_LOOPBACK, match-interface, match-host, or match-address, it binds the service to the public IP (64.X.X.X); for LOOPBACK it binds it to 127.0.0.1. This is not what I want to achieve.
I want the JGroups service to run on the custom IP address provided by OpenShift, which looks something like 127.2.155.1. If I can bind it to that IP, it will be easy for me to write port forwarding rules so that the cluster members can discover each other even if they are on different hosts.
Using an environment property:
Map<String, String> envKeys = System.getenv();
for (String key : envKeys.keySet()) {
    System.out.println(key + ":" + envKeys.get(key));
    // expose the OpenShift-provided IP as a system property so that
    // ${OPENSHIFT_JBOSSEWS_IP} can be resolved in jgroups.xml
    if (key.equalsIgnoreCase("OPENSHIFT_JBOSSEWS_IP")) {
        System.setProperty("OPENSHIFT_JBOSSEWS_IP", envKeys.get(key));
    }
}
It fails while doing the above, saying that it could not find the IP address or that 127.2.155.1 is an invalid IP address. Please find below the sample jgroups.xml I am using in my project.
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.4.xsd">
<TCP
bind_addr="${OPENSHIFT_JBOSSEWS_IP}"
bind_port="${jgroups.tcp.port:7800}"
port_range="0"
recv_buf_size="20m"
send_buf_size="640k"
max_bundle_size="31k"
use_send_queues="true"
enable_diagnostics="false"
bundler_type="sender-sends-with-timer"
thread_naming_pattern="pl"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="30"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="100"
thread_pool.rejection_policy="Discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="2"
oob_thread_pool.max_threads="30"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="Discard"
internal_thread_pool.enabled="true"
internal_thread_pool.min_threads="2"
internal_thread_pool.max_threads="4"
internal_thread_pool.keep_alive_time="60000"
internal_thread_pool.queue_enabled="true"
internal_thread_pool.queue_max_size="100"
internal_thread_pool.rejection_policy="Discard"
/>
<!-- Ergonomics, new in JGroups 2.11, are disabled by default in TCPPING until JGRP-1253 is resolved -->
<!--
<TCPPING timeout="3000"
initial_hosts="localhost[7800],localhost[7801]"
port_range="5"
num_initial_members="3"
ergonomics="false"
/>
-->
<MPING bind_addr="{OPENSHIFT_JBOSSEWS_IP}"
break_on_coord_rsp="true"
mcast_addr="${jgroups.mping.mcast_addr:228.2.4.6}"
mcast_port="${jgroups.mping.mcast_port:43376}"
ip_ttl="${jgroups.udp.ip_ttl:2}"
num_initial_members="3"/>
<MERGE3/>
<FD_SOCK/>
<FD timeout="3000" max_tries="5"/>
<VERIFY_SUSPECT timeout="1500"/>
<pbcast.NAKACK2 use_mcast_xmit="false"
xmit_interval="1000"
xmit_table_num_rows="100"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"/>
<UNICAST3 xmit_interval="500"
xmit_table_num_rows="20"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"
conn_expiry_timeout="0"/>
<pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m"/>
<pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true"/>
<tom.TOA/> <!-- the TOA is only needed for total order transactions-->
<MFC max_credits="2m" min_threshold="0.40"/>
<FRAG2 frag_size="30k"/>
<RSVP timeout="60000" resend_interval="500" ack_on_delivery="false" />
</config>
Question-2
When Infinispan is successfully started, it runs two Java processes: one on port 7800, as mentioned in the above config file, and the other on a port number randomly picked by Infinispan. I would like to understand more about these processes.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 5640 1334 44u IPv4 1556890779 0t0 TCP 127.2.155.1:7800 (LISTEN)
java 5640 1334 44u IPv4 1556890779 0t0 TCP 127.2.155.1:20772 (LISTEN)
Question-1:
Setting bind_addr to match-address=127.2.* should pick the correct address (127.2.155.1).
Does 127.2.155.1 show when you do an ifconfig?
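If it does, a hedged sketch of what the TCP bind address would then look like in your jgroups.xml (every other TCP attribute stays as in your original config):
<!-- only bind_addr changes; keep the remaining TCP attributes from your existing config -->
<TCP bind_addr="match-address=127.2.*"
     bind_port="${jgroups.tcp.port:7800}" />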
Question-2:
The second port is probably opened by FD_SOCK. You can control this by setting the bind_addr and start_port attributes on FD_SOCK [1].
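A hedged example of pinning FD_SOCK to the same address range (the start_port value is an arbitrary placeholder):
<FD_SOCK bind_addr="match-address=127.2.*"
         start_port="9777"/>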
[1] http://www.jgroups.org/manual/index.html#CommonProps
I am trying to run a JBoss instance in domain mode. While doing that, I am getting the following issue:
[Host Controller] 12:45:56,535 WARN [org.jboss.as.host.controller] (Controller Boot Thread) JBAS010900: Could not connect to remote domain controller at remote://nnn.nn.nn.88:9999 -- java.net.ConnectException: JBAS012144: Could not connect to remote://nnn.nn.nn.88:9999. The connection timed out
I had run two JBoss instances in domain mode after configuring them:
First JBoss instance->
./domain.sh -b nnn.nn.nn.88 -Djboss.bind.address.management=nnn.nn.nn.88
Second JBoss Instance ->
./domain.sh -b nnn.nn.nn.89 -Djboss.domain.master.address=nnn.nn.nn.88 --host-config=host-slave.xml
nnn.nn.nn.88 host.xml configuration is as follows...
<domain-controller>
<local/>
</domain-controller>
nnn.nn.nn.89 host-slave.xml configuration is as follows...
<domain-controller>
<remote host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
I am able to telnet to port 9999 on host nnn.nn.nn.88 from .89, as I configured it by removing the loopback IP for the public and management ports. Or is this implied by <domain-controller> containing <local/>?
Please help me solve this issue. The JDK version is JDK 7 Update 80, and this is EAP 6.3.
In the HC's host.xml (or, if you use --host-config=host-slave.xml, in that particular XML), the <domain-controller> element has to point to the DC.
jboss.domain.master.address should be the domain controller address, nnn.nn.nn.88:
<domain-controller>
<remote host="${jboss.domain.master.address:nnn.nn.nn.88}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
As per the solution article from Red Hat:
https://access.redhat.com/solutions/218053#
I ran the following commands for the same configuration I had when posting this question, and they succeeded:
DC->
./domain.sh -b my-host-ip1 -bmanagement my-host-ip1
HC->
./domain.sh -Djboss.domain.master.address=my-host-ip1 -b my-host-ip2 -bmanagement my-host-ip2
Although, does this way of configuring give clustering capability to the DC and HCs? I raised the same question to Red Hat on that solution article; I hope the answer is yes.
https://access.redhat.com/solutions/218053#comment-975683
I am using apache-activemq-5.11.1, which is the stable version that runs on JDK 7 (class file major version 51.0); I am using JDK 7 Update 80. I got an error when I ran it on JDK 6:
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/activemq/console/Main : Unsupported major.minor version 51.0
Coming to my problem: I need two running instances of ActiveMQ on my system. I followed these steps to create the two instances:
C:\>cd \apache-activemq-5.11.1
C:\apache-activemq-5.11.1>.\bin\activemq create instance1
C:\apache-activemq-5.11.1>.\bin\activemq create instance2
I changed instance2 to a different set of port numbers, as below:
<!--EDITED: apache-activemq-5.11.1\instance2\conf\activemq.xml-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61716?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5772?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61713?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1983?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61714?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
Now I am starting instance1 and instance2 as follows:
C:\apache-activemq-5.11.1\instance1\bin>instance1 start
C:\apache-activemq-5.11.1\instance1\bin>instance2 start
Of these, the second instance that I try to start gives the following KahaDB lock problem:
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#7209d9af: startup date [Thu May 07 16:16:23 IST 2015]; root of context hierarchy
INFO | PListStore:[C:\apache-activemq-5.11.1\data\localhost\tmp_storage] started
INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[C:\apache-activemq-5.11.1\data\kahadb]
INFO | Database C:\apache-activemq-5.11.1\data\kahadb\lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: File 'C:\apache-activemq-5.11.1\data\kahadb\lock' could not be locked.
Please give a solution for this db lock issue.
Make a replica of your ActiveMQ installation, e.g. copy apache-activemq-x.xx.x to apache-activemq-x.xx.x_2.
Change the ports in apache-activemq-x.xx.x_2\conf\activemq.xml. Make sure the port numbers you change to do not clash with anything already in use.
<!--EDIT: apache-activemq-5.11.1_2\conf\activemq.xml-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61716?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5772?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61713?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1983?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61714?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
Along with the port changes, you also have to change the HTTP management console port in jetty.xml:
<bean id="jettyPort" class="org.apache.activemq.web.WebConsolePort" init-method="start">
<!-- the default port number for the web console -->
<property name="host" value="0.0.0.0"/>
<property name="port" value="8162"/>
</bean>
In this way you can run two ActiveMQ services on one system.
Running multiple instances created under one ActiveMQ installation gives you a failover option.
When one instance goes down for some reason, the other instance comes up automatically, because the KahaDB lock held by the first instance is released.
For this, the ports are not to be changed, since we're configuring the instances for failover mode.
C:\>cd \apache-activemq-5.11.1
C:\apache-activemq-5.11.1>.\bin\activemq create instance1
C:\apache-activemq-5.11.1>.\bin\activemq create instance2
Start the instances without changing any of the configuration, so that when instance1 goes down for any reason, instance2 comes up.
C:\apache-activemq-5.11.1\instance1\bin>instance1 start
C:\apache-activemq-5.11.1\instance2\bin>instance2 start
I believe this is the purpose of creating multiple instances under ActiveMQ. Beyond this, some further configuration tweaks are also available for KahaDB.
I followed the instructions from:
Running Multiple ActiveMQ Instances on One Machine (Dzone)
It works fine on Mac (not tested on Linux).
Note: the instances must be started with instanceNumber start (the console argument/parameter is no longer valid).
I had the same problem with the KahaDB lock, but only on Windows, with ActiveMQ versions 5.13.3 and 5.14.5.
The same author from DZone wrote practically the same post on his blog:
Running multiple ActiveMQ instances on one machine (Blog)
But there is an important update.
You must open each instanceNumber.bat file in each instance's bin directory and add these two lines:
set ACTIVEMQ_CONF="ACTIVEMQ_HOME/instanceNumber/conf"
set ACTIVEMQ_DATA="ACTIVEMQ_HOME/instanceNumber/data"
where ACTIVEMQ_HOME represents the path to your ActiveMQ installation and instanceNumber is the instance being edited, such as instanceA or instanceB.
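For instance, assuming the installation lives at C:\apache-activemq-5.13.3 and you are editing instanceA\bin\instanceA.bat, the two lines might look like this (the paths are placeholders for your own layout):
rem point this instance at its own conf and data directories
set ACTIVEMQ_CONF=C:\apache-activemq-5.13.3\instanceA\conf
set ACTIVEMQ_DATA=C:\apache-activemq-5.13.3\instanceA\data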
What is happening is that you have changed the port numbers correctly, but both of the instances you created use the same database (in this case the file-system-based KahaDB) to store their messages.
So when one instance is up and running, it holds the lock on that database, and the other ActiveMQ instance waits to acquire the lock on it.
Essentially, this becomes a master/slave configuration.
Look at this line in activemq.xml:
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
This points to the same location for both instances.
My solution is to copy the entire apache-activemq-x.xx.x folder to a different location, change the port numbers for the second instance, and run them separately.
That way you will have two instances of ActiveMQ running on the same machine.
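Alternatively, if you would rather keep a single installation, a hedged option (not part of the original answer) is to give each instance its own KahaDB directory in its activemq.xml, for example:
<!-- instance2/conf/activemq.xml: a per-instance KahaDB directory avoids the shared lock -->
<persistenceAdapter>
    <kahaDB directory="${activemq.data}/instance2-kahadb"/>
</persistenceAdapter>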
hope this helps!
Good luck!
Although the KahaDB lock restricts a load-balancing/fault-tolerant configuration, we can use the following kind of connection URL to make use of both ActiveMQ brokers:
failover://(tcp://192.nnn.nn.nn:61616,tcp://192.nnn.nn.nn:61616)?randomize=false
randomize=true makes messages shuffle between the two ActiveMQ brokers in active mode, rather than only failing over between them.
The complete reference for this can be found at the following Apache site:
http://activemq.apache.org/failover-transport-reference.html
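For illustration only (not part of the original answer), a minimal JMS client sketch that connects through such a failover URL; the broker IPs and ports are placeholders:
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClientSketch {
    public static void main(String[] args) throws Exception {
        // randomize=false keeps the client on the first reachable broker
        // instead of spreading connections across both.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://192.168.1.10:61616,tcp://192.168.1.11:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start(); // the transport reconnects automatically if the current broker goes down
        System.out.println("Connected via failover transport");
        connection.close();
    }
}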
Still, a high-availability (i.e. cluster) configuration makes things more stable for your app, although Apache needs to keep advancing ActiveMQ high availability for things to work even more smoothly.
The currently available Apache ActiveMQ high-availability configurations are described at the following link:
http://activemq.apache.org/clustering.html
Although KahaDB has the file lock restriction, the following alternative configurations can be used:
1) Shared File System Master Slave - a shared file system such as a SAN
http://activemq.apache.org/shared-file-system-master-slave.html
2) JDBC Master Slave - a shared database
http://activemq.apache.org/jdbc-master-slave.html
3) Replicated LevelDB Store - a ZooKeeper server
http://activemq.apache.org/replicated-leveldb-store.html
Over & above by having JCA connectors,- AS like JBoss, Weblogic, Websphere, Geronimo, Glassfish,- ActimeMQ patching as a kind of Resource Adapter can be done. And with Apache Camel (karaf), JBoss Fuse ESB kind of products HA & clustering of ActiveMQ can be done.