Infinispan, cross-dc replication: java.lang.IllegalStateException: Site <sitename> not defined in all the cluster members - replication

I'm trying to set up cross-datacenter replication between two Infinispan 9.4.x instances as per the Keycloak documentation, but I'm trying to do this in a slightly modified environment:
multicast doesn't work between the DCs, for obvious reasons
I have to use port 7601 because 7600 is already used on this host by the Keycloak JGroups transport (yup, by its internal Infinispan, and my future question would be "why do I need to use an extra external Infinispan instance instead of setting up replication between Keycloak's internal Infinispans?", but first things first).
These are the parts of my config that I added/modified:
[...]
<replicated-cache-configuration name="sessions-cfg" mode="SYNC" start="EAGER" batching="false">
    <backups>
        <backup site="site2" failure-policy="FAIL" strategy="SYNC" enabled="true">
            <take-offline after-failures="3" min-wait="60000"/>
        </backup>
    </backups>
    <locking acquire-timeout="0"/>
</replicated-cache-configuration>
[...]
<subsystem xmlns="urn:infinispan:server:jgroups:9.4">
    <channels default="infinicluster">
        <channel name="infinicluster" stack="tcp"/>
        <channel name="xsite" stack="tcp"/>
    </channels>
    <stacks default="${jboss.default.jgroups.stack:udp}">
        <stack name="udp">
            <transport type="UDP" socket-binding="jgroups-udp"/>
            <protocol type="PING"/>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
            <protocol type="FD_ALL"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="UFC_NB"/>
            <protocol type="MFC_NB"/>
            <protocol type="FRAG3"/>
            <relay site="site1">
                <remote-site name="site2" channel="xsite"/>
            </relay>
        </stack>
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <protocol type="TCPPING">
                <property name="initial_hosts">host1.tld[7601],host2.tld[7601]</property>
                <property name="ergonomics">false</property>
            </protocol>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
            <protocol type="FD_ALL"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2">
                <property name="use_mcast_xmit">false</property>
[...]
Of course, I changed the port numbers in JGroups socket bindings accordingly.
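For reference, this is roughly what the adjusted bindings might look like (a sketch, assuming the standard socket-binding-group layout; the 57601 value for the FD_SOCK binding is an assumption mirroring the default 57600):
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    [...]
    <!-- assumption: ports shifted from the defaults 7600/57600 to avoid the clash with Keycloak's own JGroups transport -->
    <socket-binding name="jgroups-tcp" port="7601"/>
    <socket-binding name="jgroups-tcp-fd" port="57601"/>
    [...]
</socket-binding-group>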
Both instances seem to start okay (complaining only about REST HTTPS bindings, which seems to be a minor error), and I can even see the communication between the instances in the logs:
2020-05-06 23:28:54,713 INFO [org.infinispan.CLUSTER] (remote-thread--p2-t20) [Context=___hotRodTopologyCache] ISPN100002: Starting rebalance with members [host1.tld, host2.tld], phase READ_OLD_WRITE_ALL, topology id 2
2020-05-06 23:28:54,779 INFO [org.infinispan.CLUSTER] (remote-thread--p2-t2) [Context=___hotRodTopologyCache] ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2020-05-06 23:28:54,807 INFO [org.infinispan.CLUSTER] (remote-thread--p2-t21) [Context=___hotRodTopologyCache] ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2020-05-06 23:28:54,834 INFO [org.infinispan.CLUSTER] (remote-thread--p2-t4) [Context=___hotRodTopologyCache] ISPN100010: Finished rebalance with members [host1.tld, host2.tld], topology id 5
The main issue is that as soon as I open the web management page of either instance, I get this error in the logs (suppose I open the management page of site1, host1.tld):
2020-05-06 23:30:49,057 ERROR [org.jboss.as.controller.management-operation] (External Management Request Threads -- 1) WFLYCTL0013: Operation ("read-attribute") failed - address: ([
("subsystem" => "datagrid-infinispan"),
("cache-container" => "clustered")]): java.lang.IllegalStateException: Site host2.tld not defined in all the cluster members
at org.infinispan.xsite.XSiteAdminOperations.clusterStatus(XSiteAdminOperations.java:78)
at org.infinispan.xsite.GlobalXSiteAdminOperations.globalStatus(GlobalXSiteAdminOperations.java:93)
at org.jboss.as.clustering.infinispan.subsystem.CacheContainerMetricsHandler.filterSitesByStatus(CacheContainerMetricsHandler.java:343)
at org.jboss.as.clustering.infinispan.subsystem.CacheContainerMetricsHandler.executeRuntimeStep(CacheContainerMetricsHandler.java:297)
at org.jboss.as.controller.AbstractRuntimeOnlyHandler$1.execute(AbstractRuntimeOnlyHandler.java:59)
at org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:999)
at org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:743)
at org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:467)
at org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1411)
at org.jboss.as.controller.ModelControllerImpl.internalExecute(ModelControllerImpl.java:423)
at org.jboss.as.controller.ModelControllerImpl.lambda$execute$1(ModelControllerImpl.java:243)
at org.wildfly.security.auth.server.SecurityIdentity.runAs(SecurityIdentity.java:265)
at org.wildfly.security.auth.server.SecurityIdentity.runAs(SecurityIdentity.java:231)
at org.jboss.as.controller.ModelControllerImpl.execute(ModelControllerImpl.java:243)
at org.jboss.as.domain.http.server.DomainApiHandler.handleRequest(DomainApiHandler.java:212)
at io.undertow.server.handlers.encoding.EncodingHandler.handleRequest(EncodingHandler.java:72)
at org.jboss.as.domain.http.server.DomainApiCheckHandler.handleRequest(DomainApiCheckHandler.java:93)
at org.jboss.as.domain.http.server.security.ElytronIdentityHandler.lambda$handleRequest$0(ElytronIdentityHandler.java:62)
at org.wildfly.security.auth.server.SecurityIdentity.runAs(SecurityIdentity.java:289)
at org.wildfly.security.auth.server.SecurityIdentity.runAs(SecurityIdentity.java:246)
at org.jboss.as.controller.AccessAuditContext.doAs(AccessAuditContext.java:254)
at org.jboss.as.controller.AccessAuditContext.doAs(AccessAuditContext.java:225)
at org.jboss.as.domain.http.server.security.ElytronIdentityHandler.handleRequest(ElytronIdentityHandler.java:61)
at io.undertow.server.handlers.BlockingHandler.handleRequest(BlockingHandler.java:56)
at io.undertow.server.Connectors.executeRootHandler(Connectors.java:360)
at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:830)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1349)
at java.lang.Thread.run(Thread.java:748)
at org.jboss.threads.JBossThread.run(JBossThread.java:485)
If I open the web management page from the other site, the error is mirrored - this time it complains about host1.tld. It's obvious that I did something wrong, but I have no idea what exactly. I'd be glad if someone could help me.
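For completeness, the relay configuration on the site2 servers would be expected to mirror the one above (a sketch, assuming symmetric naming; the site names here and in the backup site="..." elements have to agree across both clusters):
<relay site="site2">
    <remote-site name="site1" channel="xsite"/>
</relay>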

Related

RabbitMQ: set queue parameters while connecting through Camel

I am trying to connect to RabbitMQ queues present on the server using an Apache Camel configuration.
It works fine when I create the queues with the durable field false and the auto-delete field true, but it doesn't work when either of them is otherwise.
My applicationContext.xml file looks like this:
<bean id="customConnectionFactory" class="com.rabbitmq.client.ConnectionFactory">
    <property name="host" value="localhost" />
    <property name="port" value="5672" />
    <property name="username" value="guest" />
    <property name="password" value="guest" />
    <property name="virtualHost" value="Test" />
</bean>
<bean id="testBean" class="test.TestBean" />
<camelContext id="camel" xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from
            uri="rabbitmq://localhost:5672/ex1?connectionFactory=#customConnectionFactory&amp;queue=Q1&amp;autoDelete=true&amp;durable=true" />
        <to uri="bean:testBean?method=hello" /> <!-- This method consumes and prints the message -->
    </route>
</camelContext>
Here I need to specify the autoDelete and durable properties for the queue Q1, not the exchange ex1 (I have already specified them for the exchange in the URI).
The error is:
Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - inequivalent arg 'auto_delete' for queue 'Q1' in vhost 'Test': received 'true' but current is 'false', class-id=50, method-id=10)
Here reply-code=406 indicates that the parameters of the queues/exchanges do not match the actual configuration; it's because of the queue properties here.
As I don't have access to the remote queues, I cannot change their properties. (The example I stated here uses localhost.)
Note: I have a requirement to do this using Spring beans only.
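One possible direction, sketched below under the assumption that the camel-rabbitmq version in use supports the declare endpoint option (present in later Camel 2.x releases): turn off Camel's own declaration so that the queue, pre-declared on the broker with its real durable/auto-delete settings, is consumed as-is. The queue and its binding must then already exist on the broker:
<route>
    <!-- sketch, not verified against this Camel version: declare=false keeps Camel from
         (re)declaring the exchange/queue with mismatching arguments, avoiding the 406 error -->
    <from uri="rabbitmq://localhost:5672/ex1?connectionFactory=#customConnectionFactory&amp;queue=Q1&amp;declare=false" />
    <to uri="bean:testBean?method=hello" />
</route>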

How to set JGroups UDP to unicast instead of the default multicast in standalone-ha.xml

JGroups uses "IP multicasting by default to send messages to all members (UDP) and for discovery of the initial members. However, if multicasting cannot be used, UDP can be configured to send multiple unicast messages instead of one multicast message. To configure UDP to use multiple unicast messages to send a group message instead of using IP multicasting, the ip_mcast property has to be set to false." (as per the JBoss documentation, https://developer.jboss.org/)
My question is: how can I set the "ip_mcast" value to false in WildFly? Below is the sample JGroups subsystem from standalone-ha.xml. In the XSD I don't see a way to pass this value. Please help!
<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
    <channels default="ee">
        <channel name="ee" stack="udpgossip"/>
    </channels>
    <stacks>
        <stack name="udpgossip">
            <transport type="UDP" socket-binding="jgroups-tcp"/>
            <protocol type="TCPGOSSIP">
                <property name="initial_hosts">172.17.0.2[12001]</property>
            </protocol>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
            <protocol type="FD"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="MFC"/>
            <protocol type="FRAG2"/>
        </stack>
    </stacks>
</subsystem>
In the schema, <transport/> extends <protocol/>, and protocols can have properties, as your config sample already shows. So the correct way to set it should be:
<transport type="UDP" socket-binding="jgroups-tcp">
    <property name="ip_mcast">false</property>
</transport>

WildFly 10 JGroups always binding to localhost interface

Hi, I'm trying to develop a clustered application that uses Infinispan for caching. First I tried to run in replicated mode by starting two instances of WildFly using localhost as the binding interface (with port offsets). This worked fine. But once I start the server using the interface IP, the cluster does not form. I can still access other services using the interface IP.
I tried to telnet to the JGroups port using the interface IP address and it failed, but telnetting to localhost works for the JGroups port.
(I then entered localhost[port] entries into the initial hosts configuration element in TCPPING, and cluster formation worked.)
So my question is: why does it bind to localhost even after starting WildFly using the interface IP?
Here is my configuration. (I can't use UDP, therefore I need to use TCPPING for cluster formation.)
I started the WildFly server using:
standalone.bat -Djboss.server.base.dir=../standalone_isuru -c standalone-full-ha.xml -b 192.168.17.33 -Djboss.node.name=isuru -Djboss.socket.binding.port-offset=1
JGroups configuration:
<subsystem xmlns="urn:jboss:domain:jgroups:4.0">
    <channels default="ee">
        <channel name="ee" stack="tcpping"/>
    </channels>
    <stacks>
        <stack name="udp">
            .
            .
        </stack>
        <stack name="tcp">
            .
            .
        </stack>
        <stack name="tcpping">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <protocol type="TCPPING">
                <property name="initial_hosts">192.168.17.33[7601], 192.168.14.39[7700], 192.168.17.33[7800]</property>
                <property name="num_initial_members">2</property>
                <property name="port_range">5</property>
                <property name="timeout">1000</property>
            </protocol>
            <protocol type="MERGE3"/>
            <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
            <protocol type="FD"/>
            <protocol type="VERIFY_SUSPECT"/>
            <protocol type="pbcast.NAKACK2"/>
            <protocol type="UNICAST3"/>
            <protocol type="pbcast.STABLE"/>
            <protocol type="pbcast.GMS"/>
            <protocol type="MFC"/>
            <protocol type="FRAG2"/>
        </stack>
    </stacks>
Infinispan cache config:
<cache-container name="replicated_cache" default-cache="default" module="org.wildfly.clustering.server" jndi-name="infinispan/replicated_cache">
    <transport lock-timeout="60000"/>
    <replicated-cache name="customer" jndi-name="infinispan/replicated_cache/customer" mode="SYNC">
        <transaction locking="OPTIMISTIC" mode="FULL_XA"/>
        <eviction strategy="NONE"/>
    </replicated-cache>
</cache-container>
I posted the same question on JBoss Developer since I didn't get any answer here, and this is the answer I got from there:
By default JGroups binds to the private interface. When starting the server this IP can be provided as well:
standalone.bat -b 192.168.17.39 -bprivate=192.168.17.39
You can refer to the interfaces section for interface configuration:
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address:127.0.0.1}"/>
    </interface>
    <interface name="private">
        <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
    </interface>
    <interface name="unsecure">
        <inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/>
    </interface>
</interfaces>
The socket bindings bind JGroups to the private interface:
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    .
    <socket-binding name="jgroups-tcp" interface="private" port="7600"/>
    .
</socket-binding-group>
JGroups subsystem:
<stack name="tcpping">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    .
</stack>

Configuring Consumer Cancellation in RabbitMQ

We are using a 2-node active-active RabbitMQ cluster with mirrored queues, with the mirroring policy being:
"policies":[{"vhost":"/","name":"ha-all","pattern":"","apply-to":"all","definition":{"ha-mode":"all","ha-sync-mode":"automatic"},"priority":0}]
Versions: RabbitMQ 3.5.4, Erlang 17.4, spring-amqp/spring-rabbit 1.4.5.RELEASE
Now we are trying to achieve consumer cancellation, as mentioned in Highly Available Queues.
However, since we have not used a channel, we can't use the basicConsumer method as given in the above link.
How do I set "x-cancel-on-ha-failover" to true in the configuration itself?
The beans XML is as follows:
<rabbit:connection-factory id="connectionFactory"
    addresses="localhost:5672"
    username="guest"
    password="guest"
    channel-cache-size="5" />
<!-- CREATE THE JsonMessageConverter BEAN -->
<bean id="jsonMessageConverter" class="org.springframework.amqp.support.converter.JsonMessageConverter" />
<!-- Spring AMQP Template -->
<rabbit:template id="amqpTemplate" connection-factory="connectionFactory" retry-template="retryTemplate" message-converter="jsonMessageConverter" />
<!-- in case connection is broken then Retry based on the below policy -->
<bean id="retryTemplate" class="org.springframework.retry.support.RetryTemplate">
    <property name="backOffPolicy">
        <bean class="org.springframework.retry.backoff.ExponentialBackOffPolicy">
            <property name="initialInterval" value="500" />
            <property name="multiplier" value="2" />
            <property name="maxInterval" value="30000" />
        </bean>
    </property>
</bean>
<rabbit:queue name="testQueue" durable="true">
    <rabbit:queue-arguments>
        <entry key="x-max-priority">
            <value type="java.lang.Integer">10</value>
        </entry>
    </rabbit:queue-arguments>
</rabbit:queue>
<bean id="messsageConsumer" class="consumer.RabbitConsumer" />
<rabbit:listener-container
    connection-factory="connectionFactory" concurrency="5" max-concurrency="5" message-converter="jsonMessageConverter">
    <rabbit:listener queues="testQueue" ref="messsageConsumer" />
</rabbit:listener-container>
The <rabbit:listener-container> actually populates a SimpleMessageListenerContainer bean in the background, and the latter supports public void setConsumerArguments(Map<String, Object> args) for exactly this purpose.
So, to meet your requirements you just need to build a raw SimpleMessageListenerContainer <bean> for your messsageConsumer, as sketched below.
While you are fixing that in your application, I'd ask you to raise a JIRA regarding adding a <consumer-arguments> component; we may be able to address it within the current GA deadline.
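A minimal sketch of such a bean definition (reusing the connectionFactory, jsonMessageConverter and messsageConsumer beans from the question; the listenerContainer id is hypothetical):
<bean id="listenerContainer" class="org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="queueNames" value="testQueue"/>
    <property name="concurrentConsumers" value="5"/>
    <!-- the argument in question; it is passed to basicConsume for every consumer -->
    <property name="consumerArguments">
        <map>
            <entry key="x-cancel-on-ha-failover">
                <value type="java.lang.Boolean">true</value>
            </entry>
        </map>
    </property>
    <property name="messageListener">
        <bean class="org.springframework.amqp.rabbit.listener.adapter.MessageListenerAdapter">
            <property name="delegate" ref="messsageConsumer"/>
            <property name="messageConverter" ref="jsonMessageConverter"/>
        </bean>
    </property>
</bean>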

ActiveMQ as a local JNDI Tomcat resource

I'm trying to set up ActiveMQ as a Tomcat resource with local JNDI, but when I add the config file to the broker URI ("brokerConfig=xbean:activemq.xml") the broker doesn't start up, without any error message.
It just keeps telling me:
Mrz 30, 2012 10:23:19 AM org.springframework.jms.listener.DefaultMessageListenerContainer refreshConnectionUntilSuccessful
Warnung: Could not refresh JMS Connection for destination 'FOO.QUEUE' - retrying in 5000 ms. Cause: Could not create Transport. Reason: java.io.IOException: Could not load xbean factory:java.lang.NoClassDefFoundError: Could not initialize class org.apache.activemq.xbean.XBeanBrokerFactory
I used the default config from http://svn.apache.org/repos/asf/activemq/trunk/assembly/src/release/conf/activemq.xml, placed in the root of my src folder.
I'm using "activemq-all_5.4.3.jar".
My web.xml in "WebContent\META-INF":
<resource-ref>
    <description>JMS Connection</description>
    <res-ref-name>jms/ConnectionFactory</res-ref-name>
    <res-type>org.apache.activemq.ActiveMQConnectionFactory</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>
<resource-ref>
    <res-ref-name>jms/FooQueue</res-ref-name>
    <res-type>javax.jms.Queue</res-type>
    <res-auth>Container</res-auth>
</resource-ref>
My applicationContext.xml in "WebContent\WEB-INF":
<jee:jndi-lookup id="fooQueue"
    jndi-name="java:comp/env/jms/FooQueue"
    cache="true"
    resource-ref="true"
    lookup-on-startup="true"
    expected-type="org.apache.activemq.command.ActiveMQQueue"
    proxy-interface="javax.jms.Queue" />
<bean id="singleConnectionFactory"
    class="org.springframework.jms.connection.SingleConnectionFactory"
    p:targetConnectionFactory-ref="connectionFactory"/>
<bean id="jmsTemplate"
    class="org.springframework.jms.core.JmsTemplate"
    p:connectionFactory-ref="singleConnectionFactory"
    p:defaultDestination-ref="fooQueue"/>
<bean id="messageSenderService"
    class="by2.server.JmsMessageSenderService"
    p:jmsTemplate-ref="jmsTemplate" />
<bean id="jmsMessageDelegate"
    class="by2.server.JmsMessageDelegate" />
<bean id="myMessageListener"
    class="org.springframework.jms.listener.adapter.MessageListenerAdapter"
    p:delegate-ref="jmsMessageDelegate"
    p:defaultListenerMethod="handleMessage" />
<jms:listener-container
    container-type="default"
    connection-factory="singleConnectionFactory"
    acknowledge="auto">
    <jms:listener destination="FOO.QUEUE" ref="myMessageListener" />
</jms:listener-container>
My context.xml in "WebContent\META-INF":
<Context reloadable="true">
    <Resource auth="Container" name="jms/ConnectionFactory"
        type="org.apache.activemq.ActiveMQConnectionFactory" description="JMS Connection Factory"
        factory="org.apache.activemq.jndi.JNDIReferenceFactory" brokerURL="vm://localhost?brokerConfig=xbean:activemq.xml"
        brokerName="FooBroker" />
    <Resource auth="Container" name="jms/FooQueue"
        type="org.apache.activemq.command.ActiveMQQueue" description="JMS queue"
        factory="org.apache.activemq.jndi.JNDIReferenceFactory" physicalName="FOO.QUEUE" />
</Context>
To me it looks like a classpath error.
Do you have the xbean-spring-x.x.jar in your classpath?
If not, copy this file as well from the ActiveMQ distribution and put it in your app server's classpath.
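Once the jar is in place, a related pitfall is the lookup of activemq.xml itself: with brokerConfig=xbean:activemq.xml the file must be visible on the broker's classpath. A hedged alternative, assuming your ActiveMQ version accepts file: URLs in the xbean URI (the absolute path below is a hypothetical placeholder):
<Resource auth="Container" name="jms/ConnectionFactory"
    type="org.apache.activemq.ActiveMQConnectionFactory" description="JMS Connection Factory"
    factory="org.apache.activemq.jndi.JNDIReferenceFactory"
    brokerURL="vm://localhost?brokerConfig=xbean:file:/opt/tomcat/conf/activemq.xml"
    brokerName="FooBroker" />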