ActiveMQ shutdown fails and then kills process - activemq

I am implementing a replicated LevelDB ActiveMQ setup. I have 3 instances of ActiveMQ running on the same box. I am changing their RMI port, AMQP port, and OpenWire port in the config file.
The config looks like this:
<?xml version="1.0" encoding="UTF-8"?><beans xmlns="http://www.springframework.org/schema/beans" xmlns:amq="http://activemq.apache.org/schema/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="activemq_8200" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry producerFlowControl="false" topic=">">
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<policyEntry producerFlowControl="false" queue=">">
<deadLetterStrategy>
<!--
Use the prefix 'DLQ.' for the destination name, and make the DLQ a queue rather than a topic
-->
<individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true"/>
</deadLetterStrategy>
<!-- Use VM cursor for better latency
For more information, see:
http://activemq.apache.org/message-cursors.html
<pendingQueuePolicy>
<vmQueueCursor/>
</pendingQueuePolicy>
-->
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
<replicatedLevelDB bind="tcp://0.0.0.0:0" directory="${activemq.data}/leveldb" replicas="3" zkAddress="gwxdev05.northamerica.cerner.net:2181,gwxdev05.northamerica.cerner.net:2182,gwxdev05.northamerica.cerner.net:2183" zkPassword="password" zkPath="/opt/gwx/activemqdata"/>
</persistenceAdapter>
<systemUsage>
<systemUsage sendFailIfNoSpace="true">
<memoryUsage>
<memoryUsage limit="256 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="1 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="128 mb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:${openwirePort}?maximumConnections=1000&wireformat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:${amqpPort}?maximumConnections=1000&wireformat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook"/>
</shutdownHooks>
</broker>
<import resource="jetty.xml"/>
</beans>
My instance file looks like this:
ACTIVEMQ_BASE=`cd "$ACTIVEMQ_BASE" && pwd`
## Add system properties for this instance here (if needed), e.g
#export ACTIVEMQ_OPTS_MEMORY="-Xms256M -Xmx1G"
#export ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS_MEMORY
-Dorg.apache.activemq.UseDedicatedTaskRunner=true
-Djava.util.logging.config.file=logging.properties"
export ACTIVEMQ_SUNJMX_CONTROL="-Dactivemq.jmx.url=service:jmx:rmi:///jndi/rmi://127.0.0.1:8100/jmxrmi"
#
ACTIVEMQ_SUNJMX_START="-Dcom.sun.management.jmxremote.port=8100 "
ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.authenticate=false"
ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.ssl=false"
export ACTIVEMQ_SUNJMX_START=$ACTIVEMQ_SUNJMX_START
export ACTIVEMQ_HOME=/opt/gwx/apache-activemq-5.10-SNAPSHOT
export ACTIVEMQ_BASE=$ACTIVEMQ_BASE
export JAVA_HOME=/opt/gwx/apache-activemq-5.10-SNAPSHOT/jdk1.7.0_25
${ACTIVEMQ_HOME}/bin/activemq "$@"
Here is the exception I get:
Connecting to pid: 2410
INFO: failed to resolve jmxUrl for pid:2410, using default JMX url Connecting to JMX URL: service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
ERROR: java.lang.RuntimeException: Failed to execute stop task. Reason: java.io.IOException: Failed to retrieve RMIServer stub:
javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused] java.lang.RuntimeException: Failed to execute stop task. Reason: java.io.IOException: Failed to retrieve RMIServer stub:
javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused]
at
I checked the firewall; it is not the issue.
Any idea what might be causing this?
ActiveMQ version: 5.10 snapshot
Java: 1.7
OS: Linux 6.4

If it still helps someone:
When I had this in bin/env:
export ACTIVEMQ_SUNJMX_CONTROL="-Dactivemq.jmx.url=service:jmx:rmi:///jndi/rmi://127.0.0.1:8100/jmxrmi"
#
ACTIVEMQ_SUNJMX_START="-Dcom.sun.management.jmxremote.port=8100 "
ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.authenticate=false"
ACTIVEMQ_SUNJMX_START="$ACTIVEMQ_SUNJMX_START -Dcom.sun.management.jmxremote.ssl=false"
it did not work; the process failed without even logging anything.
I don't know why, but this method works (for AMQ 5.13): add this to the XML:
<managementContext>
<managementContext connectorPort="1099"/>
</managementContext>
and nothing like ACTIVEMQ_SUNJMX_START in the env file.
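For context, here is a rough sketch of what the stop command effectively does over JMX. This is only an illustration, assuming the connector is exposed on 127.0.0.1:1099 as in the <managementContext> above, and that the broker MBean name follows the ActiveMQ 5.8+ pattern with the brokerName="activemq_8200" from the question's config:
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
public class BrokerStopViaJmx
{
    public static void main(String[] args) throws Exception
    {
        // Port 1099 matches the connectorPort configured in <managementContext> above.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // MBean name pattern for ActiveMQ 5.8+; brokerName matches the question's config.
            ObjectName broker = new ObjectName("org.apache.activemq:type=Broker,brokerName=activemq_8200");
            mbsc.invoke(broker, "stop", null, null);
        }
        finally
        {
            connector.close();
        }
    }
}
If this kind of connection cannot be made from the box, the shutdown script falls back to killing the process, which matches the behaviour in the question.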

Related

Apache Ignite Structured Logging

I am looking to enable structured logging for Ignite.
Ignite runs inside a Docker container.
I enabled the log4j2 module and added a log4j2 configuration file that tries to use <JsonTemplateLayout.../> as described here, but in the logs I get the message:
Console contains an invalid element or attribute "JsonTemplateLayout"
This is probably caused by not having the log4j-layout-template-json dependency available inside Ignite. Is there a way to add the dependency to Ignite, or is there another option for getting structured logging working?
Ignite configuration:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd">
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
...
<property name="gridLogger">
<bean class="org.apache.ignite.logger.log4j2.Log4J2Logger">
<constructor-arg type="java.lang.String" value="config/ignite-log4j2-custom.xml"/>
</bean>
</property>
</bean>
</beans>
log4j2 configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration monitorInterval="60" status="debug">
<Appenders>
<Console name="CONSOLE" target="SYSTEM_OUT">
<!-- <PatternLayout pattern="[%d{ISO8601}][%-5p][%t][%c{1}]%notEmpty{[%markerSimpleName]} %m%n"/> -->
<ThresholdFilter level="ERROR" onMatch="DENY" onMismatch="ACCEPT"/>
<JsonTemplateLayout eventTemplateUri="classpath:EcsLayout.json"/>
</Console>
<Console name="CONSOLE_ERR" target="SYSTEM_ERR">
<!-- <PatternLayout pattern="[%d{ISO8601}][%-5p][%t][%c{1}]%notEmpty{[%markerSimpleName]} %m%n"/> -->
<JsonTemplateLayout eventTemplateUri="classpath:EcsLayout.json"/>
</Console>
<File name="CONSISTENCY" fileName="${sys:IGNITE_HOME}/work/log/consistency.log">
<PatternLayout>
<Pattern>"[%d{ISO8601}][%-5p][%t][%c{1}] %m%n"</Pattern>
</PatternLayout>
</File>
<Routing name="FILE">
<Routes pattern="$${sys:nodeId}">
<Route>
<RollingFile name="Rolling-${sys:nodeId}" fileName="${sys:IGNITE_HOME}/work/log/${sys:appId}-${sys:nodeId}.log"
filePattern="${sys:IGNITE_HOME}/work/log/${sys:appId}-${sys:nodeId}-%i-%d{yyyy-MM-dd}.log.gz">
<PatternLayout pattern="[%d{ISO8601}][%-5p][%t][%c{1}]%notEmpty{[%markerSimpleName]} %m%n"/>
<Policies>
<TimeBasedTriggeringPolicy interval="6" modulate="true" />
<SizeBasedTriggeringPolicy size="10 MB" />
</Policies>
</RollingFile>
</Route>
</Routes>
</Routing>
</Appenders>
<Loggers>
<!-- <Logger name="org.apache.ignite" level="INFO"/> -->
<!--
Uncomment to disable courtesy notices, such as SPI configuration
consistency warnings.
-->
<!--
<Logger name="org.apache.ignite.CourtesyConfigNotice" level=OFF/>
-->
<Logger name="org.springframework" level="WARN"/>
<Logger name="org.eclipse.jetty" level="WARN"/>
<Logger name="org.apache.ignite.internal.visor.consistency" additivity="false" level="INFO">
<AppenderRef ref="CONSISTENCY"/>
</Logger>
<!--
Avoid warnings about failed bind attempt when multiple nodes running on the same host.
-->
<Logger name="org.eclipse.jetty.util.log" level="ERROR"/>
<Logger name="org.eclipse.jetty.util.component" level="ERROR"/>
<Logger name="com.amazonaws" level="WARN"/>
<Root level="INFO">
<!-- Uncomment to enable logging to console. -->
<AppenderRef ref="CONSOLE" level="INFO"/>
<AppenderRef ref="CONSOLE_ERR" level="ERROR"/>
<AppenderRef ref="FILE" level="DEBUG"/>
</Root>
</Loggers>
</Configuration>
When adding the JAR to libs (as suggested by Stanislav below) I get a step further, but I also get an error (I'm not a Java person, so any hint is highly appreciated):
main ERROR An exception occurred processing Appender CONSOLE org.apache.logging.log4j.core.appender.AppenderLoggingException: java.lang.IllegalAccessError: class org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder tried to access method 'void org.apache.logging.log4j.core.layout.TextEncoderHelper.encodeText(java.nio.charset.CharsetEncoder, java.nio.CharBuffer, java.nio.ByteBuffer, java.lang.StringBuilder, org.apache.logging.log4j.core.layout.ByteBufferDestination)' (org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder and org.apache.logging.log4j.core.layout.TextEncoderHelper are in unnamed module of loader 'app')
at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:165)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:134)
at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:125)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:89)
at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:542)
at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:500)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:483)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:417)
at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:82)
at org.apache.logging.log4j.core.Logger.log(Logger.java:161)
at org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2205)
at org.apache.logging.log4j.spi.AbstractLogger.logMessageTrackRecursion(AbstractLogger.java:2159)
at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2142)
at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:2017)
at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1983)
at org.apache.logging.log4j.spi.AbstractLogger.info(AbstractLogger.java:1275)
at org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:472)
at org.apache.ignite.logger.log4j2.Log4J2Logger.info(Log4J2Logger.java:464)
at org.apache.ignite.internal.GridLoggerProxy.info(GridLoggerProxy.java:137)
at org.apache.ignite.internal.plugin.IgniteLogInfoProviderImpl.ackConfiguration(IgniteLogInfoProviderImpl.java:222)
at org.apache.ignite.internal.plugin.IgniteLogInfoProviderImpl.ackKernalInited(IgniteLogInfoProviderImpl.java:98)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:902)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1721)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1160)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1054)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:940)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:839)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:709)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:678)
at org.apache.ignite.Ignition.start(Ignition.java:353)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:365)
Caused by: java.lang.IllegalAccessError: class org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder tried to access method 'void org.apache.logging.log4j.core.layout.TextEncoderHelper.encodeText(java.nio.charset.CharsetEncoder, java.nio.CharBuffer, java.nio.ByteBuffer, java.lang.StringBuilder, org.apache.logging.log4j.core.layout.ByteBufferDestination)' (org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder and org.apache.logging.log4j.core.layout.TextEncoderHelper are in unnamed module of loader 'app')
at org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder.encode(JsonTemplateLayout.java:241)
at org.apache.logging.log4j.layout.template.json.JsonTemplateLayout$StringBuilderEncoder.encode(JsonTemplateLayout.java:216)
at org.apache.logging.log4j.layout.template.json.JsonTemplateLayout.encode(JsonTemplateLayout.java:304)
at org.apache.logging.log4j.layout.template.json.JsonTemplateLayout.encode(JsonTemplateLayout.java:58)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:197)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:190)
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:181)
at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:161)
... 31 more
Solution
As Stanislav Lukyanov (see accepted answer) suggested, the solution was simply to download the JAR and place it under $IGNITE_HOME/libs. The error mentioned above was caused by a version mismatch. Having the following JARs with the correct versions made it work:
log4j-api-2.17.1.jar (default provided by ignite distribution)
log4j-core-2.17.1.jar (default provided by ignite distribution)
ignite-log4j2-2.13.0.jar (default provided by ignite distribution)
log4j-layout-template-json-2.17.1.jar (added, did not work with version 2.18.x)
If you run Ignite using Maven, you'll need to add the required dependency to your application POM, as described in the documentation:
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-layout-template-json</artifactId>
<version>2.18.0</version>
</dependency>
If you run Ignite using a ZIP distribution, you'll need to download the dependency as a JAR, e.g. from here, and add it to $IGNITE_HOME/libs.
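If the layout still fails to load, a quick way to confirm which JAR each log4j class is actually loaded from (and so spot the kind of version skew described above) is a small check like the following. The class names are the real log4j ones; the rest is just an illustrative sketch run with the same classpath as Ignite:
public class Log4jJarCheck
{
    public static void main(String[] args) throws Exception
    {
        // Print the JAR each class comes from to spot log4j-core vs. template-json version skew.
        String[] classNames = {
            "org.apache.logging.log4j.core.layout.TextEncoderHelper",
            "org.apache.logging.log4j.layout.template.json.JsonTemplateLayout"
        };
        for (String name : classNames)
        {
            Class<?> cls = Class.forName(name);
            System.out.println(name + " -> " + cls.getProtectionDomain().getCodeSource().getLocation());
        }
    }
}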

Ignite Local Node Binary Configuration Not Equal to Remote Node Configuration

I use the Ignite Docker image to set up an Ignite cluster on my local machine. Things work fine as long as I use the default configuration. When I tried overriding the config using the command
docker run -d --name my_ignite \
-p 11211:11211 -p 47100:47100 -p 47500:47500 -p 49112:49112 \
-e "OPTION_LIBS=ignite-indexing" \
-e "CONFIG_URI=file:///Users/abc/Documents/ignite_configs/ignite-config.xml" \
-v $(pwd):/Users/abc/Documents/ignite_configs apacheignite/ignite:2.7.0
Ignite starts up correctly, but when I try to write to the cache I get the following exception.
My config file ignite-config.xml is
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns = "http://www.springframework.org/schema/beans"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="binaryConfiguration">
<bean class="org.apache.ignite.configuration.BinaryConfiguration">
<property name="idMapper">
<bean class = "org.apache.ignite.binary.BinaryBasicIdMapper">
<property name = "lowerCase" value = "true"/>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
How should I fix this?
Stacktrace:
2020-05-20 14:34:49 [main] WARN o.a.i.i.p.c.d.d.t.PartitionsEvictManager - Logging at INFO level without checking if INFO level is enabled: Evict partition permits=4
2020-05-20 14:34:50 [tcp-client-disco-msg-worker-#4%fn_Instance_979931923%] ERROR ROOT - Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: GridWorker [name=tcp-client-disco-msg-worker, igniteInstanceName=svexecfn_Instance_979931923, finished=true, heartbeatTs=1590010490223]]]
org.apache.ignite.IgniteException: GridWorker [name=tcp-client-disco-msg-worker, igniteInstanceName=svexecfn_Instance_979931923, finished=true, heartbeatTs=1590010490223]
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
at org.apache.ignite.internal.worker.WorkersRegistry.onStopped(WorkersRegistry.java:169)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:153)
at org.apache.ignite.spi.discovery.tcp.ClientImpl$1.body(ClientImpl.java:304)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
2020-05-20 14:34:50 [main] ERROR o.a.i.i.IgniteKernal%svexecfn_Instance_979931923 - Failed to start manager: GridManagerAdapter [enabled=true, name=o.a.i.i.managers.discovery.GridDiscoveryManager]
org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#7ed9ae94], reconCnt=10, reconDelay=2000, maxAckTimeout=600000, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:300)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:939)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1682)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1066)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:678)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:603)
at org.apache.ignite.Ignition.start(Ignition.java:323)
...
Caused by: org.apache.ignite.spi.IgniteSpiException: Local node's binary configuration is not equal to remote node's binary configuration [locNodeId=4fc2317b-a7b5-43e0-bc37-20747fab2d73, rmtNodeId=a7ce1554-64cb-4de1-a71f-5c3ddb252ffa, locBinaryCfg=null, rmtBinaryCfg={globIdMapper=org.apache.ignite.binary.BinaryBasicIdMapper, compactFooter=true, globSerializer=null}]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1946)
at org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1888)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at org.apache.ignite.spi.discovery.tcp.ClientImpl$1.body(ClientImpl.java:304)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Take a look at the binary configuration of all your nodes:
https://apacheignite.readme.io/docs/binary-marshaller#configuring-binary-objects
You have:
Local node's binary configuration is not equal to remote node's binary configuration [locNodeId=4fc2317b-a7b5-43e0-bc37-20747fab2d73, rmtNodeId=a7ce1554-64cb-4de1-a71f-5c3ddb252ffa, locBinaryCfg=null, rmtBinaryCfg={globIdMapper=org.apache.ignite.binary.BinaryBasicIdMapper, compactFooter=true, globSerializer=null}]
which means you are inserting from a node with a different binary config.
Take a look at the binary config of 4fc2317b-a7b5-43e0-bc37-20747fab2d73; it should be the same as on the remote node.
<bean class="org.apache.ignite.configuration.BinaryConfiguration">
<property name="idMapper">
<bean class = "org.apache.ignite.binary.BinaryBasicIdMapper">
<property name = "lowerCase" value = "true"/>
</bean>
</property>
</bean>
This is the binary configuration ^; it should be the same in your server node's ignite_configs/ignite-config.xml file.
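If the node that fails to join is started programmatically (e.g. a thick client), a minimal sketch of matching the server's binary configuration in code could look like this. Discovery settings are omitted here and would have to point at the same cluster; the lowerCase flag mirrors the ignite-config.xml above:
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryBasicIdMapper;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
public class ClientWithMatchingBinaryConfig
{
    public static void main(String[] args)
    {
        // Mirror the server's <binaryConfiguration> so both nodes use the same id mapper.
        BinaryConfiguration binaryCfg = new BinaryConfiguration();
        binaryCfg.setIdMapper(new BinaryBasicIdMapper(true)); // lowerCase = true, as in ignite-config.xml
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setBinaryConfiguration(binaryCfg);
        cfg.setClientMode(true);
        // Discovery configuration omitted; it must target the same cluster.
        Ignite ignite = Ignition.start(cfg);
        System.out.println("Joined cluster with " + ignite.cluster().nodes().size() + " node(s)");
    }
}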

ActiveMQ failover seems not to work

I have a super simple scenario: one broker and one consumer with a durable subscription.
This is the code of my consumer app:
package test;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;
import pojo.Event;
import pojo.StockUpdate;
public class Consumer
{
private static transient ConnectionFactory factory;
private transient Connection connection;
private transient Session session;
public static int counter = 0;
public Consumer(String brokerURL) throws JMSException
{
factory = new ActiveMQConnectionFactory(brokerURL);
connection = factory.createConnection();
connection.setClientID("CLUSTER_CLIENT_1");
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
}
public void close() throws JMSException
{
if (connection != null)
{
connection.close();
}
}
public static void main(String[] args) throws JMSException
{
try
{
// extract topics from the rest of arguments
String[] topics = new String[2];
topics[0] = "CSCO";
topics[1] = "ORCL";
// define connection URI
Consumer consumer = new Consumer("failover:(tcp://localhost:61616)?maxReconnectAttempts=-1&useExponentialBackOff=true");
for (String stock : topics)
{
try
{
Destination destination = consumer.getSession().createTopic("STOCKS." + stock);
// consumer.getSession().
MessageConsumer messageConsumer = consumer.getSession().createDurableSubscriber((Topic) destination, "STOCKS_DURABLE_CONSUMER_" + stock);
messageConsumer.setMessageListener(new Listener());
}
catch (JMSException e)
{
e.printStackTrace();
}
}
}
catch (Throwable t)
{
t.printStackTrace();
}
}
public Session getSession()
{
return session;
}
}
class Listener implements MessageListener
{
public void onMessage(Message message)
{
try
{
TextMessage textMessage = (TextMessage) message;
String json = textMessage.getText();
Event event = StockUpdate.fromJSON(json, StockUpdate.class);
System.out.println("Consumed message #:" + ++Consumer.counter + "\n" + event);
}
catch (Exception e)
{
e.printStackTrace();
}
}
}
Here is my activemq.xml
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="R6_cluster_broker1" persistent="true">
<networkConnectors>
<networkConnector uri="static:(failover:(tcp://remote_master:61616,tcp://remote_slave:61617))"/>
</networkConnectors>
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" >
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<persistenceAdapter>
<kahaDB directory="/work/temp/kahadb"/>
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<!-- <transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/> -->
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
The web consoles requires by default login, you can disable this in the jetty.xml file
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
</beans>
When I have both the broker and the consumer running and then stop the broker, my consumer exits a few moments later. As far as I understood, it should attempt to reconnect, but that is not the case. What am I doing wrong? Please advise.
!NOTE! I launch my consumer in Eclipse; I do not build a standalone JAR for this task.
I have updated my broker to the latest 5.9.1 and did the same for my consumer. The result is the same: after I stop the broker, my consumer dies a few seconds later. It works fine while the broker is up and running.
For anyone who can't set up failover via the URI/URL parameter in the ConnectionFactory because of a "schema not found" error in Artemis:
As far as I know, in ActiveMQ the URL is the following:
failover:(tcp://localhost:61616,tcp://localhost:51516)?randomize=false
But in Artemis the above will fail because of "schema not found", so just remove the failover: prefix:
(tcp://localhost:61616,tcp://localhost:51516)?randomize=false
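For completeness, a minimal sketch of using that URL with the Artemis JMS client; this assumes the artemis-jms-client dependency is on the classpath and uses the URL exactly as given above:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
public class ArtemisFailoverClient
{
    public static void main(String[] args) throws Exception
    {
        // Note: no "failover:" prefix for the Artemis client, as explained above.
        ConnectionFactory factory = new ActiveMQConnectionFactory("(tcp://localhost:61616,tcp://localhost:51516)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions and consumers as usual ...
        connection.close();
    }
}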
Alright, the problem was actually in my code: there was nothing that would prevent the main thread from exiting. Since the thread that implements failover is a daemon thread, the consumer app terminated as soon as there was nothing to hold on to (no non-daemon threads).
Most likely you are using a version of ActiveMQ that has a bug which causes all of the client threads to be daemon threads, which means there is nothing to keep the client running. Upgrade to a later version such as 5.9.1 and see if that helps. If not, post more information, as you haven't really provided much.
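To illustrate the fix described above, here is a small sketch that keeps a non-daemon thread alive until shutdown; the Consumer/Listener setup from the question is assumed to happen where the comment indicates:
import java.util.concurrent.CountDownLatch;
public class ConsumerRunner
{
    public static void main(String[] args) throws Exception
    {
        final CountDownLatch shutdownLatch = new CountDownLatch(1);
        // ... create the Consumer and register the durable subscribers here, as in the question ...
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable()
        {
            public void run()
            {
                shutdownLatch.countDown();
            }
        }));
        // Block the main (non-daemon) thread so the JVM stays alive while the
        // failover transport reconnects in the background.
        shutdownLatch.await();
    }
}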

How to create an internal bridge to forward a queue to a topic

I'm trying to set up an internal bridge from a queue to a topic and I'm getting a "Lookup failed" exception.
Here are some more details:
Env: GlassFish 3.1.2.2 (fresh install) and embedded JMS host
Create queue connection pool (javax.jms.QueueConnectionFactory): pool1 (jndi and pool name)
Create topic connection pool (javax.jms.TopicConnectionFactory): pool2 (jndi and pool name)
Create queue: queue1 (jndi and queue name)
Create topic: queue1 (jndi and queue name)
The domain.xml is configured as follows:
<jms-service default-jms-host="default_JMS_host">
<jms-host host="localhost" name="default_JMS_host" lazy-init="false">
<property name="imq.bridge.enabled" value="true"></property>
<property name="imq.bridge.admin.user" value="admin"></property>
<property name="imq.bridge.admin.password" value="admin"></property>
<property name="imq.bridge.bridge1.type" value="jms"></property>
<property name="imq.bridge.activelist" value="bridge1"></property>
<property name="imq.bridge.bridge1.xmlurl" value="file:///c:/tmp/bridge.xml"></property>
</jms-host>
</jms-service>
And the bridge.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE jmsbridge SYSTEM "sun_jmsbridge_1_0.dtd">
<jmsbridge name="bridge1">
<link name="link1">
<source connection-factory-ref="pool1" destination-ref="queue1">
</source>
<target connection-factory-ref="pool2" destination-ref="topic1">
</target>
</link>
<connection-factory ref-name="pool1" lookup-name="pool1"/>
<connection-factory ref-name="pool2" lookup-name="pool2" />
<destination ref-name="queue1" name="queue1" type="queue" lookup-name="queue1" />
<destination ref-name="topic1" name="topic1" type="topic" lookup-name="topic1" />
</jmsbridge>
EXCEPTION:
[01/Feb/2013:16:14:56 CAT] WARNING [B2217]: Failed to start bridge service manager:
javax.naming.NamingException: Lookup failed for 'pool1' in SerialContext[myEnv={java.naming.factory.initial=com.sun.enterprise.naming.impl.SerialInitContextFactory, java.naming.factory.state=com.sun.corba.ee.impl.presentation.rmi.
JNDIStateFactoryImpl, java.naming.factory.url.pkgs=com.sun.enterprise.naming} [Root exception is javax.naming.NameNotFoundException: pool1 not found]
at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:518)
at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:455)
at javax.naming.InitialContext.lookup(InitialContext.java:411)
at javax.naming.InitialContext.lookup(InitialContext.java:411)
at com.sun.messaging.bridge.service.jms.JMSBridge.createConnectionFactory(JMSBridge.java:513)
at com.sun.messaging.bridge.service.jms.JMSBridge.createLink(JMSBridge.java:333)
at com.sun.messaging.bridge.service.jms.JMSBridge.init(JMSBridge.java:227)
at com.sun.messaging.bridge.service.jms.BridgeImpl.start(BridgeImpl.java:125)
at com.sun.messaging.bridge.BridgeServiceManagerImpl.startBridge(BridgeServiceManagerImpl.java:447)
at com.sun.messaging.bridge.BridgeServiceManagerImpl.start(BridgeServiceManagerImpl.java:255)
at com.sun.messaging.jmq.jmsserver.Broker._start(Broker.java:1548)
at com.sun.messaging.jmq.jmsserver.Broker.start(Broker.java:456)
at com.sun.messaging.jmq.jmsserver.BrokerProcess.start(BrokerProcess.java:164)
at com.sun.messaging.jmq.jmsserver.DirectBrokerProcess.start(DirectBrokerProcess.java:92)
at com.sun.messaging.jmq.jmsclient.runtime.impl.BrokerInstanceImpl.start(BrokerInstanceImpl.java:206)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.sun.enterprise.glassfish.bootstrap.GlassFishMain.main(GlassFishMain.java:97)
at com.sun.enterprise.glassfish.bootstrap.ASMain.main(ASMain.java:55)
Caused by: javax.naming.NameNotFoundException: pool1 not found
at com.sun.enterprise.naming.impl.TransientContext.doLookup(TransientContext.java:248)
The documentation is pretty good but I can't get around that issue...
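One way to narrow down the "pool1 not found" error is to check what is actually bound in JNDI under that exact name. A rough sketch, assuming it runs with the GlassFish naming provider available (e.g. adapted into a servlet in the same domain, or run standalone with gf-client.jar on the classpath):
import javax.naming.InitialContext;
import javax.naming.NameClassPair;
import javax.naming.NamingEnumeration;
public class JndiProbe
{
    public static void main(String[] args) throws Exception
    {
        InitialContext ctx = new InitialContext();
        // List what is bound at the root of the naming context.
        NamingEnumeration<NameClassPair> names = ctx.list("");
        while (names.hasMore())
        {
            System.out.println(names.next().getName());
        }
        // Then try the exact lookup-name used in bridge.xml.
        System.out.println(ctx.lookup("pool1"));
    }
}
If "pool1" only appears under a prefixed name (or not at all), the lookup-name entries in bridge.xml would need to match whatever the listing shows.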

ActiveMQ as local JNDI Tomcat resource

I'm trying to set up ActiveMQ as a Tomcat resource with local JNDI. But when I add the config file to the
broker URI ("brokerConfig=xbean:activemq.xml"), the broker doesn't start up, and there is no error message.
It just keeps telling me:
Mrz 30, 2012 10:23:19 AM org.springframework.jms.listener.DefaultMessageListenerContainer refreshConnectionUntilSuccessful
Warnung: Could not refresh JMS Connection for destination 'FOO.QUEUE' - retrying in 5000 ms. Cause: Could not create Transport. Reason: java.io.IOException: Could not load xbean factory:java.lang.NoClassDefFoundError: Could not initialize class org.apache.activemq.xbean.XBeanBrokerFactory
I used the default config from http://svn.apache.org/repos/asf/activemq/trunk/assembly/src/release/conf/activemq.xml, and it is placed in the root of my src folder.
I'm using "activemq-all_5.4.3.jar".
My web.xml in "WebContent\META-INF"
<resource-ref>
<description>JMS Connection</description>
<res-ref-name>jms/ConnectionFactory</res-ref-name>
<res-type>org.apache.activemq.ActiveMQConnectionFactory</res-type>
<res-auth>Container</res-auth>
<res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>
<resource-ref>
<res-ref-name>jms/FooQueue</res-ref-name>
<res-type>javax.jms.Queue</res-type>
<res-auth>Container</res-auth>
</resource-ref>
My applicationContext.xml in "WebContent\WEB-INF"
<jee:jndi-lookup id="fooQueue"
jndi-name="java:comp/env/jms/FooQueue"
cache="true"
resource-ref="true"
lookup-on-startup="true"
expected-type="org.apache.activemq.command.ActiveMQQueue"
proxy-interface="javax.jms.Queue" />
<bean id="singleConnectionFactory"
class="org.springframework.jms.connection.SingleConnectionFactory"
p:targetConnectionFactory-ref="connectionFactory"/>
<bean id="jmsTemplate"
class="org.springframework.jms.core.JmsTemplate"
p:connectionFactory-ref="singleConnectionFactory"
p:defaultDestination-ref="fooQueue"/>
<bean id="messageSenderService"
class="by2.server.JmsMessageSenderService"
p:jmsTemplate-ref="jmsTemplate" />
<bean id="jmsMessageDelegate"
class="by2.server.JmsMessageDelegate" />
<bean id="myMessageListener"
class="org.springframework.jms.listener.adapter.MessageListenerAdapter"
p:delegate-ref="jmsMessageDelegate"
p:defaultListenerMethod="handleMessage">
</bean>
<jms:listener-container
container-type="default"
connection-factory="singleConnectionFactory"
acknowledge="auto">
<jms:listener destination="FOO.QUEUE" ref="myMessageListener" />
</jms:listener-container>
My context.xml in "WebContent\META-INF"
<Context reloadable="true">
<Resource auth="Container" name="jms/ConnectionFactory"
type="org.apache.activemq.ActiveMQConnectionFactory" description="JMS Connection Factory"
factory="org.apache.activemq.jndi.JNDIReferenceFactory" brokerURL="vm://localhost?brokerConfig=xbean:activemq.xml"
brokerName="FooBroker" />
<Resource auth="Container" name="jms/FooQueue"
type="org.apache.activemq.command.ActiveMQQueue" description="JMS queue"
factory="org.apache.activemq.jndi.JNDIReferenceFactory" physicalName="FOO.QUEUE" />
</Context>
To me it looks like a classpath error.
Do you have xbean-spring-x.x.jar on your classpath?
If not, copy this file from the ActiveMQ distribution as well and put it on your app server's classpath.
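A quick way to verify the classpath theory is to check whether the xbean classes can actually be loaded from inside the webapp (for example from a startup listener). The first class name comes from the error above; the second is from xbean-spring and is listed here only as an assumption of what may be missing:
public class XbeanClasspathCheck
{
    public static void main(String[] args)
    {
        String[] classNames = {
            "org.apache.activemq.xbean.XBeanBrokerFactory",
            "org.apache.xbean.spring.context.ResourceXmlApplicationContext"
        };
        for (String name : classNames)
        {
            try
            {
                Class.forName(name);
                System.out.println(name + " found");
            }
            catch (Throwable t)
            {
                // NoClassDefFoundError / ClassNotFoundException both indicate a classpath gap.
                System.out.println(name + " missing: " + t);
            }
        }
    }
}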