I am trying to extract messages from Oracle AQ into a topic in ActiveMQ. I followed these instructions: http://activemq.apache.org/jms-bridge-with-oracle-aq.html. But when I started ActiveMQ I got this error:
2015-09-02 12:33:18,269 | WARN | Setup of JMS message listener invoker failed for destination 'event_queue' - trying to recover. Cause: JMS-137: Payload factory must be specified for destinations with ADT payloads | org.apache.camel.component.jms.DefaultJmsMessageListenerContainer | Camel (camel) thread #1 - JmsConsumer[event_queue]
What is the reason for this error and how can it be solved?
Yes, I found a solution. First of all, AQ supports messages of the following types:
• RAW Queues
• Oracle Object (ADT) Type Queues
• Java Message Service (JMS) Type Queues/Topics
The reason for this error (JMS-137) is that an ADT payload is not valid for use with ActiveMQ. If you are planning to make a bridge between ActiveMQ and Oracle AQ, you should use the JMS types.
The other thing is that I could not find the required aqjms.jar, so I replaced it with aqapi.jar from the jlib directory of my Oracle Client.
Also, the beans tag attributes should be:
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
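For completeness, the connection factory that bridges to AQ can be declared as a bean backed by aqapi.jar, along these lines (a minimal sketch; the JDBC URL and credentials below are placeholders you must replace):
<bean id="oracleAQConnectionFactory" class="oracle.jms.AQjmsFactory" factory-method="getConnectionFactory">
    <!-- placeholder connection details; point this at the database that hosts the JMS-type queue -->
    <constructor-arg index="0" value="jdbc:oracle:thin:@localhost:1521:XE"/>
    <constructor-arg index="1">
        <props>
            <prop key="user">scott</prop>
            <prop key="password">tiger</prop>
        </props>
    </constructor-arg>
</bean>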
I'm trying to set up mcollective/activemq on a puppetmaster (open source puppet). I am having a problem where ActiveMQ does not recognize the Stomp protocol. Here is the relevant snippet in my /etc/activemq/instances-enabled/activemq/activemq.xml file that should enable stomp+ssl:
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
<transportConnector name="stomp+ssl" uri="stomp+ssl://0.0.0.0:61614?needClientAuth=true&transport.enabledProtocols=TLSv1,TLSv1.1,TLSv1.2"/>
</transportConnectors>
When I start ActiveMQ via service activemq start, I notice that the daemon doesn't end up running (I don't see it as a process). Then I tried running service activemq console activemq, and it looks like the problem is that it can't find the stomp Transport scheme. Here is the first error that I see in the output (and the error persists throughout the output):
ERROR | Failed to start Apache ActiveMQ ([localhost, ID:my-servers-hostname.example.com-40447-1475514312306-0:1], java.io.IOException: Transport Connector could not be registered in JMX: java.io.IOException: Transport scheme NOT recognized: [stomp+ssl])
ActiveMQ recognizes openwire just fine. When using openwire+ssl only, without using stomp+ssl, the ActiveMQ daemon starts fine with no errors. However, when I try running mco find, I get an error because it seems that mco is still trying to use stomp+ssl (and ActiveMQ only has openwire+ssl enabled):
error 2016/10/03 17:26:59: activemq.rb:149:in `on_ssl_connectfail' SSL session creation with stomp+ssl://mcollective#localhost:61614 failed: Connection refused - connect(2) for "localhost" port 61614
Perhaps I need to adjust my mco config to use openwire instead of stomp? I wasn't sure which file that configuration lives in. I'm not sure why ActiveMQ doesn't recognize stomp, but I was wondering what my options are here. Is it even possible to use MCollective/ActiveMQ with only openwire+ssl, or is stomp a requirement if I want to use mco? I don't believe this is a port issue, as the relevant ports are open on the server.
Here are the relevant packages/versions installed on my machine:
OS: Ubuntu 16.04 (xenial)
puppet: 4.7.0
ActiveMQ: 5.13.2
ruby-stomp: 1.3.5-1
MCollective (mco) version: 2.9.0
I've run into the same problem with the embedded ActiveMQ server in my project. It turns out I needed to add the following dependencies to my pom:
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-stomp</artifactId>
<version>5.15.0</version>
</dependency>
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>activemq-kahadb-store</artifactId>
<version>5.15.0</version>
</dependency>
In your case, I think you need to get hold of those 2 jars and add them to your ActiveMQ installation.
The activemq package provided by Ubuntu 16+ does not include the STOMP transport library. I do not know why.
You can download it manually and place it in /usr/share/activemq/lib:
cd /usr/share/activemq/lib
# check your activemq version first (apt-cache policy activemq) and use the matching version of the library.
wget https://repository.apache.org/content/repositories/releases/org/apache/activemq/activemq-stomp/5.13.5/activemq-stomp-5.13.5.jar
service activemq restart
I had a terrible night trying to figure out what is going on with RabbitMQ and SpringXD, unfortunately without success.
The problem:
SpringXD closes RabbitMQ connections repeatedly,
or reports warnings related to the channel cache size.
Fragment from the SpringXD log (during stream initialization/autowiring):
2016-05-03T07:42:43+0200 1.3.0.RELEASE WARN DeploymentsPathChildrenCache-0 listener.SimpleMessageListenerContainer - CachingConnectionFactory's channelCacheSize can not be less than the number of concurrentConsumers so it was reset to match: 4
...
2016-05-03T07:54:17+0200 1.3.0.RELEASE ERROR AMQP Connection 192.168.120.125:5672 connection.CachingConnectionFactory - Channel shutdown: connection error
2016-05-03T17:38:58+0200 1.3.0.RELEASE ERROR AMQP Connection 192.168.120.125:5672 connection.CachingConnectionFactory - Channel shutdown: connection error; protocol method: method<connection.close>(reply-code=504, reply-text=CHANNEL_ERROR - second 'channel.open' seen, class-id=20, method-id=10)
Fragment from the RabbitMQ log:
=WARNING REPORT==== 3-May-2016::08:08:09 === closing AMQP connection <0.22276.61> (192.168.120.125:59350 -> 192.168.120.125:5672): client unexpectedly closed TCP connection
=ERROR REPORT==== 3-May-2016::08:08:11 === closing AMQP connection <0.15409.61> (192.168.120.125:58527 -> 192.168.120.125:5672): {writer,send_failed,{error,closed}}
The following "state: blocked" error is rare:
=ERROR REPORT==== 3-May-2016::17:38:58 === Error on AMQP connection <0.20542.25> (192.168.120.125:59421 -> 192.168.120.125:5672, vhost: '/', user: 'xd', state: blocked), channel 7: operation channel.open caused a connection exception channel_error: "second 'channel.open' seen"
My setup (6 nodes)
- springxd 1.3.0 distributed (zookeeper)
- RabbitMQ 3.6.0, Erlang R16B03-1 cluster
ackMode: AUTO ## or NONE
autoBindDLQ: false
backOffInitialInterval: 1000
backOffMaxInterval: 10000
backOffMultiplier: 2.0
batchBufferLimit: 10000
batchingEnabled: false
batchSize: 200
batchTimeout: 5000
compress: false
concurrency: 4
deliveryMode: NON_PERSISTENT ## or PERSISTENT
durableSubscription: false
maxAttempts: 10
maxConcurrency: 10
prefix: xdbus.
prefetch: 1000
replyHeaderPatterns: STANDARD_REPLY_HEADERS,*
republishToDLQ: false
requestHeaderPatterns: STANDARD_REQUEST_HEADERS,*
requeue: true
transacted: false
txSize: 1000
spring:
rabbitmq:
addresses:
priv1:5672,priv2:5672,priv3:5672,
priv4:5672,priv5:5672,priv6:5672
adminAddresses:
http://priv1:15672, http://priv2:15672, http://priv3:15672, http://priv4:15672, http://priv5:15672,http://priv6:15672
nodes:
rabbit#priv1,rabbit#priv2,rabbit#priv3,
rabbit#priv4,rabbit#priv5,rabbit#priv6
username: xd
password: xxxx
virtual_host: /
useSSL: false
ha-xdbus policy:
- ^xdbus\. all
- ha-mode: exactly
- ha-params: 2
- queue-master-locator: min-masters
Rabbit conf
[
{rabbit,
[
{tcp_listeners, [5672]},
{queue_master_locator, "min-masters"}
]
}
].
When ackMode is NONE the following happens:
Eventually the number of consumers drops to zero and I have zombie streams that don't recover from that state, which in turn causes unwanted queueing.
When ackMode is AUTO the following happens:
Some messages are left un-acked forever.
SpringXD streams and durable queues
The rabbit module is being used as a source or sink, with no custom autowiring.
Typical stream definitions are as follows:
Ingestion:
event_generator | rabbit --mappedRequestHeaders=XDRoutingKey --routingKey='headers[''XDRoutingKey'']'
Processing/Sink:
rabbit --queues='xdbus.INQUEUE-A' | ENRICHMENT-PROCESSOR-A | elastic-sink
rabbit --queues='xdbus.INQUEUE-B' | ENRICHMENT-PROCESSOR-B | elastic-sink
The xdbus.INQUEUE-xxx queues are created manually (durable) from the RabbitMQ admin GUI.
GLOBAL statistics (from the RabbitMQ Admin)
Connections: 190
Channels: 2263 (channel cache problem, perhaps?)
Exchanges: 20
Queues: 120
Consumers : 1850
Finally:
I would appreciate it if someone could tell me what is wrong with the configuration (I am pretty sure the network is performing well, so there are no network problems, and there is no problem related to the max open files limit).
Message rates vary from 2K/sec to at most 30K/sec, which is a relatively small load.
Thanks!
Ivan
We have seen some similar instability when churning channels at a high rate.
The work-around was to increase the channel cache size to avoid the high rate of churning; it's not clear where the instability lies, but I don't believe it is in Spring AMQP.
One problem, however, is that XD doesn't expose channelCacheSize as a property.
The answer to that other question has a work-around to add the property by replacing the bus configuration XML; increasing the cache size solved that user's problem.
We have an open JIRA issue to expose the property but it's not implemented yet.
I see you originally posted this as an 'answer' to that question.
Could someone be more specific and explain where exactly rabbit-bus.xml should be installed, and why is this happening anyway?
As it says there, you need to put it under the xd config directory:
xd/config/META-INF/spring-xd/bus/rabbit-bus.xml.
EDIT
Technique using the bus extension mechanism instead...
$ cat xd/config/META-INF/spring-xd/bus/ext/cf.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">
<bean id="rabbitConnectionFactory" class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<constructor-arg ref="rabbitFactory" />
<property name="addresses" value="${spring.rabbitmq.addresses}" />
<property name="username" value="${spring.rabbitmq.username}" />
<property name="password" value="${spring.rabbitmq.password}" />
<property name="virtualHost" value="${spring.rabbitmq.virtual_host}" />
<property name="channelCacheSize" value="${spring.rabbitmq.channelCacheSize:100}" />
</bean>
</beans>
EDIT: TEST RESULTS
Prepopulated queue foo with 1 million messages.
concurrency: 10
prefetch: 1000
txSize: 1000
xd:>stream create foo --definition "rin:rabbit --concurrency=10 --maxConcurrency=10 --prefetch=1000 --txSize=1000 | t1:transform | t2:transform | rout:rabbit --routingKey='''bar'''" --deploy
Created and deployed new stream 'foo'
So with this configuration, we end up with 40 consumers.
I never saw more than 29 publishing channels from the bus; there were 10 publishers for the sink.
1m messages were transferred from foo to bar in less than 5 minutes (via xdbus.foo.0, xdbus.foo.1 and xdbus.foo.2) - 4m messages published.
No errors - but my laptop needs to cool off :D
I have encountered an issue with ActiveMQ where the entire cluster will fail when the master Zookeeper node goes offline.
We have a 3-node ActiveMQ cluster set up in our development environment. Each node has ActiveMQ 5.12.0 and ZooKeeper 3.4.6 (note: we have done some testing with ZooKeeper 3.4.7, but it failed to resolve the issue; time constraints have so far prevented us from testing ActiveMQ 5.13).
What we have found is that when we stop the master ZooKeeper process (via the "end process tree" command in Task Manager), the remaining two ZooKeeper nodes continue to function as normal. Sometimes the ActiveMQ cluster is able to handle this, but sometimes it is not.
When the cluster fails, we typically see this in the ActiveMQ log:
2015-12-18 09:08:45,157 | WARN | Too many cluster members are connected. Expected at most 3 members but there are 4 connected. | org.apache.activemq.leveldb.replicated.MasterElector | WrapperSimpleAppMain-EventThread
...
...
2015-12-18 09:27:09,722 | WARN | Session 0x351b43b4a560016 for server null, unexpected error, closing socket connection and attempting reconnect | org.apache.zookeeper.ClientCnxn | WrapperSimpleAppMain-SendThread(192.168.0.10:2181)
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)[:1.7.0_79]
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)[:1.7.0_79]
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)[zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)[zookeeper-3.4.6.jar:3.4.6-1569965]
We were immediately concerned by the fact that (A) ActiveMQ seems to think there are four members in the cluster when it is only configured with 3, and (B) when the exception is raised, the server appears to be null. We then increased ActiveMQ's logging level to DEBUG in order to display the list of members:
2015-12-18 09:33:04,236 | DEBUG | ZooKeeper group changed: Map(localhost -> ListBuffer((0000000156,{"id":"localhost","container":null,"address":null,"position":-1,"weight":5,"elected":null}), (0000000157,{"id":"localhost","container":null,"address":null,"position":-1,"weight":1,"elected":null}), (0000000158,{"id":"localhost","container":null,"address":"tcp://192.168.0.11:61619","position":-1,"weight":10,"elected":null}), (0000000159,{"id":"localhost","container":null,"address":null,"position":-1,"weight":10,"elected":null}))) | org.apache.activemq.leveldb.replicated.MasterElector | ActiveMQ BrokerService[localhost] Task-14
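(For reference, we got this DEBUG output by raising the log level for the replication elector in conf/log4j.properties, roughly along these lines; the exact logger name is inferred from the class shown in the log, so treat it as a sketch:)
# enable DEBUG for the replicated LevelDB elector by appending to conf/log4j.properties
echo "log4j.logger.org.apache.activemq.leveldb.replicated=DEBUG" >> conf/log4j.properties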
Can anyone suggest why this may be happening and/or suggest a way to resolve this? Our configurations are shown below:
ZooKeeper:
tickTime=2000
dataDir=C:\\zookeeper-3.4.7\\data
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.0.10:2888:3888
server.2=192.168.0.11:2888:3888
server.3=192.168.0.12:2888:3888
ActiveMQ (server.1):
<persistenceAdapter>
<replicatedLevelDB
directory="activemq-data"
replicas="3"
bind="tcp://0.0.0.0:61619"
zkAddress="192.168.0.11:2181,192.168.0.10:2181,192.168.0.12:2181"
zkPath="/activemq/leveldb-stores"
hostname="192.168.0.10"
weight="5"/>
<!-- server.2 has a weight of 10, server.3 has a weight of 1 -->
</persistenceAdapter>
I'm trying to understand the difference between the ActiveMQ redeliveryPlugin and the consumer's attempts to receive a message before it marks it as a poison pill. What's the difference? In the documentation there is an example:
<broker xmlns="http://activemq.apache.org/schema/core" schedulerSupport="true" >
....
<plugins>
<redeliveryPlugin fallbackToDeadLetter="true" sendToDlqIfMaxRetriesExceeded="true">
<redeliveryPolicyMap>
<redeliveryPolicyMap>
<redeliveryPolicyEntries>
<!-- a destination specific policy -->
<redeliveryPolicy queue="SpecialQueue" maximumRedeliveries="4"
redeliveryDelay="10000" />
</redeliveryPolicyEntries>
<!-- the fallback policy for all other destinations -->
<defaultEntry>
<redeliveryPolicy maximumRedeliveries="4" initialRedeliveryDelay="5000"
redeliveryDelay="10000" />
</defaultEntry>
</redeliveryPolicyMap>
</redeliveryPolicyMap>
</redeliveryPlugin>
</plugins>
Now, I understand the broker's redelivery system as separate from the client's. For instance, after making 6 attempts (by default) to deliver a message without acknowledgement (CLIENT_ACKNOWLEDGE mode), the consumer sends a poison pill. So, is it true that after receiving the poison pill, the broker will try to resend the message to the consumer, which will make another 6 attempts?
So, in total we may have 4 x 6 = 24 attempts before the message is sent to the DLQ.
Is my understanding correct?
Yes. The broker is not aware of any client-side redelivery; that happens in "the driver", in memory. The broker does not consider whether the client has already retried or not. The result is nested retries, which is good to be aware of.
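The 6 client attempts come from the consumer's RedeliveryPolicy (maximumRedeliveries defaults to 6), which is configured on the connection factory rather than on the broker. A minimal Spring-style sketch of that client-side counterpart (the broker URL and values are placeholders):
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616"/>
    <!-- client-side redelivery, independent of the broker's redeliveryPlugin shown above -->
    <property name="redeliveryPolicy">
        <bean class="org.apache.activemq.RedeliveryPolicy">
            <property name="maximumRedeliveries" value="6"/>
            <property name="initialRedeliveryDelay" value="1000"/>
        </bean>
    </property>
</bean>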
I've been struggling to start an AMQ broker node with its persistent store on an NFSv3 share.
I keep getting the below error complaining of unavailable locks.
I've made sure that all java processes are killed and the lock file on the shared folder is deleted before starting the AMQ master broker.
When I start AMQ, it seems to create a lock file on the shared folder and after that it complains of unavailable locks.
Loading message broker from: xbean:activemq.xml
INFO | Refreshing org.apache.activemq.xbean.XBeanBrokerFactory$1#73cf56e9: startup date [Mon Dec 23 05:28:23 UTC 2013]; root of context hierarchy
INFO | PListStore:[/home/pnarayan/apache-activemq-5.9.0/activemq-data/notificationsBroker/tmp_storage] started
INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/home/y/share/nfs/amqnfs]
INFO | JMX consoles can connect to service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
INFO | Database /home/y/share/nfs/amqnfs/lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: No locks available
Below is the activemq xml configuration file I'm using:
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<broker
xmlns="http://activemq.apache.org/schema/core"
xmlns:spring="http://www.springframework.org/schema/beans"
brokerName="notificationsBroker"
useJmx="true"
start="true"
persistent="true"
useShutdownHook="false"
deleteAllMessagesOnStartup="false">
<persistenceAdapter>
<kahaDB directory="/home/y/share/nfs/amqnfs" />
</persistenceAdapter>
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
</broker>
</beans>
Is this because I'm using NFSv3 rather than NFSv4, as recommended by AMQ?
I believe the issue with NFSv3 is that it cannot clean up the lock if the broker process dies abruptly. However, it shouldn't have any issue starting the broker. If my understanding is right, why am I observing the above error?
You are absolutely right in what you say - NFS3 does not clean up its locks properly. When using KahaDB, the broker creates a file in $ACTIVEMQ_DATA/lock. If that file exists, chances are that something has a hold on it (or at least NFS3 thinks that it does) and the broker will be blocked. Check to see whether the file is there, and if so, use the lsof command to determine the process id of its holder.
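A quick check along those lines, using the KahaDB lock path from the log above (note that lsof only shows holders on the local machine):
ls -l /home/y/share/nfs/amqnfs/lock
# list any local process that still has the lock file open
lsof /home/y/share/nfs/amqnfs/lock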