I'm only trying to use the ActiveMQ queue mechanism, not topics. Does the activemq.xml configuration below force any clients to use a topic instead of a queue, or can I ignore the policyEntry topic=">"? I'm seeing issues where multiple consumers sometimes (a very small percentage of the time) pick up the same message and process it. The admin console shows that I have a queue. I do see that the default activemq.xml contains a policyEntry for both topic and queue. It's a bit odd that not all messages are consumed by multiple consumer threads if this config is indeed invalid for a queue-based approach.
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}" persistent="false">
    <destinationPolicy>
        <policyMap>
            <policyEntries>
                <policyEntry topic=">">
                    <!-- The constantPendingMessageLimitStrategy is used to prevent
                         slow topic consumers from blocking producers and affecting
                         other consumers, by limiting the number of messages that
                         are retained. For more information, see:
                         http://activemq.apache.org/slow-consumer-handling.html
                    -->
                    <pendingMessageLimitStrategy>
                        <constantPendingMessageLimitStrategy limit="1000"/>
                    </pendingMessageLimitStrategy>
                </policyEntry>
            </policyEntries>
        </policyMap>
    </destinationPolicy>
The line you pointed out only means that a policy is defined for all topics. In ActiveMQ, '>' is a wildcard much like '*' in other systems: topic=">" makes the entry the default policy for every topic. You can remove that configuration if you have doubts. Because your pending message limit strategy is defined in a policyEntry with topic=">", it applies only to topics, not to queues.
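For completeness, a sketch of what an explicit queue policy could look like alongside the topic entry (the queue entry and its memoryLimit value here are illustrative, not something your setup requires):

<policyEntries>
    <!-- applies only to topics -->
    <policyEntry topic=">">
        <pendingMessageLimitStrategy>
            <constantPendingMessageLimitStrategy limit="1000"/>
        </pendingMessageLimitStrategy>
    </policyEntry>
    <!-- applies only to queues -->
    <policyEntry queue=">" memoryLimit="1mb"/>
</policyEntries>

Either way, a topic=">" entry cannot force clients onto topics, so the duplicate deliveries you are seeing are more likely redelivery after a missed acknowledgement (e.g. session recovery or a transaction rollback) than an effect of this policy.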
I followed the instructions in the ActiveMQ documentation with no success. The relevant portion of the configuration file (conf/activemq.xml) looks like this:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}"
        deleteAllMessagesOnStartup="true" schedulePeriodForDestinationPurge="60000">
    <destinationPolicy>
        <policyMap>
            <policyEntries>
                <policyEntry topic=">" gcInactiveDestinations="true" inactiveTimoutBeforeGC="300000">
[...]
Note the use of three attributes, as per the docs: schedulePeriodForDestinationPurge on the broker element, and gcInactiveDestinations and inactiveTimoutBeforeGC (sic) on the policyEntry element.
However, the broker (version 5.15.11) is not purging inactive destinations.
What am I doing wrong?
Be sure your <broker ...> element has scheduler support enabled:
<broker ... schedulerSupport="true" ... >
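Putting the pieces together, a minimal sketch using the attributes from the question plus scheduler support (the rest of the broker configuration is elided):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost"
        schedulerSupport="true" schedulePeriodForDestinationPurge="60000">
    <destinationPolicy>
        <policyMap>
            <policyEntries>
                <!-- the attribute really is spelled "inactiveTimoutBeforeGC" -->
                <policyEntry topic=">" gcInactiveDestinations="true" inactiveTimoutBeforeGC="300000"/>
            </policyEntries>
        </policyMap>
    </destinationPolicy>
</broker>

The purge task runs off the broker's scheduler, so without schedulerSupport="true" the two gc attributes never take effect.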
Problem statement: there are two queues in two different brokers, each queue with a single consumer. The producer drops messages onto the first queue, and we want a copy of each message sent to the second queue. For visualization:
Producer
|
Broker1 --> Queue1 --> Consumer1
| (copy)
Broker2 --> Queue2 --> Consumer2 (consumes same message as Consumer1 but is independent of Consumer1)
The ask is:
Only one queue created in each broker (I have achieved the behaviour with four queues, but am looking for a more optimized solution).
Prefer that no topics be used.
To be done only through ActiveMQ-provided configuration.
What I have done so far:
As mentioned, I managed to do the above with four queues.
In Broker1, Queue1 forwards a copy of each message to a virtual destination queue, and a network connector sends the messages from that virtual destination to Broker2:
<destinationInterceptors>
    <virtualDestinationInterceptor>
        <virtualDestinations>
            <compositeQueue name="Queue1" forwardOnly="false">
                <forwardTo>
                    <queue physicalName="IntermediateQueue"/>
                </forwardTo>
            </compositeQueue>
        </virtualDestinations>
    </virtualDestinationInterceptor>
</destinationInterceptors>
<networkConnectors>
    <networkConnector
            name="Q:broker1->broker2"
            uri="static:(tcp://localhost:31616)"
            duplex="false"
            staticBridge="true">
        <staticallyIncludedDestinations>
            <queue physicalName="IntermediateQueue"/>
        </staticallyIncludedDestinations>
    </networkConnector>
</networkConnectors>
In Broker2, all messages received on the intermediate queue are forwarded to the actual destination queue:
<destinationInterceptors>
    <virtualDestinationInterceptor>
        <virtualDestinations>
            <compositeQueue name="IntermediateQueue">
                <forwardTo>
                    <queue physicalName="FinalDestinationQueue" />
                </forwardTo>
            </compositeQueue>
        </virtualDestinations>
    </virtualDestinationInterceptor>
</destinationInterceptors>
Appreciate any help, as going through the ActiveMQ documentation and forums didn't yield an optimized answer to this problem.
You are essentially re-creating pub/sub and then adding a transmission-queue pattern for multi-broker integration. There are valid use cases for doing this, and your approach is within the intended design of composite destinations and network connectors. The trade-off is the heavy administration and configuration management it requires.
I understand you prefer not to use topics. However, you may consider looking at Virtual Topics, which solve this problem in an elegant way and allow you to add new consumers dynamically, without having to modify the broker configuration.
Producer sends to the topic:
topic://VT.ORDER.EVENT
Consumers read from specially named queues:
clientA: queue://VQ.CLIENTA.VT.ORDER.EVENT
clientB: queue://VQ.CLIENTB.VT.ORDER.EVENT
ref: Virtual Topics
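Note that the VT./VQ. naming above is not the out-of-the-box convention (by default it is VirtualTopic.> for the topics and Consumer.*. as the queue prefix), so using those names would mean declaring them once per broker; a sketch, assuming the names from the example:

<destinationInterceptors>
    <virtualDestinationInterceptor>
        <virtualDestinations>
            <!-- every queue named VQ.<client>.VT.<rest> gets a copy of each
                 message published to the matching VT.<rest> topic -->
            <virtualTopic name="VT.>" prefix="VQ.*."/>
        </virtualDestinations>
    </virtualDestinationInterceptor>
</destinationInterceptors>

Adding a new independent consumer then only requires reading from a new VQ.<client>... queue name; no broker configuration change is needed.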
I'm attempting to use a network of brokers that bridges two LANs over a duplex WAN connector. There are actually many subscribers in our setup, each connecting to a different "Broker A", if that makes sense. All of the Broker A instances have their own connections to a single "Broker B", and broker B holds the duplex WAN connector to broker C on the other LAN.
Software and configurations:
ActiveMQ 5.14.0, Java 8
All brokers have non-persistent topics only; advisory messages are on.
OS: Linux (RHEL 6)
When I initially bring everything online, regardless of the order in which I bring things online, communication between the publisher and subscriber works flawlessly. I've had the system up-and-running for weeks at a time without issue.
What I've observed is that if broker C is restarted, no new topics that show up in broker B ever appear in broker C. New topics are still appearing in broker B as they are created by the subscriber(s). Neither existing nor new topics ever propagate across the WAN to broker C. I've verified this using jconsole.
If I restart broker B, the problem goes away immediately. The topics contained in broker B (according to jconsole) are the same as they were prior to restart, but now they've magically appeared in C.
Brokers B and C have the same configuration (shown below). The only difference is that B creates a duplex network connector to C using the following code:
final NetworkConnector wanNC = new DiscoveryNetworkConnector(
        // static discovery of the single remote WAN address, wrapped in failover
        new URI(String.format("static:(failover:(tcp://%s:%d))", parentNode, port)));
wanNC.setCheckDuplicateMessagesOnDuplex(true);   // filter duplicate messages over the duplex bridge
wanNC.setDecreaseNetworkConsumerPriority(true);  // prefer local consumers over networked ones
wanNC.setDuplex(true);                           // one connection carries traffic both ways
wanNC.setName(NetworkUtils.getHostName());
wanNC.setNetworkTTL(10);                         // messages may cross up to 10 brokers
wanNC.setSuppressDuplicateTopicSubscriptions(false);
broker.addNetworkConnector(wanNC);
broker.xml:
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">

    <!-- Allows us to use system properties as variables in this configuration file -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" />

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="${broker.id}" start="false"
            offlineDurableSubscriberTimeout="5000" offlineDurableSubscriberTaskSchedule="5000"
            persistent="false" useJmx="true" schedulePeriodForDestinationPurge="86400000">
        [...]
        <networkConnectors>
            <networkConnector name="z-broker-${broker.id}-x-nc"
                              decreaseNetworkConsumerPriority="true"
                              networkTTL="10"
                              uri="multicast://225.5.5.5:6555?group=TO_X">
                <excludedDestinations>
                    <topic physicalName="X.A" />
                </excludedDestinations>
            </networkConnector>
            <networkConnector name="z-broker-${broker.id}-y-nc"
                              decreaseNetworkConsumerPriority="true"
                              networkTTL="10"
                              uri="multicast://225.5.5.5:6555?group=TO_Y">
                <excludedDestinations>
                    <topic physicalName="X.B.>" />
                </excludedDestinations>
            </networkConnector>
        </networkConnectors>
        <transportConnectors>
            <transportConnector name="openwire"
                                uri="tcp://${broker.ip}:${broker.port}?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"
                                discoveryUri="multicast://225.5.5.5:6555?group=TO_Z" />
        </transportConnectors>
    </broker>
</beans>
Why don't topics from broker B (existing or new) ever show up in broker C?
Why does restarting broker B solve the issue immediately?
Apparently the trick was changing the network connector URI from
static:(failover:(tcp://<ip>:<port>))
to
static:(tcp://<ip>:<port>)
I didn't need failover transport for any reason since the connection is intended as a network bridge and there's a single remote address.
For whatever reason, using failover prevented topics from propagating on reconnect.
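For reference, the equivalent bridge in XML form looks something like this (connector name, host, and port are placeholders):

<networkConnector name="wan-bridge"
                  uri="static:(tcp://remotehost:61616)"
                  duplex="true"
                  checkDuplicateMessagesOnDuplex="true"
                  decreaseNetworkConsumerPriority="true"
                  networkTTL="10"/>

The network connector re-establishes the underlying connection on its own, so wrapping the address in failover: buys nothing here and, as observed, interfered with topic propagation after reconnect.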
Using ActiveMQ 5.9, I had a network issue that caused messages in my queue to back up, and I started getting log messages like:
Usage(default:store:queue://myqueue:store) percentUsage=101%, usage=6254005794, limit=6144430090 ... Stopping producer ... to prevent flooding queue ...
I have not been able to recover and start consuming messages again.
I tried going into activemq.xml to increase my max store usage:
<systemUsage>
    <systemUsage>
        <storeUsage>
            <storeUsage limit="15 gb" />
        </storeUsage>
    </systemUsage>
</systemUsage>
I also tried to turn off flow control with:
<policyEntry queue=">" producerFlowControl="false"/>
But I still get the same error message.
I have the disk space. There are no settings being overridden on the command line. How can I recover and get my messages processed?
I ended up finding an ugly (yet effective) way to get around this. If you connect to ActiveMQ via JMX (e.g. with jconsole), you can navigate to the MBean org.apache.activemq.Broker and find the attribute StoreLimit. Manually increase this value in JConsole and ActiveMQ will resume message processing shortly thereafter.
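If a restart is acceptable, the activemq.xml route should also work; the catch (and presumably why the live JMX change was needed here) is that systemUsage limits are read at broker startup. For reference, the usual shape with all three limits shown (values other than the 15 gb from the question are illustrative):

<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage limit="1 gb"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="15 gb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="5 gb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>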
I'm trying to understand the difference between the ActiveMQ redeliveryPlugin and the consumer's own attempts to receive a message before marking it as a poison pill. What's the difference? The documentation gives this example:
<broker xmlns="http://activemq.apache.org/schema/core" schedulerSupport="true">
    ....
    <plugins>
        <redeliveryPlugin fallbackToDeadLetter="true" sendToDlqIfMaxRetriesExceeded="true">
            <redeliveryPolicyMap>
                <redeliveryPolicyMap>
                    <redeliveryPolicyEntries>
                        <!-- a destination specific policy -->
                        <redeliveryPolicy queue="SpecialQueue" maximumRedeliveries="4"
                                          redeliveryDelay="10000" />
                    </redeliveryPolicyEntries>
                    <!-- the fallback policy for all other destinations -->
                    <defaultEntry>
                        <redeliveryPolicy maximumRedeliveries="4" initialRedeliveryDelay="5000"
                                          redeliveryDelay="10000" />
                    </defaultEntry>
                </redeliveryPolicyMap>
            </redeliveryPolicyMap>
        </redeliveryPlugin>
    </plugins>
Now, I understand the broker's redelivery system as separate from the client's. For instance, after making 6 attempts (by default) to acknowledge a message (CLIENT_ACKNOWLEDGE mode), the consumer sends a poison ack ("poison pill"). So, is it true that after receiving the poison ack, the broker will try to resend the message to the consumer, which will make another 6 attempts?
So, in total we may have 4 x 6 = 24 attempts before the message is sent to a DLQ.
Is my understanding correct?
Yes. The broker is not aware of any client-side redelivery; that happens in "the driver", in memory. The broker won't consider whether the client has already retried or not. The result is nested retries, which is good to be aware of.
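For completeness, the client-side count that multiplies with the broker plugin's maximumRedeliveries is the connection factory's redelivery policy, which can be tuned through the broker URL; a sketch (bean id, address, and values are illustrative):

<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <!-- jms.redeliveryPolicy.* options configure the in-memory client-side retries -->
    <property name="brokerURL"
              value="tcp://localhost:61616?jms.redeliveryPolicy.maximumRedeliveries=6&amp;jms.redeliveryPolicy.initialRedeliveryDelay=1000"/>
</bean>

Setting jms.redeliveryPolicy.maximumRedeliveries=0 hands redelivery entirely to the broker's redeliveryPlugin, avoiding the nested 4 x 6 multiplication described above.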