We're running ActiveMQ 5.6.0. We have 3 brokers operating in a static network in our test environment. Here's the current scenario: we have 6 consumers connecting at random to the 3 brokers, so one broker has 3 consumers, the second has 2, and the third has 1. When we pile messages onto the queue, we're seeing that messages back up on the third broker (the one with a single consumer); the other two brokers aren't given any of the backlog, and the remaining 5 consumers sit idle.
Below you'll find the configuration for one of our brokers (dev.queue01); the other 2 are similar, with the appropriate changes for the static hostnames.
I would expect that messages would be automatically distributed to the other brokers for consumption by the idle consumers. Please tell me if I've missed something in my description of the problem. Thanks in advance for any guidance.
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="dev.queue01" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="false" memoryLimit="1mb">
<pendingSubscriberPolicy>
<vmCursor />
</pendingSubscriberPolicy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="false" memoryLimit="64mb" optimizedDispatch="true" enableAudit="false" prioritizedMessages="true">
<networkBridgeFilterFactory>
<conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true" />
</networkBridgeFilterFactory>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="true"/>
</managementContext>
<persistenceAdapter>
<amqPersistenceAdapter directory="${activemq.data}/data/amqdb"/>
</persistenceAdapter>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="256 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="750 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="750 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616" updateClusterClients="true" updateClusterClientsOnRemove="true" rebalanceClusterClients="true"/>
</transportConnectors>
<networkConnectors>
<networkConnector uri="static:(tcp://dev.queue02:61616,tcp://dev.queue03:61616)" name="queues_only" conduitSubscriptions="false" decreaseNetworkConsumerPriority="false" networkTTL="4">
<dynamicallyIncludedDestinations>
<queue physicalName=">"/>
</dynamicallyIncludedDestinations>
<excludedDestinations>
<topic physicalName=">"/>
</excludedDestinations>
</networkConnector>
</networkConnectors>
</broker>
<import resource="jetty.xml"/>
</beans>
Late answer, but hopefully it might help future readers.
You've described a network ring of brokers, where B1, B2, and B3 all talk to one another, with 3 consumers (C1-C3) on B1, 2 consumers (C4 & C5) on B2, and 1 consumer (C6) on B3. You didn't describe where your messages are being produced (which broker they go to first), but let's say it's B3. (B3 produces the worst-case scenario that most closely matches your description, though you'll still see uneven load no matter where the messages are produced.)
B3 has three attached consumers: C6, B1, and B2. That broker will round-robin messages across those consumers, so 1/3 of the messages will go to C6, 1/3 to B1, and 1/3 to B2.
B1 has five attached consumers: C1, C2, C3, B2, and B3. But messages won't be delivered to the same broker they just came from, so there are 4 consumers that count for the messages from B3: C1, C2, C3, and B2. So of the 1/3 of the total messages, C1, C2, and C3 will each get 1/4 (1/12 of the total), and B2 will get the same 1/12 of the total. More on that in a second.
B2 has four attached consumers: C4, C5, B1, and B3. But messages won't be delivered to the same broker they just came from, so there are 3 consumers that count for the messages from B3: C4, C5, and B1. So of the 1/3 of the total messages, C4 and C5 will each get 1/3 (1/9 of the total), and B1 will get the same 1/9 of the total. More on that in a second, too.
So far we've seen C6 get 1/3 of the total messages, C1-C3 get 1/12 of the total messages, C4-C5 get 1/9 of the total messages, and 1/12 + 1/9 = 7/36 of the total messages routed on to a second broker. Let's return to those messages now.
Of the messages that have followed the B3 -> B1 -> B2 path (1/12 of the total), they will get round-robined across C4, C5, and the bridge back to B3 (only B1, the broker they just came from, is excluded), for an additional 1/36 of the total each. So C4 and C5 will have received 1/9 + 1/36 = 5/36 of the total so far.
Similarly, of the messages that have followed the B3 -> B2 -> B1 path (1/9 of the total), they will get round-robined across C1, C2, C3, and the bridge back to B3, for an additional 1/36 of the total each, so C1, C2, and C3 will have received 1/12 + 1/36 = 1/9 of the total so far.
Of the messages that have followed the B3 -> B1 -> B2 -> B3 path (1/36 of the total), half go to C6 (1/72 of the total), and half go to B1 (1/72 of the total). Similarly, of the messages that have followed the B3 -> B2 -> B1 -> B3 path (1/36 of the total), half go to C6 (1/72 of the total), and half go to B2 (1/72 of the total). So C6 gets 1/36 of the messages (totaling 13/36), B1 gets 1/72 of the total, and B2 gets 1/72 of the total.
We're getting into diminishing returns now, but you can see by now that C6 gets an outsized share (about 36%) of the total messages, while the consumers connected to B1 (the broker with the most consumers) each get an undersized share (roughly 12% each, about a third of what C6 sees), resulting in C6 having lots of work to do and C1-C5 having far less work and spending time idle, as you observed. You can also see how some messages can take a long path through the network, resulting in high latency, but that's not what your question was about.
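The hop-by-hop fractions above can be carried to the limit with a short simulation of a simple round-robin forwarding model: each broker round-robins a message across its local consumers plus its network bridges, excluding only the broker the message just arrived from. This is a sketch of that model, with all production assumed to happen on B3 as in the walkthrough; it is not ActiveMQ's actual dispatch code.

```python
from fractions import Fraction

# Topology from the answer: local consumers per broker, plus bridges.
local = {'B1': ['C1', 'C2', 'C3'], 'B2': ['C4', 'C5'], 'B3': ['C6']}
links = {'B1': ['B2', 'B3'], 'B2': ['B1', 'B3'], 'B3': ['B1', 'B2']}

received = {c: Fraction(0) for cs in local.values() for c in cs}
frontier = {('B3', None): Fraction(1)}   # all messages produced on B3

for hop in range(40):                    # residual mass shrinks geometrically
    nxt = {}
    for (broker, came_from), mass in frontier.items():
        # Round-robin across everything attached, except where we came from.
        targets = [t for t in local[broker] + links[broker] if t != came_from]
        share = mass / len(targets)
        for t in targets:
            if t in received:            # a real consumer: message is consumed
                received[t] += share
            else:                        # another broker: message is forwarded
                key = (t, broker)
                nxt[key] = nxt.get(key, Fraction(0)) + share
    frontier = nxt

for c in sorted(received):
    print(c, float(received[c]))
```

Under this model the shares converge to 25/69 (about 36%) for C6, 10/69 (about 14.5%) each for C4 and C5, and 8/69 (about 11.6%) each for C1-C3, matching the outsized share described above.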
This may be a long shot, as I'm not really sure, but in your config you have all topics excluded:
<excludedDestinations>
<topic physicalName=">"/>
</excludedDestinations>
Can you remove that restriction for testing? ActiveMQ uses advisory topics to communicate when clients connect to a specific queue/topic, so it's possible your third broker does not know about the other clients because you blocked the advisory topics.
If I understood you correctly, broker means queue here.
All your brokers have the same type of objects.
All your consumers do the same kind of processing.
And you want to share the workload equally between your consumers.
The sequence of operations is not that important.
I tried to do the same thing on ActiveMQ 5.5.1.
All I did was create one queue and multiple consumers.
I pointed all the consumers at the same queue.
ActiveMQ automatically took care of the distribution.
I observed the following example:
If I have a queue holding 2000 records and I point 2 consumers at it at the same time, the 1st consumer will process objects starting from 0, and the 2nd consumer will start processing objects after a random offset (say, from 700).
Once the 1st consumer has finished processing objects 0 - 700 and the 2nd consumer has processed 200 records (700 - 900), the 1st consumer may start getting objects from another random offset (maybe 1200).
The adjustment of the offset was controlled by ActiveMQ automatically.
I observed this myself, so I am quite sure this is what happens.
Hope I have answered your question (or at least understood your problem correctly).
What I did not understand here was:
if ActiveMQ creates QUEUES, how does it serve objects from somewhere in the middle?
Related
Suppose we have 3 nodes in a cluster: node1, node2, node3.
In node1 we have an exchange e1 bound to a queue q1 with binding key = key1.
It is attached to consumer1.
In node2 we have an exchange e2 bound to a queue q2 with binding key = key2.
It is attached to consumer2.
Can consumer2 read messages from q1 in the cluster? If not, how can this be implemented?
You can read the RabbitMQ routing tutorial. Though it uses Python, the concept is the same. In the "Putting it all together" part, consumer 2 can receive info, error, and warning messages from queue 2, while consumer 1 gets only errors from queue 1.
In your case, c2 can't read messages from queue 1 right now. To implement this, the exchange setup doesn't need to change; just bind queue 2 to exchange 1 with key 1.
Say I have this pub/sub pattern implemented:
So basically I deliver a message to each C that subscribed to exchange X.
I have instances of P, and a lot of subscribers like C. Let's define C10, C11, C12, C13 as group C1, and C20, C21, C22, C23 as group C2.
How do I deliver a message so that only one C from each group receives it? (I'm perfectly fine with round robin.)
Just go to the topics tutorial.
The routing keys should look like C.C1 or C.C2.
Basically, publish each message with routing key C.C1 xor C.C2, and subscribe each consumer in a group to its own key, C.C1 xor C.C2. RMQ will distribute messages among all consumers subscribed to the same C.CN routing key in round-robin fashion.
So it appears what I needed was a fanout exchange with named queues instead of exclusive ones.
Each C service declares a non-exclusive named queue, binds it to the exchange, and attaches a consumer to that queue.
If two services declare the same queue and each attaches a consumer to it, deliveries end up being round-robined between them.
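That pattern can be sketched with the broker reduced to plain Python (the group and consumer names here are illustrative, not a real client API): the fanout copies each message to every named group queue, and consumers sharing a queue are round-robined.

```python
from itertools import cycle

# One named, non-exclusive queue per group; each queue has several consumers
# (their inboxes are plain lists). The fanout exchange copies each message to
# every bound queue, and each queue round-robins across its own consumers.
groups = {
    'group-C1': {'C10': [], 'C11': [], 'C12': [], 'C13': []},
    'group-C2': {'C20': [], 'C21': [], 'C22': [], 'C23': []},
}
rr = {g: cycle(consumers.values()) for g, consumers in groups.items()}

def fanout_publish(msg):
    for g in groups:                 # every group queue gets a copy...
        next(rr[g]).append(msg)      # ...delivered to exactly one consumer

for m in ['m0', 'm1', 'm2']:
    fanout_publish(m)

for g, consumers in groups.items():
    print(g, consumers)
```

Each message is delivered exactly once per group, spread across that group's consumers, which is the behavior the named-queue approach gives you.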
I have producer app with 2 separate instances (p1, p2), and consumer app with 2 separate instances (c1, c2).
Producer p1 connects to exchange with topic= t1, queueName =name1.
Consumer c1 connects to exchange with topic= t1, queueName =name1.
Producer p2 connects to exchange with topic= t2, queueName =name1.
Consumer c2 connects to exchange with topic= t2, queueName =name1.
I see in RabbitMQ GUI that I have 2 exchanges but only 1 queue.
Instead of c1 receiving messages from p1 only and c2 receiving messages from p2 only, RabbitMQ is doing round robin on the messages between c1 and c2, so the messages I send from p2 are being received by both c1 and c2.
I thought that in RabbitMQ the correlation is multiple queues per exchange, so the behavior here is unexpected. Why?
You can have multiple queues for every exchange, it's true; but the routing key is a queue matter, not a consumer matter.
The routing key is used by Rabbit to send the message to the right queue; once a message arrives at a topic exchange, it is sent to all the queues bound to that specific topic. You have only one queue here; that's why both c1 and c2 get messages from it.
Check this link for a clear example.
If you need to separate C1 and C2, you need to bind them to 2 different queues, not to the same one.
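A toy model of this (plain Python with hypothetical Queue/Exchange classes and exact-match routing keys only, not the real RabbitMQ client) reproduces both the shared-queue round-robin and the two-queue fix:

```python
from itertools import cycle

class Queue:
    def __init__(self):
        self.consumers = []
        self.rr = None
    def add_consumer(self, inbox):
        self.consumers.append(inbox)
        self.rr = cycle(self.consumers)   # round-robin over all consumers
    def push(self, msg):
        next(self.rr).append(msg)

class Exchange:
    def __init__(self):
        self.bindings = []                # (routing key, queue); exact match only
    def bind(self, key, queue):
        self.bindings.append((key, queue))
    def publish(self, key, msg):
        for k, q in self.bindings:
            if k == key:
                q.push(msg)

# The question's setup: both topics bound to ONE shared queue, two consumers.
c1, c2 = [], []
shared = Queue()
shared.add_consumer(c1)
shared.add_consumer(c2)
ex = Exchange()
ex.bind('t1', shared)
ex.bind('t2', shared)
ex.publish('t2', 'a')
ex.publish('t2', 'b')
print(c1, c2)                             # p2's messages are split across c1 AND c2

# The fix: one queue per topic, each with its own consumer.
c1_fixed, c2_fixed = [], []
q1, q2 = Queue(), Queue()
q1.add_consumer(c1_fixed)
q2.add_consumer(c2_fixed)
ex2 = Exchange()
ex2.bind('t1', q1)
ex2.bind('t2', q2)
ex2.publish('t2', 'a')
print(c1_fixed, c2_fixed)                 # only c2 sees t2 traffic now
```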
Can somebody please explain how to draw the Gantt chart for the following using multilevel feedback queue scheduling?
Consider a multilevel feedback queue scheduling with three queues, numbered Q1, Q2, Q3. The scheduler first executes processes in Q1, which is given a time quantum of 10 milliseconds. If a process does not finish within this time, it is moved to the tail of Q2. The scheduler executes processes in Q2 only when Q1 is empty. The process at the head of Q2 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and is put into Q3. Processes in Q3 are run on an FCFS basis, only when Q1 and Q2 are empty.
Process   Arrival time   Burst time
P1        0              17
P2        12             25
P3        28             8
P4        36             32
P5        46             18
First of all, the top-level time quantum is fixed at 10 ms, as given, since we need to implement the multilevel feedback queue scheduling algorithm.
Processes are kept in the ready queue, so the queue will contain P1, P2, P3, P4, P5 in arrival order; but whenever a process uses up its quantum without completing, it is fed back to the next lower queue with its remaining execution.
In the chart below, end times are inclusive to the interval and start times are exclusive; the interval in between is when the process runs:
1-->10 ms-------P1
10-->17 ms-------P1 // P1 finished execution..........
17-->20 ms-------P2
20-->30 ms-------P2 // P2 sent to 1st lower queue as it's still incomplete
30-->38 ms-------P3 // P3 finished execution..........
38-->40 ms-------P4
40-->50 ms-------P4 // pushed next to P2 in 1st lower queue
50-->60 ms-------P5 // pushed next to P4 in 1st lower queue
Now the 1st lower queue comes into action, with a time quantum of 16 ms.
60-->82 ms-------P2 // P2 finished execution.........
82-->98 ms-------P4 // P4 sent in 2nd lower queue as it's still incomplete
99-->107 ms-------P5 // P5 finished execution..........
Now the 2nd lower queue comes into action, with an FCFS algorithm implementation.
107-->111 ms-------P4 // Finally, P4 finished execution..........
Hence, this would be the Gantt chart diagram for time-quantum = 10 ms.
If you're left with any doubt, please leave a comment below!
A process that arrives for queue 1 preempts a process in queue 2. (Operating System Concepts, International Student Version, 9th Edition, page 216)
So I think P2 preempts P1 at the 12 ms mark, and the suggestion above is not correct.
The execution order in this solution seems to be wrong, so I have corrected it. Please correct me if I am wrong.
Final answer: when Q1 is empty, Q2 is executing; but at time 12 ms P2 arrives in Q1, so Q2 stops executing that process and waits for Q1 to become empty again.
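To make the disputed timeline concrete, here is a small time-stepped sketch of the rules as stated in the question. The tie-breaking details flagged in the comments are assumptions, so treat the output as one plausible Gantt chart rather than the only correct one.

```python
from collections import deque

# Assumptions beyond the question's text: a Q1 arrival preempts a process
# running in a lower queue (per the book rule quoted above); a preempted
# process returns to the head of its own queue with its remaining time;
# demotion happens only when a full quantum is actually used up.
procs = {'P1': (0, 17), 'P2': (12, 25), 'P3': (28, 8), 'P4': (36, 32), 'P5': (46, 18)}
remaining = {p: burst for p, (_, burst) in procs.items()}
quanta = [10, 16, None]                  # Q1, Q2, Q3 (None means FCFS)
queues = [deque(), deque(), deque()]
t, timeline, current = 0, [], None       # current = (process, level, ticks used)

while any(remaining.values()):
    for p, (arrival, _) in procs.items():
        if arrival == t:
            queues[0].append(p)          # new arrivals always enter Q1
    if current and any(queues[lvl] for lvl in range(current[1])):
        queues[current[1]].appendleft(current[0])   # preempted by a higher queue
        current = None
    if current is None:                  # dispatch from the highest non-empty queue
        for lvl in range(3):
            if queues[lvl]:
                current = (queues[lvl].popleft(), lvl, 0)
                break
    if current is None:                  # CPU idle this millisecond
        t += 1
        continue
    p, lvl, used = current
    remaining[p] -= 1
    used += 1
    t += 1
    if timeline and timeline[-1][2] == p and timeline[-1][1] == t - 1:
        timeline[-1] = (timeline[-1][0], t, p)      # extend the running segment
    else:
        timeline.append((t - 1, t, p))
    if remaining[p] == 0:
        current = None                   # finished
    elif quanta[lvl] is not None and used == quanta[lvl]:
        queues[lvl + 1].append(p)        # quantum exhausted: demote
        current = None
    else:
        current = (p, lvl, used)

print(timeline)                          # (start, end, process) Gantt segments
```

Under these assumptions P2 does preempt P1 at 12 ms, and the schedule finishes at 100 ms with no idle time, with segments P1 0-12, P2 12-22, P1 22-27, P2 27-28, P3 28-36, P4 36-46, P5 46-56, P2 56-70, P4 70-86, P5 86-94, P4 94-100.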
What will happen to messages posted to a virtual topic when there are no consumers listening? Will the broker hold them for a while until a subscriber is available?
More specifically :
At T0 and T1, messages M0 and M1 are posted. At T2, consumer C1 connects; will it receive M0 and M1? Obviously messages M2 and M3, posted at T3 and T4, will be received by C1, but what will a new consumer C2, connecting at T5, receive? All messages, just M2 and M3, or none?
It depends on the nature of the topic:
if the topic is durable (has durable consumers subscribing to it), the broker will hold the messages in the topic until all the durable consumers have consumed them.
if the topic is non-durable (no durable consumers), the messages will not even be retained by the topic, as there is no durable subscription.
For your example, I'll consider that you are using durable subscriptions / consumers:
Case 1:
T-2 C1 and C2 make durable subscription to the topic
T-1 C1 and C2 disconnect
T0: M0 is posted
T1: M1 is posted
T2: C1 connects. C1 receives M0 and M1
T3: M3 is posted. C1 receives M3
T4: M4 is posted. C1 receives M4
T5: C2 connects, C2 receives M0, M1, M3, M4
That's because they hold durable subscriptions.
You need to be very careful when using durable topics / queues: if a consumer never unsubscribes, the broker will hold its messages until the message store explodes. Make sure that doesn't happen (by setting eviction policies and / or putting a time-to-live on the messages).
Of course the previous example will vary depending when the consumer does the durable subscription.
If you are using non-durable topics:
T-2 C1 and C2 make normal subscription to the topic
T-1 C1 and C2 disconnect
T0: M0 is posted
T1: M1 is posted
T2: C1 connects. C1 does not receive anything
T3: M3 is posted. C1 receives M3
T4: M4 is posted. C1 receives M4
T5: C2 connects, C2 does not receive anything
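Both timelines can be reproduced with a toy broker model (plain Python with made-up class and method names, not a real JMS client): a durable subscription keeps a per-subscriber backlog while that consumer is offline, and a non-durable one keeps nothing.

```python
class ToyTopicBroker:
    """Toy topic model: durable subscribers get a backlog that
    survives disconnects; non-durable subscribers do not."""
    def __init__(self):
        self.backlog = {}        # durable subscriber -> messages held while offline
        self.online = set()
        self.delivered = {}      # subscriber -> messages actually received

    def subscribe(self, name, durable=False):
        self.delivered.setdefault(name, [])
        if durable:
            held = self.backlog.setdefault(name, [])
            self.delivered[name] += held      # replay the stored backlog
            self.backlog[name] = []
        self.online.add(name)

    def disconnect(self, name):
        self.online.discard(name)

    def publish(self, msg):
        for name in self.online:              # live delivery to connected subscribers
            self.delivered[name].append(msg)
        for name, held in self.backlog.items():
            if name not in self.online:       # store for offline durable subscribers
                held.append(msg)

# Case 1: durable subscriptions, made before the messages are published.
b = ToyTopicBroker()
b.subscribe('C1', durable=True)
b.subscribe('C2', durable=True)
b.disconnect('C1')
b.disconnect('C2')
b.publish('M0')
b.publish('M1')
b.subscribe('C1', durable=True)   # C1 reconnects: replays M0, M1
b.publish('M3')
b.publish('M4')
b.subscribe('C2', durable=True)   # C2 reconnects: replays M0, M1, M3, M4

# Case 2: non-durable subscription; nothing is held while offline.
n = ToyTopicBroker()
n.subscribe('C1')
n.disconnect('C1')
n.publish('M0')
n.publish('M1')                   # lost: no subscription is holding them
n.subscribe('C1')                 # receives nothing on reconnect
n.publish('M3')                   # delivered live

print(b.delivered, n.delivered)
```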
There are two things that allow messages published to a virtual topic to survive. The first is a durable subscriber, and the other is the publisher sending messages with delivery mode PERSISTENT. When messages are published with delivery mode PERSISTENT, they are saved to disk; otherwise they are kept in memory.
Why can't there be an observer/observable pattern? Taking the example above:
when M0 is posted, C1 and C2 (the subscribed consumers) are woken and can consume the event. I see this pattern as better than the durable and non-durable options, a hybrid approach.