ActiveMQ scheduled messages in master-slave

Are scheduled messages also shared when using an ActiveMQ master-slave broker setup? I successfully created a master-slave pair using JDBC, but the scheduled messages do not appear in the database. This makes the master-slave broker configuration not really a 100% failover system. Is there anything I should specifically set up to make this happen?
This is the code I use to create the broker:
BrokerService brokerService = new BrokerService();
brokerService.setBrokerName(brokerName);
brokerService.addConnector("tcp://" + host + ":" + port);
brokerService.setSchedulerSupport(true);
// Allow JMX monitoring
brokerService.setUseJmx(true);
ManagementContext managementContext = new ManagementContext();
managementContext.setConnectorPort(port + 10000);
managementContext.setRmiServerPort(port + 20000);
brokerService.setManagementContext(managementContext);
// Set temp and store limits to 512MB to avoid
// unrealistic-limit-warnings
brokerService.getSystemUsage().getStoreUsage().setLimit(512 * 1024 * 1024);
brokerService.getSystemUsage().getTempUsage().setLimit(512 * 1024 * 1024);
And with this addition I create the data source for the master-slave setup:
Map<String, Object> configuration = entityFactory.getProperties();
BasicDataSource dataSource = new BasicDataSource();
dataSource.setDriverClassName((String) configuration.get("hibernate.connection.driver_class"));
dataSource.setUrl((String) configuration.get("hibernate.connection.url"));
dataSource.setUsername((String) configuration.get("hibernate.connection.username"));
dataSource.setPassword((String) configuration.get("hibernate.connection.password"));
which I use to set up the JDBCPersistenceAdapter for master-slave:
JDBCPersistenceAdapter adapter = new JDBCPersistenceAdapter();
adapter.setDataSource(dataSource);
brokerService.setPersistenceAdapter(adapter);
which is followed by starting the brokerService:
brokerService.start();
This code all works fine. The queue is shared between the brokers successfully, and the consumers do their job. The consumers sometimes create a producer, which successfully uses the failover URL to find out which broker is currently up. That all works well.
But the scheduled messages do not appear in the database, and scheduled messages simply stop being delivered when the broker that holds them is shut down.
Thanks!

Given the following quotes from http://activemq.apache.org/persistence.html:
To achieve high performance of durable messaging in ActiveMQ V4.x we strongly recommend you use our high performance journal - which is enabled by default.
and from http://activemq.apache.org/masterslave.html:
JDBC Master Slave - Requires a shared database. Also relatively slow as it cannot use the
high performance journal
and the following answer (http://bit.ly/1jobMO6):
The scheduler store will use KahaDB based store regardless of the
persistence adapter you're using for your messaging store.
it seems that high (and correct?) performance for scheduled messages can only be achieved by using a KahaDB-based store. To work around this you might be able to use a shared file system to store the KahaDB database (see http://activemq.apache.org/shared-file-system-master-slave.html). If not, you should find a way to have brokers schedule messages when they are promoted to master.
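For the shared-file-system route, a rough sketch (untested; it reuses the brokerName and dataSource variables from the question, and the shared path is only an example) could look like this:
// Untested sketch: keep the message store on JDBC, but point the
// scheduler store (which is KahaDB-based) at a directory on a shared
// file system that both master and slave can reach.
BrokerService brokerService = new BrokerService();
brokerService.setBrokerName(brokerName);
brokerService.setSchedulerSupport(true);
// Example path only; must be a shared mount (NFS/SAN) with working file locking
brokerService.setSchedulerDirectoryFile(new java.io.File("/mnt/shared/activemq/scheduler"));
JDBCPersistenceAdapter adapter = new JDBCPersistenceAdapter();
adapter.setDataSource(dataSource);
brokerService.setPersistenceAdapter(adapter);
brokerService.start();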

Related

How to process messages in parallel from a Weblogic JMS queue?

I am new to JMS, and I am trying to understand if there is a way to consume messages in parallel from a JMS queue and process them using Spring JMS.
I checked a few answers on Stack Overflow, but I am still confused.
The application I am working on uses Spring Boot and WebLogic JMS as the messaging broker. It listens to a JMS queue from a single producer using @JmsListener.
In the JMS ConnectionFactory configuration of the application the following parameter has been set:
DefaultJmsListenerContainerFactory.setConcurrency("6-10");
Does that mean if there are 100 messages currently in a queue then 10 messages will be consumed and processed in parallel? If so, can I increase the value to process more messages in parallel? If so, are there any limitations to it?
Also, I am confused about what DefaultJmsListenerContainerFactory.setConcurrency and setConcurrentConsumers do.
Currently the processing of JMS client app is very slow. So I need suggestions to implement parallel processing.
concurrentConsumers is a fixed number of consumers, whereas concurrency can specify a variable range that scales up/down as needed. Also see maxConcurrentConsumers.
The actual behavior also depends on prefetch; if each consumer prefetches 100 messages then only one consumer might get them all.
There is no limit (aside from memory/cpu constraints).
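As an illustration (a sketch, not taken from the application in question; the bean wiring and connection factory are assumed), the container factory configuration could look like this:
import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // "6-10": start with 6 concurrent consumers and scale up to 10 under load
    factory.setConcurrency("6-10");
    return factory;
}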

rabbitmq queue clear after restart

I have installed RabbitMQ on Windows Server 2012 64-bit.
I tested the publishing and consuming parts with a large amount of data and everything is fine; the only problem I am facing is that the messages in a queue are lost after the RabbitMQ server restarts.
I am using the VB.NET SDK of RabbitMQ.
I am setting the "Durable" property of the queue declaration to true, and the DeliveryMode of the basic properties to 2, to make the messages persistent. But still the messages are lost after my server restarts.
How can I overcome this?
https://www.rabbitmq.com/tutorials/tutorial-two-dotnet.html
The "Message durability" section on that page explains it well:
At this point we're sure that the task_queue queue won't be lost even if RabbitMQ restarts. Now we need to mark our messages as persistent - by setting IBasicProperties.Persistent to true.
var properties = channel.CreateBasicProperties();
properties.Persistent = true;
Note on message persistence
Marking messages as persistent doesn't fully guarantee that a message
won't be lost. Although it tells RabbitMQ to save the message to disk,
there is still a short time window when RabbitMQ has accepted a
message and hasn't saved it yet. Also, RabbitMQ doesn't do fsync(2)
for every message -- it may be just saved to cache and not really
written to the disk. The persistence guarantees aren't strong, but
it's more than enough for our simple task queue. If you need a
stronger guarantee then you can use publisher confirms.
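To see the whole combination in one place, here is roughly what it looks like with the RabbitMQ Java client (shown in Java for illustration; the VB.NET/C# API has the same shape, and the host, queue name, and timeout are just examples):
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class DurablePublish {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // durable = true: the queue definition survives a broker restart
            channel.queueDeclare("task_queue", true, false, false, null);
            // optional: enable publisher confirms before publishing
            channel.confirmSelect();
            // PERSISTENT_TEXT_PLAIN sets deliveryMode = 2 so the message is written to disk
            channel.basicPublish("", "task_queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "hello".getBytes(StandardCharsets.UTF_8));
            // wait until the broker confirms it has taken responsibility for the message
            channel.waitForConfirmsOrDie(5000);
        }
    }
}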

Active MQ one message by a consumer

Can we configure ActiveMQ to send only one message per instance of the application?
I have Tomcat installed in cluster mode.
I'm using the Spring JmsTemplate as the consumer.
You need to explain your question further; it's not clear what you are asking.
If you are talking about prefetch, IIRC ActiveMQ sets the prefetch to 1000 by default; set it to 0 to force messages to be distributed across all instances (at the cost of performance). Typically you will want to use prefetch, but you need to tune it for your needs.
Set the maxConcurrentConsumers property to 1. This should make it so that only one thread consumes from the queue per node.
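As a rough sketch (broker URL and queue name are placeholders, and the real wiring depends on your Spring setup), the two knobs look like this:
// Prefetch is set per connection via the broker URI; a small prefetch keeps
// messages spread across the cluster nodes instead of going to one consumer.
ActiveMQConnectionFactory cf =
        new ActiveMQConnectionFactory("tcp://broker-host:61616?jms.prefetchPolicy.queuePrefetch=1");

// With Spring, limit each node to a single consuming thread:
DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(cf);
container.setDestinationName("MY.QUEUE");
container.setConcurrentConsumers(1);
container.setMaxConcurrentConsumers(1);
// plus setMessageListener(...) and the usual container lifecycle in real wiring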

ActiveMQ: Reject connections from producers when persistent store fills

I would like to configure my ActiveMQ producers to failover (I'm using the Stomp protocol) when a broker reaches a configured limit. I want to allow consumers to continue consumption from the overloaded broker, unabated.
Reading ActiveMQ docs, it looks like I can configure ActiveMQ to do one of a few things when a broker reaches its limits (memory or disk):
Slow down messages using producerFlowControl="true" (by blocking the send)
Throw exceptions when using sendFailIfNoSpace="true"
Neither of the above, in which case... I'm not sure what happens? Does it revert to TCP flow control?
It doesn't look like any of these things are designed to trigger a producer failover. A producer will failover when it fails to connect but not, as far as I can tell, when it fails to send (due to producer flow control, for example).
So, is it possible for me to configure a broker to refuse connections when it reaches its limits? Or is my best bet to detect the slowdown on the producer side, and manually reconfigure my producers to use a different broker at that time?
Thanks!
Your best bet is to use sendFailIfNoSpace, or better sendFailIfNoSpaceAfterTimeout. This will throw an exception up to your client, which can then attempt to resend the message to another broker at the application level (though you can encapsulate this logic over the top of your Stomp library, and use this facade from your code). Though if your ActiveMQ setup is correctly wired, your load both in terms of production and consumption should be more or less evenly distributed across your brokers, so this feature may not buy you a great deal.
You would probably get a better result if you concentrated on fast consumption of the messages, and increased the storage limits to smooth out peaks in load.
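Programmatically, along the lines of the broker setup in the first question, that would look roughly like this (an untested sketch; the timeout value is arbitrary):
// Fail the send with an exception instead of blocking the producer
// when the broker's store/temp/memory limits are reached.
SystemUsage usage = brokerService.getSystemUsage();
usage.setSendFailIfNoSpace(true);
// ...or give the broker a grace period (in ms) before failing the send:
usage.setSendFailIfNoSpaceAfterTimeout(30000);

// sendFailIfNoSpace is normally combined with disabled producer flow control:
PolicyEntry policy = new PolicyEntry();
policy.setQueue(">");
policy.setProducerFlowControl(false);
PolicyMap policyMap = new PolicyMap();
policyMap.setDefaultEntry(policy);
brokerService.setDestinationPolicy(policyMap);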

create list of activemq queue

My existing code, which uses BlockingQueue, creates a list of blocking queues (something like private List<BlockingQueue<Event>> queues;) into which I can put messages to process.
However, due to persistence issues, we plan to shift to ActiveMQ.
Can anyone tell me whether I can get a list of ActiveMQ queues (in a Java program, not from a configuration file)? I know that I can use createQueue on the session to create a single queue, but I want a list of queues, like I had with BlockingQueue.
Any help would be much appreciated.
You can get a list of the available queues using DestinationSource from your connection.
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://localhost:61616");
ActiveMQConnection connection = (ActiveMQConnection) connectionFactory.createConnection();
// start the connection so the DestinationSource can receive the advisory messages it relies on
connection.start();
DestinationSource ds = connection.getDestinationSource();
Set<ActiveMQQueue> queues = ds.getQueues();
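For example, to print the discovered queue names (the short sleep just gives the advisory messages a moment to arrive; it is illustrative, not a guarantee):
Thread.sleep(2000); // crude: give the advisory messages time to arrive
for (ActiveMQQueue queue : queues) {
    System.out.println(queue.getPhysicalName());
}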
edit:
To create a queue, take a look at the ActiveMQ Hello World sample (link). What the code there does is create a connection to an ActiveMQ broker embedded in the JVM:
// Create a ConnectionFactory
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost");
// Create a Connection
Connection connection = connectionFactory.createConnection();
connection.start();
// Create a Session
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
// Create the destination (Topic or Queue)
Destination destination = session.createQueue("TEST.FOO");
The thing that might not be obvious about the above code is that the line:
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost");
will not only set up a connection to a broker, but also embed a broker inside the connection if there isn't one already. This is explained at the bottom of this page.
That feature can be turned off using the following (you still need a broker, but you may want to set it up some other way):
ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("vm://localhost?create=false");
I really like ActiveMQ, but it offers a lot more than persistence, so things might seem a little overly complex when you are doing simple things. But I hope that will not scare you off.
To create a list of queues, you have to create that list, then create each queue individually from the session object.
Queue q = session.createQueue("someQueueName");
This, however, does not really "create" a queue in that sense, since a queue is a persistent "thing" in the ActiveMQ process/server. This will only create a reference to an ActiveMQ queue given an identifier/name.
I'm not sure why you need ten queues up front. Typically, you have one queue per event type or use case (or similar), and then use concurrent consumers to process in parallel.
But of course, you can always do something similar with a simple for loop, creating one queue at a time and adding them to an ArrayList. Note that you cannot get type-safe queues holding only Event objects.
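For example (reusing the session from the snippet above; the queue-name prefix and count are arbitrary):
List<Queue> queues = new ArrayList<>();
for (int i = 0; i < 10; i++) {
    // createQueue only returns a reference; the broker creates the actual
    // queue lazily when a producer or consumer first uses it
    queues.add(session.createQueue("EVENT.QUEUE." + i));
}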
You can send ObjectMessages with events though. Just create one:
Event event = createEvent(..); // Given "Event" is serializable (need to be able to persist it).
Message m = session.createObjectMessage(event);
// Send it as usual in ActiveMQ:
MessageProducer producer = session.createProducer(q);
producer.send(m);
It might be that you need to rethink one or a few things in your code when converting from BlockingQueues to persistent ActiveMQ queues.