I need to start a queue in OpenEJB in a "paused" state so that no messages are processed by the consumer until some related data is available. I can pause the queue programmatically as shown here, so if there were some initializer function called when a queue is created, I could use that method. The queue configuration documentation does not seem to support setting the paused state. Any ideas on how to configure the queue upon creation?
If you read the thread you linked, you will see that a queue cannot be paused, but a broker can be.
In TomEE the broker is created from a factory looked up through an SPI (in the TomEE classloader, so tomee/lib by default), so you can write your own factory and start the broker programmatically when you are ready, if that's an option.
That said, I suspect you don't want to start the connectors with the container, but starting the broker itself is not an issue. In other words, you don't want to be connected to any other machine through JMS so that you don't receive anything, but it is fine for JMS to be started and deployed.
In that case you can simply not configure any connectors on the broker and add them when ready. You can find the brokers with:
new org.apache.openejb.resource.activemq.ActiveMQ5Factory().getBrokers()
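For example, a minimal sketch of starting a connector later (it assumes getBrokers() exposes the BrokerService instances TomEE started; the tcp address is a placeholder for whatever your clients expect):

import java.util.Collection;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.TransportConnector;
import org.apache.openejb.resource.activemq.ActiveMQ5Factory;

public final class LateConnectorStarter {
    // Call this once your related data is available.
    public static void openConnectors() throws Exception {
        final Collection<BrokerService> brokers = new ActiveMQ5Factory().getBrokers();
        for (final BrokerService broker : brokers) {
            // Until now the broker had no connector, so no client could connect.
            final TransportConnector connector = broker.addConnector("tcp://0.0.0.0:61616");
            connector.start();
        }
    }
}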
Related
I'm doing a test to see how flow control behaves. I created a fast producer and slow consumers and set my destination queue policy's high-water mark to 60 percent.
The queue did reach 60%, so messages then went to the store; now the store is full and blocking, as expected.
But now I cannot get my consumer to connect and pull from the queue. It seems that the blocking is also preventing the consumer from getting in to start pulling from the queue.
Is this the correct behavior?
The consumer should not be blocked by flow-control. Otherwise messages could not be consumed to free up space on the broker for producers to send additional messages.
This issue surfaced when I was using an on-demand JMS service. The service queues or dequeues via REST services, and the consumers are created on demand. If the broker is blocked, as in my case from being out of resources, then you cannot create a new consumer.
I've since modified the JMS service to use a consumer pool (implementing an object pool pattern). The consumer pool is initialized when the application starts, and this resolved the blocking issue.
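For illustration, a minimal sketch of that workaround with the plain JMS API (names are hypothetical, and a real implementation would also need shutdown handling):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public final class ConsumerPool {
    private final BlockingQueue<MessageConsumer> pool = new LinkedBlockingQueue<>();
    private final Connection connection;

    // Create all consumers up front, before the broker can hit its limits.
    public ConsumerPool(final ConnectionFactory factory, final String queueName,
                        final int size) throws Exception {
        connection = factory.createConnection();
        connection.start();
        for (int i = 0; i < size; i++) {
            final Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            final Queue queue = session.createQueue(queueName);
            pool.add(session.createConsumer(queue));
        }
    }

    public MessageConsumer borrow() throws InterruptedException {
        return pool.take(); // blocks until a consumer is free
    }

    public void release(final MessageConsumer consumer) {
        pool.add(consumer);
    }
}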
I have a RabbitMQ broker running on two nodes as a cluster. I have observed that if the node where a queue was created goes down, the queue is no longer available on the other node. If I try to publish a message from the other node, it fails. Even if I remove the failed node from the cluster (using the forget_cluster_node command) and try to publish a message from the other node, the behavior is the same.
I don't want to enable mirroring of the queue, for the simple reason that it would replicate the messages, which would be additional load on the inter-node network.
Is there a way available in RabbitMQ to achieve this?
The behaviour you are experiencing is the default behaviour of RabbitMQ, and it's exactly what is supposed to happen. The node where you created the queue is its home node; if that node goes down, then any connections to it, or queues and exchanges associated with it, will not work at all. There are two options to resolve this issue.
One option is to have a separate queue for every node, so any node that wants to receive messages from a particular node can subscribe to that queue's exchange. This does not seem like a very good idea, since you would need to manage a lot of things for it.
The second option is to always declare the queue before you publish, so that if the queue is not available, a new queue takes its place: all the nodes subscribed to it will be able to listen, and any producer node will be able to post to that queue. This option resolves the problems of a node going down or being unavailable. From the docs:
before sending we need to make sure the recipient queue exists. If we send a message to non-existing location, RabbitMQ will just drop the message. Let's create a hello queue to which the message will be delivered:
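With the RabbitMQ Java client, that looks roughly like the following sketch (host and queue name are placeholders; note queueDeclare is idempotent only if the arguments match the existing queue, and try-with-resources on Channel needs a recent client version):

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public final class DeclareBeforePublish {
    public static void main(final String[] args) throws Exception {
        final ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // example host
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Declaring first guarantees the queue exists on whichever node we hit.
            channel.queueDeclare("hello", true, false, false, null);
            channel.basicPublish("", "hello", null,
                    "payload".getBytes(StandardCharsets.UTF_8));
        }
    }
}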
RabbitMQ lets you import and export definitions. Definitions are JSON files which contain all broker objects (queues, exchanges, bindings, users, virtual hosts, permissions and parameters). They do not include the messages of queues.
You can export the definitions of the node that owns the queue and import them into the other node of the cluster periodically. You have to enable the management plugin for this task.
More information here: https://www.rabbitmq.com/management.html#configuration
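As a rough sketch of that periodic copy, using Java 11's HttpClient against the management HTTP API (node names and the guest credentials are placeholder assumptions; /api/definitions is the endpoint behind the management plugin's export/import):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public final class CopyDefinitions {
    public static void main(final String[] args) throws Exception {
        final String auth = "Basic "
                + Base64.getEncoder().encodeToString("guest:guest".getBytes());
        final HttpClient client = HttpClient.newHttpClient();
        // Export the definitions (queues, exchanges, bindings, ...) from node1.
        final HttpResponse<String> export = client.send(
                HttpRequest.newBuilder(URI.create("http://node1:15672/api/definitions"))
                        .header("Authorization", auth).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        // Import them into node2; messages are not included, only the topology.
        client.send(
                HttpRequest.newBuilder(URI.create("http://node2:15672/api/definitions"))
                        .header("Authorization", auth)
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(export.body())).build(),
                HttpResponse.BodyHandlers.discarding());
    }
}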
I'm using Celery 3.0.18 with RabbitMQ 3.0.2. I have a task sent to another application using celery.send_task. I can see the send_task call in my logs, I can see the packets leaving the worker instance, and I can see the packets reaching the RabbitMQ instance when I run tcpflow -ce -i any port 5672; however, only the first message gets to the queue. They all have the same routing key. I tried recreating the exchange and bindings, and even a new RabbitMQ instance, and nothing seems to work. This worked fine for months, until we had to rebuild RabbitMQ from scratch after a crash in our AWS infrastructure. Strangely, I have the exact same setup working in another application, using the same broker and the same exchange, binding, and queue, and it works perfectly there. It also works when I send the messages to the same exchange using the same call from a management script running in the shell on the same instance, but it does not work when sent from the Celery task in the worker process.
Any ideas on what the problem might be?
Eventually I figured out what was wrong, but it's not clear whether this is the expected behavior, a Celery bug, or a RabbitMQ bug.
What happens is that besides our application tasks, I have a custom logging handler used to send logs to a central location over RabbitMQ, using celery.send_task. This logging handler sends messages to an exchange named application.logger, with a routing key like application.logger.info, application.logger.warning, etc., and has bindings to route some logging levels to specific queues. This exchange, and its bindings and queues, were created directly in RabbitMQ and not defined in Celery routes.
When the worker tried to send a message to this exchange and it didn't exist, Celery logged a 404 NOT_FOUND error. After that, tasks sent to other exchanges using the same connection were not delivered. They were sent by the worker instance; we could see the packets arriving, and the RabbitMQ management screen for that connection even showed data arriving from the client in kB/s, but no messages were delivered.
I have a problem using Message Driven Beans in clustered GlassFish 3.1.1. The problem is with the queue in GlassFish: the queue is not synchronized between the instances. I will try my best to explain the scenario below.
I created two instances in a GlassFish cluster, a JMS QueueConnectionFactory, and a JMS queue, with their targets set to the cluster. Then I deployed the web application and the MessageDrivenBean module to the cluster. The web application sends a TextMessage to the JMS queue.
Everything works well here: the message is sent to the queue and served by the message-driven beans in both instances.
Then I disable the MessageDrivenBean module and request the web application, which sends messages to the JMS queue on both instances. Then I shut down myInstance2 and re-deploy the MDB in the cluster. Now here is the problem: the MessageDrivenBean only receives the messages of myInstance1 and not the messages sent to myInstance2's queue. The messages in myInstance2's queue are only served when myInstance2 is started. Can anyone help me with the settings GlassFish uses to synchronize the queue across both instances, so that when one instance is down and there are messages in its queue, the other instance will take the messages from that queue and serve them?
I am using OpenMQ and GlassFish 3.1.1, and I have turned on the HA (high availability) option in GlassFish, but it still does not work.
Thanks
The high-availability options for GlassFish and the high-availability options for Message Queue are configured separately. You need to configure your message queue cluster to be an "enhanced cluster" rather than a "conventional cluster". This is described in the GlassFish 3.1 High Availability Administration Guide.
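For reference, converting the Message Queue cluster is done with the asadmin configure-jms-cluster subcommand before the cluster is first started; a rough sketch (the JDBC options are placeholders, check the Administration Guide for the exact flags for your database):

asadmin configure-jms-cluster --clustertype=enhanced --configstoretype=jdbc --dbvendor=mysql --dbuser=admin --dburl=jdbc:mysql://dbhost:3306/imqdb myCluster

An enhanced cluster stores messages in the shared database, which is what lets a surviving instance take over the messages of a failed one.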
I need help with embedded ActiveMQ and the Spring Framework.
The problem:
I am using embedded ActiveMQ with the Spring Framework's JMS support.
As part of development I have a component which publishes messages to a virtual topic, and another component which subscribes to messages from the topic.
Now, the problem is that the subscribing application runs in a cluster environment, i.e. one master instance and one slave instance. Events I publish go to either the master instance or the slave instance, but I want messages to be consumed only by the master instance. Is there any way I can block the slave instance from subscribing to events?
We have a system property set to differentiate the master and slave instances. I have tried adding a condition by overriding the createConnection method of the ActiveMQConnectionFactory class:
@Override
public Connection createConnection() throws JMSException {
    if (isMaster) { // e.g. derived from the system property
        return super.createConnection();
    }
    return null; // slave: no connection is created
}
In this case, the Spring DefaultMessageListenerContainer that we configured to listen for events keeps trying to refresh its connection; since I am returning null for the slave, it fails to create a connection, and the thread goes into an infinite retry loop at a 5000 ms interval.
Is there any way I can tell the message listener to stop refreshing the connection?
Please help me resolve this issue.
If the cluster is master/slave, then only one instance should be active at a given time; it sounds like you have dual masters from an ActiveMQ client perspective.
Regardless, you can always use ActiveMQ security to control access to a given topic/queue based on connection credentials (see http://activemq.apache.org/security.html).
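Another option, if you control the Spring configuration, is to skip returning null from the factory and instead never start the listener container on the slave. A minimal sketch, assuming Spring's DefaultMessageListenerContainer and an isMaster flag derived from your system property (the destination name is hypothetical):

import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public final class ListenerConfig {
    // Register the returned container as a Spring bean so its lifecycle applies.
    public static DefaultMessageListenerContainer listenerContainer(
            final MessageListener listener, final boolean isMaster) {
        final DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(new ActiveMQConnectionFactory("vm://localhost"));
        container.setDestinationName("Consumer.app.VirtualTopic.events"); // hypothetical
        container.setMessageListener(listener);
        // Only the master consumes; the slave's container never starts, so
        // there is no connection-refresh loop. On failover, call start().
        container.setAutoStartup(isMaster);
        return container;
    }
}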