I need help with embedded ActiveMQ and the Spring Framework.
The problem is:
I am using embedded ActiveMQ with Spring Framework JMS support.
As part of development I have a component that publishes messages to a Virtual Topic, and I have another component that subscribes to messages from that topic.
The problem is that the subscribing application runs in a clustered environment, i.e. one master instance and one slave instance. Events I publish go to either the master instance or the slave instance, but I want messages to be consumed only by the master instance. Is there any way I can block the slave instance from subscribing to events?
We have a system property set to differentiate the master and slave instances. I have tried adding a condition by overriding the createConnection method of the ActiveMQConnectionFactory class:
if (master) {
    return super.createConnection();
} else {
    return null; // slave: do not create a connection
}
In this case, the Spring DefaultMessageListenerContainer that we configured to listen for events keeps trying to refresh its connection. Since I am returning null on the slave, it fails to create a connection and the recovery thread loops indefinitely at a 5000 ms interval.
Is there any way I can tell the message listener to stop refreshing the connection?
Please help me resolve this issue.
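For reference, the listener is wired up roughly like this (a simplified sketch; the destination and bean names are illustrative):

    import javax.jms.MessageListener;
    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    @Configuration
    public class JmsListenerConfig {

        @Bean
        public ActiveMQConnectionFactory connectionFactory() {
            // In the real code this is the subclass that overrides createConnection().
            return new ActiveMQConnectionFactory("vm://localhost");
        }

        @Bean
        public DefaultMessageListenerContainer listenerContainer(MessageListener eventListener) {
            DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
            container.setConnectionFactory(connectionFactory());
            // Consumer queue for the virtual topic (illustrative name).
            container.setDestinationName("Consumer.myApp.VirtualTopic.events");
            container.setMessageListener(eventListener);
            container.setRecoveryInterval(5000); // the 5000 ms refresh interval mentioned above
            return container;
        }
    }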
If the cluster is master/slave, then only one instance should be active at a given time; it sounds like you have dual masters from an AMQ client perspective.
Regardless, you can always use ActiveMQ security to control access to a given topic/queue based on connection credentials (see http://activemq.apache.org/security.html).
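For example, a rough sketch of picking connection credentials per role (the account names, passwords, and system property are assumptions), so that broker-side authorization configured as described on that page can deny consumption to the slave's account:

    import javax.jms.ConnectionFactory;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class RoleAwareConnectionFactory {

        // Reuse the existing master/slave system property to pick credentials; the broker's
        // authorization map would then grant "read" on the virtual topic's consumer queue
        // only to the master account.
        public static ConnectionFactory create() {
            boolean master = Boolean.getBoolean("app.instance.master"); // illustrative property name
            String user = master ? "masterConsumer" : "slaveProducerOnly"; // illustrative accounts
            String password = master ? "masterSecret" : "slaveSecret";     // illustrative passwords
            return new ActiveMQConnectionFactory(user, password, "tcp://localhost:61616");
        }
    }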
I am new to RabbitMQ and I am working on an application that will receive information from many devices and route all messages into a couple of queues depending on the MQTT topic. I was able to get all of this working easily, but now I am looking into how to push a message to a queue when a client connects or disconnects from RabbitMQ in order to update the current status of the client in my database. Is there a way to do this?
Event Exchange Plugin
Client connections, channels, queues, consumers, and other parts of the system naturally generate events. For example, when a connection is accepted, authenticated and access to the target virtual host is authorised, it will emit an event of type connection_created. When a connection is closed or fails for any reason, a connection_closed event is emitted.
Unfortunately, the exchange provided by rabbitmq_event_exchange is created after the bindings are imported from definition.json, which means that amq.rabbitmq.event cannot be bound to a queue via the configuration and must be bound after startup.
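As an illustration, a minimal sketch (queue name and connection settings are assumptions) that binds a queue to amq.rabbitmq.event after the broker is up and logs connect/disconnect events with the RabbitMQ Java client:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class ConnectionEventListener {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost"); // assumes a local broker with rabbitmq_event_exchange enabled
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();

            // Bind our own queue to the topic exchange provided by the plugin.
            String queue = channel.queueDeclare("client-connection-events", true, false, false, null).getQueue();
            channel.queueBind(queue, "amq.rabbitmq.event", "connection.created");
            channel.queueBind(queue, "amq.rabbitmq.event", "connection.closed");

            DeliverCallback onEvent = (consumerTag, delivery) -> {
                // The routing key says whether a client connected or disconnected;
                // the connection details are carried in the message headers.
                System.out.println(delivery.getEnvelope().getRoutingKey()
                        + " " + delivery.getProperties().getHeaders());
            };
            channel.basicConsume(queue, true, onEvent, consumerTag -> { });
        }
    }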
I have a RabbitMQ broker running on two nodes as a cluster. I have observed that if the node where a queue was created goes down, the queue is no longer available on the other node. If I try to publish a message from the other node it fails. Even if I remove the failed node from the cluster (using the forget_cluster_node command) and try to publish a message from the other node, the behavior is the same.
I don't want to enable mirroring of the queue, for the simple reason that it would replicate the messages, which would be additional load on the inter-node network.
Is there a way available in RabbitMQ to achieve this?
The behaviour you are experiencing is the default behaviour of RabbitMQ, and it is exactly what is supposed to happen. The node where you created the queue owns that queue, and if this node goes down then any connections to it, and any queues or exchanges hosted on it, will not work at all. There are two options to resolve this issue.
One option is to have a separate queue for every node; any node that wants to receive messages from a particular node can subscribe to that queue's exchange. This does not seem like a very good idea, since you would need to manage a lot of things for it.
The second option is to always declare the queue before you publish, so that if the queue is not available a new one takes its place; all subscribing nodes can then listen, and any producer node can post to that queue. This resolves the problem of a node going down or being unavailable. From the docs:
before sending we need to make sure the recipient queue exists. If we send a message to non-existing location, RabbitMQ will just drop the message. Let's create a hello queue to which the message will be delivered:
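A minimal sketch of that pattern with the RabbitMQ Java client (queue name, message, and host follow the tutorial and are otherwise assumptions):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class Send {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            try (Connection connection = factory.newConnection();
                 Channel channel = connection.createChannel()) {
                // queueDeclare is idempotent: the queue is created only if it does not
                // exist yet, so the publish that follows never targets a missing queue.
                channel.queueDeclare("hello", false, false, false, null);
                channel.basicPublish("", "hello", null, "Hello World!".getBytes("UTF-8"));
            }
        }
    }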
RabbitMQ lets you import and export definitions. Definitions are JSON files which contain all broker objects (queues, exchanges, bindings, users, virtual hosts, permissions and parameters). They do not include the messages in queues.
You can export the definitions of the node that owns the queue and periodically import them into the other node of the cluster. You have to enable the management plugin for this task.
More information here: https://www.rabbitmq.com/management.html#configuration
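For example, a rough sketch of pulling the definitions over the management HTTP API with Java's built-in HTTP client (host, port, credentials, and file name are assumptions; rabbitmqadmin or the web UI work just as well):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Base64;

    public class ExportDefinitions {
        public static void main(String[] args) throws Exception {
            String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes()); // default credentials
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:15672/api/definitions")) // management plugin endpoint
                    .header("Authorization", "Basic " + auth)
                    .GET()
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // Save the JSON so it can later be POSTed to /api/definitions on the other node.
            Files.writeString(Path.of("definitions.json"), response.body());
        }
    }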
I need to start a queue in OpenEJB in a "paused" state so no messages are processed by the consumer until some related data is available. I can programmatically pause the queue as shown here, so if there was some initializer function that is called when a queue is created I could use that method. The queue configuration documentation does not seem to support setting the paused state. Any ideas on how to configure the queue upon creation?
If you read the thread you linked, you will see that a queue cannot be paused, but a broker can be.
In TomEE the broker is created from a factory using an SPI (loaded from the TomEE classloader, so tomee/lib by default), so you can write your own factory if that's an option, and start the broker programmatically when you are ready.
Now, I suspect you don't want to start the connectors with the container, but it is not an issue to start the broker itself. Said otherwise: you don't want to be connected to any other machine through JMS, so that you don't receive anything, but it is fine if JMS is started and deployed.
In that case you can simply not configure any connector on the broker and add them when ready. You can find the brokers by doing:
new org.apache.openejb.resource.activemq.ActiveMQ5Factory().getBrokers()
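For instance, once the related data is available you can attach a connector to the running embedded broker; a rough sketch (the connector URI is illustrative, and how you obtain the BrokerService from the factory depends on your TomEE/OpenEJB version):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.TransportConnector;

    public class BrokerOpener {

        // Call this once the application is ready for messages to flow.
        public static void openToClients(BrokerService broker) throws Exception {
            TransportConnector connector = broker.addConnector("tcp://0.0.0.0:61616"); // illustrative URI
            connector.start();
        }
    }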
How do I use "activemq-admin" to view the list of queues and the number of messages in each queue?
I read through the tutorial: http://activemq.apache.org/activemq-command-line-tools-reference.html
but didn't find a working solution.
Also, the web console on the slave machine does not work; the web console always seems to follow the master machine (in the master/slave setup).
I just want to verify that if I send messages to queues on the master, the slave is updated.
So I am trying to use activemq-admin.
The way it works is that the slave is waiting to acquire a lock on the DB (KahaDB by default), so you will not be able to inspect the slave. Bring down the master and the slave will become the master broker; you should then be able to see all the queues and the messages dropped into them (assuming you are using persistence).
You can use JMX, the web console, or do it programmatically, as you can find here. The easiest solution, I think, is to use the web console, as shown here.
I can't understand why the web console isn't accessible; check your ActiveMQ config XML.
You can also connect via JMX, with a service URL like:
service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
Go here for more information.
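As an example, a small sketch that connects over that JMX URL and prints each queue with its pending message count (the broker name and ObjectName pattern vary between ActiveMQ versions, so treat them as assumptions):

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class QueueDepths {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
                // ActiveMQ 5.8+ object names; older versions use BrokerName=/Type= keys instead.
                ObjectName pattern = new ObjectName(
                        "org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Queue,destinationName=*");
                Set<ObjectName> queues = mbs.queryNames(pattern, null);
                for (ObjectName queue : queues) {
                    Object pending = mbs.getAttribute(queue, "QueueSize"); // messages waiting in the queue
                    System.out.println(queue.getKeyProperty("destinationName") + " -> " + pending);
                }
            } finally {
                jmxc.close();
            }
        }
    }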
I am using Glassfish 3.1.2, and I set up a cluster with one node and two instances.
I have a message-driven bean in my application that subscribes to a topic, and I deployed it to the cluster.
When I publish a message to the topic I want both instances to receive the message.
However, in practice I am finding that only one instance receives the message.
I believe I am running into a feature called "shared subscriptions":
http://docs.oracle.com/cd/E18930_01/html/821-2438/gjzpg.html#MQAGgjzpg
The feature (which is enabled by default) says that beans in the cluster with the same client id are shared, and are effectively only one subscription.
It says that by default the client id of an MDB is its name, which means that both my instances are using the same client id.
So other than completely disabling this feature, I would like to know if it is possible to setup an MDB so that each instance subscribes with a different client ID? This seems a bit tricky since both instances are using the same WAR file. I think you can set the client ID in an annotation, but I'm not sure if that can be changed at runtime...
I'm not sure why you would completely disable this feature. In the link you provided, it states clearly that you configure this per ActivationSpec/MDB. So as far as I understand it, it would affect only the MDB you have at hand.
For an MDB, set the ActivationSpec property useSharedSubscriptionInClusteredContainer to false. Do this in exactly the same way as with other ActivationSpec properties, using annotations in the MDB itself or in the deployment descriptor ejb-jar.xml or glassfish-ejb-jar.xml.
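For illustration, setting that property via annotations might look like this (the destination and class names are made up):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    @MessageDriven(mappedName = "jms/MyTopic", activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
            // Give each clustered instance its own (non-shared) subscription:
            @ActivationConfigProperty(propertyName = "useSharedSubscriptionInClusteredContainer",
                                      propertyValue = "false")
    })
    public class MyTopicListener implements MessageListener {
        @Override
        public void onMessage(Message message) {
            // With shared subscriptions disabled, every instance receives the message.
        }
    }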
But you can of course set the client ID on a connection dynamically at runtime. Please note that you would probably have to handle the JMS connection yourself a bit more, rather than relying on the features managed by the container.
http://docs.oracle.com/javaee/6/api/javax/jms/Connection.html#setClientID(java.lang.String)
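For completeness, a bare-bones sketch of a hand-managed subscription with a per-instance client ID (the JNDI names and the instance-name property are assumptions):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Session;
    import javax.jms.Topic;
    import javax.jms.TopicSubscriber;
    import javax.naming.InitialContext;

    public class PerInstanceSubscriber {

        public void subscribe() throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory"); // illustrative
            Topic topic = (Topic) ctx.lookup("jms/MyTopic");                                  // illustrative

            Connection connection = cf.createConnection();
            // Must be called before the connection is otherwise used; derive the ID from the
            // GlassFish instance name so every cluster instance gets a distinct client ID.
            connection.setClientID("myApp-" + System.getProperty("com.sun.aas.instanceName"));

            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            TopicSubscriber subscriber = session.createDurableSubscriber(topic, "myApp-durable-sub");
            subscriber.setMessageListener(message -> System.out.println("received " + message));
            connection.start();
        }
    }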