You can create network connectors to exchange messages between two brokers in one of two ways:
Edit conf/activemq.xml and add the network connectors inside <networkConnectors></networkConnectors>.
Use the JMX API to add them programmatically via a BrokerViewMBean.
A network connector created via JMX is not persistent, i.e. after a broker restart it is gone. Is this normal? Is there a way to create network connectors via JMX that survive a broker restart?
The connections created via JMX are temporary and are not written into the ActiveMQ configuration. You can view them as operational connections, useful for testing connectivity or solving a messaging problem, but for a permanent solution you need to edit the ActiveMQ configuration file and add the desired connections there, so that they are recreated on each start.
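For reference, here is a minimal sketch of the JMX approach in Java. The JMX port (1099), the broker name ("localhost"), and the remote broker URL are assumptions; adjust them to your installation.

    import javax.management.JMX;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    import org.apache.activemq.broker.jmx.BrokerViewMBean;

    public class AddNetworkConnector {
        public static void main(String[] args) throws Exception {
            // assumed JMX endpoint and broker name; adjust to your installation
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                ObjectName brokerName = new ObjectName(
                        "org.apache.activemq:type=Broker,brokerName=localhost");
                BrokerViewMBean broker =
                        JMX.newMBeanProxy(mbsc, brokerName, BrokerViewMBean.class);
                // creates a network connector to the remote broker;
                // as noted above, it is gone after a broker restart
                broker.addNetworkConnector("static:(tcp://remotehost:61616)");
            }
        }
    }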
Related
I need your help suggesting how best I can achieve load balancing using the diagram below. Here I am trying to create two machines, each running a master, and I expect the consumer/publisher application to use one common (load-balanced) URL, so that I don't expose the individual VM machine info and port. The load balancer should take care of the routing.
This is typically what we do with the help of an F5 or HTTP load balancer. I'm just wondering whether it can be achieved with ActiveMQ, and whether it's advisable.
On the other side, I also tried configuring WebLogic this way to consume data from an ActiveMQ queue:
failover://(tcp://localhost:61616,tcp://localhost:61617)?randomize=true but this does not help; WebLogic does not seem to understand this format.
Messaging connections are stateful, not stateless like HTTP connections, and therefore cannot be load-balanced in the same way. It may be possible to configure an F5 to deal with stateful messaging connections, but I can't say for sure; I'm not an expert on F5.
Both the ActiveMQ Artemis broker itself and the JMS client shipped with the broker have load-balancing functionality built in. There's too much to cover here, so I recommend you review the clustering documentation for the relevant details.
You might also try the broker balancer feature. It's currently experimental, but it should be ready to use in the 2.21.0 release coming in the March/April time-frame. It can act like an F5 for your messaging connections, but it can also do more intelligent things, like always sending certain clients to the same node, which enables use cases that are not possible in a traditional cluster.
The URL failover://(tcp://localhost:61616,tcp://localhost:61617)?randomize=true which you are using is for the OpenWire JMS client shipped with ActiveMQ 5.x. If you're using the core JMS client shipped with ActiveMQ Artemis, then you should use a URL like this instead:
(tcp://localhost:61616,tcp://localhost:61617)?ha=true
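For what it's worth, a minimal sketch of using that URL with the Artemis core JMS client; the queue name "exampleQueue" and the broker addresses are assumptions.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;

    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class ArtemisHaClient {
        public static void main(String[] args) throws Exception {
            // ha=true lets the client fail over between the listed brokers
            ConnectionFactory cf = new ActiveMQConnectionFactory(
                    "(tcp://localhost:61616,tcp://localhost:61617)?ha=true");
            try (Connection connection = cf.createConnection()) {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("exampleQueue"); // assumed queue name
                MessageConsumer consumer = session.createConsumer(queue);
                System.out.println(consumer.receive(5000));
            }
        }
    }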
I have a WebLogic cluster with 4 nodes (managed servers). Today I found that two of them were down, and I was surprised to find that some JMS messages were not sent.
Is this the normal behaviour? Shouldn't the cluster continue to deliver JMS messages using the two available nodes?
In order to achieve high availability for JMS you should configure two things:
Migratable targets.
Persistence based either on shared storage or a database.
Why migratable targets? Because messages produced by, say, JMSServer01 can only be processed by JMSServer01. When you configure migratable targets, JMSServer01 will be migrated automatically to another WebLogic server.
Why persistence based on shared storage or a database? Because once the JMS server is migrated to another server, it will try to process the messages, which must therefore live in a shared store or database that is visible to all your WebLogic servers.
You can find more information here: https://docs.oracle.com/middleware/1213/core/ASHIA/jmsjta.htm#ASHIA4396
Is it possible to identify the "master" broker in an ActiveMQ/Fuse AMQ master/slave configuration, using JMX, or perhaps a different mechanism? We're creating a dashboard and want to show visually which server is actively handling traffic.
Figured it out :-)
ObjectName: org.apache.activemq:type=Broker,brokerName=amq
AttributeName: Slave
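A minimal sketch of reading that attribute over JMX; the broker name "amq" comes from the ObjectName above, while the host and JMX port 1099 are assumptions.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class WhoIsMaster {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                ObjectName broker = new ObjectName(
                        "org.apache.activemq:type=Broker,brokerName=amq");
                // Slave == false means this broker holds the lock and is the active master
                boolean slave = (Boolean) mbsc.getAttribute(broker, "Slave");
                System.out.println(slave ? "slave" : "master");
            }
        }
    }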
I have two nodes configured in "Shared File System Master Slave" mode.
JMX shows both nodes as Slave=false.
So I checked the log file to see who holds the lock on the shared file system:
Database /dir1/activemq-db/lock is locked by another server. This broker is now in slave mode waiting a lock to be acquired | org.apache.activemq.store.SharedFileLocker | main
How do I use "activemq-admin" to view the list of queues and the number of messages in each queue?
I read through the tutorial at http://activemq.apache.org/activemq-command-line-tools-reference.html
but didn't find a working solution.
Also, the web console on the slave machine does not work; the web console always seems to go to the master machine (in the master/slave structure).
I just want to test that if I send messages into queues on the master, the slave gets updated,
so I am trying to use activemq-admin.
The way it works is that the slave waits to acquire a lock on the DB (KahaDB by default), so you will not be able to inspect the slave. Bring down the master, and the slave will become the master broker; then you should be able to see all the queues and the messages dropped into them (assuming you are using persistence).
You can use JMX, the web console, or do it programmatically, as you can find here. The easiest solution, I think, is to use the web console, like here.
I can't understand why the web console isn't accessible. Check your ActiveMQ config XML.
Also you can connect via JMX like:
service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
Go here for more information.
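As an illustration of the JMX route, here is a minimal sketch that lists every queue on the broker and its message count. The default broker name "localhost" and JMX port 1099 are assumptions; adjust both to your setup.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class ListQueues {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                // wildcard pattern matching every queue on the broker
                ObjectName pattern = new ObjectName(
                        "org.apache.activemq:type=Broker,brokerName=localhost,"
                        + "destinationType=Queue,destinationName=*");
                for (ObjectName queue : mbsc.queryNames(pattern, null)) {
                    long depth = (Long) mbsc.getAttribute(queue, "QueueSize");
                    System.out.println(queue.getKeyProperty("destinationName")
                            + " -> " + depth);
                }
            }
        }
    }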
I'm working on a project where several WAR files inside a Tomcat 7 instance have to communicate with a single embedded ActiveMQ (5.5.1) broker inside the same Tomcat.
I'm wondering what the best practice is to manage this, and how to start and stop the broker properly.
Currently I try to use a global JNDI entry in server.xml, and each WAR gets its ActiveMQ connection with a lookup. The first connection to the broker implicitly starts it. But with this method I run into various problems, like the instance already existing or locks in the data store.
Should I instead use an additional WAR which uses a BrokerFactory to start the broker explicitly? In this case, how do I make sure that this WAR executes first in Tomcat? And how and where do I stop my broker?
Thanks for the help.
from the docs...
If you are using the VM transport and wish to explicitly configure an embedded broker, there is a chance that you could create the JMS connections before the broker starts up. Currently ActiveMQ will auto-create a broker if you use the VM transport and there is not one already configured. (In 5.2 it is possible to use the waitForStart and create=false options for the connection URI.)
So to work around this, if you are using Spring you may wish to use the depends-on attribute so that your JMS ConnectionFactory depends on the embedded broker, to avoid this happening.
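For the non-Spring case, here is a minimal programmatic sketch of the same idea: start the broker explicitly, then connect over the VM transport with create=false so clients never auto-create a competing instance. The broker name "embedded" and the 5-second waitForStart timeout are assumptions.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.broker.BrokerService;

    public class EmbeddedBroker {
        public static void main(String[] args) throws Exception {
            // start the embedded broker explicitly, before any client connects
            BrokerService broker = new BrokerService();
            broker.setBrokerName("embedded");
            broker.setPersistent(false);
            broker.start();

            // create=false prevents auto-creation of a second broker;
            // waitForStart blocks (up to 5s) until the named broker is up
            ConnectionFactory cf = new ActiveMQConnectionFactory(
                    "vm://embedded?create=false&waitForStart=5000");
            Connection connection = cf.createConnection();
            connection.start();
            // ... use the connection ...
            connection.close();
            broker.stop();
        }
    }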
see these pages for more information...
http://activemq.apache.org/vm-transport-reference.html
http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
http://activemq.apache.org/how-do-i-restart-embedded-broker.html