Get number of connections from all hosts to my ActiveMQ broker - apache

ActiveMQ broker setup:
Broker is running on machine: hostA
Clients from different hosts can connect to my broker instance running on hostA; there can be any number of clients from any host.
Is there a way to find out how many clients are connected to the broker, and also to get a list that tells me how many connections there are from each host?
I want to do this without making assumptions about the number of hosts.
I could do this by using the lsof command and some parsing of its output, but I am in a situation where I cannot use it.
Is there any feature for this provided by the ActiveMQ command-line utility activemq-admin?

You can get to pretty much any MBean attribute ActiveMQ exposes via activemq-admin. There are no attributes or operations that give you a quick count of connections from specific clients, so you will have to do some work on your end to get all the details you want, but all the raw data is there.
Examples:
Broker Stats:
activemq-admin query --objname type=Broker,brokerName=localhost
Connection Stats:
activemq-admin query --objname type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=<transport connector name>,connectionViewType=clientId,connectionName=*
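To break connections down by host rather than client ID, you can query the remoteAddress view of the same connector and tally the results yourself; a sketch, assuming a transport connector named openwire (substitute your own connector name):
activemq-admin query --objname type=Broker,brokerName=localhost,connector=clientConnectors,connectorName=openwire,connectionViewType=remoteAddress,connectionName=*
Each matching MBean name embeds the client's remote address, so counting connections per host is a matter of grouping on that field.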
See full doc here.
NOTE: The documentation as of this writing has not been updated to take into account the MBean changes made in AMQ, so references to object names in the examples are not correct.
You can get the object name (or example syntax) from JMX (using JConsole or VisualVM, for example) from the MBeanInfo. Each object name will start with something like org.apache.activemq:type. For the script, remove the "org.apache.activemq:" prefix and you should be in business for anything you need from JMX via the script.
I think you may also want to look into using Jolokia with your broker. Although it is not compatible with the activemq-admin script, you can reach everything you can reach from activemq-admin, and you also have access to all of the MBean operations. In the past I've heavily used the activemq-admin script for local monitoring and command-line administration of the broker, but I have started converting everything to hit the Jolokia service. But again, activemq-admin will give you a way to access what you are looking for here.
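For example, with the embedded web console running, a total-connection count is one HTTP call away via Jolokia's read endpoint; a sketch, assuming the default console port 8161, the default admin/admin credentials, and the broker's CurrentConnectionsCount attribute:
curl -u admin:admin "http://localhost:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost/CurrentConnectionsCount"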

Related

Using a web service to drop message onto an ActiveMQ Queue fails on failover

I have two ActiveMQ (5.6.0) brokers. They use a shared Kaha database, so only one can be 'running' at once.
I have an (ASP.NET) web service that puts a message on a queue. Locally, if I start and stop the brokers, the web service fails over correctly,
but when I test with the brokers on separate machines it sometimes works, yet often I get "SocketException: Connection reset" errors and the message is lost.
The connection string I am using is below. Note that I am aware NMS does not understand the priority backup command but I have left it there for the future.
failover:(tcp://MACHINE1:61616,tcp://MACHINE2:62616)?transport.initialReconnectDelay=1000&transport.timeout=10000&randomize=false&priorityBackup=true
How can I make my failover between brokers foolproof?
The shared Kaha database was on a simple network share. Currently ActiveMQ (or Windows) cannot reliably acquire or release the file lock in this configuration. The shared database must sit on a 'real' SAN so that both instances of the queue software see the database as being on a local filestore, not a network location.
See this page for more info http://activemq.apache.org/shared-file-system-master-slave.html
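For reference, the shared-store setup itself is just both brokers pointing their persistence adapter at the same directory; a minimal activemq.xml sketch, assuming a hypothetical SAN mount at /san/activemq:
<persistenceAdapter>
    <kahaDB directory="/san/activemq/kahadb"/>
</persistenceAdapter>
Whichever broker acquires the file lock in that directory becomes the master; the other blocks until the lock is released.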

Can't read from remote transactional private queue using WCF in workgroup mode (can do using System.Messaging !)

I have spent days reading MSDN, forums and article about this, and cannot find a solution to my problem.
As a PoC, I need to consume a queue from more than one machine since I need fault tolerance on the consumer side. Performance is not an issue since fewer than 100 messages a day should be exchanged.
I have coded two trivial console applications, one as client, the other as server, using Framework 4.0 (also tested on 3.5). Messages use transactions.
Everything runs fine on a single machine (Windows 7), even when running multiple consumer application instances.
Now I have 2012 and 2008 R2 virtual test servers running in the same domain (but I don't want to use AD integration anyway). I am using an IP address or "." in the endpoint address attribute to prevent DNS / AD resolution side effects.
Everything works fine IF the queue is hosted by the consumer and the producer is submitting messages to the remote private queue. This is also true if I exchange the consumer / producer roles of the 2012 and 2008 servers.
But I have NEVER been able to make this run, using WCF, when the consumer is reading from a remote queue and the producer is submitting messages locally. Submission never fails; my problem is on the consumer side.
My wish is to make this run using netMsmqBinding, but I also tried using msmqIntegrationBinding. For each test, I adapted the code and configuration, then confirmed it was running OK when the consumer was consuming from the local queue.
The last test I have done uses WCF (msmqIntegrationBinding) only on the producer (local queue) and System.Messaging.MessageQueue on the consumer (remote queue): it works fine! My goal is to do the same using WCF and netMsmqBinding on both sides.
From my point of view, I have proved this problem is a WCF issue, not an MSMQ one. This has nothing to do with security, authentication, firewall, transport, protocol, MSMQ version, etc.
Errors info using MS Service Trace Viewer :
Using msmqIntegrationBinding, when receiving the message (opening the queue was OK): An error occurred while receiving a message from the queue: The transaction specified cannot be imported. (-1072824242, 0xc00e004e). Ensure that MSMQ is installed and running. Make sure the queue is available to receive from.
Using netMsmqBinding, on opening the queue: An error occurred when converting the '172.22.1.9\private$\Test' queue path name to the format name: The queue path name specified is invalid. (-1072824300, 0xc00e0014). All operations on the queued channel failed. Ensure that the queue address is valid. MSMQ must be installed with Active Directory integration enabled and access to it is available.
If someone can help me find why my configuration cannot be handled by WCF, which is a much more elegant and configurable approach than System.Messaging, I would greatly appreciate it!
Thank you.
You may need to post your consumer code and config to give more of an idea, but it could be the construction of the queue name, e.g.
FormatName:DIRECT=TCP:192.168.0.2\SomeQueue
There are several different ways to connect to a queue, and the syntax changes depending on whether you are remote or local as well.
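In particular, netMsmqBinding does not take MSMQ format names at all; its endpoint addresses use the net.msmq scheme, e.g. (hypothetical host and queue name):
net.msmq://192.168.0.2/private/Test
That would be consistent with the 0xc00e0014 "queue path name specified is invalid" error above, which suggests a path-style name was supplied where a net.msmq URI was expected.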
I have found this article in the past to help:
http://blogs.msdn.com/b/johnbreakwell/archive/2009/02/26/difference-between-path-name-and-format-name-when-accessing-msmq-queues.aspx
Also, MessageQueue Constructor on MSDN...
http://msdn.microsoft.com/en-us/library/ch1d814t.aspx

Understanding Apache ActiveMQ

I am confused about the function of Apache ActiveMQ.
I downloaded ActiveMQ from this link.
So I use it this way (environment: Windows 7): I start bin/activemq.bat, and it works.
My question is: Does this mean I start a server on my machine? When I initialize the ActiveMQConnectionFactory, the broker URL is tcp://localhost:61616. But what if I want my machine to serve as a server and another machine to connect to my server?
Yes, you can use the primary box as a server and have consumers/subscribers running on other boxes. They will need to connect to the server; you will need to specify the server hostname & port for the connection to be established. Once that is in place, the messages on the server (topic or queue) can be consumed by the clients.
If you have one producer and one consumer, you can look into using queues; if you have more than one consumer/subscriber, you can look into setting up a topic to which the consumers will subscribe. Messages are inserted into the topic/queue as needed.
You can specify the server information in your code, or preferably in the config file, as in the sketch below.
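As an illustration, here is a minimal JMS consumer sketch that connects to a broker on another machine; hostA and TEST.QUEUE are placeholder names:

import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class RemoteConsumer {
    public static void main(String[] args) throws Exception {
        // Point the factory at the broker's host instead of localhost
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://hostA:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = session.createQueue("TEST.QUEUE");
        MessageConsumer consumer = session.createConsumer(queue);
        Message message = consumer.receive(5000); // wait up to 5 seconds for a message
        if (message instanceof TextMessage) {
            System.out.println("Received: " + ((TextMessage) message).getText());
        }
        connection.close();
    }
}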
For reference to topologies:
http://activemq.apache.org/topologies.html
Also, you can choose whether or not to persist your messages based on your use case. KahaDB is the preferred route (especially if performance is a concern).
Useful examples:
http://sujitpal.blogspot.com/2007/12/jms-patterns-with-activemq.html
http://vvratha.blogspot.com/2012/05/java-client-to-sendreceive-messages-for.html
Hope it helps.
Apache ActiveMQ is the most popular and powerful open source messaging and Integration Patterns server, and it acts as a standalone third-party server.
Apache ActiveMQ is fast, supports many Cross Language Clients and Protocols, comes with easy to use Enterprise Integration Patterns and many advanced features while fully supporting JMS 1.1 and J2EE 1.4. Apache ActiveMQ is released under the Apache 2.0 License.
ActiveMQ has the capability to handle a single message of 100 MB and to maintain 1000 concurrent connections simultaneously; for further information you can check activemq.xml in your installation's documentation.
Further info about ActiveMQ is available here.

MQ With WLS Foreign Server

I am facing two issues when I try to connect from WebLogic Server (WLS), by creating a Foreign Server, to MQ which is deployed on a remote server.
1. When I try to connect to the MQ queue manager in Bindings mode (after importing the .bindings file), I keep getting the below error in the WLS console:
java.lang.UnsatisfiedLinkError: no mqjbnd05 in java.library.path
2. If I switch the transport to Client, I keep getting:
JMSWMQ0018: Failed to connect to queue manager '' with connection mode 'Client' and host name 'localhost'. Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.
Has anyone seen this, and are there any performance implications which dictate the use of client over bindings and vice versa?
TIA
Finally I was able to resolve this. I had to recreate the .bindings file in client mode, with changes to IVTsetup.bat, which is most likely present in
C:\Program Files\IBM\WebSphere MQ\java\bin. I had to run this:
def qcf(psQCF) TRANSPORT(CLIENT) HOST(SMEKA) PORT(1415) CHANNEL(ps_SRV_CHANNEL) QMGR(psQM)
to generate the .bindings file.
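For context, that def qcf definition is typically entered at the prompt of the JMSAdmin tool shipped in the same directory; a sketch, assuming the default JMSAdmin.config with a file-based initial context:
cd "C:\Program Files\IBM\WebSphere MQ\java\bin"
JMSAdmin.bat -cfg JMSAdmin.config
InitCtx> def qcf(psQCF) TRANSPORT(CLIENT) HOST(SMEKA) PORT(1415) CHANNEL(ps_SRV_CHANNEL) QMGR(psQM)
The .bindings file is then written to the directory named by PROVIDER_URL in that config file.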
Refer to this link for more details:
http://publib.boulder.ibm.com/infocenter/wbihelp/v6rxmx/index.jsp?topic=/com.ibm.wbia_adapters.doc/doc/peoplesoft/peopleso103.htm
Where the question states "I try to connect to MQ which is deployed on a Remote Server from Weblogic Server", I assume this means that WLS and WMQ are on two different hosts. If that is the case, then a bindings mode connection (which relies on shared memory segments) won't work.
The client mode connection appears to be using a CF that is pointed to localhost rather than the IP or hostname of the WMQ server. This would work for an application on the same host as the queue manager but not when the app and QMgr are on separate servers.
As far as choosing between client and bindings mode, the answer is that if the QMgr is local use bindings. This provides highest reliability, best performance and XA transactionality. When using client mode, two-phase XA commit is not supported without the Extended Transactional Client. Per the JMS specification, there is an ambiguity that can exist if an app loses the connection during a COMMIT call. Depending on how the app handles this it's possible to end up with duplicate messages. (The JMS spec refers to these as "functionally duplicate.") This ambiguity is much less likely to occur with a bindings mode connection since there is no network latency and not even any traversal of the IP stack or interface. So use bindings mode where possible.
UPDATE:
Removed note about Extended Transactional Client being a chargeable component. As of April 24th, XTC is free of charge for all versions of WMQ on all platforms.

Why are my WebLogic clustered MDB app deployments in warning state?

I have a WebLogic cluster on which I've deployed numerous topics and applications that use them. My applications uniformly show themselves in a Warning status. Looking at Monitoring on the deployment, I see the MDB application connects to Server #1, but on server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS server is targeted to a migratable target, which is in turn targeted to the #1 server and has a cluster identified. And messages sent to either server all flow as expected. I just don't know why these deployments show a Warning state.
WebLogic 11g
This can be avoided by using the parameter below:
<start-mdbs-with-application>false</start-mdbs-with-application>
In weblogic-application.xml, setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot-up process.
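For completeness, the element sits under the ejb section of the descriptor; a minimal sketch of META-INF/weblogic-application.xml, assuming the 11g schema namespace:
<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
    <ejb>
        <start-mdbs-with-application>false</start-mdbs-with-application>
    </ejb>
</weblogic-application>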
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
Refer :http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html
This is applicable for versions up to 12c and later.
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while) the JNDI system throws errors and the apps are truly in warning state. Once the JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.