What happened to HL7 proxy statistics in WSO2 ESB 4.8.1? - wso2-esb

In WSO2 ESB 4.7.0, Request Count would always increment for my auto-acknowledging HL7 proxy services every time a new message came in. After upgrading to 4.8.1, Request Count is always 0. I have verified through my downstream dependencies that messages are indeed coming through.
What happened to proxy level statistics in 4.8.1, and what do I need to do to re-enable them?

Thanks for pointing out this bug - there appear to be some changes in the Carbon statistics modules between 4.1.0 and 4.2.0. I've created a JIRA to track the issue at https://wso2.org/jira/browse/ESBJAVA-2995

Related

How to use NServiceBus with MSMQ

I am experimenting with the new version of NServiceBus. I found the following step-by-step sample on the Particular site:
https://docs.particular.net/samples/step-by-step/
Can anyone tell me how to configure MSMQ as the transport? Here is my scenario.
The client creates a message
The client's message should be stored in MSMQ
A server application running on the same machine subscribes to the message
The server handler gets the message from MSMQ and processes it further, i.e. stores it in a DB or sends it to another web service
Retry processing the message if it did not work the first time
After 3 retries, send the message to the error queue
How do I configure this sample to use MSMQ for my scenario?
Product name: NServiceBus.Core
Version: 6.3.4
Did you know that we have released a LearningTransport and LearningPersistence just for purposes like these? Have a look at them here.
Having said that, transport swapping should be rather seamless, so even if you have set up a small PoC using this transport/persistence, you can change it to MSMQ or another production-ready transport/persistence when you go live.
Again, as stated on the documentation page and as the name suggests, this is not for use in production.
I would recommend you walk through this tutorial:
https://docs.particular.net/tutorials/intro-to-nservicebus/
It will answer your questions, and future ones as well.

Does Apache Apollo have failover support?

I'm looking to use a message queue system for an ongoing project, which currently relies on a custom (and brittle) message subsystem to interconnect multiple applications. Both the pub/sub and queue patterns are heavily used in my system.
Apache Apollo is one of the message queue systems I'm taking into account, but I can't find information about how to handle (for instance) an Apollo server failure.
Is there a way to provide failover support in Apollo?
No, as of now this has not been resolved. Apollo is a very good broker, indeed, but it lacks some production-critical features like failover. Apollo was an attempt to build a core for the next generation of ActiveMQ. However, development is no longer active.
Have you considered other brokers like Apache Artemis? It's basically a new attempt to remake ActiveMQ with code from HornetQ, ActiveMQ and Apollo. Development is very active at the moment, and there is support for failover etc.
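For reference, a minimal sketch of what a JMS client connecting to an Artemis live/backup pair might look like; the hostnames, URL parameters, and class name below are illustrative assumptions, not something taken from the answer above.

```java
import javax.jms.Connection;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

// Sketch: an Artemis JMS client configured for failover between a live and a
// backup broker (hostnames are placeholders). ha=true lets the client learn
// the broker topology, and reconnectAttempts=-1 keeps it retrying when the
// live broker goes down.
public class ArtemisFailoverClient {

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "(tcp://broker-live:61616,tcp://broker-backup:61616)?ha=true&reconnectAttempts=-1");

        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // ... create producers/consumers as usual; the client reconnects to
            // the backup broker if the live one becomes unavailable.
        } finally {
            connection.close();
        }
    }
}
```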

Mule ESB and Throttling

In the following link:
http://www.mulesoft.org/documentation/display/current/Mule+ESB+3.4.0+Release+Notes
I see the following
EE-3141 When using a Throttling policy with throttling statistics enabled, limit headers are swapped.
However, I can find no example of throttling policies within Mule ESB, though there may be a throttling policy within the Anypoint API Manager.
Could someone please provide a link to how to use a Throttling policy within Mule ESB?
Thanks
To achieve throttling behaviour you can follow the steps below:
Configure a queue (for example, a persistent VM or a JMS queue, to avoid message loss if the Mule server crashes) after your inbound endpoint.
Configure a scheduled delay, for example AMQ_SCHEDULED_DELAY in the case of ActiveMQ, to the desired value. If the queue does not support scheduled delays, you will need to find another way to achieve the delay, probably using a Java component (see the sketch after these steps).
Finally, configure the outbound endpoint.
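A rough Java sketch of the scheduled-delay idea in step 2, assuming ActiveMQ sits in front of the flow: the broker URL, queue name, and helper class are placeholders, and the AMQ_SCHEDULED_DELAY header only works if the broker is started with schedulerSupport="true". In a Mule flow, logic like this would typically live inside a custom Java component.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

// Hypothetical helper: each message carries an AMQ_SCHEDULED_DELAY header, so
// the broker only hands it to the consuming flow after the delay has elapsed,
// which effectively throttles the downstream endpoint.
public class DelayedRepublisher {

    private static final String BROKER_URL = "tcp://localhost:61616"; // placeholder
    private static final String QUEUE_NAME = "throttled.inbound";     // placeholder

    public void publishWithDelay(String payload, long delayMillis) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory(BROKER_URL);
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue(QUEUE_NAME);
            MessageProducer producer = session.createProducer(queue);

            Message message = session.createTextMessage(payload);
            // ActiveMQ-specific scheduling header, in milliseconds.
            message.setLongProperty("AMQ_SCHEDULED_DELAY", delayMillis);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```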
The Throttling module (which can be configured as a throttler or a rate limiter) comes out of the box with any Mule API Gateway distribution. Mule EE comes with a lightweight version of it. If you are using the Anypoint API Platform, you don't need to pay attention to the internals of how it is done: simply apply/unapply the policy to your managed API and it will work like a charm.
I also tried implementing the throttling concept in Mule flows. There is no exact way to implement this, but I was able to get similar behaviour in my flows by using receiver threading profiles on the inbound connectors and dispatcher threading profiles on the outbound connectors.

NServiceBus - Messages are going to Error queue directly without processing

We have an issue with a Windows service which uses NServiceBus. At some random moment, NServiceBus stops processing messages and sends them directly to the error queue, and I have to restart the service. After the restart, the messages arriving in the input queue are handled and everything goes back to normal. If we re-drop the messages that went to the error queue, they are processed successfully without any issue.
We are using log4net to audit the message flow and store it in the DB. The NServiceBus handler stops logging to log4net. After we restart the Windows service (NServiceBus), it starts to log again. We are NOT able to reproduce this issue in the development environment. We suspect this could be an NServiceBus memory leak, but we don't know how to confirm or resolve it.
We are planning to move this Windows service (NServiceBus) to a different server as a trial. Has anyone faced this issue and resolved it? Please help us resolve it, as it is causing trouble in the production environment.
NServiceBus version that we are using: 2.0.0.1329
The message queue and the Windows service are on the same machine.
I believe you're running on a version of NServiceBus that is about 5 years old and is no longer supported. While I could give you the standard recommendation of upgrading to a more current release, it could very well be that some of the configuration APIs that you're using have been made obsolete, so you may need to make some modifications there and/or in the app.config files.
I'm sorry to say that there probably isn't a better solution for you at this time.
In general, I'd suggest trying to track the NServiceBus releases somewhat more closely. If you're within 6-12 months of the current release, you should generally be in good shape.

What is the best alternative way of monitoring Apache ActiveMQ other than using the JMX API

I have tried and tested the JMX API and it is pretty simple to use and provides a vast number of statistics required for monitoring ActiveMQ.
But the problem is, I don't want to monitor my ActiveMQ remotely, and I also don't want to use another API. To be more precise, I want to use the JMS API itself to get statistics related to the various destinations and the broker itself.
Advisory messages seem to be an alternative, but they provide only a limited set of administrative messages for monitoring.
Any input is highly appreciated...
There is no built-in support for this. But you can implement a JMS topic which publishes the monitoring data every few seconds. Publish the messages as non-persistent so that they don't pile up when there are no listeners or when a listener loses its connection.
Now you can write a client that connects to this topic and receives the updates.
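A minimal sketch of the publisher side of this idea, assuming ActiveMQ and plain JMS; the broker URL, topic name, and the "metric" being published are placeholders.

```java
import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch of the pattern above: push monitoring data to a topic every few
// seconds using non-persistent delivery, so nothing piles up when no
// subscriber is listening.
public class MonitoringPublisher {

    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("monitoring.stats"); // placeholder topic
            MessageProducer producer = session.createProducer(topic);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

            while (true) {
                // Gather whatever metrics you care about; here just a heartbeat.
                TextMessage msg = session.createTextMessage(
                        "timestampMillis=" + System.currentTimeMillis());
                producer.send(msg);
                Thread.sleep(5_000); // publish every few seconds
            }
        } finally {
            connection.close();
        }
    }
}
```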
AMQ-2379 resulted in a broker plugin for grabbing statistics from destinations by sending a simple JMS message. Check out the docs that show how to use it here:
http://activemq.apache.org/statisticsplugin.html
The statistics plugin is available in the 5.3 release.
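A small sketch of that usage pattern, assuming an ActiveMQ 5.3+ broker with the statistics plugin enabled in activemq.xml: send an empty message to the ActiveMQ.Statistics.Broker destination with a replyTo set, and read the statistics back from the MapMessage reply. The broker URL and timeout below are placeholders.

```java
import java.util.Enumeration;

import javax.jms.Connection;
import javax.jms.MapMessage;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;

import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch: query broker statistics over plain JMS via the statistics plugin.
public class BrokerStatsClient {

    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            Queue statsQueue = session.createQueue("ActiveMQ.Statistics.Broker");
            TemporaryQueue replyTo = session.createTemporaryQueue();

            MessageConsumer consumer = session.createConsumer(replyTo);
            MessageProducer producer = session.createProducer(statsQueue);

            // An empty message with a replyTo triggers the statistics response.
            Message request = session.createMessage();
            request.setJMSReplyTo(replyTo);
            producer.send(request);

            // The reply is a MapMessage whose entries are the broker statistics.
            MapMessage reply = (MapMessage) consumer.receive(5_000);
            if (reply != null) {
                for (Enumeration<?> names = reply.getMapNames(); names.hasMoreElements(); ) {
                    String name = (String) names.nextElement();
                    System.out.println(name + " = " + reply.getObject(name));
                }
            }
        } finally {
            connection.close();
        }
    }
}
```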
You can check out http://issues.apache.org/activemq/browse/AMQ-2379; it will be available in the upcoming 5.3.0 release.
There's a blog post queued up to go on http://issues.apache.org/activemq/browse/AMQ-2379 - will post it in a couple of days or so