I am trying to integrate OpenTelemetry into my Java application in order to verify whether this solution can be used for request tracking within a microservices architecture.
In my scenario, "Application A" produces a message on a RabbitMQ queue while "Application B" consumes the message from RabbitMQ (both applications are using Spring AMQP).
The messages published by "Application A" correctly contain the traceparent header, which travels with the message up to "Application B".
The message is consumed by the BlockingQueueConsumer, in whose log the trace_id and the span_id are correctly present and logged via SLF4J.
The BlockingQueueConsumer then delegates the processing of the message to the listener, which runs inside another thread, and from there on the trace_id and the span_id are lost.
Is this OpenTelemetry's normal behaviour?
How does context propagation work across different threads?
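To illustrate the kind of hand-off I mean, here is a minimal sketch: with a plain executor the context does not follow the task, while OpenTelemetry's Context.current().wrap(...) carries it over explicitly (the executor and task names are placeholders, not my actual code):

import io.opentelemetry.context.Context;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ContextHandOff {

    public static void main(String[] args) {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Without explicit propagation, Context.current() on the worker thread
        // is the root context: the trace_id/span_id from the consumer thread are gone.
        Runnable processMessage = () ->
                System.out.println("context in worker: " + Context.current());

        executor.submit(processMessage);                          // context lost

        // Wrapping the task carries the current context onto the worker thread;
        // Context.taskWrapping(executor) does the same for every submitted task.
        executor.submit(Context.current().wrap(processMessage));  // context kept

        executor.shutdown();
    }
}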
Thanks
Damy
I am working on integration tests for my project.
I have a containerized RabbitMQ queue, a containerized consumer of this queue (using MassTransit), and a containerized API that the consumer calls while processing a message.
My test pushes a message to the queue and the consumer picks it up, and here is where my problem comes in: is there a way to check, from my test's perspective, when the consumer inside the container has processed the message?
For now I have just used Thread.Sleep() for 10 seconds and run my assertions after that.
It works but, obviously, as the number of tests grows this is becoming tedious...
How about using the actual RabbitMQ management REST API for that? From an integration test, you could use basic authentication and query the queue endpoint, e.g.
/api/queues/%2F/foo
to query the queue foo on the default virtual host / (URL-encoded as %2F). This will give you back a JSON data structure with the same details that you can see via the UI (in fact the UI uses this API as well), like the one below (heavily truncated).
{
"messages": 1,
"messages_ready": 1,
"messages_unacknowledged": 0
}
You can poll this endpoint until messages is equal to 0.
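As a rough sketch of what that polling could look like in Java (the host, port, guest/guest credentials and the 250 ms interval are assumptions for a default local broker; the queue name comes from the example above):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QueueDrainedWaiter {

    // Crude extraction of the top-level "messages" counter from the JSON reply.
    private static final Pattern MESSAGES = Pattern.compile("\"messages\"\\s*:\\s*(\\d+)");

    public static void waitUntilDrained(String queue, long timeoutMillis) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String auth = Base64.getEncoder()
                .encodeToString("guest:guest".getBytes(StandardCharsets.UTF_8));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15672/api/queues/%2F/" + queue))
                .header("Authorization", "Basic " + auth)
                .build();

        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            Matcher matcher = MESSAGES.matcher(response.body());
            if (matcher.find() && Integer.parseInt(matcher.group(1)) == 0) {
                return; // queue drained, safe to run assertions
            }
            Thread.sleep(250);
        }
        throw new IllegalStateException("queue '" + queue + "' not drained in time");
    }
}

Since messages counts both ready and unacknowledged deliveries, it only drops to 0 once the consumer has acknowledged everything it picked up.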
Here is how I eventually resolved this:
I didn't mention that I also have a Seq server running, as I didn't think it would be relevant in my case, but it turned out it was.
At the end of the message-consuming method I write a log entry enriched with a custom property indicating that the message has been processed. Then, in my integration test, I set up a Polly policy that queries Seq for this "processing finished" log entry (with the correct message id, of course) until it appears (or until a set timeout). However, I feel like @Driedas's answer is way simpler.
Maybe this can help you:
https://event-driven.io/en/testing_asynchronous_processes_with_a_little_help_from_dotnet_channels/
It allows awaiting RabbitMQ events from within the tests without resorting to Thread.Sleep.
I am using ActiveMQ with the web console (activemq-web-console-5.16.4) in TomEE. The activemq-web-console-5.16.4.war was added to the TomEE webapps folder; afterwards, I could access the web console. Currently, I want to view/monitor the content of the enqueued/processed messages shown in the web console under "Messages Enqueued". How can I achieve that in my case? Should I bind the KahaDB message store or another database?
In my application I use Apache Camel and send messages from one route to another via ActiveMQ.
I would appreciate any help.
You can use the web console itself to view the content of the message, assuming it fits into the narrow constraints of what the console can decode into a human-readable format.
First, click the "Browse" link.
Second, click the link for the actual message.
Third, see the "Message Details."
To be clear, you can only inspect the content of messages which are in the queue. This is represented by the "Number of Pending Messages." The "Messages Enqueued" is the number of messages sent to the queue (but not necessarily in the queue currently) since the broker was started. The "Messages Dequeued" is the number of messages consumed from the queue. In your case you have 66 messages which have been enqueued and dequeued (i.e. consumed) and therefore 0 pending messages.
If you want to keep a copy of every message sent to your queue for auditing purposes you can use a mirrored queue. As noted previously, you can only inspect messages which are in the queue, and a mirrored queue will hold a copy of every message sent to the source queue, allowing you to inspect those messages at your convenience.
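If you run an embedded broker, a minimal sketch of switching mirrored queues on via the Java API looks like this (the broker name and connector URI are arbitrary examples; the same flag is also available as an attribute on the broker element in activemq.xml):

import org.apache.activemq.broker.BrokerService;

public class MirroredQueueBroker {

    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("localhost");

        // Copy every message sent to any queue to a mirror topic whose name is
        // derived from the queue name, so an audit consumer can subscribe to it
        // without disturbing the source queue.
        broker.setUseMirroredQueues(true);

        broker.addConnector("tcp://localhost:61616");
        broker.start();
    }
}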
I am using reliable delivery in a Mule flow. It is a very simple case: it takes a message from a JMS queue (ActiveMQ-based), invokes several actions depending on its content and, if everything is fine, delivers it to another JMS queue.
The flow is synchronous, both JMS queues are transactional (the first BEGINS the transaction, the second JOINS it), redelivery is used, and there is a DLQ for undelivered messages. In short: I expect that every message is either properly processed or delivered to the DLQ.
For processing orchestration I am using the Scatter/Gather flow control, which works quite fine until I call an external HTTP service using the HTTP connector. When I use the default threading profile, some messages are lost (around 3 of 5000). They just disappear, leaving no trace even in the DLQ.
On the other hand, when I use a custom profile (one that does not spawn additional threads), all messages get processed without any problems.
What I have noticed is that the default threading profile uses 'ScatterGatherWorkManager' threads, while the custom one uses 'ActiveMQ Session Task' threads.
So my question is: what is the possible cause of losing these messages?
I am using Mule Server 3.6.1 CE Runtime.
By default, Scatter/Gather is set up to expect no failed routes. You can define your own aggregation strategy, via a custom-aggregation-strategy element, to handle the otherwise-lost messages:
https://docs.mulesoft.com/mule-user-guide/v/3.6/scatter-gather
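A sketch of what such a strategy can look like, assuming Mule 3.x's org.mule.routing.AggregationStrategy interface (the class name and the logging are illustrative, not a drop-in fix):

import org.mule.api.MuleEvent;
import org.mule.api.MuleException;
import org.mule.routing.AggregationContext;
import org.mule.routing.AggregationStrategy;

import java.util.List;

public class LogFailedRoutesAggregationStrategy implements AggregationStrategy {

    @Override
    public MuleEvent aggregate(AggregationContext context) throws MuleException {
        // Events whose route threw an exception would otherwise vanish into a
        // composite exception; collect them here so they can be logged or re-routed.
        List<MuleEvent> failed = context.collectEventsWithExceptions();
        for (MuleEvent event : failed) {
            System.err.println("route failed: "
                    + event.getMessage().getExceptionPayload().getException());
        }

        // Continue the flow with the successful results (simplistic: first one wins).
        List<MuleEvent> succeeded = context.collectEventsWithoutExceptions();
        return succeeded.isEmpty() ? context.getOriginalEvent() : succeeded.get(0);
    }
}

The strategy is then wired into the scatter-gather element with a custom-aggregation-strategy child pointing at the class.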
I have more-or-less implemented the Reliability Pattern in my Mule application using persistent VM queues on CloudHub, as documented here. While everything works fine, it has left me with a number of questions about actually ensuring reliable delivery of my messages. To illustrate the points below, assume I have an http-request component within my "application logic flow" (see the diagram at the link above) that is throwing an exception because the endpoint is down, and that I want to ensure that the in-flight message will eventually get delivered to the endpoint:
As detailed at the link above, I have observed that when the exception is thrown within my "application logic flow" and I have made the flow transactional, the message is put back on the VM queue. However, all that happens is that the message is repeatedly taken off the queue, processed by the flow, and the exception is thrown again, ad infinitum. There appears to be no way of configuring any sort of retry delay or maximum number of retries on VM queues as is possible, for example, with ActiveMQ. The best workaround I have come up with is to surround the http-request message processor with the until-successful scope, but I'd rather have these sorts of things apply to my whole flow (without having to wrap the whole flow in until-successful). Is this sort of thing possible using only VM queues and CloudHub?
I have configured my until-successful to place the message on another VM queue, which I want to use as a dead-letter queue. Again, this works fine, and I can log in to CloudHub and see the messages populated on my DLQ, but there appears to be no way of moving messages from this queue back into the flow once the endpoint comes back up. All it seems you can do in CloudHub is clear the queue. Again, is this possible using VM queues and CloudHub only (i.e. no other queueing tool)?
VM queues are very basic, whether you use them in CloudHub or not.
VM queues have no capacity for delaying redelivery (like exponential back-offs). Use JMS queues if you need such features.
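For comparison, a minimal sketch of the redelivery control ActiveMQ's JMS client exposes (the delays and limits are arbitrary example values):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfig {

    public static ActiveMQConnectionFactory factory() {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setInitialRedeliveryDelay(1000);   // wait 1s before the first retry
        policy.setUseExponentialBackOff(true);    // grow the delay between retries
        policy.setBackOffMultiplier(2.0);         // double it each time
        policy.setMaximumRedeliveries(5);         // then the broker moves it to the DLQ

        return factory;
    }
}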
You need to create a flow for processing the DLQ, for example one that regularly consumes the queue via the requester module and re-injects the messages into the main queue. Again, with JMS, you would have better control.
As an alternative to JMS, you could consider hosted queues like CloudAMQP, Iron.io or AWS SQS. You would lose transaction support on the inbound endpoint but would gain better control over the (re)delivery behaviour.
Suppose I have an application fed by an MQ queue. When the application receives a message that contains errors, the application itself pushes the received message to a certain invalid message queue.
My question is: what is the recommended way to have the receiving application append the failure/rejection reason to the message pushed onto the invalid message queue? Some solutions come to mind, but I'm unsure which one is considered "best practice":
(ab)using a standard header field
adding a custom header
encapsulating the message in another message
If all that you need is to place a reason code in the message, use the MQMD.Feedback field with one of the standard reason codes. In WMQ v7.0 or later, the application can set any number of message properties which are then readable both with JMS semantics and native WMQ API calls. It is up to you to define the taxonomy for naming the application-defined properties.
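A minimal sketch of both options with the WMQ classes for Java (the queue name and the property name are examples of such an application-defined taxonomy, not fixed conventions):

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class InvalidMessageForwarder {

    public static void forward(MQQueueManager qmgr, MQMessage received, String reason)
            throws Exception {
        // Option 1: place a reason code in the MQMD Feedback field
        // (conventionally used together with the report message type).
        received.messageType = CMQC.MQMT_REPORT;
        received.feedback = CMQC.MQFB_APPL_FIRST; // first application-defined value

        // Option 2 (WMQ v7.0+): attach a named message property,
        // readable via both JMS semantics and native WMQ API calls.
        received.setStringProperty("myapp.rejectionReason", reason);

        MQQueue invalidQueue = qmgr.accessQueue("APP.INVALID.MESSAGE.QUEUE",
                CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING);
        invalidQueue.put(received);
        invalidQueue.close();
    }
}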
If the message is requeued to the Dead Letter Queue instead of an application-owned backout queue, it is customary to prepend a Dead Letter Header to it. The MQDLH structure contains a field for the reason code describing why the message was requeued. As a rule, applications should avoid using the DLQ in favor of an application-owned queue. When applications do use the DLQ, it is normal that they should have access to put messages there but not to retrieve messages from that queue. This is because it is a system-wide resource and messages from different applications may land there. Normally, an admin application or a person with elevated access is responsible for adjudicating and disposing of messages on the system DLQ.