MassTransit & RabbitMQ - How to verify that a message got processed inside a containerized consumer?

I am working on integration tests for my project.
I have a containerized RabbitMQ queue, a containerized consumer of this queue (using MassTransit), and a containerized API that this consumer calls while processing the message.
My test pushes a message to the queue and it gets picked up by the consumer, and here is where my problem comes in: is there a way to check, from the test's perspective, when the consumer inside the container has processed the message?
For now I have just used Thread.Sleep() for 10 seconds and run my assertions after that.
It works, but obviously, as the number of tests grows, this becomes tedious...

How about using the RabbitMQ management REST API for that? From an integration test, you could use basic authentication and query the queue endpoint, e.g.
/api/queues/%2F/foo
to query the queue foo on the default virtual host / (url-encoded as %2F). This will give you back a JSON data structure with the same details that you can see via the UI (in fact the UI uses this API as well), like the one below (heavily truncated).
{
  "messages": 1,
  "messages_ready": 1,
  "messages_unacknowledged": 0
}
You can poll this endpoint until messages is equal to 0.
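A minimal sketch of that polling loop from a C# test, assuming the management plugin is reachable on localhost:15672 with the default guest/guest credentials (host, credentials and queue name are placeholders for your setup):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

static class QueueAssertions
{
    public static async Task WaitUntilDrainedAsync(string queue, TimeSpan timeout)
    {
        using var client = new HttpClient { BaseAddress = new Uri("http://localhost:15672") };
        // Basic auth against the management API (default guest/guest assumed here).
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("guest:guest"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        var deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            // %2F is the url-encoded default virtual host "/".
            var json = await client.GetStringAsync($"/api/queues/%2F/{queue}");
            using var doc = JsonDocument.Parse(json);
            if (doc.RootElement.GetProperty("messages").GetInt32() == 0)
                return; // nothing ready or unacknowledged: the consumer has dealt with everything
            await Task.Delay(250);
        }
        throw new TimeoutException($"Queue '{queue}' was not drained within {timeout}.");
    }
}

Keep in mind that the management API's counters can lag slightly behind the broker's real state, so give the poll a reasonably generous timeout rather than asserting on the first response.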

Here is how I eventually resolved this:
I didn't mention that I also have a Seq server running, as I didn't think it would be relevant in my case, but it turned out it was.
At the end of the message-consuming method I write a log entry enriched with a custom property indicating that the message has been processed. Then, in my integration test, I set up a Polly policy that queries Seq for this "processing finished" log (with the correct message id, of course) until it appears or a set timeout is reached. However, I feel like @Driedas' answer is way simpler.
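A rough sketch of that Polly-based polling, with the Seq endpoint, filter syntax and property names ("ProcessingFinished", "MessageId") as illustrative assumptions to adapt to how the consumer actually enriches its log events:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

static class SeqAssertions
{
    public static async Task WaitForProcessingFinishedAsync(Guid messageId)
    {
        var found = await Policy
            .HandleResult<bool>(seen => !seen)                  // keep retrying while the event is missing
            .WaitAndRetryAsync(30, _ => TimeSpan.FromSeconds(1))
            .ExecuteAsync(() => SeqContainsProcessedEventAsync(messageId));

        if (!found)
            throw new TimeoutException($"No 'processing finished' log for message {messageId} in Seq.");
    }

    // Hypothetical helper: queries Seq's HTTP API for an event carrying the custom
    // property and the message id. Endpoint and filter syntax depend on your Seq setup.
    private static async Task<bool> SeqContainsProcessedEventAsync(Guid messageId)
    {
        using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5341") };
        var filter = Uri.EscapeDataString($"ProcessingFinished and MessageId = '{messageId}'");
        var body = await client.GetStringAsync($"/api/events?filter={filter}&count=1");
        return body.Contains(messageId.ToString());
    }
}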

Maybe this can help you:
https://event-driven.io/en/testing_asynchronous_processes_with_a_little_help_from_dotnet_channels/
It allows awaiting RabbitMQ events from within the tests without doing Thread.Sleep.
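The idea from that article, roughly: a test-only hook pushes a signal into a System.Threading.Channels channel when the message of interest has been handled, and the test awaits that signal instead of sleeping. A minimal sketch (where MarkProcessed is called from whatever observation point your test host has, e.g. a test consumer or consume observer - an assumption, not part of the article's code):

using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

public sealed class ProcessedMessages
{
    private readonly Channel<Guid> _channel = Channel.CreateUnbounded<Guid>();

    // Called from a test-only hook when a message finishes processing.
    public void MarkProcessed(Guid messageId) => _channel.Writer.TryWrite(messageId);

    // Awaited from the test: completes as soon as the expected message id arrives,
    // or throws OperationCanceledException when the timeout elapses.
    public async Task WaitForAsync(Guid expectedId, TimeSpan timeout)
    {
        using var cts = new CancellationTokenSource(timeout);
        while (true)
        {
            var id = await _channel.Reader.ReadAsync(cts.Token);
            if (id == expectedId)
                return;
        }
    }
}

Note that this pattern needs the test and the consumer to share a process (for example when hosting the consumer with an in-memory test harness); with the consumer in its own container, an external signal such as the management API or Seq approaches above is still required.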

Related

ActiveMQ: How do I limit the number of messages being dispatched?

Let's say I have one ActiveMQ broker and an undefined number of consumers.
Problem:
To process a message, consumers need an external service which is either "DATA1" or "DATA2" (specified in the message)
Each server, "DATA1" and "DATA2", can only handle 20 connections
So at most 20 "DATA1" and 20 "DATA2" messages must be dispatched at any time
Because of prioritization, the messages must be enqueued in the same queue
Even if message A has a higher priority than message B, if A can't be processed because the external service has no free slots, message B needs to be processed instead
How can this be solved? As long as I was using message pulling (prefetch of 0), I was able to do this with a BrokerPlugin that, on messagePull, used semaphores and selectors; if the limits were reached, the pull returned null.
However, due to performance issues I had to set prefetch to 1 and use push instead. Therefore, my messagePull hack no longer works (it's never called).
So far I'm considering implementing a custom Cursor but I was wondering if someone knows a better solution.
Update: the custom cursor worked, but it broke features like message removal. I tried a custom Queue and QueueDispatchSelector (which is a pain to configure since there isn't a proper API to do so) and it mostly works, but I still have synchronisation issues.
Also, a very suitable API seems to be DispatchPolicy; however, while it is referenced by Queue, it is never used.
Queues give you buffering for system processing time for free, and messages are delivered on demand. With prefetch=0 or prefetch=1, that should effectively get you there: messages will only be delivered to a consumer when the consumer is ready (i.e. during the consumer.receive() call).
consumer.receive() is a blocking call, so you should not need any custom plugin or other mechanism to delay delivery until the consumer process (and its required downstream services) are ready to handle it.
This behaviour should work out of the box; if it doesn't, there may be details of your use case that haven't been provided that would shed more light on the scenario.
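To illustrate the "only receive when you can actually process" idea in .NET terms (the question is JMS, but the same pull-based pattern applies via the Apache.NMS client; broker URL, queue name and the destination-option syntax for prefetch are assumptions to adapt to your setup):

using System;
using System.Threading;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

// Sketch: pull a message only when a downstream connection slot is actually free, so no more
// messages are in flight than the external service can handle.
var downstreamSlots = new SemaphoreSlim(20);          // e.g. 20 connections available downstream

IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616");
using IConnection connection = factory.CreateConnection();
connection.Start();
using ISession session = connection.CreateSession(AcknowledgementMode.ClientAcknowledge);

// consumer.prefetchSize=0 (ActiveMQ destination option) makes delivery purely on demand:
// the broker hands out a message only when Receive() is called.
IDestination queue = session.GetQueue("work?consumer.prefetchSize=0");
using IMessageConsumer consumer = session.CreateConsumer(queue);

while (true)
{
    downstreamSlots.Wait();                           // don't even ask for a message without a free slot
    try
    {
        IMessage message = consumer.Receive(TimeSpan.FromSeconds(5));
        if (message == null)
            continue;                                 // nothing waiting right now; slot released below

        // ... call the external service here ...
        message.Acknowledge();
    }
    finally
    {
        downstreamSlots.Release();
    }
}

This collapses DATA1 and DATA2 into a single capacity gate for brevity; keeping the two 20-connection limits separate would need something like one selector-based consumer per service, or inspecting the message after receipt and holding the matching slot.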

Why messages sometimes get lost using Scatter/Gather with default threading profile?

I am using reliable delivery in a Mule flow. It is a very simple case: it takes a message from a JMS queue (ActiveMQ based), invokes several actions depending on its content and, if everything is fine, delivers it to another JMS queue.
The flow is synchronous, both JMS queues are transactional (the first BEGINS the transaction, the second JOINS it), redelivery is used, and there is a DLQ for undelivered messages. In short: I expect every message to be either properly processed or delivered to the DLQ.
For processing orchestration I am using the Scatter/Gather flow control, which works quite fine until I call an external HTTP service using the HTTP connector. When I use the default threading profile, some messages are lost (around 3 of 5000). They just disappear, with no trace even in the DLQ.
On the other hand, when I use a custom profile (not utilizing additional threads), all messages are processed without any problems.
What I have noticed is that the default threading profile utilizes 'ScatterGatherWorkManager' threads, while the custom one uses 'ActiveMQ Session Task' threads.
So my question is: what is the possible cause of losing these messages?
I am using Mule Server 3.6.1 CE Runtime.
By default, Scatter-Gather is set up to expect no failed routes. You can define your own aggregation strategy (custom-aggregation-strategy) to handle lost messages:
https://docs.mulesoft.com/mule-user-guide/v/3.6/scatter-gather

Implementing the reliability pattern in CloudHub with VM queues

I have more or less implemented the Reliability Pattern in my Mule application using persistent VM queues on CloudHub, as documented here. While everything works fine, it has left me with a number of questions about actually ensuring reliable delivery of my messages. To illustrate the points below, assume I have an http-request component within my "application logic flow" (see the diagram on the link above) that is throwing an exception because the endpoint is down, and I want to ensure that the in-flight message will eventually get delivered to the endpoint:
As detailed in the link above, I have observed that when the exception is thrown within my "application logic flow" and I have made the flow transactional, the message is put back on the VM queue. However, all that happens is that the message is then repeatedly taken off the queue, processed by the flow, and the exception is thrown again, ad infinitum. There appears to be no way of configuring any sort of retry delay or maximum number of retries on VM queues, as is possible, for example, with ActiveMQ. The best workaround I have come up with is to surround the http-request message processor with the until-successful scope, but I'd rather have these sorts of things apply to my whole flow (without having to wrap the whole flow in until-successful). Is this sort of thing possible using only VM queues and CloudHub?
I have configured my until-successful to place the message on another VM queue which I want to use as a dead letter queue. Again, this works fine, and I can log into CloudHub and see the messages populated on my DLQ, but then there appears to be no way of moving messages from this queue back into the flow when the endpoint comes back up. All you seem able to do in CloudHub is clear the queue. Again, is this possible using VM queues and CloudHub only (i.e. no other queueing tool)?
VM queues are very basic, whether you use them in CloudHub or not.
VM queues have no capacity for delaying redelivery (like exponential back-offs). Use JMS queues if you need such features.
You need to create a flow for processing the DLQ, for example one that regularly consumes the queue via the requester module and re-injects the messages into the main queue. Again, with JMS, you would have better control.
As an alternative to JMS, you could consider hosted queues like CloudAMQP, Iron.io or AWS SQS. You would lose transaction support on the inbound endpoint but would gain better control over the (re)delivery behaviour.

Mule ESB: How to achieve typical ReTry Mechanism in MULE ESB

I need to implement retry logic. An inbound endpoint pushes messages to a REST (outbound) service. If the REST service is unavailable, I need to retry once and then put the message into a queue. But subsequent messages should not retry at all; they should be put directly into the queue until the REST service is available again.
Once the service is available, I need to push all the messages from the queue to the REST service (in order) via a batch job.
Questions:
How do I know the service is unavailable for my second message? If I use until-successful, every message retries and is then put in the queue; the problem is that the second message shouldn't retry at all.
For the batch, I thought of using a poll, but how do I tell the poll that the service has become available so the batch process can begin? (Poll is more about configuring timings to run the batch.)
Another tricky point that confuses me: ordering has to be preserved. Once the service is available, the queued messages (i.e. the batch) have to go to the REST service first, then the real-time ones. I doubt whether this is feasible.
A quick response on how to implement this logic would be very helpful.
Using Mule: 3.5.1
I would try something like the below, using flow controls:
Process a message; on an exception or a bad response code, set a variable/property like serviceAvailable=false.
Subsequent message processing first checks the serviceAvailable property. If it is false, enqueue the message into a DB table with status=new/unprocessed.
Create a flow/scheduler that processes the messages from the DB sequentially; it does not check the serviceAvailable property and always calls the REST service.
If the service throws an exception, it does not store the message in the DB again; if the call succeeds, set serviceAvailable=true and dequeue the message (or change its status). Add another property, e.g. moreDBMsg=true, while there are still messages in the DB table.
New messages should not be processed/consumed until moreDBMsg=false.
Once moreDBMsg=false and serviceAvailable=true, start processing messages from the queue again (a rough sketch of this logic follows below).
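Expressed as a self-contained sketch (in C# for illustration, since the control flow itself is independent of Mule; the in-memory queue stands in for the DB table and TryCallRestService for the outbound REST call):

using System;
using System.Collections.Generic;

class ParkAndDrainSketch
{
    private readonly Queue<string> _parked = new();   // DB table with status=new, kept in order
    private bool _serviceAvailable = true;
    private bool _moreDbMsg = false;

    // Main flow: real-time messages.
    public void HandleIncoming(string msg)
    {
        if (!_serviceAvailable || _moreDbMsg)
        {
            _parked.Enqueue(msg);                      // park it; don't even try the service
            _moreDbMsg = true;
            return;
        }

        if (!TryCallRestService(msg))                  // exception or bad response code
        {
            _serviceAvailable = false;
            _parked.Enqueue(msg);
            _moreDbMsg = true;
        }
    }

    // Scheduler/poller flow: drains the parked messages in order before real-time traffic resumes.
    public void DrainParkedMessages()
    {
        while (_parked.Count > 0)
        {
            if (!TryCallRestService(_parked.Peek()))
                return;                                // still down; try again on the next run
            _parked.Dequeue();
        }
        _moreDbMsg = false;
        _serviceAvailable = true;                      // only now let real-time messages flow again
    }

    private static bool TryCallRestService(string msg)
    {
        // Placeholder for the real outbound REST call; return false on exception/bad status.
        Console.WriteLine($"calling REST with: {msg}");
        return true;
    }
}

In Mule terms, HandleIncoming roughly corresponds to the main flow with a choice router on the flow variables, and DrainParkedMessages to the poll-triggered flow over the DB table.
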
For the timeout, I would still look at the response code and catch time-outs to determine whether the call was successful or requires a retry. In practice you normally do multi-threading anyway, so you have multiple calls in parallel, or one call simply starts before the other ends. That is quite normal.
But you can simply retry queued calls that time out, and after x time-outs you "skip" or defer the retry.
All of this can be done using actual Mule flow components, for example:
MEL http://www.mulesoft.org/documentation/display/current/Mule+Expression+Language+Reference
Or flow controls: http://www.mulesoft.org/documentation/display/current/Choice+Flow+Control+Reference
Or, for example, you can reference a Spring bean and do it in native Java code.
One possibility for the queue would be to persist it in a database. Mule has a database connector with a "poll" feature, see: http://www.mulesoft.org/documentation/display/current/JDBC+Transport+Reference#JDBCTransportReference-PollingTransport

WCF Workflow Service breaking workflow yields timeout

I have a WCF Workflow Service (running on AppFabric) that accepts a Connect receive operation and then moves on to listen for a number of other operations.
When trying to break the workflow from my unit test by invoking Connect twice, the service won't respond to my second request, but will wait until a timeout occurs.
I am expecting an error message such as the one from the question "How do I handle "Receive" calls being made out of order?":
Operation 'AddQualification|{http://tempuri.org/}IZSalesFunnelService' on service instance with identifier '1984c927-402b-4fbb-acd4-edfe4f0d8fa4' cannot be performed at this time. Please ensure that the operations are performed in the correct order and that the binding in use provides ordered delivery guarantees
Note
The behaviour looks like the one in this question, but the current workflow does not use any delays.
I suspect you are still being bitten by the same issue as in the other question you are referring to. This is a bug in the workflow runtime scheduler.