MassTransit: PublishFault context.Message is null when broker not reachable - RabbitMQ

I am implementing a failover solution for messages published with MassTransit when the broker (RabbitMQ) is down.
The idea is to grab the failed messages, store them somewhere, and republish them once the broker is up and running again.
A possible solution is to use a PublishObserver with an implementation of the PublishFault method.
The MassTransit version is 5.5.5:
public Task PublishFault<T>(PublishContext<T> context, Exception exception) where T : class
{
    var message = context.Message; // message is null
    // ... logic to save the faulted message to persistent storage
    return Task.CompletedTask;
}
The expected result is to have access to context.Message; the actual result is that context.Message is null.

This has been fixed in the develop branch of MassTransit, which should be released at some point (as 5.5.6):
https://github.com/MassTransit/MassTransit/pull/1546
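For context, a minimal sketch of how such an observer could be wired up once the fix is available. The observer class and the in-memory store are hypothetical stand-ins; ConnectPublishObserver is MassTransit's standard connection point:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using MassTransit;

public class FailedPublishObserver : IPublishObserver
{
    // In-memory stand-in for real persistent storage of failed messages
    public ConcurrentQueue<object> FailedMessages { get; } = new ConcurrentQueue<object>();

    public Task PrePublish<T>(PublishContext<T> context) where T : class
        => Task.CompletedTask;

    public Task PostPublish<T>(PublishContext<T> context) where T : class
        => Task.CompletedTask;

    public Task PublishFault<T>(PublishContext<T> context, Exception exception) where T : class
    {
        // With the fix, context.Message is populated and can be captured
        // here for later republishing.
        FailedMessages.Enqueue(context.Message);
        return Task.CompletedTask;
    }
}

// Connect the observer to the bus (busControl is your IBusControl instance):
var handle = busControl.ConnectPublishObserver(new FailedPublishObserver());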

Related

Non-blocking exponential backoff retry mechanism for Spring Cloud with RabbitMQ

I have a consumer:
@Bean
public Function<Flux<Message<byte[]>>, Mono<Void>> myReactiveConsumer() {
    return flux -> flux
            .doOnNext(this::processMessage)
            // pseudocode: on a repeatable error, route to a timeout queue
            .doOnError(this::isRepeatableError, error -> sendToTimeoutQueue(error))
            // pseudocode: on any other error, route to the DLQ
            .doOnError(this::allOtherErrors, error -> sendToDlq(error))
            .then();
}
In case of a deterministic error I want the message to be sent to a dead letter queue,
but if the error isn't deterministic, I want the message to be sent to a specific timeout queue (depending on how many times it has failed).
I have tried configuring a RetryTemplate, but it doesn't seem to give me enough information to redirect the message to a different queue:
@StreamRetryTemplate
public RetryTemplate myRetryTemplate() {
    return new RetryTemplate(); // ... initialization elided
}
Also, configuring it through the yaml file lets me almost do what is needed, but not exactly.
A solution like this seems good, but I was unable to get it to work because Spring Cloud uses different beans.
How can I implement this retry logic?

MassTransit generates _skipped queues which I want to ignore

Can anyone guess what the problem might be? I'm clueless about how to solve this. MassTransit generates _skipped queues and I have no idea why. They are generated when doing a publish request/response.
The request client is created using the following method from MassTransit.RequestClientExtensions:
public static IRequestClient<TRequest, TResponse> CreatePublishRequestClient<TRequest, TResponse>(
    this IBus bus, TimeSpan timeout, TimeSpan? ttl = null, Action<SendContext<TRequest>> callback = null)
    where TRequest : class
    where TResponse : class
{
    return (IRequestClient<TRequest, TResponse>)new PublishRequestClient<TRequest, TResponse>(bus, timeout, ttl, callback);
}
And the request is made as follows:
TResponse response = TaskUtil.Await(() => requestClient.Request(request));
As you can see, this is a request/response scenario where the request is published to all consumers. Because we currently have only one consumer, it is delivered only to that consumer. Dead letters appear easily when a publish request/response goes to multiple consumers: once one consumer responds, the others don't know where to respond and a dead letter is generated. But because we have only one consumer here, we can rule out that possibility.
So what could be the other reasons for these skipped queues? Huge thanks for any help on how I can troubleshoot this.
I should add that in the Consume method, under some conditions, we raise a RequestTimeoutException and catch it in the requesting application. This is tested and does not generate skipped queues.
A skipped queue is a dead letter queue. It means that your endpoint queue has a binding to some message exchange, but there is no longer a consumer for that message type. Maybe you changed the topology and moved the consumer. You can go to the RabbitMQ management UI and check the bindings for your endpoint exchange. If you look at the messages that ended up in the skipped queue, you will find out which message types to look for.
Exchanges are named after message types, so it will be easy to find the obsolete binding.
Then, in the management UI, you can manually remove the obsolete binding and no more messages will arrive in the skipped queue.
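For illustration, a minimal MassTransit 5.x receive endpoint sketch (queue and consumer names are hypothetical); the rule of thumb is to keep a consumer registered for every message type still bound to the queue's exchange:

var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri("rabbitmq://localhost"), h => { });

    cfg.ReceiveEndpoint(host, "my-service-queue", e =>
    {
        // If a message type is still bound to this queue's exchange but no
        // consumer here handles it, those messages are moved to
        // my-service-queue_skipped.
        e.Consumer<MyRequestConsumer>();
    });
});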

MessageBus: wait when processing is done and send ACK to requestor

We work with external TCP/IP interfaces, and one of the requirements is to keep the connection open, wait until processing is done, and send an ACK with the results back.
What would be the best approach to achieve that, assuming we want to use a message bus (MassTransit/NServiceBus) for communication with the processing module and for tracing message states: received, processing, succeeded, failed?
Specifically, when a message arrives at the handler/consumer, how will it know about the TCP/IP connection? Should I store the connection in some custom container and inject it into the consumer?
Any guidance is appreciated. Thanks.
The consumer will know how to initiate and manage the TCP connection lifecycle.
When a message is received, the handler can invoke the code which performs some action based on the message data. Whether this involves displaying a green elephant on a screen somewhere, or opening a port, making a call, and then processing the ACK, does not change how you handle the message.
The actual code responsible for performing the action could be packaged into something like a NuGet package and exposed over some kind of generic interface if that makes you happier, but there is no contradiction in a component having a dual role as a message consumer and processor of that message.
A new instance of the consumer will be created for each message received. Also, in my case, the consumer can't initiate the TCP/IP connection; it was already opened earlier (and is stored somewhere else), and the consumer just needs access to it.
Sorry, I should have read your original question more closely.
There is a solution for shared access to a resource in NServiceBus, as documented here.
public class SomeEventHandler : IHandleMessages<SomeEvent>
{
    private readonly IMakeTcpCalls _caller;

    public SomeEventHandler(IMakeTcpCalls caller)
    {
        _caller = caller;
    }

    public Task Handle(SomeEvent message, IMessageHandlerContext context)
    {
        // Use the caller
        var ack = _caller.Call(message.SomeData);

        // Do something with the ack ...

        return Task.CompletedTask;
    }
}
You would ideally have a DI container managing the lifecycle of the IMakeTcpCalls instance as a singleton (though this might get weird in high-volume scenarios), so that you can reuse the open TCP channel.
E.g., in Castle Windsor:
Component.For<IMakeTcpCalls>().ImplementedBy<MyThreadsafeTcpCaller>().LifestyleSingleton();
Castle Windsor integrates with NServiceBus.
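A sketch of what that wiring could look like, assuming NServiceBus 6+ with the NServiceBus.CastleWindsor adapter (endpoint and type names are illustrative):

var container = new WindsorContainer();
container.Register(
    Component.For<IMakeTcpCalls>()
             .ImplementedBy<MyThreadsafeTcpCaller>()
             .LifestyleSingleton());

// Hand the existing container to NServiceBus so every handler gets the
// same singleton IMakeTcpCalls (and its open TCP channel) injected.
var endpointConfiguration = new EndpointConfiguration("MyEndpoint");
endpointConfiguration.UseContainer<WindsorBuilder>(
    customizations => customizations.ExistingContainer(container));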

How to requeue a message using Spring AMQP

While requeuing a message, we want it to be placed at the start/front of the queue.
This means that if my queue is "D,C,B,A" and I process A and then put it back at the start, the queue should look like this:
"A,D,C,B".
So I should be able to process B next, and since A moved to the start of the queue, it should be processed at the end.
Interestingly, when tried with the native RabbitMQ AMQP library, it works as expected above.
However, when we do it through the Spring AMQP library, the message stays wherever it was; it does not go to the front of the queue.
Here is the code which we tried:
public void onMessage(Message message, com.rabbitmq.client.Channel channel) throws Exception {
    if (new String(message.getBody()).equalsIgnoreCase("A")) {
        System.out.println("Message === " + new String(message.getBody()));
        channel.basicReject(message.getMessageProperties().getDeliveryTag(), true); // requeue
    } else {
        System.out.println("Message === " + new String(message.getBody()));
        channel.basicAck(message.getMessageProperties().getDeliveryTag(), true);
    }
}
Any idea why it does not work in Spring but works in the native RabbitMQ AMQP library?
Spring AMQP version: spring-amqp-1.4.5.RELEASE
RabbitMQ AMQP client version: 3.5.1
RabbitMQ was changed in version 2.7 to requeue messages at the head of the queue; previous versions requeued at the tail of the queue - see here.
You are seeing this because Spring AMQP sets the prefetch to 1 by default (by calling basicQos), which allows only 1 message to be outstanding at the client. If basicQos is not called, the broker sends all four messages to the client, so the rejected message appears to go to the back of the queue; the queue itself is empty because of the unlimited prefetch.
If you set the prefetchCount to at least 4, you will see the same behavior.
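A minimal sketch of raising the prefetch on the listener container (the queue name and injected listener are illustrative; Spring AMQP 1.4.x API):

@Bean
public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
        ChannelAwareMessageListener listener) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames("myQueue");
    container.setPrefetchCount(4); // default is 1; with 4, all four messages are outstanding at the client
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL); // the listener acks/rejects itself
    container.setMessageListener(listener);
    return container;
}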
EDIT
If you really want to queue at the beginning you can use retry together with a RepublishMessageRecoverer.
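A sketch of that approach, assuming a RabbitTemplate and placeholder exchange/routing-key names; the resulting interceptor is then set on the listener container's advice chain:

@Bean
public RetryOperationsInterceptor retryInterceptor(RabbitTemplate rabbitTemplate) {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(3) // illustrative retry limit
            .recoverer(new RepublishMessageRecoverer(rabbitTemplate, "myExchange", "myRoutingKey"))
            .build();
}

// e.g. container.setAdviceChain(retryInterceptor);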

Clear existing messages in an ActiveMQ queue just after starting the application, using activemq-cpp

I have an application which acts as a consumer for a queue in ActiveMQ. The application is written in C++ and uses activemq-cpp for the ActiveMQ services.
What I want to achieve is this: when my application goes down and comes back up, it should first delete all the messages that were put on the queue while it was down, i.e. it should first delete all the old messages in the queue and only then start receiving new messages.
Is there any way to achieve this using activemq-cpp?
If you cast your Connection instance to an ActiveMQConnection, there is a destroyDestination method that removes the destination from the broker along with all its messages, provided there are no active subscriptions when it is called; otherwise it throws an exception, so be prepared for that. A small code snippet follows.
ActiveMQConnection* connection =
    dynamic_cast<ActiveMQConnection*>(cmsConnection);

try {
    connection->destroyDestination(destination);
} catch (Exception& ex) {
    // destroyDestination throws if the destination still has active
    // subscriptions; handle or log the error here.
}