ActiveMQ sometimes loses a message property when using the broker redelivery plugin

We are using ActiveMQ 5.8.0 in combination with Spring JMS 3.2.3.RELEASE. The application listens to a queue using the Spring DefaultMessageListenerContainer, and objects are stored in an ActiveMQTextMessage using the Spring MappingJackson2MessageConverter.
We also use the ActiveMQ broker redelivery plugin, which is configured in activemq.xml. Messages are persisted in an Oracle database, also configured in activemq.xml.
Now we have the following problem: in about 60% of the messages that are redelivered, the javaClass property that is set on the message by the MappingJackson2MessageConverter is missing. This causes the following stack trace:
org.springframework.jms.support.converter.MessageConversionException: Could not find type id property [javaClass]
at org.springframework.jms.support.converter.MappingJackson2MessageConverter.getJavaTypeForMessage(MappingJackson2MessageConverter.java:360)
at org.springframework.jms.support.converter.MappingJackson2MessageConverter.fromMessage(MappingJackson2MessageConverter.java:176)
When examining the persisted messages in our database we see that the javaClass property is indeed missing. But the redeliveryDelay and AMQ_SCHEDULED_DELAY properties that are set by the RedeliveryPlugin are present.
Could this be caused by a bug in ActiveMQ? I see that RedeliveryPlugin.scheduleRedelivery() sets marshalledProperties to null, which should be correct behaviour to ensure that all properties are serialised again in Message.beforeMarshall(). But the properties map is created lazily: if no method that populates it (e.g. getProperty()) has been called yet, nulling marshalledProperties wipes out any properties that were set before the redelivery plugin ran.
The following unit test proves that this scenario is possible with the current implementation of org.apache.activemq.command.Message:
@Test
public void testGetPropertyAfterUnmarshallAndNullMarshalledProperties() throws IOException {
    // TYPEID_PROPERTY is "javaClass"; TEST_CLASS is an arbitrary class name
    ActiveMQTextMessage msg = new ActiveMQTextMessage();
    msg.setProperty(TYPEID_PROPERTY, TEST_CLASS);
    OpenWireFormat wireFormat = new OpenWireFormat(CommandTypes.PROTOCOL_VERSION);
    msg.beforeMarshall(wireFormat);
    ByteSequence byteSequence = wireFormat.marshal(msg);
    ActiveMQTextMessage unmarshalled = (ActiveMQTextMessage) wireFormat.unmarshal(byteSequence);
    // simulate RedeliveryPlugin behaviour
    unmarshalled.setMarshalledProperties(null);
    // the property set before marshalling is now gone
    assertNull(unmarshalled.getProperty(TYPEID_PROPERTY));
}
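For contrast, here is a companion test (a sketch following the same reasoning, assuming that Message.getProperty() performs the lazy unmarshalling described above): touching the properties first forces them into the property map, after which nulling marshalledProperties loses nothing:
@Test
public void testGetPropertyBeforeNullMarshalledProperties() throws IOException {
    ActiveMQTextMessage msg = new ActiveMQTextMessage();
    msg.setProperty(TYPEID_PROPERTY, TEST_CLASS);
    OpenWireFormat wireFormat = new OpenWireFormat(CommandTypes.PROTOCOL_VERSION);
    msg.beforeMarshall(wireFormat);
    ByteSequence byteSequence = wireFormat.marshal(msg);
    ActiveMQTextMessage unmarshalled = (ActiveMQTextMessage) wireFormat.unmarshal(byteSequence);
    // getProperty() triggers the lazy unmarshalling into the property map first
    unmarshalled.getProperty(TYPEID_PROPERTY);
    // nulling marshalledProperties is now harmless
    unmarshalled.setMarshalledProperties(null);
    assertEquals(TEST_CLASS, unmarshalled.getProperty(TYPEID_PROPERTY));
}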
Has anyone seen this kind of behaviour when using the broker redelivery plugin? We started using the broker plugin because we were experiencing problems with the client redelivery mechanism, so going back to that is not an option.

Related

Execute Spring Integration flows in parallel

If I have a simple IntegrationFlow like this:
@Bean
public IntegrationFlow downloadFlow() {
    return IntegrationFlows.from("rabbitQueue1")
            .handle(longRunningMessageHandler)
            .channel("rabbitQueue2")
            .get();
}
... and if rabbitQueue1 is filled with messages,
what should I do to handle multiple messages at the same time? Is that possible?
It seems that by default, the handler executes one message at a time.
Yes, that's true: by default, endpoints are wired with a DirectChannel. That's like executing plain Java instructions one by one, so to do some work in parallel in Java you need an Executor to shift the call to a separate thread.
The same is possible with Spring Integration via an ExecutorChannel. You can define rabbitQueue1 as an ExecutorChannel bean, or use this instead of the plain channel name:
IntegrationFlows.from(MessageChannels.executor("rabbitQueue1", someExecutorBean))
All messages arriving on this channel are then handed off to the threads provided by the executor, and longRunningMessageHandler will process your messages in parallel.
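Put together, the flow from the question would look something like this (a sketch; the flowExecutor bean and its pool size are illustrative assumptions, not part of the original answer):
@Bean
public TaskExecutor flowExecutor() {
    // the pool size bounds how many messages are processed concurrently
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(4);
    return executor;
}

@Bean
public IntegrationFlow downloadFlow() {
    return IntegrationFlows
            .from(MessageChannels.executor("rabbitQueue1", flowExecutor()))
            .handle(longRunningMessageHandler)
            .channel("rabbitQueue2")
            .get();
}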
See more info in the Reference Manual: https://docs.spring.io/spring-integration/docs/current/reference/html/#channel-implementations

Kafka error handling: Processor.output().send(message, kafkaTimeoutInMS) always returns true and is async

Maybe this issue has already been reported and resolved; I didn't find a solution or any open issue that talks about it, so I'm creating a new one.
I am trying to handle errors while publishing data to a Kafka topic.
With Spring Cloud Stream Kafka we are pushing to Kafka using this:
if (processor.output().send(message, kafkaTimeoutInMS) && acknowledgment != null) {
    LOGGER.debug("Acknowledgment provided");
    LOGGER.info("Sending to Kafka successful");
    acknowledgment.acknowledge();
} else {
    LOGGER.error("Sending to Kafka failed for message {}", message);
}
The send() method always returns true. I tried stopping Kafka manually while running in debug mode, but it still returns true. I have read that it is asynchronous.
I tried setting the producer to synchronous mode:
bindings:
  output:
    producer:
      sync: true
This didn't help.
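(For reference, sync is a Kafka binder producer property, so the full path in application.yml would presumably look like the sketch below; the channel name output is assumed from the code above and worth verifying against your binding names:)
spring:
  cloud:
    stream:
      kafka:
        bindings:
          output:
            producer:
              sync: true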
I do see some errors in the logs, but nothing I can use in my logic to decide between failure and success.
We are acknowledging manually, so we are only supposed to acknowledge when the message has been sent to the topic successfully, and we need to log all failed messages.
Any suggestions?
I believe you've misinterpreted how spring-cloud-stream works.
As a framework, there is a certain contract between the user and the framework, and when it comes to messaging, acks, retries, DLQs and many other aspects are handled automatically to ensure the user doesn't have to deal with them manually (as you are trying to do).
Consider spending a little time and going through the user guide - https://docs.spring.io/spring-cloud-stream/docs/Fishtown.M3/reference/htmlsingle/
Also, here is a very basic example that demonstrates the typical interaction of the user (developer) with the framework. As you can see, all you're doing is implementing a simple handler which receives and returns a piece of data. The rest (the actual receive from Kafka and send to Kafka or any other messaging system) is handled by the framework-provided binders.
@SpringBootApplication
@EnableBinding(Processor.class)
public class ProcessorApplication {

    public static void main(String[] args) {
        SpringApplication.run(ProcessorApplication.class);
    }

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String echo(String message) {
        return message;
    }
}

Override SimpleMessageListenerContainer.setDefaultRequeueRejected(false) behavior

We have been using a framework on top of Spring AMQP, where the framework has set SimpleMessageListenerContainer.setDefaultRequeueRejected(false), which means that by default messages will not be requeued when the consumer throws an exception.
Is there any way I can change this behavior without changing the setting to setDefaultRequeueRejected(true)?
If you mean you want the container to not requeue by default but to requeue for some exceptions, the only way to do that is to set defaultRequeueRejected to true (the default) and use a custom error handler.
The default ConditionalRejectingErrorHandler is configured with a default FatalExceptionStrategy that treats certain unrecoverable exceptions as fatal (message conversion exceptions, etc.). When these exceptions are thrown, the message is rejected and not requeued.
You can provide a custom FatalExceptionStrategy to the error handler: since version 1.6.3 you can inject an instance of a subclass of ConditionalRejectingErrorHandler.DefaultExceptionStrategy and override isUserCauseFatal(). This lets you decide which exceptions are fatal (reject and don't requeue) and which should be requeued. The error handler achieves this by throwing AmqpRejectAndDontRequeueException, which signals the container not to requeue the message.
Prior to 1.6.3, you had to inject a complete implementation of FatalExceptionStrategy.
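A minimal sketch of that wiring (Spring AMQP 1.6.3 or later; MyUnrecoverableException is a hypothetical application exception used only for illustration):
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
// keep the default so everything not flagged as fatal is requeued
container.setDefaultRequeueRejected(true);
container.setErrorHandler(new ConditionalRejectingErrorHandler(new CustomFatalExceptionStrategy()));

public class CustomFatalExceptionStrategy
        extends ConditionalRejectingErrorHandler.DefaultExceptionStrategy {

    @Override
    protected boolean isUserCauseFatal(Throwable cause) {
        // fatal -> rejected and NOT requeued; anything else is requeued
        return cause instanceof MyUnrecoverableException; // hypothetical exception type
    }
}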

How to invoke a web service after redeliveries exhausted in Apache Camel?

I have failed to find an enterprise integration pattern or recipe that promotes a solution for this problem:
After the re-delivery attempts have been exhausted, I need to send a web service request back to the originating source, to notify the sender of a failed delivery.
Upon exhaustion of all re-delivery attempts, should I move the message to a dead letter queue? Then create a new consumer listening on that DL queue? Do I need a unique dead letter queue for each of my source message queues? Should I add a message header, noting the source queue, before I move it to the dead letter queue? If all messages go to a single dead letter queue, how should my consumer know where to send the web service request?
Can you point me to a book, blog post, or article? What is the prescribed approach?
I'm working with a really old version of Fuse ESB, but I expect solutions for ServiceMix to be equally applicable.
Or maybe, what I'm asking for is an anti-pattern or code-smell. Please advise.
If you are new to Camel and really want to get an in-depth knowledge of it, I would recommend Camel in Action, a book by Claus Ibsen. There's a second edition in the works, with 14 out of 19 chapters already done, so you may also give that a shot.
If that's a bit too much, the online documentation is pretty okay; you can pick up the basics just fine from it. For error handling, I recommend starting with the general error handling page, then moving on to the error handler docs and the exception policy documentation.
Generally, the dead letter channel is the way to go: Camel will automatically send to the DLC after retries have been exhausted; you just have to define the DLC yourself. As its name implies, it's a channel and doesn't really need to be a queue - you can write to a file, invoke a web service, submit a message to a message queue or just write to the logs; it's completely up to you.
// error-handler DLC, will send to an HTTP endpoint when retries are exhausted
errorHandler(deadLetterChannel("http4://my.webservice.host/path")
        .useOriginalMessage()
        .maximumRedeliveries(3)
        .redeliveryDelay(5000));

// exception-clause DLC, will send to an HTTP endpoint when retries are exhausted
onException(NetworkException.class)
        .handled(true)
        .maximumRedeliveries(5)
        .backOffMultiplier(3)
        .redeliveryDelay(15000)
        .to("http4://my.webservice.host/otherpath");
I myself have always preferred having a message queue and then consuming from there for any other recovery or reporting. I generally include failure details like the exchange ID and route ID, message headers, the error message and sometimes even the stack trace. The resulting message, as you can imagine, grows quite a bit, but it tremendously simplifies troubleshooting and debugging, especially in environments where you have quite a number of components and services. Here's a sample DLC message from one of my projects:
public class DeadLetterChannelMessage {

    private String timestamp = Times.nowInUtc().toString();
    private String exchangeId;
    private String originalMessageBody;
    private Map<String, Object> headers;
    private String fromRouteId;
    private String errorMessage;
    private String stackTrace;

    @RequiredByThirdPartyFramework("jackson")
    private DeadLetterChannelMessage() {
    }

    @SuppressWarnings("ThrowableResultOfMethodCallIgnored")
    public DeadLetterChannelMessage(Exchange e) {
        exchangeId = e.getExchangeId();
        originalMessageBody = e.getIn().getBody(String.class);
        headers = Collections.unmodifiableMap(e.getIn().getHeaders());
        fromRouteId = e.getFromRouteId();
        Optional.ofNullable(e.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class))
                .ifPresent(throwable -> {
                    errorMessage = throwable.getMessage();
                    stackTrace = ExceptionUtils.getStackTrace(throwable);
                });
    }

    // getters
}
When consuming from the dead letter queue, the route ID can tell you where the failure originated, so you can implement routes that are specific to handling errors coming from there:
// general DLC handling route
from("{{your.dlc.uri}}")
    .routeId(ID_REPROCESSABLE_DLC_ROUTE)
    .removeHeaders(Headers.ALL)
    .unmarshal().json(JsonLibrary.Jackson, DeadLetterChannelMessage.class)
    .toD("direct:reprocess_${body.fromRouteId}"); // send to an error-handling route

// handle errors coming from `myRouteId`
from("direct:reprocess_myRouteId")
    .log("Error: ${body.errorMessage} for ${body.originalMessageBody}");
    // you'll probably do something better here, e.g.
    // .convertBodyTo(WebServiceErrorReport.class) // requires a converter
    // .process(e -> { /* do some pre-processing, like setting headers/properties */ })
    // .toD("http4://web-service-uri/path"); // send to the web service

// for routes that have no DLC handling supplied
onException(DirectConsumerNotAvailableException.class)
    .handled(true)
    .useOriginalMessage()
    .removeHeaders(Headers.ALL)
    .to("{{my.unreprocessable.dlc}}"); // errors that cannot be recovered from

Sending Files using Active MQ with BlobMessage

I have a requirement in my application to send files from one application to another over HTTP/FTP. I found the following link, which says this can be done using ActiveMQ with support for blob messages:
activemq.apache.org/blob-messages.html
I configured ActiveMQ 5.8 on my Windows machine, included the required ActiveMQ dependency in my pom.xml, and I am able to send simple javax.jms.TextMessage and javax.jms.MapMessage messages with org.springframework.jms.core.JmsTemplate.
But when I moved on to sending a BlobMessage using the following method, a compile-time error arises when creating the BlobMessage from the javax.jms.Session object, which says:
The method createBlobMessage(File) is undefined for the type Session
Here is the method i am using:
public void sendFile() {
    jmsTemplate.send(
        new MessageCreator() {
            public Message createMessage(Session session) throws JMSException {
                // this is the line that does not compile
                BlobMessage message = session.createBlobMessage(new File("/foo/bar"));
                return message;
            }
        }
    );
}
Please help me resolve this compile-time error.
The BlobMessage methods are not JMS spec methods, so they won't appear in the javax.jms.Session interface; you need to cast to org.apache.activemq.ActiveMQSession in order to use the BlobMessage-specific functionality.
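Applied to the sendFile() method from the question, the cast would look something like this (a sketch; "/foo/bar" is the placeholder path from the question):
public void sendFile() {
    jmsTemplate.send(new MessageCreator() {
        public Message createMessage(Session session) throws JMSException {
            // cast to the ActiveMQ-specific session to reach createBlobMessage()
            ActiveMQSession amqSession = (ActiveMQSession) session;
            return amqSession.createBlobMessage(new File("/foo/bar"));
        }
    });
}
One caveat worth checking: if a caching or pooling connection factory (for example Spring's CachingConnectionFactory) hands out a proxied Session, the plain cast may fail, and you would need to unwrap the underlying ActiveMQ session first.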