Spring Boot RabbitListener rollback after queue TTL - rabbitmq

Is there some way to roll back a transaction executed inside a @RabbitListener method? I have something like the following:

@Autowired
private SomeService service;

@RabbitListener(queues = {"some-queue-ttl-5s"})
@Transactional
public SomeOutputMessage handleMessageQueue(SomeInputMessage inputMessage) {
    // do something
    return service.processInput(inputMessage); // JPA update transaction
}
I tried using @Transactional, but even after the message expires (x-message-ttl), processInput still commits the transaction. I wonder if we can roll back the processing and send the inputMessage to a DLX.

Related

Repository save not rolled back if send message fails

Given the following code:

@RabbitListener
public void process(Message myMessage) {
    Event event = ... // get event from myMessage
    handleMessage(event);
}

@Transactional
public void handleMessage(Event event) {
    ObjectToSend objectToSend = ... // get objectToSend from event
    rabbitTemplate.convertAndSend(exchange1, routingKey1, objectToSend); // line 1: suppose that at this point rabbit is still connected
    persistService.save(new MyEntity()); // line 2
    doSomethingElse(); // line 3: suppose that at this point rabbit is disconnected (network failure)
}
I notice that if persistService.save fails, then:
- objectToSend is not sent (and this is fine)
- the original myMessage in the RabbitListener is sent to the DLQ (and this is fine)
But if persistService.save succeeds and convertAndSend fails (because of a rabbit server connection failure after persistService.save), the original myMessage goes to the DLQ (this is OK), but the problem is that MyEntity is not rolled back.
What am I doing wrong?
persistService.save(myEntity) should be executed ONLY IF rabbitTemplate.convertAndSend has REALLY been sent.
The only solution I found is to use "publisher confirms": block after convertAndSend(message, correlationData) using correlationData.getFuture() and future.get (possibly with a timeout), and only after a positive confirm is received proceed to invoke persistService.save().
Is this the right solution? (I suspect it could be slow.)
Consider also that if the publish of objectToSend fails, I must reject myMessage to the DLQ.
Thank you
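The blocking-on-confirm approach described above might look roughly like this sketch (the 5-second timeout is an assumption, the checked exceptions from get() are omitted for brevity, and the connection factory must have publisher confirms enabled):

```java
import java.util.UUID;
import java.util.concurrent.TimeUnit;
import org.springframework.amqp.AmqpRejectAndDontRequeueException;
import org.springframework.amqp.rabbit.connection.CorrelationData;

// ...inside handleMessage(Event event):
CorrelationData correlationData = new CorrelationData(UUID.randomUUID().toString());
rabbitTemplate.convertAndSend(exchange1, routingKey1, objectToSend, correlationData);

// Block until the broker confirms the publish (or the timeout expires).
CorrelationData.Confirm confirm =
        correlationData.getFuture().get(5, TimeUnit.SECONDS);
if (confirm == null || !confirm.isAck()) {
    // Negative confirm: reject so the original message goes to the DLQ.
    throw new AmqpRejectAndDontRequeueException("publish was not confirmed");
}
persistService.save(new MyEntity()); // only runs after a positive confirm
```

As the answer below notes, this round trip to the broker per message has a real latency cost.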
You can also set channelTransacted on the template, but the performance will be similar to waiting for the confirmation because it also requires a round trip to the broker. That will use a local transaction, which will commit when the send completes.
Or, add your transaction manager to the listener container and it will synchronize the rabbit transaction with the DB transaction.
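A minimal sketch of that last suggestion, assuming Spring Boot-style configuration (the bean wiring shown is an assumption, not the asker's code): give the listener container factory a transacted channel and the JPA/DataSource PlatformTransactionManager, so the rabbit transaction is synchronized with the DB transaction and a DB rollback also rolls back (and requeues) the delivery:

```java
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.transaction.PlatformTransactionManager;

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        ConnectionFactory connectionFactory,
        PlatformTransactionManager transactionManager) { // e.g. the JPA tx manager
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setChannelTransacted(true);                // use a transacted channel
    factory.setTransactionManager(transactionManager); // synchronize rabbit tx with the DB tx
    return factory;
}
```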

Java EE 7 - will new threads join ongoing transaction and commit/rollback at end of starting thread?

doStuff(tasks);
...

@Transactional
public void doStuff(List<Task> tasks) {
    // run async
    for (Task task : tasks)
        new Thread(() -> localServiceB.handle(task)).start();
}

...

@Transactional
public class LocalServiceB {
    public void handle(Task task) { ... }
}
- When will the outer transaction end?
- Can handle() join the transaction from doStuff?
- How can a single transaction span doStuff and all forked threads until the last handle has finished, even if doStuff has already finished? (Hint: I prefer doStuff NOT to wait for the threads.)
All tasks run without an explicit transaction (they do not enlist in the application component's transaction).
https://docs.oracle.com/javaee/7/api/javax/enterprise/concurrent/ManagedExecutorService.html
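The practical implication can be sketched like this (the ManagedExecutorService wiring and names are illustrative, not from the question): work submitted from a transactional method does not run inside that transaction, so each task must manage its own:

```java
import java.util.List;
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.transaction.Transactional;

// Fragment: assumes CDI-managed beans and a Java EE 7 container.
@Resource
private ManagedExecutorService executor; // container-managed threads

@Transactional
public void doStuff(List<Task> tasks) {
    for (Task task : tasks) {
        // Runs OUTSIDE doStuff's transaction; handle(task) must begin its
        // own transaction (e.g. via @Transactional on a separate bean).
        executor.submit(() -> localServiceB.handle(task));
    }
    // The outer transaction commits when doStuff returns, whether or not
    // the submitted tasks have finished.
}
```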

handle separate transaction in java batch (JSR-352)

I'm using the JBeret implementation of the JSR-352 Java batch spec.
I need a separate transaction for a single update, something like this:
public class MyItemWriter implements ItemWriter {

    @Inject
    UserTransaction transaction;

    void resetLastProductsUpdateDate(String uidCli) throws BusinessException {
        try {
            if (transaction.getStatus() != Status.STATUS_ACTIVE) {
                transaction.begin();
            }
            final Customer customer = dao.findById(uidCli);
            customer.setLastUpdate(null);
            dao.persist(customer);
            transaction.commit();
        } catch (RollbackException | HeuristicMixedException | HeuristicRollbackException
                | SystemException | NotSupportedException e) {
            logger.error("error while updating user products last update");
            throw new BusinessException();
        }
    }
}
I first tried marking the resetLastProductsUpdateDate method as @Transactional(REQUIRES_NEW), but it didn't work.
My question is:
Is there any more elegant way to achieve this singular transaction without manually handle of transaction?
While UserTransaction works, EntityManager.getTransaction() doesn't, and I don't understand why.
The class below, which is injected from a Batchlet, works properly; why can't I make the @Transactional annotation work on the resetLastProductsUpdateDate method instead?
public class DynamicQueryDAO {

    @Inject
    EntityManager entityManager;

    @Inject
    private Logger logger;

    @Transactional(Transactional.TxType.REQUIRED)
    public void executeQuery(String query) {
        logger.info("executing query: {}", query);
        final int output = entityManager.createNativeQuery(query).executeUpdate();
        logger.info("rows updated: {}", output);
    }
}
EDIT
Actually, I guess UserTransaction isn't a good solution either, because it affects the entire ItemWriter's transaction management. I still don't know how to deal with the transaction isolation :(
In general, a batch application should avoid handling transactions directly. You can have your batch component throw a business exception under certain conditions and configure your job.xml to trigger a retry on that exception. During the retry, each individual item will be processed and committed in its own chunk.
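The retry approach might be configured in job.xml roughly like this sketch (the artifact refs, limits, and exception class name are assumptions):

```xml
<chunk item-count="10" retry-limit="3">
    <reader ref="myItemReader"/>
    <writer ref="myItemWriter"/>
    <retryable-exception-classes>
        <include class="com.example.BusinessException"/>
    </retryable-exception-classes>
</chunk>
```

When a retryable exception is thrown, the runtime rolls back the current chunk and reprocesses its items one at a time, each in its own transaction.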

Don't requeue if transaction fails

I have a job with the following config:
@Autowired
private ConnectionFactory connectionFactory;

@Bean
Step step() {
    return steps.get("step")
            .<Person, Person>chunk(chunkSize)
            .reader(reader())
            .processor(processor())
            .writer(writer())
            .build();
}

@Bean
ItemReader<Person> reader() {
    return new AmqpItemReader<>(amqpTemplate());
}

@Bean
AmqpTemplate amqpTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setChannelTransacted(true);
    return rabbitTemplate;
}
Is it possible to change the behavior of RabbitResourceHolder so it does not requeue the message in case of a transaction rollback? Does that make sense in Spring Batch?
Not when using an external transaction manager; the whole point of rolling back a transaction is to put things back the way they were before the transaction started.
If you don't use transactions (or just use a local transaction - via setChannelTransacted(true) and no transaction manager), you (or an ErrorHandler) can throw an AmqpRejectAndDontRequeueException (or set defaultRequeueRejected to false on the container) and the message will go to the DLQ.
I can see that this is inconsistent; the RabbitMQ documentation says:
On the consuming side, the acknowledgements are transactional, not the consuming of the messages themselves.
So rabbit itself does not requeue the delivery but, as you point out, the resource holder does (the container, however, will reject the delivery when there is no transaction manager and one of the two conditions I described is true).
I think we need to provide at least an option for the behavior you want.
I opened AMQP-711.
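The no-external-transaction-manager variant from the answer might look like this sketch (queue name and bean wiring are assumptions): a locally transacted container with defaultRequeueRejected set to false sends failed deliveries to the DLQ instead of requeueing them:

```java
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
import org.springframework.context.annotation.Bean;

@Bean
public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setQueueNames("some-queue");
    container.setChannelTransacted(true);       // local transaction only, no external tx manager
    container.setDefaultRequeueRejected(false); // rejected deliveries go to the DLQ
    return container;
}
```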

What happens to messages sent/published in a Handler that fails with an exception?

I've read that each message handler is wrapped in an "ambient transaction", and that database access is automatically enlisted in that transaction when possible. Does NServiceBus do anything else with that transaction? Specifically, I'm wondering if it can somehow cancel any messages that a handler sends/publishes in the case of an exception.
In the code below, does the bus Send the ArchiveMessage as soon as the Send method is called, or does it queue it up and only send it if the handler executes successfully?
public class BadHandler
{
    public IBus Bus { get; set; }

    public void Handle(MyMessage msg)
    {
        Bus.Send(new ArchiveMessage(msg.MessageId)); // does this message send?
        throw new Exception("Something terrible happened, maybe my database connection failed!");
    }
}
In this case the message would not be sent. MyMessage will be retried the configured number of times and then moved to the designated error queue. You can have greater control over that process if you wish; you would need to create a custom FaultManager.