SmallRye Kafka synchronous producer - Kotlin

I'm running a Quarkus Kafka producer inside a Lambda. The thing is, I want to block the main thread until all messages have been produced (and acknowledged) before the Lambda execution terminates. I see that I can normally use the MicroProfile Emitter's CompletionStage<Void> send(T msg); however, that overload only accepts a payload and not a Message, which I need in order to attach metadata to the outgoing Kafka records. Could you think of a way around that?
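One possible way around it: the Message-accepting overload of send returns void, but a Message can carry its own ack function, so you can complete a future of your own from it and block on that. A minimal sketch, assuming the SmallRye Kafka connector and an injected Emitter; the function name, key, and emitter wiring are illustrative:

import io.smallrye.reactive.messaging.kafka.api.OutgoingKafkaRecordMetadata
import org.eclipse.microprofile.reactive.messaging.Emitter
import org.eclipse.microprofile.reactive.messaging.Message
import java.util.concurrent.CompletableFuture

fun sendAndAwait(emitter: Emitter<String>, payload: String) {
    val acked = CompletableFuture<Void>()
    val metadata = OutgoingKafkaRecordMetadata.builder<String>()
        .withKey("my-key")   // placeholder key; topic, headers etc. work the same way
        .build()
    val msg = Message.of(payload)
        .addMetadata(metadata)
        .withAck {
            acked.complete(null)   // signal the blocked caller
            CompletableFuture.completedFuture<Void>(null)
        }
    emitter.send(msg)   // the Message overload returns void...
    acked.join()        // ...so we block on our own ack future instead
}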

Related

ActiveMQ CMS: Can messages be lost between creating a consumer and setting a listener?

Setting up a CMS consumer with a listener involves two separate calls: first, acquiring a consumer:
cms::MessageConsumer* cms::Session::createConsumer( const cms::Destination* );
and then, setting a listener on the consumer:
void cms::MessageConsumer::setMessageListener( cms::MessageListener* );
Could messages be lost if the implementation subscribes to the destination (and receives messages from the broker/router) before the listener is activated? Or are such messages queued internally and delivered to the listener upon activation?
Why isn't there an API call to create the consumer with a listener as a construction argument? (Is it because the JMS spec doesn't have it?)
(Addendum: this is probably a flaw in the API itself. A more logical order would be to instantiate a consumer from a session, and have a cms::Consumer::subscribe( cms::Destination*, cms::MessageListener* ) method in the API.)
I don't think the API is necessarily flawed. Obviously it could have been designed a different way, but I believe the solution to your alleged problem comes from the start method on the Connection object (inherited via Startable). The documentation for Connection states:
A CMS client typically creates a connection, one or more sessions, and a number of message producers and consumers. When a connection is created, it is in stopped mode. That means that no messages are being delivered.
It is typical to leave the connection in stopped mode until setup is complete (that is, until all message consumers have been created). At that point, the client calls the connection's start method, and messages begin arriving at the connection's consumers. This setup convention minimizes any client confusion that may result from asynchronous message delivery while the client is still in the process of setting itself up.
A connection can be started immediately, and the setup can be done afterwards. Clients that do this must be prepared to handle asynchronous message delivery while they are still in the process of setting up.
This is the same pattern that JMS follows.
In any case I don't think there's any risk of message loss regardless of when you invoke start(). If the consumer is using an auto-acknowledge mode then messages should only be automatically acknowledged once they are delivered synchronously via one of the receive methods or asynchronously through the listener's onMessage. To do otherwise would be a bug in my estimation. I've worked with JMS for the last 10 years on various implementations and I've never seen any kind of condition where messages were lost related to this.
If you want to add consumers after you've already invoked start() you could certainly call stop() first, but I don't see any problem with simply adding them on the fly.
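For illustration, the same setup order in plain JMS (which, as noted, CMS mirrors) might look like the following Kotlin sketch; the queue name is a placeholder:

import javax.jms.ConnectionFactory
import javax.jms.Message
import javax.jms.Session

fun startConsuming(factory: ConnectionFactory) {
    val connection = factory.createConnection()   // created in stopped mode: no delivery yet
    val session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
    val consumer = session.createConsumer(session.createQueue("example.queue"))
    consumer.setMessageListener { msg: Message ->
        println("received: $msg")   // nothing arrives here until start() below
    }
    connection.start()   // only now do messages begin flowing to the listener
}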

Consume message from queue after service complete the processing of previous message

I am doing a POC with RabbitMQ and have a question about how to listen to queues conditionally.
We consume messages from a queue, and each consumed message kicks off an upload process whose duration depends on the file size. When the files are large, the external service we invoke sometimes runs out of memory if new messages are consumed while uploads for previous messages are still in progress.
That said, we would like to consume the next message from the queue only once the current message has been processed completely. I am new to JMS and am wondering how to do this.
My current thought is that the code could manually pull the next message from the queue once it finishes processing the previous one, since the flow knows when processing is complete. But if the listener is only invoked manually from the code, how would it pull the very first message?
The JMS spec says that message consumers work sequentially:
The session used to create the message consumer serializes the execution of all message listeners registered with the session
If you create a MessageListener and use it with your consumer, the JMS spec states that the listener's onMessage will be invoked serially: the next message is not delivered until the listener has finished processing the previous one. So in effect each message waits until the previous one has completed.
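For instance, with a plain JMS listener (a Kotlin sketch; the queue name and upload function are placeholders), the long-running upload itself throttles delivery to one message at a time:

import javax.jms.Connection
import javax.jms.Session
import javax.jms.TextMessage

fun listenSequentially(connection: Connection, upload: (String) -> Unit) {
    val session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
    val consumer = session.createConsumer(session.createQueue("uploads"))
    consumer.setMessageListener { msg ->
        // The session delivers the next message only after this returns,
        // so each upload finishes before the next message is consumed
        upload((msg as TextMessage).text)
    }
    connection.start()
}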

Message-completion results from a consumer back to a producer

We are building an application with a microservice architecture.
The microservice architecture will follow a message-oriented pattern, with AWS SQS.
We would like to return completion results from the consumer service back to the producer service.
This is the algorithm we are considering:
1. Producer creates a message with a unique ID
2. Producer subscribes to a Redis channel that is named with the message ID
3. Producer places the message onto the SQS queue
4. Consumer removes the message from the SQS queue and performs an operation
5. Consumer publishes the results of the operation to the Redis channel that is named with the message ID
6. Producer receives the completion results and resumes execution
Is this a reasonable way to pass message-completion results from a consumer back to a producer?
After continued research, it became apparent that message queues are not part of the solution. Point #5 in this article, "...or doesn’t even care about the result..." suggests (by implication) that we are simply using the wrong approach.
We changed our design so that request ordering is not important, and we will make direct calls to AWS Lambda functions using the Invoke API.
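For reference, a synchronous call through the AWS SDK for Java v2 looks roughly like this (a Kotlin sketch; the function name is a placeholder):

import software.amazon.awssdk.core.SdkBytes
import software.amazon.awssdk.services.lambda.LambdaClient
import software.amazon.awssdk.services.lambda.model.InvokeRequest

fun invokeWorker(payload: String): String =
    LambdaClient.create().use { lambda ->
        val request = InvokeRequest.builder()
            .functionName("my-worker-function")   // hypothetical function name
            .payload(SdkBytes.fromUtf8String(payload))
            .build()
        // Synchronous invocation: the response payload carries the function's
        // result, which is exactly the completion result we wanted back
        lambda.invoke(request).payload().asUtf8String()
    }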

How to hold Mule process until JMS consume complete processing

I have JMS in my Mule flow: a producer reads records from a cache and puts them in a queue, and a consumer consumes the messages and does further processing. The following is the flow, for understanding:
Service 1 (Read data from file) -> Service 2 (put each line in cache)
-> JMS Service 3 (Producer Read data from cache line by line and put in queue) and Consumer read from queue -> Service 4
In the above flow, the JMS component makes the flow asynchronous, so as soon as the producer has put all records in the queue, a response goes back to the client saying the process has completed, even though the consumer may still be consuming messages.
I want to hold the producer from sending back the response until the consumer has consumed all the messages.
Any ideas on how to achieve this?
Since the async scope processes a copy of the message on a separate thread, the producer may be putting messages into the queue faster than the consumer is actually able to consume them.
One way I can think of to hold up the process of putting messages into the queue is to put a sleep() before it.
You can use a Groovy component with sleep() in it to hold the flow or slow down the process.
For example, if you put the following:
<scripting:component doc:name="Groovy">
    <scripting:script engine="Groovy"><![CDATA[
        sleep(10000);   // hold the flow for 10 seconds
        return message.payload;
    ]]></scripting:script>
</scripting:component>
before putting the message into the queue, the process will slow down a bit and hold the flow for 10000 ms, giving the consumer on the other side time to actually consume the messages.
Polling for completion status as described above may work OK, but there's still a risk that some transactions will not be completed within the wait time, or that you keep waiting long after all messages have been processed.
Depending on the end goal of this exercise, you could perhaps leverage Mule batch, which already implements the splitting of the inbound request into individual messages, processing the messages in one or multiple consumer threads, keeping track of the chunks processed and remaining, and reporting the results / executing final steps once all data is processed.
If you can't use batch and need to reassemble the processed messages into a single list or map, you may be able to get the Collection Aggregator to do the job of tracking the messages by correlation ID and setting the timeouts.
The crude DIY way to implement it is to build some sort of dispatcher logic for the JMS publishing component. It will submit all messages to JMS, then wait for each consumer / worker thread to respond back (via a separate JMS queue) with a completion message carrying the same correlation ID. The dispatcher will then track all submitted / processed messages in in-memory or persistent storage and respond back once the last message in the batch has been acknowledged, or after a pre-defined timeout. Which is very close to what Mule batch already does.
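For illustration, a rough sketch of such a dispatcher in plain JMS (Kotlin; the queue names are made up, and a real version would add the timeout handling just described):

import java.util.UUID
import java.util.concurrent.CountDownLatch
import javax.jms.Connection
import javax.jms.Session

fun dispatchAndWait(connection: Connection, records: List<String>) {
    // Two sessions: a JMS session must not be shared between the
    // listener's delivery thread and the publishing thread
    val sendSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
    val replySession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
    val producer = sendSession.createProducer(sendSession.createQueue("work"))
    val pending = records.associateBy { UUID.randomUUID().toString() }
    val done = CountDownLatch(pending.size)

    // Workers reply on a separate queue, echoing the correlation ID
    val replies = replySession.createConsumer(replySession.createQueue("work.completed"))
    replies.setMessageListener { msg ->
        if (pending.containsKey(msg.getJMSCorrelationID())) done.countDown()
    }
    connection.start()

    for ((correlationId, record) in pending) {
        val msg = sendSession.createTextMessage(record)
        msg.setJMSCorrelationID(correlationId)
        producer.send(msg)
    }
    done.await()   // respond to the client only after the last completion arrives
}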
Cheers!
Dima
You can set the exchange pattern to request-response so that the flow will wait for a response from JMS.

Instruct RabbitMQ to resend undelivered messages periodically

Background
We're using langohr to interact with RabbitMQ. We've tried two different approaches to let RabbitMQ resend messages that have not yet been properly handled by our service. One way that works is to send a basic.nack with requeue set to true, but this resends the message immediately until the service responds with a basic.ack. That is problematic if, for example, the service tries to persist the message to a datastore that is currently down (and stays down for a while).

It would be better for us to just fetch the undelivered messages, say, every 20 seconds or so (i.e. we do neither a basic.ack nor a basic.nack if the datastore is down; we just let the messages be retained in the queue). We've tried to implement this using an ExecutorService whose gist is implemented like this:
(let [chan (lch/open conn)] ; We create a new channel since channels in Langohr are not thread-safe
  (log/info "Triggering \"recover\" for channel" chan)
  (try
    (lb/recover chan)
    (catch Exception e (log/error "Failed to call recover" e))
    (finally (lch/close chan))))
Unfortunately this doesn't seem to work (the messages are not redelivered and just remain in the queue). If we restart the service, the queued messages are consumed correctly. However, we have other services implemented with spring-rabbitmq (in Java) that seem to take care of this out of the box. I've tried looking in the source code to figure out how they do it, but I haven't managed to yet.
Question
How do you instruct RabbitMQ to (re-)deliver messages in the queue periodically (preferably using Langohr)?
I am not sure what you are doing with your Spring AMQP apps, but there's nothing built into RabbitMQ for this.
However, it's pretty easy to set up dead-lettering using a TTL to requeue back to the original queue after some period of time. See this answer for examples, links etc.
EDIT
However, Spring AMQP does have a retry interceptor which can be configured to suspend the consumer thread for some period(s) during retry.
Stateful retry rejects and requeues; stateless retry handles the retries internally and has no interaction with the broker during retries.
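For example, a stateless retry interceptor with back-off can be built roughly like this (a Kotlin sketch; the attempt count and intervals are placeholders):

import org.springframework.amqp.rabbit.config.RetryInterceptorBuilder
import org.springframework.amqp.rabbit.retry.RejectAndDontRequeueRecoverer

// Retries happen in-memory on the consumer thread; the broker sees nothing
// until the final attempt fails and the recoverer rejects the message
val retryAdvice = RetryInterceptorBuilder.stateless()
    .maxAttempts(5)
    .backOffOptions(1000, 2.0, 20000)   // initial 1s, doubling, capped at 20s
    .recoverer(RejectAndDontRequeueRecoverer())
    .build()
// then set it as advice on the listener container, e.g. container.setAdviceChain(retryAdvice)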
See this answer, which has instructions: we nack the message, the nack puts the message into a holding queue for N seconds, then it TTLs out of that queue and is dead-lettered back into the original queue.
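That topology boils down to a couple of queue arguments; a rough sketch with the RabbitMQ Java client in Kotlin (queue names and the delay are placeholders):

import com.rabbitmq.client.ConnectionFactory

fun declareRetryTopology() {
    val factory = ConnectionFactory()   // assumes a local broker
    factory.newConnection().use { conn ->
        conn.createChannel().use { ch ->
            // Work queue: a basic.nack with requeue=false dead-letters to the holding queue
            ch.queueDeclare("work", true, false, false, mapOf(
                "x-dead-letter-exchange" to "",   // default exchange routes by queue name
                "x-dead-letter-routing-key" to "work.hold"
            ))
            // Holding queue: messages sit out the TTL, then dead-letter back to "work"
            ch.queueDeclare("work.hold", true, false, false, mapOf(
                "x-message-ttl" to 20000,   // retry delay in ms
                "x-dead-letter-exchange" to "",
                "x-dead-letter-routing-key" to "work"
            ))
        }
    }
}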
It took a little bit of work to setup, but it works great!