Using ActiveMQ v5.8.
I am using javax.jms.MessageProducer.send() to send messages from my producer to ActiveMQ.
I want to know whether this send is synchronous or asynchronous. And what will the behavior be if I set the "useAsyncSend" flag to true?
Thanks,
Anuj
ActiveMQ sends messages in async mode by default in several cases. It is only in the cases where the JMS specification requires sync sending that it defaults to sync mode: namely, when persistent messages are being sent outside of a transaction.
If you are not using transactions and are sending persistent messages, then each send is synchronous and blocks until the broker has sent back an acknowledgement confirming that the message has been safely persisted to disk. This ack guarantees that the message will not be lost, but it also incurs a significant latency penalty since the client is blocked.
See the documentation on this at the ActiveMQ site.
Yes, by default a send() is synchronous (for persistent queues/topics; async otherwise) and will block until an ACK has been received.
With useAsyncSend=true, send() will not block.
per http://activemq.apache.org/connection-configuration-uri.html
Async sends add a massive performance boost, but mean that the send() method will return immediately whether the message has been sent or not, which could lead to message loss.
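For reference, a minimal sketch of enabling async sends with the Java client (broker URL and queue name are placeholders; the flag can be set either on the connection URI or programmatically):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

// Option 1: via the connection URI
ActiveMQConnectionFactory factory =
        new ActiveMQConnectionFactory("tcp://localhost:61616?jms.useAsyncSend=true");
// Option 2: programmatically
// factory.setUseAsyncSend(true);

Connection connection = factory.createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));
producer.send(session.createTextMessage("hello")); // returns without blocking on the broker ack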
Setting up a CMS consumer with a listener involves two separate calls: first, acquiring a consumer:
cms::MessageConsumer* cms::Session::createConsumer( const cms::Destination* );
and then, setting a listener on the consumer:
void cms::MessageConsumer::setMessageListener( cms::MessageListener* );
Could messages be lost if the implementation subscribes to the destination (and receives messages from the broker/router) before the listener is activated? Or are such messages queued internally and delivered to the listener upon activation?
Why isn't there an API call to create the consumer with a listener as a construction argument? (Is it because the JMS spec doesn't have it?)
(Addendum: this is probably a flaw in the API itself. A more logical order would be to instantiate a consumer from a session, and have a cms::Consumer::subscribe( cms::Destination*, cms::MessageListener* ) method in the API.)
I don't think the API is necessarily flawed. Obviously it could have been designed differently, but I believe the solution to the problem you describe lies in the start method on the Connection object (inherited via Startable). The documentation for Connection states:
A CMS client typically creates a connection, one or more sessions, and a number of message producers and consumers. When a connection is created, it is in stopped mode. That means that no messages are being delivered.
It is typical to leave the connection in stopped mode until setup is complete (that is, until all message consumers have been created). At that point, the client calls the connection's start method, and messages begin arriving at the connection's consumers. This setup convention minimizes any client confusion that may result from asynchronous message delivery while the client is still in the process of setting itself up.
A connection can be started immediately, and the setup can be done afterwards. Clients that do this must be prepared to handle asynchronous message delivery while they are still in the process of setting up.
This is the same pattern that JMS follows.
In any case, I don't think there's any risk of message loss regardless of when you invoke start(). If the consumer is using an auto-acknowledge mode, then messages should only be automatically acknowledged once they are delivered synchronously via one of the receive methods or asynchronously through the listener's onMessage; to do otherwise would, in my estimation, be a bug. I've worked with JMS for the last 10 years on various implementations, and I've never seen messages lost in this kind of situation.
If you want to add consumers after you've already invoked start(), you could certainly call stop() first, but I don't see any problem with simply adding them on the fly.
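Since the answer above notes that CMS follows the same pattern as JMS, here is a rough Java/JMS sketch of that setup order (broker URL and queue name are placeholders):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = factory.createConnection(); // connection begins in stopped mode
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.QUEUE"));
consumer.setMessageListener(message -> {
    // handle the message; nothing is delivered until start() is called below
});
connection.start(); // delivery begins only now, so the listener cannot miss messages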
From my understanding, RabbitMQ producers require acknowledgement when sending messages to the broker, which provides a delivery guarantee. Kafka producers do not require acknowledgement from the broker. Does that mean there is no delivery guarantee with Kafka? If not, how does Kafka provide a delivery guarantee without acknowledgement?
Is my understanding correct? Please correct any misunderstandings that I have as I’m still learning about these systems.
Kafka is actually flexible about this.
The number of acknowledgements for producers is configurable via a setting called RequiredAcks. In fact, RequiredAcks is set at the ProduceRequest level, but I've never seen an implementation where a single producer instance allows producing messages with different RequiredAcks settings.
RequiredAcks is an integer value meaning "how many acknowledgements the broker should wait for before responding to a produce request".
Setting RequiredAcks to 0 (VERY much not recommended for production) means "fire and forget", i.e. the broker responds immediately, without waiting until the data is written to the log. This is the case where you could lose messages without even knowing it.
Setting RequiredAcks to 1 means "wait until the data is written to the local log", where the local log is the log of the broker that received the request. Once the data is written to the local log, the broker responds.
Setting RequiredAcks to -1 means "wait until the data is written to the local log AND replicated to all in-sync replicas (ISRs)".
Each ProduceRequest also has a Timeout field, meaning "the maximum time to wait for the necessary number of acknowledgements".
So Kafka supports acknowledging requests but allows turning acknowledgements off.
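In the Java producer shipped with Kafka, the protocol-level RequiredAcks is exposed as the acks setting; a minimal sketch, with broker address and serializers as placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
// "0" = fire and forget, "1" = leader's local log only, "all" (equivalent to -1) = all ISRs
props.put(ProducerConfig.ACKS_CONFIG, "all");
Producer<String, String> producer = new KafkaProducer<>(props);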
In 0.9.0.0 and above, producer#send returns a Future from which you can get the offset of the message in the broker's partition. Meanwhile, you can implement a Callback: if there is no exception, it means the message has been sent to the correct broker.
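Continuing the sketch above, both styles look roughly like this (topic name and payload are placeholders; checked-exception handling is omitted):

ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "payload");

// Blocking style: get() waits for the broker's response and exposes the offset
RecordMetadata metadata = producer.send(record).get();
System.out.println("written at offset " + metadata.offset());

// Async style: the callback fires with either metadata or an exception
producer.send(record, (meta, exception) -> {
    if (exception == null) {
        System.out.println("acked at offset " + meta.offset());
    } else {
        exception.printStackTrace(); // the send failed
    }
});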
I want to ensure that certain kinds of messages cannot be lost, hence I should use Confirms (aka Publisher Acknowledgements).
The broker loses persistent messages if it crashes before said messages are written to disk. Under certain conditions, this causes the broker to behave in surprising ways.
For instance, consider this scenario:
a client publishes a persistent message to a durable queue
a client consumes the message from the queue (noting that the message is persistent and the queue durable), but doesn't yet ack it,
the broker dies and is restarted, and
the client reconnects and starts consuming messages.
At this point, the client could reasonably assume that the message will be delivered again. This is not the case: the restart has caused the broker to lose the message. In order to guarantee persistence, a client should use confirms.
But what if, using confirms, the publisher goes down before receiving the ack, and the message wasn't delivered to the queue for some reason (e.g. a network failure)?
Suppose we have a simple REST endpoint where we can POST new COMMENTs, and when a new COMMENT is created we want to publish a message to a queue. (Note: it doesn't matter if I send a message about a new COMMENT that in the end isn't created, due to a rollback for example.)
CommentEndpoint {
    Channel channel;

    post(String comment) {
        channel.publish("comments-queue", comment) // "comments-queue" is a durable queue
        Comment aNewComment = new Comment(comment)
        repository.save(aNewComment)
        // what happens if the server where this publisher is running terminates here?
        channel.waitConfirmations()
    }
}
When the server restarts, the channel is gone and the message may never have been delivered.
One solution that comes to my mind is, after a restart, to query the recent comments in the repository (something like the comments created in the 3 minutes before the crash?), send one message for each, and await confirmations.
What you are worried about is really no longer a RabbitMQ-only issue; it is a distributed-transaction issue. This discussion gives one reasonable lightweight solution, and there are stricter solutions, for instance two-phase commit, three-phase commit, etc., to ensure data consistency when it is really necessary.
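For reference, a minimal sketch of publisher confirms with the RabbitMQ Java client (host, queue name, and timeout are placeholders):

import com.rabbitmq.client.*;

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();

channel.confirmSelect(); // put the channel in confirm mode
channel.queueDeclare("comments-queue", true, false, false, null); // durable queue
channel.basicPublish("", "comments-queue",
        MessageProperties.PERSISTENT_TEXT_PLAIN, // mark the message persistent
        "a new comment".getBytes());
channel.waitForConfirmsOrDie(5000); // throws if the broker has not confirmed in time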
I would like to know if there are any implications to using concurrentStoreAndDispatchQueues=true with persistent messages when guaranteed ordering is needed.
We are using KahaDB with persistent messages, we need guaranteed ordering, and we are also using JMSXGroupID.
Are there any implications to setting this to true? Is message loss possible?
Any help or clarification about the concurrentStoreAndDispatchQueues property (or its topic counterpart, concurrentStoreAndDispatchTopics) would be helpful.
Thanks .
I think the concurrentStoreAndDispatchQueues option improves the performance of message consumption from ActiveMQ queues, but it is less reliable than synchronous store and dispatch.
With concurrent store and dispatch, the broker does not wait for acknowledgements from the consumer or from message storage. It dispatches the message to consumers and to the message-storage thread in parallel, and immediately sends an acknowledgement back to the message producer.
So there is a chance of messages being lost in the event of message-storage disk issues.
Please refer to the documentation from Fuse ESB, which explains the same concept:
https://access.redhat.com/documentation/en-US/Fuse_ESB/4.4.1/html/ActiveMQ_Tuning_Guide/files/PersTuning-SerialToDisk.html
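As an illustration, for an embedded broker the flag can be set on the KahaDB persistence adapter (the equivalent in activemq.xml is the concurrentStoreAndDispatchQueues attribute on the kahaDB element); a sketch, assuming durability is preferred over throughput:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

BrokerService broker = new BrokerService();
KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
kahaDB.setConcurrentStoreAndDispatchQueues(false); // wait for the disk write before acking producers
broker.setPersistenceAdapter(kahaDB);
broker.start();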
I'm using ActiveMQ for C++.
In our planned design, we're going to consume messages, pass them on to some asynchronous processing, and only then consider the message handled.
We'd like to process more than one message in parallel (each will finish its processing at a different time) and ack only those that have finished processing, in order to avoid losing messages when the server goes down, the process crashes, etc.
From both documentation and testing, I understand that in both CLIENT_ACKNOWLEDGE and SESSION_TRANSACTED modes, there's no way to ack only one message.
Is there a best practice for such cases? Should I hold a "session pool", each session handles one message at a time synchronously and then acks it?
Thanks.
When you create a session you can use the cms::Session acknowledgement mode INDIVIDUAL_ACKNOWLEDGE, which allows you to ack a single message. The cms::Message object has an acknowledge method you can use to ack each message individually.
cms::Session* session = connection.createSession(cms::Session::INDIVIDUAL_ACKNOWLEDGE);
To ack the Message:
cms::Message* message = consumer.receive();
message->acknowledge();
Although I have never actually implemented a concurrent consumer in C++ for ActiveMQ, this is how you would normally handle such cases in Java.
Create a number of threads, each with its own session and message listener, that read messages off the queue, do the processing, and then commit the transaction (or ack, if you don't want transactions), as sketched below.
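A rough Java sketch of that pattern, with the thread count, broker URL, and queue name as illustrative placeholders (each session dispatches on its own thread, so a slow message only delays its own session):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = factory.createConnection();
for (int i = 0; i < 4; i++) { // four parallel consumers, purely illustrative
    Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
    MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.QUEUE"));
    consumer.setMessageListener(message -> {
        try {
            // ... process the message here ...
            session.commit(); // ack only after processing has finished
        } catch (Exception e) {
            try { session.rollback(); } catch (JMSException ignored) {}
        }
    });
}
connection.start();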