ActiveMQ persistent store is full and consumer is blocked

I'm doing a test to see how flow control behaves. I created a fast producer and slow consumers and set my destination queue policy's high-water mark to 60 percent.
The queue did reach 60%, so messages then went to the store; now the store is full and blocking, as expected.
But now I cannot get my consumer to connect and pull from the queue. It seems that blocking is also preventing the consumer from connecting to start pulling from the queue.
Is this the correct behavior?
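For reference, the setup described in the question corresponds to a per-destination policy along these lines (the attribute values here are illustrative, not taken from the question):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- cursorMemoryHighWaterMark="60": past 60% of the memory limit,
           messages are paged to the persistent store; once the store is
           also full, producer flow control blocks senders -->
      <policyEntry queue=">" producerFlowControl="true"
                   cursorMemoryHighWaterMark="60" memoryLimit="64mb"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```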

The consumer should not be blocked by flow-control. Otherwise messages could not be consumed to free up space on the broker for producers to send additional messages.

This issue surfaced when I was using an on-demand JMS service. The service queues or dequeues via a REST service, and the consumers are created on demand. If the broker is blocked, as in my case when it was out of resources, then you cannot create a new consumer.
I've since modified the JMS service to use a consumer pool (implemented as an object pool pattern). The consumer pool is initialized when the application starts, and this resolved the blocking issue.
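The consumer-pool workaround can be sketched roughly as below. The class and `create_consumer` callable are hypothetical stand-ins for real JMS consumer setup; the point is only that consumers are created eagerly at startup, so nothing needs to connect while the broker is blocking.

```python
import queue

class ConsumerPool:
    """Minimal sketch of the object-pool pattern described above.
    Consumers are created eagerly when the application starts, so no
    consumer creation is attempted while the broker is blocking. The
    create_consumer callable is a stand-in for real JMS consumer setup."""

    def __init__(self, create_consumer, size):
        self._pool = queue.Queue()
        for _ in range(size):                  # eager initialization
            self._pool.put(create_consumer())

    def borrow(self, timeout=None):
        """Take a consumer from the pool (blocks if all are in use)."""
        return self._pool.get(timeout=timeout)

    def give_back(self, consumer):
        self._pool.put(consumer)

# usage with dummy consumer objects
pool = ConsumerPool(create_consumer=object, size=3)
consumer = pool.borrow()
pool.give_back(consumer)
```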

Related

Distribute work with RabbitMQ

I'm using RabbitMQ and need a message to be queued and consumed by multiple servers, but only one of the servers should confirm the message and remove it from the queue, even if other servers have consumed it without removing it. I wonder how RabbitMQ can handle this scenario?

Logstash with RabbitMQ cluster

I have a 3-node RabbitMQ cluster behind an HAProxy load balancer. When I shut down a node, RabbitMQ successfully switches the queue to the other nodes. However, I notice that Logstash stops pulling messages from the queue unless I restart it. Is this a problem with the way RabbitMQ operates, i.e. does it deactivate all active consumers? I am not sure if Logstash has any retry capability. Has anyone run into this issue?
Quoting the RabbitMQ documentation, first the page on clustering:
What is Replicated? All data/state required for the operation of a
RabbitMQ broker is replicated across all nodes. An exception to this
are message queues, which by default reside on one node, though they
are visible and reachable from all nodes.
and then the page on high availability:
Clients that are consuming from a mirrored queue may wish to know that
the queue from which they have been consuming has failed over. When a
mirrored queue fails over, knowledge of which messages have been sent
to which consumer is lost, and therefore all unacknowledged messages
are redelivered with the redelivered flag set. Consumers may wish to
know this is going to happen.
If so, they can consume with the argument x-cancel-on-ha-failover set
to true. Their consuming will then be cancelled on failover and a
consumer cancellation notification sent. It is then the consumer's
responsibility to reissue basic.consume to start consuming again.
So, what does all this mean:
You have to mirror queues
The consumers should use manual ACK
The consumers should reconnect on their own
So the answer to your question is no, it's not a problem with rabbitmq, that's simply how it works. It's up to clients to reconnect.
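The client's reconnect responsibility can be sketched as a retry wrapper like the one below. The `connect` callable is an injected, hypothetical stand-in for the real client setup (in RabbitMQ terms, re-issuing `basic.consume` after a consumer cancellation notification):

```python
import time

def consume_with_reconnect(connect, max_attempts=5, delay=0.0):
    """Sketch of the client-side reconnect responsibility: on failover
    the consumer is cancelled, so the client must re-issue its
    subscription (basic.consume in RabbitMQ terms). `connect` is an
    injected stand-in for the real client setup/subscription call."""
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()                   # (re-)subscribe
        except ConnectionError:
            if attempt == max_attempts:
                raise                          # give up after max_attempts
            time.sleep(delay)                  # pause before retrying

# usage: a stand-in connect() that fails twice before succeeding
attempts = []
def flaky_connect():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("failover in progress")
    return "consuming"

result = consume_with_reconnect(flaky_connect)
```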

RabbitMQ & Spring AMQP retry without blocking consumers

I'm working with RabbitMQ and Spring AMQP and would prefer not to lose messages. By using an exponential back-off policy for retrying, I'm potentially blocking my consumers when they could be working on messages they can handle. I'd like to give failed messages several days to retry with the exponential back-off policy, but I don't want a consumer blocked for several days; I want it to keep working on the other messages.
I know we can achieve this kind of functionality with ActiveMQ (Retrying messages at some point in the future (ActiveMQ)), but I could not find a similar solution for RabbitMQ.
Is there a way to achieve this with Spring AMQP and RabbitMQ?
You can do it via the dead letter exchange. Reject the message and route it to the DLE/DLQ and have a separate listener container that consumes from the DLQ and stop/start that container as needed.
Or, instead of the second container you can poll the DLQ using the RabbitTemplate receive (or receiveAndConvert) methods (on a schedule) and route the failed message(s) back to the primary queue.
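The second option, polling the DLQ on a schedule and re-routing due messages, can be sketched with in-memory queues standing in for the real `RabbitTemplate.receive` calls and the primary queue (the message shape and retry bookkeeping here are assumptions for illustration):

```python
from collections import deque

def drain_dlq(dlq, primary, max_retries, now):
    """One scheduled pass over the dead-letter queue: messages whose
    retry time has arrived are routed back to the primary queue;
    not-yet-due messages stay parked; messages that exhausted their
    retries are dropped (a real implementation would log or archive
    them). Each message is a (payload, retries, next_retry_at) tuple;
    the retry counter would be incremented when the consumer rejects
    the message again."""
    keep = deque()
    while dlq:
        payload, retries, next_retry_at = dlq.popleft()
        if retries >= max_retries:
            continue                          # exhausted: drop/park
        if next_retry_at <= now:
            primary.append(payload)           # re-route to primary queue
        else:
            keep.append((payload, retries, next_retry_at))
    dlq.extend(keep)
```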

ActiveMQ redelivery at application level

I use ActiveMQ as a job dispatcher. Which means one master sends job messages to ActiveMQ, and multiple slaves grab job messages from ActiveMQ and process them. When slaves finish one job, they send a message with job_id back to ActiveMQ.
However, slaves are unreliable. If a slave doesn't respond within a certain period of time, we can assume it is down and try to redeliver the sent job message.
Are there any good ideas to realize this re-delivery?
Typically a consumer handles redelivery so that it can maintain message order while the message appears as in-flight on the broker. This means that redelivery is limited to a single consumer unless that consumer terminates; the broker is unaware of the redelivery.
In ActiveMQ v5.7+ you have the option of broker-side redelivery: the broker can redeliver a message after a delay using a resend. This is implemented by a broker plugin that handles dead-letter processing by redelivering via the scheduler. It is useful when total message order is not important but throughput and load distribution among consumers are. With broker redelivery, messages that fail delivery to a given consumer can be immediately re-dispatched to another.
See the ActiveMQ documentation for an example of setting this up in the configuration file.
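A broker configuration along these lines enables the scheduler-based redelivery plugin (the delay and retry values are illustrative):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" schedulerSupport="true">
  <plugins>
    <redeliveryPlugin fallbackToDeadLetter="true"
                      sendToDlqIfMaxRetriesExceeded="true">
      <redeliveryPolicyMap>
        <redeliveryPolicyMap>
          <defaultEntry>
            <!-- delays are in milliseconds; values are illustrative -->
            <redeliveryPolicy maximumRedeliveries="4"
                              initialRedeliveryDelay="5000"
                              redeliveryDelay="10000"/>
          </defaultEntry>
        </redeliveryPolicyMap>
      </redeliveryPolicyMap>
    </redeliveryPlugin>
  </plugins>
</broker>
```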

MSMQ, WCF and robustness

Not being an expert on MSMQ or WCF, I have read up a fair bit about it and it sounds and looks great.
I am trying to develop something, eventually but first some theory, which needs to be robust and durable.
MSMQ, I guess, will be hosted on a separate server.
There will be 2 WCF services. One for incoming messages and the other for outgoing messages (takes a message, does some internal processing/validation then places it on the outgoing messages queue or maybe sending an email/text message/whatever)
I understand with the right configuration, we can have the system so that it can be transactional (no messages are ever lost) and can be sent exactly once, so no chance of duplication of messages.
The applications/services will be multithreaded to process messages, which there will be hundreds and thousands of them.
BUT during the processing of a message or through the services lifetime, what if the server crashes? What if the server reboots? What if the service throws an exception for whatever reason? How is it possible to not lose that message but some how to put it back on the queue waiting for it to be processed again?
Also how is it possible to make sure that the service is robust in such a way that it will spawn itself again?
I'd appreciate any advice and details here. There is quite a lot to take in, and WCF/MSMQ exposes quite a lot of options.
Your assumption:
MSMQ, I guess, will be hosted on a separate server.
is incorrect. MSMQ is installed on all machines which want to participate in message queuing.
There will be 2 WCF services. One for incoming messages and the other
for outgoing messages
In the most typical configuration, the destination queues are local to the listening service.
For example, your ServiceA would have a local queue from which it reads. ServiceB also has a local queue from which it reads. If ServiceA wants to call ServiceB it will put a message into ServiceB's local queue.
I understand with the right configuration, we can have the system so
that it can be transactional (no messages are ever lost)
This is correct. This is because MSMQ uses a messaging pattern called store-and-forward. See here for an explanation.
Essentially the reason it is safe to assume no message loss is because the transmission of a message from one machine to another actually takes place under three distinct transactions.
The first transaction: ServiceA writes to its own temporary local queue. If this fails, the transaction rolls back and ServiceA can handle the exception.
The second transaction: the queue manager on ServiceA's machine transmits the message to the queue manager on ServiceB's machine. On failure, the message remains on the temporary queue.
The third transaction: ServiceB reads the message off its local queue. If ServiceB's message handler method throws an exception, the transaction rolls the message back onto the local queue.
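The three transactions can be simulated with plain in-memory queues; this is only a sketch of the store-and-forward guarantee, with deques standing in for MSMQ's transactional queues:

```python
from collections import deque

def send(local_outgoing, payload):
    # Transaction 1: the sender writes to its own temporary local queue.
    local_outgoing.append(payload)

def transmit(local_outgoing, remote_queue, link_up):
    # Transaction 2: queue-manager-to-queue-manager transfer. If the
    # link is down, the message simply stays on the temporary queue.
    while local_outgoing:
        if not link_up:
            return                            # nothing is lost
        remote_queue.append(local_outgoing.popleft())

def receive(remote_queue, handler):
    # Transaction 3: the receiver reads off its local queue; if the
    # handler throws, the message rolls back onto the queue.
    message = remote_queue.popleft()
    try:
        handler(message)
    except Exception:
        remote_queue.appendleft(message)      # rollback
        raise
```

Note how each step only ever moves a message between durable queues, so a crash at any point leaves the message on exactly one of them.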
The applications/services will be multithreaded to process messages
This is fine except if you require order to be preserved in the message processing chain. If you need ordered processing then you cannot have multiple threads without implementing a re-sequencer to reapply order.
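A re-sequencer of the kind mentioned above can be sketched as follows (the closure-based shape is one possible design, not a reference implementation):

```python
def make_resequencer(start=0):
    """Sketch of a re-sequencer: out-of-order messages are buffered and
    released strictly by sequence number, restoring order downstream of
    a multithreaded processing stage."""
    buffered = {}
    next_seq = [start]                         # mutable cell for closure

    def accept(seq, message):
        buffered[seq] = message
        released = []
        while next_seq[0] in buffered:
            released.append(buffered.pop(next_seq[0]))
            next_seq[0] += 1
        return released

    return accept
```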
I thought that MSMQ can be hosted separately and have x servers share
that queue?
All servers which want to participate in the exchange of messages have MSMQ installed. Each server can then write to any queue on any other server.
The reason for my thinking was because what if the server goes down?
Then how will the messages get sent/received into MSMQ
If the queues are transactional then that means messages on them are persisted to disk. If the server goes down then when it comes back up the messages are still there. While a server is down it obviously cannot participate in the exchange of messages. However, messages can still be "sent" to that server - they just remain local to the sender (in a temporary queue) until the destination server comes back on-line.
so by having one central MSMQ server (and having it mirrored/failover)
then there will be a guarantee of uptime
The whole point of using message queuing is that it's a fault-tolerant transport, so you don't need to guarantee uptime. If you had 100% availability, there would be little reason to use message queuing.
how will WCF be notified of messages that are incoming?
Each service will listen on its own local queue. When a message arrives, the WCF runtime causes the handling method to be called and the message to be handled.
how will the service be notified of failures of sending messages
If ServiceA fails to transmit a message to ServiceB then ServiceB will never be notified of that failure. Nor should it be. ServiceA will handle the failure to transmit, not ServiceB. Your expectation in this instance creates a hard coupling between the services, something which message queueing is supposed to remove.
MSMQ can store messages even if the service is temporarily shut down or the computer reboots.
The main goal of WCF is to transport a message from source to destination; it doesn't matter what the transport is. In your case MSMQ is the transport for WCF, so it is not necessary for both client and service to be online/available simultaneously. But once a message is received, it is your responsibility to process it correctly, regardless of which transport was used to send it.