I have opened 2 RPC clients and 20 RPC servers on localhost, following the example on the official RabbitMQ website:
http://www.rabbitmq.com/tutorials/tutorial-six-python.html
Assume a simple example: if I send x="0" from an RPC client, the RPC server gets the message x and then computes x/x.
The RPC server crashes because it divides 0 by 0. The message x="0" is then redelivered to the next RPC server, which also receives x="0" and crashes. Like dominoes, the remaining 18 servers crash one by one.
(The real crash in our project is very complex, cannot be caught, and is not caused by RabbitMQ itself.)
One bad message, and all 20 RPC servers crash one by one.
Is there any policy in RabbitMQ to avoid this? For example, deliver the message three times (crashing 3 servers), then stop delivering and drop the message?
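One possible policy, sketched here purely as an assumption and not taken from the original setup: on RabbitMQ 3.8 or later the RPC queue can be declared as a quorum queue with an x-delivery-limit argument, so the broker stops redelivering a poison message after the given number of attempts and drops it (or dead-letters it if a dead-letter exchange is configured). The queue name rpc_queue follows the tutorial; the rest is illustrative.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.HashMap;
import java.util.Map;

public class DeliveryLimitedRpcQueue {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Quorum queue (RabbitMQ 3.8+): after 3 delivery attempts the broker gives up
            // on the message, so one poison message cannot crash server after server.
            Map<String, Object> queueArgs = new HashMap<>();
            queueArgs.put("x-queue-type", "quorum");
            queueArgs.put("x-delivery-limit", 3);
            // queueArgs.put("x-dead-letter-exchange", "rpc.dlx"); // optional: keep bad messages
            channel.queueDeclare("rpc_queue", true, false, false, queueArgs);
        }
    }
}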
Related
We have 3 JBoss EAP 7 servers configured to consume messages from ActiveMQ queues. The IN queues carry the request messages and the OUT queues carry the response messages; there are multiple IN queues with corresponding OUT queues.
Assume we want to take the 1st application server down for maintenance. It may have consumed N messages from the IN queue that are still being processed by the business logic.
How do we instruct the 1st application server to stop picking up new messages, finish whatever it has already picked from IN, and post the responses to OUT, so that it can be taken down for maintenance?
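A hedged sketch of that drain pattern, assuming a Spring JMS listener rather than the JBoss EAP deployment in the question (the class and bean names are illustrative): stopping the listener registry prevents further receives from the IN queues, while listener invocations already in flight finish and still post their replies to the OUT queues.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.config.JmsListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class MaintenanceDrain {

    @Autowired
    private JmsListenerEndpointRegistry registry;  // manages all @JmsListener containers

    // Call this before taking the node down for maintenance.
    public void drainForMaintenance() {
        // Stop all listener containers: no new messages are picked from the IN queues.
        // Listener threads finish the message they are currently processing (and send
        // the corresponding response to the OUT queue) before the containers stop.
        registry.stop();
    }
}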
I am using graphql-apollo. The client subscribes to some messages and the server, using Redis, sends them to the client.
If an error is thrown and not caught in the client's updateQuery, can that somehow affect the server function that publishes the message? Could that server function crash or otherwise not finish correctly?
Thanks.
It should not affect the sender's push/publish capabilities. A message published via PUB/SUB is not persisted, so it has to be consumed at the moment it is delivered; whatever happens on the consumer side, it cannot be put back.
This also means that if you're using Redis PUB/SUB to send/receive messages, messages can be lost because of consumer connectivity: if a consumer is down for some time, all messages sent in that window are lost.
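As an illustration (not from the original post), here is a minimal sketch using the Jedis client: PUBLISH only reports how many subscribers were connected at that instant, and a message published while the consumer is down goes to nobody and cannot be recovered later. The channel name and message are made up.

import redis.clients.jedis.Jedis;

public class PubSubLossDemo {
    public static void main(String[] args) {
        try (Jedis publisher = new Jedis("localhost", 6379)) {
            // Assume no subscriber is currently listening on this channel.
            long receivers = publisher.publish("updates", "hello");
            // receivers == 0: the message was handed to zero clients; Redis keeps no copy,
            // so a subscriber that reconnects later will never see it.
            System.out.println("Delivered to " + receivers + " subscribers");
        }
    }
}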
My Setup
I have a situation where I am sending some 15 messages in a loop from one machine to another via RabbitMQ.
There is a NAT setup between the sending and receiving machines.
I am using Spring AMQP for all RabbitMQ operations.
On the receiving machine I sometimes lose 2 messages, which are never received even after waiting for a long time.
I also don't see any messages accumulated in the queue (on either the sending or the receiving machine).
There is also only one listener for the queue on the receiving machine.
My Question
If I send messages in a loop to RabbitMQ, is there any chance that it rejects some messages because it cannot handle them? The overall size of the 15 messages is close to 8 MB.
I don't see any exceptions after sending the messages to RabbitMQ.
SENDING MACHINE CODE
private RabbitTemplate rabbitTemplate;  // injected Spring AMQP template used to publish

@Override
public boolean send(final Message message, final String routingKey)
        throws SinecnmsMessagingException {
    rabbitTemplate.send(routingKey, message);
    return true;
}
RECEIVING MACHINE CODE
<rabbit:listener-container
connection-factory="connectionFactory">
<rabbit:listener ref="onMessageCommand"
queue-names="TestQueue" />
</rabbit:listener-container>
<bean id="onMessageCommand"
class="com.test.OnMessageListner">
<property name="callBackObject" ref="callbackEvent" />
<property name="template" ref="amqpTemplate" />
</bean>
<bean id="callbackEvent" class="com.test.SettingsListener"></bean>
OnMessageListner implements MessageListener.
In the SettingsListener class, I receive the messages. This works fine in other code I have developed; only in the use case described here am I observing this issue.
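For context, a minimal sketch of what such a listener might look like; only the class name, the MessageListener interface and the callBackObject/template properties come from the configuration above, while the property types, the onSettingsMessage callback and the onMessage body are assumptions.

package com.test;

import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;

public class OnMessageListner implements MessageListener {

    private SettingsListener callBackObject;  // wired via <property name="callBackObject">
    private AmqpTemplate template;            // wired via <property name="template">

    public void setCallBackObject(SettingsListener callBackObject) {
        this.callBackObject = callBackObject;
    }

    public void setTemplate(AmqpTemplate template) {
        this.template = template;
    }

    @Override
    public void onMessage(Message message) {
        // Hypothetical: hand the delivery to the callback object that actually handles it.
        callBackObject.onSettingsMessage(message);
    }
}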
So does that mean the publisher-confirms concept was introduced because RabbitMQ may sometimes "reject/not accept" messages? With publisher confirms, we could know that the first message was received by the RabbitMQ broker and only then send the second message.
Can we conclude this?
No, you cannot; waiting for each confirmation would slow down the publishing. Confirms are designed so that you send a bunch of messages and then wait for the confirms.
Publisher confirms were not introduced because RabbitMQ sometimes "rejects/does not accept" messages. Publishing with RabbitMQ is asynchronous, so a publish is generally successful, but anything can happen between sending the message and its arrival at the broker. If the connection is lost, the client is told, but that is too late for the publisher, which has already returned successfully.
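For illustration, a minimal sketch of that batch-then-wait pattern with the plain RabbitMQ Java client (the question uses Spring's RabbitTemplate, which exposes confirms through its own ConfirmCallback instead; the queue name and message bodies here are made up):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class BatchConfirmPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            channel.confirmSelect();  // enable publisher confirms on this channel
            for (int i = 0; i < 15; i++) {
                channel.basicPublish("", "TestQueue", null,
                        ("message-" + i).getBytes(StandardCharsets.UTF_8));
            }
            // Block until the broker has confirmed every outstanding message,
            // or fail if one is nack-ed or the 10-second timeout expires.
            channel.waitForConfirmsOrDie(10_000);
        }
    }
}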
NAT should make no difference but, perhaps, some flaky network router might be the problem.
You can use a network monitor (e.g. Wireshark) to see what's happening.
I am facing a weird problem connecting to RabbitMQ from the UI. For certain reasons I use 'xhr-polling' only to connect to the RabbitMQ queue from the UI. It works fine for quite some time after the user lands on the page, but at some point it disconnects from RabbitMQ without any error.
I have added some logic to reconnect once it is disconnected, and according to the log it seems to connect, but when I look at RabbitMQ there is no client connected to it. However, the browser console shows "connected to server RabbitMQ/3.6.10" and keeps calling xhr_send?t=[random-key] and xhr?=[random-key], which get responses 204 or 200 (per the developer tools).
When I refresh the whole page, it connects again fine and I can see the RabbitMQ client queue as well, along with "connected to server RabbitMQ/3.6.10".
Technology stack: SockJS + StompJS + RabbitMQ with the STOMP plugin.
So, in summary, the reconnect logic reports it is connected, but according to RabbitMQ there is no subscribed client. Normally, when connected, I see a queue name like stomp-subscription-rIUXo4Yvmilga2w3g5Lu6g.
I have a webpage connecting to a RabbitMQ broker over JavaScript/WebSockets exposed by a Spring app deployed in Tomcat. Messages are produced at 1 per second by an external application and are rendered on the webpage. The JavaScript subscription is durable.
The issue I'm experiencing is that when the network connection is broken on the JavaScript client for a period of time (say 60 seconds), the first ~24 seconds of messages are missing. I've looked through the logs of the app deployed in Tomcat, and the missing messages seem to run up until the following log statement:
org.springframework.messaging.simp.stomp.StompBrokerRelayMessageHandler - DEBUG - TCP connection to broker closed in session 14
I think this is the point at which the endpoint realises the JavaScript client is disconnected and decides to close the connection to the broker, which results in subsequent messages queueing up.
My question is: how can I ensure that the messages sent between the time the network is severed and the time the endpoint realises the client is disconnected are not lost? Should the endpoint put those messages back on the queue somehow? Maybe there's a way to make it transactional?
Thanks in advance.
Your Tomcat application should not acknowledge messages from RabbitMQ until it confirms that your Javascript client has received them. This way, any messages that aren't ack-ed by the JS client won't be ack-ed by Tomcat, and RabbitMQ will re-deliver them.
I don't know how your JS app and Tomcat interact, but you may have to implement your own ack process there.
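To illustrate the idea only (the real setup relays STOMP frames through Spring's broker relay, which is wired differently): with a plain Java-client consumer and manual acknowledgements, a delivery is acked only after the downstream push to the browser is known to have succeeded, and nacked with requeue otherwise so RabbitMQ redelivers it. The queue name and the pushToBrowser helper are hypothetical.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class ForwardThenAckConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        boolean autoAck = false;  // keep each message un-acked until the client has it
        channel.basicConsume("updates", autoAck, (consumerTag, delivery) -> {
            long tag = delivery.getEnvelope().getDeliveryTag();
            try {
                pushToBrowser(new String(delivery.getBody(), StandardCharsets.UTF_8));
                channel.basicAck(tag, false);         // client got it: remove from queue
            } catch (Exception e) {
                channel.basicNack(tag, false, true);  // client missed it: requeue for redelivery
            }
        }, consumerTag -> { });
    }

    // Placeholder for whatever mechanism confirms delivery to the JS client,
    // e.g. sending over the WebSocket session and waiting for a client-level ack.
    private static void pushToBrowser(String body) {
    }
}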