ActiveMQ "Durable consumer is in use for client and subscriptionName" via STOMP - Apache

I have an iOS client that connects to several ActiveMQ topics and queues via STOMP protocol. When I connect to the server, I send the following message:
2012-10-30 10:19:29,757 [MQ NIO Worker 2] TRACE StompIO
CONNECT
passcode:*****
login:system
2012-10-30 10:19:29,758 [MQ NIO Worker 2] DEBUG ProtocolConverter
2012-10-30 10:19:29,775 [MQ NIO Worker 2] TRACE StompIO
CONNECTED
heart-beat:0,0
session:ID:mbp.local-0123456789
server:ActiveMQ/5.6.0
version:1.0
And then, I subscribe to several topics using the following message:
2012-10-30 10:19:31,028 [MQ NIO Worker 2] TRACE StompIO
SUBSCRIBE
activemq.subscriptionName:user#mail.com-/topic/SPOT.SPOTCODE
activemq.prefetchSize:1
activemq.dispatchAsync:true
destination:/topic/SPOT.SPOTCODE
client-id:1234
activemq.retroactive:true
I'm facing two problems with the ActiveMQ server. Each time I connect, the "Number of Consumers" column in the web interface is incremented, so although I have only one real consumer, the count is around 50. The more serious issue is that when I plug another iOS device into my laptop to test the messaging environment, I get the following error when connecting to ActiveMQ:
WARN | Async error occurred: javax.jms.JMSException: Durable consumer is in use for client: ID:mbp.local-0123456789 and subscriptionName: user#mail.com-/topic/SPOT.SPOTCODE
This suggests that disconnecting from ActiveMQ via STOMP is not working properly, because the error appears even when the other device is not running the app. I've tried the following to resolve the issue:
Always log off before attempting to subscribe to the topics.
Subscribe
I'm currently using v5.6.0, running the server on my laptop.

If you read the STOMP page on the ActiveMQ site, you will notice that client-id and activemq.subscriptionName must match in order to use STOMP durable subscribers. These values should be different for each of your clients; otherwise you will see exactly these errors because of name clashes.
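To make that concrete, here is a minimal sketch (plain string handling, no STOMP library) of how a client might derive a per-device client-id and a matching activemq.subscriptionName so that two devices never clash. The device_id value, the login, and the helper names are assumptions for the example, not taken from the question:

```python
def stomp_frame(command, headers, body=""):
    """Serialize a STOMP frame: command line, header lines, blank line, body, NUL."""
    lines = [command] + [f"{k}:{v}" for k, v in headers.items()]
    return "\n".join(lines) + "\n\n" + body + "\x00"

def build_durable_subscribe(device_id, destination):
    """client-id (sent on CONNECT) and activemq.subscriptionName (sent on
    SUBSCRIBE) must be stable for a given device but unique across devices."""
    client_id = f"ios-{device_id}"            # unique per physical device
    sub_name = f"{client_id}-{destination}"   # tied to that client-id
    connect = stomp_frame("CONNECT", {
        "login": "system",
        "passcode": "secret",
        "client-id": client_id,
    })
    subscribe = stomp_frame("SUBSCRIBE", {
        "destination": destination,
        "activemq.subscriptionName": sub_name,
        "activemq.retroactive": "true",
    })
    return connect, subscribe
```

With this scheme, a second device gets its own client-id and subscription name, so the broker sees two distinct durable subscribers instead of one contested one.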

Related

Spring JMS consumer does not have parent span Id rightly set

My sample application makes use of
spring-boot-starter-activemq (2.3.1)
spring-boot-starter-web (2.3.1)
spring-cloud-starter-sleuth (2.2.3)
On the published message I see the X-Parent-Span-Id, X-Trace-Id and b3 properties correctly set. However, the logs in the JMS listener (AMQ consumer) show a different parentId (X-B3-ParentSpanId).
Why do the logs on the AMQ consumer side not have the same parentId as present in the message?
Note - The traceId shows up fine.
Flow
Server 1 --> Server 2 (AMQ producer) ---> AMQ consumer
Server1 logs
{"@timestamp":"2020-07-26T18:42:19.258+01:00","@version":"1","message":"Received greet request","logger_name":"com.spike.server.HelloController","thread_name":"http-nio-8080-exec-9","level":"INFO","level_value":20000,"traceId":"e4ac9c00ba990cf2","spanId":"e4ac9c00ba990cf2","spanExportable":"true","X-Span-Export":"true","X-B3-SpanId":"e4ac9c00ba990cf2","X-B3-TraceId":"e4ac9c00ba990cf2"}
Server 2 (AMQ producer) logs
{"@timestamp":"2020-07-26T18:42:19.262+01:00","@version":"1","message":"Received audit request","logger_name":"com.spike.upstream.AuditController","thread_name":"http-nio-8081-exec-9","level":"INFO","level_value":20000,"traceId":"e4ac9c00ba990cf2","spanId":"aed4a3863c141dde","spanExportable":"true","X-Span-Export":"true","X-B3-SpanId":"aed4a3863c141dde","X-B3-ParentSpanId":"e4ac9c00ba990cf2","X-B3-TraceId":"e4ac9c00ba990cf2","parentId":"e4ac9c00ba990cf2"}
AMQ consumer logs
{"@timestamp":"2020-07-26T18:42:19.270+01:00","@version":"1","message":"Received message: hello world","logger_name":"com.spike.consumer.Consumer","thread_name":"DefaultMessageListenerContainer-1","level":"INFO","level_value":20000,"traceId":"e4ac9c00ba990cf2","spanId":"9f7928f65ee2479d","spanExportable":"true","X-Span-Export":"true","X-B3-SpanId":"9f7928f65ee2479d","X-B3-ParentSpanId":"70f25884c40b2dc7","X-B3-TraceId":"e4ac9c00ba990cf2","parentId":"70f25884c40b2dc7"}
I think I got what you're asking about. The mismatch in parent IDs comes from the fact that, between the producer and the consumer, the instrumentation introduces an intermediate "jms" span for the messaging hop, so the consumer's parent is that intermediate span rather than the producer's span. If the broker itself were instrumented and correlated, you would also see exactly how long the message spent in the broker. Check out the following images on Imgur: https://imgur.com/a/rpcFz4l
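To make the parent/child relationship concrete, here is a toy model of the span graph (not Sleuth's actual API; the span names and the intermediate "jms-send" span are assumptions standing in for the real span IDs in the logs above):

```python
def parent_chain(spans, span_id):
    """Walk parent pointers up to the root; return the chain root-first."""
    chain = []
    while span_id is not None:
        chain.append(span_id)
        span_id = spans[span_id]["parent"]
    return list(reversed(chain))

# Hypothetical span graph for the flow Server 1 -> Server 2 -> AMQ consumer.
# The messaging instrumentation inserts an intermediate "jms-send" span,
# which becomes the consumer span's parent.
spans = {
    "server1":      {"parent": None},
    "server2":      {"parent": "server1"},
    "jms-send":     {"parent": "server2"},
    "amq-consumer": {"parent": "jms-send"},
}
```

In this model the consumer's parent ID is the intermediate messaging span's ID, not the producer's span ID, while every span shares the same trace ID, which would explain why the traceId matches across the logs but X-B3-ParentSpanId does not.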

RabbitMQ Multiple Consumers with Single connection Object

I have 35 queues in my application. I have created a single connection object and 35 unique channels to consume data from the dedicated queues.
Sometimes (after running for 6 or 12 hours), I stop receiving messages from RabbitMQ, and in the RabbitMQ management portal the consumers are no longer listed.
Exception in the RabbitMQ logs:
closing AMQP connection <0.10985.3> (127.0.0.1:63478 ->
127.0.0.1:5672, vhost: '/', user: 'guest'): client unexpectedly closed TCP connection
Is one connection for all the consumers fine, or is something wrong?
I am using RabbitMQ 3.6.12 and amqp-client-5.5.0.jar for the Java client.

Network issues and Redis PubSub

I am using ServiceStack 5.0.2 and Redis 3.2.100 on Windows.
I have got several nodes with active Pub/Sub Subscription and a few Pub's per second.
I noticed that if the Redis service restarts while there is no physical network connection (so one of the clients cannot reach the Redis service), that client stops receiving messages after the network recovers. Let's call it a "zombie subscriber": it thinks it is still operational but never actually receives a message; the client believes it has a connection, while the corresponding connection on the server has been closed.
The problem is no exception is thrown in RedisSubscription.SubscribeToChannels, so I am not able to detect the issue in order to resubscribe.
I have also analyzed RedisPubSubServer and I think I have discovered the problem: in the described case, RedisPubSubServer tries to restart (by sending the stop command CTRL), but the "zombie subscriber" never receives it, so no resubscription is made.
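One common workaround for zombie subscriptions is an application-level heartbeat: publish a ping on a known channel every few seconds and treat the subscription as dead if nothing arrives within a timeout. A minimal sketch of such a watchdog follows (the heartbeat publisher and the actual resubscription step are assumed to exist elsewhere; this is not ServiceStack's API):

```python
import time

class SubscriptionWatchdog:
    """Tracks the last time any message (including heartbeats) arrived on a
    subscription. If nothing is seen within `timeout` seconds, the subscription
    is presumed to be a zombie and should be torn down and re-created."""

    def __init__(self, timeout, now=time.monotonic):
        self.timeout = timeout
        self.now = now              # injectable clock, for testing
        self.last_seen = now()

    def on_message(self):
        """Call from the subscription's message callback (pings included)."""
        self.last_seen = self.now()

    def is_zombie(self):
        """True when the silence has exceeded the allowed timeout."""
        return self.now() - self.last_seen > self.timeout
```

A background thread would poll is_zombie() periodically and, when it returns True, close the suspect connection and resubscribe, which works even though the dead socket never raises an exception in SubscribeToChannels.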

RabbitMQ durable queue losing messages over STOMP

I have a webpage that connects to a RabbitMQ broker over JavaScript/WebSockets, exposed by a Spring app deployed in Tomcat. Messages are produced at 1 per second by an external application and rendered on the webpage. The JavaScript subscription is durable.
The issue I'm experiencing is that when the network connection is broken on the javascript client for a period of time (say 60 seconds), the first ~24 seconds of messages are missing. I've looked through the logs of the app deployed in tomcat and the missing messages seem to be up until the following log statement:
org.springframework.messaging.simp.stomp.StompBrokerRelayMessageHandler - DEBUG - TCP connection to broker closed in session 14
I think this is the point at which the endpoint realises the JavaScript client has disconnected and closes the connection to the broker, after which subsequent messages queue up.
My question is how can I ensure that the messages between the time the network is severed and the time the endpoint realises the client is disconnected are not lost? Should the endpoint put the messages back on the queue somehow? Maybe there's a way to make it transactional?
Thanks in advance.
The RabbitMQ team monitors this mailing list and only sometimes answers questions on StackOverflow.
Your Tomcat application should not acknowledge messages from RabbitMQ until it confirms that your Javascript client has received them. This way, any messages that aren't ack-ed by the JS client won't be ack-ed by Tomcat, and RabbitMQ will re-deliver them.
I don't know how your JS app and Tomcat interact, but you may have to implement your own ack process there.
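As a sketch of that ack process (pure Python, with the broker's ack/nack operations injected as callbacks; the delivery-tag handling is an assumption modelled on AMQP-style clients, not your actual Tomcat code):

```python
class DeferredAcks:
    """Holds broker deliveries until the downstream (JS/WebSocket) client
    confirms receipt; only then is the broker ack issued. Deliveries that were
    never confirmed are nacked (requeued) when the client disconnects, so the
    broker redelivers them on reconnect instead of losing them."""

    def __init__(self, ack, nack):
        self.ack = ack        # e.g. wraps channel.basicAck(deliveryTag)
        self.nack = nack      # e.g. wraps channel.basicNack(tag, requeue=true)
        self.pending = {}     # delivery_tag -> message awaiting confirmation

    def on_delivery(self, delivery_tag, message):
        """Broker delivered a message; forward it to the JS client here."""
        self.pending[delivery_tag] = message

    def on_client_confirm(self, delivery_tag):
        """JS client confirmed receipt; safe to ack to the broker now."""
        if self.pending.pop(delivery_tag, None) is not None:
            self.ack(delivery_tag)

    def on_client_disconnect(self):
        """Requeue everything the client never confirmed."""
        for tag in sorted(self.pending):
            self.nack(tag)
        self.pending.clear()
```

The key property is that a network drop between delivery and confirmation leaves the message in `pending`, and the disconnect handler pushes it back to the broker rather than silently dropping it.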

Spring Cloud Stream : Sink loses RabbitMQ connectivity

I see that my custom Spring Cloud Stream sink (built on the log sink stream app dependency) loses RabbitMQ connectivity during a RabbitMQ outage, retries the connection 5 times, and then stops its consumer. I have to restart the app manually to make it reconnect once RabbitMQ is back up. Looking at the default RabbitMQ binder properties here, I see a retry interval, but no property for infinite retry (which I assumed would be the default behaviour). Can someone let me know what I might be missing to make it retry indefinitely?
Error faced during outage triggering consumer retry :
2017-08-08T10:52:07.586-04:00 [APP/PROC/WEB/0] [OUT] Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - home node 'rabbit#229ec9f90e07c75d56a0aa84dc28f602' of durable queue 'datastream.dataingestor.datastream' in vhost '8880756f-8a21-4dc8-9b97-95e5a3248f58' is down or inaccessible, class-id=50, method-id=10)
It appears you have a RabbitMQ cluster and the queue in question is hosted on a down node.
If the queue was HA, you wouldn't have this problem.
The listener container does not (currently) handle that condition; it will retry forever if it loses the connection to RabbitMQ itself.
Please open a JIRA Issue and we'll take a look. The container should treat that error the same as a connection problem.