Spring JMS consumer does not have parent span Id rightly set - spring-cloud-sleuth

My sample application makes use of
spring-boot-starter-activemq (2.3.1)
spring-boot-starter-web (2.3.1)
spring-cloud-starter-sleuth (2.2.3)
On the published message I see the X-Parent-Span-Id, X-Trace-Id and b3 properties correctly set. However, the logs in the JMS listener (AMQ consumer) show a different parentId (X-B3-ParentSpanId).
Why do the logs on the AMQ consumer side not have the same parentId as present in the message?
Note - The traceId shows up fine.
Flow
Server 1 --> Server 2 (AMQ producer) --> AMQ consumer
Server1 logs
{"#timestamp":"2020-07-26T18:42:19.258+01:00","#version":"1","message":"Received greet request","logger_name":"com.spike.server.HelloController","thread_name":"http-nio-8080-exec-9","level":"INFO","level_value":20000,"traceId":"e4ac9c00ba990cf2","spanId":"e4ac9c00ba990cf2","spanExportable":"true","X-Span-Export":"true","X-B3-SpanId":"e4ac9c00ba990cf2","X-B3-TraceId":"e4ac9c00ba990cf2"}
Server 2 (AMQ producer) logs
{"#timestamp":"2020-07-26T18:42:19.262+01:00","#version":"1","message":"Received audit request","logger_name":"com.spike.upstream.AuditController","thread_name":"http-nio-8081-exec-9","level":"INFO","level_value":20000,"traceId":"e4ac9c00ba990cf2","spanId":"aed4a3863c141dde","spanExportable":"true","X-Span-Export":"true","X-B3-SpanId":"aed4a3863c141dde","X-B3-ParentSpanId":"e4ac9c00ba990cf2","X-B3-TraceId":"e4ac9c00ba990cf2","parentId":"e4ac9c00ba990cf2"}
AMQ consumer logs
{"#timestamp":"2020-07-26T18:42:19.270+01:00","#version":"1","message":"Received message: hello world","logger_name":"com.spike.consumer.Consumer","thread_name":"DefaultMessageListenerContainer-1","level":"INFO","level_value":20000,"traceId":"e4ac9c00ba990cf2","spanId":"9f7928f65ee2479d","spanExportable":"true","X-Span-Export":"true","X-B3-SpanId":"9f7928f65ee2479d","X-B3-ParentSpanId":"70f25884c40b2dc7","X-B3-TraceId":"e4ac9c00ba990cf2","parentId":"70f25884c40b2dc7"}

I think I got what you're asking about. The mismatch in parent ids comes from the fact that between the producer and the consumer we introduce an intermediate "jms" service span; if the broker itself were instrumented, you would also see exactly how long the message spent inside the broker. Check out the following images on Imgur: https://imgur.com/a/rpcFz4l

Related

Is there a way to gracefully stop polling new messages in ActiveMQ

We have 3 JBoss EAP 7 servers which are configured to consume messages from ActiveMQ queues. The IN queue is for the request message and OUT queue is for the response message. There are multiple IN queues and corresponding OUT queues.
Assume a scenario where we want to take the 1st application server down for maintenance. It may have consumed any number of messages from the IN queue that are still in the middle of the business logic.
How do we instruct the 1st application server not to pick up any new messages, complete whatever it has already picked from IN, and respond to OUT, so that it can be taken down for maintenance?
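The shutdown sequence the question asks for — stop taking new work, finish what is in flight, then go down — can be sketched with plain-Java stand-ins. Here an ExecutorService plays the role of the listener container; all names are illustrative and there is no JMS dependency:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class GracefulDrain {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(3);
        AtomicInteger responded = new AtomicInteger();

        // Messages already taken from the IN queue and being processed.
        for (int i = 0; i < 10; i++) {
            workers.submit(() -> {
                // ... business logic, then write the response to OUT ...
                responded.incrementAndGet();
            });
        }

        workers.shutdown();                             // stop accepting new messages
        workers.awaitTermination(10, TimeUnit.SECONDS); // let in-flight work finish
        System.out.println(responded.get() + " responses written to OUT");
    }
}
```

With Spring JMS specifically, stopping the listener container (e.g. `DefaultMessageListenerContainer.stop()`) should have a comparable effect of halting consumption while in-progress listener invocations complete, but verify that against your container's documentation.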

ServiceStack Redis Mq: is eventual consistency an issue?

I'm looking at turning a monolithic application into a microservice-oriented application, and in doing so will need a robust messaging system for interprocess communication. The idea is for the microservice processes to run on a cluster of servers for HA, with requests to be processed added to a message queue that all the applications can access. I'm looking at using Redis both as a KV store for transient data and as a message broker, using the ServiceStack framework for .NET, but I worry that the eventual consistency applied by Redis will make processing of the requests unreliable. This is how I understand Redis to function with regard to MQ:
Client 1 posts a request to a queue on node 1
Node 1 will inform all listeners on that queue, using pub/sub, of the existence of the request, and will also push the request to node 2 asynchronously.
The listeners on node 1 will pull the request from the node, only 1 of them will obtain it as should be. An update of the removal of the request is sent to node 2 asynchronously but will take some time to arrive.
The initial request is received by node 2 (assuming a bit of a delay in RTT) which will go ahead and inform listeners connected to it using pub/sub. Before the update from node 1 is received regarding the removal of the request from the queue a listener on node 2 may also pull the request. The result being that two listeners ended up processing the same request, which would cause havoc in our system.
Is there anything in Redis or the implementation of ServiceStack Redis Mq that would prevent the scenario described to occur? Or is there something else regarding replication in Redis that I have misunderstood? Or should I abandon the Redis/SS approach for Mq and use something like RabbitMQ instead that I have understood to be ACID-compliant?
It's not possible for the same message to be processed twice in Redis MQ as the message worker pops the message off the Redis List backed MQ and all Redis operations are atomic so no other message worker will have access to the messages that have been removed from the List.
ServiceStack.Redis (which Redis MQ uses) only supports Redis Sentinel for HA. Although Redis supports multiple replicas, they only contain a read-only view of the master dataset, so all write operations, such as List add/remove, can only happen on the single master instance.
One notable difference between Redis MQ and a purpose-built MQ like RabbitMQ is that Redis doesn't support ACKs: if the message worker process that pops the message off the MQ crashes, its message is lost. In RabbitMQ, by contrast, if the stateful connection holding an un-ack'd message dies, the message is restored by the RabbitMQ server back to the MQ.
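The no-duplicates guarantee described above rests on the pop being atomic: each message can be removed from the backing List exactly once. That property can be illustrated with an in-memory stand-in for the Redis List (a concurrent queue, whose poll is likewise atomic); this is a simulation of the semantics, not Redis itself:

```java
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicPopDemo {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the Redis List backing the MQ.
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < 100; i++) queue.add("msg-" + i);

        Set<String> delivered = ConcurrentHashMap.newKeySet();
        AtomicInteger duplicates = new AtomicInteger();

        // Four competing message workers pop concurrently; because each
        // pop atomically removes the message, no two workers see the same one.
        ExecutorService workers = Executors.newFixedThreadPool(4);
        for (int w = 0; w < 4; w++) {
            workers.submit(() -> {
                String msg;
                while ((msg = queue.poll()) != null) {
                    if (!delivered.add(msg)) duplicates.incrementAndGet();
                }
            });
        }
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(delivered.size() + " delivered, "
                + duplicates.get() + " duplicates");
    }
}
```

Every run delivers all 100 messages with zero duplicates, which is the behaviour the answer attributes to Redis's atomic List operations.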

RabbitMQ durable queue losing messages over STOMP

I have a webpage connecting to a RabbitMQ broker using JavaScript/WebSockets that are exposed by a Spring app deployed in Tomcat. Messages are produced at 1 per second by an external application and are rendered on the webpage. The JavaScript subscription is durable.
The issue I'm experiencing is that when the network connection is broken on the javascript client for a period of time (say 60 seconds), the first ~24 seconds of messages are missing. I've looked through the logs of the app deployed in tomcat and the missing messages seem to be up until the following log statement:
org.springframework.messaging.simp.stomp.StompBrokerRelayMessageHandler - DEBUG - TCP connection to broker closed in session 14
I think this is the point at which the endpoint realises the JavaScript client is disconnected and decides to close the connection to the broker, after which further messages queue up.
My question is how can I ensure that the messages between the time the network is severed and the time the endpoint realises the client is disconnected are not lost? Should the endpoint put the messages back on the queue somehow? Maybe there's a way to make it transactional?
Thanks in advance.
The RabbitMQ team monitors this mailing list and only sometimes answers questions on StackOverflow.
Your Tomcat application should not acknowledge messages from RabbitMQ until it confirms that your Javascript client has received them. This way, any messages that aren't ack-ed by the JS client won't be ack-ed by Tomcat, and RabbitMQ will re-deliver them.
I don't know how your JS app and Tomcat interact, but you may have to implement your own ack process there.
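The contract the answer describes — hold each delivery as unacknowledged until the downstream client confirms it, and requeue anything unconfirmed when the connection drops — can be modelled in a few lines. This is a toy model of the ack semantics, not RabbitMQ's client API; all names are made up:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Toy model of broker-style ack semantics: a delivered message stays
// "unacked" until ack(tag) is called, and unconfirmed messages are
// requeued for redelivery when the consumer's connection dies.
public class AckRelay {
    private final Deque<String> queue = new ArrayDeque<>();
    private final Map<Long, String> unacked = new HashMap<>();
    private long nextTag = 1;

    public void publish(String msg) { queue.addLast(msg); }

    /** Delivers the next message; it is held as unacked until ack(tag). */
    public long deliver() {
        long tag = nextTag++;
        unacked.put(tag, queue.pollFirst());
        return tag;
    }

    public void ack(long tag) { unacked.remove(tag); }

    /** The client connection died: requeue everything not yet acked. */
    public void requeueUnacked() {
        unacked.values().forEach(queue::addFirst);
        unacked.clear();
    }

    public int queueDepth() { return queue.size(); }

    public static void main(String[] args) {
        AckRelay relay = new AckRelay();
        relay.publish("m1");
        relay.publish("m2");
        long t1 = relay.deliver();
        long t2 = relay.deliver();
        relay.ack(t1);          // JS client confirmed m1
        relay.requeueUnacked(); // network broke before m2 was confirmed
        System.out.println(relay.queueDepth()); // 1: m2 awaits redelivery
    }
}
```

Applied to the question: if the Tomcat endpoint only acks toward RabbitMQ after the JS client confirms receipt, the ~24 seconds of messages delivered into the dead connection remain unacked and are redelivered instead of being lost.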

How can MCollective replace a dead subscriber from an ActiveMQ queue?

I have a problem using direct addressing with MCollective via ActiveMQ 5.8. (http://docs.puppetlabs.com/mcollective/deploy/middleware/activemq.html)
The problem arises when one of the nodes subscribed to the nodes queue via MCollective crashes and doesn't unsubscribe. When the host boots and subscribes again, there are two subscribers with the same identity, because ActiveMQ doesn't recognise that the pre-crash one is no longer listening. This is a problem with direct addressing because the message goes onto the queue, ActiveMQ sends it to only one subscriber, and it always seems to pick the one that's not listening, so the message is never delivered to the actual node. I can observe this happening if I have ActiveMQ log the message frames.
This may be related to the ActiveMQ concept of a "durable subscriber" (where a subscriber of the same identity unsubscribes any existing one) but I don't have any idea how that is configured from MCollective.
What I want is for either the new subscriber to bump the old one, or for the dead subscriber to be removed when a message is sent to it and the connection turns out to be dead (with Wireshark I can see the packets aren't ACKed; instead an ICMP packet returns "Destination unreachable").
Apparently, according to http://projects.puppetlabs.com/issues/23365, the solution is to use MCollective 2.3 (I was using 2.2) and Stomp 1.1 keepalives.
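The keepalive fix works because STOMP 1.1 adds a negotiated heart-beat, which gives the broker a way to notice and reap connections that have gone silent, rather than keeping the dead subscriber around. A CONNECT frame requesting keepalives might look like the following (host and credentials are placeholders; the heart-beat values mean "I will send a beat at least every 30000 ms, and I expect one from you at least every 30000 ms"; ^@ marks the NUL frame terminator):

```
CONNECT
accept-version:1.1
host:activemq.example.com
heart-beat:30000,30000
login:mcollective
passcode:secret

^@
```

Once the broker stops hearing beats from the crashed node, it can drop that subscription, so the rebooted host's new subscription is the only one left to receive directly addressed messages.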

ActiveMQ Durable consumer is in use for client and subscriptionName via STOMP

I have an iOS client that connects to several ActiveMQ topics and queues via STOMP protocol. When I connect to the server, I send the following message:
2012-10-30 10:19:29,757 [MQ NIO Worker 2] TRACE StompIO
CONNECT
passcode:*****
login:system
2012-10-30 10:19:29,758 [MQ NIO Worker 2] DEBUG ProtocolConverter
2012-10-30 10:19:29,775 [MQ NIO Worker 2] TRACE StompIO
CONNECTED
heart-beat:0,0
session:ID:mbp.local-0123456789
server:ActiveMQ/5.6.0
version:1.0
And then, I subscribe to several topics using the following message:
2012-10-30 10:19:31,028 [MQ NIO Worker 2] TRACE StompIO
SUBSCRIBE
activemq.subscriptionName:user#mail.com-/topic/SPOT.SPOTCODE
activemq.prefetchSize:1
activemq.dispatchAsync:true
destination:/topic/SPOT.SPOTCODE
client-id:1234
activemq.retroactive:true
I'm facing two problems with the ActiveMQ server. Each time I connect, the Number of Consumers column in the web interface gets incremented, so I have just one real consumer but the count is around 50 consumers. But the most problematic issue is that when I plug another iOS device into my laptop to test the messaging environment, I get the following error when connecting to ActiveMQ:
WARN | Async error occurred: javax.jms.JMSException: Durable consumer is in use for client: ID:mbp.local-0123456789 and subscriptionName: user#mail.com-/topic/SPOT.SPOTCODE
This seems to indicate that disconnecting from ActiveMQ via STOMP is not working properly, because this logging attempt is made when the other device is not running the app. I've tried the following things to solve the issue:
Always logoff when attempting to subscribe to the topics.
Subscribe
I'm currently using v5.6.0 executing the server on my laptop.
If you read the STOMP page on the ActiveMQ site you will notice that client-id and activemq.subscriptionName must match in order to use STOMP durable subscribers. These values should be different for each of your clients, otherwise you will see the same errors because of the name clashes.
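Concretely, for a durable topic subscriber ActiveMQ expects the client-id header on the CONNECT frame and a matching activemq.subscriptionName on the SUBSCRIBE frame, with a distinct value per device. The device ids below are placeholders; each iOS device would use its own:

```
CONNECT
client-id:ios-device-A
login:system
passcode:*****

SUBSCRIBE
destination:/topic/SPOT.SPOTCODE
activemq.subscriptionName:ios-device-A
```

A second device connecting with client-id:ios-device-B and activemq.subscriptionName:ios-device-B would then get its own durable subscription instead of clashing with the first device's, which is what produces the "Durable consumer is in use" error in the log above.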