I have a JetStream stream with two publishers for different subjects:
subject.a
subject.b
And one consumer for subject.a.
I'm having trouble checking the consumer lag for a specific subject.
/jsz?accounts=1&streams=1&consumers=1 shows the sequence for the whole stream:
"delivered": {
"consumer_seq": 1108,
"stream_seq": 2216,
"last_active": "2022-09-27T15:11:29.952186581Z"
}
So how can I see the sequence only for subject.a?
I use synchronous Publish for the publishers and QueueSubscribe for the consumer.
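One possible approach (a sketch using the Java jnats client, offered as an assumption rather than a confirmed answer; the stream and consumer names are made up): create a separate consumer filtered on subject.a, whose delivered sequence and pending count then cover only that subject.

import io.nats.client.Connection;
import io.nats.client.JetStreamManagement;
import io.nats.client.Nats;
import io.nats.client.api.ConsumerConfiguration;
import io.nats.client.api.ConsumerInfo;

public class SubjectLag {
    public static void main(String[] args) throws Exception {
        try (Connection nc = Nats.connect("nats://localhost:4222")) {
            JetStreamManagement jsm = nc.jetStreamManagement();

            // A durable consumer filtered on subject.a: its sequences and
            // num_pending cover only messages published on that subject.
            ConsumerConfiguration cc = ConsumerConfiguration.builder()
                    .durable("monitor-a")
                    .filterSubject("subject.a")
                    .build();
            jsm.addOrUpdateConsumer("MY_STREAM", cc);

            ConsumerInfo info = jsm.getConsumerInfo("MY_STREAM", "monitor-a");
            System.out.println("pending for subject.a: " + info.getNumPending());
        }
    }
}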
I am trying to build an SSE Spring application using WebFlux. According to the documentation, a message is not sent to the sink if there is no subscriber. In my use case, I would like a subscriber to receive the last message at the moment it subscribes. I have found that a sink can be configured in the following way:
Sinks.many().replay().latest();
When I have both a publisher and a subscriber, and the next subscriber subscribes, it receives the last sent message, which is great. However, if I don't have any subscribers, the publisher sends a message, and then the first subscriber comes in, it receives nothing. That is exactly what the documentation above says, but I am wondering how to solve that issue to meet my needs. As a workaround I did something like this:
if (shareSinks.currentSubscriberCount() == 0) {
    shareSinks.asFlux().subscribe();
}
shareSinks.tryEmitNext(shareDTO);
But subscribing the publisher to its own sink doesn't sound like a clean way to do this...
This is a matter of hot and cold publishers. Currently, your publisher (Sinks.many().replay().latest()) behaves like a cold publisher: events that are emitted while there is no subscriber simply vanish.
What you need is a so-called hot publisher. A hot publisher caches the events, and a new subscriber will receive all previously cached events.
This will do the trick:
final Sinks.Many<String> shareSinks = Sinks.many()
        .replay()
        .all(); // or .limit(10); to keep only the last 10 emissions

final Flux<String> hotPublisher = shareSinks.asFlux()
        .cache(); // .cache() turns the cold flux into a hot flux

shareSinks.tryEmitNext("1");
shareSinks.tryEmitNext("2");
shareSinks.tryEmitNext("3");
shareSinks.tryEmitNext("4");

hotPublisher.subscribe(message -> System.out.println("received: " + message));
The console printout would be:
received: 1
received: 2
received: 3
received: 4
The Reactor documentation also has a chapter on hot vs. cold publishers.
I am following this guide: https://spring.io/guides/gs/messaging-jms/
I have a few messages with higher priority that need to be sent before any other message.
I have already tried the following:
jmsTemplate.execute(new ProducerCallback<Object>() {
    @Override
    public Object doInJms(Session session, MessageProducer producer) throws JMSException {
        Message hello1 = session.createTextMessage("Hello1");
        producer.send(hello1, DeliveryMode.PERSISTENT, 0, 0); // <- low priority
        Message hello2 = session.createTextMessage("Hello2");
        producer.send(hello2, DeliveryMode.PERSISTENT, 9, 0); // <- high priority
        return null;
    }
});
But the messages arrive in the order they were sent in the code. What am I missing here?
Thank you.
There are a number of factors that can influence the arrival order of messages when using priority. The first question is whether you enabled priority support on the broker, and the second is whether a consumer was online at the time you sent the messages.
There are many good resources for information on using prioritized messages with ActiveMQ; here is one. Keep in mind that if there is an active consumer online when you send those messages, the broker will just dispatch them as they arrive, and the consumer will of course process them in that order.
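For reference, priority support in ActiveMQ is enabled per destination through a policy entry (prioritizedMessages="true" in activemq.xml). Below is a minimal sketch of the same setting using an embedded broker; the wildcard queue pattern and connector URL are assumptions:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class PriorityBroker {
    public static void main(String[] args) throws Exception {
        // Equivalent to <policyEntry queue=">" prioritizedMessages="true"/>
        // in activemq.xml: enable priority support for all queues.
        PolicyEntry entry = new PolicyEntry();
        entry.setQueue(">");
        entry.setPrioritizedMessages(true);

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(entry);

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(policyMap);
        broker.addConnector("tcp://localhost:61616");
        broker.start();
    }
}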
The situation is the following.
We have set up SSL + ACLs on the Kafka broker.
We are setting up a stream which reads messages from two topics:
KStream<String, String> stringInput
= kBuilder.stream( STRING_SERDE, STRING_SERDE, inTopicName );
stringInput
.filter( streamFilter::passOrFilterMessages )
.map( processor )
.to( outTopicName );
This is done twice (in a loop).
Then we set a general error handler:
streams.setUncaughtExceptionHandler( ( Thread t, Throwable e ) -> {
    synchronized ( this ) {
        LOG.fatal( ... );
        this.stop();
    }
} );
The problem is the following: if, for example, the certificate for one topic is no longer valid, the stream throws the exception Not authorized to access topics ...
So far so good.
But the exception is handled by the general error handler, so the complete application stops even if the second topic has no problems.
The question is: how can this exception be handled per topic?
How can we avoid the situation where the complete application stops because one single topic has authorization problems?
I understand that if the broker is not available, the complete app may stop. But if only one topic is not available, shouldn't only its stream stop, rather than the complete application?
By design, Kafka Streams treats the topology as one unit and cannot distinguish between both parts. For your specific case, as you loop and build two independent pipelines, you could run two KafkaStreams instances in parallel (within the same application/JVM) to isolate them from each other. Thus, if one fails, the other one is not affected. You would need to use two different application.id values for the two instances.
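A minimal sketch of that setup (topic names, bootstrap server, and the handler body are assumptions):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class TwoPipelines {
    public static void main(String[] args) {
        KafkaStreams streamsA = buildPipeline("pipeline-a", "inTopicA", "outTopicA");
        KafkaStreams streamsB = buildPipeline("pipeline-b", "inTopicB", "outTopicB");
        streamsA.start();
        streamsB.start();
    }

    private static KafkaStreams buildPipeline(String appId, String inTopic, String outTopic) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, appId); // distinct per instance
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream(inTopic).to(outTopic);

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        // Each instance gets its own handler, so a fatal error (e.g. an
        // authorization failure on one topic) stops only this pipeline;
        // close() runs on a fresh thread to avoid joining ourselves.
        streams.setUncaughtExceptionHandler((Thread t, Throwable e) ->
                new Thread(streams::close).start());
        return streams;
    }
}

With this split, an authorization error on inTopicA kills only streamsA, while streamsB keeps processing.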
I'm new to ActiveMQ and trying to find anything that explicitly outlines how JMSMessageID behaves with durable subscribers and selectors, but I am struggling to find much.
As an example, take JMSType = 'car' AND color = 'blue' AND weight > 2500 as a selector. Each subscriber will only receive messages from the topic where the criteria match. When each receives said messages, is the JMSMessageID unique for each subscriber, or is it unique for the entire topic before it was filtered by the selector?
If not, is there a way I can get the JMSMessageID to be unique for each subscriber so that it can be used as a form of sequence number, using a custom message ID layout: 1, 2, 3... ad infinitum?
The message ID is set by the producer at the time of send; the broker passes along a copy of the message to each topic subscription (durable or not) with the message ID it was sent with. You cannot alter the ID: the broker uses that value to track the message and ensure that it is preserved until each subscription that it was dispatched to or stored for has acknowledged it.
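Since the JMSMessageID cannot serve as a per-subscriber sequence, one common workaround (a sketch, not something the answer above prescribes; the property name, topic, and broker URL are assumptions) is to stamp an application-level sequence property on the producer side:

import java.util.concurrent.atomic.AtomicLong;
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SequencedProducer {
    private static final AtomicLong SEQUENCE = new AtomicLong();

    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createTopic("cars"));

        TextMessage message = session.createTextMessage("a blue car");
        message.setStringProperty("color", "blue");
        // JMSMessageID is assigned once at send time and is identical on every
        // subscription's copy; this application property carries our own
        // monotonically increasing sequence instead.
        message.setLongProperty("appSequence", SEQUENCE.incrementAndGet());
        producer.send(message);

        connection.close();
    }
}

Note that with selectors in play, the sequence each subscriber observes will have gaps, since filtered-out messages still consume sequence numbers.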
I have the following scenario:
There are 3 RabbitMQ queues to which producers push their messages based on the priority of the message (myqueue_high, myqueue_medium, myqueue_low).
I want to have a single consumer which can pull from these queues in order of priority, i.e. it keeps pulling from the high queue as long as messages are there; otherwise it pulls from medium, and if medium is also empty, it pulls from low.
How do I achieve this? Do I need to write a custom component?
It would be easier to put all the messages into one queue, but with different priorities. That way, the priority sorting would be done in the broker and the Camel consumer would get the messages already sorted by priority. However, RabbitMQ implements the FIFO principle and does not support priority handling (yet).
Solution 1
Camel allows you to reorder messages based on a comparator using a Resequencer (https://camel.apache.org/resequencer.html):
from("rabbitmq://hostname[:port]/myqueue_high")
.setHeader("priority", constant(9))
.to("direct:messageProcessing");
from("rabbitmq://hostname[:port]/myqueue_medium")
.setHeader("priority", constant(5))
.to("direct:messageProcessing");
from("rabbitmq://hostname[:port]/myqueue_low")
.setHeader("priority", constant(1))
.to("direct:messageProcessing");
// sort by priority, allowing duplicates (messages can have the same priority)
// and use reverse ordering so 9 is output first (most important) and 0 last
// (of course we could have set the priority the other way around, but this way
// we stay aligned with the JMS specification...)
// use batch mode and fire every 3 seconds
from("direct:messageProcessing")
.resequence(header("priority")).batch().timeout(3000).allowDuplicates().reverse()
.to("mock:result");
That way, all incoming messages are routed to the same sub-route (direct:messageProcessing), where the messages are reordered according to the priority header set by the incoming routes.
Solution 2
Use SEDA with a priority queue:
final PriorityBlockingQueueFactory<Exchange> priorityQueueFactory = new PriorityBlockingQueueFactory<Exchange>();
priorityQueueFactory.setComparator(new Comparator<Exchange>() {
    @Override
    public int compare(final Exchange exchange1, final Exchange exchange2) {
        final Integer prio1 = (Integer) exchange1.getIn().getHeader("priority");
        final Integer prio2 = (Integer) exchange2.getIn().getHeader("priority");
        return -prio1.compareTo(prio2); // 9 has higher priority than 0
    }
});
final SimpleRegistry registry = new SimpleRegistry();
registry.put("priorityQueueFactory", priorityQueueFactory);
final ModelCamelContext context = new DefaultCamelContext(registry);
// configure and start your context here... (see the sketch after the route definitions below)
The route definitions:
from("rabbitmq://hostname[:port]/myqueue_high")
.setHeader("priority", constant(9))
.to("seda:priority?queueFactory=#priorityQueueFactory"); // reference queue in registry
from("rabbitmq://hostname[:port]/myqueue_medium")
.setHeader("priority", constant(5))
.to("seda:priority?queueFactory=#priorityQueueFactory");
from("rabbitmq://hostname[:port]/myqueue_low")
.setHeader("priority", constant(1))
.to("seda:priority?queueFactory=#priorityQueueFactory");
from("seda:priority")
.to("direct:messageProcessing");
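For completeness, a minimal sketch of registering those routes and starting the context (it assumes the route definitions above are placed inside configure()):

context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        // the three rabbitmq consumer routes and the seda:priority
        // route shown above go here
    }
});
context.start();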
Solution 3
Use JMS, such as Camel's ActiveMQ component, instead of SEDA if you need persistence in case of failures. Just forward the incoming messages from RabbitMQ to a JMS destination, setting the JMSPriority header, as sketched below.
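A sketch of such a bridge (endpoint and queue names are assumptions; preserveMessageQos=true makes the Camel JMS producer honor the per-message JMSPriority header instead of the endpoint default, and the broker must have priority support enabled as discussed earlier):

from("rabbitmq://hostname[:port]/myqueue_high")
    .setHeader("JMSPriority", constant(9))
    .to("activemq:queue:prioritized?preserveMessageQos=true");

from("rabbitmq://hostname[:port]/myqueue_medium")
    .setHeader("JMSPriority", constant(5))
    .to("activemq:queue:prioritized?preserveMessageQos=true");

from("rabbitmq://hostname[:port]/myqueue_low")
    .setHeader("JMSPriority", constant(1))
    .to("activemq:queue:prioritized?preserveMessageQos=true");

// a single consumer then receives the messages in priority order
from("activemq:queue:prioritized")
    .to("direct:messageProcessing");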
Solution 4
Skip RabbitMQ entirely and just use a JMS broker such as ActiveMQ that supports prioritization.