I am continuously listening on Redis Streams using the Spring reactive API (with the Lettuce driver) over a standalone connection. It seems like the reactor event loop opens a new connection every time it reads messages instead of keeping the connection open: I see a lot of ports in TIME_WAIT on my machine when I run the program. Is this normal? Is there a way to tell Lettuce to re-use the connection instead of reconnecting every time?
This is my code:
StreamReceiver<String, MapRecord<String, String, String>> receiver = StreamReceiver.create(factory);
return receiver
        .receive(Consumer.from(keyCacheStreamsConfig.getConsumerGroup(), keyCacheStreamsConfig.getConsumer()),
                StreamOffset.create(keyCacheStreamsConfig.getStreamName(), ReadOffset.lastConsumed()))
        // flatMap reads 256 messages by default and processes them on the given scheduler
        .flatMap(record -> Mono.fromCallable(() -> consumer.consume(record)).subscribeOn(Schedulers.boundedElastic()))
        .doOnError(t -> {
            log.error("Error processing.", t);
            streamConnections.get(nodeName).setDirty(true);
        })
        .onErrorContinue((err, elem) -> log.error("Error processing message. Continue listening."))
        .subscribe();
It turns out spring-data-redis re-uses the connection only if the poll timeout in the stream receiver options is set to 0 and the options are passed as the second argument to StreamReceiver.create(factory, options). I figured this out by reading the spring-data-redis source code.
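For reference, a minimal sketch of that fix, assuming the same factory and record types as above (per the above, a poll timeout of Duration.ZERO makes the receiver block on the existing connection instead of timing out and polling again):

StreamReceiver.StreamReceiverOptions<String, MapRecord<String, String, String>> options =
        StreamReceiver.StreamReceiverOptions.builder()
                // a poll timeout of 0 disables the per-read timeout, so the
                // receiver keeps blocking on a single connection
                .pollTimeout(Duration.ZERO)
                .build();

StreamReceiver<String, MapRecord<String, String, String>> receiver =
        StreamReceiver.create(factory, options);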
Related
I am developing a system that has to communicate with 18 different subsystems.
All 18 subsystems are UDP clients. I have created a UDP server.
I'm using recvfrom to receive data from these 18 subsystems.
char buf[1000];
int  buf_len = 1000;
int  sockfd;
struct sockaddr_in client_addr;
int  sock_addr_size = sizeof(client_addr);
int  bytes_read;

//Code to create socket
//Code to configure socket
//Code to bind socket

FOREVER
{
    bytes_read = recvfrom(sockfd, (void *)buf, buf_len, 0,
                          (struct sockaddr *)&client_addr, &sock_addr_size);
    //Spawn new task to process data
}
I have three options for processing the received data:
1. Process the data immediately after receiving it. This approach is not feasible, as it increases the latency of message processing and the system loses its deterministic hard real-time capabilities.
2. Spawn a new task after receiving new data. This new task processes the incoming data and forwards the processed data to the appropriate task that consumes it.
3. Create multiple tasks, each running recvfrom on the same socket; each task processes data immediately after receiving it and forwards the processed data to the appropriate consuming task.
I am more inclined towards option 3. I would like to know whether VxWorks allows calling recvfrom from multiple disjoint tasks on the same server socket, or whether this will cause complications.
I'm running a Quarkus server that streams large datasets to clients. During processing of a dataset an error can occur, and I'm unsure how best to handle the situation.
@GET
@Path("{fileName}/example")
@Produces(MediaType.APPLICATION_JSON)
fun example(@PathParam("fileName") fileName: String): Multi<Int> {
    return Multi.createFrom().iterable((0..10)).map { if (it != 4) it else throw IllegalArgumentException() }
}
Without any changes, this streams "[1,2,3" and then stops without closing the connection (curl hangs). I can handle the issue with .onFailure().recoverWithCompletion(), but that closes the stream cleanly (resulting in [1,2,3]). Is there any way to close the connection but leave the response malformed? I need a way to communicate to downstream clients that the stream of data is not healthy.
I'm quite new to the reactive world and I'm using Spring WebFlux + Reactor Kafka.
kafkaReceiver
        .receive()
        // .publishOn(Schedulers.boundedElastic())
        .doOnNext(a -> log.info("Reading message: {}", a.value()))
        .concatMap(kafkaRecord ->
                // perform DB operation, then
                // kafkaRecord.receiverOffset().acknowledge()
        )
        .doOnError(e -> log.error("Error", e))
        .retry()
        .subscribe();
I understand that in order to parallelise message consumption I have to instantiate one KafkaReceiver per partition, but is it possible/recommended for a partition to read messages synchronously and process them asynchronously (including the manual acknowledge)?
So that this is the desired output:
Reading message:1
Reading message:2
Reading message:3
Reading message:4
Stored message 1 in DB + ack
Reading message:5
Stored message 2 in DB + ack
Stored message 5 in DB + ack
Stored message 3 in DB + ack
Stored message 4 in DB + ack
In case of errors, I'm thinking of publishing the record to a DLT.
I've tried with flatMap too, but it seems that the entire processing happens sequentially on a single thread. Also, if I publish on a new scheduler, the processing happens on a new single thread.
If what I'm asking is possible, can someone please help me with a code snippet?
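If it helps, here is a minimal sketch of one way this is commonly done (not from the original post; saveToDb is a placeholder for a blocking DB call that returns the stored entity): flatMap gets a concurrency hint, each record's work is shifted to boundedElastic, and the acknowledge happens per record once the DB call succeeds:

kafkaReceiver
        .receive()
        .doOnNext(rec -> log.info("Reading message: {}", rec.value()))
        // process up to 4 records concurrently; the blocking DB call runs on
        // boundedElastic so the receive loop keeps reading
        .flatMap(rec -> Mono.fromCallable(() -> saveToDb(rec.value()))
                        .subscribeOn(Schedulers.boundedElastic())
                        .doOnSuccess(v -> rec.receiverOffset().acknowledge()),
                4)
        .doOnError(e -> log.error("Error", e))
        .retry()
        .subscribe();

Note that out-of-order acknowledgement can lead to replays after a restart, so pair this with idempotent writes or the DLT you mention.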
What's the output of your current code's log?
I have a plain REST controller:
private final KafkaReceiver<String, Domain> receiver;

@GetMapping(produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<Domain> produceFluxMessages() {
    return receiver.receive().map(ConsumerRecord::value)
            .timeout(Duration.ofSeconds(2));
}
What I am trying to achieve is to collect messages from a Kafka topic for a certain period of time, then stop consuming and consider the Flux completed. If I remove the timeout and open this in a browser, I get messages forever and the download never stops. With this timeout, consuming stops after 2 seconds, but I'm getting an exception:
java.util.concurrent.TimeoutException: Did not observe any item or terminal signal within 2000ms in 'map' (and no fallback has been configured)
Is there a way to successfully complete Flux after timeout?
There are multiple overloads of the timeout() method - you're using the standard one that throws an exception on timeout.
Instead, just use the overloaded timeout method to provide an empty default publisher to fallback to:
timeout(Duration.ofSeconds(2), Mono.empty())
(Note that in the general case you could explicitly catch the TimeoutException and fall back to an empty publisher using onErrorResume(TimeoutException.class, e -> Mono.empty()), but that's much less preferable to the above option where possible.)
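Applied to the controller above, that would look like this (same receiver and Domain as in the question):

@GetMapping(produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
public Flux<Domain> produceFluxMessages() {
    return receiver.receive().map(ConsumerRecord::value)
            // complete the Flux instead of erroring when no item arrives in time
            .timeout(Duration.ofSeconds(2), Mono.empty());
}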
To build a reliable message queue on top of Redis Streams, I am using the spring-boot-starter-data-redis-reactive and Lettuce dependencies to process messages from a Redis stream. Though I am able to add, read, ack and delete messages via a consumer group through the API available in ReactiveRedisOperations.opsForStream(), I couldn't find an API there to claim a pending message that has not been acknowledged for 5 minutes, even though it is available under this.reactiveRedisConnectionFactory
.getReactiveConnection()
.streamCommands()
.xClaim(). But I don't want boilerplate code to manage the exceptions, serialization, etc. Is there a way to claim a message using ReactiveRedisOperations.opsForStream()?
https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/core/ReactiveStreamOperations.html
Without spring-data-redis, using the Lettuce client library directly, I am able to get the pending messages as well as claim a message, as below:
public Flux<PendingMessage> getPendingMessages(PollMessage pollMessage, String queueName) {
    // keep messages that have not yet exceeded the retry limit
    Predicate<PendingMessage> poisonMessage = pendingMessage ->
            (pendingMessage.getTotalDeliveryCount() <= maxRetries);
    // keep messages whose last delivery is older than the ack timeout
    Predicate<PendingMessage> nackMessage = pendingMessage ->
            (pendingMessage.getElapsedTimeSinceLastDelivery().compareTo(Duration.ofMillis(ackTimeout)) > 0);
    return statefulRedisClusterConnection.reactive()
            .xpending(queueName, pollMessage.getConsumerGroupName(), Range.unbounded(), Limit.from(1000))
            .collectList()
            .map((it) -> ((PendingMessages) PENDING_MESSAGES_CONVERTER
                    .apply(it, pollMessage.getConsumerGroupName()))
                    .withinRange(org.springframework.data.domain.Range.unbounded()))
            .flatMapMany(Flux::fromIterable)
            .filter(nackMessage)
            .filter(poisonMessage)
            .limitRequest(pollMessage.getBatchSize());
}
To claim the message, I again used the API available in the Lettuce library:
public Flux<StreamMessage<String, String>> claimMessage(PendingMessage pendingMessage, String queueName,
        String groupName, String serviceName) {
    return statefulRedisClusterConnection.reactive()
            .xclaim(queueName, Consumer.from(groupName, serviceName), 0, pendingMessage.getIdAsString());
}
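For completeness, the two methods chain naturally; a sketch using only the methods above (the queue, group and service names are placeholders) that re-claims every message that has exceeded the ack timeout:

getPendingMessages(pollMessage, "my-queue")
        // transfer ownership of each timed-out message to this consumer
        .flatMap(pending -> claimMessage(pending, "my-queue", "my-group", "my-service"))
        .subscribe(msg -> log.info("Re-claimed message {}", msg.getId()));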
At the moment, getting pending messages from Redis through spring-data has issues, hence I have used the Lettuce library directly to get a pending message and claim it.
https://jira.spring.io/browse/DATAREDIS-1160