Google Pub/Sub delay message per ordering key - Kotlin

I have successfully subscribed to messages from Google Pub/Sub.
My Pub/Sub topic has an ordering key.
What I want to achieve is that I should only read 1 message per ordering key, and then add a delay before consuming the next one.
Here is what I have written:
import com.google.cloud.pubsub.v1.MessageReceiver
import com.google.cloud.pubsub.v1.Subscriber
import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch

val receiver = MessageReceiver { message, consumer ->
    GlobalScope.launch {
        delay(10 * 1000L)
        println(message.data.toStringUtf8())
        consumer.ack()
    }
}

subscriber = Subscriber.newBuilder(subscriptionName, receiver).build()
subscriber!!.startAsync().awaitRunning()
When I publish 5 messages with the same ordering key at once to Pub/Sub, here is the result:
after 60 seconds, it prints the 1st message;
after the next 60 seconds, it prints the remaining 4 messages at once.
How should I change it so that in the next 60 seconds it also only reads 1 message per ordering key?
P.S. It seems like replacing GlobalScope.launch and delay with Thread.sleep solves the issue, but can't I use a non-blocking delay/sleep?
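For reference, a minimal sketch of the blocking workaround mentioned in the P.S. (same names as in the snippet above); it keeps the receiver callback busy for the whole delay before acking, which, per the observation in the P.S., is what paces delivery to one message per ordering key per interval:

import com.google.cloud.pubsub.v1.MessageReceiver

// Blocking variant from the P.S.: the callback thread sleeps, so the message
// is only acked after the delay, and the next message for the same ordering
// key is not handed over in the meantime (per the behaviour described above).
val blockingReceiver = MessageReceiver { message, consumer ->
    Thread.sleep(10 * 1000L) // blocking delay instead of a coroutine delay
    println(message.data.toStringUtf8())
    consumer.ack()
}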

Related

Spring Webflux - initial message without subscriber

I am trying to make an SSE Spring application using Webflux. According to the documentation, a message is not sent to the sink if there is no subscriber. In my use case, I would like the subscriber to receive the last message when it calls for a subscription. I have found that the sink can be configured in the following way:
Sinks.many().replay().latest();
And when I have both a publisher and a subscriber, and the next subscriber calls for a subscription, it receives the last sent message, which is great. However, if I don't have any subscribers yet, the publisher sends the message, and when the first subscriber then comes in, it receives nothing. This is just as the documentation above says, actually, but I am wondering how to solve that issue to meet my needs. As a workaround I did something like this:
if (shareSinks.currentSubscriberCount() == 0) {
    shareSinks.asFlux().subscribe();
}
shareSinks.tryEmitNext(shareDTO);
But subscribing the publisher to its own subscription doesn't sound like a clean way to do this...
This is a matter of hot and cold publishers. Currently, your publisher (Sinks.many().replay().latest()) is a cold publisher. Events that are emitted while there is no subscriber will just vanish.
What you need is a so-called hot publisher. Hot publishers cache the events, and a new subscriber will receive all previously cached events.
This will do the trick:
final Sinks.Many<String> shareSinks = Sinks.many()
        .replay()
        .all(); // or .limit(10); to keep only the last 10 emissions

final Flux<String> hotPublisher = shareSinks.asFlux()
        .cache(); // .cache() turns the cold flux into a hot flux

shareSinks.tryEmitNext("1");
shareSinks.tryEmitNext("2");
shareSinks.tryEmitNext("3");
shareSinks.tryEmitNext("4");

hotPublisher.subscribe(message -> System.out.println("received: " + message));
The console print out would be:
received: 1
received: 2
received: 3
received: 4
The Reactor documentation also has a chapter on hot vs. cold publishers.

Performance test on 1000 messages in a RabbitMQ queue using Gatling

I have 2 RabbitMQ queues: one is the message queue and the other is the result queue. I want to test the performance of the message queue.
When I send a list of JSON messages to the message queue one by one, each message is processed and then I get a response in the result queue in this format:
for one message, 5 response messages
{id:1234,status:processing}
{id:1234,status:processing}
{id:1234,status:processing}
{id:1234,status:processing}
{id:1234,status:complete}
Now I want to check for the complete status and find the time required to get a response with the complete status.
This is the scenario:
val scn: ScenarioBuilder = scenario("test1")
  .exec { s: Session =>
    val list = data.toList // data is the list of messages (they can be from 100 to 1000 or more)
    s.set("list", list)
  }
  .foreach(s => s("list").as[Seq[String]], "message") {
    exec(
      amqp("publish message to exchange").requestReply
        .queueExchange("message")
        .replyExchange("result")
        .textMessage("${message}")
        .priority(0)
        .contentType("application/json")
        .headers("test" -> "performance")
        .check(
          jsonPath("$.id").exists,
          jsonPath("$.id").saveAs("co-id"),
          jsonPath("$.status").is("Complete"), // this one fails with error msg: found "processing"
          jsonPath("$.status").saveAs("status")
        )
    ).pause(20)
      .exec({ session =>
        val status = session("status").as[String]
        val coId = session("co-id").as[String]
        println("Response body:::", coId, status)
        session
      }).pause(5)
  }.pause(10)
Take a look at https://github.com/TinkoffCreditSystems/gatling-amqp-plugin/blob/master/src/test/scala/ru/tinkoff/gatling/amqp/examples/RequestReplyWithOwnMatchingExample.scala. There you will find an example of how to write a test with your own matching method.
If I understand your question correctly, you could return from the "matchByMessage" method:
for sent messages, something like messageId + "_" + "complete";
for received messages, you have to extract the status from the message.
If you don't want to run the plugin in debug mode locally, you could print the message parameter from the method.

SpringAMQP delay

I'm having trouble identifying a way to delay messages at the message level in Spring AMQP.
I call a web service; if the service is not available or throws an exception, I store all the requests in a RabbitMQ queue and keep retrying the service call until it executes successfully. If the service keeps throwing an error or is not available, the RabbitMQ listener keeps looping (meaning the listener retrieves the message, makes the service call, and on any error re-queues the message).
I restricted the looping to X hours using a MessagePostProcessor; however, I wanted to enable a delay at the message level that grows every time it tries to access the service. For example, a 3000 ms delay on the 1st try, 6000 ms on the second, and so on, until I have tried X number of times.
It would be great if you could provide a few examples.
Could you please give me some ideas on this?
Well, it isn't possible the way you are doing it.
Message re-queuing is fully similar to a transaction rollback, where the system returns to the state before the exception. So you definitely can't modify a message on its way back to the queue.
You should probably take a look at the Spring Retry project for the same reason: poll the message from the queue only once and retry in memory until a successful answer or until the retry policy is exhausted. In the end you can just drop the message or move it to a DLQ.
See more info in the Reference Manual.
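A minimal Kotlin sketch of that in-memory approach using Spring Retry's RetryTemplate; the 3000 ms initial delay and doubling multiplier mirror the progression asked for in the question, and callService() is a hypothetical placeholder, not part of the original answer:

import org.springframework.retry.backoff.ExponentialBackOffPolicy
import org.springframework.retry.policy.SimpleRetryPolicy
import org.springframework.retry.support.RetryTemplate

// Placeholder for the actual web-service call; assumed to throw on failure.
fun callService(payload: String) { /* ... */ }

// Build a RetryTemplate that retries in memory with a growing delay
// (3000 ms, then 6000 ms, then 12000 ms, ...) instead of re-queuing the message.
fun buildRetryTemplate(maxAttempts: Int): RetryTemplate {
    val template = RetryTemplate()
    template.setRetryPolicy(SimpleRetryPolicy(maxAttempts))
    template.setBackOffPolicy(ExponentialBackOffPolicy().apply {
        initialInterval = 3000L // delay before the first retry
        multiplier = 2.0        // 6000 ms, 12000 ms, ...
    })
    return template
}

// Hypothetical listener body: retries callService() until it succeeds
// or the retry policy is exhausted.
fun handleMessage(payload: String) {
    buildRetryTemplate(maxAttempts = 5).execute<Unit, Exception> {
        callService(payload)
    }
}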
I added a CustomExchange for delayed messages:
@Bean
CustomExchange delayExchange() {
    Map<String, Object> args = new HashMap<>();
    args.put("x-delayed-type", "direct");
    return new CustomExchange("delayed-exchange", "x-delayed-message", true, false, args);
}
Added a MessagePostProcessor:
if (message.getMessageProperties().getHeaders().get("x-delay") == null) {
    message.getMessageProperties().setHeader("x-delay", 10000);
} else {
    Integer integer = (Integer) message.getMessageProperties().getHeaders().get("x-delay");
    if (integer < 60000) {
        integer = integer + 10000;
        message.getMessageProperties().setHeader("x-delay", integer);
    }
}
Per the snippet above, the first time it delays 10 seconds and adds 10 seconds on each retry until it reaches 60 seconds. This should be configurable.
And finally send the message:
rabbitTemplate.convertAndSend("delayed-exchange", queueName, message, rabbitMQMessagePostProcessor);
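For completeness, the delayed exchange still needs a queue bound to it with the routing key used in convertAndSend. A Kotlin sketch, where the "retry-queue" name (used as both queue name and routing key) is purely illustrative:

import org.springframework.amqp.core.Binding
import org.springframework.amqp.core.BindingBuilder
import org.springframework.amqp.core.CustomExchange
import org.springframework.amqp.core.Queue
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
class DelayedQueueConfig {

    // Durable queue that receives messages once their x-delay has elapsed.
    @Bean
    fun delayedQueue(): Queue = Queue("retry-queue", true)

    // Bind the queue to the "delayed-exchange" declared above, using the queue
    // name as routing key (it must match the one passed to convertAndSend).
    @Bean
    fun delayedBinding(delayExchange: CustomExchange, delayedQueue: Queue): Binding =
        BindingBuilder.bind(delayedQueue).to(delayExchange).with("retry-queue").noargs()
}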

Query for Number of Messages in Mule ESB VM Queue

In a Mule flow, I would like to add an Exception Handler that forwards messages to a "retry queue" when there is an exception. However, I don't want this retry logic to run automatically. Instead, I'd rather receive a notification so I can review the errors and then decide whether to retry all messages in the queue or not.
I don't want to receive a notification for every exception. I'd rather have a scheduled job that runs every 15 minutes, checks whether there are messages in this retry queue, and only sends the notification if there are.
Is there any way to determine how many messages are currently in a persistent VM queue?
Assuming you use the default VM queue persistence mechanism and that the VM connector is named vmConnector, you can do this:
final String queueName = "retryQueue";
int messageCount = 0;

final VMConnector vmConnector = (VMConnector) muleContext.getRegistry()
    .lookupConnector("vmConnector");

for (final Serializable key : vmConnector.getQueueProfile().getObjectStore().allKeys())
{
    final QueueKey queueKey = (QueueKey) key;
    if (queueName.equals(queueKey.queueName))
    {
        messageCount++;
    }
}

System.out.printf("Queue %s has %d pending messages%n", queueName, messageCount);

RabbitMQ Wait for a message with a timeout

I'd like to send a message to a RabbitMQ server and then wait for a reply message (on a "reply-to" queue). Of course, I don't want to wait forever in case the application processing these messages is down - there needs to be a timeout. It sounds like a very basic task, yet I can't find a way to do this. I've now run into this problem with the Java API.
The RabbitMQ Java client library now supports a timeout argument to its QueueingConsumer.nextDelivery() method.
For instance, the RPC tutorial uses the following code:
channel.basicPublish("", requestQueueName, props, message.getBytes());

while (true) {
    QueueingConsumer.Delivery delivery = consumer.nextDelivery();
    if (delivery.getProperties().getCorrelationId().equals(corrId)) {
        response = new String(delivery.getBody());
        break;
    }
}
Now, you can use consumer.nextDelivery(1000) to wait for a maximum of one second. If the timeout is reached, the method returns null.
channel.basicPublish("", requestQueueName, props, message.getBytes());

while (true) {
    // Use a timeout of 1000 milliseconds
    QueueingConsumer.Delivery delivery = consumer.nextDelivery(1000);
    // Test if delivery is null, meaning the timeout was reached.
    if (delivery != null &&
            delivery.getProperties().getCorrelationId().equals(corrId)) {
        response = new String(delivery.getBody());
        break;
    }
}
com.rabbitmq.client.QueueingConsumer has a nextDelivery(long timeout) method, which will do what you want. However, it has been deprecated.
Writing your own timeout isn't so hard (see the sketch after this answer), although it may be better to have an ongoing thread and a list of in-time identifiers, rather than adding and removing consumers and associated timeout threads all the time.
Edit to add: Noticed the date on this after replying!
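Since QueueingConsumer is deprecated, here is a rough Kotlin sketch of the "write your own timeout" idea with the current client API: a DefaultConsumer hands deliveries to a BlockingQueue and the caller polls it against a deadline. This is the simple per-call variant the answer above notes may be less efficient than one long-lived consumer; the reply queue name and timeout are assumptions of the sketch.

import com.rabbitmq.client.AMQP
import com.rabbitmq.client.Channel
import com.rabbitmq.client.DefaultConsumer
import com.rabbitmq.client.Envelope
import java.util.concurrent.ArrayBlockingQueue
import java.util.concurrent.TimeUnit

// Waits up to timeoutMs for a reply whose correlation id matches corrId.
// Returns the reply body as a String, or null if the timeout is reached.
fun waitForReply(channel: Channel, replyQueue: String, corrId: String, timeoutMs: Long): String? {
    val replies = ArrayBlockingQueue<Pair<String?, ByteArray>>(16)
    val tag = channel.basicConsume(replyQueue, true, object : DefaultConsumer(channel) {
        override fun handleDelivery(
            consumerTag: String,
            envelope: Envelope,
            properties: AMQP.BasicProperties,
            body: ByteArray
        ) {
            // Hand each delivery (correlation id + body) to the waiting caller.
            replies.offer(properties.correlationId to body)
        }
    })
    try {
        val deadline = System.currentTimeMillis() + timeoutMs
        while (true) {
            val remaining = deadline - System.currentTimeMillis()
            if (remaining <= 0) return null // timed out
            val (id, body) = replies.poll(remaining, TimeUnit.MILLISECONDS) ?: return null
            if (id == corrId) return String(body) // matching reply received
        }
    } finally {
        channel.basicCancel(tag) // clean up the temporary consumer either way
    }
}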
There is a similar question. Although its answers don't use Java, maybe you can get some hints from it:
Wait for a single RabbitMQ message with a timeout
I approached this problem using C# by creating an object to keep track of the response to a particular message. It sets up a unique reply queue for a message and subscribes to it. If the response is not received within a specified timeframe, a countdown timer cancels the subscription, which deletes the queue. Separately, I have methods that can be called synchronously from my main thread (using a semaphore) or asynchronously (using a callback) to make use of this functionality.
Basically, the implementation looks like this:
// Synchronous case:
// Throws TimeoutException if a timeout happens
var msg = messageClient.SendAndWait(theMessage);

// Asynchronous case:
// myCallback receives an exception message if there is a timeout
messageClient.SendAndCallback(theMessage, myCallback);