Will Ignite Queue's removeAll work properly in multithreading?

We are trying to remove data from IgniteQueue in parallel using multithreading.
Below is the code that is called in individual threads.
import org.apache.ignite.configuration.{CollectionConfiguration, IgniteConfiguration}
import org.apache.ignite.{IgniteQueue, Ignition}
import org.apache.spark.sql.Row
import scala.collection.JavaConverters._

def dequeue(array: Array[Row]): Unit = {
  val icfg = new IgniteConfiguration
  icfg.setClientMode(true)
  val ignite = Ignition.getOrStart(icfg)
  val igniteQueue: IgniteQueue[Row] = ignite.queue("Ignite", 0, new CollectionConfiguration)
  igniteQueue.removeAll(array.toSeq.asJava) // removeAll expects a java.util.Collection
}
Dequeueing is not happening properly.
Do I need to change the configuration?
Testing:
Queue count: 7588
Instead of multithreading, we first ran removeAll sequentially in a loop for testing:
dataFrame.take(3).foreach(row => {
  var array = /*array forming logic*/
  dequeue(array)
})
Iteration 1 : 166
Post removeAll Queue Size : 7422
Iteration 2 : 192
Post removeAll Queue Size : 7230
Iteration 3 : 185
Post removeAll Queue Size : 7045
Now with multithreading:
dataFrame.take(3).foreach(row => {
  var array = /*array forming logic*/
  val dequeueThread: ThreadDequeue = new ThreadDequeue(array)
  pool.submit(dequeueThread)
})
The ThreadDequeue class will have the dequeue method; a rough sketch is shown below.
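A minimal sketch of what that wrapper and pool might look like (the Runnable-based implementation and the pool size are assumptions; the actual ThreadDequeue code was not posted):

import java.util.concurrent.{ExecutorService, Executors}

// hypothetical wrapper: each submitted task just delegates to the dequeue method above
class ThreadDequeue(array: Array[Row]) extends Runnable {
  override def run(): Unit = dequeue(array)
}

// pool.submit(...) above assumes an executor along these lines
val pool: ExecutorService = Executors.newFixedThreadPool(3)

With multithreading, the results were: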
Thread 1 : 166
Post removeAll Queue Size : 7416
Thread 2 : 192
Post removeAll Queue Size : 7414
Thread 3 : 185
Post removeAll Queue Size : 7259
When executed sequentially, the dequeue works fine, but with multithreading the dequeued counts do not match.
Thanks in Advance.

Related

How to send a message with priority to RabbitMQ with StreamBridge

I'm using RabbitMQ. I've defined a queue with priority, and I can send messages to this queue with a priority value using the RMQ GUI, and consumers also get the messages in sorted order. But when I try to send the message from my Java code using StreamBridge, I don't know how to specify the priority with the message.
Here's what I have tried:
I have added x-max-priority: 10 to the queue while creating the queue.
Consumer example =
@Bean
public Consumer<Message<String>> testListener() {
    return (m) -> {
        System.out.println("inside consumer with message : " + m);
        System.out.println("headers : " + m.getHeaders());
        System.out.println("payload : " + m.getPayload());
    };
}
Producer example =
@GET
@Path("test/")
public void test(@Context HttpServletRequest request) {
    System.out.println("inside test");
    try {
        String payload = "hello world";
        logger.info("going to send a message : {}", payload);
        int priority = 5;
        Message<String> message = MessageBuilder.withPayload(payload)
                .setHeader("priority", priority)
                .build();
        boolean res = STREAM_BRIDGE.send("testWriter-out-0", message);
        System.out.println(message);
        System.out.println(res);
    } catch (Exception e) {
        logger.error("failed to send message", e);
    }
}
The output of the Producer =
-> inside test
-> GenericMessage [payload=hello world, headers={priority=5, id=some_id, timestamp=epoch}]
-> true
The output of the Consumer =
-> inside consumer with message : GenericMessage [payload=hello world, headers={amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedExchange=test_exchange, amqp_deliveryTag=1, deliveryAttempt=1, amqp_consumerQueue=test_exchange.ats, amqp_redelivered=false, amqp_receivedRoutingKey=test_exchange, amqp_timestamp=date_time, amqp_messageId=some_id, id=some_id, amqp_consumerTag=some_tag, sourceData=(Body:'hello world' MessageProperties [headers={}, timestamp=date_time, messageId=some_id, contentType=application/json, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=test_exchange, receivedRoutingKey=test_exchange, deliveryTag=1, consumerTag=some_tag, consumerQueue=test_exchange.ats]), contentType=application/json, timestamp=epoch}]
-> headers : {amqp_receivedDeliveryMode=PERSISTENT, amqp_receivedExchange=test_exchange, amqp_deliveryTag=1, deliveryAttempt=1, amqp_consumerQueue=test_exchange.ats, amqp_redelivered=false, amqp_receivedRoutingKey=test_exchange, amqp_timestamp=date_time, amqp_messageId=some_id, id=some_id, amqp_consumerTag=tag, sourceData=(Body:'hello world' MessageProperties [headers={}, timestamp=date_time, messageId=some_id, contentType=application/json, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=test_exchange, receivedRoutingKey=test_exchange, deliveryTag=1, consumerTag=tag, consumerQueue=test_exchange.ats]), contentType=application/json, timestamp=epoch}
-> payload : hello world
So the message goes to RMQ and the consumer also gets the message, but in the RMQ GUI, when I perform the Get-message operation on the queue, I get this result:
Message 1
The server reported 0 messages remaining.
Exchange test_exchange
Routing Key test_exchange
Redelivered ○
Properties
timestamp: timestamp
message_id: some_id
priority: 0
delivery_mode: 2
headers:
content_type: application/json
Payload hello world
11 bytes
Encoding: string
As we can see in the above result, the priority is set to 0 by RMQ (and hence the Consumer gets the messages in FIFO order, not in priority order), and inside headers only one header is present, "content_type: application/json". So I think the priority is not part of the headers but part of the properties; how do I set message properties using StreamBridge?
To conclude, I am trying to figure out how to set the priority of a message dynamically while sending it using StreamBridge. Any help would be appreciated, thanks in advance!
Please consider using the latest Spring Cloud Stream: https://spring.io/projects/spring-cloud-stream#learn.
Apparently your spring-cloud-starter-stream-rabbit = 3.0.3.RELEASE is old enough to suffer from this issue: https://github.com/spring-cloud/spring-cloud-stream/issues/1931.
I have just tested with the latest one and got the proper priority property on the message posted to the RabbitMQ queue by the mentioned StreamBridge.

Retry is executed only once in Spring Cloud Stream Reactive

When I retry in Spring Cloud Stream Reactive, a situation arises that I don't understand, so I'm asking a question.
String-type data is sent once per second; after processing it in a spring-cloud-stream Function, I intentionally throw a RuntimeException under certain conditions.
@Bean
fun test(): Function<Flux<String>, Flux<String>?> = Function { input ->
    input.map { sellerId ->
        if (sellerId == "I-ZICGO")
            throw RuntimeException("intentional")
        else
            log.info("do normal: {}", sellerId)
        sellerId
    }.retryWhen(Retry.from { companion ->
        companion.map { rs ->
            if (rs.totalRetries() < 3) { // retrying 3 times
                log.info("retry!!!: {}", rs.totalRetries())
                rs.totalRetries()
            } else {
                throw Exceptions.propagate(rs.failure())
            }
        }
    })
}
However, the result of running the above logic is:
2021-02-25 16:14:29.319 INFO 36211 --- [container-0-C-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'consumer.processingSellerItem-in-0' has 0 subscriber(s).
2021-02-25 16:14:29.322 INFO 36211 --- [container-0-C-1] k.c.m.c.service.impl.ItemServiceImpl : retry!!!: 0
2021-02-25 16:14:29.322 INFO 36211 --- [container-0-C-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'consumer.processingSellerItem-in-0' has 1 subscriber(s).
Retry is processed only once.
Should I change from reactive to imperative to fix this?
In short, yes. The retry settings are meaningless for reactive functions. You can see a more detailed explanation in the similar SO question here.
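As an illustration of the kind of workaround discussed there, one common approach is to perform the per-record work (and its retries) inside an inner Mono, so a failure retries, and if necessary drops, only that record instead of cancelling the whole stream. A rough sketch using reactor-core directly, written in Scala; the handle function and all names are hypothetical, not binder-provided APIs:

import java.util.function.{Function => JFunction}
import reactor.core.publisher.{Flux, Mono}
import reactor.util.retry.Retry

// hypothetical per-record work: fails for the problematic id
def handle(sellerId: String): String =
  if (sellerId == "I-ZICGO") throw new RuntimeException("intentional")
  else sellerId

// each record gets its own Mono, so retry/onErrorResume apply per record
val perRecord: JFunction[String, Mono[String]] = (sellerId: String) =>
  Mono.fromCallable[String](() => handle(sellerId))
    .retryWhen(Retry.max(3))                               // up to 3 retries for this record
    .onErrorResume((_: Throwable) => Mono.empty[String]()) // then drop it; the stream keeps going

val process: JFunction[Flux[String], Flux[String]] =
  (input: Flux[String]) => input.concatMap[String](perRecord)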

How to recover from akka.stream.io.Framing$FramingException

On: akka-stream-experimental_2.11 1.0.
We are using Framing.delimiter in a Tcp server. When a message arrives with a length greater than maximumFrameLength, the FramingException is thrown, and we can capture it from OnError of the ActorSubscriber.
Server Code:
def bind(address: String, port: Int, target: ActorRef, maxInFlight: Int, maxFrameLength: Int)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach { conn: Tcp.IncomingConnection =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target, maxInFlight))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
      .map(raw ⇒ Message(raw))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
Subscriber code:
class TargetSubscriber(target: ActorRef, maxInFlight: Int) extends ActorSubscriber with ActorLogging {
  private var inFlight = 0

  override protected def requestStrategy = new MaxInFlightRequestStrategy(maxInFlight) {
    override def inFlightInternally = inFlight
  }

  override def receive = {
    case OnNext(msg: Message) ⇒
      target ! msg
      inFlight += 1
    case OnError(t) ⇒
      inFlight -= 1
      log.error(t, "Subscriber encountered error")
    case TargetAck(_) ⇒
      inFlight -= 1
  }
}
Problem:
Messages that are under the max frame length do not flow after this exception for that incoming connection. Killing the client and re-running it works fine.
ActorSubscriber does not honor supervision
What is the correct way to skip the bad message and continue with the next good message ?
Have you tried putting supervision on the targetSink flow instead of on the whole materializer? I don't see it anywhere here, and I believe it should be set on that flow directly.
Still, this is more a guess than science ;)
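For reference, a rough sketch of what that suggestion could look like, attaching a supervision attribute directly to the framing flow (the ActorAttributes/Supervision names follow later Akka Streams releases and may differ slightly in akka-stream-experimental 1.0; whether the framing stage actually resumes past a FramingException would still need to be verified):

import akka.stream.io.Framing
import akka.stream.{ActorAttributes, Supervision}

// resume (skip the offending element) on framing errors, stop on anything else
val framingDecider: Supervision.Decider = {
  case _: Framing.FramingException => Supervision.Resume
  case _                           => Supervision.Stop
}

val targetSink = Flow[ByteString]
  .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
  .map(raw => Message(raw))
  .withAttributes(ActorAttributes.supervisionStrategy(framingDecider))
  .to(Sink(targetSubscriber))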
I had the same exception reading from a file, and for me it was solved by adding a newline after the last line.

play-framework 2.1.2-java: thread pool configuration and AsyncResult

We have a typical web-service which serves JSON data read from a remote database. I was trying out returning Result and AsyncResult, each with the following configuration:
play {
  akka {
    event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
    loglevel = WARNING
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-factor = 1.0
          parallelism-max = 1
        }
      }
    }
  }
}
and one with
parallelism-factor = 1.0
parallelism-max = 5
The following observations give the time taken to complete 500 requests (average of 5 readings):
1. parallelism-max=1 and parallelism-factor=1.0
Result :
Completion time = 291662 ms.
AsyncResult:
Completion time = 55601 ms
2. parallelism-max=5 and parallelism-factor=1.0
Result :
Completion time = 46419 ms.
AsyncResult:
Completion time = 46977 ms
We can see that with parallelism-max=1, AsyncResult takes much less time compared to Result. However, with parallelism-max=5, Result and AsyncResult give very similar timings.
Shouldn't the time required decrease as the number of threads increases for AsyncResult as well?
Any help in understanding the reasons behind this observation would be appreciated.

How to resend from Dead Letter Queue using Redis MQ?

Just spent my first few hours looking at Redis and Redis MQ.
Slowly getting the hang of Redis and was wondering how you could resend a message that is in a dead letter queue?
Also, where are the configuration options which determine how many times a message is retried before it goes into the dead letter queue?
Currently, there's no way to automatically resend messages in the dead letter queue in ServiceStack. However, you can do this manually relatively easily.
You can reload messages from the dead letter queue by using:
public class AppHost {
    public override void Configure(Container container) {
        // create the hostMq ...
        var hostMq = new RedisMqHost(clients, retryCount: 2);
        // with retryCount = 2, 3 total attempts are made: the 1st attempt + 2 retries

        // before starting hostMq
        RecoverDLQMessages<TheMessage>(hostMq);

        // add handlers
        hostMq.RegisterHandler<TheMessage>(m =>
            this.ServiceController.ExecuteMessage(m));

        // start hostMq
        hostMq.Start();
    }
}
Which ultimately uses the following to recover (requeue) messages:
private void RecoverDLQMessages<T>(RedisMqHost hostMq)
{
    var client = hostMq.CreateMessageQueueClient();
    var errorQueue = QueueNames<T>.Dlq;
    log.Info("Recovering Dead Messages from: {0}", errorQueue);
    var recovered = 0;
    byte[] msgBytes;
    while ((msgBytes = client.Get(errorQueue, TimeSpan.FromSeconds(1))) != null)
    {
        var msg = msgBytes.ToMessage<T>();
        msg.RetryAttempts = 0;
        client.Publish(msg);
        recovered++;
    }
    log.Info("Recovered {0} from {1}", recovered, errorQueue);
}
Note
At the time of this writing, there's a possibility of ServiceStack losing messages. Please see Issue 229 here; don't kill the process while it's moving messages from the DLQ (dead letter queue) back to the input queue. Under the hood, ServiceStack is POPing messages from Redis.