Akka custom load balancing

I'm using Akka 2.10. I have a scheduler that sends a batch of messages (crawl tasks) to worker routers (8 workers) every N seconds (e.g. 5 s).
I want to somehow track the load on the worker actors (e.g. the number of messages in each actor's mailbox) and, if it exceeds some threshold, slow the scheduler down (lengthen its period).
The question is: what kind of actor statistics can I use? Are there any built-in metrics?
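Akka does not readily expose mailbox sizes for the default (unbounded) mailboxes, so a common pattern is to have the workers report their own load and let the scheduler adapt. A minimal, framework-free sketch of that adaptive-period idea (all names here are illustrative, not Akka APIs):

```java
import java.time.Duration;

public class AdaptiveScheduler {
    private final int threshold;
    private final Duration minPeriod;
    private Duration period;

    public AdaptiveScheduler(Duration initialPeriod, int threshold) {
        this.period = initialPeriod;
        this.minPeriod = initialPeriod;
        this.threshold = threshold;
    }

    /** Called with the worst queue depth reported by the workers. */
    public void reportQueueDepth(int depth) {
        if (depth > threshold) {
            period = period.multipliedBy(2);   // back off: send less often
        } else if (period.compareTo(minPeriod) > 0) {
            period = period.dividedBy(2);      // recover toward the base rate
            if (period.compareTo(minPeriod) < 0) period = minPeriod;
        }
    }

    public Duration currentPeriod() { return period; }

    public static void main(String[] args) {
        AdaptiveScheduler s = new AdaptiveScheduler(Duration.ofSeconds(5), 100);
        s.reportQueueDepth(500);  // overloaded: 5s -> 10s
        s.reportQueueDepth(500);  // still overloaded: 10s -> 20s
        s.reportQueueDepth(10);   // drained: 20s -> 10s
        System.out.println(s.currentPeriod().getSeconds());
    }
}
```

In an actor system, `reportQueueDepth` would be driven by a periodic status message from each worker (or from a custom mailbox that exposes its size), and the scheduler would re-arm its timer with `currentPeriod()`.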

Related

Spring AMQP Rabbit: Global prefetch count

I have a setup of multiple queues to distribute different types of memory and time intensive tasks to workers.
While the worker instances are intended to listen to different subsets of the queues, they should only consume a limited amount of messages (usually 1) at a time per worker to prevent them from running out of memory. We are using a SimpleMessageListenerContainer for the queues with a concurrency and prefetch count of e.g. 1.
The problem is that the prefetch count seems to be set on the channel with the global option set to false, so it is not applied per channel, but per consumer / queue (see docs).
As a result, a single worker that processes a long running task blocks a message on every queue it listens to, while the other workers idle.
Because of the requirement that the workers can be configured for changing subsets of task types, we cannot route all tasks to a single queue or worker specific ones.
I couldn't find a way to change the QoS setting on the channel before the consumers are subscribed, because everything happens within the start method of the BlockingQueueConsumer.
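The effect being described can be made concrete with a little arithmetic model (plain Java, no AMQP involved): with `basicQos(1, global=false)` each consumer gets its own prefetch window, so a worker listening on Q queues can hold Q unacked messages at once; with `global=true` (which the raw Java client exposes as `Channel.basicQos(int prefetchCount, boolean global)`) the limit applies to the channel as a whole. This sketch only models the capacity, not the AMQP protocol:

```java
public class PrefetchModel {
    /** Max unacked messages one worker can hold on a single channel. */
    static int maxUnacked(int queues, int prefetch, boolean global) {
        // global=true: the limit applies to the channel as a whole;
        // global=false: each consumer (one per queue) gets its own window.
        return global ? prefetch : queues * prefetch;
    }

    public static void main(String[] args) {
        System.out.println(maxUnacked(5, 1, false)); // per-consumer: 5 messages blocked
        System.out.println(maxUnacked(5, 1, true));  // per-channel: only 1
    }
}
```

That is exactly the symptom above: a worker on 5 queues with per-consumer prefetch 1 pins one message on every queue while it grinds through a single long task.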

Why send rate is lower than configured rate in config.yaml (hyperledger caliper) even after use of only one client?

I configured the send rate at 500 TPS and I am using only one client, so the send rate should be around 500 TPS, but in the generated report the send rate is around 130-140 TPS. Why is there so much deviation?
I am using the Fabric CCP version of Caliper.
I expected a send rate around 450-480, but the actual send rate is around 130-140 TPS.
Node.js is a single-threaded runtime (async/await just means deferred execution, not parallel execution). Caliper runs a loop with the following steps:
Wait for the rate controller to enable the next TX.
Create an async operation in which the user module will call the blockchain adapter.
All of the pending TXs eat up some CPU time (when not waiting for I/O), and other operations are also scheduled (like sending updates about TXs to the master process).
To reach 500 TPS, the rate controller must enable a TX every 2 ms. That's not a lot of time. Try spawning more than one local client, so the load will be shared among them (100 TPS/client for 5 clients, 50 TPS/client for 10 clients, etc.).
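The arithmetic behind that advice can be made explicit with a small sketch (plain Java, nothing Caliper-specific) of the per-client time budget:

```java
public class RateBudget {
    /** Milliseconds between transactions for one client carrying its share of the total TPS. */
    static double msPerTx(int totalTps, int clients) {
        double tpsPerClient = (double) totalTps / clients;
        return 1000.0 / tpsPerClient;
    }

    public static void main(String[] args) {
        System.out.println(msPerTx(500, 1));   // 2.0 ms  -- very tight for a single event loop
        System.out.println(msPerTx(500, 5));   // 10.0 ms per client
        System.out.println(msPerTx(500, 10));  // 20.0 ms per client
    }
}
```

With one client, any callback or bookkeeping that takes longer than 2 ms immediately eats into the schedule, which is consistent with the observed ~130-140 TPS; spreading the load widens each client's budget.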

Why Redis Publish operation's time complexity is O(M+N)

As per Redis website, Time complexity of a publish is O(N+M).
Time complexity: O(N+M) where N is the number of clients subscribed to the receiving channel and M is the total number of subscribed patterns (by any client).
I might be wrong, but why can't it send the message to all the clients asynchronously using some async API (since writing a message to a network stream is an I/O-bound operation)?
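Even if every socket write were fully asynchronous, the server would still have to enumerate the N subscribers of the channel and test all M registered patterns against the channel name before any write can even be queued; that enumeration is where the O(N+M) comes from. A toy model of the dispatch loop (illustrative only, not Redis internals):

```java
import java.util.*;
import java.util.regex.Pattern;

public class PublishSketch {
    // channel -> direct subscribers (the N term for the published channel)
    static Map<String, List<String>> channels = new HashMap<>();
    // pattern -> subscribers; every registered pattern must be tested (the M term)
    static Map<String, List<String>> patterns = new HashMap<>();

    static int publish(String channel, String msg) {
        int receivers = 0;
        for (String client : channels.getOrDefault(channel, List.of()))
            receivers++;                          // O(N): one (possibly async) write per subscriber
        for (var e : patterns.entrySet())         // O(M): every pattern is matched against the channel
            if (globMatches(e.getKey(), channel))
                receivers += e.getValue().size();
        return receivers;
    }

    /** Tiny glob matcher: '*' matches any run of characters, everything else is literal. */
    static boolean globMatches(String glob, String s) {
        return Pattern.matches(Pattern.quote(glob).replace("*", "\\E.*\\Q"), s);
    }

    public static void main(String[] args) {
        channels.put("news.tech", List.of("c1", "c2"));
        patterns.put("news.*", List.of("c3"));
        patterns.put("sport.*", List.of("c4"));
        System.out.println(publish("news.tech", "hi")); // 3 receivers: c1, c2, c3
    }
}
```

Making the writes async would hide the I/O latency from the publisher, but the CPU work of walking both collections remains, so the stated complexity still holds.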

RabbitMQ: basic ack takes very long time and blocks publishing

I'm using the Java Client 3.5.6 for RabbitMQ.
My use case is this:
I have 10-15 Channels to one queue (mostly the same connection, one connection per channel makes no difference).
I consume them without autoAck. Every channel has a prefetch/QoS size of 5000. So let's just assume I have 30 channels, meaning I can get 150,000 messages.
Every full minute, I compute some things and, when successful, I use basicAck to acknowledge these messages.
However, during that phase the management web interface shows that 0 messages are delivered, which is not realistic unless those are somehow "blocked".
I'm using this queue on a 3-node cluster as an HA queue with a TTL of 1800 seconds. The nodes are connected via internal LAN and the machines are really powerful, with plenty of RAM.
My Question:
Why does this basicAck operation block the rest of the operations like publishing or delivering new messages?
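One thing worth checking (an assumption, not a diagnosis of the stall): acking 150,000 messages one by one sends 150,000 frames on channels that are simultaneously busy with deliveries. The Java client's `Channel.basicAck(long deliveryTag, boolean multiple)` can acknowledge a whole run of consecutive deliveries in a single frame when `multiple` is true. A broker-side model of what `multiple=true` does to the unacked set:

```java
import java.util.TreeSet;

public class AckModel {
    // Per-channel set of unacked delivery tags, as the broker tracks them.
    private final TreeSet<Long> unacked = new TreeSet<>();

    void deliver(long tag) { unacked.add(tag); }

    /** Models basicAck(deliveryTag, multiple=true): acks the tag and everything below it. */
    void ackMultiple(long tag) { unacked.headSet(tag, true).clear(); }

    int unackedCount() { return unacked.size(); }

    public static void main(String[] args) {
        AckModel ch = new AckModel();
        for (long t = 1; t <= 5000; t++) ch.deliver(t);
        ch.ackMultiple(5000);                  // one frame instead of 5000
        System.out.println(ch.unackedCount()); // 0
    }
}
```

Whether batched acks remove the blocking observed here is untested, but they cut the per-minute ack traffic from one frame per message to one frame per channel.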

How to limit throughput with RabbitMQ?

Where the question came from:
We are using RabbitMQ as a task queue. One of the specific tasks is sending notices to the Vkontakte social network. Their API has a limit on requests per second, and this limit is based on your application size: just 3 calls per second for an app with fewer than 100k users, and so on. So we need to artificially limit our requests to their service. Right now this logic is application-based. It's simple as long as you can use just one worker per such queue: add something like sleep(300ms) and be calm. But when you need N workers, this synchronization becomes non-trivial.
How to limit throughput with RabbitMQ?
Based on the story above: if it were possible to set the prefetch size not only message-based but also time-based, this logic could be much simpler. For example, "QoS of 1 message per fetch, no faster than once per second", or something similar.
Is there something like this?
Or maybe there is another strategy for this?
This is not possible out of the box with RabbitMQ.
You're right: with distributed consumers this throttling becomes a difficult exercise. I would suggest having a look at ZooKeeper, which would allow you to synchronize all consumers and throttle the processing of messages by leveraging its znodes/watches, for a throttled yet scalable solution.
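The usual building block for this, whatever coordinates it, is a token bucket: refill at the allowed rate, spend one token per API call, and make the worker requeue or wait when the bucket is empty. A per-worker sketch (plain Java; with N workers you would either give each worker rate/N, or keep the bucket state in a shared location such as a ZooKeeper znode so all consumers draw from one budget):

```java
public class TokenBucket {
    private final double ratePerSec;
    private final double capacity;
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(double ratePerSec, double capacity, long nowNanos) {
        this.ratePerSec = ratePerSec;
        this.capacity = capacity;
        this.tokens = capacity;
        this.lastRefillNanos = nowNanos;
    }

    /** Returns true if a call may proceed now; otherwise the worker should wait or requeue. */
    public synchronized boolean tryAcquire(long nowNanos) {
        double elapsedSec = (nowNanos - lastRefillNanos) / 1e9;
        tokens = Math.min(capacity, tokens + elapsedSec * ratePerSec);
        lastRefillNanos = nowNanos;
        if (tokens >= 1.0) { tokens -= 1.0; return true; }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket b = new TokenBucket(3, 3, 0);         // VK-style limit: 3 calls/sec
        int ok = 0;
        for (int i = 0; i < 10; i++) if (b.tryAcquire(0)) ok++;
        System.out.println(ok);                            // 3: burst is capped at capacity
        System.out.println(b.tryAcquire(1_000_000_000L));  // true: refilled after 1 second
    }
}
```

In production you would pass `System.nanoTime()` instead of fixed timestamps; taking the clock as a parameter just makes the sketch deterministic and testable.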