Load distribution among multiple clients with Redis

We are using Redis as a queue for asynchronous processing of jobs. One application pushes jobs to Redis (LPUSH), and another application reads from the Redis queue (BLPOP) and processes them. We wanted to scale the processing application, so we ran two instances on two different machines to process jobs from the queue, but we observed that one instance takes about 70% of the load from the queue while the other processes only a meagre amount. Is there any well-defined strategy or configuration for using multiple clients with Redis with proper load sharing? Or do we have to maintain separate queues for the two instances and push requests to them in a round-robin manner?
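For reference, here is a minimal sketch of the setup described above using the Python redis-py client; the connection details and the handle() function are placeholders, not from the original setup:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)  # assumed connection details

def enqueue(job: dict) -> None:
    """Producer side: push a job onto the shared list."""
    r.lpush("jobs", json.dumps(job))

def handle(job: dict) -> None:
    print("processing", job)  # placeholder for the real processing logic

def worker() -> None:
    """Consumer side: run one copy of this loop per processing instance."""
    while True:
        # BLPOP blocks until an item exists; when several consumers are blocked
        # on the same key, each popped item is delivered to exactly one of them.
        # (LPUSH + BRPOP would give FIFO order; LPUSH + BLPOP is LIFO.)
        _key, payload = r.blpop("jobs")
        handle(json.loads(payload))
```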

Related

Multiple service instances using Hangfire (shared tasks/objects), is it possible?

I need to run multiple instances of the same service, with the same database, for redundancy reasons.
I found some questions about "Hangfire multiple instances", but for a different purpose than mine: usually about running multiple instances for different tasks on the same database, or similar to this.
I need to know if there are concurrency problems when 2 or more instances of Hangfire use the same database (we want to use MongoDB), and if this is the way to make the service resilient.
The goal is to have one instance take over all the jobs when another instance goes down.
Any suggestions welcome for covering this scenario.
In our environment, we have a replica set used by about 10 Hangfire servers. If there are multiple Hangfire servers servicing the same queue, they share the load: whichever Hangfire server checks the queue first picks up the job and continues. If you remove all but one server, the jobs will continue (as long as there are enough workers; otherwise they will remain queued until a worker is available).
To answer your question: yes, you can have 2 or more Hangfire servers using the same MongoDB. MongoDB provides multi-threading support, so it's safe to have several servers accessing the same database backend. If you have two servers, both will be active, and if one instance goes offline, the other instance (based on its queues) will continue to process the jobs in the queue.
Keep in mind that Hangfire servers process jobs from specific queues. If both servers are part of the same queue, then you are load balancing the jobs between the two servers. If they are part of different queues, then you have the scenario where each Hangfire instance processes different jobs (because they watch different queues).
Read about configuring Job Queues here

Why does celery need a message broker?

As Celery is a job queue/task queue, the name suggests that it can maintain its tasks and process them itself. Then why does it need a message broker like RabbitMQ or Redis?
Celery is a Distributed Task Queue, meaning the system can reside across multiple computers (containers) in multiple locations, with a single centralised bus.
The basic architecture is as follows:
workers - processes that take jobs (data) from the bus (task queue) and process them; a worker can also put its result back on the bus for further processing by a different worker (creating a processing flow).
bus - the task queue. This is basically a database that stores the jobs as messages so the workers can retrieve them. It's important to use a concurrent, non-blocking store, so that one process taking or putting a job on the bus doesn't block other workers from getting or putting theirs. RabbitMQ, Redis, ActiveMQ, Kafka and the like are the usual candidates for this sort of behaviour.
The bus has an API that lets you submit jobs for workers and retrieve them (among more complex features).
Most buses implement an ack/fail feature, so a worker can acknowledge that its job is done; if a job is not acknowledged (or a failure is reported), the message can be served again to another worker and might be processed successfully this time, so no data is lost (this depends heavily on the failover logic and on the context of the data that feeds the task).
Celery also includes a scheduler (beat) that periodically puts specific jobs on the bus, which is how periodic tasks are created.
Let's take a scraping example: you want to scrape the whole world, but China only allows traffic from its own region, and the same goes for Europe and the USA.
So you build workers and place them all over the world.
You can use only one bus; let's say it's located in the USA. All the other workers know this bus and can connect to it, so by placing a specific job ("scrape China") on the bus located in the US, a process in China can work on it. Hence: distributed.
Of course, workers also increase the throughput of the system purely through parallelism, regardless of their geographic location, and this is the common case for an event-driven architecture (i.e. a central bus with consumers and producers).
I suggest reading the official docs; it's pretty straightforward.
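To make the pieces concrete, here is a minimal sketch, assuming a Redis broker on localhost and a module saved as scraper.py; the scrape() task and the schedule are made up for illustration:

```python
from celery import Celery

# The "bus": a broker URL pointing at Redis (RabbitMQ would work the same way).
app = Celery("scraper",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task
def scrape(region: str) -> str:
    # Any worker connected to the broker can pick this job up and run it,
    # regardless of which machine or region it lives on.
    return f"scraped {region}"

# The beat scheduler periodically puts a job on the bus (a periodic task).
app.conf.beat_schedule = {
    "scrape-china-hourly": {
        "task": "scraper.scrape",
        "schedule": 3600.0,
        "args": ("china",),
    },
}
```

Calling scrape.delay("china") only puts a message on the broker; whichever worker was started with celery -A scraper worker pulls it off and executes it.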

Is it possible to define priorities for Celery workers consuming from the same queue?

I have two machines on my network running Celery workers that process tasks from a common queue (the messaging back-end is RabbitMQ).
One machine is much more powerful and processes the tasks faster (which is important). If there is only one task in the queue, I always want it to run on this machine. If the queue is full, I want the less powerful machine to start accepting tasks as well.
Is there a recommended, elegant way to do this? Or do I have to set up two queues ("fast" and "slow") and implement some kind of router that sends tasks to the "slow" queue only when the "fast" queue is full?
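For the two-queue idea mentioned in the question, a Celery (4.x-style) router could be prototyped roughly like this; fast_queue_depth() and the threshold are placeholders you would have to implement against your broker (e.g. via the RabbitMQ management API):

```python
from celery import Celery

app = Celery("tasks", broker="amqp://guest@localhost//")  # assumed broker URL

FAST_QUEUE_LIMIT = 100  # illustrative threshold, not from the original post

def fast_queue_depth() -> int:
    # Placeholder: the real implementation would ask the broker how many
    # messages are currently sitting in the "fast" queue.
    return 0

def route_task(name, args, kwargs, options, task=None, **kw):
    # Send work to "fast" until it backs up, then spill over to "slow".
    if fast_queue_depth() < FAST_QUEUE_LIMIT:
        return {"queue": "fast"}
    return {"queue": "slow"}

app.conf.task_routes = (route_task,)
```

The powerful machine would then run a worker with -Q fast (or -Q fast,slow), and the weaker machine one with -Q slow, so it only sees tasks once the fast queue has overflowed.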

Consume objects from S3 balancing between Mule servers

Scenario is this:
S3 Bucket full of csv files with hundreds of formatted lines each.
N number of Mule servers. Clustered or not. Both options available.
One single Mule flow installed on every Mule server.
The Mule flow behavior is simple: it polls S3 to lazily fetch the available files, retrieves each file's contents, transforms the CSV lines into SQL statements and inserts them into the DB.
Problems:
The flows on all the different Mule servers successfully poll S3, retrieve the files, process them and insert into the DB, so files and the records inside them are processed several times.
Wish List:
load balancing is done between all active servers.
the flows installed on the different Mule servers stay identical (we don't modify the flow to fetch different files).
files, and the records inside them, are not processed twice.
Failed Approach:
We tried a processed/not-processed mechanism common to all Mule servers, in clustered mode. We used Mule 3.5's Object Store to keep a list of the files that have been processed, visible to all servers. The problem is that we are not balancing: all the workload ends up on one server and the rest are idle almost all the time.
Questions:
Which would be the best architecture design if we want load balancing?
Maybe we need a specific Mule app to do the S3 file download, and let this app divide the workload equally between the Mule servers?
Configure your S3 bucket to push events to an SQS queue (see here), and have your Mule servers pull events from that queue instead of polling S3. This way, each event will be pulled by only one worker.
It works as follows: in each worker, you repeatedly call ReceiveMessage() to get the next message in the queue. Once a worker gets a message, that message becomes invisible to other workers for a certain amount of time (which you can control via setVisibilityTimeout()). After a worker processes a message, it should call deleteMessage() to remove it completely from the queue. If the worker fails, deleteMessage() is not called, so after the visibility timeout period another worker will pick up that message.
In other words, the Queue in SQS doesn't deal with distributing the work. The workers pull messages from the queue when they are ready, and this is what creates the load balancing.
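The answer names the SDK calls generically; as a sketch, the same loop in Python with boto3 might look like this (the queue URL, region and the process() body are placeholders):

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # assumed region
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/s3-events"  # placeholder

def process(body: str) -> None:
    print("handling S3 event:", body)  # parse the event, fetch the CSV, insert rows

def worker_loop() -> None:
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,      # long polling
            VisibilityTimeout=300,   # message hidden from other workers meanwhile
        )
        for msg in resp.get("Messages", []):
            try:
                process(msg["Body"])
            except Exception:
                # Don't delete: after the visibility timeout expires the
                # message reappears and another worker can retry it.
                continue
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg["ReceiptHandle"])
```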

Load Balancing job queue among disproportionate workers

I'm working on a tool to automatically manage a job queue (in this case, Beanstalkd). Currently, you must manually set the number of workers available to pull jobs off the queue, but that either fails to absorb spikes in jobs or wastes resources during quiet periods.
I have a client/server setup that runs on the job queue server and on the workers. The client connects to the server and reports its available resources (CPU/memory) as well as which types of jobs it can run. The server then monitors the queues and, once a second, dictates to the connected clients how many workers to run to process each queue. There are currently a hundred or so different worker types, they all use very different amounts of CPU/memory, and the worker servers themselves have different levels of performance.
I'm looking for techniques to balance the workload most effectively based on job queue length and the resource requirements of each worker type - for example, some workers use 100% of a core for 5 s, while others take microseconds to complete. Also, some jobs are higher priority than others.
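There is no answer in the thread, but as an illustration of the kind of allocation the server could compute each second, here is a hedged sketch of a greedy heuristic; the field names and the packing rule are assumptions, not part of the original tool:

```python
from dataclasses import dataclass

@dataclass
class WorkerType:
    name: str
    cpu_cost: float      # cores one worker of this type consumes
    mem_cost: float      # GB one worker of this type consumes
    priority: int        # higher = schedule first
    queue_length: int    # current depth of its Beanstalkd tube

@dataclass
class Client:
    name: str
    free_cpu: float
    free_mem: float

def allocate(worker_types: list[WorkerType], clients: list[Client]) -> dict:
    """Greedy pass: highest-priority, longest queues first, each worker
    packed onto whichever client currently has the most spare CPU."""
    plan = {c.name: {} for c in clients}
    for w in sorted(worker_types,
                    key=lambda w: (w.priority, w.queue_length),
                    reverse=True):
        for _ in range(w.queue_length):
            fits = [c for c in clients
                    if c.free_cpu >= w.cpu_cost and c.free_mem >= w.mem_cost]
            if not fits:
                break  # out of capacity; leave the rest of this type queued
            c = max(fits, key=lambda c: c.free_cpu)
            c.free_cpu -= w.cpu_cost
            c.free_mem -= w.mem_cost
            plan[c.name][w.name] = plan[c.name].get(w.name, 0) + 1
    return plan
```

The point is only that queue depth, per-type resource cost and priority all feed the decision; a real balancer would also need to damp oscillation between one-second ticks.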