I need to run multiple instances of the same service, against the same database, for redundancy reasons.
I found some questions about "Hangfire multiple instances", but for a different purpose than mine: they are usually about running multiple instances for different tasks on the same database, or similar to this.
I need to know if there are concurrency problems when 2 or more instances of Hangfire use the same database (we want to use MongoDB), and if this is the right way to make the service resilient.
The goal is to have one instance take over all the jobs when another instance goes down.
Any suggestions for covering this scenario are welcome.
In our environment, we have a replica set used by about 10 Hangfire servers. If multiple Hangfire servers service the same queue, they share the load, and whichever Hangfire server checks the queue first picks up the job and continues. If you remove all but 1 server, the jobs will continue (as long as there are enough workers; otherwise they will remain queued until a worker is available).
To answer your question: yes, you can have 2 or more Hangfire servers using the same MongoDB database. Hangfire claims each job with an atomic fetch (the storage layer takes a lock on the job), so it is safe to have several servers working against the same database backend. If you have two servers, both will be active, and if one instance goes offline, the other instance (provided it serves the same queues) will continue to process the jobs in the queue.
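To make the safety argument concrete, the claim pattern looks roughly like this. This is a Java illustration with made-up collection and field names, not Hangfire's actual code: the point is that the read and the status flip happen in one atomic operation, so two servers can never claim the same job.

```java
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.set;

// Illustration only: each server claims the next job with a single atomic
// findOneAndUpdate, so a job document is handed to at most one server.
public class AtomicFetchSketch {
    public static void main(String[] args) {
        MongoCollection<Document> jobs = MongoClients.create("mongodb://localhost")
                .getDatabase("hangfire").getCollection("jobs"); // assumed names

        // Claim the next enqueued job; returns null when the queue is empty.
        Document claimed = jobs.findOneAndUpdate(
                eq("status", "enqueued"),
                set("status", "processing"));
        if (claimed != null) {
            System.out.println("this server won job " + claimed.get("_id"));
        }
    }
}
```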
Keep in mind that Hangfire servers process jobs from specific queues. If both servers listen to the same queue, you are load-balancing the jobs between the two servers. If they listen to different queues, you get the scenario you read about, where each Hangfire instance processes different jobs (because they are part of different queues).
Read about configuring Job Queues here
Related
I have a distributed system of producers and consumers across several servers, with redundant nodes—both for failover and load-balancing. The nodes communicate via RabbitMQ messages.
Each producer runs its own scheduler to invoke jobs, which one of the consumers should run. This works by publishing the appropriate RabbitMQ message, which one of the consumers will then process.
Now, the tricky part is, each job should be run only once. In short, my requirements are:
Only one invoke message per scheduled job should be processed (by any of the consumer instances)
If any of the producers goes down, the job should still be invoked by the other instances
I can't figure out how to implement this without relying on anything but RabbitMQ. I could make it work if there were such a thing as an "exclusive exchange", which only one producer could connect to at a time. I thought about making the consumers ignore duplicate invokes for the same job, but this will not work: due to the load-balancing, subsequent messages may be received by any of the other instances. Another idea was to implement a mechanism that declares one of the producers the "principal" node, so only this one is allowed to send invokes, but this basically presents the same problem of coordinating between instances.
Any ideas? Thanks in advance.
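One way to approximate the "principal node" idea with nothing but RabbitMQ is an exclusive consumer used as a leadership lock. A rough Java sketch (the queue name scheduler.leader is made up for this example):

```java
import com.rabbitmq.client.*;

// Every producer tries to attach as the EXCLUSIVE consumer of a well-known
// queue. The broker admits only one exclusive consumer, so exactly one
// producer wins; the others retry, and one of them takes over as soon as
// the leader's connection drops.
public class SchedulerLeaderElection {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: broker address
        Connection conn = factory.newConnection();

        while (true) {
            Channel channel = conn.createChannel();
            try {
                channel.queueDeclare("scheduler.leader", true, false, false, null);
                // 5th argument = exclusive: a second exclusive consumer on
                // the same queue is refused by the broker (ACCESS_REFUSED).
                channel.basicConsume("scheduler.leader", true, "", false, true,
                        null, new DefaultConsumer(channel));
                System.out.println("won leadership; running the scheduler");
                runScheduler(); // only the leader publishes job invokes
                return;
            } catch (java.io.IOException refused) {
                // Another producer holds leadership; pause and try again.
                Thread.sleep(5000);
            }
        }
    }

    // Placeholder for the real scheduler loop that publishes one invoke
    // message per job to the shared work queue.
    private static void runScheduler() throws InterruptedException {
        Thread.currentThread().join(); // block forever in this sketch
    }
}
```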
Can someone clarify the advantages of using RabbitMQ (a message queue) instead of Delayed Job (background processing)?
Basically, I want to know when to use background processing and when to use a messaging queue.
My web application has 3 components: one main server which handles all user requests, and 2 app servers where all the background jobs (like ES reindexing, ES record updates, sending emails, crons) should run.
I saw articles which say that using the database as a queue (Delayed Job) is very bad, because the consumers will be polling the database for new jobs and updating job statuses, which will lock the tables. How do RabbitMQ or other message queues store jobs so as to avoid this problem?
There are other alternatives to Delayed Job, like Sidekiq, which runs over Redis instead of MySQL. Is it better to use Sidekiq instead of RabbitMQ?
And are there any advantages of using Sidekiq over Delayed Job?
You have 2 workers and 1 web server: I guess your web app dispatches some delayed jobs to your workers, so you need a way to store the data related to those background jobs.
For that, you can use either a database (like Redis, which is what Sidekiq does) or a message queue (like RabbitMQ). A message queue is a specialized system that is very efficient for this use case (allowing a much higher throughput). A database gives you better introspection (you can query the jobs table to see what your current situation is), while the queuing system is more efficient but is also more of a black box and will require new skills.
If you do not have performance issues, the simpler the better; even a simple MySQL database should be enough. If you want a more powerful system or need a lot of monitoring, you can also consider a specialized hosted service such as Zenaton (I'm a founder) that does all the heavy lifting for you, including scheduling or more sophisticated orchestration of your background jobs.
Both perform the same task, i.e. executing jobs in the background, but they go about it differently.
With Delayed Job, one uses a database for storage, queries it for jobs, and then processes them. It's simple to set up, but the performance and scalability aren't great.
RabbitMQ or its alternatives (Redis etc.) are harder to set up, but their performance, flexibility and scalability are great; we are talking upwards of 5,000 jobs per second, and besides, you tend to use less code.
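To make the polling point from the question concrete, here is a minimal RabbitMQ sketch in Java (local broker, made-up queue name "jobs"). The broker pushes messages to a registered callback, which is why there is no jobs-table polling and no status-update locking:

```java
import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

// Push-based delivery: the broker invokes the callback as messages arrive,
// in contrast with Delayed Job workers repeatedly querying a jobs table.
public class PushVsPoll {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection()) {
            Channel channel = conn.createChannel();
            channel.queueDeclare("jobs", true, false, false, null);

            // Producer side: the web server enqueues a background job.
            channel.basicPublish("", "jobs", null,
                    "reindex:article:42".getBytes(StandardCharsets.UTF_8));

            // Consumer side: no polling loop, no status UPDATE query.
            DeliverCallback onMessage = (tag, delivery) -> System.out.println(
                    "processing " + new String(delivery.getBody(), StandardCharsets.UTF_8));
            channel.basicConsume("jobs", true, onMessage, tag -> { });

            Thread.sleep(1000); // let the demo callback fire before exiting
        }
    }
}
```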
Another option is to use a task orchestration system like Cadence Workflow. It supports both delayed execution and queueing, but provides a higher-level programming model and tons of features that neither queues nor delayed-execution frameworks offer.
Cadence offers a lot of advantages over using queues for task processing.
Built-in exponential retries with an unlimited expiration interval (see the sketch after this list)
Failure handling. For example, it allows executing a task that notifies another service if an update couldn't succeed within a configured interval.
Support for long-running, heartbeating operations
Ability to implement complex task dependencies, for example chaining of calls or compensation logic in case of unrecoverable failures (SAGA)
Gives complete visibility into the current state of the update. For example, when using queues, all you know is whether there are some messages in a queue, and you need an additional DB to track the overall progress. With Cadence, every event is recorded.
Ability to cancel an update in flight.
Built-in distributed CRON
See the presentation that goes over the Cadence programming model.
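For a feel of the programming model, here is a rough sketch with the cadence-java-client; the interface and method names below are made up for illustration, only the Cadence option classes are real. The exponential retry policy from the first bullet is declared right on the activity stub, instead of living in ad-hoc requeue logic around a broker:

```java
import com.uber.cadence.activity.ActivityOptions;
import com.uber.cadence.common.RetryOptions;
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;
import java.time.Duration;

public class CadenceSketch {

    public interface ReindexWorkflow {
        @WorkflowMethod
        void reindex(String recordId);
    }

    public interface ReindexActivities {
        void updateSearchIndex(String recordId); // hypothetical activity
    }

    public static class ReindexWorkflowImpl implements ReindexWorkflow {
        private final ReindexActivities activities = Workflow.newActivityStub(
                ReindexActivities.class,
                new ActivityOptions.Builder()
                        .setScheduleToCloseTimeout(Duration.ofMinutes(10))
                        .setRetryOptions(new RetryOptions.Builder()
                                .setInitialInterval(Duration.ofSeconds(1))
                                .setBackoffCoefficient(2.0) // exponential retries
                                .setMaximumAttempts(100)
                                .build())
                        .build());

        @Override
        public void reindex(String recordId) {
            // Retried with exponential backoff by the Cadence service; every
            // attempt is recorded in the workflow history for visibility.
            activities.updateSearchIndex(recordId);
        }
    }
}
```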
I am making one session per connection per thread to an ActiveMQ cluster, but I want to consume from hundreds of destinations. I understand that if I only have one thread (one session), I can't consume messages from these destinations concurrently. I don't want to do that either. But I want to have hundreds of consumers per session, each associated with one of hundreds of different destinations. Is this a viable approach? Please also provide the reasons for its viability or non-viability.
PS: I don't want to do any heavy processing on the messages, which is why I use only 1 thread.
A session is not bound to a single thread; threading is a separate concern. You can use a session in multiple threads (not recommended, since sessions are not designed for concurrent use) and multiple sessions in a single thread. The session construct is mostly there to control transactions, i.e. to commit and roll back messages in a transaction.
Anyway, you can use a single consumer to read from multiple destinations. Simply put the destinations in a comma-separated list, like "my.first.queue,my.other.queue,my.last.queue". You can also read queues using wildcards: "my.>" would match all the queues above.
This way, you can use a single thread and a single session to read from a large number of queues.
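A rough sketch of that setup in Java (assuming a local broker; the queue names are the ones from the example above):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

// One connection, one session, one thread: messages from many queues are
// delivered through a single listener on the session's dispatch thread.
public class MultiQueueConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Composite destination: one consumer receives from all three queues.
        Destination composite =
                session.createQueue("my.first.queue,my.other.queue,my.last.queue");
        // Wildcard alternative covering the same queues: session.createQueue("my.>")

        MessageConsumer consumer = session.createConsumer(composite);
        consumer.setMessageListener(message -> {
            // All messages, regardless of source queue, arrive here in turn.
            try {
                System.out.println("got: " + ((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    }
}
```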
Does the WebLogic WorkManager have the ability to execute jobs on other servers in the cluster, to effectively parallelize jobs?
There are two Work Managers: one on the server side that handles thread prioritization/queueing, and the CommonJ Work Manager that can be used through the CommonJ API.
Within your application, you can define priorities within the container and also pursue parallel execution on the same server. However, if you are looking to process workload in parallel across multiple servers by having a single application server splitting up its current workload and redistributing it across the cluster, the bulk of the logic will have to be written into your application.
WebLogic does provide other mechanisms to make this easier (for example, you could have a primary node process the workload into units of work and put them on a durable distributed topic that the other servers read from), but it would be easier to use an existing product, such as Terracotta's Ehcache or a compute cluster on Oracle's Coherence grid.
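For the same-server parallel execution part, a minimal CommonJ sketch (the JNDI name wm/MyWorkManager is hypothetical and must be declared in your deployment descriptors for the lookup to resolve):

```java
import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkManager;

// Fan out units of work onto container-managed threads via the CommonJ API.
public class ParallelUnits {
    public static void runUnits() throws Exception {
        InitialContext ctx = new InitialContext();
        WorkManager wm =
                (WorkManager) ctx.lookup("java:comp/env/wm/MyWorkManager");
        for (int i = 0; i < 4; i++) {
            final int unit = i;
            wm.schedule(new Work() {                  // queued on container threads
                public void run() { process(unit); }  // one unit of work
                public boolean isDaemon() { return false; }
                public void release() { /* container asked us to stop early */ }
            });
        }
    }

    private static void process(int unit) {
        System.out.println("processing unit " + unit);
    }
}
```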
We are using Redis as a queue for asynchronous processing of jobs. One application pushes jobs to Redis (LPUSH), and another application reads the Redis queue (BLPOP) and processes them. We wanted to scale the processing application, so we ran two instances on 2 different machines to process the jobs from the queue, but we observed that one instance takes about 70% of the load from the queue while the other processes only a meagre amount. Is there any well-defined strategy or configuration for using multiple clients with Redis with proper load sharing? Or do we have to maintain separate queues for the two instances and push requests in a round-robin manner?
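For reference, the worker side of the pattern being described looks like this in Java with Jedis (the key name "jobs" is assumed). BLPOP pops are atomic, so two workers never receive the same job; how evenly they share the load then depends mostly on how quickly each worker loops back to BLPOP after finishing a job.

```java
import redis.clients.jedis.Jedis;
import java.util.List;

// Each worker instance runs this loop; Redis hands every popped job to
// exactly one of the blocked clients.
public class RedisWorker {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            while (true) {
                // Blocks until a job arrives; timeout 0 = wait forever.
                // The result is a [key, value] pair.
                List<String> popped = jedis.blpop(0, "jobs");
                process(popped.get(1));
            }
        }
    }

    private static void process(String job) {
        System.out.println("processing " + job);
    }
}
```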