Hangfire recurring job

I have the following scenario:
I have two Hangfire server instances, serverA and serverB, connecting to the same Hangfire database.
serverA adds a recurring job to a queue called "recurring", which is then picked up and executed by serverB. I don't want serverB to execute the recurring job. Is this by design?
When I instantiate serverB, I explicitly initialize the "QueueNames" property to NOT include "recurring", but serverB still seems to pick this job up.
Am I missing something here? Any help is appreciated. Thanks.
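
For reference, a minimal sketch of the setup described above, assuming the queue list is configured through Hangfire's BackgroundJobServerOptions.Queues and the 1.x AddOrUpdate overload that accepts a queue parameter (server names, job, and schedule are illustrative):

```csharp
using System;
using Hangfire;

// serverA: listens on the "recurring" queue (plus "default") and registers the job.
var serverAOptions = new BackgroundJobServerOptions
{
    ServerName = "serverA",
    Queues = new[] { "recurring", "default" }
};

// Illustrative registration; the actual job and schedule are placeholders.
RecurringJob.AddOrUpdate(
    "sample-recurring-job",
    () => Console.WriteLine("recurring work"),
    Cron.Hourly(),
    queue: "recurring");

// serverB: deliberately leaves "recurring" out of its queue list.
var serverBOptions = new BackgroundJobServerOptions
{
    ServerName = "serverB",
    Queues = new[] { "default" }
};
```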

Related

Run a Job in ALL distributed server instances

Today we are using Hangfire to distribute the load between different servers, and it works fine with BackgroundJob.Enqueue().
But I need a way to run a job on all Hangfire servers, such as the following:
BackgroundJob.Enqueue(() => refreshCachedVariables());
Is there a way to do this?
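
One commonly described pattern for this (sketched here as an assumption, not a confirmed answer) is to give each server its own queue and enqueue the same job once per server queue:

```csharp
using Hangfire;
using Hangfire.States;

public static class CacheRefreshBroadcast
{
    // Hypothetical per-server queue names; each server would list its own queue
    // in BackgroundJobServerOptions.Queues (Hangfire queue names must be lowercase).
    private static readonly string[] ServerQueues = { "server_a", "server_b" };

    public static void RefreshEverywhere()
    {
        var client = new BackgroundJobClient();
        foreach (var queue in ServerQueues)
        {
            // Create the job directly in the target server's queue so that
            // exactly that server picks it up.
            client.Create(() => RefreshCachedVariables(), new EnqueuedState(queue));
        }
    }

    public static void RefreshCachedVariables()
    {
        // the cache-refresh logic from the question would go here
    }
}
```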

Hangfire jobs stuck in processing state

We are using Hangfire 1.7.6 to download data from Azure. However, after running for some time, Hangfire hits a deadlock and seems to get stuck processing the job. We had to restart the service to keep it working.
There is a recurring job which adds jobs for the other background server. Mostly the jobs get stuck when they are downloading a big file.
Has anyone faced this kind of problem of Hangfire jobs getting stuck in the processing state?
Please let me know if any further information is required. Any help/guidance is appreciated.
Is this not caused by the length of time it takes to complete the download from Azure?
You could try testing this with large files and see how it handles them.
Also, as #jbl asked, how is your Hangfire server hosted? If it is hosted in IIS, remember that the Hangfire server may lose its heartbeat if IIS shuts down the application process after it has been idle for a given period of time.
I came across this issue in the past and ended up running the application as a process on the server.
IIS is optimised to save resources, so it will shut down processes that aren't being used. When a request is made to your application, it fires the process back up. This also causes any scheduled background jobs not to fire.
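
As an illustration of that workaround, here is a minimal sketch of hosting the Hangfire server in a plain console (or Windows service) process instead of IIS; the connection string and SQL Server storage are assumptions:

```csharp
using System;
using Hangfire;

class Program
{
    static void Main()
    {
        // Assumed SQL Server storage; use whatever storage the application already uses.
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("Server=.;Database=Hangfire;Integrated Security=True;");

        // A long-lived console or service process is not subject to IIS idle shutdown.
        using (new BackgroundJobServer())
        {
            Console.WriteLine("Hangfire server started. Press ENTER to exit.");
            Console.ReadLine();
        }
    }
}
```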

Amazon EMR managing my spark cluster

I have a Spark setup on Amazon EC2 machines with two worker machines running. It reads data from Cassandra, does some processing, and writes to SQL Server. I have heard about Amazon EMR and read about it. I want a managed system where worker machines are automatically added to my cluster if my job is taking more time and are shut down when my job completes.
Can I achieve this through Amazon EMR?
The requirements are:
1. My worker machines are automatically added to my cluster if my job is taking more time.
2. The cluster shuts down when my job completes.
No. 2 is definitely possible if your job is launched from steps. There is an option that auto-terminates the cluster after the last step is completed. Alternatively, this could also be done programmatically with the SDK.
No. 1 is a little more difficult, but EMR has three classes of nodes: master, core, and task. Task nodes can be added after cluster creation. The trigger for that would probably have to be done programmatically or by using another Amazon service, such as Lambda.
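
A rough sketch of both ideas, assuming the AWS SDK for .NET's EMR client (the name, instance type, and counts are placeholders, and the request shape should be checked against the SDK version in use):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.ElasticMapReduce;
using Amazon.ElasticMapReduce.Model;

class EmrScalingSketch
{
    static async Task Main()
    {
        var emr = new AmazonElasticMapReduceClient();

        // No. 2: launch the cluster so that it terminates itself after the last step.
        var runResponse = await emr.RunJobFlowAsync(new RunJobFlowRequest
        {
            Name = "spark-cassandra-job",               // placeholder name
            Instances = new JobFlowInstancesConfig
            {
                KeepJobFlowAliveWhenNoSteps = false     // auto-terminate when steps finish
            }
            // release label, master/core instance groups and the Spark step omitted here
        });

        // No. 1: grow the cluster later by adding a TASK instance group.
        await emr.AddInstanceGroupsAsync(new AddInstanceGroupsRequest
        {
            JobFlowId = runResponse.JobFlowId,
            InstanceGroups = new List<InstanceGroupConfig>
            {
                new InstanceGroupConfig
                {
                    InstanceRole = InstanceRoleType.TASK,
                    InstanceType = "m5.xlarge",         // placeholder instance type
                    InstanceCount = 2                   // placeholder count
                }
            }
        });
    }
}
```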

Multiple Brokers and Failover in ActiveMQ

I have two questions regarding ActiveMQ.
In my environment, I have set up three ActiveMQ brokers on three servers sharing one database. Is it possible for the three brokers on the three servers to share the same database? I tried to set it up, but it looks like three brokers cannot share the same database. Is that correct?
Also, I did some failover testing, and it looks like ActiveMQ cannot guarantee the message order. For example, I set up the three brokers on ServerA, ServerB, and ServerC, configured as failover servers. I then published MessageA and MessageB to ServerA and published MessageC to ServerB. When I shut down ServerA, only MessageC could be consumed. However, the consumed message order should be MessageA, MessageB, MessageC. I need to keep this message order even when ServerA is down. Is it possible to configure ActiveMQ to guarantee the message order for failover?
Thank you!
You can point all three brokers to the same DB. They will act as a master-slave failover arrangement: only one instance runs at a time, and the other two wait for the DB lock to take over.
If you follow #1, it will guarantee the order, but you'll be using one server at a time (with the centralized DB as storage).
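
On the client side, the failover transport lets consumers reconnect to whichever broker currently holds the DB lock. A minimal sketch, assuming the .NET NMS client (Apache.NMS.ActiveMQ); broker addresses and the queue name are placeholders:

```csharp
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class FailoverClientSketch
{
    static void Main()
    {
        // Failover URI listing all three brokers; only the current master accepts connections.
        IConnectionFactory factory = new ConnectionFactory(
            "failover:(tcp://serverA:61616,tcp://serverB:61616,tcp://serverC:61616)");

        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge))
        using (IMessageConsumer consumer = session.CreateConsumer(session.GetQueue("orders")))
        {
            connection.Start();
            // With a single active master, messages are delivered in the order they were stored.
            IMessage message = consumer.Receive();
        }
    }
}
```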

Amazon EMR: how to find out when a job is finished?

I'm using the Amazon Elastic MapReduce Ruby client (http://aws.amazon.com/developertools/2264) to run my Hive job. Is there a way to know when the job is done? Right now all I can think of is to keep running emrclient with "--list --active", but I'm hoping there is a better way to do this.
Thank you
You can also see this in the EMR section of the AWS console.
If your concern is to terminate the cluster once your job is done, then do not use the --stay-alive option when launching the cluster. Alternatively, you can have a script that polls the running cluster's current status and terminates it once it reaches the 'waiting' state.
I do not think there is another way.
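
For reference, a rough sketch of that polling-and-terminate idea, assuming the AWS SDK for .NET's EMR client (the cluster ID and polling interval are placeholders, and the exact types should be checked against the SDK version in use):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.ElasticMapReduce;
using Amazon.ElasticMapReduce.Model;

class ClusterWatcher
{
    static async Task Main()
    {
        var emr = new AmazonElasticMapReduceClient();
        const string clusterId = "j-XXXXXXXXXXXXX";   // placeholder cluster/job-flow id

        while (true)
        {
            var response = await emr.DescribeClusterAsync(
                new DescribeClusterRequest { ClusterId = clusterId });
            var state = response.Cluster.Status.State;
            Console.WriteLine($"Cluster state: {state}");

            // Once the steps are done the cluster goes back to WAITING; terminate it then.
            if (state.Value == ClusterState.WAITING.Value)
            {
                await emr.TerminateJobFlowsAsync(new TerminateJobFlowsRequest
                {
                    JobFlowIds = new List<string> { clusterId }
                });
                break;
            }

            await Task.Delay(TimeSpan.FromMinutes(1));   // placeholder polling interval
        }
    }
}
```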