Retry Hangfire failed jobs from SQL?

For security (or simplicity) reasons, our Hangfire dashboard UI is not available in production, so we can't just click "Retry". But I'd like to retry some failed jobs manually using SQL. In the DB there are stateid and statename fields. Is it possible to re-run a job by changing its stateid?
I got this idea from the Enqueue function (in SqlServerJobQueue.cs on GitHub), which is for SQL Server (I'm using PostgreSQL).
insert into hangfire.jobqueue (jobid, queue)
select id, 'default' from hangfire.job where statename = 'Failed';
But it doesn't seem to do anything.
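Not an authoritative answer, but here is a sketch of what the dashboard's "Retry" button roughly does at the storage level, assuming the default Hangfire.PostgreSql schema (hangfire.job, hangfire.state, hangfire.jobqueue; column names vary slightly between versions, so check your database first). Besides adding a queue entry, the job's current state likely needs to be moved back to Enqueued so the server treats it as fresh work:

-- Sketch only: assumes the default Hangfire.PostgreSql schema and the 'default' queue.
WITH failed AS (
    SELECT id FROM hangfire.job WHERE statename = 'Failed'
),
new_state AS (
    -- 1. Record a new 'Enqueued' state row for every failed job.
    INSERT INTO hangfire.state (jobid, name, reason, createdat)
    SELECT id, 'Enqueued', 'Re-queued manually via SQL', now() AT TIME ZONE 'utc'
    FROM failed
    RETURNING id, jobid
),
requeued AS (
    -- 2. Point each job at its new state so stateid/statename show 'Enqueued'.
    UPDATE hangfire.job j
    SET stateid = ns.id, statename = 'Enqueued'
    FROM new_state ns
    WHERE j.id = ns.jobid
    RETURNING j.id
)
-- 3. Put the jobs back on the queue so a worker picks them up.
INSERT INTO hangfire.jobqueue (jobid, queue)
SELECT id, 'default' FROM requeued;

If ad-hoc SQL against Hangfire's internal schema feels too fragile, another option is a small one-off console app pointed at the same storage that calls BackgroundJob.Requeue(jobId) for each failed job, which performs the same transition through the official API.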

Related

Triggering a scheduled job from multiple hangfire servers

We are triggering a scheduled job at a configured interval and it fires as expected. This works in a single Hangfire server environment.
In some of our environments there will be more than one server, so in that scenario all the servers would trigger the job at that interval. Can we restrict this job so that it is triggered by only one server?
string cronExpressionCleanupJob = "0 0 0/{2} ? * *";
RecurringJob.AddOrUpdate<CleanUpJobTriggerJob>(nameof(CleanUpJobTriggerJob),
job => job.ExecuteJob(null, null), cronExpressionCleanupJob, TimeZoneInfo.Local);
var hangFireJobId = BackgroundJob.Enqueue<CleanUpJobTriggerJob>(x => x.ExecuteJob(null, null));
According to the documentation, you should not worry as long as you specify the same identifier for your recurring job.
The call to the AddOrUpdate method will create a new recurring job or update the existing job with the same identifier.
Because all servers share the same job storage, a recurring job registered under a single identifier exists only once, so it is enqueued only once per interval no matter how many servers are running.

Can we check if a table in BigQuery is locked or a DML operation is being performed

There is a BigQuery table with multiple data load/update/delete jobs scheduled against it. Since these are automated jobs, many of them fail due to concurrent update issues.
I need to know whether BigQuery provides a way to check if the table is already locked by a DML operation, and whether we can serialize the queries so that no job fails.
You could use the job ID generated by client code to keep track of the job status and only begin the next query when that job is done. This and this describe that process.
Alternatively, you could use exponential backoff to retry the query a certain number of times to prevent automatic failure of the query due to locked tables.
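Not part of the answer above, but if you want to check directly whether any DML job is currently running against the table before submitting the next one, newer BigQuery projects expose this through the INFORMATION_SCHEMA.JOBS_BY_PROJECT view; a rough sketch (the region qualifier, dataset and table names are placeholders to adapt):

-- Sketch: list currently running jobs that reference the target table.
SELECT job_id, user_email, statement_type, creation_time
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT,
     UNNEST(referenced_tables) AS t
WHERE state = 'RUNNING'
  AND t.dataset_id = 'my_dataset'
  AND t.table_id = 'my_table';

An empty result means nothing is currently mutating the table, but there is still a race window between checking and submitting, so tracking job IDs or retrying with backoff, as described above, remains the more robust approach.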

Using PowerShell - how to prevent SQL from accessing the same record

I want to run multiple instances of PowerShell to collect data from Exchange. I use Invoke-Sqlcmd to run various SQL commands from PowerShell.
SELECT TOP 1 SmtpAddress FROM [MMC].[dbo].[EnabledAccounts]
WHERE [location] = '$Location' AND Script1 = 'Done'
  AND (Script2 = '' OR Script2 IS NULL)
When running more than one script, I see both scripts picking up the same record. I know there's a way to update the record to lock it, but I'm not sure how to write it out. TIA :-)
The database management system (I'll assume SQL Server) will handle contention for you. Meaning, if you have two sessions trying to update the same set of records, SQL Server will block one session while the other completes. You don't need to do anything to explicitly make that happen. That said, it's a good idea to run your update in a transaction if you are applying multiple updates as a single unit; a single change occurs in an implicit transaction. The following thread talks more about transactions using Invoke-SqlCmd.
Multiple Invoke-SqlCmd and Sql Server transaction
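If the goal is that two concurrently running scripts never pick up the same record, blocking alone is not enough; one common pattern (a sketch under assumptions, not something from the thread above) is to claim the row in a single atomic UPDATE with the READPAST hint so each script skips rows already claimed or locked by the other. Table and column names are the asker's; the 'InProgress' marker value and the nvarchar(256) length are assumptions:

-- Claim exactly one unprocessed row, skipping rows locked by other sessions.
DECLARE @claimed TABLE (SmtpAddress nvarchar(256));

UPDATE TOP (1) [MMC].[dbo].[EnabledAccounts] WITH (ROWLOCK, READPAST)
SET Script2 = 'InProgress'
OUTPUT inserted.SmtpAddress INTO @claimed (SmtpAddress)
WHERE [location] = '$Location' AND Script1 = 'Done'
  AND (Script2 = '' OR Script2 IS NULL);

SELECT SmtpAddress FROM @claimed;  -- an empty result means no work is left

Each script then works only on the address it managed to claim, and can set Script2 = 'Done' when it finishes.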

Locking database rows

I have a table in my database with defined jobs. One of the job attributes is status, which can be [waiting, in_progress, done]. To process jobs I have defined a master-worker relation between two servers; they work in the following way:
The master looks for the first record in the database with status 'waiting' and triggers a worker server to process the job.
The triggered worker sets the job's status to 'in_progress' in one transaction and starts executing the job (in the meantime the master looks for the next job with status 'waiting' and triggers another worker).
When the job is done, the worker sets the job's status to 'done'.
This is the perfect-case scenario; however, it might happen that one of the workers dies during job execution. In that case the job needs to be restarted, but the master has no way of verifying that the job was done other than checking its status in the database ('done'). So if a worker dies, the job still has status 'in_progress' in the database, and the master has no idea that the worker died and that the job needs to be restarted. (We can't get any feedback from the worker: we cannot ping it and we cannot find out which job it is currently working on.)
My idea to solve this problem would be:
After the worker has changed the job's status to 'in_progress' (transaction committed), it opens a new transaction holding a lock on that particular job.
The master, while looking for jobs to start, would look for jobs that are either 'waiting' or 'in_progress' and not locked.
If the worker dies, its transaction breaks, releasing the lock on the job record in the database, and the master would reprocess the job.
Now I'm looking for a way to verify that this would indeed work, possibly with SQL scripts in SQL Developer (two instances). I would like this to work in the following way:
Instance 1:
Open a transaction and create a row lock on row_1
Sleep for 5 minutes
Release the lock
Instance 2:
Open a transaction and look for rows matching the criteria (both row_1 and row_2 match)
Return the selected row
Then I would kill instance 1 to simulate a worker death and run instance 2 again.
Please advise whether my approach is correct.
P.S.
Could you also point me to a good explanation of how I can create the script for instance 1?
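A minimal sketch of such a two-session test, assuming an Oracle database (since SQL Developer is mentioned) and a hypothetical jobs table with id and status columns. The key building block is SELECT ... FOR UPDATE SKIP LOCKED, which lets the master ignore rows whose lock is still held by a live worker session:

-- Session 1 (simulated worker): mark the job, then hold a row lock on it.
UPDATE jobs SET status = 'in_progress' WHERE id = 1;
COMMIT;
SELECT id FROM jobs WHERE id = 1 FOR UPDATE;  -- lock is held until commit/rollback
-- ...keep this session open (or sleep) to simulate a long-running job...

-- Session 2 (simulated master): see only jobs not locked by a live worker.
SELECT id, status
FROM jobs
WHERE status IN ('waiting', 'in_progress')
FOR UPDATE SKIP LOCKED;

While session 1 is alive, session 2 skips row 1 and returns only the unlocked rows; once you kill session 1, its lock is released and row 1 becomes visible again, which is exactly the behaviour the master needs in order to reprocess the jobs of dead workers.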

Activiti: Last completed task

Problem
I want to get the last task completed for the process instance. I am able to get the last completed human task, but not a Service task.
What I have tried
I have written a SQL query (I am using MySQL) to find out which task was completed last. Here it is:
SELECT * FROM act_hi_taskinst
where PROC_INST_ID_= '1234' and END_TIME_ IS NOT NULL
order by END_TIME_ desc;
act_hi_taskinst is the table that gets updated as and when the process instance progresses.
The process flow goes something like this:
A human task (Leave request) -> Service task (Check availability of leave) -> Service task (Check feasibility) -> A human task (Manager task)
When the process reaches the Manager task, the last completed task is Check feasibility, but this is not reflected in the database.
Can you please help?
Does Activiti provide any API to get the last completed service task? Can you suggest an SQL query to solve the problem?
The information you are looking for is stored in the act_hi_actinst table, which contains information about every activity executed as part of a process instance. act_hi_taskinst records only user (human) tasks, which is why the completed service tasks never show up there.
SELECT * FROM act_hi_actinst WHERE proc_inst_id_ = '1929'
AND end_time_ IS NOT NULL ORDER BY end_time_ DESC