Resque scheduler not enqueuing jobs - ruby-on-rails-3

Schedules exist and are set to run at certain times as defined in the schedule, but the tasks are never executed.
The schedule has an option to manually start a specific scheduled task, and that executes and performs the task correctly. So the task itself works; it just never starts automatically.

There were two issues here:
1. Some Resque and resque-scheduler processes were still running from an old release path (Capistrano creates a folder for each release and symlinks “current” to the latest release path). This caused some schedules to pick up the wrong job code.
2. The current upstart script for resque-scheduler doesn’t enable dynamic schedules, so the dynamic schedules are never picked up.

Related

Pentaho Task Locked

I have some tasks in Pentaho, and for some reason, some of them sometimes stall with the message Carte - Installing timer to purge stale objects after 1440 minutes. For example, I scheduled one task to run at 05:00 AM; it usually finishes in 10 minutes, but sometimes it never ends and stalls with the aforementioned message. However, when I run the job from the Pentaho Data Integration canvas, it works.
The command that I use to run it is:
cd c:\data-integration
kitchen.bat /rep:repo /job:jobs//job_ft_acidentes /dir: /level:Minimal
Picture of the message
How can I prevent this error?

MuleSoft batch job is not executed

I'm running Mule Runtime 4.1.5, and a batch job handles the work of synchronizing data.
If the batch job is completed normally, the log should look like this:
Created instance 'dc97a040-009e-11ec-a7bf-00155d801499' for batch job 'sendFlow_Job'
splitAndLoad: Starting loading phase for instance 'dc97a040-009e-11ec-a7bf-00155d801499' of job 'sendFlow_Job'
Finished loading phase for instance dc97a040-009e-11ec-a7bf-00155d801499 of job sendFlow_Job. 1 records were loaded
Started execution of instance 'dc97a040-009e-11ec-a7bf-00155d801499' of job 'sendFlow_Job'
batch step customer log ....
Finished execution for instance 'dc97a040-009e-11ec-a7bf-00155d801499' of job 'sendFlow_Job'. Total Records processed: 1. Successful records: 1. Failed Records: 0
=================end=======================
The log in question is as follows:
Created instance 'dc97a040-009e-11ec-a7bf-00155d801499' for batch job 'sendFlow_Job'
splitAndLoad: Starting loading phase for instance 'dc97a040-009e-11ec-a7bf-00155d801499' of job 'sendFlow_Job'
Finished loading phase for instance dc97a040-009e-11ec-a7bf-00155d801499 of job sendFlow_Job. 1 records were loaded
Started execution of instance 'dc97a040-009e-11ec-a7bf-00155d801499' of job 'sendFlow_Job'
=================end===================
It is clear from the log that the batch job only completed the first phase. After that, the batch job behaves as if it never existed: there is no log output and no errors are thrown. And checking the target database confirms that the data was indeed not synchronized.
I reproduced the problem in my local environment: if I use kill -9 to kill the process while a batch step is executing, the process restarts, and from then on all batch jobs have this problem.
I found the queue files used by the batch job in the .mule folder. They are named like BSQ-batch-job-flow-name-dc97a040-009e-11ec-a7bf-00155d801499-XXX.
Under normal circumstances, each batch job creates three BSQ files and deletes them when the job completes.
In my case, the BSQ files are created but never deleted.
I looked up some posts, and they suggested deleting the .mule folder and restarting. But in the actual environment I can't predict when the problem will occur, and deleting the .mule folder does not completely solve the problem of batch jobs not being executed.
Is anyone proficient with Mule batch jobs? Can you give me some suggestions? Thanks.
You should not delete the .mule directory. There is other information in there, unrelated to batch, that would be lost: clustering configurations, persistent object stores, and other applications' batches and queues. It may be OK to delete it inside the Studio embedded runtime, because that is just your development environment and you are probably not losing production data, but in any case just deleting information is not a solution.
There are too many possible causes to identify the right one, and you should provide a lot more information. My first recommendation is to ensure your Mule 4.1.5 has the latest cumulative patch, so that all known issues are resolved. Note that Mule 4.1.5 was released almost 3 years ago. If at all possible, migrate to the latest Mule 4.3.0 with the latest cumulative patches; it should be more stable and performant than 4.1.5.

How to detect APScheduler's running jobs?

I have some recurring jobs that run frequently or last for a while.
It seems that Scheduler().get_jobs() only returns the list of scheduled jobs that are not currently running, so I cannot tell whether a job with a certain ID does not exist or is actually running.
How may I test if a job is running or not in this situation?
(I did not set these jobs up in the usual way, because I need them to run at random intervals rather than a fixed interval: each job executes only once, then adds a new job with the same ID at the end of its execution, and stops doing so once a certain threshold is reached.)
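Because these one-shot jobs remove themselves from the job store while they execute, one workaround is to track running job IDs yourself by wrapping the job callable before handing it to the scheduler. Below is a minimal stdlib-only sketch of that pattern; `make_tracked` and `is_running` are hypothetical helper names, not APScheduler API:

```python
import threading

# Hypothetical helpers, not part of APScheduler: wrap each job callable so we
# can ask "is job X executing right now?" independently of the job store.
_running = set()
_lock = threading.Lock()

def make_tracked(job_id, func):
    """Return a wrapper that records job_id while func is executing."""
    def wrapper(*args, **kwargs):
        with _lock:
            _running.add(job_id)
        try:
            return func(*args, **kwargs)
        finally:
            with _lock:
                _running.discard(job_id)
    return wrapper

def is_running(job_id):
    with _lock:
        return job_id in _running

# Usage sketch: scheduler.add_job(make_tracked("sync", do_sync), ...)
def do_sync():
    assert is_running("sync")   # inside the wrapped job, it is marked running

make_tracked("sync", do_sync)()
print(is_running("sync"))  # False once the job has finished
```

A job ID is then "missing" only if it is neither in `get_jobs()` nor in the running set.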
APScheduler does not filter the list of jobs for get_jobs() in any way. If you need random scheduling, why not implement that in a custom trigger instead of constantly re-adding the job?
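The core of such a custom trigger is just "next fire time = last fire time plus a random delay". A self-contained sketch of that logic is below; in a real APScheduler 3.x trigger you would subclass apscheduler.triggers.base.BaseTrigger and put this in its get_next_fire_time(previous_fire_time, now) method, while the class name and bounds here are illustrative assumptions:

```python
import random
from datetime import datetime, timedelta

class RandomIntervalTrigger:
    """Sketch of a random-interval trigger (not a drop-in APScheduler class)."""

    def __init__(self, min_seconds, max_seconds):
        self.min_seconds = min_seconds
        self.max_seconds = max_seconds

    def get_next_fire_time(self, previous_fire_time, now):
        # Fire again a random delay after the last run, or after `now` for
        # the first run - mirroring the self-re-adding jobs in the question.
        base = previous_fire_time or now
        delay = random.uniform(self.min_seconds, self.max_seconds)
        return base + timedelta(seconds=delay)

trigger = RandomIntervalTrigger(30, 90)
now = datetime.now()
nxt = trigger.get_next_fire_time(None, now)
print(30 <= (nxt - now).total_seconds() <= 90)  # True
```

With a trigger like this, the job stays in the job store between runs, so `get_jobs()` keeps reporting it.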

queue job all day and execute it at a specified time

Is there a plugin, or can I somehow configure Jenkins, so that a job (triggered by 3 other jobs) queues until a specified time and only then executes the whole queue?
Our case is this:
we have tests running for 3 branches
1. each of the 3 build jobs for those branches triggers the same smoke-test-job, which runs immediately
2. each of the 3 build jobs for those branches triggers the same complete-test-job
Points 1 and 2 work perfectly fine.
The complete-test-job should queue the tests all day long and just execute them in the evening or at night (starting from a defined time like 6 pm), so that the tests are run at night and during the day the job is silent.
It's not an option for us to trigger the complete-test-job at a specified time with the newest version; we absolutely need the trigger from the upstream build job (because of the promotion plugin, and because we do not want to run already-tested versions again).
That seems a rather strange request. Why queue a build if you don't want it now... And if you want a build later, then you shouldn't be triggering it now.
You can use Jenkins Exclusion plugin. Have your test jobs use a certain resource. Make another job whose task is to "hold" the resource during the day. While the resource is in use, the test jobs won't run.
Problem with this: you are going to tie up your executors with queued, non-executing jobs, and there won't be free executors for other jobs.
Haven't tried it myself, but this sounds like a solution to your problem.
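The idea behind the Exclusion-plugin answer is a classic lock-holding pattern: a "holder" job owns a shared resource during the day, and the triggered test jobs block on that resource until it is released in the evening. A generic Python sketch of that pattern (not Jenkins code; `day_holder` and the job names are illustrative):

```python
import threading

# Shared resource that the test jobs must acquire before running.
resource = threading.Lock()
results = []

def complete_test_job(version):
    with resource:              # queues here while the holder owns the lock
        results.append(version)

release = threading.Event()

def day_holder():
    with resource:
        release.wait()          # stands in for "hold until 6 pm"

holder = threading.Thread(target=day_holder)
holder.start()
# During the day, upstream builds trigger test jobs; they block on the lock.
jobs = [threading.Thread(target=complete_test_job, args=(v,))
        for v in ("a", "b")]
for j in jobs:
    j.start()
release.set()                   # evening: holder releases the resource
holder.join()
for j in jobs:
    j.join()
print(sorted(results))  # ['a', 'b']
```

As the answer above notes, in Jenkins each blocked job still occupies an executor, which is the main drawback of this approach.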

Oozie start time and submission time delay

I'm working on a workflow that has both Hive and Java actions. Very often we notice a delay of a few minutes between a Java action's start time and its job submission time. We don't see that with Hive jobs; they seem to be submitted almost immediately after they are started. The Java jobs do not do much, so they finish successfully within seconds of being submitted, but the time between start and submission can be very high (4-5 minutes).
We are using the fair scheduler, and there are enough mapper/reducer slots available. Even if this were a resource problem, the Hive jobs should also show a delay between start and submission, but they don't. The Java jobs are very simple: they don't process any files, are basically used to call a web service, and spawn only a single mapper and no reducers, whereas the Hive jobs create hundreds of mapper/reducer tasks, yet there is no delay between their start and submission. We can't figure out why Oozie is not submitting the Java jobs immediately. Any ideas?