how to run (or not) a cron job based on the results of another cron job - apache

I need to execute a cron job based on whether or not the cron job that ran before it was at least partially successful. I am trying to set up conditions for the run... sometimes these conditions would be on the local machine, sometimes on a remote machine...
is there a way to do this?

In that case you can share a common file: the first job writes its status into it, and the second cron job reads it and decides on that basis whether it should proceed or not.
If the jobs run on different machines, you can share that file over the network.
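For example, here is a minimal sketch of a wrapper the second cron job could call, assuming the first job writes OK, PARTIAL, or FAIL into a shared status file when it finishes (the path, the markers, and second_job.sh are placeholders; put the file on a network share if the jobs run on different machines):

# run_if_first_succeeded.py - called from the second job's crontab entry
import subprocess
import sys
from pathlib import Path

STATUS_FILE = Path("/var/tmp/first_job.status")  # written by the first cron job

def main():
    # skip quietly unless the first job reported at least partial success
    if not STATUS_FILE.exists() or STATUS_FILE.read_text().strip() not in ("OK", "PARTIAL"):
        sys.exit(0)
    # run the real work of the second job (placeholder command)
    subprocess.run(["/usr/local/bin/second_job.sh"], check=True)

if __name__ == "__main__":
    main()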

Related

Is it possible to request more time for a running job in SLURM?

I know it's possible on a queued job to change directives via scontrol, for example
scontrol update jobid=111111 TimeLimit=08:00:00
This only works in some cases, depending on the administrative configuration of the slurm instance (I'm not an admin). Thus this post does not answer my question.
What I'm looking for is a way to ask SLURM to add more time to a running job, if resources are available, and even if it's already running. Sort of like a nested job request.
Particularly a running job that was initiated with srun on-the-fly.
In https://slurm.schedmd.com/scontrol.html, it is clearly written under TimeLimit:
Only the Slurm administrator or root can increase job's TimeLimit.
So I fear what you want is not possible.
And it makes sense: since the scheduler looks at job time limits to decide which jobs to launch, and short jobs can benefit from backfilling to start before longer ones, it would be a real mess if users were allowed to change a job's length while it is running. Indeed, how would you define "when resources are available"? A node can sit IDLE for some time because Slurm knows it will need it soon for a large job.
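If you control the submission, the usual workaround is to request a generous limit up front when launching (for example srun --time=08:00:00 ..., within the partition's limits); as a regular user you can lower a job's TimeLimit yourself, but raising it is reserved for administrators, as quoted above.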

Prevent Job execution if job was scheduled to be executed in the past?

I'm looking for a way to prevent Hangfire from executing a job that was scheduled to be executed in the past. For instance, if Product import is scheduled to run at 4am, but the server goes down and only comes back up at 7am, Hangfire will execute the 4am job immediately. I want to prevent this, any ideas?
What complicates matters is that I still want to be able to execute the job manually when necessary, so I cannot hard-code a check to make sure the job doesn't execute outside of the 4am window.
Any ideas would be welcome.
More here https://github.com/HangfireIO/Hangfire/issues/620

Autosys JIL ignoring success conditions

I hope someone can point me in the right direction or shed some light on the issue I'm having. We have Autosys 11.3.5 running in a Windows environment.
I have several jobs set up to launch on a remote NAS server.
I needed JOB_1 in particular to only run if another completed successfully.
Seems straightforward enough. In the UI there's a section to specify a Condition, such as s(job_name), which is what I have done, and I'm assuming that my initial job should run ONLY if the job named job_name succeeds.
No matter what I do, when I make the second job fail on purpose (whether by manually setting its status to FAILURE or by changing some of its parameters so that its natural run causes it to fail), the other job that I run afterwards seems to ignore the condition altogether and completes successfully each time.
I've triple-checked the job names (in fact I copied and pasted them from the JIL definition of the job, so there are no typos), but the condition is still being ignored.
Any help in figuring out how to make one job only run if another did not fail (and NOT to run if it DID fail) would be appreciated.
If both jobs are scheduled and become active together, this should not happen.
My guess is that you are force-starting the other job while the first one is in a failed state. If that's the case, the conditions will not work.
You need to let both jobs start as per their schedule, or at least let the dependent job start as per its schedule while the first one has failed. In that case the dependent job will stay in the AC (activated) state until the first one goes to SU (success).
Let me know if this is not the case; I will have to suggest another solution then.

How to detect APScheduler's running jobs?

I have some recurring jobs that run frequently or last for a while.
It seems that Scheduler().get_jobs() only returns the list of scheduled jobs that are not currently running, so I cannot determine whether a job with a certain id does not exist or is actually running.
How can I test whether a job is running or not in this situation?
(I did not set up those jobs the usual way, because I need them to run at a random interval rather than a fixed one: they are jobs that execute only once, but each adds a new job with the same id at the end of its execution, and they stop doing so once a certain threshold is reached.)
APScheduler does not filter the list of jobs for get_jobs() in any way; a run-once job is simply removed from the job store as soon as it has been submitted for execution, which is why it no longer shows up while it is running. If you need random scheduling, why not implement that in a custom trigger instead of constantly re-adding the job?
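A minimal sketch of such a trigger for APScheduler 3.x, assuming a random delay between 60 and 300 seconds (adjust the bounds and the stop condition to your own threshold logic):

import random
from datetime import timedelta
from apscheduler.triggers.base import BaseTrigger

class RandomIntervalTrigger(BaseTrigger):
    """Fires repeatedly after a random delay, so the job stays registered under one id."""

    def __init__(self, min_seconds=60, max_seconds=300):
        self.min_seconds = min_seconds
        self.max_seconds = max_seconds

    def get_next_fire_time(self, previous_fire_time, now):
        # first run: a random delay from now; later runs: a random delay after the previous run
        base = previous_fire_time or now
        return base + timedelta(seconds=random.randint(self.min_seconds, self.max_seconds))

Registered with something like scheduler.add_job(my_func, RandomIntervalTrigger(60, 300), id='random-job'), the job keeps its single id and keeps showing up in get_jobs(), because it always has a next fire time.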

queue job all day and execute it at a specified time

Is there a plugin, or can I somehow configure Jenkins, so that a job (which is triggered by 3 other jobs) queues up until a specified time and only then executes the whole queue?
Our case is this:
we have tests run for 3 branches:
1. each of the 3 build jobs for those branches triggers the same smoke-test-job, which runs immediately
2. each of the 3 build jobs for those branches triggers the same complete-test-job
Points 1. and 2. work perfectly fine.
The complete-test-job should queue the tests all day long and just execute them in the evening or at night (starting from a defined time like 6 pm), so that the tests are run at night and during the day the job is silent.
It's not an option to simply trigger the complete-test-job at a specified time with the newest version; we absolutely need the trigger from the upstream build job (because of the promotion plugin, and because we do not want to re-run versions that have already been tested).
That seems a rather strange request. Why queue a build if you don't want it now... And if you want a build later, then you shouldn't be triggering it now.
You can use the Jenkins Exclusion plugin. Have your test jobs use a certain resource. Make another job whose task is to "hold" the resource during the day. While the resource is in use, the test jobs won't run.
Problem with this: you are going to tie up your executors with jobs that are just waiting and not executing anything, and there won't be free executors left for other jobs.
Haven't tried it myself, but this sounds like a solution to your problem.