GitLab CI: choose between 2 manual jobs - gitlab-ci

I want to know if there is any way to have two manual jobs in the same stage such that, if one is triggered, the other is canceled.
Basically, I want two manual jobs: one to continue my pipeline, so that if it is triggered the pipeline continues and the second manual job is canceled.
And if the second manual job is the one triggered, the first manual job is canceled and the pipeline stops.
I have tried many things, but nothing seems to work, and I couldn't find a topic about this kind of problem.

I believe you can use a Directed Acyclic Graph (DAG) to make sure job 2 and job 3 do not start until manual job 1 finishes successfully, but as far as I know there is no way to easily cancel one job from another.
You could try using the Jobs API, but I'm not sure how to get the ID of manual job 1 from manual job 2 and vice versa.
Cancelling the entire pipeline would be easy. All you need for that is the predefined variable CI_PIPELINE_ID (or CI_PIPELINE_IID; I'm not sure which is the right one) and the Pipelines API.
Edit: I suppose that, knowing the pipeline ID, you could get all the jobs for the pipeline with the Jobs API, parse the JSON into a map of job names to IDs, and finally use this map to cancel all the jobs you want canceled.
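For the record, here is a minimal sketch of what that name-to-ID map and cancel call could look like from inside one of the manual jobs, assuming Python with requests is available in the job image and a token with api scope is exposed as a CI/CD variable (GITLAB_API_TOKEN and the job name "manual_job_2" are placeholder names of mine):

```python
# Hedged sketch: cancel a sibling manual job in the same pipeline via the GitLab API.
# Assumes a CI/CD variable GITLAB_API_TOKEN (placeholder name) holding a token with "api" scope,
# and that CI_PIPELINE_ID is the right ID for the project-level Pipelines/Jobs endpoints.
import os
import requests

api = os.environ["CI_API_V4_URL"]          # e.g. https://gitlab.example.com/api/v4
project = os.environ["CI_PROJECT_ID"]
pipeline = os.environ["CI_PIPELINE_ID"]
headers = {"PRIVATE-TOKEN": os.environ["GITLAB_API_TOKEN"]}

# Build a map of job names to IDs for the current pipeline.
jobs = requests.get(
    f"{api}/projects/{project}/pipelines/{pipeline}/jobs",
    headers=headers,
).json()
ids = {job["name"]: job["id"] for job in jobs}

# Cancel the other manual job ("manual_job_2" is just an example name).
other = ids.get("manual_job_2")
if other is not None:
    requests.post(f"{api}/projects/{project}/jobs/{other}/cancel", headers=headers)

# Or, to cancel the whole pipeline instead:
# requests.post(f"{api}/projects/{project}/pipelines/{pipeline}/cancel", headers=headers)
```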

Related

Is it possible to schedule a retry for a SQL job instead of its step components whenever one of the steps fails?

I am using SQL Server 2014 and I have a SQL job (an SSIS package containing 11 steps) that is scheduled to run daily at a specific time.
I know one can configure each step to attempt a retry whenever that step fails. However, is there a way to configure a retry for the whole SQL job whenever the job fails at any step during the process? That is, if, say, the job fails at Step 8, the whole job is run again from Step 1.
The tidiest solution I can think of is to create an error-handling step in your job that is executed when any other step fails (change the On Failure action of all other steps to jump to this one) and have it manage the job's schedule so that the job triggers again on the following minute, after it ends. This way you will still see the execution history of the job in the Agent.
You will have to keep recurrent failures in mind; I doubt you want the job to repeat itself indefinitely.
To make the job trigger again, you can add a schedule that fires every minute and enable/disable it as necessary. The job won't fire if it's already running.
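To make that concrete, here is a minimal sketch, using Python and pyodbc, of the schedule toggle the error-handling step could perform (for example if that step is run as a CmdExec/PowerShell step calling a script). The schedule name "RetryEveryMinute" and the connection string are placeholders of mine, not part of the original suggestion:

```python
# Hedged sketch: toggle a SQL Server Agent schedule from the error-handling step.
# "RetryEveryMinute" is an assumed schedule name attached to the job; adjust the
# connection string for your environment.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=msdb;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

def set_retry_schedule(enabled: bool) -> None:
    # Enable (1) or disable (0) the every-minute retry schedule.
    cur.execute(
        "EXEC msdb.dbo.sp_update_schedule @name = ?, @enabled = ?",
        "RetryEveryMinute",
        1 if enabled else 0,
    )

# Error-handling step: re-enable the schedule so the job fires again next minute.
set_retry_schedule(True)

# The first step of the job (or a dedicated step) could disable it again once a run starts:
# set_retry_schedule(False)
conn.close()
```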

Autosys JIL ignoring success conditions

I hope someone can point me in the right direction or shed some light on the issue I'm having. We have Autosys 11.3.5 running in a Windows environment.
I have several jobs set up to launch on a remote NAS server.
I needed JOB_1 in particular to only run if another completed successfully.
It seems straightforward enough. In the UI there's a section to specify a Condition such as s(job_name), which I have done, and I'm assuming that my initial job should run ONLY if the job named job_name succeeds.
No matter what I do, when I make the second job fail on purpose (whether by manually setting its status to FAILURE or by changing some of its parameters so that its natural run causes it to fail), the other job that I run afterwards seems to ignore the condition altogether and completes successfully each time.
I've triple checked the job names (in fact I copy and pasted it from the JIL definition of the job so there are no typos), but it's still being ignored.
Any help in figuring out how to make one job only run if another did not fail (and NOT to run if it DID fail) would be appreciated.
If both jobs are scheduled and become active together, this should not happen.
The way I see it, you must be force-starting the dependent job while the first one is in a failed state. If that's the case, the conditions will not work.
You need to let both jobs start as per their schedule, or at least let the dependent job start as per its schedule while the first one is failed. In that case the dependent job will stay in the AC (activated) state until the first one reaches SU (success).
Let me know if this is not the case; I will have to propose another solution then.

RabbitMQ job queues completion indicator event

I am trying out RabbitMQ with Spring Boot. I have a main process, and within that process I create many small tasks that can be processed by other workers. From the main process's perspective, I'd like to know when all of these tasks are completed so that it can move to the next step. I did not find an easy way to query RabbitMQ to check whether the tasks are complete.
One solution I can think of is to store these tasks in a database and, when each message is processed, update the database with a COMPLETE status. Once all jobs are in COMPLETE status, the main process knows the jobs are complete and can move to the next step of its process.
Another solution I can think of is for the main process to maintain the list of jobs that have been sent to other workers. Once each worker completes its job, it sends a message to the main process indicating the job is complete. The main process then marks the job as complete and removes the item from the list. Once the list is empty, the main process knows the jobs are complete and can move to the next step of its work.
I am looking to learn best practices on how other people have dealt with this kind of situation. I'd appreciate any suggestions.
Thank you!
There is no way to query RabbitMQ for this information.
The best way to approach this is with the use of a process manager.
The basic idea is to have your individual steps send a message back to a central process that keeps track of which steps are done. When that main process receives notice that all of the steps are done, it lets the system move on to the next thing.
The details of this approach are fairly complex, but I do have a blog post that covers the core of a process manager from a JavaScript/NodeJS perspective.
You should be able to find something like a "process manager" or "saga", as they are sometimes called, within your language and RabbitMQ framework of choice. If not, you should be able to write one for your process without too much trouble, as described in my blog post.
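For illustration only, here is a minimal sketch of that tracking idea in Python with pika (the question uses Spring Boot, so treat the queue name "task.completed", the task-id scheme, and the message shape as assumptions of mine, not an established convention):

```python
# Hedged sketch of the "process manager" idea: the main process publishes tasks,
# remembers their IDs, and consumes completion messages until none are pending.
import json
import uuid
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="tasks")
channel.queue_declare(queue="task.completed")

# 1. Publish the small tasks and remember their IDs.
pending = set()
for payload in ["work-1", "work-2", "work-3"]:
    task_id = str(uuid.uuid4())
    pending.add(task_id)
    channel.basic_publish(
        exchange="",
        routing_key="tasks",
        body=json.dumps({"id": task_id, "payload": payload}),
    )

# 2. Workers are expected to publish {"id": ...} to "task.completed" when done.
def on_completed(ch, method, properties, body):
    pending.discard(json.loads(body)["id"])
    ch.basic_ack(delivery_tag=method.delivery_tag)
    if not pending:
        ch.stop_consuming()  # all tasks done, the main process can move on

channel.basic_consume(queue="task.completed", on_message_callback=on_completed)
channel.start_consuming()
print("All tasks completed, moving to the next step.")
connection.close()
```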

How to detect APScheduler's running jobs?

I have some recurring jobs that run frequently or last for a while.
It seems that Scheduler().get_jobs() will only return the list of scheduled jobs that are not currently running, so I cannot determine whether a job with a certain ID does not exist or is actually running.
How may I test whether a job is running or not in this situation?
(I did not set up those jobs the usual way, because I need them to run at a random interval rather than a fixed one. They are jobs that execute only once, but each adds a new job with the same ID at the end of its execution, and they stop doing so when a certain threshold is reached.)
APScheduler does not filter the list of jobs for get_jobs() in any way. If you need random scheduling, why not implement that in a custom trigger instead of constantly re-adding the job?
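For what it's worth, here is a minimal sketch of what such a custom trigger could look like with APScheduler 3.x (the 30-300 second bounds and the job id "random_job" are arbitrary examples of mine):

```python
# Hedged sketch: a random-interval trigger for APScheduler 3.x, as an alternative
# to having each one-shot job re-add itself under the same id.
import random
import time
from datetime import timedelta

from apscheduler.triggers.base import BaseTrigger
from apscheduler.schedulers.background import BackgroundScheduler


class RandomIntervalTrigger(BaseTrigger):
    """Fire the job again after a random delay between min_seconds and max_seconds."""

    def __init__(self, min_seconds, max_seconds):
        self.min_seconds = min_seconds
        self.max_seconds = max_seconds

    def get_next_fire_time(self, previous_fire_time, now):
        # Called by the scheduler to compute the next run time.
        delay = random.randint(self.min_seconds, self.max_seconds)
        return (previous_fire_time or now) + timedelta(seconds=delay)


def task():
    print("running")


scheduler = BackgroundScheduler()
scheduler.add_job(task, RandomIntervalTrigger(30, 300), id="random_job")
scheduler.start()

# Because the job keeps one stable id, scheduler.get_job("random_job") always
# tells you whether it still exists, instead of the id disappearing between runs.
try:
    time.sleep(600)
finally:
    scheduler.shutdown()
```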

queue job all day and execute it at a specified time

Is there a plugin, or can Jenkins be configured somehow, so that a job (triggered by 3 other jobs) queues up until a specified time and only then executes the whole queue?
Our case is this: we have tests run for 3 branches.
1. Each of the 3 build jobs for those branches triggers the same smoke-test-job, which runs immediately.
2. Each of the 3 build jobs for those branches triggers the same complete-test-job.
Points 1 and 2 work perfectly fine.
The complete-test-job should queue the tests all day long and only execute them in the evening or at night (starting from a defined time such as 6 pm), so that the tests run at night and during the day the job is silent.
Triggering the complete-test-job at a specified time with the newest version is not an option; we absolutely need the trigger from the upstream build job (because of the Promotion plugin, and because we do not want to re-run versions that have already run).
That seems a rather strange request. Why queue a build if you don't want it now? And if you want a build later, then you shouldn't be triggering it now.
You can use Jenkins Exclusion plugin. Have your test jobs use a certain resource. Make another job whose task is to "hold" the resource during the day. While the resource is in use, the test jobs won't run.
Problem with this: you are going to kill your executors by having queued non-executing jobs, and there won't be free executors for other jobs.
Haven't tried it myself, but this sounds like a solution to your problem.
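To illustrate the "hold the resource during the day" part of that suggestion, the hold job's build step, placed between the Exclusion plugin's critical-block start/end steps, could be little more than a script that sleeps until the release time. A minimal sketch in Python (the 18:00 release time mirrors the question; everything else is my assumption):

```python
# Hedged sketch: build step for the "hold" job. While this script runs inside the
# Exclusion plugin's critical block, the shared resource stays locked, so the
# queued complete-test-job builds cannot start until 18:00.
import time
from datetime import datetime, timedelta

release_at = datetime.now().replace(hour=18, minute=0, second=0, microsecond=0)
if datetime.now() >= release_at:
    release_at += timedelta(days=1)  # already past 6 pm: hold until tomorrow evening

remaining = (release_at - datetime.now()).total_seconds()
print(f"Holding resource for {remaining:.0f} seconds (until {release_at})")
time.sleep(max(0, remaining))
```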