In the GitLab CI documentation, I read the following:
In this example, the job will run only for refs that are tagged, or if a build is explicitly requested via an API trigger or a Pipeline Schedule:
job:
  # use special keywords
  only:
    - tags
    - triggers
    - schedules
I noticed the document uses or instead of and, which means the job runs when any one of the conditions is met. But what if I want to configure a job to run only when all conditions are met, for instance, in a Pipeline Schedule and on the master branch?
If your specific question is "how do I run a pipeline on master only when it was scheduled", this should work:
job:
  only:
    - master
  except:
    - triggers
    - pushes
    - external
    - api
    - web
In this example you exclude every pipeline source except schedules, and run only on the master branch.
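With newer GitLab versions, `rules` can express the AND condition directly instead of working around it with `only`/`except`. A minimal sketch, using the predefined `CI_PIPELINE_SOURCE` and `CI_COMMIT_REF_NAME` variables and assuming the branch is literally named master:

```yaml
job:
  script: echo "scheduled run on master"
  rules:
    # run only when BOTH conditions hold: the pipeline was scheduled
    # AND it runs against the master branch
    - if: '$CI_PIPELINE_SOURCE == "schedule" && $CI_COMMIT_REF_NAME == "master"'
```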
Related
I have a CI configuration that allows me to merge my MR when at least one job in each stage has succeeded, even if all the other manual jobs are not executed.
I can do that by using the when: manual condition on jobs.
I replaced the plain when: manual with a rules keyword containing when: manual, and now my pipeline is blocked even when one job in each stage has succeeded.
I tried allow_failure: true and it allows me to merge my MR. The problem is that I can then merge without running any job... I want at least one job in each stage to be executed to allow merging.
Do you have any idea how to do this using rules?
Thank you!
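As context for why the switch changed behaviour: per GitLab's documentation, a top-level when: manual defaults to allow_failure: true (the pipeline continues past the manual job), while when: manual inside rules defaults to allow_failure: false, which blocks the pipeline. A sketch of the two forms (job names and scripts are placeholders):

```yaml
# top-level: the manual job is skippable, the pipeline is not blocked
job_a:
  script: echo "a"
  when: manual

# inside rules: allow_failure defaults to false, so this blocks the
# pipeline until someone runs the job; set it explicitly to restore
# the old non-blocking behaviour
job_b:
  script: echo "b"
  rules:
    - when: manual
      allow_failure: true
```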
I have a couple of queries regarding the Tivoli Workload Scheduler composer.
The command below extracts all jobstreams from a workstation irrespective of their status, i.e. both active and draft/inactive ones. How can I extract only the active jobstreams?
create jobstreams.txt from sched=WORKSTATION##
Similarly, I would need to extract only the jobs associated with active jobstreams.
create jobs.txt from jobs=WORKSTATION##
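If your composer version cannot filter on status directly, one workaround is to post-process the extract. A sketch, assuming (as composer output usually does) that a draft jobstream definition carries the DRAFT keyword on its own line and that each definition closes with END; the sample data is made up for illustration:

```shell
# sample composer extract: one active and one draft jobstream
cat > jobstreams.txt <<'EOF'
SCHEDULE CPU#ACTIVE1
ON RUNCYCLE RC1 "FREQ=DAILY"
:
JOB1
END
SCHEDULE CPU#DRAFT1
DRAFT
ON RUNCYCLE RC2 "FREQ=DAILY"
:
JOB2
END
EOF

# keep only the definitions that do not contain the DRAFT keyword;
# each definition is buffered until its closing END line
awk '
{ buf[++n] = $0 }
toupper($1) == "DRAFT" { draft = 1 }
toupper($0) ~ /^[[:space:]]*END[[:space:]]*$/ {
    if (!draft) for (i = 1; i <= n; i++) print buf[i]
    n = 0; draft = 0
}
' jobstreams.txt > jobstreams_active.txt
```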
My requirement is to loop over the same set of files using multiple pipelines,
e.g.
pipeline 1 consumes file 1 and a certain output is generated;
pipeline 2 then has to be triggered based on the output of pipeline 1, else we should skip the run;
if pipeline 2 runs, pipeline 3 has to be triggered based on the output of pipeline 2, else we should skip the run.
Is there any way to do this in ADF without nesting if-else?
You can loop through multiple pipelines using a combination of the Execute Pipeline activity and the If Condition activity. The Execute Pipeline activity allows a Data Factory pipeline to invoke another pipeline. For the false branch of the condition, execute a different pipeline (or nothing, which skips the run). Optionally, in the child pipeline, add another Execute Pipeline activity referring back to the caller.
Caution: this can turn into an infinite loop if the conditions are not configured correctly.
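A sketch of what the chaining could look like in pipeline JSON (the activity and pipeline names, and the variable driving the expression, are placeholder assumptions; point the expression at whatever output pipeline 1 actually produces):

```json
{
  "name": "MaybeRunPipeline2",
  "type": "IfCondition",
  "dependsOn": [
    { "activity": "RunPipeline1", "dependencyConditions": [ "Succeeded" ] }
  ],
  "typeProperties": {
    "expression": {
      "value": "@equals(variables('pipeline1Output'), 'run')",
      "type": "Expression"
    },
    "ifTrueActivities": [
      {
        "name": "RunPipeline2",
        "type": "ExecutePipeline",
        "typeProperties": {
          "pipeline": { "referenceName": "pipeline2", "type": "PipelineReference" },
          "waitOnCompletion": true
        }
      }
    ]
  }
}
```

With waitOnCompletion set, the parent waits for pipeline2 to finish, so a further If Condition can gate pipeline 3 the same way without nesting the whole chain in one expression.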
What do I need to change in order to run these jobs in parallel?
There is one more runner available on the server, but it's not picking up the "pending" job until the "running" one is finished.
UPDATE
The jobs are picked up by different runners, but sequentially. See ci-runner-1 and ci-runner-2.
The problem was that in config.toml (/etc/gitlab-runner/config.toml in my case) I had:
concurrent = 1
Change this to a value greater than 1, restart gitlab-runner, and all is good.
Reference:
https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
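For example, in the global section (the value 4 is just an illustration; size it to what the host can handle):

```toml
# /etc/gitlab-runner/config.toml
concurrent = 4   # up to 4 jobs may run at once across all runners on this host
```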
Is it possible to run a Kettle job more than once simultaneously?
What I am Trying
Say we are running this script twice at the same time:
sh kitchen.sh -rep="development" -dir="job_directory" -job="job1"
If I run it only once at a time, the data flow is perfectly fine.
But when I run this command twice at the same time, it throws an error like:
ERROR 09-01 13:34:13,295 - job1 - Error in step, asking everyone to stop because of:
ERROR 09-01 13:34:13,295 - job1 - org.pentaho.di.core.exception.KettleException:
java.lang.Exception: Return code 1 received from statement : mkfifo /tmp/fiforeg
Return code 1 received from statement : mkfifo /tmp/fiforeg
at org.pentaho.di.trans.steps.mysqlbulkloader.MySQLBulkLoader.execute(MySQLBulkLoader.java:140)
at org.pentaho.di.trans.steps.mysqlbulkloader.MySQLBulkLoader.processRow(MySQLBulkLoader.java:267)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:50)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.lang.Exception: Return code 1 received from statement : mkfifo /tmp/fiforeg
at org.pentaho.di.trans.steps.mysqlbulkloader.MySQLBulkLoader.execute(MySQLBulkLoader.java:95)
... 3 more
It's important to run the jobs simultaneously, twice at the same time. To accomplish this, I could duplicate every job and run the original and the duplicate at the same time, but that is not a good approach in the long run!
Question:
Is Pentaho not maintaining threads?
Am I missing some option, or can I enable some option to make Pentaho create different threads for different job instances?
Of course Kettle maintains threads. A great many of them in fact. It looks like the problem is that the MySQL bulk loader uses a FIFO. You have two instances of a FIFO called /tmp/fiforeg. The first instance to run creates the FIFO just fine; the second then tries to create another instance with the same name and that results in an error.
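The collision is easy to reproduce from a plain shell, outside Kettle; a small sketch (the scratch directory and names are just for illustration):

```shell
# reproduce the clash: two mkfifo calls on the same path
dir=$(mktemp -d)                       # scratch dir so the demo is self-contained

mkfifo "$dir/fiforeg"                  # first instance: succeeds
if ! mkfifo "$dir/fiforeg" 2>/dev/null; then
    # this is the "Return code 1" the second job instance sees
    echo "second mkfifo failed: file already exists"
    collision=yes
fi

# a name unique to each instance avoids the clash
FIFO_FILE=$(mktemp -u "$dir/fiforeg.XXXXXX")   # unique path, not created yet
mkfifo "$FIFO_FILE"                            # succeeds alongside the first FIFO

rm -r "$dir"
```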
At the start of the job, you need to generate a unique FIFO name for that instance. I think you can do this by adding a transformation at the start of the job that uses a Generate random value step to generate a random string or even a UUID and store it in a variable in the job via the Set variables step.
Then you can use this variable in the 'Fifo file' field of the MySQL bulk loader.
Hope that works for you. I don't use MySQL, so I have no way to make sure.