Fail GitLab CI pipeline if any of the stage jobs fail - gitlab-ci

I want a pipeline to fail if any job of the build stage fails. In fact, it already works that way, but I have to wait for all other jobs of the same stage to finish.
Only then does the pipeline fail.
Each job has allow_failure set to false, so I have no idea how to fail the pipeline immediately when one of the stage jobs fails. Any ideas?
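For reference, a minimal sketch of the kind of configuration being described, with hypothetical job names and scripts:

```yaml
stages:
  - build

build-app:
  stage: build
  script:
    - ./build-app.sh        # hypothetical build script
  allow_failure: false      # the default: a failure here fails the pipeline

build-lib:
  stage: build
  script:
    - ./build-lib.sh        # hypothetical; runs in parallel with build-app
  allow_failure: false
```

Both jobs start in parallel, and the pipeline is only marked failed once every job in the stage has finished, which is the behaviour the question is about.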

Related

Gitlab Pipeline shows wrong status

In .gitlab-ci.yml I am using trigger to use different CI configurations.
When running a pipeline, it shows the status as passed even though the downstream pipeline failed.
How can I make the pipeline status display failed when the downstream pipeline fails?
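For reference, a trigger job along the lines described might look like the sketch below (the child file name is hypothetical); in GitLab CI, adding strategy: depend to the trigger makes the upstream job wait for the downstream pipeline and mirror its status:

```yaml
run-downstream:
  trigger:
    include: child-ci.yml   # hypothetical child CI configuration
    strategy: depend        # upstream job reflects the downstream pipeline's result
```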

What is the best practice to run 2 pipelines on Gitlab

What is the best practice to run 2 pipelines on the same project?
pipeline_1:
builds the jars and runs the test jobs;
should run on each merge request.
pipeline_2:
builds the jars and runs the e2e test jobs;
should run every day.
Can I create 2 pipelines on the same project,
where one is scheduled and the second runs on each merge request, part of the build jobs are common to both pipelines, but the test jobs are different?
A single .gitlab-ci.yml describes all of a project's pipelines; jobs are grouped into stages, and you control which jobs run in a given pipeline, so this should mostly be a matter of scoping the right jobs to each pipeline.
For pipeline_2, you could set it up as a pipeline schedule and make it dependent on the success of pipeline_1. That's what I would do.
Reference: https://docs.gitlab.com/ee/ci/parent_child_pipelines.html
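As a sketch of a complementary approach (using rules rather than parent/child pipelines), a single .gitlab-ci.yml can serve both pipelines by keying each job's rules on $CI_PIPELINE_SOURCE, so the shared build jobs run in both and only the test jobs differ; job names and scripts here are hypothetical:

```yaml
stages:
  - build
  - test

build-jars:
  stage: build
  script:
    - mvn package             # hypothetical build command
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_PIPELINE_SOURCE == "schedule"'

unit-tests:
  stage: test
  script:
    - mvn test                # hypothetical test command; merge request pipelines only
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

e2e-tests:
  stage: test
  script:
    - ./run-e2e-tests.sh      # hypothetical e2e runner; scheduled pipelines only
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```

The daily run itself is configured as a pipeline schedule in the project's CI/CD settings.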

I want to run a specific drone pipeline in series

I have a drone.yml running 3 pipelines on my GitHub repo: one for pull requests that runs as soon as someone submits a pull request, one for releases that builds Docker containers and outputs a docker-compose file, and now I'm adding a pipeline that runs integration tests after a merge into master.
One of the steps updates a test server, which makes the task challenging. Is there a way to force this specific drone pipeline to only run if no other instance of this pipeline is running?
You can use depends_on to force the order of pipeline execution.
Pipeline: Graph Execution
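As a rough sketch of what depends_on looks like in a multi-pipeline .drone.yml (pipeline names, images, and scripts are hypothetical):

```yaml
kind: pipeline
type: docker
name: build

steps:
  - name: compile
    image: golang:1.20        # hypothetical build image
    commands:
      - go build ./...

---
kind: pipeline
type: docker
name: integration-tests

depends_on:
  - build                     # scheduled only after the build pipeline finishes

steps:
  - name: e2e
    image: alpine
    commands:
      - ./run-integration-tests.sh   # hypothetical integration test script
```

Note that depends_on orders pipelines within the same build; it does not by itself prevent two separate builds of the same pipeline from overlapping.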

Jenkins - How to run two Jobs parallelly (1 FT Jobs and 1 Selenium Jobs) on same slave node

I want to run two jobs in parallel on the same slave.
Job 1 is a functional testing job that doesn't require a browser, and Job 2 is a Selenium job which requires a browser for testing.
As for running the jobs on the same slave, you can use the option Restrict where this project can be run, assuming you have the Jenkins slave configured in your setup.
For running the jobs in parallel (are you trying to do this via a Jenkinsfile or via freestyle jobs?): for a Jenkinsfile, you can use the parallel stages feature as described here. For freestyle jobs, I would suggest adding one more job (for example, a setup job) and using it to trigger your two jobs at the same time. Here are a few screenshots showing one of my pipelines triggering jobs in parallel.

Use of Enable blocking in PDI - Pig Script Executor

I am exploring the Big Data plugin in Pentaho 5.2. I was trying to run the Pig Script Executor, but I am unable to understand the usage of
Enable Blocking. The PDI documentation says that:
If checked, the Pig Script Executor job entry will prevent downstream
entries from executing until the script has finished processing.
I am aware that running a Pig script converts the execution into MapReduce jobs. I am running the job as Start job -> Pig Script. If I disable Enable Blocking, I am unable to execute the script and get permission denied errors.
What does downstream mean here? I do not pass any hops out of the Pig Script entry. I am unable to understand the Enable Blocking option. Any hints would be helpful and appreciated.
Enable blocking enabled: the task is deployed to the Hadoop cluster; PDI follows up on progress and only proceeds with the rest of the job tasks AFTER the execution of the Hadoop job finishes.
Enable blocking disabled: PDI deploys the task to the Hadoop cluster and forgets about it. The rest of the job tasks proceed immediately after the cluster accepts the task; PDI doesn't wait for it to complete.