GitLab pipelines equivalent for GitHub Actions - gitlab-ci

I have a pipeline in GitLab that consists of multiple stages.
Each stage has a few jobs and produces artifacts that are passed to the next stage, provided all the jobs in the stage pass.
Something similar to this screenshot:
Is there any way to achieve something similar in GitHub Actions?

Generally speaking, you can get very close to what you have above in GitHub Actions. You'd trigger a workflow based on push and pull_request events so that it runs when someone pushes to your repository, then you'd define each of your jobs. You would then use the needs syntax to define dependencies instead of stages (similar to the needs syntax from GitLab 14.2), so for example your auto-deploy job would have needs: [test1, test2].
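For illustration, a minimal workflow along those lines; the job names and make targets are made up for the example:

name: CI
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # stand-in for whatever your build stage does
      - run: make build
  test1:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v4
      - run: make test1
  test2:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v4
      - run: make test2
  auto-deploy:
    runs-on: ubuntu-latest
    # only runs when both test jobs succeed, acting like a stage boundary
    needs: [test1, test2]
    steps:
      - run: echo "deploy here"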
The one thing you will not be able to replicate is the manual wait on pushing to production. GitHub Actions does not have the ability to pause at a job step and wait for a manual action. You can typically work around this by running workflows based on the release event, or by using a manual kick-off of the whole pipeline with a given variable set.
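As a sketch of that manual kick-off approach (the input name deploy_production is made up for the example), a workflow_dispatch trigger with an input can serve as the manual gate:

on:
  workflow_dispatch:
    inputs:
      deploy_production:
        description: Set to "yes" to run the production deploy
        required: false
        default: "no"

jobs:
  deploy-production:
    # only runs when whoever triggered the workflow opted in
    if: github.event.inputs.deploy_production == 'yes'
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploying to production"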
When looking at how to handle artifacts, check out the answer to this other Stack Overflow question: Github actions share workspace/artifacts between jobs?
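In short, artifacts move between jobs explicitly via the upload/download artifact actions rather than implicitly as in GitLab stages; a minimal sketch (file and artifact names are placeholders):

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: echo "build output" > result.txt
      - uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: result.txt
  test:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: build-output
      # the downloaded file is now in the job's workspace
      - run: cat result.txt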

Related

How to test my pipeline before changing it?

When changing the pipelines for my company, I often see a pipeline break under some specific condition that we did not anticipate. We use YAML files to describe the pipelines (Azure DevOps).
We have multiple scenarios, such as:
Pipelines are run by automatic triggers, by other pipelines and manually
Pipelines share the same templates
There are IF conditions for some jobs/steps based on parameters (user input)
In the end, I keep thinking of testing all scenarios before merging changes; we could create scripts to do that. But it's infeasible to actually RUN all scenarios because it would take forever, so I wonder how to test without running. Is it possible? Do you have any ideas?
Thanks!
I already tried the Preview endpoints from the Azure REST API, which are good, but they only validate the input, such as variables and parameters. We also need to verify which steps run and which variables are set in those steps.
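For reference, a Preview call can be made from a validation step roughly like this (organization myorg, project MyProject, and pipeline ID 42 are placeholders); the response contains the final expanded YAML, and passing templateParameters should let you see which templated steps survive for a given input:

steps:
  - script: |
      # Ask Azure DevOps to compile pipeline 42 without running it.
      curl -s \
        -H "Authorization: Bearer $(System.AccessToken)" \
        -H "Content-Type: application/json" \
        -d '{ "previewRun": true, "templateParameters": { "myParam": "value1" } }' \
        "https://dev.azure.com/myorg/MyProject/_apis/pipelines/42/preview?api-version=7.1-preview.1"
    displayName: Preview pipeline 42 (returns the final expanded YAML)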
As far as I know (I am still new to our ADO solutions), we have to fully run/schedule the pipeline to see that it runs, and then wait a day or so for the scheduler to complete the automatic executions. At that point I have some pipelines failing for a couple of days that I need to fix.
I do get emails when a pipeline fails, configured like this in the JSON that holds the metadata to create a job:
"settings": {
"name": "pipelineName",
"email_notifications": {
"on_failure": [
"myEmail#email.com"
],
"no_alert_for_skipped_runs": true
},
There's an equivalent extension that can be added, linked below, but I have not done it this way and cannot verify whether it works.
Azure Pipelines: Notification on Job failure
https://marketplace.visualstudio.com/items?itemName=rvo.SendEmailTask
I am not sure what actions your pipeline performs, but if there are jobs being scheduled on external compute like Databricks, there should be an email alert system you can use to detect failures.
Other than that, if you have multiple environments (dev, QA, prod), you could test in a non-production environment.
Or, if you have a dedicated storage location that is used only for testing a pipeline, use that for the first few days, then reschedule the pipeline against the real location once it completes a few test runs.

How does Tekton handle parallel tasks that access the same workspace?

In Tekton it's possible to set up a pipeline with multiple tasks that can (potentially) run in parallel and that access the same workspace. However, the documentation is not completely clear on what happens in this situation. Does it "lock" the workspace and force one task to wait until the other is done using it, or can both the tasks access and modify it at the same time (potentially interfering with each others' execution)?
Both tasks can access and modify it at the same time. There is no locking. Be careful that you do not have a concurrency problem!
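If you need the tasks serialized, the Tekton way is to declare the ordering explicitly with runAfter rather than relying on any workspace locking; a minimal sketch, assuming Tasks named writer and reader that each declare a workspace:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: shared-workspace-pipeline
spec:
  workspaces:
    - name: shared
  tasks:
    - name: write-step
      taskRef:
        name: writer
      workspaces:
        - name: output
          workspace: shared
    - name: read-step
      # without runAfter, both tasks may run concurrently against the same volume
      runAfter:
        - write-step
      taskRef:
        name: reader
      workspaces:
        - name: input
          workspace: shared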

Can spinnaker prevent out-of-order deployments?

Currently
We use a CI platform to build, test, and release new code when a new PR is merged into master. The "release" step is quite simple/stupid, and essentially runs kubectl patch with the tag of the newly pushed Docker image.
The Problem
When two PRs merge at about the same time (e.g. A, then B -- B includes A's commits, but not vice-versa), it may happen that B finishes its build/test first and begins its release step first. When this happens, A releases second, even though it has older code. The result is a steady state in which B's code has been effectively rolled back by A's deployment.
We want to keep our CI/CD as continuous as possible, ideally without:
serializing our CI pipeline (so that only one workflow runs at a time)
delaying/batching our deployments
Does Spinnaker have functionality or best-practice that solves for this?
Best practices for your issue are widely described under message ordering for asynchronous systems. The simplest solution would be to implement the FIFO principle for your CI/CD pipeline.
It will save you from implementing ordering checks between the CI and CD parts.
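If full FIFO ordering is too heavy, a lighter guard at the release step is to refuse to deploy a commit that is an ancestor of what is already running. A GitLab-CI-flavoured sketch, with made-up names (deployment myapp, annotation deployed-sha) and assuming a non-shallow clone:

deploy:
  script:
    # the commit currently running, recorded by the previous deploy
    - DEPLOYED_SHA=$(kubectl get deployment myapp -o jsonpath='{.metadata.annotations.deployed-sha}')
    - git fetch origin
    # if the candidate commit is an ancestor of the deployed one,
    # deploying it would roll the cluster back, so skip instead
    - |
      if [ -n "$DEPLOYED_SHA" ] && git merge-base --is-ancestor "$CI_COMMIT_SHA" "$DEPLOYED_SHA"; then
        echo "Skipping deploy of $CI_COMMIT_SHA; $DEPLOYED_SHA is newer"
        exit 0
      fi
    - kubectl set image deployment/myapp myapp="registry.example.com/myapp:$CI_COMMIT_SHA"
    - kubectl annotate deployment myapp deployed-sha="$CI_COMMIT_SHA" --overwrite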

Trigger jobs in gitlab-ci on merge request

Is it possible to run a job from gitlab-ci only on a merge request?
Right now we have a big monolithic project with heavy tests, but we only want to run the tests before merging into the master branch.
Well, it's not built in currently, however it's not impossible to do yourself. GitLab allows you to trigger a job, and it also supports webhooks on merge requests. However, webhooks don't support variables in URIs and triggers can't read the request body, so you'd have to create a script that acts as a middle-man here:
The webhook on merge request calls your script
The script parses the request and calls a trigger in GitLab with the correct REF
The trigger runs the job that is marked with:
only:
  - triggers
It's a bit hacky but it's working and easy to implement.
This is now possible; it was introduced in GitLab 11.6.
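With that release, a job can be restricted to merge request pipelines directly; a minimal example (the test script is a placeholder):

test:
  script:
    - ./run-tests.sh
  only:
    - merge_requests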
For the moment, no.
You should subscribe to the issue to see if and when it becomes available (and if your company is an enterprise customer, maybe you can contact them and ask them to prioritize the implementation).

Does Spinnaker have fan-in capability?

Does Spinnaker provide "fan in" functionality, like GoCD? It looks like I can set a single pipeline to trigger multiple downstream pipelines (fan-out), but I can't make a downstream pipeline dependent on the successful completion of two upstream pipelines. If I set two triggers on the downstream pipeline, it starts immediately following the completion of the first trigger. I'd like to AND these triggers, i.e. when TRIGGER1 and TRIGGER2 complete, start the pipeline.
The image below describes what I'm looking for, visualized in GoCD. The DeployTest pipeline requires successful completion of ManualGate1 and 2 before it starts.
Yes - you can do "fan in" operations pretty easily with Spinnaker pipelines.
Select the "DeployTest" stage, then set its "Depends On" stages to include "Manual Gate 1" and "Manual Gate 2":