Does Spinnaker have fan-in capability? - spinnaker

Does Spinnaker provide "fan in" functionality, like GoCD? It looks like I can set a single pipeline to trigger multiple downstream pipelines (fan-out), but I can't make a downstream pipeline dependent on the successful completion of two upstream pipelines. If I set two triggers on the downstream pipeline, it starts immediately following the completion of the first trigger. I'd like to AND these triggers, i.e., start the pipeline only when both TRIGGER1 and TRIGGER2 have completed.
The image below describes what I'm looking for, visualized in GoCD. The DeployTest pipeline requires successful completion of ManualGate1 and 2 before it starts.

Yes - you can do "fan in" operations pretty easily with Spinnaker pipelines.
Select the "DeployTest" stage, then set its "Depends On" stages to include "Manual Gate 1" and "Manual Gate 2":

Related

How to test my pipeline before changing it?

When changing the pipelines for my company, I often see a pipeline break under some specific condition that we did not anticipate. We use YAML files to describe the pipelines (Azure DevOps).
We have multiple scenarios, such as:
Pipelines are run by automatic triggers, by other pipelines and manually
Pipelines share the same templates
There are IF conditions for some jobs/steps based on parameters (user input)
In the end, I keep thinking of testing all scenarios before merging changes; we could create scripts to do that. But it's infeasible to actually RUN all scenarios because it would take forever, so I wonder how to test them without running them. Is that possible? Do you have any ideas?
Thanks!
I already tried the Preview endpoint from the Azure DevOps REST API, which is good, but it only validates the input, such as variables and parameters. We also need to verify which steps actually run and which variables are set in them.
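For reference, the same Preview endpoint can also expand templates and conditions into the final YAML without executing anything, which helps with checking which steps would run. A sketch - the organization/project/pipeline values and the templateParameters shown are placeholders for your own setup:

    POST https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/preview?api-version=7.1-preview.1

    {
      "previewRun": true,
      "templateParameters": { "environment": "qa" }
    }

The finalYaml field in the response contains the fully expanded pipeline for that parameter set, so you can diff it across scenarios instead of running them.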
As far as I know (I am still new to our ADO solutions), we have to fully run/schedule the pipeline to see that it works, and then wait a day or so for the scheduler to complete the automatic executions. At that point I have some pipelines failing for a couple of days that I need to fix.
I do get emails when a pipeline fails, via something like this in the JSON that holds the metadata to create a job:
"settings": {
"name": "pipelineName",
"email_notifications": {
"on_failure": [
"myEmail#email.com"
],
"no_alert_for_skipped_runs": true
},
There's an equivalent extension that can be added, at the link below, but I have not done it this way and cannot verify that it works.
Azure Pipelines: Notification on Job failure
https://marketplace.visualstudio.com/items?itemName=rvo.SendEmailTask
I am not sure what actions your pipeline performs, but if jobs are being scheduled on external compute like Databricks, there should be an email alert system you can use to detect failures.
Other than that, if you have multiple environments (dev, QA, prod), you could test in a non-production environment.
Or, if you have a dedicated storage location that is only for testing a pipeline, use that for the first few days, then reschedule the pipeline against the real location after it completes a few test runs.

Azure Data Factory: Execute Pipeline activity cannot reference calling pipeline, cyclical behaviour required

I have a number of pipelines that need to cycle depending on the availability of data: if the data is not there, wait and try again. The pipe behaviours are largely controlled by a database that captures logs, which are used to make decisions about processing.
I read the Microsoft documentation about the Execute Pipeline activity, which states that
The Execute Pipeline activity allows a Data Factory or Synapse pipeline to invoke another pipeline.
It does not explicitly state that a pipeline invoking itself is impossible, though. I tried to reference Pipe_A from Pipe_A, but the pipe is not visible in the drop-down. I need a work-around for this restriction.
Constraints:
The pipe must not call all pipes again, just the pipe in question. The preceding pipe is running all pipes in parallel.
I don't know how many iterations are needed and cannot specify this quantity.
As far as possible best effort has been implemented and this pattern should continue.
Ideas:
Create an intermediary pipe that can be referenced. This is no good; I would need to do this for every pipe that requires this behaviour, because dynamic content is not allowed for pipe selection. This approach would also pollute the Data Factory workspace.
Direct control flow backwards, after waiting, inside the same pipeline if the condition is met. This won't work either; the If activity does not allow expression of flow within the same context as the If activity itself.
I thought about externalising this behaviour to a Python application which could be attached to an Azure Function if needed. The application would handle the scheduling and waiting. The application could call any pipe it needed and could itself be invoked by the pipe in question. This seems drastic!
Finally, I discovered the Until activity, which has do-while behaviour. I could wrap these pipes in an Until: the pipe executes, and either finishes and sets the database state to 'finished', or cannot finish, sets the state to 'incomplete', and waits. The expression then either kicks off another execution or it does not. Additional conditional logic can be included as required in the procedure that sets the value of the variable used by the expression in the Until. I would need a variable per pipe.
I think idea 4 makes sense, but I thought I would post this anyway in case people can spot limitations in this approach and/or can recommend an alternative.
Yes, I absolutely agree with All About BI; it seems the ADF activity best suited to your scenario is Until:
The Until activity in ADF functions as a wrapper and parent component for iterations, with inner child activities comprising the block of items to iterate over. The result(s) from those inner child activities must then be used in the parent Until expression to determine if another iteration is necessary.
The assessment condition for the Until activity might comprise outputs from other activities, pipeline parameters, or variables.
When used in conjunction with the Wait activity, the Until activity allows you to create loop conditions to periodically check the status of specific operations. Here are some examples:
Check to see if the database table has been updated with new rows.
Check to see if the SQL job is complete.
Check to see whether any new files have been added to a specific folder.
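To make idea 4 concrete, here is a minimal sketch of what the Until wrapper could look like in pipeline JSON. The control query, dataset, and variable names are hypothetical stand-ins for your logging database:

    {
      "name": "Until_PipeA_Finished",
      "type": "Until",
      "description": "Sketch only: query, dataset and variable names are hypothetical",
      "typeProperties": {
        "expression": {
          "value": "@equals(variables('PipeAStatus'), 'finished')",
          "type": "Expression"
        },
        "timeout": "0.12:00:00",
        "activities": [
          {
            "name": "Check_PipeA_State",
            "type": "Lookup",
            "typeProperties": {
              "source": {
                "type": "AzureSqlSource",
                "sqlReaderQuery": "SELECT State FROM dbo.PipeLog WHERE PipeName = 'Pipe_A'"
              },
              "dataset": { "referenceName": "ControlDb", "type": "DatasetReference" }
            }
          },
          {
            "name": "Set_PipeA_Status",
            "type": "SetVariable",
            "dependsOn": [ { "activity": "Check_PipeA_State", "dependencyConditions": [ "Succeeded" ] } ],
            "typeProperties": {
              "variableName": "PipeAStatus",
              "value": { "value": "@activity('Check_PipeA_State').output.firstRow.State", "type": "Expression" }
            }
          },
          {
            "name": "Wait_Before_Recheck",
            "type": "Wait",
            "dependsOn": [ { "activity": "Set_PipeA_Status", "dependencyConditions": [ "Succeeded" ] } ],
            "typeProperties": { "waitTimeInSeconds": 300 }
          }
        ]
      }
    }

As you said, this needs one status variable per pipe; each wrapped pipe gets its own Until with its own expression.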

GitLab pipelines equivalent for GitHub actions

I have a pipeline in GitLab that consists of multiple stages.
Each stage has a few jobs and produces artifacts, which are passed to the next stage if all the jobs in the stage pass.
Something similar to this screenshot:
Is there any way to achieve something similar in GitHub Actions?
Generally speaking, you can get very close to what you have above in GitHub Actions. You'd trigger a workflow based on push and pull_request events so that it triggers when someone pushes to your repository, then you'd define each of your jobs. You would then use the needs syntax to define dependencies instead of stages (which is similar to the 14.2 needs syntax from GitLab), so for example your auto-deploy job would have needs: [test1, test2].
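A minimal sketch of that shape - the workflow name, job names, and script paths are illustrative, mirroring the GitLab stages above:

    name: pipeline
    on: [push, pull_request]
    jobs:
      test1:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: ./scripts/test1.sh   # placeholder test command
      test2:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: ./scripts/test2.sh   # placeholder test command
      auto-deploy:
        needs: [test1, test2]   # fan-in: waits for both test jobs to succeed
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: ./scripts/deploy.sh staging   # placeholder deploy command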
The one thing you will not be able to replicate is the manual wait on pushing to production. GitHub Actions does not have the ability to pause at a job step and wait for a manual action. You can typically work around this by running workflows based on the release event, or by using a manual kick-off of the whole pipeline with a given variable set.
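For example, the production deploy could live in its own workflow that only runs when kicked off by hand - a sketch with a hypothetical version input and placeholder deploy script:

    name: deploy-production
    on:
      workflow_dispatch:
        inputs:
          version:
            description: "Tag or SHA to deploy"
            required: true
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: ./scripts/deploy.sh production "${{ github.event.inputs.version }}"   # placeholder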
When looking at how to handle artifacts, check out the answer to this other Stack Overflow question: Github actions share workspace/artifacts between jobs?

Gitlab CI choose between 2 manual jobs

I want to know if there is any way to have two manual jobs in the same stage such that, if one is triggered, the other is canceled.
Basically, what I want is two manual jobs: one to continue my pipeline, so that if it is triggered, the pipeline continues and the second manual job is canceled.
And if the second manual job is triggered, the first manual job is canceled and the pipeline stops.
I tried many things but nothing seems to work, and I didn't find a topic about this kind of problem.
I believe you can use a Directed Acyclic Graph to make sure job 2 and job 3 do not start until manual job 1 finishes successfully, but as far as I know there is no way to easily cancel one job from another.
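A sketch of the DAG part, with hypothetical job names; note that manual jobs default to allow_failure: true, so as far as I can tell the gate needs allow_failure: false for downstream jobs to actually wait on it:

    stages: [gate, deploy]

    manual_job_1:
      stage: gate
      when: manual
      allow_failure: false   # blocking: downstream jobs wait instead of treating the gate as optional
      script:
        - echo "continue the pipeline"

    job_2:
      stage: deploy
      needs: [manual_job_1]   # DAG edge: starts only after the gate succeeds
      script:
        - echo "deploy"

    job_3:
      stage: deploy
      needs: [manual_job_1]
      script:
        - echo "more deploy"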
You could try using the Jobs API, but I'm not sure how to get the ID of manual job 1 from manual job 2 and vice versa.
Cancelling the entire pipeline would be easy. All you need for that is the predefined variable CI_PIPELINE_ID (or CI_PIPELINE_IID - I'm not sure which would be the right one) and the Pipeline API.
Edit: I suppose, knowing the pipeline ID, you could get all the jobs for the pipeline with the Jobs API, then parse the JSON into a map of job names to IDs, and finally use this map to cancel any jobs you want canceled.
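Putting that together, a rough, unverified sketch of what the second manual job could do - this assumes a project access token with api scope stored in a CI/CD variable named API_TOKEN (the built-in CI_JOB_TOKEN is not authorized for these endpoints) and jq available in the runner image:

    manual_job_2:
      stage: gate
      when: manual
      script:
        - |
          # Look up the sibling manual job's ID in this pipeline by name.
          # CI_PIPELINE_ID (not the IID) is what the API expects here.
          SIBLING_ID=$(curl --silent --header "PRIVATE-TOKEN: $API_TOKEN" \
            "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/jobs" \
            | jq -r '.[] | select(.name == "manual_job_1") | .id')
          # Cancel just that job...
          curl --request POST --header "PRIVATE-TOKEN: $API_TOKEN" \
            "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/$SIBLING_ID/cancel"
          # ...or stop the whole pipeline instead:
          # curl --request POST --header "PRIVATE-TOKEN: $API_TOKEN" \
          #   "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/cancel"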

Can spinnaker prevent out-of-order deployments?

Currently
We use a CI platform to build, test, and release new code when a new PR is merged into master. The "release" step is quite simple/stupid, and essentially runs kubectl patch with the tag of the newly-pushed Docker image.
The Problem
When two PRs merge at about the same time (e.g., A, then B -- B includes A's commits, but not vice versa), it may happen that B finishes its build/test first and begins its release step first. When this happens, A releases second, even though it has older code. The result is a steady state in which B's code has been effectively rolled back by A's deployment.
We want to keep our CI/CD as continuous as possible, ideally without:
serializing our CI pipeline (so that only one workflow runs at a time)
delaying/batching our deployments
Does Spinnaker have functionality or best-practice that solves for this?
Best practices for your issue are widely described under message ordering for asynchronous systems. The simplest solution would be to implement a FIFO principle for your CI/CD pipeline.
It will save you from implementing checks between the CI and CD parts.
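For contrast, the kind of check FIFO ordering lets you avoid would look roughly like this in the release step - a sketch that assumes image tags are the git SHA of the release commit, that full git history is available in the job, and that COMMIT_SHA is whatever variable your CI exposes (deployment and registry names are made up):

    release:
      script:
        - |
          # Tag of the image currently running.
          CURRENT=$(kubectl get deployment my-app \
            -o jsonpath='{.spec.template.spec.containers[0].image}' | cut -d: -f2)
          # Deploy only if the running commit is an ancestor of this build's commit.
          if git merge-base --is-ancestor "$CURRENT" "$COMMIT_SHA"; then
            kubectl patch deployment my-app --patch \
              "{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"my-app\",\"image\":\"registry.example.com/my-app:$COMMIT_SHA\"}]}}}}"
          else
            echo "Skipping: $COMMIT_SHA is older than the deployed $CURRENT"
          fi

Note there is still a small race window between the check and the patch, which is why serializing the deploy step (FIFO) is the more robust option.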