Can every branch of a GStreamer pipeline have a different state?

Can every branch of a GStreamer pipeline have a different state?
For example, can the main branch be in one state while a different branch is in another state?
I"ve tried to check implementation while using several pipelines but this approach did not resolves the issue. the issue cannot be solved by using multiple pipelines since it is required that the branches will change their state based on the resolution of the active pipeline.
Thank you very much.

Related

How to use one container in a pipeline?

The situation is that we are moving from Jenkins to GitLab CI. Every time a stage runs in the pipeline, a new container is created. I would like to know whether it is possible to reuse the container from the previous stage, that is, to use a single container. The GitLab executor is Docker.
I want to preserve the state of one container.
No, this is not possible in a practical way with the docker executor. Each job is executed in its own container. There is no setting to change this behavior.
Keep in mind that jobs (even across stages) can run concurrently and that jobs can land on runners on completely different underlying machines. Therefore, this is not really practical.

GitLab pipelines equivalent for GitHub actions

I have a pipeline in GitLab that consists of multiple stages.
Each stage has a few jobs and produces artifacts that are passed to the next stage if all the jobs in the stage pass.
Something similar to this screenshot:
Is there any way to achieve something similar in GitHub actions?
Generally speaking, you can get very close to what you have above in GitHub Actions. You'd trigger a workflow based on push and pull_request events so that it runs when someone pushes to your repository, and then you'd define each of your jobs. You would then use the needs syntax to define dependencies instead of stages (which is similar to the 14.2 needs syntax from GitLab), so for example your auto-deploy job would have needs: [test1, test2].
The one thing you will not be able to replicate is the manual wait on pushing to production. GitHub Actions does not have the ability to pause at a job step and wait for a manual action. You can typically work around this by running workflows based on the release event, or by using a manual kick-off of the whole pipeline with a given variable set.
When looking at how to handle artifacts, check out the answer in this other stack overflow question: Github actions share workspace/artifacts between jobs?

How to break down terraform state file

I am looking for guidance/advice on how to best break down a terraform state file into smaller state files.
We currently have one state file for each environment, and it has become unmanageable, so we are now looking to have a state file per Terraform module. This means we need to separate out the current state file.
Would it be best to point it at a new S3 bucket and then run a plan and apply for the broken-down modules to generate a fresh state file for each module, or is there an easier or better way to achieve this?
This all depends on how your environment has been provisioned and how critical downtime is.
Below are the two general scenarios I can think of from your question.
First scenario (if you can take downtime)
Destroy everything you currently have and start from scratch, defining a separate backend for each module and provisioning the infrastructure from that point on. You then get backend segregation, and infrastructure management becomes easier.
Second scenario (if you can't take downtime)
Let's say you are running mission-critical workloads that absolutely cannot take any downtime.
In this case, you will have to come up with a proper plan for migrating the huge monolithic backend to smaller backends.
Terraform has a command, terraform state mv, which can help you migrate resources from one Terraform state to another.
When you work through this scenario, start with the lower-level environments and work up from there.
Note down any caveats you encounter during these migrations in the lower-level environments; the same caveats will apply in the higher-level environments as well.
Some useful links
https://www.terraform.io/docs/cli/commands/state/mv.html
https://www.terraform.io/docs/cli/commands/init.html#backend-initialization
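As a very rough illustration of the second scenario only (the resource addresses and target paths below are made up, and the exact flags can differ between Terraform versions), the per-resource moves could be scripted along these lines:

```python
import subprocess

# Hypothetical mapping of resource addresses in the monolithic state
# to the directories holding the new, smaller module configurations.
moves = {
    "module.networking.aws_vpc.main": "../networking",
    "module.compute.aws_instance.app": "../compute",
}

for address, target_dir in moves.items():
    # -state-out writes the moved resource into a local state file for the
    # target module, which can then be initialised against its own backend.
    subprocess.run(
        [
            "terraform", "state", "mv",
            f"-state-out={target_dir}/terraform.tfstate",
            address, address,
        ],
        check=True,
    )
```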
Although the only other answer (as of now) lists only two options, there is another: you can simply create separate Terraform repos (or folders, however you are handling your infrastructure) and then run terraform import to bring the existing infrastructure into those (hopefully) repos.
Once all of the imports have proven to be successful, you can remove the original repo/source/etc. of the monolithic terraform state.
The caveat is that the code for each of the new state sources must match the existing code and state, otherwise this will fail.
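If there are many resources, the imports can be scripted as well; this is only a sketch, and the resource addresses and IDs below are hypothetical (run it from inside the new repo after terraform init):

```python
import subprocess

# Hypothetical (Terraform address, real-world ID) pairs to pull into the
# new repo's state.
imports = [
    ("aws_s3_bucket.logs", "my-logs-bucket"),
    ("aws_iam_role.deploy", "deploy-role"),
]

for address, real_id in imports:
    subprocess.run(["terraform", "import", address, real_id], check=True)
```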

AWS Step Functions parallel state to orchestrate EMR jobs

We are orchestrating a data pipeline with AWS Step Functions and we need to run EMR jobs in parallel.
I have tried using a Map state and it works as expected. The only problem with Map is that if one step fails, it cancels all the other steps as well. To overcome this, I am wondering whether we can create an array of steps and pass it dynamically to Branches in a Parallel state, but I have not been able to do so, as it does not accept strings.
Is there a workaround for this, or can we only hard-code branches in a Parallel state? Can States.Array() be helpful in this situation in some way?
Wrap the inner state machine in a one-branch parallel state and add error/retry policies to it. Basically, you want to catch all errors and ensure that the iteration always succeeds.
Just for anyone looking for a solution to the stated problem: as suggested by Pooya, I used a Catch block inside the Task within the Map, rather than keeping it at the Map level. The state machine looks like this:
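(The screenshot of the state machine is not reproduced here. Purely as a hypothetical sketch of its shape, with placeholder state names, input paths, and the EMR addStep service integration assumed, the definition might be built like this:)

```python
import json

# Hypothetical Map state whose iterator catches its own errors, so a single
# failed EMR step does not cancel the other parallel iterations.
definition = {
    "StartAt": "RunEmrSteps",
    "States": {
        "RunEmrSteps": {
            "Type": "Map",
            "ItemsPath": "$.emr_steps",
            "Iterator": {
                "StartAt": "AddEmrStep",
                "States": {
                    "AddEmrStep": {
                        "Type": "Task",
                        "Resource": "arn:aws:states:::elasticmapreduce:addStep.sync",
                        "Parameters": {
                            "ClusterId.$": "$.cluster_id",
                            "Step.$": "$.step",
                        },
                        "Catch": [
                            {"ErrorEquals": ["States.ALL"], "Next": "RecordFailure"}
                        ],
                        "End": True,
                    },
                    "RecordFailure": {
                        "Type": "Pass",
                        "Result": {"status": "FAILED"},
                        "End": True,
                    },
                },
            },
            "End": True,
        },
    },
}

print(json.dumps(definition, indent=2))
```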

Can Spinnaker prevent out-of-order deployments?

Currently
We use a CI platform to build, test, and release new code when a new PR is merged into master. The "release" step is quite simple/stupid, and essentially runs kubectl patch with the tag of the newly-pushed docker image.
The Problem
When two PRs merge at about the same time (ex: A, then B -- B includes A's commits, but not vice versa), it may happen that B finishes its build/test first and begins its release step first. When this happens, A releases second, even though it has older code. The result is a steady state in which B's code has been effectively rolled back by A's deployment.
We want to keep our CI/CD as continuous as possible, ideally without:
serializing our CI pipeline (so that only one workflow runs at a time)
delaying/batching our deployments
Does Spinnaker have functionality or best-practice that solves for this?
Best practices for your issue are widely described under message ordering for asynchronous systems. The simplest solution would be to implement a FIFO principle for your CI/CD pipeline.
That will save you from implementing checks between the CI and CD parts.
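As a bare-bones sketch of that FIFO idea (not Spinnaker-specific; the deployment name, image tags, and kubectl invocation are all placeholders, and in practice the queue would be fed in merge order by CI), a single consumer applies releases strictly in the order they were enqueued, so a later-finishing build of older code cannot overtake a newer one:

```python
import queue
import subprocess
import threading

# Releases are enqueued in merge order (e.g. by a hook at merge time);
# a single worker applies them one at a time, strictly first-in first-out.
deploy_queue: "queue.Queue[str]" = queue.Queue()

def deploy_worker() -> None:
    while True:
        image_tag = deploy_queue.get()  # blocks until the next release is ready
        subprocess.run(
            ["kubectl", "set", "image", "deployment/app", f"app={image_tag}"],
            check=True,
        )
        deploy_queue.task_done()

threading.Thread(target=deploy_worker, daemon=True).start()

# CI would enqueue image tags in merge order, for example:
deploy_queue.put("registry.example.com/app:commit-A")
deploy_queue.put("registry.example.com/app:commit-B")
deploy_queue.join()
```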