What are the best practices for a Tekton implementation with multiple repositories and multiple deployments?

We have multiple repositories that have multiple deployments in K8S.
Today, we have Tekton with the following setup:
We have 3 different projects that should be built and deployed the same way (they are just different repos with different names).
We defined 3 Tasks: Build Image, Deploy to S3, and Deploy to K8S cluster.
We defined 1 Pipeline that accepts parameters from the PipelineRun.
Our problem is that we want to receive webhooks from GitHub and run the appropriate Pipeline automatically, without having to start it with params by hand.
In addition, we want the PipelineRun to have default parameters, so users can invoke deployments automatically.
So, does our configuration and setup seem OK? Should we do something differently?

This setup sounds fine. A GitHub webhook can initiate PipelineRuns of your Pipeline through a Trigger, and your Pipeline can also be started directly by users in the cluster or via the Tekton Dashboard.
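For illustration, here is a minimal sketch of a Tekton Triggers setup for that flow. The Pipeline name build-and-deploy, the service account tekton-triggers-sa, and all parameter names are assumptions for the example, not taken from the question:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    # Values extracted from the GitHub push payload
    - name: git-repo-url
      value: $(body.repository.clone_url)
    - name: git-revision
      value: $(body.head_commit.id)
    - name: repo-name
      value: $(body.repository.name)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: build-deploy-template
spec:
  params:
    - name: git-repo-url
    - name: git-revision
    - name: repo-name
    - name: environment
      default: staging        # default value, so callers can omit it
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: $(tt.params.repo-name)-run-
      spec:
        pipelineRef:
          name: build-and-deploy   # the existing parameterized Pipeline (illustrative name)
        params:
          - name: git-repo-url
            value: $(tt.params.git-repo-url)
          - name: git-revision
            value: $(tt.params.git-revision)
          - name: environment
            value: $(tt.params.environment)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener
spec:
  serviceAccountName: tekton-triggers-sa
  triggers:
    - name: github-push
      interceptors:
        - ref:
            name: github
          params:
            # A secretRef param for webhook-secret verification would normally be added here
            - name: eventTypes
              value: ["push"]
      bindings:
        - ref: github-push-binding
      template:
        ref: build-deploy-template
```

Pointing the GitHub webhook at the Service Tekton creates for the EventListener (usually named el-github-listener, typically exposed through an Ingress) then produces one PipelineRun per push. Because the TriggerTemplate parameters can carry defaults, users can also create PipelineRuns by hand without supplying every value.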

Related

Using one GitLab CI runner for multiple groups

I have a VM for executing a CI runner, and two groups.
The runner is installed for one group. Is there any way to share it with the other group?
Otherwise, can I install more runners on one VM server?
The answers to both your questions are yes, but for the first, it depends on whether you use gitlab.com or a self-hosted instance, and what you have access to.
First, for the second part: yes, you can register a second (or third, fourth, ...) runner on the same physical host. Just go through the registration process again. Also check the concurrent value in your config.toml file, since that controls how many jobs can run concurrently on that host. If it's lower than the number of runners you have, they can't all be used at the same time, but sometimes that's on purpose. It's up to you to decide.
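As a rough sketch (the URL, token, and executor below are placeholders, not values from the question), registering an additional runner and raising the concurrency limit looks roughly like this:

```sh
# Register a second runner on the same host (run once per runner)
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "GROUP_OR_PROJECT_TOKEN" \
  --executor "shell" \
  --description "second-runner-on-this-vm"

# Then, in the runner's config.toml (e.g. /etc/gitlab-runner/config.toml),
# make sure the global limit allows both runners to pick up jobs at once:
#   concurrent = 2
```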
For the first part, you can install runners that are shared across the whole instance, but if you're using gitlab.com, only the GitLab team can do this, so you'd have to use their shared runners. On the group's CI/CD settings page, you can enable or disable the instance's shared runners for that group.
Otherwise, if you're self-hosting, you can go to the admin area by clicking the wrench icon in the main nav bar, then go to "Runners" under the Overview section on the left. On this page you can get the instance's registration token. Any runners registered using this token (as opposed to a project's or a group's token) will be available to all groups and projects on the GitLab instance. From here you can also edit existing runners so that they aren't "locked" to a single project.
More information can be found in the docs.
Currently, GitLab does not support assigning one runner to multiple groups as a group runner. You could assign the runner to the complete GitLab instance instead (as #adam-marshall already mentioned), but then it will be usable by all members of your GitLab server.
You can try gitlab-multi-group-runner which circumvents this problem by assigning a specific runner to all projects of given GitLab groups with the GitLab API. However, this tool needs administration access to the GitLab server.

Dynamic Routing within vue.js application

In a current project I have a problem understanding (and configuring) routing within my vue.js app.
Our Setup
We have a setup where, for each pull request in our repos, a new snapshot environment is created. This environment is one namespace within a Kubernetes cluster. All services are deployed in their current develop state, together with the new "snapshot" version of the service that triggered the CI/CD pipeline. To have a clear route for each snapshot environment, we use the namespace as part of the URL (https://HOST/NAMESPACE/APP/paths).
Our Problem
As you can see, the URL is highly dynamic. Currently we just build the container with the path baked in, and that is our setup today. Unfortunately, we now want to be able to deploy each and every container image to every HOST as well as every NAMESPACE; those parts are only known at runtime, not in the CI/CD pipeline.
Is there any way to handle such a scenario with vue.js? I have basically every freedom to edit the app as well as the container, but I can't change the way we want to host our app. Currently we build the app on the cluster and inject the NAMESPACE, which was the "easiest" way to do this. If there is any other way, I would love to not have the build and run steps coupled together.
Thanks in advance.
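For illustration only, a minimal sketch of reading the base path at runtime instead of at build time, assuming the https://HOST/NAMESPACE/APP/... layout described above and vue-router 4 (all names are placeholders):

```js
// router.js - illustrative sketch, assuming URLs like https://HOST/NAMESPACE/APP/some/view
import { createRouter, createWebHistory } from 'vue-router'

// e.g. pathname "/my-namespace/my-app/some/view" -> base "/my-namespace/my-app/"
const [, namespace, app] = window.location.pathname.split('/')
const base = `/${namespace}/${app}/`

export default createRouter({
  history: createWebHistory(base),
  routes: [
    // { path: '/', component: Home }, ...
  ],
})
```

The same idea would have to be applied to asset URLs (e.g. building with a relative public path) so that the image itself no longer depends on the namespace at build time.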

Backing up a Serverless Framework deployment

I'm familiar with Terraform and its terraform.tfstate file where it keeps track of which local resource identifiers map to which remote resources. I've noticed that there is a .serverless directory on my machine which seems to contain files such as CloudFormation templates and ZIP files containing Lambda code.
Suppose I create and deploy a project from my laptop, and Serverless spins up fooxyz.cloudfront.net which points to a Lambda function arn:aws:lambda:us-east-1:123456789012:function:handleRequest456. If I naively try to run Serverless again from another machine (or if I git clean my working directory), it'll spin up a new CloudFront endpoint since it doesn't know that fooxyz.cloudfront.net already represents the same application. I'm looking to back up the state it keeps internally, so that it modifies an existing resource rather than creates a new one. (The equivalent in Terraform would be to back up the terraform.tfstate file.)
If I wished to back up or restore a Serverless deployment state, which files would I back up? In the case of AWS, it seems like I should be backing up the CloudFormation templates; I don't want to back up the Lambda code since it's directly generated from the source. However, I'm likely going to use more than just AWS in the future, and so don't want to "special-case" the CloudFormation templates if at all possible.
How can I back up only the files I cannot regenerate?
I think what you are asking is: "If I or a colleague checks out the serverless code from git on a different machine, will we still be able to deploy and update the same Lambda functions and the same API Gateway endpoints?"
And the answer to that is yes! Serverless keeps track of all of that for you within its files. Unless you run serverless remove, no normal operation will create a new Lambda function or API endpoint.
My team and I are using this method: we commit all code to a git repo, and any of us can check it out and deploy a single function or the entire thing, and it updates the existing set of functions properly. If you set up an environment file, that's really all you need to worry about, and I recommend leaving it out of git entirely.
For AWS, the Serverless Framework keeps track of your deployment via CloudFormation (CF) parameters/identifiers, which are specific to an account/region. The CF stack templates are uploaded to an (auto-generated) S3 bucket, so they are already backed up for you.
So all you really need is the original deployment code in a git repo and access to your keys. Everything else is already backed up for you.
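As a rough illustration of why this works (the service name, runtime, and function below are placeholders, not from the question): the CloudFormation stack name is derived from the service and stage in serverless.yml, so deploying the same code from any machine updates the same stack instead of creating a new one.

```yaml
# serverless.yml (illustrative sketch)
service: my-service        # stack becomes "my-service-dev" in the target account/region
provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  stage: dev
functions:
  handleRequest:
    handler: handler.handleRequest
```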

Dockerized Gitlab Container Backup

I am using a GitLab docker image for integration testing of a service I'm helping to develop. Ideally, the image would be a preconfigured snapshot of GitLab with different users and repos available to run tests against. So the problem ends up being, what is a good way to automate the creation of 'snapshots' of GitLab (that can then be versioned etc.)?
My current solution to this problem is to use GitLab's built in backup utility via gitlab-rake gitlab:backup:create after getting GitLab to a state that I want. This then lets me use GitLab's gitlab-rake gitlab:backup:restore in a hook when the container is starting up to get the container back to the state that I expect (the backup having been ADDed in the Dockerfile for the image). This has the advantage of being relatively lightweight (backups are on the order of MBs) and the backups can be checked in to version control.
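For reference, a minimal sketch of that approach (the backup filename and hook script are placeholders, and the exact startup-hook mechanism depends on the base image used):

```dockerfile
FROM gitlab/gitlab-ce:latest

# Backup archive produced earlier with `gitlab-rake gitlab:backup:create`
# on a configured instance; small enough to check into version control.
ADD 1658000000_2022_07_16_15.1.2_gitlab_backup.tar /var/opt/gitlab/backups/

# Startup hook that runs `gitlab-rake gitlab:backup:restore` once GitLab is up,
# restoring the preconfigured users and repos for the integration tests.
ADD restore-snapshot.sh /assets/restore-snapshot.sh
```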
I have tried using docker export along with docker import to save the state of the container and then create an image based on that state. This has the advantage of being easy to automate since it is directly supported by Docker, but ends up being fairly expensive considering what the goal is (having users and repos available to test against). It also would require the images to be pushed to a registry of some kind in order to be easily distributed. Perhaps this is the best solution because it is well supported though.
I suppose my question is, what is the Docker way of approaching a problem like this?

Triggering iOS build/test job via Github pull request on CloudBees

I would like Jenkins to comment whether a merge passes or fails (much like Travis CI) on Github pull requests. I understand this is a feature on BuildHive. However, I cannot find an option on BuildHive for using customer provided slaves. My question is twofold:
Is there an option to limit builds to customer provided slaves on BuildHive?
Is there a way I could enable comments on pull requests using DEV@cloud (the actual job must be run on a customer-provided slave)? If so, could you point me in the right direction to get this set up?
DEV@cloud can validate pull requests as BuildHive does, with some additional configuration. See http://wiki.cloudbees.com/bin/view/DEV/Github+Pull+Request+Validation
Answering in the order of your questions:
BuildHive uses the Validated Merge plugin for Git from Jenkins Enterprise to let Jenkins validate pull requests and run the builds before pushing to the main repo. That said, currently you cannot use customer-provided executors with BuildHive.
DEV@cloud: Normally, all Jenkins Enterprise plugins are available in the paid tier of DEV@cloud. However, this one is not, because it sets up a git server within Jenkins, which is not easily achievable in a cloud setup. I have created a ticket with CloudBees support requesting that the plugin be made available, and the engineering team will investigate delivering the feature.
Meanwhile, if you like, you can use Jenkins Enterprise to get this feature (however, it is an on-premises solution).