I am looking for a way to trigger some GitLab CI jobs on demand from another service. Is this possible and, if so, how?
Details: imagine that I need to trigger a complex build from an external process.
One workaround that comes to my mind is to have a job-scheduler.git repository that only contains a .gitlab-ci.yml file that is rewritten each time I need to trigger a build. I put the code to be run there and that's it. Other ideas?
That seems to be the purpose of… triggers: https://gitlab.com/help/ci/triggers/README.md
You then have an API endpoint where you specify the project's ID, the trigger token and the git ref.
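For illustration, here is a minimal sketch of calling that endpoint with curl against the v4 API; the project ID, token and ref below are placeholders:

    # Trigger a pipeline for project 12345 on the main branch.
    # The trigger token is created under the project's CI/CD settings.
    curl --request POST \
      --form "token=<TRIGGER_TOKEN>" \
      --form "ref=main" \
      "https://gitlab.com/api/v4/projects/12345/trigger/pipeline"

You can also pass extra variables to the pipeline with additional --form "variables[KEY]=value" fields, so the external service can parameterize the build instead of rewriting a .gitlab-ci.yml file.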
I am using Keycloak as my user management tool, and love it.
Keycloak's data is stored for me in a Postgres database. Over time, more clients get registered, and other alterations to the realms may be made. My question is: how do I properly keep track of that, and automatically propagate changes between my different environments? For databases, I use Liquibase for this kind of purpose. I couldn't find anything similar for the Keycloak case.
So, I wanted to ask: How are you folks out there handling this? What am I missing?
It depends on how you're doing the management of those changes. There are generally two approaches:
Using the Keycloak admin console
Using the Keycloak CLI
If you're applying your changes via the admin console, then you can either rely on the database backup or set up a scheduled pipeline in your CI tool that exports the Keycloak realm to a file and archives it somewhere.
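As a sketch of what that scheduled export could look like (assuming a recent, Quarkus-based Keycloak distribution; the realm name, paths and bucket are placeholders):

    # Export a single realm (clients, roles, config) to a JSON file...
    /opt/keycloak/bin/kc.sh export --realm my-realm --file /backups/my-realm-$(date +%F).json
    # ...and archive it somewhere durable, e.g. an S3 bucket.
    aws s3 cp /backups/my-realm-$(date +%F).json s3://keycloak-config-backups/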
In case you're using the second approach, you can have a git repository containing all the Keycloak CLI scripts that you run on your server (e.g. to add a client, to update a realm config, etc.). In that case, you can have them reviewed, versioned, and then run as part of an automated pipeline. This also allows you to run a script on different environments. But of course it comes at a price: you have to write a script for every single task that you could otherwise do in the admin console with a couple of clicks.
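To make that concrete, a script in such a repository might look roughly like this; the realm, client and server URL are placeholders, and the admin password would come from your CI secrets:

    # Authenticate against the target environment.
    /opt/keycloak/bin/kcadm.sh config credentials \
      --server "$KEYCLOAK_URL" --realm master \
      --user admin --password "$KEYCLOAK_ADMIN_PASSWORD"

    # Add a new client to the realm.
    /opt/keycloak/bin/kcadm.sh create clients -r my-realm \
      -s clientId=my-app -s enabled=true \
      -s 'redirectUris=["https://my-app.example.com/*"]'

    # Update a realm setting.
    /opt/keycloak/bin/kcadm.sh update realms/my-realm -s sslRequired=external

Running the same script with a different $KEYCLOAK_URL per environment is what gives you the propagation between environments.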
We have multiple repositories that have multiple deployments in K8S.
Today, we have Tekton with the following setup:
We have 3 different projects that should be built and deployed the same way (they just have a different repo and a different name).
We defined 3 Tasks: Build Image, Deploy to S3, and Deploy to K8S cluster.
We defined 1 Pipeline that accepts parameters from the PipelineRun.
Our problem is that we want to receive webhooks from GitHub and run the appropriate Pipeline automatically, without having to start it with params by hand.
In addition, we want the PipelineRun to have default parameters, so users can invoke deployments automatically.
So, does our configuration and setup seem OK? Should we do something differently?
This sounds OK. The GitHub webhook initiates PipelineRuns of your Pipeline through a Trigger. But your Pipeline can also be initiated by users directly in the cluster, or by using the Tekton Dashboard.
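For the manual path, as a sketch, users could start the shared Pipeline with tkn and only pass the parameters that differ per project (the pipeline and parameter names below are hypothetical):

    # Start the shared pipeline for one of the projects, overriding only the per-repo params.
    tkn pipeline start build-and-deploy \
      --param repo-url=https://github.com/my-org/project-a \
      --param image-name=registry.example.com/project-a \
      --showlog

Parameters that you declare with a default value on the Pipeline (or on the TriggerTemplate behind the EventListener) don't have to be passed at all, which lets the GitHub webhook path and the manual path share the same definition.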
I have a Bitbucket repo that sends a webhook to trigger a Jenkins job:
http://<jenkins-host>:8080/buildByToken/buildWithParameters?job=webhook_me&token=t
I want to send the Bitbucket branch name along with the webhook,
so I searched the web for the right way to use environment variables on Bitbucket and found a site listing them.
So I edited the URL by adding "&branch=$BITBUCKET_BRANCH" at the end, but it doesn't work.
Any ideas what I should do in order to send the webhook with the branch name?
*******EDIT*******
I saw that there is something called the Bitbucket event payload,
which is a JSON document that contains all of the details about the webhook:
https://confluence.atlassian.com/bitbucket/event-payloads-740262817.html
But I can't figure out how to use it and pull its data from Jenkins.
I think this is the way to solve this, but I don't know how to use it.
I've found a way to do it, and it works for me.
You need to use the Bitbucket plugin: Bitbucket Plugin
Then, inside the job, you need to specify the branch that will trigger the job after a push, and check the Bitbucket trigger checkbox under the job's build triggers.
Then, on Bitbucket, create a webhook with the following URL:
http://<jenkins-host>:<port>/bitbucket-hook/
Then push something to that repository and that branch, and there you go!
If you try to push to a different branch, the job won't be triggered.
The $BITBUCKET_BRANCH is only available in the Jenkins job. You are just literally passing the text "$BITBUCKET_BRANCH" as the "branch" parameter. You can't pass in an environment variable like that.
$BITBUCKET_BRANCH may simply be available in the job, depending on the version of Jenkins and type of job you are using. In a pipeline job, this would be easy to access (if you have the right version of things). You don't need to pass it in unless you are trying to give it some other branch. In that case, you will need to see if you can get the branch on the bitbucket side to pass in to Jenkins.
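A quick way to check whether your setup actually provides $BITBUCKET_BRANCH is to print it from a build step and look at the console output, e.g.:

    # In an "Execute shell" build step (or an sh step in a pipeline job):
    echo "Branch reported by the environment: ${BITBUCKET_BRANCH:-<not set>}"
    # If this prints "<not set>", the variable isn't being injected, and you'll need
    # to get the branch from the Bitbucket side (e.g. the webhook payload) instead.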
Have you tried using the Jenkins-style variable ${BITBUCKET_BRANCH} instead of $BITBUCKET_BRANCH, which is more of a shell-style variable?
I'm familiar with Terraform and its terraform.tfstate file where it keeps track of which local resource identifiers map to which remote resources. I've noticed that there is a .serverless directory on my machine which seems to contain files such as CloudFormation templates and ZIP files containing Lambda code.
Suppose I create and deploy a project from my laptop, and Serverless spins up fooxyz.cloudfront.net which points to a Lambda function arn:aws:lambda:us-east-1:123456789012:function:handleRequest456. If I naively try to run Serverless again from another machine (or if I git clean my working directory), it'll spin up a new CloudFront endpoint since it doesn't know that fooxyz.cloudfront.net already represents the same application. I'm looking to back up the state it keeps internally, so that it modifies an existing resource rather than creates a new one. (The equivalent in Terraform would be to back up the terraform.tfstate file.)
If I wished to back up or restore a Serverless deployment state, which files would I back up? In the case of AWS, it seems like I should be backing up the CloudFormation templates; I don't want to back up the Lambda code since it's directly generated from the source. However, I'm likely going to use more than just AWS in the future, and so don't want to "special-case" the CloudFormation templates if at all possible.
How can I back up only the files I cannot regenerate?
I think what you are asking is: if I or a colleague checks out the serverless code from git on a different machine, will we still be able to deploy and update the same Lambda functions and the same API Gateway endpoints?
And the answer to that is yes! Serverless keeps track of all of that for you in its deployment files. Unless you run serverless remove, no operation will create a new Lambda or API endpoint.
My team and I are using this method: we commit all code to a git repo, one of us checks it out and deploys a function or the entire thing, and it updates the existing set of functions properly. If you set up an environment file, that's really all you need to worry about, and I recommend leaving it out of git entirely.
For AWS, the Serverless Framework keeps track of your deployment via CloudFormation (CF) parameters/identifiers, which are specific to an account/region. The CF stack templates are uploaded to an (auto-generated) S3 bucket, so they are already backed up for you.
So all you really need is the original deployment code in a git repo and access to your keys. Everything else is already backed up for you.
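In other words, redeploying from a fresh checkout is roughly the following (repo URL, stage and region are placeholders); as long as the service name and stage match, CloudFormation updates the existing stack instead of creating a new one:

    # Fresh machine: clone the code and install dependencies.
    git clone https://github.com/my-org/my-service.git && cd my-service
    npm install

    # Credentials for the same AWS account the service was originally deployed to.
    export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=...

    # Deploys into the existing CloudFormation stack (e.g. my-service-prod) rather than a new one.
    serverless deploy --stage prod --region us-east-1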
I would like Jenkins to comment whether a merge passes or fails (much like Travis CI) on GitHub pull requests. I understand this is a feature on BuildHive. However, I cannot find an option on BuildHive for using customer provided slaves. My question is twofold:
Is there an option to limit builds to customer provided slaves on BuildHive?
Is there a way I could enable comments on pull requests using DEV#cloud (the actual job must be run on a customer provided slave)? If so, could you point me in the right direction to get this set up?
DEV#cloud can validate pull request as BuildHive does, with some additional configuration. See http://wiki.cloudbees.com/bin/view/DEV/Github+Pull+Request+Validation
Answering in the order of your questions:
BuildHive uses the Validated Merge plugin for Git from Jenkins Enterprise to let Jenkins validate pull requests and run the builds before pushing to the main repo. That said, currently you cannot use Customer Provided Executors with BuildHive.
DEV#cloud: Normally, all Jenkins Enterprise plugins are available in a paid tier of DEV#cloud. However, this plugin is not, as it sets up a git server within Jenkins, which is not easily achievable in a cloud setup. I have created a ticket with CloudBees support requesting that the plugin be made available, and the engineering team will look into delivering the feature.
Meanwhile, if you like, you can use Jenkins Enterprise to get the feature (however, it is an on-premises solution).