My requirement is to send a notification on consecutive build failures.
I don't want to send notifications on every pipeline status change. If the pipeline fails, say, 5 times in a row, notify by mail. Any thoughts?
Also, is it possible to set/unset or increment pipeline schedule variables inside the gitlab-ci.yml file?
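As far as I know, a variable set inside a job only lives for that job, so the counter has to live somewhere persistent. One workaround (just a sketch, not an official feature) is to keep the count in a project-level CI/CD variable and update it over the GitLab API from a job that runs with when: on_failure. FAIL_COUNT and API_TOKEN below are assumed to be pre-created project variables (the token needs api scope):
# Script of a final job that runs only with "when: on_failure".
# FAIL_COUNT and API_TOKEN are assumed project-level CI/CD variables.
NEW_COUNT=$((FAIL_COUNT + 1))
curl --request PUT --header "PRIVATE-TOKEN: $API_TOKEN" \
     --form "value=$NEW_COUNT" \
     "$CI_API_V4_URL/projects/$CI_PROJECT_ID/variables/FAIL_COUNT"
if [ "$NEW_COUNT" -ge 5 ]; then
  echo "5 consecutive failures - send the notification mail here"
fi
A mirror job with when: on_success could reset FAIL_COUNT to 0, so only consecutive failures are counted.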
In my GitLab projects I have jobs defined and executed via gitlab-ci. However, it doesn't handle interdependent jobs well; there is no management of this case other than doing it manually.
The case I have is a service, part of the overall app, that takes a long time to start. Starting this service is done within a job, while another job runs a second service, also part of the overall app, that queries the first one. Because of this interdependence, I have simply delayed the execution of the latter job so that the former job's service is most probably up and running by then.
I wanted to use Rundeck as a job scheduler, but I am not sure whether this can be done with GitLab. Maybe I am wrong about GitLab, so does GitLab allow better job scheduling?
Here's an example of what I am doing:
.gitlab-ci.yml
deploy:
  environment:
    name: $CI_ENVIRONMENT
    url: http://$CI_ENVIRONMENT.local.net:4999/
  allow_failure: true
  script:
    - sudo dpkg -i myapp.deb
    - sleep 30m  # here I wait for the service to be ready so that later jobs run successfully
    - RESULT=`curl http://localhost:9999/api/test | grep Success`
This looks like a typical use case for the trigger feature in gitlab-ci.
See the gitlab-ci triggers documentation.
Essentially, at the end of the long start-up job for service A, use curl to trigger another pipeline:
deploy_service_a:
  stage: deploy
  script:
    - "curl --request POST --form token=TOKEN --form ref=master https://gitlab.example.com/api/v4/projects/9/trigger/pipeline"
  only:
    - tags
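If a second pipeline is more than you need, another option (just a sketch, reusing the endpoint from the question) is to replace the fixed sleep 30m with a polling loop in the script: section, so the job moves on as soon as the service actually answers:
# Poll the service instead of sleeping a fixed 30 minutes (gives up after ~30 min).
for i in $(seq 1 180); do
  curl --silent --fail http://localhost:9999/api/test | grep -q Success && break
  sleep 10
done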
We have several pods (as services/deployments) in our k8s workflow that are dependent on each other, such that if one goes into a CrashLoopBackOff state, then all of these services need to be redeployed.
Instead of having to do this manually, is there a programmatic way of handling it?
Of course we are trying to figure out why the pod in question is crashing.
If these are so tightly dependent on each other, I would consider these options:
a) Rearchitect your system to be more resilient towards failure and to tolerate a pod being temporarily unavailable.
b) Put all parts into one pod as separate containers, making the atomic design more explicit.
If these don't fit your needs, you can use the Kubernetes API to create a program that automates the task of restarting all dependent parts. There are client libraries for multiple languages and integration is quite easy. The next step would be a custom resource definition (CRD) so you can manage your own system using an extension to the Kubernetes API.
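As a lighter-weight alternative to a full controller, here is a minimal sketch of the automation, assuming kubectl access with sufficient RBAC; the label selector and Deployment names are placeholders:
# Restart the dependent Deployments only if some pod of the stack is in CrashLoopBackOff.
if kubectl get pods -l app=my-stack \
     -o jsonpath='{range .items[*]}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}' \
     | grep -q CrashLoopBackOff; then
  kubectl rollout restart deployment/service-a deployment/service-b deployment/service-c
fi
Run from a CronJob or your monitoring, this covers the "redeploy everything" case until the root cause of the crash is fixed.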
The first thing to do is to make sure that pods are started in the correct sequence. This can be done using initContainers, like this:
spec:
  initContainers:
    - name: waitfor
      image: jwilder/dockerize
      args:
        - -wait
        - "http://config-srv/actuator/health"
        - -wait
        - "http://registry-srv/actuator/health"
        - -wait
        - "http://rabbitmq:15672"
        - -timeout
        - 600s
Here your pod will not start until all the services in the list are responding to HTTP probes.
Next, you may want to define a liveness probe that periodically runs curl against the same services:
spec:
  containers:
    - name: myapp   # container name is a placeholder; livenessProbe is defined per container
      livenessProbe:
        exec:
          command:
            - /bin/sh
            - -c
            - curl http://config-srv/actuator/health &&
              curl http://registry-srv/actuator/health &&
              curl http://rabbitmq:15672
Now if any of those services fails, your pod will fail the liveness probe, be restarted, and wait for the services to come back online.
That's just an example of how it can be done. In your case the checks can of course be different.
I am trying to set up automated deployment through Bitbucket Pipelines, but I have not succeeded yet; maybe my business requirement cannot be fulfilled by Bitbucket Pipelines.
Current setup:
The dev team pushes code to the default branch from their local machines. The team lead reviews their code and updates the UAT and production servers manually by running the commands directly on the server CLI:
#hg branch
#hg pull
#hg update
Automated deployment we want:
We have 3 environments: DEV, UAT/Staging and production.
Based on these environments I have created release branches:
DEV-Release, UAT-Release and PROD-Release respectively.
The dev team pushes the code directly to the default branch; the dev lead checks the changes and then creates a pull request from default to the UAT-Release branch, and after a successful deployment on the UAT server they again create a pull request from default to the production branch. The pipeline should be executed on the pull request, copy the bundle.zip to AWS S3 and then to the AWS EC2 instance.
Issues:
The issue I am facing is that bitbucket-pipelines.yml is not the same on all release branches, because the branch names differ; because of that, when we create a pull request for any release branch we get a conflict on that file.
Is there any way I can use the same bitbucket-pipelines.yml file for all the branches, with the deployment happening on the particular environment for which the pull request is created?
Can we make that file dynamic for all branches with environment variables?
If Bitbucket Pipelines cannot fulfill my business requirement, what is the alternative?
If you think my business requirement is not good or justifiable, just let me know which step I have to change to achieve the final result of automated deployments.
Flow:
Developer machine push to --> Bitbucket default branch --> lead reviews the code, then pull request to a release branch (UAT, PROD) --> pipeline is executed and pushes the code to the S3 bucket --> AWS CodeDeploy --> EC2 application server.
Waiting for a prompt response.
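One way to keep a single bitbucket-pipelines.yml for all release branches (a sketch; bucket, application and deployment-group names are placeholders) is to have one deploy step call a script that picks the target from the built-in BITBUCKET_BRANCH variable:
# deploy.sh - called from one pipeline step shared by all release branches.
case "$BITBUCKET_BRANCH" in
  DEV-Release)  S3_BUCKET=my-dev-bucket;  DEPLOY_GROUP=dev ;;
  UAT-Release)  S3_BUCKET=my-uat-bucket;  DEPLOY_GROUP=uat ;;
  PROD-Release) S3_BUCKET=my-prod-bucket; DEPLOY_GROUP=prod ;;
  *) echo "No deployment configured for $BITBUCKET_BRANCH"; exit 0 ;;
esac
aws s3 cp bundle.zip "s3://$S3_BUCKET/bundle.zip"
aws deploy create-deployment \
    --application-name my-app \
    --deployment-group-name "$DEPLOY_GROUP" \
    --s3-location bucket="$S3_BUCKET",key=bundle.zip,bundleType=zip
In the yml itself a single branches entry such as '{DEV-Release,UAT-Release,PROD-Release}' can run this one script, so the file stays identical on every branch and merges no longer conflict; per-environment values can also live in Bitbucket deployment environment variables instead of the case block.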
I have a job in Jenkins. A website of our own triggers builds of this job via the REST API. Sometimes we want to abort the build; sometimes this is before the build has even started, in which case we have the queueItem # instead of the build #.
How can I do this via the REST API?
If the build has started, a POST to:
http://<Jenkins_URL>/job/<Job_Name>/<Build_Number>/stop
will stop/cancel the current build.
If the build has not started and you have the queueItem, then POST to:
http://<Jenkins_URL>/queue/cancelItem?id=<queueItem>
This assumes your Jenkins server has not been secured; otherwise you need to add basic authentication for a user with Cancel privileges.
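For a secured Jenkins the same endpoints work with an API token over basic auth (a sketch; user and apiToken are placeholders, and depending on your security settings a CSRF crumb may also be needed):
# Abort a running build.
curl -X POST --user "user:apiToken" "http://<Jenkins_URL>/job/<Job_Name>/<Build_Number>/stop"
# Cancel a build that is still waiting in the queue.
curl -X POST --user "user:apiToken" "http://<Jenkins_URL>/queue/cancelItem?id=<queueItem>"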
This question is already answered, so I will just add how to find id=<queueItem>, which I got stuck on; hopefully this will be helpful for others.
You can get the <queueItem> from http://jenkins:8081/queue/api/json
The sample output is JSON like this one:
[{"_class":"hudson.model.Cause$RemoteCause","shortDescription":"Started by remote host 172.18.0.2","addr":"172.18.0.2","note":null}]}],"blocked":false,"buildable":false,"id":117,"inQueueSince":16767552,"params":"\nakey\t=AKIQ\nskey=1bP0RuNkr19vGze/bcb4ijDqVr8o\nnameofr=us\noutputtype=json\noid=284102\nadminname=admin","stuck":false,"task"
You have to take "id":117 from that output and use it as the queueItem:
http://<Jenkins_URL>/queue/cancelItem?id=117
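If you want to pick the id out automatically instead of reading the JSON by eye, something like this works (assuming jq is available):
# List the ids of all items currently waiting in the build queue.
curl -s "http://jenkins:8081/queue/api/json" | jq '.items[].id'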
Maybe you want to remotely send a POST HTTP request to stop a running build. FYI, a Jenkins job can stop another job's running build, just like a Jenkins admin clicking the X button while the job is running.
The HTTP Request plugin is required (Jenkins ver. 2.17).
Uncheck the "Prevent Cross Site Request Forgery exploits" option: Manage Jenkins -> Configure Global Security -> uncheck.
Set up the HTTP Request plugin's authorization: Manage Jenkins -> Configure System -> HTTP Request Basic/Digest Authentication -> add. Make sure the user has the job Cancel permission.
Job A is running. In job B, add a build step of type HTTP Request with URL: http://Jenkins_URL/job/Job_A_Name/lastBuild/stop, HTTP mode: POST, Authorization: select the user you have just set. Then build job B.
Done
If you only need to cancel the active build of a certain job you can use this batch script (Windows .bat syntax):
REM #Echo off
CLS
REM CANCEL ACTIVE BUILD
REM PARAMETER 1: ACTIVE JOB NAME
if [%1] == [] GOTO NO_ARGUMENT
SET domain=https://my.jenkins.com/job/
SET path=/lastBuild/stop
SET url=%domain%%1%path%
"\Program Files\Git\mingw64\bin\curl.exe" -X POST %url% --user user:pass
GOTO THEEND
:NO_ARGUMENT
Echo You need to pass the active job name to cancel the last build execution
:THEEND
The path to your local curl needs to be set.