My requirement is to send a notification on consecutive build failures.
I don't want to send notifications on every pipeline status change. If the pipeline fails, say, 5 times in a row, notify by mail. Any thoughts?
Also, is it possible to set/unset or increment pipeline schedule variables inside the gitlab-ci.yml file?
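There is no built-in consecutive-failure counter in GitLab CI, but one rough sketch is to keep the count in a project-level CI/CD variable and update it from the pipeline through the GitLab Variables API. Everything named below is an assumption, not a GitLab built-in: FAIL_COUNT and PROJECT_TOKEN (a token with api scope) are pre-created project variables, and send_mail.sh is your own notification script.

# .gitlab-ci.yml sketch: bump the counter on failure, reset it on success,
# and mail out once it reaches 5.
notify_on_failures:
  stage: .post
  when: on_failure
  script:
    - NEW_COUNT=$((FAIL_COUNT + 1))
    # Persist the counter for the next pipeline via the Variables API.
    - 'curl --request PUT --header "PRIVATE-TOKEN: $PROJECT_TOKEN" --form "value=$NEW_COUNT" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/variables/FAIL_COUNT"'
    - if [ "$NEW_COUNT" -ge 5 ]; then ./send_mail.sh; fi

reset_fail_count:
  stage: .post
  when: on_success
  script:
    - 'curl --request PUT --header "PRIVATE-TOKEN: $PROJECT_TOKEN" --form "value=0" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/variables/FAIL_COUNT"'

The same PUT call is one answer to the second question: a job cannot mutate variables for later pipelines by plain assignment, but it can update project variables (and, via a similar endpoint, pipeline schedule variables) through the API, and subsequent runs pick up the new value.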
I'm developing an application that collects MapReduce job progress info for analysis. The first way is to parse the log files, but that's ugly. Is there any method, like a hook or plugin, that can do this?
You can probably use the YARN application API to get most of the information. See the YARN Application API documentation.
Here is an excerpt from the page:
... All query parameters for this api will filter on all applications. However the queue query parameter will only implicitly filter on unfinished applications that are currently in the given queue.
There are other YARN APIs, too, that you can use to achieve your goal. It is certainly better than scanning log files.
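As a minimal sketch, assuming the ResourceManager web services run on the default port 8088 and the hostname below is yours, you could poll the cluster applications endpoint and read each application's progress field:

import requests

# Assumption: adjust host/port to match your ResourceManager.
RM_URL = 'http://resourcemanager.example.com:8088'

# Each returned app carries id, state, queue, progress, and more.
resp = requests.get(RM_URL + '/ws/v1/cluster/apps', params={'states': 'RUNNING'})
resp.raise_for_status()
body = resp.json()
apps = body['apps']['app'] if body.get('apps') else []
for app in apps:
    print('%s  %s  %.1f%%' % (app['id'], app['state'], app['progress']))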
I'm trying to send a notification from Azure DevOps when a test fails in the release pipeline. If a test fails, the release pipeline gets the "Partially succeeded" status.
I can't find an option in Azure to send a notification when this pipeline fails.
Question: How do I send a notification e-mail when a release doesn't succeed in Azure DevOps?
Create a new release notification subscription for "A deployment is completed". Add a new filter clause such that:
Deployment Status = Partially succeeded
or Deployment Status = Failed
If you want to treat a test failure as a failure and not a partial success, you will likely need to un-check the "Continue on error" option under "Control Options" of the test task in your release pipeline.
I am running a workflow on an n1-ultramem-40 instance that will run for several days. If an error occurs, I would like to catch and log the error, be notified, and automatically terminate the virtual machine. Could I use Stackdriver and gcloud logging to achieve this? How could I automatically terminate the VM using these tools? Thanks!
Let's break the puzzle into two parts. The first is logging an error to Stackdriver and the second is performing an external action automatically when such an error is detected.
Stackdriver provides a wide variety of language bindings and package integrations that result in log messages being written. You could include such API calls in your application which detects the error. If you don't have access to the source code of your application but it instead logs to an external file, you could use the Stackdriver agents to monitor log files and relay the log messages to Stackdriver.
Once you have the error messages being sent to Stackdriver, the next task is to define a Stackdriver log export: a "filter" that looks for the specific log entry message(s) you want to act upon. Associated with this export definition and filter is a Pub/Sub topic; a Pub/Sub message is written to that topic whenever a matching Stackdriver log entry is made.
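As a sketch, with hypothetical names my-error-topic and my-error-sink, the export definition could be created from the CLI like this (you still have to grant the sink's writer identity publish permission on the topic, and the filter should match your application's actual error entries):

gcloud pubsub topics create my-error-topic
gcloud logging sinks create my-error-sink \
    pubsub.googleapis.com/projects/MY_PROJECT/topics/my-error-topic \
    --log-filter='resource.type="gce_instance" AND severity>=ERROR'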
Finally, we have our trigger to perform your action. We can use a Cloud Function, triggered by the Pub/Sub message, to execute arbitrary logic, such as code that makes an API request to GCP to terminate the VM.
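A minimal sketch of such a function, assuming the hypothetical project, zone, and instance names below (and that the function's service account may stop instances; google-api-python-client must be in the function's requirements):

import base64
from googleapiclient import discovery

# Assumptions: these identify the VM; alternatively, parse them out of the
# exported log entry carried in the Pub/Sub message.
PROJECT = 'my-project'
ZONE = 'us-central1-b'
INSTANCE = 'my-ultramem-vm'

def terminate_vm(event, context):
    # The Pub/Sub message body is the Stackdriver log entry that matched the filter.
    entry = base64.b64decode(event['data']).decode('utf-8')
    print('Error log entry received: %s' % entry)
    compute = discovery.build('compute', 'v1')
    # stop() halts the instance so the machine stops billing; use
    # instances().delete(...) instead to remove it entirely.
    compute.instances().stop(project=PROJECT, zone=ZONE, instance=INSTANCE).execute()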
I am trying to set up automated deployment through Bitbucket Pipelines, but I haven't succeeded yet; maybe my business requirement can't be fulfilled by Bitbucket Pipelines.
Current setup:
The dev team pushes code to the default branch from their local machines, and the team lead reviews their code and updates the UAT and production servers manually by running these commands directly on the server CLI:
#hg branch
#hg pull
#hg update
Automated deployment we want:
We have 3 environments: DEV, UAT/Staging, and production.
On the basis of these environments I have created release branches: DEV-Release, UAT-Release, and PROD-Release respectively.
The dev team pushes code directly to the default branch. The dev lead checks the changes and creates a pull request from default to the UAT-Release branch; after a successful deployment on the UAT server, they create another pull request from default to the production branch. The pipeline should be executed on the pull request, then copy bundle.zip to AWS S3 and from there to the AWS EC2 instance.
Issues:
The issue I am facing is that bitbucket-pipelines.yml is not the same on all release branches, because the branch names differ; as a result, whenever we create a pull request for any release branch we get a conflict on that file.
Is there any way I can use the same bitbucket-pipelines.yml file for all the branches, so that deployment happens on the particular branch for which the pull request is created?
Can we make that file dynamic for all branches with environment variables? (See the sketch after the flow below.)
If Bitbucket Pipelines cannot fulfill my business requirement, what is the other solution?
If you think my business requirement is not good or justifiable, just let me know which step I have to change to achieve the final result of automated deployments.
Flow:
Developer machine pushes to --> Bitbucket default branch ---> lead reviews the code, then creates a pull request to a release branch (UAT, PROD) ---> pipeline is executed and pushes the code to the S3 bucket ----> AWS CodeDeploy ---> EC2 application server.
Waiting for your prompt response.
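For what it's worth, a single bitbucket-pipelines.yml can hold one pipeline definition per branch, so the identical file can live on every branch and the merge conflict disappears; Bitbucket deployment environments then supply the per-environment variables. A minimal sketch, where the branch names, environment names, and deploy.sh are assumptions:

image: atlassian/default-image:2

pipelines:
  branches:
    UAT-Release:
      - step:
          deployment: staging
          script:
            - ./deploy.sh   # reads $S3_BUCKET etc. from the staging environment
    PROD-Release:
      - step:
          deployment: production
          script:
            - ./deploy.sh   # same script, different environment variables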
I've recently started changing my Jenkins jobs from being restricted to a certain slave to being restricted to a slave group identified by a label. However, I have test jobs that I need to run on the same slave as the job that they're testing.
I need a way to tie two jobs together such that they can only be run on the same slave, but the slave is still chosen by Jenkins based on availability, etc.
Anyone know how to do this, or even if it's possible? Thanks in advance!
Couldn't you use the Parameterized Trigger Plugin (https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin) to pass the node name ${NODE_NAME} (see https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project#Buildingasoftwareproject-JenkinsSetEnvironmentVariables) to the next build, which would be parameterized on a node label (which can be a node name) via the NodeLabel Parameter Plugin (https://wiki.jenkins-ci.org/display/JENKINS/NodeLabel+Parameter+Plugin)?
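For example, in the upstream job's "Trigger parameterized build on other projects" action you could pass a predefined parameter like the one below, assuming the downstream job declares a NodeLabel-type parameter named TARGET_NODE (a hypothetical name):

TARGET_NODE=$NODE_NAME

The downstream build is then pinned to whichever node the upstream build happened to run on.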
I need a way to tie two jobs together such that they can only be run on the same slave, but the slave is still chosen by Jenkins based on availability, etc.
I have the same problem, and I found the Node Stalker Plugin.
Right now the plugin can be found at the following URL:
https://wiki.jenkins-ci.org/display/JENKINS/Node+Stalker+Plugin
Jenkins lists this plugin as "Job Node Stalker" on the plugin management page. It will be part of Jenkins.