CodeBuild is building all commits to my destination branch. What is the correct way to configure builds on a specific branch? - aws-codebuild

SourceVersion: ${BranchName} is set in the CodeBuild project. I thought that would limit all push and pull request builds to the ${BranchName} destination branch only. But apparently not: every commit on every branch triggers the GitHub webhook to CodeBuild and causes a build. Not what I wanted.
So then in filters I tried to specify:
Triggers:
  Webhook: true
  FilterGroups:
    - - Type: EVENT
        Pattern: PULL_REQUEST_MERGED
      - Type: BASE_REF
        Pattern: !Sub "refs/heads/${BranchName}$"
        ExcludeMatchedPattern: false
      - Type: FILE_PATH
        Pattern: ^webserver/
    - - Type: EVENT
        Pattern: PUSH
      - Type: HEAD_REF
        Pattern: !Sub "refs/heads/${BranchName}$"
        ExcludeMatchedPattern: false
      - Type: FILE_PATH
        Pattern: ^webserver/
With this, no builds are triggered at all. The GitHub webhook gets a response from CodeBuild: No build triggered for specified payload. Clearly the filters do not work as documented. So my question is: what should go where between SourceVersion and the webhook filters so that builds are triggered ONLY when changes are pushed, or a pull request is merged, into my destination branch?

Dumb me! I was changing a file outside the webserver/ folder, so the FILE_PATH filter was ignoring the commit, exactly as configured. The filters above do work as documented.
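For anyone hitting the same confusion: SourceVersion only sets the default branch for builds started manually or through the StartBuild API; it does not scope webhook-triggered builds at all. Branch scoping for the webhook is done entirely by the FilterGroups. A minimal sketch of the push-only case, assuming the same BranchName parameter as above:

Triggers:
  Webhook: true
  FilterGroups:
    - - Type: EVENT
        Pattern: PUSH
      - Type: HEAD_REF
        Pattern: !Sub "refs/heads/${BranchName}$"

And remember that once a FILE_PATH filter is in a group, a matching branch alone is not enough: at least one changed file must match the pattern too.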


Have the same lambda configuration for multiple events in CloudFormation

I'm writing CloudFormation code to create and configure an S3 bucket.
As part of the configuration, I'm adding a Lambda trigger for two events. It's the same Lambda.
How can I write this code? Should I duplicate the section, or can I map the two events to the same behavior?
Here is the code:
MyBucket:
  Condition: CreateNewBucket
  Type: AWS::S3::Bucket
  Properties:
    ## ...Bucket Config comes here... ##
    ## The interesting part ##
    NotificationConfiguration:
      LambdaConfigurations:
        - Event: 's3:ObjectCreated:Put'
          Function: My Lambda name
          Filter:
            S3Key:
              Rules:
                - Name: prefix
                  Value: 'someValue'
Is there an option to write:
LambdaConfigurations:
  - Events: ['s3:ObjectCreated:Put', 's3:ObjectCreated:Post']
Or maybe:
LambdaConfigurations:
  - Event: 's3:ObjectCreated:Put'
  - Event: 's3:ObjectCreated:Post'
  ...
Or do I need to copy-paste the block twice?
I can't find an example for this behavior.
Sorry if this is a trivial question, I'm new to CloudFormation.
Thanks!
It depends on exactly which events you want to trigger the lambda function. If you want it to fire on all object-create events (s3:ObjectCreated:Put, s3:ObjectCreated:Post, s3:ObjectCreated:Copy, and s3:ObjectCreated:CompleteMultipartUpload), you can simply use 's3:ObjectCreated:*' as the value for Event.
If you want Put and Post specifically, you'll need to supply two event configurations, one for Put and another for Post. The CloudFormation parser will accept multiple Event elements in a LambdaConfiguration but only the last one to appear is applied to the event notification.
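A sketch of both options; MyLambda is a hypothetical function resource here, and Function takes the Lambda ARN:

# Option 1: one entry covering every object-create event
NotificationConfiguration:
  LambdaConfigurations:
    - Event: 's3:ObjectCreated:*'
      Function: !GetAtt MyLambda.Arn

# Option 2: Put and Post specifically, one entry per event
NotificationConfiguration:
  LambdaConfigurations:
    - Event: 's3:ObjectCreated:Put'
      Function: !GetAtt MyLambda.Arn
    - Event: 's3:ObjectCreated:Post'
      Function: !GetAtt MyLambda.Arn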
This is an interesting divergence in console/API functionality vs CloudFormation/CDK functionality. The PutBucketNotificationConfiguration API operation accepts LambdaFunctionConfiguration arguments that support multiple individual events. PutBucketNotificationConfiguration replaces PutBucketNotification, which accepted CloudFunctionConfiguration, which has a deprecated Event element. So I wonder if CloudFormation still refers to the older API operation.
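For comparison, the current API side does accept a list of events per configuration. A sketch with the AWS CLI (bucket name and function ARN are placeholders):

aws s3api put-bucket-notification-configuration \
  --bucket my-bucket \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:MyLambda",
      "Events": ["s3:ObjectCreated:Put", "s3:ObjectCreated:Post"]
    }]
  }'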

How do I use a YAML selector?

I am experimenting with yaml selectors, so far without success. My selectors.yml:
selectors:
  - name: daily
    description: selects models for daily run
    definition:
      exclude:
        - union:
            - "tag:pp_backfill"
            - "tag:inc_transactions_raw_data"
            - "tag:hourly"
And when I try to use it I get an error:
$ dbt ls --selector daily
* Deprecation Warning: dbt v0.17.0 introduces a new config format for the
dbt_project.yml file. Support for the existing version 1 format will be removed
in a future release of dbt. The following packages are currently configured with
config version 1:
- honey_dbt
- dbt_utils
- audit_helper
- codegen
For upgrading instructions, consult the documentation:
https://docs.getdbt.com/docs/guides/migration-guide/upgrading-to-0-17-0
* Deprecation Warning: The "adapter_macro" macro has been deprecated. Instead,
use the `adapter.dispatch` method to find a macro and call the result.
adapter_macro was called for: dbt_utils.intersect
Encountered an error:
Runtime Error
Could not find selector named daily, expected one of []
I tried this in both 0.18.1 and 0.19.0, with and without config-version: 2. Any thoughts?
I think the blocker here might be that you are not initially selecting anything from which to then exclude the tagged models. Here is a solution from my project, followed by an adaptation that might work in your case.
Context
I'm running dbt version 0.19.0 on dbt Cloud. Both of the selectors below compiled, and dbt run --selector daily ran successfully.
Jaffle Shop Example
stg_customers is tagged with dont_run_me and stg_orders is tagged with also_dont_run_me
selectors.yml, at the root of the dbt project, is the following:
selectors:
  - name: daily
    description: selects models for daily run
    definition:
      union:
        - method: path
          value: models
        - exclude:
            - method: tag
              value: dont_run_me
            - method: tag
              value: also_dont_run_me
The logic here is that I first select all the models and then exclude the union of the models that have the tags dont_run_me and also_dont_run_me.
dbt run --selector daily ended up running everything in my project except stg_customers and stg_orders.
Specific Case
If you are trying to select all models except for ones that are tagged as pp_backfill, inc_transactions_raw_data, and hourly, I think the following will do the trick:
selectors:
  - name: daily
    description: selects models for daily run
    definition:
      union:
        - method: path
          value: models
        - exclude:
            - union:
                - method: tag
                  value: pp_backfill
                - method: tag
                  value: inc_transactions_raw_data
                - method: tag
                  value: hourly
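With that file saved as selectors.yml at the project root, you can check what the selector resolves to before scheduling it:

dbt ls --selector daily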

what are drone.io 0.8.5 plugin/gcr secrets' acceptable values?

I'm having trouble pushing to gcr with the following
gcr:
  image: plugins/gcr
  registry: us.gcr.io
  repo: dev-221608/api
  tags:
    - ${DRONE_BRANCH}
    - ${DRONE_COMMIT_SHA}
    - ${DRONE_BUILD_NUMBER}
  dockerfile: src/main/docker/Dockerfile
  secrets: [GOOGLE_CREDENTIALS]
  when:
    branch: [prod]
...where GOOGLE_CREDENTIALS works, but a secret named, say, GOOGLE_CREDENTIALS_DEV is not properly picked up. GCR_JSON_KEY works fine. I recall reading legacy documentation that spelled out the acceptable variable names, of which GOOGLE_CREDENTIALS and GCR_JSON_KEY were listed among other variants, but as of version 1 the docs have been updated and omit that info.
So, the question is: does the plugin accept an arbitrary variable name, or does it expect specific names, and if so, what are they?
The Drone GCR plugin accepts the credentials in a secret named PLUGIN_JSON_KEY, GCR_JSON_KEY, GOOGLE_CREDENTIALS, or TOKEN (see code here)
If you stored the credentials in drone as GOOGLE_CREDENTIALS_DEV then you can rename it in the .drone.yml file like this:
...
secrets:
  - source: GOOGLE_CREDENTIALS_DEV
    target: GOOGLE_CREDENTIALS
...
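Applied to the step from the question, the whole thing would look something like this sketch:

gcr:
  image: plugins/gcr
  registry: us.gcr.io
  repo: dev-221608/api
  dockerfile: src/main/docker/Dockerfile
  secrets:
    - source: GOOGLE_CREDENTIALS_DEV
      target: GOOGLE_CREDENTIALS
  when:
    branch: [prod]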

How to override environment variables in jenkins_job_builder at job level?

I am trying to find a way to inherit/override environment variables in jenkins jobs defined via jenkins-job-builder (jjb).
Here is one template that does not work:
#!/usr/bin/env jenkins-jobs test
- defaults: &sample_defaults
    name: sample_defaults

- job-template:
    name: 'sample-{product_version}'
    project-type: pipeline
    dsl: ''
    parameters:
      - string:
          name: FOO
          default: some-foo-value-defined-at-template-level
      - string:
          name: BAR
          default: me-bar

- project:
    defaults: sample_defaults
    name: sample-{product_version}
    parameters:
      - string:
          name: FOO
          value: value-defined-at-project-level
    jobs:
      - 'sample-{product_version}':
          product_version:
            - '1.0':
                parameters:
                  - string:
                      name: FOO
                      value: value-defined-at-job-level-1
            - '2.0':
                # this job should have:
                #   FOO=value-defined-at-project-level
                #   BAR=me-bar
Please note that it is key to be able to override these parameters at the job or project level instead of the template level.
Requirements
* be able to add as many environment variables as needed without adding one JJB variable for each of them
* the user should not be forced to define these at the template or job level
* those variables need to end up exposed as environment variables at runtime, for both pipeline and freestyle jobs
* syntax is flexible, but a dictionary approach would be highly appreciated, like:
vars:
  FOO: xxx
  BAR: yyy
The first thing to understand is how JJB prioritizes where it pulls variables from, listed here from highest to lowest precedence:
1. job-group section definition
2. project section definition
3. job-template variable definition
4. defaults definition
(This is not an exhaustive list, but it covers the features I use.)
From this list we can immediately see that if we want job-template variables to be override-able, the JJB defaults configuration is useless: it has the lowest precedence when JJB is deciding where to pull from.
On the other side of the spectrum, job-groups have the highest precedence. Unfortunately, that means if you define a variable in a job-group with the intention of overriding it at the project level, you are out of luck. For this reason I avoid setting variables in job-groups unless I want to enforce a setting for a set of jobs.
Declaring variable defaults
With that out of the way, there are 2 ways JJB allows us to define defaults for a parameter in a job-template:
Method 1) Using {var|default}
In this method we can define the default along with the definition of the variable. For example:
- job-template:
    name: '{project-name}-verify'
    parameters:
      - string:
          name: BRANCH
          default: '{branch|master}'
However, this method falls apart if you need to use the same JJB variable in more than one place, as you will have multiple places to define the default value for the template. For example:
- job-template:
    name: '{project-name}-verify'
    parameters:
      - string:
          name: BRANCH
          default: '{branch|master}'
    scm:
      - git:
          refspec: 'refs/heads/{branch|master}'
As you can see, we now have 2 places where we are declaring {branch|master}. Not ideal.
Method 2) Defining the default variable value in the job-template itself
With this method we declare the default value of the variable in the job-template itself just once. I like to section off my job-templates like this:
- job-template:
    name: '{project-name}-verify'

    #####################
    # Variable Defaults #
    #####################
    branch: master

    #####################
    # Job Configuration #
    #####################
    parameters:
      - string:
          name: BRANCH
          default: '{branch}'
    scm:
      - git:
          refspec: 'refs/heads/{branch}'
In this case there are still 2 {branch} references in the job-template. However, we now also provide the default value for the {branch} variable at the top of the template, just once. This is the value the job takes on if it is not passed in by a project using the template.
Overriding job-templates variables
When a project now wants to use a job-template I like to use one of 2 methods depending on the situation.
- project:
    name: foo
    jobs:
      - '{project-name}-merge'
      - '{project-name}-verify'
    branch: master
This is the standard way most folks use it, and it will set branch: master for every job-template in the list. However, sometimes you may want to provide an alternative value for only one job in the list. In this case the more specific declaration takes precedence:
- project:
    name: foo
    jobs:
      - '{project-name}-merge':
          branch: production
      - '{project-name}-verify'
    branch: master
In this case the verify job will get the value "master", but the merge job will instead get the branch value "production".

GitHub v3 API - how do I create the initial commit in a repository?

I'm using the v3 API and managed to list repos/trees/branches, access file contents, and create blobs/trees/commits. I'm now trying to create a new repo, and managed to do it with POST /user/repos.
But when I try to create blobs/trees/commits/references in this new repo, I get the same error message: 409 "Git Repository is empty." Obviously I could go and init the repository myself through the git command line, but I would rather have my application do it for me.
Is there a way to do that? What's the first thing I need to do through the API after I create an empty repository?
Thanks
Since 2012, it is possible to auto-initialize a repository after creation, according to this post published on the GitHub blog:
Today we’ve made it easier to add commits to a repository via the GitHub API. Until now, you could create a repository, but you would need to initialize it locally via your Git client before adding any commits via the API.
Now you can optionally init a repository when it’s created by sending true for the auto_init parameter:
curl -i -u pengwynn \
     -d '{"name": "create-repo-test", "auto_init": true}' \
     https://api.github.com/user/repos
The resulting repository will have a README stub and an initial commit.
Update May 2013: Note that the repository content API now authorize adding files.
See "File CRUD and repository statistics now available in the API".
Original answer (May 2012)
Since it doesn't seem to be supported yet ("GitHub v3 API: How to create initial commit for my shiny new repository?", as aclark comments), you can start by pushing an initial empty commit:
git commit --allow-empty -m 'Initial commit'
git push origin master
That can be a good practice to initialize one's repository anyway.
And it is illustrated in "git's semi-secret empty tree".
If you want to create an empty initial commit (i.e. one without any file) you can do the following:
Create the repository using the auto_init option as in Jai Pandya's answer; or, if the repository already exists, use the create file endpoint to create a dummy file - this will create the branch:
PUT https://api.github.com/repos/USER/REPO/contents/dummy
{
  "branch": "master",
  "message": "Create a dummy file for the sake of creating a branch",
  "content": "ZHVtbXk="
}
This will give you a bunch of data including a commit SHA, but you can discard all of it since we are about to obliterate that commit.
Use the create commit endpoint to create a commit that points to the empty tree:
POST https://api.github.com/repos/USER/REPO/git/commits
{
  "message": "Initial commit",
  "tree": "4b825dc642cb6eb9a060e54bf8d69288fbee4904"
}
This time you need to take note of the returned commit SHA.
Use the update reference endpoint to make the branch point to the commit you just created (notice the Use Of The Force™):
PATCH https://api.github.com/repos/USER/REPO/git/refs/heads/master
{
  "sha": "<the SHA of the commit>",
  "force": true
}
Done! Your repository has now one branch, one commit and zero files.
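If you are scripting these calls, step 2 looks roughly like this with curl (<TOKEN> is a placeholder for a personal access token):

curl -X POST \
     -H "Authorization: token <TOKEN>" \
     -d '{"message": "Initial commit", "tree": "4b825dc642cb6eb9a060e54bf8d69288fbee4904"}' \
     https://api.github.com/repos/USER/REPO/git/commits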
January 2023: Step 2 of Konamiman's solution did not work for me unless I first deleted the dummy file with the contents API:
DELETE https://api.github.com/repos/USER/REPO/contents/dummy
{
  "branch": "master",
  "message": "Delete dummy file",
  "sha": "e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"
}
(This SHA is the git blob hash of the empty content "".)
After deleting the dummy file, the empty tree 4b825dc642cb6eb9a060e54bf8d69288fbee4904 somehow becomes assignable to a new commit.
It looks like over time this changed. It feels bad to rely on undocumented API behavior. :(