Json Patch: Append multiple?

Basically, how can I combine these two operations?
- op: add
  path: /spec/template/spec/volumes/-
  value:
    name: php-src
    emptyDir: {}
- op: add
  path: /spec/template/spec/volumes/-
  value:
    name: nginx-src
    emptyDir: {}
If I try it like this, it deletes the existing entries:
- op: add
  path: /spec/template/spec/volumes
  value:
    - name: php-src
      emptyDir: {}
    - name: nginx-src
      emptyDir: {}
I just want to append two new entries to the end of /spec/template/spec/volumes, which is an existing array.

This is not possible in one operation: the JSON Patch spec (RFC 6902) only allows an add to insert a single value. Also, an add that targets an existing member replaces its value, which is why your second attempt wipes the existing entries. Keep the two separate add operations.
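Note that the two operations already form one valid patch document, since a JSON Patch is itself an array of operations. As a minimal sketch with a recent kustomize (the Deployment name my-deployment and the file layout are assumptions, not from the question):

# kustomization.yaml (hypothetical layout)
resources:
  - deployment.yaml
patches:
  - target:
      kind: Deployment
      name: my-deployment # hypothetical name
    patch: |-
      # a single JSON Patch document containing both add operations
      - op: add
        path: /spec/template/spec/volumes/-
        value:
          name: php-src
          emptyDir: {}
      - op: add
        path: /spec/template/spec/volumes/-
        value:
          name: nginx-src
          emptyDir: {}

Each /- append is applied in order, so both entries end up at the end of the existing volumes array.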


Swagger 2.0 Parser error duplicated mapping key

Parser error: duplicated mapping key (line 52).
Issues while adding common parameter id.
Line 44 is over-indented. Replace
  /tasks/{id}:
      get:
with
  /tasks/{id}:
    get:
There was a lot to fix.
There were multiple parameters defined in one operation.
The path /tasks/{id} was defined multiple times.
=> You have to define it once and put each HTTP method inside this path, as sketched below.
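Reduced to a minimal, hypothetical fragment, a duplicated path key looks like this to the parser:

paths:
  /tasks/{id}:
    get:
      summary: Get Single Task
  /tasks/{id}: # duplicate key -> "duplicated mapping key" error
    delete:
      summary: Delete Task

Merged correctly, every method lives under a single /tasks/{id} key:

paths:
  /tasks/{id}:
    get:
      summary: Get Single Task
    delete:
      summary: Delete Task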
I tried my best to create the YAML and I hope I fixed it correctly.
It's much easier to fix such issues if the complete code is part of the question.
openapi: 3.0.1
info:
  title: Sample API v2
  description: Sample Data API v2
  contact:
    name: John Smith
    email: john.smith@email.com
  version: v2
servers:
  - url: https://task-manager-pvs.herokuapp.com/
paths:
  /tasks:
    get:
      tags:
        - Tasks
      summary: Get All Tasks
      operationId: GetAllTasks
      parameters: []
      responses:
        '200':
          description: ''
          headers: {}
      deprecated: false
      security: []
    post:
      tags:
        - Tasks
      summary: Create Task
      operationId: CreateTask
      parameters: []
      requestBody:
        description: ''
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateTaskRequest'
            example:
              name: test
        required: true
      responses:
        '200':
          description: ''
          headers: {}
      deprecated: false
      security: []
  /tasks/{id}:
    get:
      tags:
        - Single Task
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
          description: The Task ID
      summary: Get Single Task
      operationId: GetSingleTask
      responses:
        '200':
          description: ''
          headers: {}
      deprecated: false
      security: []
    patch:
      tags:
        - Single Task
      summary: Update Task
      parameters:
        - in: path
          name: id
          schema:
            type: string
          required: true
          description: The Task ID
      operationId: UpdateTask
      requestBody:
        description: ''
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/UpdateTaskRequest'
            example:
              name: st
        required: true
      responses:
        '200':
          description: ''
          headers: {}
      deprecated: false
      security: []
    delete:
      parameters:
        - in: path
          name: id
          schema:
            type: string
          required: true
          description: The Task ID
      tags:
        - Single Task
      summary: Delete Task
      operationId: DeleteTask
      responses:
        '200':
          description: ''
          headers: {}
      deprecated: false
      security: []
components:
  schemas:
    CreateTaskRequest:
      title: CreateTaskRequest
      required:
        - name
      type: object
      properties:
        name:
          type: string
      example:
        name: test
    UpdateTaskRequest:
      title: UpdateTaskRequest
      required:
        - name
      type: object
      properties:
        name:
          type: string
      example:
        name: st
tags:
  - name: Tasks
    description: ''
  - name: Single Task
    description: ''

Tekton trigger flow from github

I am learning Tekton (for business), coming from GitHub Actions (private).
The Tekton docs (and every other tutorial I could find) have instructions on how to automatically start a pipeline from a GitHub push. Basically they all roughly follow the flow below (I am aware of PipelineRun/TaskRun etc.):
EventListener - Trigger - TriggerTemplate - Pipeline
All of the above are configuration steps you need to take (and files to create and maintain), one easier than the other, but as far as I can see they also need to be taken for every single repo you're maintaining. Compared to GitHub Actions, where I just need one file in my repo describing everything, this seems very elaborate (if not cumbersome).
Am I missing something? Or is this just the way to go?
Thanks!
they also need to be taken for every single repo you're maintaining
You're mistaken here.
The EventListener receives the payload of your webhook.
Based on your TriggerBinding, you can map fields from that GitHub payload to variables, such as the input repository name/URL, or a branch or ref to work with.
For GitHub push events, one way to do it would be with a TriggerBinding such as the following:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: github-push
spec:
  params:
    - name: gitbranch
      value: $(extensions.branch_name) # uses CEL interceptor, see EL below
    - name: gitrevision
      value: $(body.after) # uses body from webhook payload
    - name: gitrepositoryname
      value: $(body.repository.name)
    - name: gitrepositoryurl
      value: $(body.repository.clone_url)
We may re-use those params within our TriggerTemplate, passing them to our Pipelines / Tasks:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: github-pipelinerun
spec:
  params:
    - name: gitbranch
    - name: gitrevision
    - name: gitrepositoryname
    - name: gitrepositoryurl
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: github-job-
      spec:
        params:
          - name: identifier
            value: "demo-$(tt.params.gitrevision)"
        pipelineRef:
          name: ci-docker-build
        resources:
          - name: app-git
            resourceSpec:
              type: git
              params:
                - name: revision
                  value: $(tt.params.gitrevision)
                - name: url
                  value: $(tt.params.gitrepositoryurl)
          - name: ci-image
            resourceSpec:
              type: image
              params:
                - name: url
                  value: registry.registry.svc.cluster.local:5000/ci/$(tt.params.gitrepositoryname):$(tt.params.gitrevision)
          - name: target-image
            resourceSpec:
              type: image
              params:
                - name: url
                  value: registry.registry.svc.cluster.local:5000/ci/$(tt.params.gitrepositoryname):$(tt.params.gitbranch)
        timeout: 2h0m0s
Using the following EventListener:
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: github-listener
spec:
  triggers:
    - name: github-push-listener
      interceptors:
        - name: GitHub push payload check
          github:
            secretRef:
              secretName: github-secret # a Secret you would create
              secretKey: secretToken # the secretToken key in my Secret matches the secret configured in GitHub for my webhook
            eventTypes:
              - push
        - name: CEL extracts branch name
          ref:
            name: cel
          params:
            - name: overlays
              value:
                - key: truncated_sha
                  expression: "body.after.truncate(7)"
                - key: branch_name
                  expression: "body.ref.split('/')[2]"
      bindings:
        - ref: github-push
      template:
        ref: github-pipelinerun
And now, you can expose that EventListener with an Ingress, to receive notifications from any of your GitHub repositories.
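A minimal Ingress sketch for that (the hostname is a placeholder; Tekton exposes the listener through a Service named el-<EventListener name>, which listens on port 8080 by default):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: github-listener
spec:
  rules:
    - host: tekton.example.com # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # Service created by Tekton for the EventListener above
                name: el-github-listener
                port:
                  number: 8080

One EventListener can then serve all your repositories; only the webhook URL has to be configured per repository on the GitHub side.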

EventRule matched by default EventBridge EventBus ignored by custom EventBus

I have a CloudFormation template which implements a CI/CD process for Lambda functions (at the bottom).
Essentially it:
- watches a GitHub repo
- pulls source code on new git tag creation
- starts a CodeBuild process which runs unit tests, zips the source code (on test success) and pushes the archive to S3
- enables CodeBuild notifications
- implements a CloudWatch EventRule to pattern-match raw CodeBuild notifications, format them and push them to SNS
- binds a Lambda function to SNS, which pushes the notifications to Slack via a webhook
This works fine with the default EventBridge EventBus, but the pattern matching seems to fail if I switch to a custom EventBus.
(see resources EventBus and EventRule in the stack; it's currently set up to use the custom EventBus, and fails / ignores new git tags; if you comment out the reference to property EventBusName in EventBus, it defaults to using the default EventBus and works)
Why would a custom EventBus behave differently from the default EventBus in this situation?
---
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  AppName:
    Type: String
  RepoOwner:
    Type: String
  RepoName:
    Type: String
  RepoBranch:
    Type: String
    Default: master
  RepoAuth:
    Type: String
    Default: master
  WebhookUrl:
    Type: String
  WebhookLambda:
    Type: String
  CodeBuildBuildSpec:
    Type: String
  CodeBuildType:
    Type: String
    Default: LINUX_CONTAINER
  CodeBuildComputeType:
    Type: String
    Default: BUILD_GENERAL1_SMALL
  CodeBuildImage:
    Type: String
    Default: aws/codebuild/standard:4.0
  LambdaHandler:
    Type: String
    Default: "index.handler"
  LambdaMemory:
    Type: Number
    Default: 128
  LambdaTimeout:
    Type: Number
    Default: 30
  LambdaRuntime:
    Type: String
    Default: python3.8
Resources:
  ArtifactsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName:
        Fn::Sub:
          - ${app_name}-lambda-artifacts
          - app_name:
              Ref: AppName
  CodeBuildProject:
    Properties:
      Environment:
        ComputeType:
          Ref: CodeBuildComputeType
        Image:
          Ref: CodeBuildImage
        Type:
          Ref: CodeBuildType
      Name:
        Fn::Sub:
          - ${app_name}-lambda-ci
          - app_name:
              Ref: AppName
      ServiceRole:
        Fn::GetAtt:
          - CodeBuildRole
          - Arn
      Source:
        Auth:
          Resource:
            Ref: RepoAuth
          Type: OAUTH
        Location:
          Fn::Sub:
            - "https://github.com/${repo_owner}/${repo_name}.git"
            - repo_owner:
                Ref: RepoOwner
              repo_name:
                Ref: RepoName
        Type: GITHUB
        BuildSpec:
          Fn::Sub:
            - "${build_spec}"
            - build_spec:
                Ref: CodeBuildBuildSpec
      Artifacts:
        Type: NO_ARTIFACTS
      SourceVersion:
        Ref: RepoBranch
      Triggers:
        Webhook: true
        FilterGroups:
          - - Type: EVENT
              Pattern: PUSH
              ExcludeMatchedPattern: false
            - Type: HEAD_REF
              Pattern: "refs/tags/.*"
              ExcludeMatchedPattern: false
    Type: AWS::CodeBuild::Project
  CodeBuildRole:
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: codebuild.amazonaws.com
        Version: '2012-10-17'
      Policies:
        - PolicyDocument:
            Statement:
              - Action:
                  - codebuild:*
                  - events:*
                  - s3:PutObject
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Effect: Allow
                Resource: '*'
            Version: '2012-10-17'
          PolicyName: code-build-role-policy
    Type: AWS::IAM::Role
  WebhookFunction:
    Properties:
      FunctionName:
        Fn::Sub:
          - ${app_name}-lambda-webhook
          - app_name:
              Ref: AppName
      Code:
        ZipFile:
          Ref: WebhookLambda
      Environment:
        Variables:
          WEBHOOK_URL:
            Ref: WebhookUrl
      Handler:
        Ref: LambdaHandler
      MemorySize:
        Ref: LambdaMemory
      Role:
        Fn::GetAtt:
          - WebhookFunctionRole
          - Arn
      Runtime:
        Ref: LambdaRuntime
      Timeout:
        Ref: LambdaTimeout
    Type: AWS::Lambda::Function
  WebhookFunctionRole:
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
        Version: '2012-10-17'
      Policies:
        - PolicyDocument:
            Statement:
              - Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Effect: Allow
                Resource: '*'
            Version: '2012-10-17'
          PolicyName: webhook-role-policy
    Type: AWS::IAM::Role
  WebhookFunctionPermission:
    Properties:
      Action: "lambda:InvokeFunction"
      FunctionName:
        Ref: WebhookFunction
      Principal: "sns.amazonaws.com"
      SourceArn:
        Ref: WebhookTopic
    Type: AWS::Lambda::Permission
  WebhookTopic:
    Properties:
      Subscription:
        - Protocol: lambda
          Endpoint:
            Fn::GetAtt:
              - WebhookFunction
              - Arn
    Type: AWS::SNS::Topic
  WebhookTopicPolicy:
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "events.amazonaws.com"
            Action:
              - "sns:Publish"
            Resource:
              Ref: WebhookTopic
      Topics:
        - Ref: WebhookTopic
    Type: AWS::SNS::TopicPolicy
  EventBus:
    Type: AWS::Events::EventBus
    Properties:
      Name:
        Fn::Sub:
          - ${app_name}-bus
          - app_name:
              Ref: AppName
  EventRule:
    Type: AWS::Events::Rule
    Properties:
      EventBusName: # CURRENTLY USING CUSTOM EVENT BUS (PATTERN MATCHING FAILS); REMOVE THIS PROPERTY TO SWITCH TO DEFAULT EVENT BUS (PATTERN MATCHING WORKS)
        Ref: EventBus
      EventPattern:
        source:
          - "aws.codebuild"
        detail-type:
          - "CodeBuild Build Phase Change"
        detail:
          completed-phase:
            - SUBMITTED
            - PROVISIONING
            - DOWNLOAD_SOURCE
            - INSTALL
            - PRE_BUILD
            - BUILD
            - POST_BUILD
            - UPLOAD_ARTIFACTS
            - FINALIZING
          completed-phase-status:
            - TIMED_OUT
            - STOPPED
            - FAILED
            - SUCCEEDED
            - FAULT
            - CLIENT_ERROR
          project-name:
            - Ref: CodeBuildProject
      State: ENABLED
      Targets:
        - Arn:
            Ref: WebhookTopic
          Id:
            Fn::Sub:
              - "${project_name}-codebuild-notifications"
              - project_name:
                  Ref: CodeBuildProject
          InputTransformer:
            InputPathsMap:
              build-id: "$.detail.build-id"
              project-name: "$.detail.project-name"
              completed-phase: "$.detail.completed-phase"
              completed-phase-status: "$.detail.completed-phase-status"
            InputTemplate: |
              "{'build-id': '<build-id>', 'project-name': '<project-name>', 'completed-phase': '<completed-phase>', 'completed-phase-status': '<completed-phase-status>'}"
"Custom event buses serve a use case of receiving events from your custom applications and services. Unfortunately, it is not possible for AWS services to push events to a custom event bus." (AWS Support)

duckling module for rasa

I am trying to train my bot based on rasa_nlu.
Below is my config file. I have problems because an entity like "next month" is recognized by ner_spacy as something other than time data. I want this type of entity to be recognized only by the duckling module.
Thanks
language: "en"
project: "nav-os"
pipeline:
- name: "nlp_spacy"
model: "en"
- name: "ner_spacy"
- name: "tokenizer_spacy"
- name: "intent_entity_featurizer_regex"
- name: "intent_featurizer_spacy"
- name: "ner_crf"
- name: "ner_synonyms"
- name: "intent_classifier_sklearn"
- name: "ner_duckling"
dimensions:
- "time"
You could exclude that dimension for spaCy by defining only the ones you want to include, as described in the documentation.
That means you could configure the spaCy entity extractor component like the following, to extract only PERSON entities (just as an example).
pipeline:
  - name: "SpacyEntityExtractor"
    # dimensions to extract
    dimensions: ["PERSON"]

Create bigquery table using google cloud deployment manager YAML file

I am trying to create a BigQuery table using Deployment Manager with the following YAML file:
imports:
  - path: schema.txt
resources:
  - name: test
    type: bigquery.v2.table
    properties:
      datasetId: test_dt
      tableReference:
        datasetId: test_dt
        projectId: test_dev
        tableId: test
      schema:
        fields: {{ imports["schema.txt"] }}
However, when I try to give the table schema definition via a .txt file, I get a parsing error. If I give the schema definition inline instead of via the .txt file, the script runs successfully. This method of importing a text file is given in the Google Cloud help. Can anyone help me with this?
I think the way the deployment manager is formatting the contents of the .txt file might be incorrect. A good way to debug this would be to collect HTTP request traces and compare the difference between the two requests.
This is the YAML we could get working with nested and repeated fields in a BigQuery Deployment Manager template.
# Example of the BigQuery (dataset and table) template usage.
#
# Replace `<FIXME:my_account@email.com>` with your account email.
imports:
  - path: templates/bigquery/bigquery_dataset.py
    name: bigquery_dataset.py
  - path: templates/bigquery/bigquery_table.py
    name: bigquery_table.py
resources:
  - name: dataset_name_here
    type: bigquery_dataset.py
    properties:
      name: dataset_name_here
      location: US
      access:
        - role: OWNER
          userByEmail: my_account@email.com
  - name: table_name_here
    type: bigquery_table.py
    properties:
      name: table_name_here
      datasetId: $(ref.dataset_name_here.datasetId)
      timePartitioning:
        field:
        type: DAY
      schema:
        - name: column1
          type: STRUCT
          fields:
            - name: column2
              type: string
        - name: test1
          type: RECORD
          mode: REPEATED
          fields:
            - name: test2
              type: string
Sample YAML for creating a view in BigQuery using Deployment Manager:
Note: this YAML also shows how to create partitioning (_PARTITIONTIME) on a table (hello_table).
# Example of the BigQuery (dataset and table) template usage.
# Replace `<FIXME:my_account@email.com>` with your account email.
imports:
  - path: templates/bigquery/bigquery_dataset.py
    name: bigquery_dataset.py
  - path: templates/bigquery/bigquery_table.py
    name: bigquery_table.py
resources:
  - name: dataset_name
    type: bigquery_dataset.py
    properties:
      name: dataset_name
      location: US
      access:
        - role: OWNER
          userByEmail: my_account@email.com
  - name: hello
    type: bigquery_table.py
    properties:
      name: hello_table
      datasetId: $(ref.dataset_name.datasetId)
      timePartitioning:
        type: DAY
      schema:
        - name: partner_id
          type: STRING
  - name: view_step
    type: bigquery_table.py
    properties:
      name: hello_view
      datasetId: $(ref.dataset_name.datasetId)
      view:
        query: select partner_id from `project_name.dataset_name.hello_table`
        useLegacySql: False