How to access a cross-data-store sequence-generated ID within the same use case (end-to-end testing)

I am working on an end-to-end testing scenario where a user record's perf map must use a key that references the meta_table id within the same use case.
While the user data is stored in a NoSQL database, meta_table is stored in an RDBMS and uses an auto-generated id, so each time I run the test workflow
meta_table gets a completely new id for the same use case.
Here is my data setup workflow:
pipeline:
  db1:
    register:
      action: dsunit:register
      datastore: db1
      config:
        driverName: mysql
        descriptor: '[username]:[password]@tcp(127.0.0.1:3306)/[dbname]?parseTime=true'
        credentials: $mysqlCredentials
        parameters:
          dbname: db1
    prepare:
      sequence:
        datastore: db1
        action: dsunit.sequence
        tables:
          - meta_table
        post:
          - seq = $Sequences
      data:
        action: nop
        init:
          - db1key = data.db1.setup
          - db1Setup = $AsTableRecords($db1key)
      setup:
        datastore: db1
        action: dsunit:prepare
        data: $db1Setup
  db2:
    register:
      action: dsunit:register
      datastore: db2
      config:
        driverName: aerospike
        descriptor: tcp([host]:3000)/[namespace]
        parameters:
          namespace: test
          host: 127.0.0.1
          port: 3000
    prepare:
      data:
        action: nop
        init:
          - db2key = data.db2.setup
          - db2Setup = $AsTableRecords($db2key)
      setup:
        datastore: db2
        action: dsunit:prepare
        data: $db2Setup
And here are my data files:
#db1_setup.json
[
  {
    "Table": "meta_table",
    "Value": {
      "id": "$seq.meta_table",
      "name": "Name 1",
      "type_id": 1
    },
    "PostIncrement": [
      "seq.meta_table"
    ],
    "Key": "${tagId}_meta_table"
  }
]
#db2_setup.json
[
  {
    "Table": "user",
    "Value": {
      "id": "$userID",
      "email": "user@abc.com",
      "perf": {
        "??meta_table.id here??": "$timestamp"
      }
    },
    "AutoGenerate": {
      "userID": "uuid.next"
    },
    "Key": "${tagId}_user"
  }
]
I am looking for a valid expression to replace
"??meta_table.id here??"
so that it expands to meta_table.id.
I have tried the following, but neither worked:
$meta_table.id
${meta_table.id}

You want to use
${dsunit.${tagId}_meta_table[0].id}
- dsunit, since that is the root
- ${tagId}_meta_table, since that is the Key of your table
- [0], since the table can have multiple rows
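For example, the perf map in db2_setup.json would then look something like this (a sketch based on the expression above):
"perf": {
  "${dsunit.${tagId}_meta_table[0].id}": "$timestamp"
}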

Related

Define a condition in a CloudFormation template with alarms

How do I define/declare a condition to create an alarm only in prod?
Would Condition: IsProd work to create the alarm in prod?
Would the following work, and how should the condition be defined?
LambdaInvocationsAlarm:
  Condition: IsProd
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Lambda invocations
    AlarmName: LambdaInvocationsAlarm
    ComparisonOperator: LessThanLowerOrGreaterThanUpperThreshold
    EvaluationPeriods: 1
    Metrics:
      - Expression: ANOMALY_DETECTION_BAND(m1, 2)
        Id: ad1
      - Id: m1
        MetricStat:
          Metric:
            MetricName: Invocations
            Namespace: AWS/Lambda
          Period: !!int 86400
          Stat: Sum
    ThresholdMetricId: ad1
    TreatMissingData: breaching
As @Marcin said, you should explain more precisely what you have tried and what is blocking you.
But what you suggest could work, yes: you can define a Condition named isProd and use it to create, or not create, resources. Regarding this condition: AWS does not know what a production stage is in your environment, so you need to specify that. Does your production stage match an account? Does it match a region? Something else?
As an example, if we assume that your production stage matches a specific AWS account, you could define the condition as below (it's JSON; feel free to convert it to YAML):
{
  "Parameters": {
    "ProdAccountParameter": {
      "Type": "String",
      "Description": "Enter the production account identifier."
    }
  },
  "Conditions": {
    "isProd": {
      "Fn::Equals": [
        { "Ref": "ProdAccountParameter" },
        { "Ref": "AWS::AccountId" }
      ]
    }
  },
  ...
}
(Then, when deploying the template, you'll need to provide your AWS production account).
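For example, with the AWS CLI the account id could be passed at deploy time roughly like this (a sketch; the stack name and template file name are placeholders):
aws cloudformation deploy \
  --template-file template.json \
  --stack-name my-prod-stack \
  --parameter-overrides ProdAccountParameter=123456789012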

Ansible if nested value doesn't exist in nested array

I'd like to make my Ansible EIP creation idempotent. In order to do that, I only want the task to run when a tag "Name" with the value "tag_1" doesn't exist.
However, I'm not sure how I could add this as a 'when' condition at the end of the task.
"eip_facts.addresses": [
{
"allocation_id": "eipalloc-blablah1",
"domain": "vpc",
"public_ip": "11.11.11.11",
"tags": {
"Name": "tag_1",
}
},
{
"allocation_id": "eipalloc-blablah2",
"domain": "vpc",
"public_ip": "22.22.22.22",
"tags": {
"Name": "tag_2",
}
},
{
"allocation_id": "eipalloc-blablah3",
"domain": "vpc",
"public_ip": "33.33.33.33",
"tags": {
"Name": "tag_3",
}
}
]
(Tags are added later) I'm looking for something like:
- name: create elastic ip
  ec2_eip:
    region: eu-west-1
    in_vpc: yes
  when: eip_facts.addresses[].tags.Name = "tag_1" is not defined
What is the correct method of achieving this? Bear in mind the value may not exist anywhere in the entire array, not just in a single element.
OK, I found a semi-decent solution:
- name: Get list of EIP Name Tags
  set_fact:
    eip_facts_Name_tag: "{{ eip_facts.addresses | map(attribute='tags.Name') | list }}"
which extracts the Name tags and puts them into a list:
ok: [localhost] => {
  "msg": [
    "tag_1",
    "tag_2",
    "tag_3"
  ]
}
and then...
- debug:
    msg: "Hello"
  when: '"tag_1" in "{{ eip_facts_Name_tag }}"'
This will work; beware, though, that it doesn't do an exact string match, so searching for just 'tag' would count as a hit too.
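Applied back to the original task, a minimal sketch might look like the following, assuming eip_facts_Name_tag has been set as above (testing list membership directly avoids the substring issue):
- name: create elastic ip
  ec2_eip:
    region: eu-west-1
    in_vpc: yes
  when: '"tag_1" not in eip_facts_Name_tag'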

Using $ref for jsonschema in Abao

Can someone help with schema refs in Abao? How do I use the --schemas option? Here is a simple gist: https://gist.github.com/SeanSilke/e5a2f7673ad4aa2aa43ba800c9aec31b
I try to run "abao api.raml --schemas fref.json" but get the error "Missing/unresolved JSON schema $refs (fref.json) in schema".
By the way, the server is mocked by osprey-mock-service.
You need to add an id field to your JSON schemas.
To run, use: abao api.raml --server http://localhost:3000 --schemas=./*.json
Example files:
api.raml
#%RAML 0.8
title: simple API
baseUri: http://localhost:3000
/song:
get:
responses:
200:
body:
application/json:
schema: !include schema.json
example: |
{
"songId": "e29b",
"songTitle": "The song",
"albumId": "18310"
}
fref.json
{
  "id": "fref.json",
  "type": "string"
}
schema.json
{
  "$schema": "http://json-schema.org/draft-03/schema",
  "id": "schema.json",
  "type": "object",
  "properties": {
    "songId": {"$ref": "fref.json"}
  },
  "required": ["songId", "albumId", "songTitle"]
}

How to parameterize ports in OpenShift JSON Project Template

I'm trying to create a custom project template in OpenShift Origin. The Service configuration, specifically, looks like the one below:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "${NAME}",
    "annotations": {
      "description": "Exposes and load balances the node.js application pods"
    }
  },
  "spec": {
    "ports": [
      {
        "name": "web",
        "port": "${APPLICATION_PORT}",
        "targetPort": "${APPLICATION_PORT}",
        "protocol": "TCP"
      }
    ],
    "selector": {
      "name": "${NAME}"
    }
  }
},
where APPLICATION_PORT is supplied as a user parameter:
"parameters": [
  {
    "name": "APPLICATION_PORT",
    "displayName": "Application Port",
    "description": "The exposed port that will route to the node.js application",
    "value": "8000"
  },
When I try to use this template to create a project, I get the following error:
spec.ports[0].targetPort: Invalid value: "8000": must be an IANA_SVC_NAME (at most 15 characters, matching regex [a-z0-9]([a-z0-9-]*[a-z0-9])*...
I get a similar error in my DeploymentConfig as well, for the http ports in the liveness and readiness probes:
"readinessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 3,
"httpGet": {
"path": "/Info",
"port": "${APPLICATION_ADMIN_PORT}"
}
},
"livenessProbe": {
"timeoutSeconds": 3,
"initialDelaySeconds": 30,
"httpGet": {
"path": "/Info",
"port": "${APPLICATION_ADMIN_PORT}"
}
},
where APPLICATION_ADMIN_PORT, again, is user-supplied.
Error:
spec.template.spec.containers[0].livenessProbe.httpGet.port: Invalid value: "8001": must be an IANA_SVC_NAME...
spec.template.spec.containers[0].readinessProbe.httpGet.port: Invalid value: "8001": must be an IANA_SVC_NAME...
I've been following https://blog.openshift.com/part-2-creating-a-template-a-technical-walkthrough/ to understand templates, and it, unfortunately, does not have any examples of ports being parameterized anywhere.
It almost seems as if strings are not allowed as the values of these ports. Is that the case? What's the right way to parameterize these values? Should I switch to YAML?
Versions:
OpenShift Master: v1.1.6-3-g9c5694f
Kubernetes Master: v1.2.0-36-g4a3f9c5
Edit 1: I tried the same configuration in YAML format, and got the same error. So, JSON vs YAML is not the issue.
Unfortunately it is not currently possible to parameterize non-string field values: https://docs.openshift.org/latest/dev_guide/templates.html#writing-parameters
" Parameters can be referenced by placing values in the form "${PARAMETER_NAME}" in place of any string field in the template."
Templates are in the process of being upstreamed to Kubernetes and this limitation is being addressed there:
https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/templates.md
The proposal is being implemented in PRs 25622 and 25293 in the kubernetes repo.
edit:
Templates now support non-string parameters as documented here: https://docs.openshift.org/latest/dev_guide/templates.html#writing-parameters
I don't know if this option was available in 2016 when this post was added, but now you can use ${{PARAMETER_NAME}} to parameterize non-string field values.
spec:
  externalTrafficPolicy: Cluster
  ports:
    - name: ${NAME}-port
      port: ${{PORT_PARAMETER}}
      protocol: TCP
      targetPort: ${{PORT_PARAMETER}}
  sessionAffinity: None
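For completeness, the parameter itself is still declared as a regular (string) template parameter; a minimal sketch using the names from the snippet above:
parameters:
  - name: NAME
    required: true
  - name: PORT_PARAMETER
    value: "8080"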
This may be a bad practice, but I'm using sed to substitute int parameters:
cat template.yaml | sed -e 's/PORT/8080/g' > proxy-template-subst.yaml
Template:
apiVersion: template.openshift.io/v1
kind: Template
objects:
- apiVersion: v1
kind: Service
metadata:
name: ${NAME}
namespace: ${NAMESPACE}
spec:
externalTrafficPolicy: Cluster
ports:
- name: ${NAME}-port
port: PORT
protocol: TCP
targetPort: PORT
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
parameters:
- description: Desired service name
name: NAME
required: true
value: need_real_value_here
- description: IP adress
name: IP
required: true
value: need_real_value_here
- description: namespace where to deploy
name: NAMESPACE
required: true
value: need_real_value_here
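For reference, a template like this is typically instantiated with oc process; a rough sketch combining it with the sed substitution above (the parameter values here are placeholders):
sed -e 's/PORT/8080/g' template.yaml > proxy-template-subst.yaml
oc process -f proxy-template-subst.yaml -p NAME=my-service -p NAMESPACE=my-project -p IP=10.0.0.1 | oc apply -f -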

Bigquery add columns to table schema

I am trying to add a new column to an existing BigQuery table. I have tried the bq command-line tool and the API approach. I get the following error when making a call to Tables.update().
I have also tried providing the full schema with the additional field, and that gives me the same error, shown below.
With the API I get the following error:
{
  "schema": {
    "fields": [{
      "name": "added_column",
      "type": "integer",
      "mode": "nullable"
    }]
  }
}

{
  "error": {
    "errors": [{
      "domain": "global",
      "reason": "invalid",
      "message": "Provided Schema does not match Table [blah]"
    }],
    "code": 400,
    "message": "Provided Schema does not match Table [blah]"
  }
}
With the bq tool I get the following error:
./bq update -t blah added_column:integer
BigQuery error in update operation: Provided Schema does not match Table [blah]
Try this:
bq --format=prettyjson show yourdataset.yourtable > table.json
Edit table.json and remove everything except the inside of "fields" (e.g. keep the [ { "name": "x" ... }, ... ]). Then add your new field to the schema.
Or pipe through jq
bq --format=prettyjson show yourdataset.yourtable | jq .schema.fields > table.json
Then run:
bq update yourdataset.yourtable table.json
You can add --apilog=apilog.txt to the beginning of the command line, which will show exactly what is sent to and returned from the BigQuery server.
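For illustration, after editing, table.json might end up looking like this (a sketch; the existing column name is made up, added_column is the new field from the question):
[
  { "name": "existing_column", "type": "STRING", "mode": "NULLABLE" },
  { "name": "added_column", "type": "INTEGER", "mode": "NULLABLE" }
]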
In my case I was trying to add a REQUIRED field to a template table, and was running into this error. Changing the field to NULLABLE let me update the table.
Also, here is a more recent version of the update commands for anybody stumbling in from Google:
# To create a table
bq mk --schema domain:string,pageType:string,source:string -t Project:Dataset.table

# Or using a schema file
bq mk --schema SchemaFile.json -t Project:Dataset.table

# SchemaFile.json format
[
  {
    "mode": "REQUIRED",
    "name": "utcTime",
    "type": "TIMESTAMP"
  },
  {
    "mode": "REQUIRED",
    "name": "domain",
    "type": "STRING"
  },
  {
    "mode": "NULLABLE",
    "name": "testBucket",
    "type": "STRING"
  },
  {
    "mode": "REQUIRED",
    "name": "isMobile",
    "type": "BOOLEAN"
  },
  {
    "mode": "REQUIRED",
    "name": "Category",
    "type": "RECORD",
    "fields": [
      {
        "mode": "NULLABLE",
        "name": "Type",
        "type": "STRING"
      },
      {
        "mode": "REQUIRED",
        "name": "Published",
        "type": "BOOLEAN"
      }
    ]
  }
]

# To update
bq update --schema UpdatedSchema.json -t Project:Dataset.table
# UpdatedSchema.json contains the old columns plus any newly added columns
Some docs for template tables
Example using the BigQuery Node JS API:
// Assumes: npm install @google-cloud/bigquery
const {BigQuery} = require('@google-cloud/bigquery');
const bigQuery = new BigQuery();

// Run this inside an async function so that `await` is valid.
const fieldDefinition = {
  name: 'nestedColumn',
  type: 'RECORD',
  mode: 'REPEATED',
  fields: [
    {name: 'id', type: 'INTEGER', mode: 'NULLABLE'},
    {name: 'amount', type: 'INTEGER', mode: 'NULLABLE'},
  ],
};

// Fetch the current table metadata, append the new field, and write it back.
const table = bigQuery.dataset('dataset1').table('source_table_name');
const metaDataResult = await table.getMetadata();
const metaData = metaDataResult[0];
const fields = metaData.schema.fields;
fields.push(fieldDefinition);
await table.setMetadata({schema: {fields}});
I was stuck trying to add columns to an existing table in BigQuery using the Python client and found this post several times. I'll leave the piece of code that solved it for me here, in case someone's having the same problem:
# update table schema
from google.cloud import bigquery

bigquery_client = bigquery.Client()
dataset_ref = bigquery_client.dataset(dataset_id)
table_ref = dataset_ref.table(table_id)
table = bigquery_client.get_table(table_ref)
new_schema = list(table.schema)
new_schema.append(bigquery.SchemaField('LOLWTFMAN', 'STRING'))
table.schema = new_schema
table = bigquery_client.update_table(table, ['schema'])  # API request
You can also add columns to your table schema through the GCP console, which is easier and clearer.
Here's a quick snippet I wrote that will dynamically add schema columns if the data coming in (from a server, etc.) doesn't match what currently exists in a BigQuery table:
from google.cloud import bigquery

def verify_schema(client, table, data_dict):
    """Append any keys in data_dict that are missing from the table schema as NULLABLE STRING columns."""
    schema = list(table.schema)
    existing_names = {field.name for field in schema}
    new_fields = [key for key in data_dict if key not in existing_names]
    if new_fields:
        for name in new_fields:
            schema.append(bigquery.SchemaField(name=name, field_type='STRING', mode='NULLABLE'))
        table.schema = schema
        client.update_table(table, ['schema'])
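A hypothetical usage of the snippet above, with made-up dataset, table, and column names:
from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table('my_dataset.my_table')  # hypothetical dataset/table
incoming_row = {'existing_col': 'a', 'brand_new_col': 'b'}  # made-up record

# Adds 'brand_new_col' as a NULLABLE STRING column if it is not already in the schema.
verify_schema(client, table, incoming_row)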