I need to describe a REST (JSON) API with OpenAPI (Swagger) syntax. I am stuck at the point where I need to describe a nested request body. Please suggest how to do it; let's use the following nested request body as an example:
{
  "pauses" : [
    {"name" : "PAUSING_AUTO"},
    {"name" : "NO_PAUSE_CRITERIA", "Min" : 15},
    {"name" : "PREVENTED_PAUSE", "Min" : 5},
    {"name" : "REVERT_TO_RUN"},
    {"name" : "RUNNING"}
  ]
}
The following description would do:
pauses:
  type: "array"
  items:
    type: "object"
    required:
      - name
    properties:
      name:
        type: "string"
      Min:
        type: "integer"
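As a quick sanity check that the definition above accepts the example payload, here is a small sketch using the Python jsonschema package with the JSON-Schema equivalent of that snippet (using jsonschema is an illustration on my part, not part of OpenAPI itself):

```python
from jsonschema import validate

# JSON-Schema equivalent of the OpenAPI "pauses" definition above.
schema = {
    "type": "object",
    "properties": {
        "pauses": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["name"],
                "properties": {
                    "name": {"type": "string"},
                    "Min": {"type": "integer"},
                },
            },
        }
    },
}

# The example request body from the question.
body = {
    "pauses": [
        {"name": "PAUSING_AUTO"},
        {"name": "NO_PAUSE_CRITERIA", "Min": 15},
        {"name": "PREVENTED_PAUSE", "Min": 5},
        {"name": "REVERT_TO_RUN"},
        {"name": "RUNNING"},
    ]
}

validate(instance=body, schema=schema)  # raises ValidationError on mismatch
print("body matches the schema")
```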
I am attempting to run an existing OpenAPI schema through OpenAPI Enforcer and I am getting various validation errors in its usage of allOf. One such usage is described below.
Consider the following property in a schema:
queryVersion:
  allOf:
    - $ref: 'VersionDefinition.yaml'
    - description: >-
        This is my overriding version
VersionDefinition.yaml is defined as follows:
description: >-
Some default version description.
type: string
default: '5.2'
There are two issues with the above definition:
OpenAPI Enforcer expects all schemas defined within allOf or similar keywords to start with a type definition, so the error it spits out is:
at: queryVersion > allOf > 1
Missing required property: type
I fix that by modifying the allOf definition as follows:
queryVersion:
  allOf:
    - $ref: 'VersionDefinition.yaml'
    - type: object
      properties:
        - description: >-
            This is my overriding version
That eliminates the error, but what should I expect to see in the generated schema? The original author indicates he is using allOf to override the property description. However, the generated schema includes this result:
"queryVersion": {
  "allOf": [
    {
      "description": "Some default version description",
      "type": "string",
      "default": "5.2"
    },
    {
      "type": "object",
      "properties": [
        {
          "description": "This is my overriding version"
        }
      ]
    }
  ]
}
What I expected to see was:
"queryVersion": {
  "description": "This is my overriding version",
  "type": "string",
  "default": "5.2"
}
I'll keep digging but any ideas?
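One pattern worth trying (a sketch on my part, not verified against OpenAPI Enforcer specifically) is to keep only the $ref inside allOf and put the overriding description beside allOf, where it is an ordinary schema keyword rather than a second allOf member:

```yaml
queryVersion:
  description: >-
    This is my overriding version
  allOf:
    - $ref: 'VersionDefinition.yaml'
```

Unlike a description placed directly next to $ref (which JSON Reference processing ignores in OpenAPI 3.0), a description next to allOf belongs to the combining schema itself, so most tools will show it while still inheriting type and default from VersionDefinition.yaml.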
We are using Camunda for our approval process implementation in our application.
We created a BPMN process with a Human Task. We start it using the below URL:
engine-rest/engine/default/process-definition/key/processKey/start
We pass our form parameters as input to this service:
{
  "variables": {
    "requestId" : {"value" : "xxxxx", "type" : "String"},
    "catalog" : {"value" : "yyyy", "type" : "String"},
    "businessReason" : {"value" : "yyyyy", "type" : "String"},
    "link" : {"value" : "", "type" : "String"}
  }
}
The response of this start call is below:
{
  "links": [
    {
      "method": "GET",
      "href": "http://localhost:8080/engine-rest/engine/default/process-instance/31701",
      "rel": "self"
    }
  ],
  "id": "31701",
  "definitionId": "xxxxx:7:31605",
  "businessKey": null,
  "caseInstanceId": null,
  "ended": false,
  "suspended": false,
  "tenantId": null
}
The id in the response is not the actual task id that we use to get the task details etc.; instead it is the execution id.
Is there a way to get the task id back in the response? Also, can we add some parameters to the above response, like
"status" : "success"
I have a listener class created for the Human Task, but I am not sure how to add response parameters. Any help is appreciated.
This is not possible unless you build a custom REST resource on top of Camunda's Java API. See https://docs.camunda.org/manual/7.6/reference/rest/overview/embeddability/ for information on how to embed the default REST resources into a custom JAX-RS application.
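Short of embedding, a common workaround is a second REST call: the task belonging to the new instance can be looked up by process instance id via Camunda's task query endpoint, GET engine-rest/task?processInstanceId=&lt;id&gt;. A small sketch of extracting the task id from such a response (the sample response below is hypothetical and trimmed to the relevant fields):

```python
def task_ids_for_instance(tasks, process_instance_id):
    """Return the ids of the tasks that belong to one process instance."""
    return [t["id"] for t in tasks
            if t.get("processInstanceId") == process_instance_id]

# Hypothetical trimmed response from GET engine-rest/task?processInstanceId=31701
sample_tasks = [
    {"id": "31710", "name": "Approve request", "processInstanceId": "31701"},
    {"id": "31799", "name": "Other task", "processInstanceId": "99999"},
]

print(task_ids_for_instance(sample_tasks, "31701"))  # ['31710']
```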
I want to create a feedback mechanism where, if wit.ai fails to understand some command, it can suggest a list of intents for the user to choose from; using that choice I can update the synonym under that entity.
You can use this URL to get all your intents:
curl -XGET 'https://api.wit.ai/entities/intent?v=20170101' -H "Authorization: Bearer $TOKEN"
{
"builtin" : false,
"doc" : "User-defined entity",
"exotic" : false,
"id" : "58731dcc-3180-43c9-46fd-8881447d9f0c",
"lang" : "en",
"lookups" : [ "trait" ],
"name" : "intent",
"values" : [ {
"value" : "demo-free",
"expressions" : [ "#Cortex, is demo free?", "Is demo free?" ]
}, {
"value" : "demo-info",
"expressions" : [ "#Cortex, who is using demo?", "#Cortex, Who's using demo?", "Who's using it?", "Who's using demo?" ]
}, {
"value" : "mongo-status",
"expressions" : [ "#Cortex, is mongo2 ok?", "#Cortex, how is mongo1?", "#Cortex, is mongo1 ok?", "#Cortex, is mongo ok>", "is mongo-1 ok?", "#Cortex, is mongodb ok?", "#Cortex, is mongo ok?", "how are the mongo servers?", "how are the mongod servers?", "is mongo ok", "is mongodb ok?", "Mongo status", "Check mongo status" ]
}, {
"value" : "cortex-help",
"expressions" : [ "what can you do for me?", "help me", "help", "how can you help me?", "What can you do?" ]
}, {
"value" : "mongo-logs",
"expressions" : [ "#Cortex, can I see all the db logs?", "#Cortex, can I see all db logs?", "#Cortex, can I see the mongo logs?", "can I see the mongo logs?", "can I see the mongod logs?", "let me see the mongo logs", "can I see the mongodb logs?", "show me mongo logs", "show me the mongo logs" ]
  } ]
}
You can use this to pick one (or more) example(s) of each intent and show those to your users.
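For example, the suggestion list can be derived from the values array of that response; a minimal sketch (the response here is trimmed to two intents):

```python
import json

# Trimmed copy of the /entities/intent response shown above.
entity = json.loads("""
{
  "name": "intent",
  "values": [
    {"value": "demo-free", "expressions": ["Is demo free?"]},
    {"value": "mongo-status", "expressions": ["Mongo status", "is mongo ok"]}
  ]
}
""")

# One representative expression per intent, to show as suggestions when
# wit.ai returns no (or a low-confidence) intent for a command.
suggestions = {v["value"]: v["expressions"][0] for v in entity["values"]}
print(suggestions)  # {'demo-free': 'Is demo free?', 'mongo-status': 'Mongo status'}
```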
Can someone help with schema refs in abao? How do I use the --schemas option? Here is a simple gist https://gist.github.com/SeanSilke/e5a2f7673ad4aa2aa43ba800c9aec31b
I try to run "abao api.raml --schemas fref.json" but get the error "Missing/unresolved JSON schema $refs (fref.json) in schema".
By the way, the server is mocked by osprey-mock-service.
You need to add an id field to your JSON schemas.
To run, use: abao api.raml --server http://localhost:3000 --schemas=./*.json
Example files:
api.raml
#%RAML 0.8
title: simple API
baseUri: http://localhost:3000
/song:
  get:
    responses:
      200:
        body:
          application/json:
            schema: !include schema.json
            example: |
              {
                "songId": "e29b",
                "songTitle": "The song",
                "albumId": "18310"
              }
fref.json
{
  "id": "fref.json",
  "type": "string"
}
schema.json
{
  "$schema": "http://json-schema.org/draft-03/schema",
  "id": "schema.json",
  "type": "object",
  "properties": {
    "songId": {"$ref": "fref.json"}
  },
  "required": ["songId", "albumId", "songTitle"]
}
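The role of the id field can be sketched in Python with the jsonschema package: the ref store maps "fref.json" to its schema, mirroring the two files above (draft-04 validation is assumed here rather than the draft-03 dialect the gist declares):

```python
from jsonschema import Draft4Validator, RefResolver

# Inline equivalents of fref.json and schema.json from the gist.
fref = {"id": "fref.json", "type": "string"}
schema = {
    "id": "schema.json",
    "type": "object",
    "properties": {"songId": {"$ref": "fref.json"}},
    "required": ["songId"],
}

# The id fields let the resolver match the bare "fref.json" $ref
# against a pre-registered schema instead of fetching it from disk.
resolver = RefResolver(base_uri="", referrer=schema, store={"fref.json": fref})
validator = Draft4Validator(schema, resolver=resolver)

print(validator.is_valid({"songId": "e29b"}))  # a string satisfies the $ref
print(validator.is_valid({"songId": 18310}))   # a number does not
```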
I am trying to read CSV files as input data and write the output in Avro format.
Note: Apache Pig version 0.12.1.2.1.5.0-695.
REGISTER /usr/lib/pig/lib/avro-1.7.4.jar;
REGISTER /usr/lib/pig/lib/piggybank.jar;
REGISTER /usr/lib/pig/lib/jackson-mapper-asl-1.8.8.jar;
REGISTER /usr/lib/pig/lib/jackson-core-asl-1.8.8.jar;
REGISTER /usr/lib/pig/lib/json-simple-1.1.1.jar;
A = LOAD '/data/raw/event';
store A into '/data/dev/raw/pig'
using org.apache.pig.piggybank.storage.avro.AvroStorage('no_schema_check',
'schema', ' {
"name" : "EVENT",
"type" : "record",
"fields" : [ {
"name" : "evt",
"type" : [ "long", "null" ]
}, {
"name" : "mac",
"type" : [ "int", "null" ]
}, {
"name" : "sec",
"type" : [ "int", "null" ]
} ]
}');
I get the below exception:
ERROR 2997: Unable to recreate exception from backed error: Error: org.apache.avro.file.DataFileWriter$AppendWriteException: java.lang.RuntimeException:
Unsupported type in record:class org.apache.pig.data.DataByteArray
at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:263)
at org.apache.pig.piggybank.storage.avro.PigAvroRecordWriter.write(PigAvroRecordWriter.java:49)
at org.apache.pig.piggybank.storage.avro.AvroStorage.putNext(AvroStorage.java:749)
Caused by: java.lang.RuntimeException: Unsupported type in record:class org.apache.pig.data.DataByteArray
at org.apache.pig.piggybank.storage.avro.PigAvroDatumWriter.getField(PigAvroDatumWriter.java:385)
at org.apache.pig.piggybank.storage.avro.PigAvroDatumWriter.writeRecord(PigAvroDatumWriter.java:363)
Please let me know if I have missed anything or if any workaround exists.
By default Pig will load all the fields as DataByteArray.
So you have to load the data with a schema, as follows:
A = LOAD '/data/raw/event' AS (evt:long, mac:int, sec:int);
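Putting that together with the script from the question, the load and store steps would look like this sketch (the comma delimiter in PigStorage(',') is an assumption about the CSV input; the rest matches the question's schema):

```pig
-- Load with an explicit schema so Avro sees typed fields, not DataByteArray.
A = LOAD '/data/raw/event' USING PigStorage(',') AS (evt:long, mac:int, sec:int);

STORE A INTO '/data/dev/raw/pig'
    USING org.apache.pig.piggybank.storage.avro.AvroStorage('no_schema_check',
        'schema', '{
            "name" : "EVENT",
            "type" : "record",
            "fields" : [
                {"name" : "evt", "type" : [ "long", "null" ]},
                {"name" : "mac", "type" : [ "int", "null" ]},
                {"name" : "sec", "type" : [ "int", "null" ]}
            ]
        }');
```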