How can I suggest intents to the user in wit.ai?

I want to create a feedback mechanism: if wit.ai fails to understand a command, it should suggest a list of intents for the user to choose from, and I can then use that choice to update the synonyms under that entity.

You can use this URL to get all your intents:
curl -XGET 'https://api.wit.ai/entities/intent?v=20170101' -H "Authorization: Bearer $TOKEN"
{
  "builtin" : false,
  "doc" : "User-defined entity",
  "exotic" : false,
  "id" : "58731dcc-3180-43c9-46fd-8881447d9f0c",
  "lang" : "en",
  "lookups" : [ "trait" ],
  "name" : "intent",
  "values" : [ {
    "value" : "demo-free",
    "expressions" : [ "#Cortex, is demo free?", "Is demo free?" ]
  }, {
    "value" : "demo-info",
    "expressions" : [ "#Cortex, who is using demo?", "#Cortex, Who's using demo?", "Who's using it?", "Who's using demo?" ]
  }, {
    "value" : "mongo-status",
    "expressions" : [ "#Cortex, is mongo2 ok?", "#Cortex, how is mongo1?", "#Cortex, is mongo1 ok?", "#Cortex, is mongo ok>", "is mongo-1 ok?", "#Cortex, is mongodb ok?", "#Cortex, is mongo ok?", "how are the mongo servers?", "how are the mongod servers?", "is mongo ok", "is mongodb ok?", "Mongo status", "Check mongo status" ]
  }, {
    "value" : "cortex-help",
    "expressions" : [ "what can you do for me?", "help me", "help", "how can you help me?", "What can you do?" ]
  }, {
    "value" : "mongo-logs",
    "expressions" : [ "#Cortex, can I see all the db logs?", "#Cortex, can I see all db logs?", "#Cortex, can I see the mongo logs?", "can I see the mongo logs?", "can I see the mongod logs?", "let me see the mongo logs", "can I see the mongodb logs?", "show me mongo logs", "show me the mongo logs" ]
  } ]
}
You can use this response to pick one (or more) example expressions for each intent and show them to your users.
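As a rough sketch of that feedback mechanism, the Python snippet below (using the requests package; the helper names and the fallback flow are my own and not part of any wit.ai SDK) fetches the intent entity from the same endpoint and offers one example expression per intent when a message is not understood:
import requests

WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"  # replace with your wit.ai server access token
HEADERS = {"Authorization": f"Bearer {WIT_TOKEN}"}

def get_intent_examples():
    """Fetch the intent entity and map each intent value to its first example expression."""
    resp = requests.get("https://api.wit.ai/entities/intent",
                        params={"v": "20170101"}, headers=HEADERS)
    resp.raise_for_status()
    return {v["value"]: v["expressions"][0] for v in resp.json()["values"]}

def suggest_intents(user_message):
    """Fallback used when wit.ai returns no confident intent for user_message."""
    examples = get_intent_examples()
    print(f"Sorry, I did not understand {user_message!r}. Did you mean one of these?")
    for i, (intent, example) in enumerate(sorted(examples.items()), start=1):
        print(f"  {i}. {intent} (e.g. {example!r})")
    # The intent the user picks can then be sent back to wit.ai as a new
    # expression for that intent, which is the "update the synonym" part of
    # the feedback loop described in the question.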

Related

How to describe nested request body in OpenAPI (Swagger) syntax?

I need to describe a REST (JSON) API with OpenAPI (Swagger) syntax. I am stuck at the point where I need to describe a nested request body. Please suggest how to do this; let's use the following nested request body as an example:
{
  "pauses" : [
    {"name" : "PAUSING_AUTO"},
    {"name" : "NO_PAUSE_CRITERIA", "Min" : 15},
    {"name" : "PREVENTED_PAUSE", "Min" : 5},
    {"name" : "REVERT_TO_RUN"},
    {"name" : "RUNNING"}
  ]
}
The following description would do:
pauses:
  type: "array"
  items:
    type: "object"
    required:
      - name
    properties:
      name:
        type: "string"
      Min:
        type: "integer"
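Not part of the original answer, but if you want to sanity-check that this structure matches the sample body, one option is to express the same shape as a JSON Schema and validate the payload with Python's jsonschema package (the test below is only an illustration under that assumption):
from jsonschema import validate  # pip install jsonschema

# JSON Schema equivalent of the Swagger snippet above
schema = {
    "type": "object",
    "properties": {
        "pauses": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["name"],
                "properties": {
                    "name": {"type": "string"},
                    "Min": {"type": "integer"},
                },
            },
        }
    },
}

body = {
    "pauses": [
        {"name": "PAUSING_AUTO"},
        {"name": "NO_PAUSE_CRITERIA", "Min": 15},
        {"name": "PREVENTED_PAUSE", "Min": 5},
        {"name": "REVERT_TO_RUN"},
        {"name": "RUNNING"},
    ]
}

validate(instance=body, schema=schema)  # raises ValidationError if the shape is wrong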

Get Camunda TaskID after creation in response

We are using Camunda for our approval process implementation in our application.
We created a BPMN process with a Human Task. We are using the URL below:
engine-rest/engine/default/process-definition/key/processKey/start
We pass our form parameters as input to this service:
{
  "variables": {
    "requestId" : {"value" : "xxxxx", "type" : "String"},
    "catalog" : {"value" : "yyyy", "type" : "String"},
    "businessReason": {"value" : "yyyyy", "type" : "String"},
    "link": {"value" : "", "type" : "String"}
  }
}
The response of this start call is below:
{
  "links": [
    {
      "method": "GET",
      "href": "http://localhost:8080/engine-rest/engine/default/process-instance/31701",
      "rel": "self"
    }
  ],
  "id": "31701",
  "definitionId": "xxxxx:7:31605",
  "businessKey": null,
  "caseInstanceId": null,
  "ended": false,
  "suspended": false,
  "tenantId": null
}
The id in the response is not the actual task ID that we use to get the task details etc.; instead it is the execution ID.
Is there a way to get the task ID back in the response? Also, can we add some parameters to the above response, like
"status" : "success"
I have a listener class created for the Human Task but am not sure how to add response parameters. Any help is appreciated.
This is not possible unless you build a custom REST resource on top of Camunda's Java API. See https://docs.camunda.org/manual/7.6/reference/rest/overview/embeddability/ for info on how to embed the default REST resources into a custom JAX-RS application.
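Not from the original answer, but a common two-step workaround with the default REST API is to treat the returned id as the process instance id and then query the task resource for that instance. A minimal Python sketch (the base URL and processKey are taken from the question; requests is assumed to be installed):
import requests

BASE = "http://localhost:8080/engine-rest/engine/default"  # as in the question

# 1. Start the process instance; the "id" in the response is the process instance id.
start = requests.post(
    f"{BASE}/process-definition/key/processKey/start",
    json={"variables": {"requestId": {"value": "xxxxx", "type": "String"}}},
)
process_instance_id = start.json()["id"]

# 2. Ask the task resource for the user task(s) belonging to that instance
#    (assumes the process has already reached the Human Task).
tasks = requests.get(f"{BASE}/task",
                     params={"processInstanceId": process_instance_id}).json()
task_id = tasks[0]["id"] if tasks else None
print(task_id)
Adding extra fields such as "status" : "success" to the start response itself still requires the custom resource described above.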

Using $ref for jsonschema in Abao

Can someone help with schema $refs in Abao? How do I use the --schemas option? Here is a simple gist: https://gist.github.com/SeanSilke/e5a2f7673ad4aa2aa43ba800c9aec31b
I try to run "abao api.raml --schemas fref.json" but get the error "Missing/unresolved JSON schema $refs (fref.json) in schema".
By the way, the server is mocked by osprey-mock-service.
You need to add an id field to your JSON schemas.
To run, use: abao api.raml --server http://localhost:3000 --schemas=./*.json
Example files:
api.raml
#%RAML 0.8
title: simple API
baseUri: http://localhost:3000
/song:
  get:
    responses:
      200:
        body:
          application/json:
            schema: !include schema.json
            example: |
              {
                "songId": "e29b",
                "songTitle": "The song",
                "albumId": "18310"
              }
fref.json
{
  "id": "fref.json",
  "type": "string"
}
schema.json
{
  "$schema": "http://json-schema.org/draft-03/schema",
  "id": "schema.json",
  "type": "object",
  "properties": {
    "songId": {"$ref": "fref.json"}
  },
  "required": ["songId", "albumId", "songTitle"]
}

Defining a queue with a config file in RabbitMQ

Is there a way to define a queue in a configuration file, like in ActiveMQ:
http://activemq.apache.org/configure-startup-destinations.html
Yes, it is possible.
The easiest way:
Add the queue manually from the webUI.
By default, the webUI is exposed on port 15672.
Add a queue at http://localhost:15672/#/queues
Export the config file from the webUI.
Access the main page http://localhost:15672/#/. At the bottom there is a section Import / export definitions, with a download broker definitions button.
Just download the file; it will contain all defined queues.
Sample config file with users, a virtual host and a queue:
I have formatted the file using the JStool plugin (JSFormat option) from Notepad++.
By default, the file is a single line and not very readable.
Next to 'download broker definitions' there is a button 'upload broker definitions'. You may upload your file (it will work with a pretty-formatted file).
{
  "rabbit_version" : "3.5.7",
  "users" : [ {
      "name" : "guest",
      "password_hash" : "42234423423",
      "tags" : "administrator"
    }
  ],
  "vhosts" : [ {
      "name" : "/uat"
    }
  ],
  "permissions" : [ {
      "user" : "guest",
      "vhost" : "/uat",
      "configure" : ".*",
      "write" : ".*",
      "read" : ".*"
    }
  ],
  "parameters" : [],
  "policies" : [],
  "queues" : [ {
      "name" : "sms",
      "vhost" : "/uat",
      "durable" : false,
      "auto_delete" : false,
      "arguments" : {}
    }
  ],
  "exchanges" : [],
  "bindings" : []
}
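If you prefer to script the import instead of using the upload button, the management plugin exposes the same operation over HTTP at /api/definitions. A minimal Python sketch, assuming the file above is saved as definitions.json and the default guest/guest credentials (both are assumptions, adjust to your setup):
import json
import requests

# Upload broker definitions to the management API (same effect as the webUI upload button).
with open("definitions.json") as f:
    definitions = json.load(f)

resp = requests.post(
    "http://localhost:15672/api/definitions",
    json=definitions,
    auth=("guest", "guest"),  # assumption: default credentials
)
resp.raise_for_status()
print("Definitions imported")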

Making storage plugin on Apache Drill to HDFS

I'm trying to make a storage plugin for Hadoop (HDFS) and Apache Drill.
Actually I'm confused: I don't know what to set as the port for the hdfs:// connection, and what to set for the location.
This is my plugin:
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://localhost:54310",
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    },
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "psv": {
      "type": "text",
      "extensions": [ "tbl" ],
      "delimiter": "|"
    },
    "csv": {
      "type": "text",
      "extensions": [ "csv" ],
      "delimiter": ","
    },
    "tsv": {
      "type": "text",
      "extensions": [ "tsv" ],
      "delimiter": "\t"
    },
    "parquet": {
      "type": "parquet"
    },
    "json": {
      "type": "json"
    },
    "avro": {
      "type": "avro"
    }
  }
}
So, is it correct to set localhost:54310, given that I got that with the command:
hdfs getconf -nnRpcAddresses
or should it be :8020?
Second question: what do I need to set for the location? My Hadoop folder is at:
/usr/local/hadoop
and there you can find /etc /bin /lib /log ... So, do I need to set the location to my datanode, or what?
Third question. When connecting to Drill, I go through sqlline and then connect to my ZooKeeper like this:
!connect jdbc:drill:zk=localhost:2181
My question here is: after I make the storage plugin and connect to Drill with zk, can I query HDFS files?
I'm very sorry if this is a noob question, but I haven't found anything useful on the internet, or at least it hasn't helped me.
If you can explain some of this to me, I'll be very grateful.
As per the Drill docs:
{
  "type" : "file",
  "enabled" : true,
  "connection" : "hdfs://10.10.30.156:8020/",
  "workspaces" : {
    "root" : {
      "location" : "/user/root/drill",
      "writable" : true,
      "defaultInputFormat" : null
    }
  },
  "formats" : {
    "json" : {
      "type" : "json"
    }
  }
}
In "connection",
put namenode server address.
If you are not sure about this address.
Check fs.default.name or fs.defaultFS properties in core-site.xml.
Coming to "workspaces",
you can save workspaces in this. In the above example, there is a workspace with name root and location /user/root/drill.
This is your HDFS location.
If you have files under /user/root/drill hdfs directory, you can query them using this workspace name.
Example: abc is under this directory.
select * from dfs.root.`abc.csv`
After successfully creating the plugin, you can start Drill and start querying.
You can query any directory irrespective of workspaces.
Say you want to query employee.json in the /tmp/data HDFS directory.
The query is:
select * from dfs.`/tmp/data/employee.json`
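As a side note (not from the original answer), once the plugin works you can also run such a query outside sqlline through Drill's REST API on the web UI port, 8047 by default. A small Python sketch, assuming Drill runs locally and the requests package is available:
import requests

# Run the same query through Drill's REST API (default web port 8047).
resp = requests.post(
    "http://localhost:8047/query.json",
    json={"queryType": "SQL",
          "query": "select * from dfs.`/tmp/data/employee.json`"},
)
resp.raise_for_status()
for row in resp.json().get("rows", []):
    print(row)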
I had a similar problem: Drill could not read the dfs server. In the end, the problem was caused by the namenode port.
The default address of the namenode web UI is http://localhost:50070/.
The default address of the namenode server is hdfs://localhost:8020/.