Get Camunda TaskID after creation in response - bpmn

We are using Camunda for our approval process implementation in our application.
We created a BPMN process with a Human Task. We are using the URL below:
engine-rest/engine/default/process-definition/key/processKey/start
We pass our form parameters as input to this service:
{
  "variables": {
    "requestId": {"value": "xxxxx", "type": "String"},
    "catalog": {"value": "yyyy", "type": "String"},
    "businessReason": {"value": "yyyyy", "type": "String"},
    "link": {"value": "", "type": "String"}
  }
}
The response of this start call is shown below:
{
  "links": [
    {
      "method": "GET",
      "href": "http://localhost:8080/engine-rest/engine/default/process-instance/31701",
      "rel": "self"
    }
  ],
  "id": "31701",
  "definitionId": "xxxxx:7:31605",
  "businessKey": null,
  "caseInstanceId": null,
  "ended": false,
  "suspended": false,
  "tenantId": null
}
The id in the response is not the actual task ID that we use to get the task details etc.; instead, it is the execution ID.
Is there a way to get the task ID back in the response? Also, can we add some parameters to the above response, like
"status" : "success"
I have a listener class created for the Human Task, but I am not sure how to add response parameters. Any help is appreciated.

This is not possible unless you build a custom REST resource on top of Camunda's Java API. See https://docs.camunda.org/manual/7.6/reference/rest/overview/embeddability/ for information on how to embed the default REST resources into a custom JAX-RS application.
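With the stock REST API, a common workaround is to make a second call: the id returned by the start request is the process instance ID, and the task list can be filtered by it. Below is a minimal sketch in Python (assuming the requests library, the URL from your question, and that the user task is created synchronously during the start call). The "status" field at the end is something your own wrapper would have to add, since the default response cannot be extended without the custom resource mentioned above.

import requests

BASE = "http://localhost:8080/engine-rest/engine/default"

# 1. Start the process; the "id" in the response is the process instance ID.
start = requests.post(
    f"{BASE}/process-definition/key/processKey/start",
    json={"variables": {"requestId": {"value": "xxxxx", "type": "String"}}},
)
process_instance_id = start.json()["id"]

# 2. Query the user tasks created for this process instance.
tasks = requests.get(
    f"{BASE}/task",
    params={"processInstanceId": process_instance_id},
).json()

# Each entry carries the task ID needed for /task/{id} operations.
task_id = tasks[0]["id"] if tasks else None
print({"status": "success", "taskId": task_id})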

Related

Netcore server respond to slack interactive message

I am trying to build a Slack application with an ASP.NET Core server. Currently, I have added a slash command which makes a request to my local server through ngrok. Once my server receives that request, it makes a POST request to the configured Slack webhook to display an interactive message, which looks like the picture in the attachment.
I want the user to be able to select Yes or No and to receive the result in my controller, but I can't figure out how to tell Slack where it should make the POST request. Here is the code of this message, which is posted through a webhook into Slack:
{
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "This is a section block with a button."
      }
    },
    {
      "type": "actions",
      "block_id": "actionblock789",
      "elements": [
        {
          "type": "button",
          "text": {
            "type": "plain_text",
            "text": "Yes"
          },
          "style": "primary",
          "value": "yes"
        },
        {
          "type": "button",
          "text": {
            "type": "plain_text",
            "text": "No"
          },
          "value": "no"
        }
      ]
    }
  ]
}
What I have in .NET Core is a TestController with the route /api/test, which I suppose should receive a payload from Slack with information about the selected button, but I couldn't find a way to specify a URL in this JSON code.
You cannot configure a URL in the JSON code. That is not how it works with the Slack API.
Slack sends all responses to an interactive message (like when someone clicks a button) to the configured request URL of your Slack app. Check out the documentation for how to find that parameter.
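For reference, once that request URL is configured, Slack POSTs each interaction to it as a form-encoded body with a single payload field containing JSON. Below is a minimal sketch of the receiving side, written in Python with Flask as a stand-in for the ASP.NET Core controller (the shape of the payload is the same either way):

import json
from flask import Flask, request

app = Flask(__name__)

# Slack sends interactive actions to the app's configured request URL
# as application/x-www-form-urlencoded with a "payload" field.
@app.route("/api/test", methods=["POST"])
def slack_actions():
    payload = json.loads(request.form["payload"])
    action = payload["actions"][0]   # the button that was clicked
    answer = action["value"]         # "yes" or "no" from your blocks
    return f"You chose: {answer}", 200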

Is there any way to access private datasets through API call

I'm running Mirth 3.6.1 with CKAN 2.8, and being a newbie to this I've run into an issue: is there a way to access resources in private datasets in CKAN through API requests? I can't seem to do it.
I have an organization with a public dataset, and I can go through Mirth via the API router to the correct Mirth channel and get the data from CKAN, like normal, with an API request. But if I make the dataset private, it all falls apart, even though I use the correct API key. The key doesn't seem to make a difference: I get "success": true regardless of whether I include the API key or not (or whether it's even the correct key).
The API key included in the request is that of the sysadmin.
When I directly access the CKAN resource through a CKAN endpoint and the dataset is public, I get this response:
{
  "help": "https://URL/api/3/action/help_show?name=resource_search", (URL instead of real URL)
  "success": true,
  "result": {
    "count": 1,
    "results": [
      {
        "mimetype": null,
        "cache_url": null,
        "state": "active",
        "hash": "REDACTED__", (sensitive data)
        "description": "",
        "format": "",
        "url": "https://URL/datastore/dump/0696c0a1-b249-4fd5-ba80-caf7046a650b", (URL instead of real URL)
        "datastore_active": true,
        "created": "2019-03-19T00:30:04.313593",
        "cache_last_updated": null,
        "package_id": "11211598-34f8-4d67-ab34-b7fd590ae08d",
        "mimetype_inner": null,
        "last_modified": null,
        "position": 1,
        "revision_id": "17b85d36-4ec1-4645-b9b1-dcfe310a54e6",
        "size": null,
        "url_type": "datastore",
        "id": "0696c0a1-b249-4fd5-ba80-caf7046a650b",
        "resource_type": null,
        "name": "REDACTED" (sensitive data)
      }
    ]
  }
}
When the dataset is private, I get this response, regardless of whether I include the API key (or whether it's even the real key):
{
  "help": "https://URL/api/3/action/help_show?name=resource_search",
  "success": true,
  "result": {
    "count": 0,
    "results": []
  }
}
So, how can I do a resource_search for a resource in a private dataset?
Thanks in advance.
Yes, you can do that by using include_private=True in a package_search request.
Please see the link below:
https://docs.ckan.org/en/2.8/api/index.html#ckan.logic.action.get.package_search
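A minimal sketch of such a request in Python (assuming the requests library; the dataset name is a placeholder). Note that include_private is a package_search parameter, so the sketch searches for the package and reads its resources from the result. Also note that CKAN's search endpoints do not reject a missing or wrong API key; they simply filter private results out, which is consistent with the empty "success": true response you saw. The key goes in the Authorization header:

import requests

CKAN = "https://URL"       # your CKAN base URL
API_KEY = "xxxx-xxxx"      # the sysadmin API key

resp = requests.get(
    f"{CKAN}/api/3/action/package_search",
    params={"q": "name:my-dataset", "include_private": True},
    headers={"Authorization": API_KEY},  # without this, private results are silently dropped
)
for pkg in resp.json()["result"]["results"]:
    for res in pkg["resources"]:
        print(res["id"], res["name"])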

Ruckus SmartZone API

I am having issues when trying to create a Zone using the API.
I can create the zone with the basic info, but as soon as I want to add another property (specifically "location"), I get an error.
This is the payload I use for the POST:
def id_prov = {
  "domainId": "$DomainId",
  "name": "$ZoneName",
  "login": {
    "apLoginName": "xxxxx",
    "apLoginPassword": "xxxxx"
  },
  "description": "$jira_summ",
  "version": "3.5.1.0.1010",
  "countryCode": "ZA"
  "location": "$CalledStationName_val",
}
The API creates everything fine until I either include the "location" property in the original POST or try a PUT or PATCH afterwards.
Result value:
{"message":["object instance has properties which are not allowed by the schema: [\"location\"]"],"errorCode":101,"errorType":"Bad HTTP request"}
Anyone come across this or have any ideas on how to get this working?
Thanks
A comma is required after "countryCode": "ZA". The POST payload should look like this:
def id_prov = {
  "domainId": "$DomainId",
  "name": "$ZoneName",
  "login": {
    "apLoginName": "xxxxx",
    "apLoginPassword": "xxxxx"
  },
  "description": "$jira_summ",
  "version": "3.5.1.0.1010",
  "countryCode": "ZA",
  "location": "$CalledStationName_val",
}
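One way to catch this class of error before sending is to run the rendered payload through a JSON parser, which points at the exact position of the missing delimiter. A minimal sketch in Python; note, as an aside, that if the payload were sent as literal JSON rather than built as a Groovy map, the trailing comma after "location" would be rejected as well:

import json

payload = '''{
  "countryCode": "ZA"
  "location": "some site"
}'''

try:
    json.loads(payload)
except json.JSONDecodeError as err:
    print(err)  # e.g. Expecting ',' delimiter: line 3 column 3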

How do I create an event using the SocialTables API?

I'm trying to use the /4.0/legacyvm3/teams/{team}/events endpoint to create an event. I'm running into some trouble with spaces.
I used the /4.0/legacyvm3/teams/{team}/venues endpoint to get a list of venues. I chose one to include in the spaces section and posted this:
{
  "name": "Event via API Test 04",
  "category": "athletic event",
  "public": true,
  "attendee_management": true,
  "start_time": "2017-04-05T16:13:54.217Z",
  "end_time": "2017-04-05T16:13:54.217Z",
  "uses_metric": false,
  "venue_mapper_version": 0,
  "spaces": [
    {
      "venue_id": 128379,
      "name": "Snurrrggggg"
    }
  ]
}
The endpoint returns a 400 code and this error:
{
  "code": 400,
  "message": "Cannot read property 'toLowerCase' of undefined"
}
I tried including the wizard section, but each time it would return this error:
{
  "message": "Access Denied to this feature"
}
After some experimentation, this body succeeded:
{
  "name": "Event via API Test 03",
  "category": "athletic event",
  "public": true,
  "attendee_management": true,
  "start_time": "2017-04-05T16:13:54.217Z",
  "end_time": "2017-04-05T16:13:54.217Z",
  "uses_metric": false,
  "venue_mapper_version": 0,
  "spaces": [
    {
      "name": "Fake News Room"
    }
  ]
}
But the application itself would not display the diagram, and the newly created room did not show up in my list of venues. Perhaps it did not assign permissions to it?
In any case, I don't actually want to create a new venue/space. I want to pass in an existing venue/space. How do I do that?
The short answer is that to create a working diagram in 4.0 you will need to POST some data to the /4.0/diagrams endpoint.
The room you created doesn't map to the same concept as venues. When you create an event the way you did, it creates a new space entity. The spaces endpoints can return information on those.
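A rough sketch of what that follow-up call might look like in Python; the base host and the body fields below (event_id, space_id) are hypothetical placeholders for illustration, so consult the SocialTables API reference for the actual diagram schema:

import requests

# Hypothetical request: the host and field names in the body are
# placeholders, not the documented schema; check the SocialTables
# docs before use.
resp = requests.post(
    "https://<socialtables-api-host>/4.0/diagrams",
    headers={"Authorization": "Bearer <your-oauth-token>"},
    json={"event_id": 12345, "space_id": 67890},
)
print(resp.status_code, resp.text)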

Error loading file stored in Google Cloud Storage to Big Query

I have been trying to create a job to load a compressed JSON file from Google Cloud Storage into a Google BigQuery table. I have read/write access in both Google Cloud Storage and Google BigQuery. Also, the uploaded file belongs to the same project as the BigQuery one.
The problem happens when I access the resource behind the URL https://www.googleapis.com/upload/bigquery/v2/projects/NUMERIC_ID/jobs by means of a POST request. The content of the request to the above-mentioned resource is as follows:
{
  "kind": "bigquery#job",
  "projectId": NUMERIC_ID,
  "configuration": {
    "load": {
      "sourceUris": ["gs://bucket_name/document.json.gz"],
      "schema": {
        "fields": [
          {"name": "id", "type": "INTEGER"},
          {"name": "date", "type": "TIMESTAMP"},
          {"name": "user_agent", "type": "STRING"},
          {"name": "queried_key", "type": "STRING"},
          {"name": "user_country", "type": "STRING"},
          {"name": "duration", "type": "INTEGER"},
          {"name": "target", "type": "STRING"}
        ]
      },
      "destinationTable": {
        "datasetId": "DATASET_NAME",
        "projectId": NUMERIC_ID,
        "tableId": "TABLE_ID"
      }
    }
  }
}
However, I get the following error, which doesn't make any sense to me:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalid",
        "message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
      }
    ],
    "code": 400,
    "message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
  }
}
I know the problem lies neither in the project ID nor in the access token placed in the authentication header, because I have successfully created an empty table before. I also set the Content-Type header to application/json, which I don't think is the issue here, because the body content is JSON encoded.
Thanks in advance.
Your HTTP request is malformed -- BigQuery doesn't recognize this as a load job at all.
You need to look into the POST request and check the body you send.
Ensure that all of the above (which seems correct) is the body of the POST call. The JSON should be on a single line, and if you are manually creating the multipart message, make sure there is an extra newline between the headers and the body of each MIME part.
If you are using some sort of library, make sure the body is not expected in some other form, like resource, content, or body. I've seen libraries that use these differently.
Try out the BigQuery API explorer: https://developers.google.com/bigquery/docs/reference/v2/jobs/insert and ensure your request body matches the one made by the explorer.
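One more detail worth double-checking (an assumption on my part, based on the URL in the question): https://www.googleapis.com/upload/bigquery/v2/... is the media-upload variant of the endpoint, which expects a multipart body. For a load job whose source data is already in Cloud Storage there is nothing to upload, so the plain jobs endpoint takes the configuration JSON directly. A minimal sketch in Python with the requests library (schema omitted for brevity; it goes under configuration.load.schema exactly as in your request):

import requests

PROJECT_ID = "your-project-id"   # string, as used in the URL path
ACCESS_TOKEN = "ya29...."        # OAuth2 access token from your auth flow

job = {
    "configuration": {
        "load": {
            "sourceUris": ["gs://bucket_name/document.json.gz"],
            "sourceFormat": "NEWLINE_DELIMITED_JSON",
            "destinationTable": {
                "projectId": PROJECT_ID,
                "datasetId": "DATASET_NAME",
                "tableId": "TABLE_ID",
            },
        }
    }
}

resp = requests.post(
    f"https://www.googleapis.com/bigquery/v2/projects/{PROJECT_ID}/jobs",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=job,  # requests serializes this and sets Content-Type: application/json
)
print(resp.json())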