Camunda API: Assign and Get Task Count by Candidate Group - bpmn

I am creating a workflow in Camunda. I'd like to:
assign tasks to a group rather than a specific user
count the number of tasks per group.
I did not find any guidance in the Camunda docs about assigning tasks to a group, so I tried the Add Identity Link endpoint:
POST /task/{{taskid}}/identity-links
Request body:
{"groupId": "teamA", "type": "candidate"}
However, when I call Get Task Count By Candidate Group
GET /task/report/candidate-group-count
There was no task assigned to teamA in the response body; instead, all my desired tasks fell into the null group.
[
  {
    "groupName": "accounting",
    "taskCount": 4
  },
  {
    "groupName": "sales",
    "taskCount": 2
  },
  {
    "groupName": null,
    "taskCount": 6
  }
]
Any advice on how to fix this?
UPDATE 2022-02-11
I think the issue might be caused by the version of the Camunda BPM platform. After I downgraded from 7.17.0-alpha4 to 7.16, the issue was resolved.
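For reference, a minimal sketch of the two REST calls above, written in Python with the requests library; the engine URL http://localhost:8080/engine-rest and the task id are placeholders, not values from the question.

import requests

BASE = "http://localhost:8080/engine-rest"  # assumed engine-rest root
TASK_ID = "REPLACE-WITH-TASK-ID"            # placeholder task id

# Add the group as a candidate on the task (Add Identity Link).
resp = requests.post(
    f"{BASE}/task/{TASK_ID}/identity-links",
    json={"groupId": "teamA", "type": "candidate"},
)
resp.raise_for_status()  # expects 204 No Content on success

# Ask the engine for the task count per candidate group.
report = requests.get(f"{BASE}/task/report/candidate-group-count")
report.raise_for_status()
for row in report.json():
    print(row["groupName"], row["taskCount"])

On the affected 7.17.0-alpha4 build the report reportedly still showed those tasks under the null group, as described above.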


Selecting the latest document for each "Group"

I am using the Azure Cosmos DB SQL API to try to achieve the following:
We have device data stored within a collection, and we would love to retrieve the latest event per device serial efficiently, without having to run N queries, one for each device.
SELECT *
FROM c
WHERE c.serial IN ('V55555555','synap-aim-g1') ORDER BY c.EventEnqueuedUtcTime DESC
I'm assuming I would need to use GROUP BY - https://learn.microsoft.com/en-us/azure/cosmos-db/sql-query-group-by
Any assistance would be greatly appreciated.
Rough example of the data:
[
  {
    "temperature": 25.22063251827873,
    "humidity": 71.54208429695204,
    "serial": "V55555555",
    "testid": 1,
    "location": {
      "type": "Point",
      "coordinates": [
        30.843687,
        -29.789895
      ]
    },
    "EventProcessedUtcTime": "2020-09-07T12:04:34.5861918Z",
    "PartitionId": 0,
    "EventEnqueuedUtcTime": "2020-09-07T12:04:34.4700000Z",
    "IoTHub": {
      "MessageId": null,
      "CorrelationId": null,
      "ConnectionDeviceId": "V55555555",
      "ConnectionDeviceGenerationId": "637323979596346475",
      "EnqueuedTime": "2020-09-07T12:04:34.0000000"
    },
    "Name": "admin",
    "id": "6dac491e-1f28-450d-bf97-3a15a0efaad8",
    "_rid": "i2UhAI7ofAo3AQAAAAAAAA==",
    "_self": "dbs/i2UhAA==/colls/i2UhAI7ofAo=/docs/i2UhAI7ofAo3AQAAAAAAAA==/",
    "_etag": "\"430131c1-0000-0100-0000-5f5621d80000\"",
    "_attachments": "attachments/",
    "_ts": 1599480280
  }
]
UPDATE:
So doing the following returns the correct data, but sadly you can only return data that's inside your GROUP BY or an aggregate function (i.e. you can't do SELECT *):
SELECT c.serial, MAX(c.EventProcessedUtcTime)
FROM c
WHERE c.serial IN ('V55555555','synap-aim-g1')
GROUP BY c.serial
[
  {
    "serial": "synap-aim-g1",
    "$1": "2020-09-09T06:29:42.6812629Z"
  },
  {
    "serial": "V55555555",
    "$1": "2020-09-07T12:04:34.5861918Z"
  }
]
Thanks for @AnuragSharma-MSFT's help:
I am afraid there is no direct way to achieve it using a query in Cosmos DB. However, you can refer to the link below for the same topic. If you are using an SDK, this would help in achieving the desired functionality: https://learn.microsoft.com/en-us/answers/questions/38454/index.html
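If you do go the SDK route, a rough sketch of that two-step approach with the azure-cosmos Python package could look like the following; the account, key, database and container names are placeholders, not values from the question.

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<database>").get_container_client("<container>")

serials = ["V55555555", "synap-aim-g1"]

# Step 1: one GROUP BY query to find the latest enqueued time per serial.
latest = container.query_items(
    query=("SELECT c.serial, MAX(c.EventEnqueuedUtcTime) AS latestTime FROM c "
           "WHERE ARRAY_CONTAINS(@serials, c.serial) GROUP BY c.serial"),
    parameters=[{"name": "@serials", "value": serials}],
    enable_cross_partition_query=True,
)

# Step 2: fetch the full document for each (serial, latestTime) pair.
for row in latest:
    docs = container.query_items(
        query=("SELECT * FROM c WHERE c.serial = @serial "
               "AND c.EventEnqueuedUtcTime = @time"),
        parameters=[
            {"name": "@serial", "value": row["serial"]},
            {"name": "@time", "value": row["latestTime"]},
        ],
        enable_cross_partition_query=True,
    )
    for doc in docs:
        print(doc["serial"], doc["EventEnqueuedUtcTime"])

This still issues one small point query per serial, so for many devices the denormalized container described further below scales better.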
We're glad that you resolved it this way; thanks for sharing the update above.
If the question is really about an efficient approach to this particular query scenario, we can consider denormalization in cases where the query language itself doesn't offer an efficient solution. This guide on partitioning and modeling has a relevant section on getting the latest items in a feed.
We just need to get the 100 most recent posts, without the need to paginate through the entire data set. So to optimize this last request, we introduce a third container to our design, entirely dedicated to serving this request. We denormalize our posts to that new feed container.
Following this approach, you could create a "Feed" or "LatestEvent" container dedicated to the "latest" query. It would use the device serial as the item id and as the single partition key, which guarantees that there is only one (the most recent) event item per device and that it can be fetched by device serial, or listed at the lowest possible cost, with a simple query:
SELECT *
FROM c
WHERE c.serial IN ('V55555555','synap-aim-g1')
The change feed could be used to upsert the latest event, such that the latest event is created or overwritten in the "LatestEvent" container as its source item is created in the main container.
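As a rough illustration only (the container names, key and connection details below are assumptions), a minimal polling sketch with the azure-cosmos Python SDK could read the main container's change feed and upsert into the dedicated container; in practice an Azure Functions Cosmos DB trigger or the change feed processor is the more common way to host this.

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
db = client.get_database_client("<database>")
events = db.get_container_client("events")       # main container (assumed name)
latest = db.get_container_client("LatestEvent")  # partitioned on the serial

# Read every change in the main container from the beginning of the feed.
for change in events.query_items_change_feed(is_start_from_beginning=True):
    # Drop Cosmos system properties before writing to the other container.
    doc = {k: v for k, v in change.items() if not k.startswith("_")}
    doc["id"] = doc["serial"]  # one item per device: id == serial
    # Upsert so a newer event for a serial overwrites the older one.
    latest.upsert_item(doc)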

How to implement "GET" request to work for STRAVA API from Postman?

Further update...
I got this working. Although Strava's documentation does not say any of the arguments in the call are mandatory, it seems they all are. You need to put valid before and after arguments in epoch time, and (this is the part that confused me a bit) you need to give a page number and the number of items per page. The items per page default to 30, but the page number does not default. The way it works is: if you say page 1 and 30 items per page you get items 1 - 30, if you say page 2 and 30 items per page you get items 31 - 60, and so on. You have to create a loop that keeps going until it gets an empty page; you then know you have retrieved all the activities. (At least that is how I think it works.) A rough sketch of such a loop follows below.
Adrian
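A sketch of that loop in Python with the requests library, assuming the authenticated-athlete endpoint /athlete/activities and a token that carries the activity:read scope; the token value is a placeholder.

import time
import requests

ACCESS_TOKEN = "REPLACE_WITH_TOKEN"
URL = "https://www.strava.com/api/v3/athlete/activities"

activities = []
page = 1
while True:
    resp = requests.get(
        URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={
            "before": int(time.time()),  # upper bound in epoch seconds: now
            "after": 1546293601,         # lower bound: 1 Jan 2019, as in the question
            "per_page": 30,              # items per page
            "page": page,                # 1-based page number
        },
    )
    resp.raise_for_status()
    batch = resp.json()
    if not batch:                        # an empty page means everything is retrieved
        break
    activities.extend(batch)
    page += 1

print(len(activities), "activities retrieved")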
Question update...
After some digging and experimenting I have managed to solve some of my problem (as described below) on my own. When one creates an app on Strava, listed in your settings under "My API Application", the token given has scope "read" and seems to be very, very limited.
After following the steps listed here Strava Authentication I was able to get a new token with the following scopes:
scope=read,activity:read,activity:read_all,profile:read_all,read_all
So... I thought I was "golden" as the saying goes.
Well now I am able to get individual activities using:
https://www.strava.com/api/v3/activities/2110745394?include_all_efforts="true"&access_token={{ADR_Strava_API_Key}}
But when I try to get a list of all activities I don't get any error messages; Strava simply returns
[], and this for an athlete that I know has over 1800 activities.
What I really want is to get the list of activities. Any help would be appreciated.
Thank you
Adrian
I can get athlete information back from Strava using Postman with the following HTTPS request:
https://www.strava.com/api/v3/athletes/19133707?access_token={{ADR_Strava_API_Key}}
The following gets returned:
{
  "id": 19133707,
  "username": "adrian_geekie",
  "resource_state": 2,
  "firstname": "Adrian",
  "lastname": "Geekie",
  "city": "Gauteng, South Africa",
  "state": "GP",
  "country": "South Africa",
  "sex": "M",
  "premium": true,
  "summit": true,
  "created_at": "2017-01-03T16:07:37Z",
  "updated_at": "2019-01-28T16:08:07Z",
  "badge_type_id": 1,
  "profile_medium": "https://dgalywyr863hv.cloudfront.net/pictures/athletes/19133707/5599004/2/medium.jpg",
  "profile": "https://dgalywyr863hv.cloudfront.net/pictures/athletes/19133707/5599004/2/large.jpg",
  "friend": null,
  "follower": null
}
But when I try to get activities using this request:
https://www.strava.com/api/v3/19133707/activities?before=&after=1546293601&page=&per_page=&access_token={{ADR_Strava_API_Key}}
I get this returned:
{
"message": "Record Not Found",
"errors": [
{
"resource": "resource",
"field": "path",
"code": "invalid"
}
]
}
As I understand it, I am asking for all records after the 1st of January 2019, i.e. epoch timestamp 1546293601. I know there are many activities for that athlete after that date (more than 20).
I have also tried to get a single activity using:
https://www.strava.com/api/v3/activities/2110745394?include_all_efforts="true"&access_token={{ADR_Strava_API_Key}}
and I get the result:
{
  "message": "Resource Not Found",
  "errors": [
    {
      "resource": "Activity",
      "field": "",
      "code": "not found"
    }
  ]
}
On the Strava developer's page the examples are given for HTTPie like this:
https://www.strava.com/api/v3/activities/{id}?include_all_efforts=" "Authorization: Bearer [[token]]
So I am replacing "Authorization: Bearer [[token]]" with &access_token=.
Perhaps that is my error, but access_token works in the first example.
I am sorry if this is a total idiot question. I am a beginner and I would appreciate any help.
Thank you
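For comparison, a small Python/requests sketch of the header-based form the docs show, rather than appending &access_token= to the URL; the activity id is the one from the question and the token is a placeholder.

import requests

resp = requests.get(
    "https://www.strava.com/api/v3/activities/2110745394",
    headers={"Authorization": "Bearer REPLACE_WITH_TOKEN"},
    params={"include_all_efforts": "true"},  # plain true, no surrounding quotes
)
print(resp.status_code, resp.json())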

How to update (add/remove) all Groups in ONE Grouping, without affecting OTHER Groupings?

I need help with an API call to update a Member's Groups in a Grouping in my MailChimp List.
I have a few "Interest Groupings", each with several "Groups". For example, the first two are....
Grouping: Purchased
Groups: P_SPA3ASX, P_SPA3CFD, P_SPA3ETF, .....
Grouping: Member Status
Groups: Lead, Active, Inactive, Staff
Using an API call, I would like to update the Groups in the Purchased Grouping, without affecting ANY of the other Groupings. I have had some success but one scenario eludes me.
My API Call looks like this:
POST to: https://us11.api.mailchimp.com/2.0/lists/update-member.json
POST Body is:
{
  "apikey": "myapikey",
  "id": "mtlistid",
  "email": {
    "leid": "165320973"
  },
  "double_optin": false,
  "update_existing": false,
  "send_welcome": false,
  "replace_interests": false,
  "merge_vars": {
    "groupings": [
      {
        "name": "Purchased",
        "groups": ["P_SPA3ASX","","","",""]
      }
    ]
  }
}
When I change the "replace_interests" setting and pass different "groups" to the API, this is what happens.
Scenario 1: replace_interests = false
Result:
Good. Groups are added to "Purchased".
Bad. Groups are NOT removed from "Purchased".
Good. Other Groupings are not affected.
Scenario 2: replace_interests = true
Result:
Good. Groups are added to "Purchased".
Good. Groups are removed from "Purchased".
Bad. Other Groupings ARE affected. They are all cleared!
But how do I achieve all three (add Groups, remove Groups, and not affect other Groupings)?
This isn't possible via API v2.0. In order to update a list subscriber's interests, you have to supply all of them, due to the use of an array to describe that data. In API v3.0, interests can be modified individually without affecting other interests.
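As an illustration of the v3.0 route (not the v2.0 call used above), a hedged sketch in Python with requests; the list id, subscriber email and, in particular, the interest ids are placeholders you would look up via the interest-categories endpoints.

import hashlib
import requests

API_KEY = "myapikey-us11"          # v3.0 keys end in the data-center prefix
LIST_ID = "mylistid"
EMAIL = "subscriber@example.com"
subscriber_hash = hashlib.md5(EMAIL.lower().encode()).hexdigest()

resp = requests.patch(
    f"https://us11.api.mailchimp.com/3.0/lists/{LIST_ID}/members/{subscriber_hash}",
    auth=("anystring", API_KEY),   # basic auth: any username plus the API key
    json={
        "interests": {
            "hypothetical_id_P_SPA3ASX": True,   # add this Purchased group
            "hypothetical_id_P_SPA3CFD": False,  # remove this Purchased group
            # interests in other groupings are simply omitted and stay untouched
        }
    },
)
resp.raise_for_status()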

Asana integration with Slack

I am looking to implement a solution where, when I create a project in Asana, it will create a room in Slack with all the same members. I was planning on writing a script that runs every couple of minutes to look for either new projects or changes in membership of current projects, and then calls out to Slack to make the changes. This, however, would be a lot of chatter, so I was hoping someone might know of and be able to recommend another way that will make these changes on an as-needed basis.
It sounds like you have the best solution outlined for this use case.
In order to get a list of new projects in a workspace you should query the projects endpoint and check for newly created projects based on the created_at field, using the opt_fields field selector to have that returned in your query. I strongly suggest that you scope this query to a single workspace and use pagination.
GET 'https://api.asana.com/api/1.0/workspaces/5233820891524/projects?opt_fields=name,created_at&limit=2' | j
{
  "data": [
    {
      "id": 23154287843671,
      "created_at": "2014-12-31T18:35:49.695Z",
      "name": "Ninja Things"
    },
    {
      "id": 23154287843675,
      "created_at": "2014-12-31T18:35:59.174Z",
      "name": "Unicorns"
    }
  ],
  "next_page": {
    "offset": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJib3JkZXJfcmFuayI6ImRTbm5ZaGNOOWFFIiwiaWF0IjoxNDM4ODE0MzY0LCJleHAiOjE0Mzg4MTUyNjR9.82zecHAT51-GSrL6FdcrRdMs45U7PZ3g-d4Zuo_B8UA",
    "uri": "https://api.asana.com/api/1.0/workspaces/5233820891524/projects?limit=2&opt_output=json&opt_fields=name%2Ccreated_at&offset=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJib3JkZXJfcmFuayI6ImRTbm5ZaGNOOWFFIiwiaWF0IjoxNDM4ODE0MzY0LCJleHAiOjE0Mzg4MTUyNjR9.82zecHAT51-GSrL6FdcrRdMs45U7PZ3g-d4Zuo_B8UA",
    "path": "/workspaces/5233820891524/projects?limit=2&opt_output=json&opt_fields=name%2Ccreated_at&offset=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJib3JkZXJfcmFuayI6ImRTbm5ZaGNOOWFFIiwiaWF0IjoxNDM4ODE0MzY0LCJleHAiOjE0Mzg4MTUyNjR9.82zecHAT51-GSrL6FdcrRdMs45U7PZ3g-d4Zuo_B8UA"
  }
}
For new members of current projects you would need to query individual projects and check the memberships property.
I would have suggested using the Events API to check for new members, but I tested and determined that new members are not considered an event on the project, something that we will consider changing.
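A small polling sketch along those lines, in Python with requests; it assumes a personal access token, the workspace id is the one from the example above, and everything else is a placeholder.

import requests

TOKEN = "REPLACE_WITH_ASANA_TOKEN"
WORKSPACE = "5233820891524"
URL = f"https://api.asana.com/api/1.0/workspaces/{WORKSPACE}/projects"
LAST_RUN = "2014-12-31T00:00:00.000Z"  # created_at watermark from the previous poll

headers = {"Authorization": f"Bearer {TOKEN}"}
params = {"opt_fields": "name,created_at", "limit": 100}

new_projects = []
while True:
    resp = requests.get(URL, headers=headers, params=params)
    resp.raise_for_status()
    body = resp.json()
    # ISO-8601 timestamps in the same format compare correctly as strings.
    new_projects += [p for p in body["data"] if p["created_at"] > LAST_RUN]
    next_page = body.get("next_page")
    if not next_page:
        break
    params["offset"] = next_page["offset"]  # continue where the last page ended

for project in new_projects:
    print(project["id"], project["name"], project["created_at"])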

How to get total number of edits for a given wikipedia page from the API?

I actually do not want to list each edit, but only to get the count of edits.
This data is available for every article, in the info page linked from the left panel:
https://en.wikipedia.org/w/index.php?title=Wikipedia&action=info
But this produces a complete web page with tables, formatting etc., and it is heavy on Wikipedia's servers. So I ask whether there is a way to get only those few numbers and skip scraping the whole page.
Probably not the answer you want but there isn't a way to get this information yet.
As a workaround you can use prop=revisions to get all the revisions contributed to the article. You will be able to count the rev tags from here:
http://en.wikipedia.org/w/api.php?format=xml&action=query&titles=Wikipedia&prop=revisions&rvprop=ids&rvlimit=max
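For illustration, a small Python/requests sketch that counts the rev entries from that query; it uses format=json for easier parsing, and follows the API's continuation parameters so pages with more than one batch of revisions can still be counted, at the cost of one request per batch.

import requests

API = "https://en.wikipedia.org/w/api.php"
params = {
    "format": "json",
    "action": "query",
    "titles": "Wikipedia",
    "prop": "revisions",
    "rvprop": "ids",
    "rvlimit": "max",
}

count = 0
while True:
    data = requests.get(API, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    count += len(page.get("revisions", []))
    if "continue" not in data:
        break
    params.update(data["continue"])  # carries rvcontinue into the next request

print("total edits:", count)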
Alternatively, you can ask YQL to count it for you with the following command:
SELECT * FROM xml
WHERE url="http://en.wikipedia.org/w/api.php?format=xml&action=query&titles=Wikipedia&prop=revisions&rvprop=ids&rvlimit=max"
AND itemPath="/api/query/pages/page/revisions/rev"
Example output (Link to full output):
{
  "query": {
    "count": 500, // This is the total amount of edits
    "created": "2014-03-04T02:29:42Z",
    "lang": "en-US",
    "results": {
      "rev": [{
        "parentid": "597995345",
        "revid": "598005528"
      }, {
        "parentid": "597994174",
        "revid": "597995345"
      }, {
        "parentid": "597891867",
        "revid": "597994174"
      }]
    }
  }
}
Unfortunately, the upper limit for users to retrieve revision data is 500 per request, and for bots it's 5000.
To get the exact count, you will have to set up a parser on your server that captures it from the info page whenever a user queries the data from your side.