Keen-io: I can't delete a specific event using an extraction query filter

Using this extraction query (shown URL-decoded for readability):
https://api.keen.io/3.0/projects/xxx/queries/extraction?api_key=xxxx&event_collection=dispatched-orders&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800
returns:
{
  result: [
    {
      mobile: "13185716746",
      keen: {
        timestamp: "2015-02-10T07:10:07.816Z",
        created_at: "2015-02-10T07:10:08.725Z",
        id: "54d9aed03bc6964a7d311f9e"
      },
      data: {
        itemId: 2130,
        num: 1
      },
      features: {
        communityId: 2000,
        dispatcherId: 39,
        tradeId: 8581
      }
    }
  ]
}
But if I use the same filters in my delete query URL (shown URL-decoded for readability):
https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders?api_key=xxxxxx&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800
it returns:
{
  properties: {
    data.num: "num",
    keen.created_at: "datetime",
    mobile: "string",
    keen.id: "string",
    features.communityId: "num",
    features.dispatcherId: "num",
    keen.timestamp: "datetime",
    features.tradeId: "num",
    data.itemId: "num"
  }
}
Please help me.

It looks like you are issuing a GET request for the delete command. If you perform a GET on a collection, you get back the schema that Keen has inferred for that collection.
You'll want to issue the above as a DELETE request. Here's the cURL command to do that:
curl -X DELETE 'https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders?api_key=xxxxxx&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800'
Note that you'll probably need to URL encode that JSON as you mentioned in your above post!
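If it helps, here is a minimal sketch of the same delete with the filters JSON URL-encoded programmatically (assuming a runtime with the Fetch API such as Node 18+ in an ES module; the placeholder project ID and API key are copied from the question):

// Build the filters JSON and URL-encode it before appending it to the query string.
const filters = [
  { property_name: "features.tradeId", operator: "eq", property_value: 8581 },
];

const url =
  "https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders" +
  "?api_key=xxxxxx" +
  "&filters=" + encodeURIComponent(JSON.stringify(filters)) +
  "&timezone=28800";

// Issue a DELETE (not a GET) so Keen removes the matching events.
const response = await fetch(url, { method: "DELETE" });
console.log(response.status); // a 2xx status means the delete was accepted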

Related

GROQ: Query one-to-many relationship with parameter as query input

I have a blog built in NextJS, backed by Sanity. I want to start tagging posts with tags/categories.
Each post may have many categories.
Category is a reference on post:
defineField({
  name: 'category',
  title: 'Category',
  type: 'array',
  of: [
    {
      type: 'reference',
      to: [
        {
          type: 'category',
        },
      ],
    },
  ],
}),
This is my GROQ query:
*[_type == "post" && count((category[]->slug.current)[@ in ['dogs']]) > 0] {
  _id,
  title,
  date,
  excerpt,
  coverImage,
  "slug": slug.current,
  "author": author->{name, picture},
  "categories": category[]-> {name, slug}
}
The above works when it is hardcoded, but swapping out 'dogs' for $slug, for example, causes the query to fail (where $slug is a provided param):
*[_type == "post" && count((category[]->slug.current)[@ in [$slug]]) > 0]
with the params
{
  $slug: 'travel'
}
How do I make the above dynamic?
I can't believe it. Rookie mistake. I needed to pay more attention in the Sanity IDE. (To be fair there was a UI bug that hid the actual issue)
The param key should not contain the $. E.g. the following works in the GROQ IDE:
{
  slug: 'travel'
}
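In application code the same rule applies: the query string keeps the $slug placeholder, while the params object passed alongside it uses the bare key. A minimal sketch with the Sanity JavaScript client (the client configuration values are placeholders, not taken from the question):

import { createClient } from "@sanity/client";

// Placeholder project configuration.
const client = createClient({
  projectId: "your-project-id",
  dataset: "production",
  apiVersion: "2023-01-01",
  useCdn: true,
});

const query = `*[_type == "post" && count((category[]->slug.current)[@ in [$slug]]) > 0]{
  _id,
  title,
  "slug": slug.current
}`;

// Note: the param key is "slug", not "$slug".
const posts = await client.fetch(query, { slug: "travel" });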

Replace Objects with the corresponding ObjectId with Mongoose if found in the MongoDB

I have a MEAN stack setup in which I have Devices and Servicecases saved in the MongoDB database.
Devices can be the content of a Servicecase.
When a new case is created, my frontend delivers the following form data:
content: [
  {
    "device": 012345678909876,
    "errorDesc": "lorem"
  },
  {
    "device": 012345678909876,
    "errorDesc": "ipsum"
  }
]
There may already be a device document with the submitted device number in the database. If so, the received doc should be populated with its ObjectId so that it looks like this:
content: [
  {
    device: { type: Schema.Types.ObjectId, ref: 'Device' },
    errorDesc: String
  },
  ...
]
If not, it should stay as it is.
I could iterate through each device in the array, use a findOne() query, and replace the value if a doc was found, but is there a more efficient way, for example using the populate() transformation?
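For reference, the findOne() iteration described in the question might look roughly like the sketch below (assuming a Device model with a numeric deviceNo field; both names are placeholders, not taken from the question):

import mongoose from "mongoose";

// Hypothetical Device model; the field holding the device number is an assumption.
const Device = mongoose.model("Device", new mongoose.Schema({ deviceNo: Number }));

interface ContentItem {
  device: mongoose.Types.ObjectId | number;
  errorDesc: string;
}

// Replace each device number with the matching document's ObjectId, if one exists.
async function resolveDevices(content: ContentItem[]): Promise<ContentItem[]> {
  return Promise.all(
    content.map(async (item) => {
      const doc = await Device.findOne({ deviceNo: item.device }).exec();
      // If no document matches, the numeric value is kept as-is.
      return doc ? { ...item, device: doc._id } : item;
    })
  );
}

A single Device.find() with an $in filter over all submitted device numbers would cut this down to one round trip, although it still isn't a populate()-style transformation.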

How to convert JSON to CSV and send it to BigQuery or a Google Cloud bucket

I'm new to NiFi and I want to convert a large amount of JSON data to CSV format.
This is what I am doing at the moment, but it does not produce the expected result.
These are the steps:
Processors that create the access_token and send the request body using InvokeHTTP (this part works fine, I won't name the processors since they produce the expected result), getting the response body in JSON.
Example of the JSON response:
[
  {
    "results": [
      {
        "customer": {
          "resourceName": "customers/123456789",
          "id": "11111111"
        },
        "campaign": {
          "resourceName": "customers/123456789/campaigns/222456422222",
          "name": "asdaasdasdad",
          "id": "456456546546"
        },
        "adGroup": {
          "resourceName": "customers/456456456456/adGroups/456456456456",
          "id": "456456456546",
          "name": "asdasdasdasda"
        },
        "metrics": {
          "clicks": "11",
          "costMicros": "43068982",
          "impressions": "2079"
        },
        "segments": {
          "device": "DESKTOP",
          "date": "2021-11-22"
        },
        "incomeRangeView": {
          "resourceName": "customers/456456456/incomeRangeViews/456456546~456456456"
        }
      },
      {
        "customer": {
          "resourceName": "customers/123456789",
          "id": "11111111"
        },
        "campaign": {
          "resourceName": "customers/123456789/campaigns/222456422222",
          "name": "asdasdasdasd",
          "id": "456456546546"
        },
        "adGroup": {
          "resourceName": "customers/456456456456/adGroups/456456456456",
          "id": "456456456546",
          "name": "asdasdasdas"
        },
        "metrics": {
          "clicks": "11",
          "costMicros": "43068982",
          "impressions": "2079"
        },
        "segments": {
          "device": "DESKTOP",
          "date": "2021-11-22"
        },
        "incomeRangeView": {
          "resourceName": "customers/456456456/incomeRangeViews/456456546~456456456"
        }
      },
      ....etc....
    ]
  }
]
Now I am using:
===> SplitJson ($[*].results[*]) ==> JoltTransformJSON with this spec:
[{
  "operation": "shift",
  "spec": {
    "customer": {
      "id": "customer_id"
    },
    "campaign": {
      "id": "campaign_id",
      "name": "campaign_name"
    },
    "adGroup": {
      "id": "ad_group_id",
      "name": "ad_group_name"
    },
    "metrics": {
      "clicks": "clicks",
      "costMicros": "cost",
      "impressions": "impressions"
    },
    "segments": {
      "device": "device",
      "date": "date"
    },
    "incomeRangeView": {
      "resourceName": "keywords_id"
    }
  }
}]
==>> MergeContent (here is the problem which I don't know how to fix):
Merge Strategy: Defragment
Merge Format: Binary Concatenation
Attribute Strategy: Keep Only Common Attributes
Maximum number of Bins: 5 (I tried 10, same result)
Delimiter Strategy: Text
Header: [
Footer: ]
Demarcator: ,
What is the result I get?
I get a JSON file that has parts of the JSON data.
Example: I have 50k customer_ids in one JSON file, so I would like to send this data into a BigQuery table and have all the IDs under the same field "customer_id".
MergeContent takes the split JSON files and combines them, but I still get 10k customer_ids per file, i.e. I end up with 5 files and not 1 file with 50k customer_ids.
After MergeContent I use ==>> ConvertRecord with these settings:
Record Reader: JsonTreeReader (Schema Access Strategy: Infer Schema)
Record Writer: CSVRecordSetWriter (
  Schema Write Strategy: Do Not Write Schema
  Schema Access Strategy: Inherit Record Schema
  CSV Format: Microsoft Excel
  Include Header Line: true
  Character Set: UTF-8
)
==>> UpdateAttribute (custom property filename: ${filename}.csv) ==>> PutGCSObject (puts the data into the Google Cloud bucket; this step works fine, I am able to put files there).
With this approach I am UNABLE to send data to BigQuery. (After MergeContent I tried using PutBigQueryBatch, and used this command in the bq shell to get the schema I need:
bq show --format=prettyjson some_data_set.some_table_in_that_data_set | jq '.schema.fields'
I filled in all the fields as needed, and for Load file type I tried NEWLINE_DELIMITED_JSON, or CSV when I converted the data to CSV. I am not getting errors, but no data is uploaded into the table.)
What am I doing wrong? I basically want to map the data in such a way that each field's data ends up under the same field name.
The trick you are missing is using Records.
Instead of using X>SplitJson>JoltTransformJson>Merge>Convert>X, try just X>JoltTransformRecord>X with a JSON Reader and a CSV Writer. This skips a lot of inefficiency.
If you really need to split (and you should avoid splitting and merging unless totally necessary), you can use MergeRecord instead - again with a JSON Reader and CSV Writer. This would make your flow X>Split>Jolt>MergeRecord>X.

How to extract a field from a JSON object with QueryRecord

I have been struggling with this problem for a long time. I need to create a new JSON flowfile using QueryRecord, taking the array (field ref) from the input JSON field refs and skipping the wrapping object, as shown in the example below:
Input JSON flowfile
{
  "name": "name1",
  "desc": "full1",
  "refs": {
    "ref": [
      {
        "source": "source1",
        "url": "url1"
      },
      {
        "source": "source2",
        "url": "url2"
      }
    ]
  }
}
QueryRecord configuration
JsonTreeReader set up with Infer Schema, and JsonRecordSetWriter
select name, description, (array[rpath(refs, '//ref[*]')]) as sources from flowfile
Output JSON (desired)
{
  "name": "name1",
  "desc": "full1",
  "references": [
    {
      "source": "source1",
      "url": "url1"
    },
    {
      "source": "source2",
      "url": "url2"
    }
  ]
}
But I got this error:
QueryRecord Failed to write MapRecord[{references=[Ljava.lang.Object;@27fd935f, description=full1, name=name1}] with schema ["name" : "STRING", "description" : "STRING", "references" : "ARRAY[STRING]"] as a JSON Object due to java.lang.ClassCastException: null
Try the following approach; in your case it should work:
1) Read your JSON file fully (I imitated it with a GenerateFlowFile processor containing your example).
2) Add an EvaluateJsonPath processor which will put the 2 header fields (name, desc) into attributes.
3) Add a SplitJson processor which will split your JSON by the refs/ref groups (split on "$.refs.ref").
4) Add a ReplaceText processor which will add your header fields (name, desc) to the split lines (replace the "[{]" value with "{"name":"${json.name}","desc":"${json.desc}",").
5) It's done.
Hope this helps.
Solution: use JoltTransformJSON to transform the JSON according to a Jolt specification.

Datastore API Filter by field then sort ascending

I'm playing around with the Google Datastore API (runQuery method), and I am trying to run the GQL query string:
'Select * from Transaction Where User = "[Me]" ORDER BY Start[date] ASC'
Sending that JSON Object gives me the following error:
400
{
  "error": {
    "code": 400,
    "message": "no matching index found. recommended index is:\n- kind: Transaction\n properties:\n - name: User\n - name: Start\n",
    "status": "FAILED_PRECONDITION"
  }
}
Alternatively, if I run this request:
{
  "gqlQuery":
  {
    "allowLiterals":
    "queryString": "
      SELECT * FROM Transaction WHERE User = "[Me]"
    "
  }
}
I get a 200 Response
200
{
  "batch": {
    "entityResultType": "FULL",
    "entityResults": [
      {
        "entity": {
          "key": {
            "partitionId": {
              "projectId": "project-id-5200999099906492774"
            },
            "path": [
              {
                "kind": "Transaction",
                "id": "4693202737039992"
              }
            ]
          },....
or if I run this one to just order all the results:
'Select * from Transaction ORDER BY Start[date] ASC'
I get a 200 Response as well:
200
{
  "batch": {
    "entityResultType": "FULL",
    "entityResults": [
      {
        "entity": {
          "key": {
            "partitionId": {
              "projectId": "project-id-5200707080506492774"
            },
            "path": [
              {
                "kind": "Transaction",
                "id": "5641081148407808"
              }
            ]
          },...
So how can I do both operations in one query?
UPDATE:
As recommended below, I have used the Google Cloud Platform console to update the indexes manually. You create a valid YAML file in Notepad and then use the upload tool (the three-vertical-dots button on the right side of the cloud console command-line tool) to place it on the server and point to it from the command line. Here are my results so far:
nathaniel@project-id-5200707080555492774:~$ gcloud datastore create-indexes /home/nathaniel/index.yaml
Configurations to update:
  descriptor: [/home/nathaniel/index.yaml]
  type: [datastore indexes]
  target project: [project-id-5200707044406492774]
Do you want to continue (Y/n)? y
nathaniel@project-id-5200707080506492774:~$
This is the Yaml file I used:
indexes:
- kind: Transaction
  properties:
  - name: User
    direction: asc
  - name: Period
    direction: asc
  - name: Status
    direction: asc
  - name: auditStatus
    direction: asc
  - name: role
    direction: asc
  - name: Start
    direction: desc
  - name: End
    direction: asc
Still unable to complete the query, but it may take time to populate. I'll check back through the day and update my results. As of 1:35 PM EST, the indexes still don't seem to have updated.
Like the error message says, in order to run the query with a WHERE and an ORDER BY, you need a composite index on the User and Start properties for the Transaction kind. You can learn more about indexes at https://cloud.google.com/datastore/docs/concepts/indexes.
You can create indexes using the gcloud command line tool. Refer to the documentation at - https://cloud.google.com/sdk/gcloud/reference/datastore/create-indexes
Once your index is created/active, which may take a while depending on the amount of data you have, you should be able to run the first query.
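For completeness, once the index is serving, the combined query from the question can be sent to the v1 REST endpoint roughly like this (a minimal sketch; the access-token handling is an assumption, since the original post appears to use the API explorer, and the project ID is a placeholder taken from the question):

// Hypothetical access token; the API explorer normally handles auth for you.
const accessToken = "ya29.placeholder-token";
const projectId = "project-id-5200707080506492774";

const response = await fetch(
  `https://datastore.googleapis.com/v1/projects/${projectId}:runQuery`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      gqlQuery: {
        allowLiterals: true,
        // Both the filter and the sort in one query; this is what needs the User + Start composite index.
        queryString: 'SELECT * FROM Transaction WHERE User = "[Me]" ORDER BY Start ASC',
      },
    }),
  }
);

console.log(await response.json());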