Terms aggregation nested in another terms aggregation - NEST

How would you implement the following request in NEST?
I'm struggling with nesting one "terms" aggregation inside another "terms" aggregation.
http://localhost:9200/my_index/responses/_search
{
"size":0,
"aggs":{
"FILTER":{
"filter":{"term":{"field":{"value":"INPUT_VARIABLE_OF_THE_METHOD"}}},
"aggs":{
"TERMS_ROWS":{
"terms":{
"field":"row"
},
"aggs": {
"TERMS_COLUMNS": {
"terms": {
"field": "column"
}
}
}
}
}
}
}
}
Working on Elasticsearch 5.4
NEST API version 5.3.0
My mapping:
{
"responses": {
"dynamic": "strict",
"_parent": {
"type": "panelists"
},
"_routing": {
"required": true
},
"properties": {
"column": {
"type": "text",
"analyzer": "index_analyzer_text_value",
"fielddata": true
},
"field": {
"type": "keyword"
},
"row": {
"type": "text",
"analyzer": "index_analyzer_text_value",
"fielddata": true
}
}
}
}
My settings:
{
"hoard_v0.2_2018-03-19_6dcb7ba5eea448b99f21837a52b5699c": {
"settings": {
"index": {
"mapping": {
"total_fields": {
"limit": "10000"
}
},
"number_of_shards": "8",
"provided_name": "hoard_v0.2_2018-03-19_6dcb7ba5eea448b99f21837a52b5699c",
"creation_date": "1521447172612",
"analysis": {
"analyzer": {
"index_analyzer_text_value": {
"filter": [
"lowercase"
],
"type": "custom",
"tokenizer": "keyword"
}
}
},
"number_of_replicas": "1",
"uuid": "TmEBCNHXT_uN5N6q5XbNtw",
"version": {
"created": "5040099"
}
}
}
}
}
Thanks!

Found it!
var columnAggregation = new TermsAggregation("COLUMN_TERMS")
{
Field = "column",
Size = GetSize(termsItem.Limit)
};
searchRequest.Aggregations(a2 => a2
.Filter("FILTER", fAgg => fAgg
.Filter(f => f
.Term("field", termsItem.Field)
)
.Aggregations(a3 => a3
.Terms("TERMS_ROWS", a4 => a4
.Field("row")
.Aggregations( a5 => a5
.Terms("TERMS_COLUMNS", a6 => columnAggregation)
)
)
)
)
);

Related

Unomi: using the updateProperties event to enrich a profile

We are trying to enrich a profile using an updateProperties event.
Something is wrong; it doesn't work.
Steps of our process:
Search for the profile:
We use the private API /profiles/search.
So we find a profile and get the profileId "60de10fe-e6ff-11ec-8fea-0242ac120002" for the next step.
Enrich the profile:
We use the public API /context.json to POST this JSON:
{
"sessionId": null,
"profileId": "60de10fe-e6ff-11ec-8fea-0242ac120002",
"events": [
{
"itemType": "event",
"scope": "myScope",
"eventType": "updateProperties",
"properties": {
"update": {
"properties.age": "24"
},
"add": {
"properties.kids" : "1"
}
}
}
]
}
Response:
{
"profileId": "60de10fe-e6ff-11ec-8fea-0242ac120002",
"sessionId": null,
"profileProperties": null,
"sessionProperties": null,
"profileSegments": null,
"profileScores": null,
"filteringResults": null,
"processedEvents": 1,
"personalizations": null,
"trackedConditions": [
{
"parameterValues": {
"operator": "and",
"subConditions": [
{
"parameterValues": {
"formId": "zoneLeadFormEvent"
},
"type": "formEventCondition"
}
]
},
"type": "booleanCondition"
},
{
"parameterValues": {
"formId": "testFormTracking",
"pagePath": "/tracker/"
},
"type": "formEventCondition"
},
{
"parameterValues": {
"formId": "searchForm"
},
"type": "formEventCondition"
},
{
"parameterValues": {
"formId": "advancedSearchForm"
},
"type": "formEventCondition"
}
],
"anonymousBrowsing": false,
"consents": {}
}
Validation of the profile changes:
GET /cxs/profiles/60de10fe-e6ff-11ec-8fea-0242ac120002
The properties are not updated.
We use a rule that merges the profile on a custom identifier:
{
"metadata": {
"id": "update_with_custom_identifier",
"name": "UpdateWithCustomIdentifier",
"description": "Copy my properties to profile properties on update"
},
"condition": {
"parameterValues": {
"subConditions": [
{
"parameterValues": {
},
"type": "updatePropertiesEventCondition"
}
],
"operator": "and"
},
"type": "booleanCondition"
},
"actions": [
{
"parameterValues": {
"mergeProfilePropertyValue": "eventProperty::target.properties.myIdentifier",
"mergeProfilePropertyName": "mergeCustomIdentifier"
},
"type": "mergeProfilesOnPropertyAction"
},
{
"parameterValues": {
},
"type": "allEventToProfilePropertiesAction"
}
]
}
What is the correct method?
Thanks in advance.

MongoDB Lookup values based on dynamic field name

I'm pretty sure the below can be done, but I'm struggling to understand how to do it in MongoDB.
My data is structured like this (demo data):
db={
"recipes": [
{
"id": 1,
"name": "flatbread pizza",
"ingredients": {
"1010": 1,
"1020": 2,
"1030": 200
}
},
{
"id": 2,
"name": "cheese sandwich",
"ingredients": {
"1040": 1,
"1050": 2
}
}
],
"ingredients": [
{
"id": 1010,
"name": "flatbread",
"unit": "pieces"
},
{
"id": 1020,
"name": "garlic",
"unit": "clove"
},
{
"id": 1030,
"name": "tomato sauce",
"unit": "ml"
},
{
"id": 1040,
"name": "bread",
"unit": "slices"
},
{
"id": 1050,
"name": "cheese",
"unit": "slices"
}
]
}
The output I'm trying to achieve would look like this:
[
{
"id": 1,
"name": "flatbread pizza",
"flatbread": "1 pieces",
"garlic": "2 cloves",
"tomato sauce": "200 ml"
},
{
"id": 2,
"name": "cheese sandwich",
"bread": "1 slices",
"cheese": "2 slices"
}
]
I've tried several approaches, and I get stuck at the point where I need to do a lookup based on the ingredient name (which is actually the id). I tried using $objectToArray to turn it into a k-v document, but then I got stuck on how to construct the lookup pipeline.
This is not a simple solution, and it can probably be improved:
db.recipes.aggregate([
{
"$addFields": {
ingredientsParts: {
"$objectToArray": "$ingredients"
}
}
},
{
$unwind: "$ingredientsParts"
},
{
"$group": {
_id: "$id",
name: {
$first: "$name"
},
ingredientsParts: {
$push: {
v: "$ingredientsParts.v",
id: {
$toInt: "$ingredientsParts.k"
}
}
}
}
},
{
"$lookup": {
"from": "ingredients",
"localField": "ingredientsParts.id",
"foreignField": "id",
"as": "ingredients"
}
},
{
$unwind: "$ingredients"
},
{
"$addFields": {
"ingredientsPart": {
"$filter": {
input: "$ingredientsParts",
as: "item",
cond: {
$eq: [
"$$item.id",
"$ingredients.id"
]
}
}
}
}
},
{
$project: {
ingredients: 1,
ingredientsPart: {
"$arrayElemAt": [
"$ingredientsPart",
0
]
},
name: 1
}
},
{
"$addFields": {
units: {
k: "$ingredients.name",
v: {
"$concat": [
{
$toString: "$ingredientsPart.v"
},
" ",
"$ingredients.unit"
]
}
}
}
},
{
$group: {
_id: "$_id",
name: {
$first: "$name"
},
units: {
$push: "$units"
}
}
},
{
"$addFields": {
"data": {
"$arrayToObject": "$units"
}
}
},
{
"$addFields": {
"data.id": "$_id",
"data.name": "$name"
}
},
{
"$replaceRoot": {
"newRoot": "$data"
}
}
])
You can see it works here
As rickhg12hs said, it can be modeled better.

Elasticsearch aggregation query - products bought by clients who only buy that product

Each of my docs represents an order in the following format:
{
"name", // the client
"sku" // the product
}
So, suppose the following data exists:
{ "name": "rudolph", "sku": "apple" }
{ "name": "rudolph", "sku": "apple" }
{ "name": "rudolph", "sku": "apple" }
{ "name": "john", "sku": "banana" }
{ "name": "john", "sku": "banana" }
{ "name": "paul", "sku": "banana" }
{ "name": "paul", "sku": "apple" }
{ "name": "peter", "sku": "banana" }
I can get the clients who bought only 1 kind of item with the query:
{
"aggs": {
"clients": {
"terms": {
"field": "name"
},
"aggs": {
"distinct_sku": {
"cardinality": {
"field": "sku"
}
},
"unique_sku": {
"bucket_selector": {
"buckets_path": {
"qty": "distinct_sku"
},
"script": "params.qty == 1"
}
},
"aggs": {
"terms": {
"field": "sku"
}
}
}
}
},
"size": 0
}
which returns:
{
...
"aggregations": {
"clients": {
...
"buckets": [
{
"key": "rudolph",
"doc_count": 3,
...
"aggs": {
...
"buckets": [
{
"key": "apple",
"doc_count": 3
}
]
}
},
{
"key": "john",
"doc_count": 2,
...
"aggs": {
...
"buckets": [
{
"key": "banana",
"doc_count": 2
}
]
}
},
{
"key": "peter",
"doc_count": 1,
...
"aggs": {
...
"buckets": [
{
"key": "banana",
"doc_count": 1
}
]
}
}
]
}
}
}
Is it possible to adjust the query so that it returns how many times each item appears in the result?
Something like this:
{
"apple" : 1,
"banana" : 2
}
Thanks in advance.
EDIT
My database has a huge number of clients and a small number of products, so:
Iterating over the above aggregation result to build the desired result is not an option.
Sending a query per product is OK.
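A possible sketch, not a verified answer: since a query per product is acceptable, one idea is to keep the clients aggregation, add a filter sub-aggregation for the product of interest, and let the bucket_selector keep only the clients whose distinct-sku count is 1 and who bought that product; the number of buckets that survive is the count for that product. The index name, aggregation names, terms size and the example sku "apple" below are assumptions for illustration:
POST /my_index/_search
{
  "size": 0,
  "aggs": {
    "clients": {
      "terms": { "field": "name", "size": 10000 },
      "aggs": {
        "distinct_sku": { "cardinality": { "field": "sku" } },
        "bought_product": { "filter": { "term": { "sku": "apple" } } },
        "only_this_product": {
          "bucket_selector": {
            "buckets_path": {
              "qty": "distinct_sku",
              "bought": "bought_product>_count"
            },
            "script": "params.qty == 1 && params.bought > 0"
          }
        }
      }
    }
  }
}
On the sample data this should leave 1 client bucket for "apple" (rudolph) and 2 for "banana" (john and peter), matching the expected {"apple": 1, "banana": 2}. The bucket count per product still has to be read from the response, and the terms size must be large enough to cover all clients.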

Create a new Google Sheet with row or column groups

I'm trying to create a new spreadsheet using spreadsheets#create, with specified row groups.
In the API Explorer, I am entering the JSON below, which corresponds to the row grouping I want.
No errors are flagged or returned when I execute the call, but when the sheet is created, the specified grouping is not created - only the values are set.
{ "properties": {
"title": "Test Spreadsheet",
"locale": "en"
},
"sheets": [
{ "properties": {"title": "Test1"},
"data": [
{
"startRow": 0,
"startColumn": 0,
"rowData": [
{ "values": [
{ "userEnteredValue": { "stringValue": "Top1" } }
]
},
{ "values": [
{ "userEnteredValue": { "stringValue": "Top2" } }
]
},
{ "values": [
{ "userEnteredValue": { "stringValue": "" } },
{ "userEnteredValue": { "stringValue": "Top2A" } }
]
},
{ "values": [
{ "userEnteredValue": { "stringValue": "" } },
{ "userEnteredValue": { "stringValue": "Top2B" } }
]
},
{ "values": [
{ "userEnteredValue": { "stringValue": "" } },
{ "userEnteredValue": { "stringValue": "Top2C" } }
]
},
{ "values": [
{ "userEnteredValue": { "stringValue": "Top3" } }
]
}
]
}
],
"rowGroups": [
{ "range": {
"dimension": "ROWS",
"startIndex": 2,
"endIndex": 5
}
}
]
}
]
}
Even when I create the rowGroups JSON directly on the page, with its structured editor to make sure it is properly defined, the created spreadsheet still doesn't group the specified rows. I have triple-checked all my objects from the top down, and can't see what I am doing wrong.
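A hedged workaround, in case spreadsheets#create simply ignores rowGroups (I have not confirmed that it does): create the spreadsheet first, then add the group with spreadsheets.batchUpdate and an addDimensionGroup request. The sheetId 0 below is an assumption; use the sheetId returned for the created sheet:
POST https://sheets.googleapis.com/v4/spreadsheets/{spreadsheetId}:batchUpdate
{
  "requests": [
    {
      "addDimensionGroup": {
        "range": {
          "sheetId": 0,
          "dimension": "ROWS",
          "startIndex": 2,
          "endIndex": 5
        }
      }
    }
  ]
}
This groups the same rows as the range in the create request above (startIndex inclusive, endIndex exclusive).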

Error in analyzer while creating an index

I am trying to create an index in Elasticsearch with the settings below, but I am facing the following error:
RemoteTransportException[[ys2order-stg2-01][inet[/64.101.206.15:9300]][indices:admin/mapping/put]];
nested: MapperParsingException[Analyzer [autocomplete] not found for
field [DESCRIPTION_AUTO]]; status: 400
{
"settings": {
"index": {
"analysis": {
"filter": {
"autocomplete_filter": {
"type": "edge_ngram",
"min_gram": 1,
"max_gram": 20
}
},
"analyzer": {
" autocomplete": {
"type": "custom",
"filter": [
"lowercase",
"autocomplete_filter"
],
"tokenizer": "standard"
}
}
},
"refresh_interval": -1,
"number_of_replicas": 1,
"number_of_shards": 4,
"index_analyzer": "default_index",
"search_analyzer": "default_search"
}
},
"mappings": {
"itemsnew": {
"properties": {
"DESCRIPTION_AUTO": {
"index_analyzer": "autocomplete",
"search_analyzer": "standard",
"type": "string"
}
}
}
}
}
You need to remove the leading space in your analyzer name:
"analyzer": {
" autocomplete": { <--- remove initial space here
"type": "custom",
"filter": [
"lowercase",
"autocomplete_filter"
],
"tokenizer": "standard"
}
....
"DESCRIPTION_AUTO": {
"index_analyzer": "autocomplete", <-- because there is no initial space here !!!
"search_analyzer": "standard",
"type": "string"
}
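For completeness, here is the same analysis block with the leading space removed; only the analyzer name changes, everything else stays as in the original settings:
"analysis": {
  "filter": {
    "autocomplete_filter": {
      "type": "edge_ngram",
      "min_gram": 1,
      "max_gram": 20
    }
  },
  "analyzer": {
    "autocomplete": {
      "type": "custom",
      "filter": [
        "lowercase",
        "autocomplete_filter"
      ],
      "tokenizer": "standard"
    }
  }
}
With this, the "index_analyzer": "autocomplete" reference on DESCRIPTION_AUTO resolves and the MapperParsingException goes away.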