How do I write a SQL query for the Azure Cosmos DB documents shown below? - sql

I have the following documents in an Azure Cosmos DB collection.
// Document 1
{
"c": {
"firstName": "Robert"
},
"elements" : [
{
"a": "x2",
"b": {
"name": "yadda2",
"id": 1
}
}
]
}
// Document 2
{
"c": {
"firstName": "Steve"
},
"elements" : [
{
"a": "x5",
"b": {
"name": "yadda2",
"id": 4
}
},
{
"a": "x3",
"b": {
"name": "yadda8",
"id": 5
}
}
]
}
// Document 3
{
"c": {
"firstName": "Johnson"
},
"elements" : [
{
"a": "x4",
"b": {
"name": "yadda28",
"id": 25
}
},
{
"a": "x5",
"b": {
"name": "yadda30",
"id": 37
}
}
]
}
I need to write a query that returns all documents that have a "b" object whose name is "yadda2" (i.e. /elements/*/b/name=yadda2). In other words, this query should return documents 1 and 2 but NOT document 3.
I tried the following, but it did not work:
SELECT * FROM x where ARRAY_CONTAINS(x.elements, {b: { name: "yadda2"}})
What am I doing wrong?

Just modify your SQL to:
SELECT * FROM x where ARRAY_CONTAINS(x.elements, {b: { name: "yadda2"}},true)
Result: documents 1 and 2 are returned.
Based on the official documentation, the boolean third argument specifies whether the match must be full or may be partial.
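For comparison, the same filter can also be written with a JOIN over the elements array instead of the partial-match flag. This is only a sketch; a document with several matching elements would be returned once per match, so you may want DISTINCT if your API version supports it:
SELECT VALUE x
FROM x
JOIN e IN x.elements
WHERE e.b.name = "yadda2"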
Hope it helps you.

Related

Nested "for loop" searches in SQL - Azure CosmosDB

I am using Cosmos DB and have a document with the following simplified structure:
{
"id1":"123",
"stuff": [
{
"id2": "stuff",
"a": {
"b": {
"c": {
"d": [
{
"e": [
{
"id3": "things",
"name": "animals",
"classes": [
{
"name": "ostrich",
"meta": 1
},
{
"name": "big ostrich",
"meta": 1
}
]
},
{
"id3": "default",
"name": "other",
"classes": [
{
"name": "green trees",
"meta": 1
},
{
"name": "trees",
"score": 1
}
]
}
]
}
]
}
}
}
}
]
}
My issue is that I have an array of these documents and need to search name to see if it matches my search word. For example, I want both "big trees" and "trees" to be returned if a user types in "trees".
So currently I push every document into an array and do the following:
For each document
for each stuff
for each a.b.c.d[0].e
for each classes
var splice = name.split(' ')
if (splice.includes(searchWord))
return id1, id2 and id3.
Using Cosmos DB, I am using SQL with the following code:
client.queryDocuments(
collection,
`SELECT * FROM root r`
).toArray((err, results) => {stuff});
This effectively brings every document in my collection into an array so that I can perform the manual search described above.
This is going to cause issues when I have thousands or millions of documents in the array, and I believe I should be leveraging the search mechanics available within Cosmos itself. Is anyone able to help me work out what SQL query would be able to perform this type of function?
Having searched everything, is it also possible to search only the 5 latest documents?
Thanks for any insight in advance!
1. Is anyone able to help me work out what SQL query would be able to perform this type of function?
According to your sample and description, I suggest using ARRAY_CONTAINS in Cosmos DB SQL. Please refer to my sample:
Sample documents:
[
{
"id1": "123",
"stuff": [
{
"id2": "stuff",
"a": {
"b": {
"c": {
"d": [
{
"e": [
{
"id3": "things",
"name": "animals",
"classes": [
{
"name": "ostrich",
"meta": 1
},
{
"name": "big ostrich",
"meta": 1
}
]
},
{
"id3": "default",
"name": "other",
"classes": [
{
"name": "green trees",
"meta": 1
},
{
"name": "trees",
"score": 1
}
]
}
]
}
]
}
}
}
}
]
},
{
"id1": "456",
"stuff": [
{
"id2": "stuff2",
"a": {
"b": {
"c": {
"d": [
{
"e": [
{
"id3": "things2",
"name": "animals",
"classes": [
{
"name": "ostrich",
"meta": 1
},
{
"name": "trees",
"meta": 1
}
]
},
{
"id3": "default2",
"name": "other",
"classes": [
{
"name": "green trees",
"meta": 1
},
{
"name": "trees",
"score": 1
}
]
}
]
}
]
}
}
}
}
]
},
{
"id1": "789",
"stuff": [
{
"id2": "stuff3",
"a": {
"b": {
"c": {
"d": [
{
"e": [
{
"id3": "things3",
"name": "animals",
"classes": [
{
"name": "ostrich",
"meta": 1
},
{
"name": "big",
"meta": 1
}
]
},
{
"id3": "default3",
"name": "other",
"classes": [
{
"name": "big trees",
"meta": 1
}
]
}
]
}
]
}
}
}
}
]
}
]
Query:
SELECT DISTINCT c.id1, stuff.id2, e.id3 FROM c
JOIN stuff IN c.stuff
JOIN d IN stuff.a.b.c.d
JOIN e IN d.e
WHERE ARRAY_CONTAINS(e.classes, {name: "trees"}, true)
OR ARRAY_CONTAINS(e.classes, {name: "big trees"}, true)
Output: the (id1, id2, id3) combinations whose classes contain "trees" or "big trees".
2. Having searched everything, is it also possible to search the 5 latest documents?
Per my research, features like LIMIT are not supported in Cosmos DB so far. However, TOP is supported. So if you add a sort field (such as a date or an id), you could use SQL like:
SELECT TOP 5 * FROM c ORDER BY c.sort DESC
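For instance, if the built-in _ts (last-modified timestamp) system property is an acceptable notion of "latest" for your data, a sketch of such a query would be:
SELECT TOP 5 * FROM c ORDER BY c._ts DESC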

ES6: Joining of subqueries to two different rows through the AND operator

I have the following index:
+-----+-----+-------+
| oid | tag | value |
+-----+-----+-------+
| 1 | t1 | aaa |
| 1 | t2 | bbb |
| 2 | t1 | aaa |
| 2 | t2 | ddd |
| 2 | t3 | eee |
+-----+-----+-------+
where: oid - object ID, tag - property name, value - property value.
Mappings:
"mappings": {
"document": {
"_all": { "enabled": false },
"properties": {
"oid": { "type": "integer" },
"tag": { "type": "text" }
"value": { "type": "text" },
}
}
}
This simple structure allows storing any number of object properties, and it is quite simple to search by one property or by several using the OR logical operator.
E.g. get object oids where:
(tag='t1' AND value='aaa') OR (tag='t2' AND value='ddd')
ES query:
{
"_source": { "includes":["oid"] },
"query": {
"bool": {
"should": [
{
"bool": {
"must": [
{ "term": { "tag": "t1" } },
{ "term": { "value": "aaa" } }
]
}
},
{
"bool": {
"must": [
{ "term": { "tag": "t2" } },
{ "term": { "value": "ddd" } }
]
}
}
],
"minimum_should_match": "1"
}
}
}
But it is hard to search by two or more properties using the AND logical operator. So the question is how to join two sub-queries against two different records through the AND operator. E.g. get object oids where:
(tag='t1' AND value='aaa') AND (tag='t2' AND value='ddd')
In this case the result must be: { "oid": "2" }
The data being searched is contained in two different records, so applying MUST instead of SHOULD in the previous example returns nothing in this case.
I have two equivalents in SQL of what I need:
SELECT i1.[oid]
FROM [index] i1 INNER JOIN [index] i2 ON i1.oid = i2.oid
WHERE
(i1.tag='t1' AND i1.value='aaa')
AND
(i2.tag='t2' AND i2.value='ddd')
---------
SELECT [oid] FROM [index] WHERE tag='t1' AND value='aaa'
INTERSECT
SELECT [oid] FROM [index] WHERE tag='t2' AND value='ddd'
Doing the two requests and merging them on the client is not an option.
The Elasticsearch version is 6.1.1.
In order to achieve what you want, you need to use the nested type, i.e. your mapping should look like this:
PUT my-index
{
"mappings": {
"doc": {
"properties": {
"oid": {
"type": "keyword"
},
"data": {
"type": "nested",
"properties": {
"tag": {
"type": "keyword"
},
"value": {
"type": "text"
}
}
}
}
}
}
}
The documents would be indexed like this:
PUT /my-index/doc/_bulk
{ "index": {"_id": 1}}
{ "oid": 1, "data": [ {"tag": "t1", "value": "aaa"}, {"tag": "t2", "value": "bbb"}] }
{ "index": {"_id": 2}}
{ "oid": 2, "data": [ {"tag": "t1", "value": "aaa"}, {"tag": "t2", "value": "ddd"}, {"tag": "t3", "value": "eee"}] }
Then you can make your query work like this:
POST my-index/_search
{
"query": {
"bool": {
"filter": [
{
"nested": {
"path": "data",
"query": {
"bool": {
"filter": [
{
"term": {
"data.tag": "t1"
}
},
{
"term": {
"data.value": "aaa"
}
}
]
}
}
}
},
{
"nested": {
"path": "data",
"query": {
"bool": {
"filter": [
{
"term": {
"data.tag": "t2"
}
},
{
"term": {
"data.value": "ddd"
}
}
]
}
}
}
}
]
}
}
}
There might be one way, which is a little ugly: adding a terms aggregation to your query body.
{
"query": {
"bool": {
"should": [
{
"bool": {
"must": [
{ "term": { "tag": "t1" } },
{ "term": { "value": "aaa" } }
]
}
},
{
"bool": {
"must": [
{ "term": { "tag": "t2" } },
{ "term": { "value": "ddd" } }
]
}
}
],
"minimum_should_match": "1"
}
},
"size": 0,
"aggs": {
"find_joined_oid": {
"terms": {
"field": "oid.keyword"
}
}
}
}
If everything goes right, this will output something like
{
"took": 123,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 123,
"max_score": 0,
"hits": []
},
"aggregations": {
"find_joined_oid": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "1",
"doc_count": 1
},
{
"key": "2",
"doc_count": 2
}
]
}
}
}
Here, in the "aggregations" part, "key": "1" means your "oid": "1", and "doc_count": 1 means there is one hit in the query with "oid": "1".
Since you know how many tag/value pairs you are querying to match, say N, only those "key"s in the aggregation result whose "doc_count" equals N are the result you're pursuing. In this example, you are querying tag:t1 (with value aaa) and tag:t2 (with value ddd), thus N=2. You can iterate over the result bucket list to find those "key"s whose "doc_count" equals 2.
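If you would rather have Elasticsearch drop the partial matches for you, one option is a bucket_selector pipeline sub-aggregation that keeps only the buckets whose doc_count equals N. This is only a sketch, assuming N=2 and the same oid.keyword field used above:
"aggs": {
  "find_joined_oid": {
    "terms": { "field": "oid.keyword" },
    "aggs": {
      "full_matches_only": {
        "bucket_selector": {
          "buckets_path": { "matched": "_count" },
          "script": "params.matched == 2"
        }
      }
    }
  }
}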
However, there should be a better way. If you alter your mapping to a document-like style, i.e. store all fields of one oid in one doc, life will be much easier.
{
"properties": {
"oid": { "type": "integer" },
"tag-1": { "type": "text" }
"value-1": { "type": "text" },
"tag-2": { "type": "text" }
"value-2": { "type": "text" }
}
}
When you want to add new tag-value pairs, just fetch the original doc for the oid concerned, put the new tag-value pair into it, and put the whole new doc back into Elasticsearch with the same _id you got from the original one. Most of the time dynamic mapping will work properly in your case, which means you don't need to declare the mapping for new fields explicitly.
NoSQL databases like Elasticsearch are not designed to handle the kind of SQL-style query you are asking for.

Unwind an array in DocumentDB query

I have documents that look like this:
[
{
"id": "e1bb9b05-11f2-459e-37d3-9bf9fed56c96",
"name": "bulbasaur",
"type": [
{
"slot": 2,
"type": {
"url": "https://pokeapi.co/api/v2/type/4/",
"name": "poison"
}
},
{
"slot": 1,
"type": {
"url": "https://pokeapi.co/api/v2/type/12/",
"name": "grass"
}
}
]
}
]
The following query is about as close as I can get, but not quite the output I'm hoping for.
Query
SELECT
c.id, c.name, t.type.name as type
FROM
c
JOIN
t IN c.types
WHERE
c.name = "bulbasaur"
Result
[
{
"id": "e1bb9b05-11f2-459e-37d3-9bf9fed56c96",
"name": "bulbasaur",
"type": "poison"
},
{
"id": "e1bb9b05-11f2-459e-37d3-9bf9fed56c96",
"name": "bulbasaur",
"type": "grass"
}
]
Hoping for
[
{
"id": "e1bb9b05-11f2-459e-37d3-9bf9fed56c96",
"name": "bulbasaur",
"types": ["poison", "grass"]
}
]
Is this possible with a DocumentDB query?
This requires use of DocumentDB UDFs, which can extend query functionality with custom transformations. For example, register this:
function unwindTypeArray(value) {
var result = { id: value.id, name: value.name, types: []};
for (var idx in value.type) {
console.log(idx);
var name = value.type[idx].type.name;
result.types.push(name);
}
return result;
}
Then call it inside a query like:
SELECT udf.unwindTypeArray(c) FROM c WHERE c.name = "bulbasaur"
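As a side note, newer versions of the SQL API (now Azure Cosmos DB) added correlated subqueries with an ARRAY expression, which can build the same shape without registering a UDF. The following is only a sketch under that assumption, and it uses the type field name from the sample document (the query in the question joins c.types):
SELECT c.id, c.name,
       ARRAY(SELECT VALUE t.type.name FROM t IN c.type) AS types
FROM c
WHERE c.name = "bulbasaur"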

Transform JSON response with lodash

I'm new to lodash (v3.10.1) and having a hard time understanding it.
Hope someone can help.
I have an input something like this:
[
{"id":1,"name":"Matthew","company":{"id":1,"name":"abc","industry":{"id":5,"name":"Medical"}}},
{"id":2,"name":"Mark","company":{"id":1,"name":"abc","industry":{"id":5,"name":"Medical"}}},
{"id":3,"name":"Luke","company":{"id":1,"name":"abc","industry":{"id":5,"name":"Medical"}}},
{"id":4,"name":"John","company":{"id":1,"name":"abc","industry":{"id":5,"name":"Medical"}}},
{"id":5,"name":"Paul","company":{"id":1,"name":"abc","industry":{"id":5,"name":"Medical"}}}
];
I would like to output this, or something close to it:
{
"industries": [
{
"industry":{
"id":5,
"name":"Medical",
"companies": [
{
"company":{
"id":1,
"name":"abc",
"employees": [
{"id":1,"name":"Matthew"},
{"id":2,"name":"Mark"},
{"id":3,"name":"Luke"},
{"id":4,"name":"John"},
{"id":5,"name":"Paul"}
]
}
}
]
}
}
]
}
Here's something that gets you close to what you want. I structured the output to be an object instead of an array. You don't need the industries or industry properties in your example output. The output structure looks like this:
{
"industry name": {
"id": "id of industry",
"companies": [
{
"company name": "name of company",
"id": "id of company",
"employees": [
{
"id": "id of company",
"name": "name of employee"
}
]
}
]
}
}
I use the _.chain function to wrap the collection with a lodash wrapper object. This enables me to explicitly chain lodash functions.
From there, I use the _.groupBy function to group elements of the collection by their industry name. Since I'm chaining, I don't have to pass in the array again to the function. It's implicitly passed via the lodash wrapper. The second argument of the _.groupBy is the path to the value I want to group elements by. In this case, it's the path to the industry name: company.industry.name. _.groupBy returns an object with each employee grouped by their industry (industries are keys for this object).
I then use _.transform to transform each industry object. _.transform is essentially _.reduce except that the result returned from _.transform is always an object.
The function passed to _.transform gets executed against each key/value pair in the object. In the function, I use _.groupBy again to group employees by company. Based on the results of _.groupBy, I map the values to the final structure I want for each employee object.
I then call the _.value function because I want to unwrap the output collection from the lodash wrapper object.
I hope this made sense. If it doesn't, I highly recommend reading Lo-Dash Essentials. After reading the book, I finally got why lodash is so useful.
"use strict";
var _ = require('lodash');
var emps = [
{ "id": 1, "name": "Matthew", "company": { "id": 1, "name": "abc", "industry": { "id": 5, "name": "Medical" } } },
{ "id": 2, "name": "Mark", "company": { "id": 1, "name": "abc", "industry": { "id": 5, "name": "Medical" } } },
{ "id": 3, "name": "Luke", "company": { "id": 1, "name": "abc", "industry": { "id": 5, "name": "Medical" } } },
{ "id": 4, "name": "John", "company": { "id": 1, "name": "abc", "industry": { "id": 5, "name": "Medical" } } },
{ "id": 5, "name": "Paul", "company": { "id": 1, "name": "abc", "industry": { "id": 5, "name": "Medical" } } }
];
var result = _.chain(emps)
.groupBy("company.industry.name")
.transform(function(result, employees, industry) {
result[industry] = {};
result[industry].id = _.get(employees[0], "company.industry.id");
result[ industry ][ 'companies' ] = _.map(_.groupBy(employees, "company.name"), function( employees, company ) {
return {
company: company,
id: _.get(employees[ 0 ], 'company.id'),
employees: _.map(employees, _.partialRight(_.pick, [ 'id', 'name' ]))
};
});
return result;
})
.value();
Results from your example are as follows:
{
"Medical": {
"id": 5,
"companies": [
{
"company": "abc",
"id": 1,
"employees": [
{
"id": 1,
"name": "Matthew"
},
{
"id": 2,
"name": "Mark"
},
{
"id": 3,
"name": "Luke"
},
{
"id": 4,
"name": "John"
},
{
"id": 5,
"name": "Paul"
}
]
}
]
}
}
If you ever wanted the exact same structure as in the question, I solved it using the jsonata library:
(
/* lets flatten it out for ease of accessing the properties*/
$step1 := $ ~> | $ |
{
"employee_id": id,
"employee_name": name,
"company_id": company.id,
"company_name": company.name,
"industry_id": company.industry.id,
"industry_name": company.industry.name
},
["company", "id", "name"] |;
/* now the magic begins*/
$step2 := {
"industries":
[($step1{
"industry" & $string(industry_id): ${
"id": $distinct(industry_id)#$I,
"name": $distinct(industry_name),
"companies": [({
"company" & $string(company_id): {
"id": $distinct(company_id),
"name": $distinct(company_name),
"employees": [$.{
"id": $distinct(employee_id),
"name": $distinct(employee_name)
}]
}
} ~> $each(function($v){ {"company": $v} }))]
}
} ~> $each(function($v){ {"industry": $v} }))]
};
)
You can see it in action on the live demo site: https://try.jsonata.org/VvW4uTRz_

hierarchical faceting with Elasticsearch

I'm using Elasticsearch and need to implement faceted search for a hierarchical object as follows:
category 1 (10)
subcategory 1 (4)
subcategory 2 (6)
category 2 (X)
...
So I need to get facets for two related objects. The documentation says it's possible to get this kind of facet for numeric values, but I need it for strings: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-facets-terms-stats-facet.html
Here is another interesting topic, unfortunately it's old: http://elasticsearch-users.115913.n3.nabble.com/Pivot-facets-td2981519.html
Is this possible with Elasticsearch?
If so, how can I do that?
The previous solution works really well as long as you have no more than one multi-level tag on a single document. Otherwise a simple aggregation doesn't work, because the flat structure of the Lucene fields mixes the results in the internal aggregation.
See the example below:
DELETE /test_category
POST /test_category
# Insert a doc with 2 hierarchical tags
POST /test_category/test/1
{
"categories": [
{
"cat_1": "1",
"cat_2": "1.1"
},
{
"cat_1": "2",
"cat_2": "2.2"
}
]
}
# Simple two-levels aggregations query
GET /test_category/test/_search?search_type=count
{
"aggs": {
"main_category": {
"terms": {
"field": "categories.cat_1"
},
"aggs": {
"sub_category": {
"terms": {
"field": "categories.cat_2"
}
}
}
}
}
}
That's the WRONG response that I got on ES 1.4, where the fields in the internal aggregation are mixed at the document level:
{
...
"aggregations": {
"main_category": {
"buckets": [
{
"key": "1",
"doc_count": 1,
"sub_category": {
"buckets": [
{
"key": "1.1",
"doc_count": 1
},
{
"key": "2.2", <= WRONG
"doc_count": 1
}
]
}
},
{
"key": "2",
"doc_count": 1,
"sub_category": {
"buckets": [
{
"key": "1.1", <= WRONG
"doc_count": 1
},
{
"key": "2.2",
"doc_count": 1
}
]
}
}
]
}
}
}
A solution is to use nested objects. These are the steps:
1) Define a new type in the schema with nested objects
POST /test_category/test2/_mapping
{
"test2": {
"properties": {
"categories": {
"type": "nested",
"properties": {
"cat_1": {
"type": "string"
},
"cat_2": {
"type": "string"
}
}
}
}
}
}
# Insert a single document
POST /test_category/test2/1
{"categories":[{"cat_1":"1","cat_2":"1.1"},{"cat_1":"2","cat_2":"2.2"}]}
2) Run a nested aggregation query:
GET /test_category/test2/_search?search_type=count
{
"aggs": {
"categories": {
"nested": {
"path": "categories"
},
"aggs": {
"main_category": {
"terms": {
"field": "categories.cat_1"
},
"aggs": {
"sub_category": {
"terms": {
"field": "categories.cat_2"
}
}
}
}
}
}
}
}
That's the response, now correct, that I got:
{
...
"aggregations": {
"categories": {
"doc_count": 2,
"main_category": {
"buckets": [
{
"key": "1",
"doc_count": 1,
"sub_category": {
"buckets": [
{
"key": "1.1",
"doc_count": 1
}
]
}
},
{
"key": "2",
"doc_count": 1,
"sub_category": {
"buckets": [
{
"key": "2.2",
"doc_count": 1
}
]
}
}
]
}
}
}
}
The same solution can be extended to a hierarchy facet with more than two levels.
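For instance, a third level can be added as one more terms sub-aggregation inside the nested aggregation. This is only a sketch and assumes a hypothetical categories.cat_3 field in the same nested mapping:
# Sketch: three-level aggregation over a nested categories object
GET /test_category/test2/_search?search_type=count
{
  "aggs": {
    "categories": {
      "nested": { "path": "categories" },
      "aggs": {
        "main_category": {
          "terms": { "field": "categories.cat_1" },
          "aggs": {
            "sub_category": {
              "terms": { "field": "categories.cat_2" },
              "aggs": {
                "sub_sub_category": {
                  "terms": { "field": "categories.cat_3" }
                }
              }
            }
          }
        }
      }
    }
  }
}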
Currently, Elasticsearch does not support hierarchical faceting out of the box. But the upcoming 1.0 release features a new aggregations module that can be used to get this kind of facet (which is more like a pivot facet than a hierarchical facet). Version 1.0 is currently in beta; you can download the second beta and test out aggregations yourself. Your example might look like:
curl -XPOST 'localhost:9200/_search?pretty' -d '
{
"aggregations": {
"main category": {
"terms": {
"field": "cat_1",
"order": {"_term": "asc"}
},
"aggregations": {
"sub category": {
"terms": {
"field": "cat_2",
"order": {"_term": "asc"}
}
}
}
}
}
}'
The idea is to have a different field for each level of faceting and to bucket your facets based on the terms of the first level (cat_1). These aggregations would then have sub-buckets based on the terms of the second level (cat_2). The result may look like:
{
"aggregations" : {
"main category" : {
"buckets" : [ {
"key" : "category 1",
"doc_count" : 10,
"sub category" : {
"buckets" : [ {
"key" : "subcategory 1",
"doc_count" : 4
}, {
"key" : "subcategory 2",
"doc_count" : 6
} ]
}
}, {
"key" : "category 2",
"doc_count" : 7,
"sub category" : {
"buckets" : [ {
"key" : "subcategory 1",
"doc_count" : 3
}, {
"key" : "subcategory 2",
"doc_count" : 4
} ]
}
} ]
}
}
}