Unable to query a nested object in a CosmosDB entry - sql

Is there a way to query a nested object within a document in CosmosDB when there are multiple nested items with the same id:
{
  "id": "GUID",
  "items": [
    {
      "item": {
        "item_id": "123456",
        "order_name": "name1"
      },
      "item": {
        "item_id": "123456",
        "order_name": "name2"
      }
    }
  ]
}
I'd be looking to check the item_id and pull back that item object. The query SELECT c.item FROM c WHERE c.item.item_id = '123456' only works if there is a single item; with more than one, it does not return anything. The query SELECT * FROM c WHERE c.item.item_id = '123456' doesn't bring back anything either.

I'm not sure I fully understand your question, but here are a couple of ways to work with nested queries:
Option 1:
SELECT
c.id,
items
FROM c
JOIN (SELECT
b.item_id,
b.order_name
FROM b WHERE b.parentID = c.id) AS items
The query will return the "items" field as an object
Option 2:
SELECT
c.id,
ARRAY (
SELECT
b.item_id,
b.order_name
FROM b WHERE b.parentID = c.id
) AS items
FROM c
The query will return the "items" field as an array of objects
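For the document shape in the question, another option is a self-join over the items array. A minimal sketch, assuming the container alias is c and that items is an array of entries that each wrap an item object (as in the posted JSON):
-- Hypothetical: iterate the document's own items array and filter on the nested id
SELECT VALUE i.item
FROM c
JOIN i IN c.items
WHERE i.item.item_id = '123456'
SELECT VALUE returns just the matching item objects rather than a wrapped field.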

Related

How can I get keys from each object in an array (PostgreSQL)?

I ran
let statuses = await t.any(`SELECT DISTINCT status FROM mails`)
and got
"statuses": [
{
"status": "error"
},
{
"status": "success"
}
]
How can I get an array with the keys of the objects, i.e. ['error', 'success']?
Assuming status is a jsonb column (which it should be), you can do:
select distinct st.status
from mails m
cross join jsonb_array_elements(m.status -> 'statuses') as st(status)
If status is a json column, you will need to use json_array_elements() instead.
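If status is actually a plain text column, as the SELECT DISTINCT status FROM mails in the question suggests, a small sketch (assuming the table and column names from the question) that collapses the distinct values into a single array, so you get something like {error,success} back in one row:
-- array_agg folds the distinct status values into one Postgres array
SELECT array_agg(DISTINCT status) AS statuses
FROM mails;
With pg-promise this typically arrives as a single row whose statuses field is already a JavaScript array.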

In BigQuery, how do I check if two ARRAY of STRUCTs are equal

I have a query that outputs two array of structs:
SELECT modelId, oldClassCounts, newClassCounts
FROM `xyz`
GROUP BY 1
How do I create another column that is TRUE if oldClassCounts = newClassCounts?
Here is a sample result in JSON:
[
  {
    "modelId": "FBF21609-65F8-4076-9B22-D6E277F1B36A",
    "oldClassCounts": [
      {
        "id": "A041EBB1-E041-4944-B231-48BC4CCE025B",
        "count": "33"
      },
      {
        "id": "B8E4812B-A323-47DD-A6ED-9DF877F501CA",
        "count": "82"
      }
    ],
    "newClassCounts": [
      {
        "id": "A041EBB1-E041-4944-B231-48BC4CCE025B",
        "count": "33"
      },
      {
        "id": "B8E4812B-A323-47DD-A6ED-9DF877F501CA",
        "count": "82"
      }
    ]
  }
]
I want the equality column to be TRUE if oldClassCounts and newClassCounts are exactly the same like the output above.
Anything else should be false.
I would go about it with this solution:
#standardSQL
WITH xyz AS (
SELECT "FBF21609-65F8-4076-9B22-D6E277F1B36A" AS modelId,
[STRUCT("A041EBB1-E041-4944-B231-48BC4CCE025B" as id, "33" as count),
STRUCT("B8E4812B-A323-47DD-A6ED-9DF877F501CA" as id, "82" as count)] AS oldClassCounts,
[STRUCT("A041EBB1-E041-4944-B231-48BC4CCE025B" as id, "33" as count),
STRUCT("B8E4812B-A323-47DD-A6ED-9DF877F501CA" as id, "82" as count)] as newClassCounts),
o as (SELECT modelId, id, count, array_length(oldClassCounts) as len
      FROM xyz, UNNEST(oldClassCounts) as old_c),
n as (SELECT modelId, id, count, array_length(newClassCounts) as len
      FROM xyz, UNNEST(newClassCounts) as new_c),
uneq as (select * from o except distinct select * from n)
select xyz.*, IF(uneq.modelId is not null, false, true) as equal
from xyz
left join (select distinct modelId from uneq) uneq
  on xyz.modelId = uneq.modelId
It works regardless of the order of elements or of duplicates within the arrays. The idea is that we treat each of the arrays as a separate temporary table, remove all elements that exist in one but not the other (using except distinct), and then add an extra check on the length of the arrays in case there are duplicates, e.g.
"FBF21609-65F8-4076-9B22-D6E277F1B36A" AS modelId,
[STRUCT("A041EBB1-E041-4944-B231-48BC4CCE025B" as id, "33" as count),
STRUCT("B8E4812B-A323-47DD-A6ED-9DF877F501CA" as id, "82" as count),
STRUCT("B8E4812B-A323-47DD-A6ED-9DF877F501CA" as id, "82" as count)]
I would consider comparing the results of the TO_JSON_STRING function applied to both of these arrays.
In the query it would be done in the following way:
SELECT modelId,
oldClassCounts,
newClassCounts,
CASE WHEN TO_JSON_STRING(oldClassCounts) = TO_JSON_STRING(newClassCounts)
THEN true
ELSE false
END
FROM `xyz`;
I'm not sure about the GROUP BY 1 part, because none of the fields are grouped or aggregated.
It is not going to work if the order of elements in the arrays is different. This solution is not perfect, but it worked for the data you provided.
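If the order can differ, one possible workaround (a sketch only, assuming the xyz schema above) is to canonicalize each array with an ARRAY subquery that sorts its elements before serializing:
SELECT modelId,
  oldClassCounts,
  newClassCounts,
  -- sort both arrays by (id, count) so differently ordered but identical contents compare equal
  TO_JSON_STRING(ARRAY(SELECT o FROM UNNEST(oldClassCounts) o ORDER BY o.id, o.count)) =
  TO_JSON_STRING(ARRAY(SELECT n FROM UNNEST(newClassCounts) n ORDER BY n.id, n.count)) AS equal
FROM `xyz`;
This still treats duplicates literally, so [A, A, B] and [A, B, B] would correctly compare as not equal.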

Postgresql search if exists in nested jsonb

I'm new to jsonb queries and I have a problem. Inside an 'items' table, I have an 'id' column and a 'data' jsonb column. Here is what the data can look like:
[
  {
    "paramId": 3,
    "value": "dog"
  },
  {
    "paramId": 4,
    "value": "cat"
  },
  {
    "paramId": 5,
    "value": "fish"
  },
  {
    "paramId": 6,
    "value": "",
    "fields": [
      {
        "paramId": 3,
        "value": "cat"
      },
      {
        "paramId": 4,
        "value": "dog"
      }
    ]
  },
  {
    "paramId": 6,
    "value": "",
    "fields": [
      {
        "paramId": 5,
        "value": "cat"
      },
      {
        "paramId": 3,
        "value": "dog"
      }
    ]
  }
]
The value in data is always an array of objects, but sometimes an object can have a 'fields' key with more objects inside. The nesting is at most one level deep.
How can I select the id of the items which have, for example, an object containing "paramId": 3 and "value": "cat" and also an object with "paramId": 5 and "value" LIKE '%ish%'?
I have already found a way to do that when the objects are at level 0:
SELECT i.*
FROM items i
JOIN LATERAL jsonb_array_elements(i.data) obj3(val) ON obj3.val->>'paramId' = '3'
JOIN LATERAL jsonb_array_elements(i.data) obj5(val) ON obj5.val->>'paramId' = '5'
WHERE obj3.val->>'value' = 'cat'
AND obj5.val->>'value' LIKE '%ish%';
but I don't know how to search inside the fields array if it exists.
Thank you in advance for your help.
EDIT:
It looks like my question is not clear. I will try to make it better.
What I want to do is to find all the 'items' having objects in the 'data' column that match my search criteria, regardless of whether the objects are at the first level or inside a 'fields' key of an object.
Again, for example, this record should be selected if I search for:
'paramId': 3 AND 'value': 'cat'
'paramId': 4 AND 'value' LIKE '%og%'
The matching ones are in the 'fields' key of the object with 'paramId': 6, and I don't know how to do that.
This can be expressed using a SQL/JSON path expression without the need to unnest everything.
To search for paramId = 3 and value = 'cat'
select *
from items
where data @? '$[*] ? ( (@.paramId == 3 && @.value == "cat") || exists( @.fields[*] ? (@.paramId == 3 && @.value == "cat")) )'
The $[*] part iterates over all elements of the first-level array. To check the elements in the fields array, the exists() operator is used to nest the expression. @.fields[*] iterates over all elements in the fields array and applies the same expression again. I don't see a way to avoid repeating the values, though.
For a "like" condition, you can use like_regex:
select *
from items
where data @? '$[*] ? ( (@.paramId == 4 && @.value like_regex ".*og.*") || exists( @.fields[*] ? (@.paramId == 4 && @.value like_regex ".*og.*")) )'
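To require both of the criteria from the edit at the same time, the two @? predicates can simply be combined with AND. A sketch, assuming Postgres 12 or later for jsonpath support:
-- each predicate checks one criterion at the first level or inside "fields"
select *
from items
where data @? '$[*] ? ( (@.paramId == 3 && @.value == "cat") || exists( @.fields[*] ? (@.paramId == 3 && @.value == "cat")) )'
  and data @? '$[*] ? ( (@.paramId == 4 && @.value like_regex ".*og.*") || exists( @.fields[*] ? (@.paramId == 4 && @.value like_regex ".*og.*")) )'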
For now I have found a solution, but it is not really clean and I don't know how it will perform in production with 10M records.
SELECT i.id, i.data
FROM ( -- A;
select it.id, it.data, i as value
from items it,
jsonb_array_elements(it.data) i
union
select it.id, it.data, f as value
from items it,
jsonb_array_elements(it.data) i,
jsonb_array_elements(i -> 'fields') f
) as i
WHERE (i.value ->> 'paramId' = '5' -- B1;
AND i.value ->> 'value' LIKE '%ish%')
OR (i.value ->> 'paramId' = '3' -- B2;
AND i.value ->> 'value' = 'cat')
group by i.id, i.data
having COUNT(*) >= 2; -- C;
A: I "flatten" the first and second level (second level is in 'fields' key)
B1, B2: These are my search criteria
C: I make sure the fields have all the criteria matching. If 3 criteria --> COUNT(*) >=3
It really doesn't look clean to me. It is working for dev purpose but I think there is a better way to do it.
If somebody have an idea Big thanks to him/her!

jsonb LIKE query on nested objects in an array

My JSON data looks like this:
[{
  "id": 1,
  "payload": {
    "location": "NY",
    "details": [{
        "name": "cafe",
        "cuisine": "mexican"
      },
      {
        "name": "foody",
        "cuisine": "italian"
      }
    ]
  }
}, {
  "id": 2,
  "payload": {
    "location": "NY",
    "details": [{
        "name": "mbar",
        "cuisine": "mexican"
      },
      {
        "name": "fdy",
        "cuisine": "italian"
      }
    ]
  }
}]
Given a text "foo", I want to return all the tuples that have this substring, but I cannot figure out how to write the query for it.
I followed this related answer but cannot figure out how to do LIKE.
This is what I have working right now:
SELECT r.res->>'name' AS feature_name, d.details::text
FROM restaurants r
   , LATERAL (SELECT ARRAY (
        SELECT * FROM json_populate_recordset(null::foo, r.res #> '{payload, details}')
     )
   ) AS d(details)
WHERE d.details @> '{cafe}';
Instead of passing the whole text of cafe I want to pass ca and get the results that match that text.
Your solution can be simplified some more:
SELECT r.res->>'name' AS feature_name, d.name AS detail_name
FROM restaurants r
, jsonb_populate_recordset(null::foo, r.res #> '{payload, details}') d
WHERE d.name LIKE '%oh%';
Or simpler yet, with jsonb_array_elements(), since you don't actually need the row type (foo) at all in this example:
SELECT r.res->>'name' AS feature_name, d->>'name' AS detail_name
FROM restaurants r
, jsonb_array_elements(r.res #> '{payload, details}') d
WHERE d->>'name' LIKE '%oh%';
But that's not what you asked exactly:
I want to return all the tuples that have this substring.
You are returning all JSON array elements (0-n per base table row), where one particular key ('{payload,details,*,name}') matches (case-sensitively).
And your original question had a nested JSON array on top of this. You removed the outer array for this solution - I did the same.
Depending on your actual requirements the new text search capability of Postgres 10 might be useful.
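As a rough sketch of that idea (an assumption on my part, not tested against your data): Postgres 10 lets to_tsvector() take a jsonb value directly, and a prefix query can emulate the "starts with" part of the search, though it still matches whole words only, not arbitrary substrings:
-- word/prefix search over all string values in the jsonb document
SELECT r.*
FROM restaurants r
WHERE to_tsvector('simple', r.res) @@ to_tsquery('simple', 'ca:*');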
I ended up doing this (inspired by this answer: jsonb query with nested objects in an array):
SELECT r.res->>'name' AS feature_name, d.details::text
FROM restaurants r
, LATERAL (
SELECT * FROM json_populate_recordset(null::foo, r.res#>'{payload, details}')
) AS d(details)
WHERE d.details LIKE '%oh%';
Fiddle here - http://sqlfiddle.com/#!15/f2027/5

How can I write this query with MongoDB? [duplicate]

This question already has answers here:
How do I perform the SQL Join equivalent in MongoDB?
I want to write this query with MongoDB:
select *
from tab1 a, tab2 c
where a.a_id = 2
and c.c_id = 3
and a.a_id = c.c_fk_account_id_created_by
I tried this code but didn't get a response:
$cursor = $collection->find(array('$and' => array(array("a_id" => 2), array("c_id" => 3))));
I will assume you have two collections, named tab1 and tab2, in the form of:
tab1
{
  "_id" : ObjectId("58482a97a5fa273657ace535"),
  "a_id" : NumberInt(2)
}
tab2
{
  "_id" : ObjectId("58482acca5fa273657ace539"),
  "c_id" : NumberInt(3),
  "c_fk_account_id_created_by" : NumberInt(2)
}
You will need an aggregation query with two steps: first, a $lookup to the second table, and second, a $match on the proper keys. Like this:
db.tab1.aggregate(
  [
    {
      $lookup: {
        "from" : "tab2",
        "localField" : "a_id",
        "foreignField" : "c_fk_account_id_created_by",
        "as" : "c"
      }
    },
    {
      $match: {
        "a_id": 2,
        "c.c_id": 3
      }
    }
  ]
);
This will give you an output like this
{
  "_id" : ObjectId("58482a97a5fa273657ace535"),
  "a_id" : NumberInt(2),
  "c" : [
    {
      "_id" : ObjectId("58482acca5fa273657ace539"),
      "c_id" : NumberInt(3),
      "c_fk_account_id_created_by" : NumberInt(2)
    }
  ]
}
Good luck!
I wrote an article on just this type of query:
MongoDB Aggregation Framework for T-SQL Pros #3: The $lookup Operator
https://www.linkedin.com/pulse/mongodb-aggregation-framework-t-sql-pros-3-lookup-operator-finch
Essentially you are going to bring all documents from your second table into the results of the first table using the $lookup aggregation operator. You can then use the $match and $group operators to filter and aggregate your data.
It will go something like this:
db.tab1.aggregate([
  { $match:
    { "a_id": 2 }
  },
  { $lookup:
    { from: "tab2",
      localField: "a_id",
      foreignField: "c_fk_account_id_created_by",
      as: "tab2_results"
    }
  },
  { $match:
    { "tab2_results.c_id": 3 }
  }
])
The matching joined documents will be added to the base table's document as an array. It acts like a LEFT join in that a missing match in the remote table does not drop your base table document; it is still returned, only missing the remote data.
Hope this helps!
Bill
Let's assume tab1 and tab2 have 3 fields each: a_id, aa1, aa2 and c_id, c_fk_account_id_created_by, cc1.
The query will be as follows:
db.tab1.aggregate([
  {$match: {a_id: 2}},
  {$lookup: {from: 'tab2', localField: 'a_id', foreignField: 'c_fk_account_id_created_by', as: 'ccArray'}},
  {$unwind: '$ccArray'},
  {$project: {a_id: 1, aa1: 1, aa2: 1, c_id: '$ccArray.c_id', c_fk_account_id_created_by: '$ccArray.c_fk_account_id_created_by', cc1: '$ccArray.cc1'}},
  {$match: {c_id: 3}}
])
Explanation of the above query:
As MongoDB doesn't allow matching on the second table directly in the aggregation pipeline, we have to unwind the second table's array and compare the values:
select *
from tab1 a, tab2 c
where a.a_id = 2 ==> {$match: {a_id: 2}}
and c.c_id = 3 ==> (cannot be done right away, so it is achieved as) ==> {$unwind: '$ccArray'},
{$project: {a_id: 1, aa1: 1, aa2: 1, c_id: '$ccArray.c_id', c_fk_account_id_created_by: '$ccArray.c_fk_account_id_created_by', cc1: '$ccArray.cc1'}}, {$match: {c_id: 3}}
and a.a_id = c.c_fk_account_id_created_by ==> {$lookup: {from: 'tab2', localField: 'a_id', foreignField: 'c_fk_account_id_created_by', as: 'ccArray'}}