Querying Deep jsonb Information - PostgreSQL

I have the following JSON array stored on a row:
{
  "openings": [
    {
      "visibleFormData": {
        "productName": "test"
      }
    }
  ]
}
I'm trying to get the value of productName. So far I've tried something like this:
SELECT tbl.column->'openings'->'0'->'visibleFormData'->>'productName'
The theory is that this would grab the first object (index 0) in the openings array and then grab the productName attribute from that object's visibleFormData object.
All I'm getting is null, though. I've tried multiple variations of this. I suspect it has to do with grabbing index zero, but I'm unsure. I'm not a regular PostgreSQL user, so it's proving a tad tricky to debug.

The JSON array index is an integer, so use 0 instead of '0':
with tbl(col) as (
  values (
    '{
      "openings": [
        {
          "visibleFormData": {
            "productName": "test"
          }
        }
      ]
    }'::jsonb
  )
)
SELECT tbl.col->'openings'->0->'visibleFormData'->>'productName'
FROM tbl;
?column?
----------
test
(1 row)
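As an aside, the same extraction can be written with PostgreSQL's #>> operator, which takes the whole path as a text array and returns text. A minimal sketch against the same tbl CTE as above:

SELECT tbl.col #>> '{openings,0,visibleFormData,productName}'
FROM tbl;

Each path element is matched as an object key or, when the current value is an array, as an integer index, so the 0 needs no special quoting here.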

Related

JSON Parse in Snowflake SQL

I'm trying to convert the original json to the desired results below using SQL in Snowflake. How can I accomplish this?
I've tried parse_json(newFutureAllocations[0]:fundId) but this only brings back the first fundId element.
ORIGINAL
"newFutureAllocations": [
  {
    "fundId": 1,
    "percentAllocation": 2500
  },
  {
    "fundId": 5,
    "percentAllocation": 7500
  }
]
DESIRED
"newFutureAllocations": {
  "1": 2500,
  "5": 7500
}
You need to use flatten to turn your array elements into rows, then use object_agg() to aggregate them back up again, as an object rather than an array. The exact syntax depends on the rest of your query, data, etc., and you haven't provided enough details about that.
The challenge here is to reconstruct the object; I used string concatenation:
with data as (
  select parse_json(
    '[ { "fundId": 1, "percentAllocation": 2500 }
     , { "fundId": 5, "percentAllocation": 7500 } ]') j
)
select parse_json('{' ||
  listagg('"' || x.value:fundId || '"' || ':' || x.value:percentAllocation, ',')
  || '}')
from data, table(flatten(j)) x
group by seq
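The string concatenation works, but the object_agg() approach mentioned above avoids building JSON text by hand. A minimal sketch against the same data CTE (fundId is cast to a string because object_agg() expects a VARCHAR key):

with data as (
  select parse_json(
    '[ { "fundId": 1, "percentAllocation": 2500 }
     , { "fundId": 5, "percentAllocation": 7500 } ]') j
)
select object_agg(x.value:fundId::string, x.value:percentAllocation)
from data, table(flatten(j)) x
group by seq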

Oracle SQL JSON_QUERY ignore key field

I have a JSON document in which several keys are numbers instead of fixed strings. Is there any way I can bypass them in order to access the nested values?
{
  "55568509": {
    "registers": {
      "001": {
        "isPlausible": false,
        "deviceNumber": "55501223",
        "register": "001",
        "readingValue": "5295",
        "readingDate": "2021-02-25T00:00:00.000Z"
      }
    }
  }
}
My expected output here would be 5295, but since 55568509 can vary from JSON to JSON, JSON_QUERY(data, '$."55568509".registers."001".readingValue') would not be an option. I'm not able to use a regexp here because this is only part of the original JSON, which contains more than this.
UPDATE: full JSON with multiple occurrences:
This is what my whole JSON looks like. I would like all the readingValue entries in brackets; in the example below, my expected output would be [32641, 00964].
WITH test_table ( data ) AS (
  SELECT
    '{
      "session": {
        "sessionStartDate": "2021-02-26T12:03:34+0000",
        "interactionDate": "2021-02-26T12:04:19+0000",
        "sapGuid": "369F01DFXXXXXXXXXX8553F40CE282B3",
        "agentId": "USER001",
        "channel": "XXX",
        "bpNumber": "5551231234",
        "contractAccountNumber": "55512312345",
        "contactDirection": "",
        "contactMethod": "Z08",
        "interactionId": "5550848784",
        "isResponsibleForPayingBill": "Yes"
      },
      "payload": {
        "agentId": "USER001",
        "contractAccountNumber": "55512312345",
        "error": {
          "55549271": {
            "registers": {
              "001": {
                "isPlausible": false,
                "deviceNumber": "55501223",
                "register": "001",
                "readingValue": "32641",
                "readingDate": "2021-02-26T00:00:00.000Z"
              }
            },
            "errors": [
              {
                "contractNumber": "55501231",
                "language": "EN",
                "errorCode": "62",
                "errorText": "Error Text1",
                "isHardError": false
              },
              {
                "contractNumber": "55501232",
                "language": "EN",
                "errorCode": "62",
                "errorText": "Error Text2",
                "isHardError": false
              }
            ],
            "bpNumber": "5557273667"
          },
          "55583693": {
            "registers": {
              "001": {
                "isPlausible": false,
                "deviceNumber": "555121212",
                "register": "001",
                "readingValue": "00964",
                "readingDate": "2021-02-26T00:00:00.000Z"
              }
            },
            "errors": [],
            "bpNumber": "555123123"
          }
        }
      }
    }'
  FROM dual
)
SELECT
  JSON_QUERY(data, '$.payload.error.*.registers.*[*].readingValue') AS reading_value
FROM
  test_table;
UPDATE 2:
Solved: the following did the trick (upvoting the first comment).
JSON_QUERY(data, '$.payload.error.*.registers.*.readingValue' WITH WRAPPER) AS read_value
As I explained in the comment to your question, if you are getting that result from the JSON you posted, you are not using JSON_QUERY(); you must be using JSON_VALUE(). Either that, or there's something else you didn't share with us.
In any case, let's say you are using JSON_VALUE() with the arguments you showed. You are asking, how can you modify the path so that the top-level attribute name is not hard-coded. That is trivial: use asterisk (*) instead of the hard-coded name. (This would work the same with JSON_QUERY() - it's about JSON paths, not the specific function that uses them.)
with test_table (data) as (
  select
    '{
      "59668509": {
        "registers": {
          "001": {
            "isPlausible": false,
            "deviceNumber": "40157471",
            "register": "001",
            "readingValue": "5295",
            "readingDate": "2021-02-25T00:00:00.000Z"
          }
        }
      }
    }' from dual
)
select json_value (data, '$.*."registers"."001"."readingValue"'
                   returning number) as reading_value
from test_table
;
READING_VALUE
-------------
5295
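For the multi-occurrence case from your update, the WITH WRAPPER clause from your UPDATE 2 is indeed the right tool: it collects all matches into a JSON array. A sketch against the full document from the question:

select json_query(data, '$.payload.error.*.registers.*.readingValue'
                  with wrapper) as reading_values
from test_table;

Against that sample this should return ["32641","00964"].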
As an aside that is not related to your question in any way: In your JSON you have an object with a single attribute named "registers", whose value is another object with a single attribute "001", and in turn, this object has an attribute named "register" with value "001". Does that make sense to you? It doesn't to me.

Cannot update document by index in FaunaDB

I'm attempting to update a document using an index in my FaunaDB collection using FQL.
Update(
  Match(
    Index('users_by_id'),
    'user-1'
  ),
  {
    data: {
      name: 'John'
    }
  }
)
This query gives me the following error:
Error: [
  {
    "position": [
      "update"
    ],
    "code": "invalid argument",
    "description": "Ref expected, Set provided."
  }
]
How can I update the document using the index users_by_id?
Match returns a set reference, not a document reference, because there could be zero or more matching documents.
If you are certain that there is a single document that matches, you can use Get. When you call Get with a set reference (instead of a document reference), the first item of the set is retrieved. Since Update requires a document reference, you can then use Select to retrieve the fetched document's reference.
For example:
Update(
  Select(
    "ref",
    Get(Match(Index('users_by_id'), 'user-1'))
  ),
  {
    data: {
      name: 'John'
    }
  }
)
If you have more than one match, you should use Paginate to "realize" the set into an array of matching documents, and then Map over the array to perform a bulk update:
Map(
  Paginate(
    Match(Index('users_by_id'), 'user-1')
  ),
  Lambda(
    "ref",
    Update(
      Var("ref"),
      {
        data: {
          name: "John"
        }
      }
    )
  )
)
Note: For this to work, your index has to have an empty values definition, or it must explicitly define the ref field as the one and only value. If your index returns multiple fields, the Lambda function has to be updated to accept the same number of parameters as are defined in your index's values definition.
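For reference, an index that satisfies this might be created as follows. This is a hedged sketch, assuming the documents live in a users collection and store the matched ID at data.id (both names are illustrative, not taken from the question):

CreateIndex({
  name: 'users_by_id',
  source: Collection('users'),
  terms: [{ field: ['data', 'id'] }]
  // no `values` definition, so Match() results are document refs
})

Because there is no values definition, Paginate(Match(...)) yields plain references, which is exactly what the Lambda above expects.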

BigQuery: Get field names of a STRUCT

I have some data in a STRUCT in BigQuery. Below I have visualised an example of the data as JSON:
{
  ...
  siblings: {
    david: { a: 1 },
    sarah: { b: 1, c: 1 }
  }
  ...
}
I want to produce a field from a query that resembles ["david", "sarah"]. Essentially I just want to get the keys from the STRUCT (object). Note that every user will have different key names in the siblings STRUCT.
Is this possible in BigQuery?
Your struct's schema must be consistent throughout the table: the keys can't change from row to row, because they're part of the table schema. So to get the keys, you simply look at the table schema.
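If you'd rather read those field names with a query than from the console, BigQuery exposes the schema through INFORMATION_SCHEMA.COLUMN_FIELD_PATHS. A minimal sketch, assuming a dataset mydataset and a table my_table (placeholder names):

SELECT field_path
FROM mydataset.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
WHERE table_name = 'my_table'
  AND column_name = 'siblings';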
If values change, they're probably values in an array - I guess you might have something like this:
WITH t AS (
  SELECT 1 AS id, [STRUCT('david' AS name, 33 AS age), ('sarah', 42)] AS siblings
  UNION ALL
  SELECT 2, [('ken', 19), ('ryu', 21), ('chun li', 23)]
)
SELECT * FROM t
SELECT * FROM t
If you tried to introduce new keys in the second row or within the array, you'd get an error Array elements of types {...} do not have a common supertype at ....
The first element of the above example in JSON representation looks like this:
{
  "id": "1",
  "siblings": [
    {
      "name": "david",
      "age": "33"
    },
    {
      "name": "sarah",
      "age": "42"
    }
  ]
}
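And if the goal is the ["david", "sarah"] output from the question, the array-of-structs shape supports it directly with a correlated subquery. A sketch against the t CTE above:

SELECT
  id,
  ARRAY(SELECT name FROM UNNEST(siblings)) AS sibling_names
FROM t;

For id 1 this yields ["david", "sarah"].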

MongoDB like statement with multiple fields

With SQL we can do the following:
select * from x where concat(x.y, " ", x.z) like "%find m%"
which matches when x.y = "find" and x.z = "me".
How do I do the same thing with MongoDB, when I use a JSON structure similar to this:
{
  data: [
    {
      id: 1,
      value: "find"
    },
    {
      id: 2,
      value: "me"
    }
  ]
}
The comparison to SQL here is not valid, since no relational database has the same concept of embedded arrays that MongoDB has, as shown in your example. You can only "concat" between "fields in a row" of a table, which is basically not the same thing.
You can do this with the JavaScript evaluation of $where, which is not optimal, but it's a start. And you can add some extra "smarts" to the match as well with caution:
db.collection.find({
  "$or": [
    { "data.value": /^f/ },
    { "data.value": /^m/ }
  ],
  "$where": function() {
    // concatenate each array element's value into one string
    var items = [];
    this.data.forEach(function(item) {
      items.push(item.value);
    });
    var myString = items.join(" ");
    // then run the "like" comparison against the result
    return myString.match(/find m/) != null;
  }
})
So there you go. We optimized this a bit by taking the first character of each word in your "test string" and comparing those prefixes to the elements of the array in the document.
The next part "concatenates" the array elements into a string and then does a "regex" comparison (same as "like") on the concatenated result to see if it matches. Where it does, the document is considered a match and returned.
Not optimal, but these are the options available to MongoDB on a structure like this. Perhaps the structure should be different. But you don't specify why you want this, so we can't advise a better solution for what you want to achieve.
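That said, on newer servers the same concatenate-then-match idea can be expressed without JavaScript using the aggregation framework. A hedged sketch, assuming MongoDB 3.4+ for $reduce and $addFields, with the collection and field names from the question:

db.collection.aggregate([
  // pre-filter on the word prefixes, as in the find() version above
  { "$match": { "$or": [ { "data.value": /^f/ }, { "data.value": /^m/ } ] } },
  // join the array's value fields with a single space
  { "$addFields": {
    "joined": {
      "$reduce": {
        "input": "$data.value",
        "initialValue": "",
        "in": {
          "$cond": [
            { "$eq": [ "$$value", "" ] },
            "$$this",
            { "$concat": [ "$$value", " ", "$$this" ] }
          ]
        }
      }
    }
  } },
  // then the "like" test on the concatenated string
  { "$match": { "joined": /find m/ } },
  { "$project": { "joined": 0 } }
])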