I need to create a single item from multiple, recursively nested pages:
Page A / Page B relationship: 1 to N
Page B / Page C relationship: 1 to N
And the resulting item is:
{
A_1,
A_2,
A_3,
[
{
B_1,
B_2,
[
{C_1, C_2},
{C_1, C_2}
]
},
{
B_1,
B_2,
[
{C_1, C_2},
{C_1, C_2}
]
}
]
}
I tried to use inline_requests, but it only works on the first level.
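One pattern that addresses this (a rough sketch, not tied to Scrapy's API: the hypothetical fetch_b_pages/fetch_c_pages helpers stand in for the actual page requests, which in a real spider would be chained callbacks or inline requests at every level, not just the first) is to accumulate all children before yielding the parent item:

```python
# Sketch of the accumulation pattern: build the item for page A,
# then attach one dict per page B, each with a list of page C dicts.
# fetch_b_pages / fetch_c_pages are hypothetical stand-ins for the
# actual requests.

def fetch_b_pages(a_url):
    # placeholder: would issue a request and parse the page B links
    return ["b1", "b2"]

def fetch_c_pages(b_url):
    # placeholder: would issue a request and parse the page C links
    return ["c1", "c2"]

def build_item(a_url):
    item = {"A_1": a_url, "children": []}
    for b_url in fetch_b_pages(a_url):
        b = {"B_1": b_url, "children": []}
        for c_url in fetch_c_pages(b_url):
            b["children"].append({"C_1": c_url})
        item["children"].append(b)
    return item  # emit only after the whole tree is assembled

print(build_item("a"))
```

The key point is that the parent item is only returned (or, in a spider, yielded) once every nested request has contributed its part.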
I have the following SQL data that I am trying to output as a structured JSON string as follows:
Table data
TableId | ContainerId | MaterialId | SizeId
848     | 1           | 1          | 1
849     | 1           | 1          | 2
850     | 1           | 2          | 1
851     | 1           | 2          | 2
852     | 1           | 3          | 1
853     | 1           | 4          | 1
854     | 2           | 2          | 1
855     | 2           | 2          | 2
856     | 2           | 2          | 3
JSON output
{
"container": [
{
"id": 1,
"material": [
{
"id": 1,
"size": [
{
"id": 1
},
{
"id": 2
}
]
},
{
"id": 2,
"size": [
{
"id": 1
},
{
"id": 2
}
]
},
{
"id": 3,
"size": [
{
"id": 1
}
]
},
{
"id": 4,
"size": [
{
"id": 1
}
]
}
]
},
{
"id": 2,
"material": [
{
"id": 2,
"size": [
{
"id": 1
},
{
"id": 2
},
{
"id": 3
}
]
}
]
}
]
}
I have tried several ways of outputting it, but I am struggling to prevent duplicated Container and Material Id records. Can anyone demonstrate the best working practices for extracting JSON from a table such as this, please?
Well, it isn't pretty but this appears to work:
WITH
container As (SELECT distinct containerid As id FROM jsonArray1 As container)
, material As (SELECT distinct materialid As id, containerid As cid FROM jsonArray1 As material)
, size As (SELECT sizeid As id, materialid As tid, containerid As cid FROM jsonArray1 As size)
SELECT container.id id, material.id id, size.id id
FROM container
JOIN material ON material.cid = container.id
JOIN size ON size.tid = material.id AND size.cid = material.cid
FOR JSON AUTO, ROOT
sqlfiddle example
AUTO will structure the JSON for you, but only by following the structure of the data tables used in the query. Since the data starts out "flat" in a single table, AUTO won't create any structure on its own. So the trick applied here is to use WITH CTEs to restructure the flat data into three virtual tables whose relationships have the necessary hierarchy.
Everything here is sensitive in a way that normal relational SQL would not be. For instance, just changing the order of the JOINs will restructure the JSON hierarchy, even though that would have no effect on an ordinary SQL query.
I also had to experiment with the table and column aliases (a lot) to get the right names on everything in the JSON.
You can use the following query
SELECT CONCAT('{"container": [',string_agg(json,','),']}') as json
FROM
(SELECT CONCAT('{"id":',CAST(ContainerId as nvarchar(100)),
',"material":[{',string_agg(json,'},{'),'}]}') as json,
dense_rank() over(partition by ContainerId order by ContainerId) rnk
FROM
(SELECT ContainerId ,MaterialId,CONCAT('"id":',CAST(MaterialId as nvarchar(100))
,',"size":[',string_agg('{"id":' + CAST(SizeId as nvarchar(100)) + '}',','),']') as json
FROM tb
GROUP BY ContainerId,MaterialId) T
GROUP BY ContainerId) T
GROUP BY rnk
demo in db<>fiddle
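For comparison, here is the same restructuring sketched client-side in plain Python (not part of either answer), assuming the flat rows arrive already sorted by container and then material:

```python
import json
from itertools import groupby
from operator import itemgetter

# `rows` mirrors the table above as (ContainerId, MaterialId, SizeId)
# tuples, sorted by container then material so groupby can nest them.
rows = [
    (1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (1, 3, 1), (1, 4, 1),
    (2, 2, 1), (2, 2, 2), (2, 2, 3),
]

containers = []
for cid, c_rows in groupby(rows, key=itemgetter(0)):
    materials = []
    for mid, m_rows in groupby(c_rows, key=itemgetter(1)):
        # one size entry per remaining column value
        materials.append({"id": mid, "size": [{"id": r[2]} for r in m_rows]})
    containers.append({"id": cid, "material": materials})

print(json.dumps({"container": containers}, indent=1))
```

Grouping on the leading columns before emitting each level is what prevents the duplicated Container and Material records, which is exactly what the SQL answers above achieve with CTEs or nested GROUP BYs.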
I have a jsonb column called data in a table called reports. Here is what report.id = 1 looks like
[
{
"Product": [
{
"productIDs": [
"ABC1",
"ABC2"
],
"groupID": "Food123"
},
{
"productIDs": [
"EFG1"
],
"groupID": "Electronic123"
}
],
"Package": [
{
"groupID": "Electronic123"
}
],
"type": "Produce"
},
{
"Product": [
{
"productIDs": [
"ABC1",
"ABC2"
],
"groupID": "Clothes123"
}
],
"Package": [
{
"groupID": "Food123"
}
],
"type": "Wearables"
}
]
and here is what report.id = 2 looks like:
[
{
"Product": [
{
"productIDs": [
"XYZ1",
"XYZ2"
],
"groupID": "Food123"
}
],
"Package": [],
"type": "Wearable"
},
{
"Product": [
{
"productIDs": [
"ABC1",
"ABC2"
],
"groupID": "Clothes123"
}
],
"Package": [
{
"groupID": "Food123"
}
],
"type": "Wearables"
}
]
I am trying to get a list of all entries in the reports table where at least one element of the data column satisfies both of the following:
type = Produce AND
at least one element of the Product array has a groupID that starts with Food
So, from the example above, this query should return only the first report, since:
the type = Produce
groupID starts with Food for the first element of the Product array
The second report is filtered out because its type is not Produce.
I am not sure how to write the AND condition on groupID. Here is what I have tried, which returns all entries of type Produce:
select * from reports r, jsonb_to_recordset(r.data) as items(type text) where items.type like 'Produce';
Sample structure and result: dbfiddle
select r.*
from reports r
cross join jsonb_array_elements(r.data) l1
cross join jsonb_array_elements(l1.value -> 'Product') l2
where l1 ->> 'type' = 'Produce'
and l2.value ->> 'groupID' ~ '^Food';
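The predicate can be sketched in plain Python to make the AND/any semantics explicit (report1/report2 below are trimmed-down versions of the sample rows above, not the full documents):

```python
# A report row matches if ANY element has type == 'Produce' AND at
# least one of that same element's Product entries has a groupID
# starting with 'Food' -- mirroring the two jsonb_array_elements joins.
def matches(data):
    return any(
        el.get("type") == "Produce"
        and any(p.get("groupID", "").startswith("Food")
                for p in el.get("Product", []))
        for el in data
    )

report1 = [
    {"Product": [{"productIDs": ["ABC1"], "groupID": "Food123"}],
     "Package": [{"groupID": "Electronic123"}], "type": "Produce"},
]
report2 = [
    {"Product": [{"productIDs": ["XYZ1"], "groupID": "Food123"}],
     "Package": [], "type": "Wearable"},
]
print(matches(report1), matches(report2))  # True False
```

Note that both conditions are evaluated against the same array element, just as the SQL version applies both WHERE clauses to the same unnested l1 row.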
I have a table that looks like the following in Snowflake:
ID | CODES
2 | [ { "list": [ { "item": "CODE1" }, { "item": "CODE2" } ] } ]
And I want to make it into:
ID | CODES
2 | 'CODE1'
2 | 'CODE2'
So far I've tried
SELECT ID,CODES[0]:list
FROM MY_TABLE
But that only gets me as far as:
ID | CODES
2 | [ { "item": "CODE1" }, { "item": "CODE2" } ]
How can I break out every 'item' element from every index of this list into its own row with each CODE as a string?
Update: here is the answer I got working at the same time as the answer below; it looks like we both used FLATTEN:
SELECT ID,f.value:item
FROM MY_TABLE,
lateral flatten(input => MY_TABLE.CODES[0]:list) f
As you note, you have hard-coded your access into CODES via codes[0], which gives you only the first item of that array. If you use FLATTEN, you can access all of the objects of the array.
WITH my_table(id,codes) AS (
SELECT 2, parse_json('[ { "list": [ { "item": "CODE1" }, { "item": "CODE2" } ] } ]')
)
SELECT ID, c.*
FROM my_table,
table(flatten(codes)) c;
gives:
2 1 [0] 0 { "list": [ { "item": "CODE1" }, { "item": "CODE2" }]} [ { "list": [{"item": "CODE1"}, { "item": "CODE2" }]}]
Now you want to loop across the items in list, so we use another FLATTEN on that:
WITH my_table(id,codes) AS (
SELECT 2, parse_json('[ { "list": [ { "item": "CODE1" }, { "item": "CODE2" } ] } ]')
)
SELECT ID, c.value, l.value
FROM my_table,
table(flatten(codes)) c,
table(flatten(c.value:list)) l;
gives:
2 {"list":[{"item": "CODE1"},{"item":"CODE2"}]} {"item":"CODE1"}
2 {"list":[{"item": "CODE1"},{"item":"CODE2"}]} {"item":"CODE2"}
From there you can pull l.value apart to access whichever parts you need (e.g. l.value:item).
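The double unnesting can be sketched in plain Python (not Snowflake syntax), with one loop playing the role of each FLATTEN:

```python
import json

# The outer loop stands in for the first FLATTEN (over CODES); the
# inner loop stands in for the second (over each element's "list").
codes = json.loads(
    '[ { "list": [ { "item": "CODE1" }, { "item": "CODE2" } ] } ]'
)

rows = [(2, entry["item"])           # (ID, CODE) output rows
        for obj in codes             # first FLATTEN: objects in CODES
        for entry in obj["list"]]    # second FLATTEN: items in "list"

for rid, code in rows:
    print(rid, code)
```

Each FLATTEN multiplies the row set by one level of array nesting, which is why two of them are needed to reach the individual item strings.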
I have some JSON data stored in a column. I want to parse the JSON and extract all the values for a particular key.
Here's my sample data:
{
"fragments": [
{
"fragments": [
{
"fragments": [
{
"fragments": [],
"fragmentName": "D"
},
{
"fragments": [],
"fragmentName": "E"
},
{
"fragments": [],
"fragmentName": "F"
}
],
"fragmentName": "C"
}
],
"fragmentName": "B"
}
],
"fragmentName": "A"
}
Expected output:
D, E, F, C, B, A
I want to extract all fragmentName values from the above JSON.
I have gone through the threads below, but haven't found anything useful:
Collect Recursive JSON Keys In Postgres
Postgres recursive query with row_to_json
Edit:
Here's one approach I tried, based on the threads above:
WITH RECURSIVE key_and_value_recursive(key, value) AS (
SELECT
t.key,
t.value
FROM temp_frg_mapping, json_each(temp_frg_mapping.info::json) AS t
WHERE id=2
UNION ALL
SELECT
t.key,
t.value
FROM key_and_value_recursive,
json_each(CASE
WHEN json_typeof(key_and_value_recursive.value) <> 'object' THEN '{}' :: JSON
ELSE key_and_value_recursive.value
END) AS t
)
SELECT *
FROM key_and_value_recursive;
Output:
I am only getting the top level of nesting; the query does not recurse into the nested objects.
I would use a recursive query, but with jsonb_array_elements():
with recursive cte as (
select id, info ->> 'fragmentName' as val, info -> 'fragments' as info, 1 lvl
from mytable
where id = 2
union all
select c.id, x.info ->> 'fragmentName', x.info -> 'fragments', c.lvl + 1
from cte c
cross join lateral jsonb_array_elements(c.info) as x(info)
where c.info is not null
)
select id, val, lvl
from cte
where val is not null
The query walks the object one level at a time; at each step, we unnest the json array and check whether a fragment name is available. We don't need to check the types of the returned values: we just apply the standard functions until the data is exhausted.
Demo on DB Fiddle
Sample data: the same JSON document as shown in the question above.
Results:
id | val | lvl
-: | :-- | --:
2 | A | 1
2 | B | 2
2 | C | 3
2 | D | 4
2 | E | 4
2 | F | 4
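The same traversal can be sketched in plain Python; note that this version appends children before their parent, which produces the exact D, E, F, C, B, A order the question asks for (the recursive CTE above returns the names level by level instead, as the lvl column shows):

```python
import json

# Depth-first walk that collects every "fragmentName", emitting each
# node's children before the node itself.
def fragment_names(node):
    names = []
    for child in node.get("fragments", []):
        names.extend(fragment_names(child))
    names.append(node["fragmentName"])
    return names

doc = json.loads("""
{"fragments": [{"fragments": [{"fragments": [
    {"fragments": [], "fragmentName": "D"},
    {"fragments": [], "fragmentName": "E"},
    {"fragments": [], "fragmentName": "F"}],
  "fragmentName": "C"}],
 "fragmentName": "B"}],
 "fragmentName": "A"}
""")
print(", ".join(fragment_names(doc)))  # D, E, F, C, B, A
```

If the level-by-level ordering from the SQL answer is acceptable, the output only needs re-sorting by descending lvl to match.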
OK, I have the Distance Matrix API working just fine for cities in the same country, whether in the US or France. But when I request Paris, France to Austin, TX, USA, I get zero results. I have tried austin,tx on its own and with USA appended; nothing. If I stay within one country there is no problem. Any ideas?
{ destination_addresses: [ 'Austin, TX, USA' ],
origin_addresses: [ 'Paris, France' ],
rows: [ { elements: [Array] } ],
status: 'OK' }
[ { elements: [ [Object] ] } ]
[ { status: 'ZERO_RESULTS' } ]