Set a JSON object's values from an array in PostgreSQL - sql

I have an array of JSON objects in PostgreSQL inside data.json that looks like this:
[
    { "10" : [1,2,3,4,5] },
    { "8" : [5,4,3,1,2] },
    { "11" : [1,3,5,4,2] }
]
The data comes from a SELECT statement:
SELECT
    ARRAY((
        SELECT json_build_object(
            station_id,
            ARRAY((
                SELECT COALESCE((
                    SELECT SUM(prodtakt_takt)
                    FROM psproductivity.prodtakt
                    WHERE prodtakt_start::date = generate_series.shipping_day_sort::date
                    AND station_id = prodtakt_station_id
                ), 0)
                FROM generate_series
            ))
        )
        FROM psproductivity.station
        WHERE (
            SELECT COALESCE(SUM(prodtakt_takt), 0)
            FROM psproductivity.prodtakt
            WHERE station_id = prodtakt_station_id
        ) > 0
    )) AS json, ...
Where generate_series is just a series of dates.
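For reference, a hypothetical sketch of such a CTE (the column name shipping_day_sort is taken from the query above; the date range is made up):

WITH generate_series AS (
    -- hypothetical date range, one row per day
    SELECT d::date AS shipping_day_sort
    FROM generate_series('2024-01-01'::date, '2024-01-07'::date, interval '1 day') AS d
)
SELECT shipping_day_sort FROM generate_series;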
Now I need to take that and turn it into this JSON object format:
{
    "x" : "x",
    "jsondata" : {
        "10" : [1,2,3,4,5],
        "8" : [5,4,3,1,2],
        "11" : [1,3,5,4,2]
    }
}
The software I am working on uses c3.js to process data into graphs, so I have to change this format. I am thinking that I need to start with something like
json_build_object( 'jsondata',( SELECT FROM json_each(unnest(data.json)) ) )
But I can think of no route with that logic. Adding the x into the JSON object is easy; I am confident I can do that part if I can just reorganize the array.
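One possible route, sketched and untested, assuming data.json is the SQL array of json objects built by the ARRAY(...) above: unnest the array, explode each one-key object with jsonb_each(), and re-aggregate all the pairs into a single object with jsonb_object_agg():

SELECT json_build_object(
    'x', 'x',
    'jsondata', (
        -- merge [{"10":[...]},{"8":[...]}] into {"10":[...],"8":[...]}
        SELECT jsonb_object_agg(kv.key, kv.value)
        FROM unnest(data.json) AS elem
           , jsonb_each(elem::jsonb) AS kv
    )
) AS json
FROM data;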

Related

Handle partially stringified key-value JSON VARIANT in SQL

I have a variant object like the below:
{
    "build_num": "111",
    "city_name": "Paris",
    "rawData": "\"sku\":\"AAA\",\"price\":19.98,\"currency\":\"USD\",\"quantity\":1,\"size\":\"\"}",
    "country": "France"
}
So you can see that parts of it are regular key-value pairs, like build_num and city_name, but the value of rawData is a stringified version of a JSON object.
I would like to create a variant from this that will look like:
{
    "build_num": "111",
    "city_name": "Paris",
    "rawData": {
        "sku": "AAA",
        "price": 19.98,
        "currency": "USD",
        "quantity": 1
    },
    "country": "France"
}
AND I would like to do this all in SQL (Snowflake) - is that possible?
To poke the data in, I have used a CTE and escaped the sub-JSON, so it is in the DB as you describe. I also had to add the missing start-of-object token { to make rawData valid.
WITH data AS (
    SELECT parse_json(json) AS json
    FROM VALUES
        ('{"build_num": "111","city_name": "Paris","country": "France","rawData": "{\\"sku\\":\\"AAA\\",\\"price\\":19.98,\\"currency\\":\\"USD\\",\\"quantity\\":1,\\"size\\":\\"\\"}"}')
        v(json)
)
SELECT
    json,
    json:rawData AS raw_data,
    parse_json(raw_data) AS sub_json,
    OBJECT_INSERT(json, 'rawData', sub_json, true) AS all_json
FROM data;
This shows, step by step, how the data is transformed: parsed via PARSE_JSON, with the result re-injected into the original object via OBJECT_INSERT. The same thing in one step:
WITH data AS (
    SELECT parse_json(json) AS json
    FROM VALUES
        ('{"build_num": "111","city_name": "Paris","country": "France","rawData": "{\\"sku\\":\\"AAA\\",\\"price\\":19.98,\\"currency\\":\\"USD\\",\\"quantity\\":1,\\"size\\":\\"\\"}"}')
        v(json)
)
SELECT
    OBJECT_INSERT(json, 'rawData', parse_json(json:rawData), true) AS all_json
FROM data;
TRY_PARSE_JSON will strip the extra backslashes from the JSON element; below is a reference:
select
    column1 as n,
    try_parse_json(column1) as v
from values (
'{
"build_num": "111",
"city_name": "Paris",
"rawData": "{\"sku\":\"AAA\",\"price\":19.98,\"currency\":\"USD\",\"quantity\":1,\"size\":\"\"}",
"country": "France"}'
) as vals;
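For completeness, a hedged sketch that combines TRY_PARSE_JSON (which returns NULL instead of raising an error on invalid JSON) with the OBJECT_INSERT step from the first answer, reusing the escaped literal from above:

WITH data AS (
    SELECT parse_json(json) AS json
    FROM VALUES
        ('{"build_num": "111","city_name": "Paris","country": "France","rawData": "{\\"sku\\":\\"AAA\\",\\"price\\":19.98,\\"currency\\":\\"USD\\",\\"quantity\\":1,\\"size\\":\\"\\"}"}')
        v(json)
)
SELECT
    -- NULL sub-object instead of a query error if rawData is malformed
    OBJECT_INSERT(json, 'rawData', try_parse_json(json:rawData), true) AS all_json
FROM data;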

Get the last element of an array in a JSON column of my Transact-SQL table

Thanks for helping.
I have my table CONVERSATIONS structured in columns like this:
[ ID , JSON_CONTENT ]
In the column ID I have a simple id as varchar.
In the column JSON_CONTENT I have something like this:
{
    "id_conversation" : "25bc8cbffa8b4223a2ed527e30d927bf",
    "exchanges": [
        {
            "A" : "...",
            "B": "..."
        },
        {
            "A" : "...",
            "B": "..."
        },
        {
            "A" : "...",
            "Z" : "..."
        }
    ]
}
I would like to query and get the id and the last element of exchanges:
[ ID , LAST_ELT_IN_EXCHANGE_IN_JSON_CONTENT]
I wanted to do this:
select TOP 3 ID, JSON_QUERY(JSON_CONTENT, '$.exchanges[-1]')
from CONVERSATION
But of course Transact SQL is not Python.
I saw these answers, but I don't know how to apply them to my problem:
Select last value from Json array
Thanks for helping <3
If I understand you correctly, you need an additional APPLY operator and a combination of OPENJSON() and ROW_NUMBER(). The result from the OPENJSON() call is a table with columns key, value and type, and when the JSON content is an array, the key column returns the index of the element in the array:
Table:
SELECT ID, JSON_CONTENT
INTO CONVERSATION
FROM (VALUES
(1, '{"id_conversation":"25bc8cbffa8b4223a2ed527e30d927bf","exchanges":[{"A":"...","B":"..."},{"A":"...","B":"..."},{"A":"...","Z":"..."}]}')
) v (ID, JSON_CONTENT)
Statement:
SELECT c.ID, j.[value]
FROM CONVERSATION c
OUTER APPLY (
    SELECT [value], ROW_NUMBER() OVER (ORDER BY CONVERT(int, [key]) DESC) AS rn
    FROM OPENJSON(c.JSON_CONTENT, '$.exchanges')
) j
WHERE j.rn = 1
Result:
ID  value
--  -----------------------------
1   { "A" : "...", "Z" : "..." }
Notice that -1 is not a valid array index in a JSON path expression, but you can access an item in a JSON array by index (e.g. '$.exchanges[2]').
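A variant of the same idea, as a short sketch, replaces ROW_NUMBER() with TOP (1) ... ORDER BY inside the APPLY:

SELECT c.ID, j.[value]
FROM CONVERSATION c
OUTER APPLY (
    -- keep only the array element with the highest index
    SELECT TOP (1) [value]
    FROM OPENJSON(c.JSON_CONTENT, '$.exchanges')
    ORDER BY CONVERT(int, [key]) DESC
) j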

BigQuery: Convert record of repeated fields to repeated record

I have a BigQuery table represented by this JSON (Record of Repeated)
{
    "createdBy": [
        "foo",
        "foo"
    ],
    "fileName": [
        "bar1",
        "bar2"
    ]
}
that I need to convert to a Repeated Record:
[
    {
        "createdBy": "foo",
        "fileName": "bar1"
    },
    {
        "createdBy": "foo",
        "fileName": "bar2"
    }
]
To make this conversion, you take index 0 of every array to create the first object, index 1 for the second object, and so on.
I performed this kind of transformation using a UDF, but the problem is that, due to BigQuery limits, I'm unable to save a VIEW that performs it:
No support for CREATE TEMPORARY FUNCTION statements inside views
Following is the full statement to generate a sample table and the function:
CREATE TEMP FUNCTION filesObjectArrayToArrayObject(filesJson STRING)
RETURNS ARRAY<STRUCT<createdBy STRING, fileName STRING>>
LANGUAGE js AS """
function filesObjectArrayToArrayObject_execute(files) {
    var createdBy = files["createdBy"];
    var fileName = files["fileName"];
    var output = [];
    for (var i = 0; i < createdBy.length; i++) {
        output.push({ "createdBy": createdBy[i], "fileName": fileName[i] });
    }
    return output;
}
return filesObjectArrayToArrayObject_execute(JSON.parse(filesJson));
""";

WITH sample_table AS (
    SELECT STRUCT<createdBy ARRAY<STRING>, fileName ARRAY<STRING>>(
        ["foo", "foo"],
        ["bar1", "bar2"]
    ) AS files
)
SELECT
    files AS filesOriginal,
    filesObjectArrayToArrayObject(TO_JSON_STRING(files)) AS filesConverted
FROM sample_table
Is there a way to perform the same kind of task using native BigQuery statements?
Please note that:
The real data has more than 2 keys, but their names are fixed.
The length of the arrays is not fixed; it can be 0, 1, 10, 20, ...
Below is for BigQuery Standard SQL
#standardSQL
WITH sample_table AS (
    SELECT STRUCT<createdBy ARRAY<STRING>, fileName ARRAY<STRING>>(
        ["foo", "foo"],
        ["bar1", "bar2"]
    ) AS files
)
SELECT
    ARRAY(
        SELECT STRUCT(createdBy, fileName)
        FROM t.files.createdBy AS createdBy WITH OFFSET
        JOIN t.files.fileName AS fileName WITH OFFSET
        USING (OFFSET)
    ) AS files
FROM `sample_table` t
with output

Row | files.createdBy | files.fileName
----+-----------------+---------------
 1  | foo             | bar1
    | foo             | bar2
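Because this rewrite uses no temporary function, it can be saved as a view, unlike the UDF version. A sketch with hypothetical project, dataset, and source table names:

CREATE VIEW `myproject.mydataset.files_converted` AS
SELECT
    ARRAY(
        -- pair up array elements by position via WITH OFFSET
        SELECT STRUCT(createdBy, fileName)
        FROM t.files.createdBy AS createdBy WITH OFFSET
        JOIN t.files.fileName AS fileName WITH OFFSET
        USING (OFFSET)
    ) AS files
FROM `myproject.mydataset.source_table` t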

SQL Server - "for json path" statement does not return more than 2984 lines of JSON string

I'm trying to generate a huge amount of data in a complex and nested JSON string using the "for json path" statement, and I'm using multiple functions to create different parts of this JSON string, as follows:
declare @queue nvarchar(max)

select @queue = (
    select x.ID as layoutID
         , l.Title as layoutName
         , JSON_QUERY(queue_objects(@productID, x.ID)) as [objects]
    from Layouts x
    inner join LayoutLanguages l on l.LayoutID = x.ID
    where x.ID = @layoutID
    group by x.ID, l.Title
    for json path
)

select @queue as JSON
Thus far, JSON would be:
{
    "root": [{
        "layouts": [{
            "layoutID": 5
            , "layoutName": "foo"
            , "objects": []
        }]
    }]
}
and the "queue_objects" function then would be called to fill out 'objects' array:
queue_objects

select 0 as objectID
     , case when (select inherited_counter(@layoutID, 0)) > 0 then 'false' else 'true' end as editable
     , JSON_QUERY(queue_properties(p.Table2ID)) as propertyObjects
     , JSON_QUERY('[]') as inherited
from productList p
where p.Table1ID = @productID
group by p.Table2ID
for json path
And then JSON would be:
{
    "root": [{
        "layouts": [{
            "layoutID": 5
            , "layoutName": "foo"
            , "objects": [{
                "objectID": 1000
                , "editable": "true"
                , "propertyObjects": []
                , "inherited": []
            }, {
                "objectID": 2000
                , "editable": "false"
                , "propertyObjects": []
                , "inherited": []
            }]
        }]
    }]
}
Also "inherited_counter" and "queue_properties" functions would be called to fill corresponding keys.
This is just a sample, the code won't work as I'm not putting functions here.
But my question is: is it the functions calling each other that makes the server return a broken JSON string, or is it the server itself that can't handle JSON strings longer than 2984 lines?
EDIT: what I mean by 2984 lines is that I run the JSON through a beautifier. The server doesn't return the string line by line; it returns the JSON broken, and after beautifying it happens to be 2984 lines long.
As I wrote in my comment to the OP, this is probably because SSMS has a limit on how many characters it displays in a column in the results grid. It has no impact on the actual result, e.g. the result has all the data; it is just that SSMS doesn't display it all.
To fix this, you can increase the number of characters SSMS retrieves. I would not recommend that ("how long is a piece of string?"); instead, select the result into an nvarchar(max) variable and PRINT that variable. That should give you the whole text.
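A minimal sketch of that workaround, reusing the (hypothetical) tables from the question:

declare @json nvarchar(max)

select @json = (
    select x.ID as layoutID, l.Title as layoutName
    from Layouts x
    inner join LayoutLanguages l on l.LayoutID = x.ID
    for json path
)

-- output goes to the Messages tab instead of the truncating results grid
print @json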
Hope this helps!

jsonb LIKE query on nested objects in an array

My JSON data looks like this:
[{
    "id": 1,
    "payload": {
        "location": "NY",
        "details": [{
            "name": "cafe",
            "cuisine": "mexican"
        }, {
            "name": "foody",
            "cuisine": "italian"
        }]
    }
}, {
    "id": 2,
    "payload": {
        "location": "NY",
        "details": [{
            "name": "mbar",
            "cuisine": "mexican"
        }, {
            "name": "fdy",
            "cuisine": "italian"
        }]
    }
}]
Given a text "foo", I want to return all the tuples that contain this substring, but I cannot figure out how to write the query.
I followed this related answer but cannot figure out how to do LIKE.
This is what I have working right now:
SELECT r.res->>'name' AS feature_name, d.details::text
FROM restaurants r
   , LATERAL (
        SELECT ARRAY(
            SELECT * FROM json_populate_recordset(null::foo, r.res #> '{payload, details}')
        )
     ) AS d(details)
WHERE d.details #> '{cafe}';
Instead of passing the whole text cafe, I want to pass ca and get the results that match that text.
Your solution can be simplified some more:
SELECT r.res->>'name' AS feature_name, d.name AS detail_name
FROM restaurants r
, jsonb_populate_recordset(null::foo, r.res #> '{payload, details}') d
WHERE d.name LIKE '%oh%';
Or simpler yet, with jsonb_array_elements(), since you don't actually need the row type (foo) at all in this example:
SELECT r.res->>'name' AS feature_name, d->>'name' AS detail_name
FROM restaurants r
, jsonb_array_elements(r.res #> '{payload, details}') d
WHERE d->>'name' LIKE '%oh%';
db<>fiddle here
But that's not what you asked exactly:
I want to return all the tuples that have this substring.
You are returning all JSON array elements (0-n per base table row), where one particular key ('{payload,details,*,name}') matches (case-sensitively).
And your original question had a nested JSON array on top of this. You removed the outer array for this solution - I did the same.
Depending on your actual requirements the new text search capability of Postgres 10 might be useful.
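Side note: the matching above is case-sensitive. If it shouldn't be, PostgreSQL's ILIKE is a drop-in replacement for LIKE in the last query; a sketch:

SELECT r.res->>'name' AS feature_name, d->>'name' AS detail_name
FROM restaurants r
   , jsonb_array_elements(r.res #> '{payload, details}') d
WHERE d->>'name' ILIKE '%OH%';  -- case-insensitive match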
I ended up doing this (inspired by this answer: jsonb query with nested objects in an array):
SELECT r.res->>'name' AS feature_name, d.details::text
FROM restaurants r
, LATERAL (
SELECT * FROM json_populate_recordset(null::foo, r.res#>'{payload, details}')
) AS d(details)
WHERE d.details LIKE '%oh%';
Fiddle here - http://sqlfiddle.com/#!15/f2027/5