I want to combine JSON rows in T-SQL into a single JSON row

I have a table:

id | json
---+----------------
 1 | {"url":"url1"}
 2 | {"url":"url2"}
I want to combine these into a single statement where the output is:

{
    "graphs": [
        {
            "id": "1",
            "json": [
                { "url": "url1" }
            ]
        },
        {
            "id": "2",
            "json": [
                { "url": "url2" }
            ]
        }
    ]
}
I am using T-SQL. I've noticed there is some support for this in Postgres, but I can't find much on T-SQL.
Any help would be greatly appreciated.

You need to use JSON_QUERY on the json column to ensure it is not escaped, and to alias the expression so FOR JSON PATH has a name for it:

SELECT
    id,
    JSON_QUERY('[' + json + ']') AS json
FROM YourTable t
FOR JSON PATH, ROOT('graphs');
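For reference, with the two sample rows this returns a single row containing roughly the following (whether id is rendered quoted or numeric depends on the column's type):

{"graphs":[{"id":1,"json":[{"url":"url1"}]},{"id":2,"json":[{"url":"url2"}]}]}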

Related

There's a count difference between Druid Native Query and Druid SQL

I have a problem with a Druid query. I wanted to get the data count with hourly granularity, so I used Druid SQL like this:

SELECT TIME_FLOOR(__time, 'PT1H') AS t, count(1) AS cnt FROM mydatasource GROUP BY 1

Then I got a response like this:
[
    {
        "t": "2022-08-31T09:00:00.000Z",
        "cnt": 12427
    },
    {
        "t": "2022-08-31T10:00:00.000Z",
        "cnt": 16693
    },
    {
        "t": "2022-08-31T11:00:00.000Z",
        "cnt": 16694
    },
    ...
But when using a native query like this,
{
    "queryType": "timeseries",
    "dataSource": "mydatasource",
    "intervals": "2022-08-31T07:01Z/2022-09-01T07:01Z",
    "granularity": {
        "type": "period",
        "period": "PT1H",
        "timeZone": "Etc/UTC"
    },
    "aggregations": [
        {
            "name": "count",
            "type": "longSum",
            "fieldName": "count"
        }
    ],
    "context": {
        "skipEmptyBuckets": "true"
    }
}
I get a different result:
[
    {
        "timestamp": "2022-08-31T09:00:00.000Z",
        "result": {
            "count": 1288965
        }
    },
    {
        "timestamp": "2022-08-31T10:00:00.000Z",
        "result": {
            "count": 1431215
        }
    },
    {
        "timestamp": "2022-08-31T11:00:00.000Z",
        "result": {
            "count": 1545258
        }
    },
    ...
I want to get the result of the native query. What's the problem in my Druid SQL query? How do I write a SQL query that returns the native query's results?
I found the difference: when using a longSum-type aggregation, I get the same result as the native query. So I want to know how to express the aggregation below in SQL:
"aggregations": [
{
"type": "longSum",
"name": "count",
"fieldName": "count"
}
]
I found the solution. Query like this:

SELECT TIME_FLOOR(__time, 'PT1H') AS t, sum("count") AS cnt FROM mydatasource GROUP BY 1
Given that your datasource has a "count" column, I'm assuming it comes from an ingestion that uses rollup. This means that the original raw rows have been aggregated, and the "count" column contains the count of raw rows that were summarized into each aggregate row.
The native query is using the longSum function over the "count" column.
The original SQL you used is just counting the aggregate rows.
So yes, the correct way to get the count of raw rows is SUM("count").
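To see both numbers side by side, you can compute the aggregate-row count and the raw-row count in one query (a sketch against the same assumed datasource):

-- COUNT(*) counts the rollup (aggregate) rows, which is what the
-- original query returned; SUM("count") adds up the raw-row counts
-- stored in each aggregate row and matches the native longSum result.
SELECT
    TIME_FLOOR(__time, 'PT1H') AS t,
    COUNT(*)     AS aggregate_rows,
    SUM("count") AS raw_rows
FROM mydatasource
GROUP BY 1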

How to update a value in nested JSON in Postgres

I have the following JSON stored in the "Info" column:
{
    "customConfig": {
        "isCustomGoods": 1
    },
    "new_addfields": {
        "data": [
            {
                "val": {
                    "items": [
                        {
                            "Code": "calorie",
                            "Value": "365.76"
                        },
                        {
                            "Code": "protein",
                            "Value": "29.02"
                        },
                        {
                            "Code": "fat",
                            "Value": "23.55"
                        },
                        {
                            "Code": "carbohydrate",
                            "Value": "6.02"
                        },
                        {
                            "Code": "spirit",
                            "Value": "1.95"
                        }
                    ],
                    "storageConditions": "",
                    "outQuantity": "100"
                },
                "parameterType": "Nutrition",
                "name": "00000000-0000-0000-0000-000000000001",
                "label": "1"
            },
            {
                "name": "b4589168-5235-4ec5-bcc7-07d4431d14d6_Для ресторанов",
                "val": "true"
            }
        ]
    }
}
I want to update the value of the nested object

{
    "name": "b4589168-5235-4ec5-bcc7-07d4431d14d6_Для ресторанов",
    "val": "true"
}

and set "val" to the string "Yes", so the result should be:

{
    "name": "b4589168-5235-4ec5-bcc7-07d4431d14d6_Для ресторанов",
    "val": "Yes"
}

How can I do that, assuming that I need to update this value in the JSON for many records in the database?
Considering you have a constant JSON structure and a primary key in your table, the idea is to get the exact path of the element val having value true (which can be at any index in the array), then replace it with the desired value. So you can write your query like below:

with cte as (
    select
        id,
        ('{new_addfields,data,' || index - 1 || ',val}')::text[] as json_path
    from
        test,
        jsonb_array_elements(info -> 'new_addfields' -> 'data')
            with ordinality arr(vals, index)
    where
        arr.vals ->> 'val' ilike 'true'
)
update test
set info = jsonb_set(info, cte.json_path, '"Yes"', false)
from cte
where test.id = cte.id;
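To sanity-check the computed paths before running the update, you can run the CTE body on its own (same assumed test table and id column):

select
    id,
    ('{new_addfields,data,' || index - 1 || ',val}')::text[] as json_path
from
    test,
    jsonb_array_elements(info -> 'new_addfields' -> 'data')
        with ordinality arr(vals, index)
where
    arr.vals ->> 'val' ilike 'true';
-- for the sample document this yields the path {new_addfields,data,1,val}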
We can use jsonb_set(), which is available from Postgres 9.5+.
From the docs:
jsonb_set(target jsonb, path text[], new_value jsonb [, create_missing boolean])
Query to update the nested object:

UPDATE temp t
SET info = jsonb_set(t.info, '{new_addfields,data,1,val}', jsonb '"Yes"')
WHERE id = 1;

It can also be used in a select query:

SELECT jsonb_set(t.info, '{new_addfields,data,1,val}', jsonb '"Yes"')
FROM temp t
LIMIT 1;

Note that this hardcodes the element's array index (1); if the position can vary across rows, compute the path as in the previous answer.

How to select field values from an array of objects?

I have a JSON column with the following JSON:
{
    "metadata": { "value": "JABC" },
    "force": false,
    "users": [
        { "id": "111", "comment": "abc" },
        { "id": "222", "comment": "abc" },
        { "id": "333" }
    ]
}
I am expecting a list of IDs in the query output: ["111","222","333"]. I tried the following query but I get a null value.

select colName->'users'->>'id' ids from tableName

How do I get this specific field's values from the array of objects?
You need to extract the array as rows and then get the id:
select json_array_elements(colName->'users')->>'id' ids from tableName;
If you're using jsonb rather than json, the function is jsonb_array_elements.
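If you want the ids back as a single JSON array (["111","222","333"]) rather than one row per id, you can wrap the extraction in an aggregate (a sketch, assuming Postgres's json_agg):

select json_agg(u ->> 'id') as ids
from tableName,
     json_array_elements(colName -> 'users') as u;
-- ids => ["111", "222", "333"]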

Query to extract ids from a deeply nested json array object in Presto

I'm using Presto and trying to extract all 'id' values where 'source' = 'dd' from a nested JSON structure like the following.
{
    "results": [
        {
            "docs": [
                {
                    "id": "apple1",
                    "source": "dd"
                },
                {
                    "id": "apple2",
                    "source": "aa"
                },
                {
                    "id": "apple3",
                    "source": "dd"
                }
            ],
            "group": 99806
        }
    ]
}
I expect to extract the ids [apple1, apple3] into a column in Presto.
What is the right way to achieve this in a Presto query?
If your data has a regular structure as in the example you posted, you can use a combination of parsing the value as JSON, casting it to a structured SQL type (array/map/row), and then using array-processing functions to filter, transform, and extract the elements you want:
WITH data(value) AS (
    VALUES '{
        "results": [
            {
                "docs": [
                    { "id": "apple1", "source": "dd" },
                    { "id": "apple2", "source": "aa" },
                    { "id": "apple3", "source": "dd" }
                ],
                "group": 99806
            }
        ]
    }'
),
parsed(value) AS (
    SELECT cast(json_parse(value) AS row(results array(row(docs array(row(id varchar, source varchar)), "group" bigint))))
    FROM data
)
SELECT
    transform(               -- extract the id from the remaining docs
        filter(              -- keep only docs with source = 'dd'
            flatten(         -- flatten all docs arrays into a single doc array
                transform(value.results, r -> r.docs)  -- extract the docs arrays from the results array
            ),
            doc -> doc.source = 'dd'),
        doc -> doc.id)
FROM parsed
The query above produces:
_col0
------------------
[apple1, apple3]
(1 row)
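An equivalent spelling uses UNNEST instead of the lambda functions (a sketch against the same parsed CTE; note that unnesting an array of rows expands the row fields into columns, which the alias list names):

SELECT array_agg(id) AS ids
FROM parsed
CROSS JOIN UNNEST(value.results) AS r(docs, grp)  -- one row per result
CROSS JOIN UNNEST(docs) AS d(id, source)          -- one row per doc
WHERE source = 'dd'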

Take a JSON value in SQL Server without JSON_VALUE

I want to extract a value from a JSON object in SQL Server without using JSON_VALUE.
The JSON value:
{{
    "Url": "****",
    "Token": "",
    "Data": {
        "role_id": 1001,
        "data": {
            "stringvalue": [
                {
                    "minage": "21"
                },
                {
                    "maxage": "55"
                },
                {
                    "primary_identity_file": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAgAAAQABAAD/7QCcUGhvdG9zaG9wIDMuMAA4QklNBAQAAAAAAIAcAmcAFHpFZEQyY1ZzbGVyRzNrcF8yTjhHHAIoAGJGQk1EMDEwMDBhYzAwMzAwMDAwMjMxMDAwMDQxNzQwMDAwYjQ3YjAwMDBhMTgyMDAwMDliY2EwMDAwMjkyMDAxMDBmODJhMDEwMGFiMzQwMTAwMDgzZTAxMDBjZGM0MDEwMP/iAhxJQ0NfUFJPRklMRQABAQAAAgxsY21zAhAAAG1udHJSR0IgWFlaIA"
                }
            ]
        }
    }
}}
What I'm trying to do is extract the "primary_identity_file" value. The result should be:

data:image/jpeg;base64,/9j/4AAQSkZJRgABAgAAAQABAAD/7QCcUGhvdG9zaG9wIDMuMAA4QklNBAQAAAAAAIAcAmcAFHpFZEQyY1ZzbGVyRzNrcF8yTjhHHAIoAGJGQk1EMDEwMDBhYzAwMzAwMDAwMjMxMDAwMDQxNzQwMDAwYjQ3YjAwMDBhMTgyMDAwMDliY2EwMDAwMjkyMDAxMDBmODJhMDEwMGFiMzQwMTAwMDgzZTAxMDBjZGM0MDEwMP/iAhxJQ0NfUFJPRklMRQABAQAAAgxsY21zAhAAAG1udHJSR0IgWFlaIA

Note: the primary_identity_file value is more than 10K characters (JSON_VALUE can return at most 4000 characters, which is presumably why it can't be used here).
I'd try something like that:

select substring(vlv3, 1, charindex('"', vlv3) - 1) as vlv4       -- cut at the closing quote
from (
    select substring(vlv2, charindex('"', vlv2) + 1, len(vlv2)) as vlv3  -- skip to just after the opening quote
    from (
        -- +23 skips past the 23-character key "primary_identity_file"
        select substring(vlv, charindex('"primary_identity_file"', vlv) + 23, len(vlv)) as vlv2
        from test
    ) as test2
) as test3
You can rewrite this in a more readable way as a stored procedure.
Sample fiddle http://sqlfiddle.com/#!18/a122b/7
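If the outer double braces can be stripped so the stored text is valid JSON, OPENJSON is another option; unlike JSON_VALUE it can return nvarchar(max) values, so the 4000-character limit doesn't apply (a sketch, assuming the same test table with the text in a vlv column):

SELECT sv.primary_identity_file
FROM test t
CROSS APPLY OPENJSON(t.vlv, '$.Data.data.stringvalue')
    WITH (primary_identity_file nvarchar(max) '$.primary_identity_file') AS sv
WHERE sv.primary_identity_file IS NOT NULL;  -- only one array element has this key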