Reading JSON array containing JSON object into rows - sql

Based on this answer, I found one problem: a JSON object is returned as NULL.
Suppose that I have JSON like this:
{
"array_in_json": [
{ "number": 1, "character": "A", "some_object": { "code": 65 } },
{ "number": 2, "character": "B", "some_object": { "code": 66 } },
{ "number": 3, "character": "C", "some_object": { "code": 67 } },
{ "number": 4, "character": "D", "some_object": { "code": 68 } }
]
}
With a query like this:
DECLARE @json NVARCHAR(MAX)
SET @json = '{
"array_in_json": [
{ "number": 1, "character": "A", "some_object": { "code": 65 } },
{ "number": 2, "character": "B", "some_object": { "code": 66 } },
{ "number": 3, "character": "C", "some_object": { "code": 67 } },
{ "number": 4, "character": "D", "some_object": { "code": 68 } }
]
}'
SELECT
a.[number],
a.[character],
a.[some_object]
FROM
OPENJSON(@json) WITH (
Actions NVARCHAR(MAX) '$.array_in_json' AS JSON
) AS i
CROSS APPLY (
SELECT * FROM
OPENJSON(i.Actions) WITH (
[number] NVARCHAR(MAX) '$.number',
[character] NVARCHAR(MAX) '$.character',
[some_object] NVARCHAR(MAX) '$.some_object'
)
) a
This is the result:
number | character | some_object
-------------------------------------------------
1 | 'A' | NULL
2 | 'B' | NULL
3 | 'C' | NULL
4 | 'D' | NULL
Is there a way to get a result like the following instead (returning the inner JSON as an escaped string instead of NULL)?
number | character | some_object
-------------------------------------------------
1 | 'A' | '{ "code": 65 }'
2 | 'B' | '{ "code": 66 }'
3 | 'C' | '{ "code": 67 }'
4 | 'D' | '{ "code": 68 }'

You need to use the AS JSON option in your schema definition to specify that the $.some_object property contains an inner JSON object:
SELECT
a.[number],
a.[character],
a.[some_object]
FROM
OPENJSON(@json) WITH (
Actions NVARCHAR(MAX) '$.array_in_json' AS JSON
) AS i
CROSS APPLY (
SELECT * FROM
OPENJSON(i.Actions) WITH (
[number] NVARCHAR(MAX) '$.number',
[character] NVARCHAR(MAX) '$.character',
[some_object] NVARCHAR(MAX) '$.some_object' AS JSON
)
) a
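As an alternative sketch (not part of the original answer): JSON_QUERY can extract the inner object as a raw JSON fragment without declaring a second WITH schema, since JSON_VALUE only returns scalars while JSON_QUERY returns objects and arrays. A trimmed copy of the question's JSON is used for brevity:

```sql
-- Alternative sketch using JSON_QUERY instead of a second WITH ... AS JSON.
DECLARE @json NVARCHAR(MAX) = N'{
"array_in_json": [
  { "number": 1, "character": "A", "some_object": { "code": 65 } },
  { "number": 2, "character": "B", "some_object": { "code": 66 } }
]
}';

SELECT
    JSON_VALUE(j.[value], '$.number')      AS [number],
    JSON_VALUE(j.[value], '$.character')   AS [character],
    JSON_QUERY(j.[value], '$.some_object') AS [some_object]  -- raw JSON fragment
FROM OPENJSON(@json, '$.array_in_json') AS j;
```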

Related

BigQuery : best use of UNNEST Arrays

I really need some help. I have a big JSON file that I ingested into BigQuery, and I want to write a query that uses UNNEST twice. The data looks like this:
{
"categories": [
{
"id": 1,
"name" : "C0",
"properties": [
{
"name": "Property_1",
"value": {
"type": "String",
"value": "11111"
}
},
{
"name": "Property_2",
"value": {
"type": "String",
"value": "22222"
}
}
]}
]}
And I want a query that gives me something like this result:
+-------------+---------+------------+------------+
| Category_ID | Name_ID | Property_1 | Property_2 |
+-------------+---------+------------+------------+
| 1           | C0      | 11111      | 22222      |
+-------------+---------+------------+------------+
I already tried something like this, but it's not working:
SELECT
c.id as Category_ID,
c.name as Name_ID,
p.value.value as p.name
From `DataBase-xxxxxx` CROSS JOIN
UNNEST(categories) AS c,
UNNEST(c.properties) AS p;
Thank you so much 🙏
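One possible approach (a sketch, not a verified answer, reusing the table and field names from the question): a SQL column alias cannot be dynamic like `as p.name`, so pivoting each property into its own column requires conditional aggregation, for example:

```sql
-- Sketch in BigQuery Standard SQL; `DataBase-xxxxxx` and the property names
-- Property_1 / Property_2 are taken from the question and assumed fixed.
SELECT
  c.id AS Category_ID,
  c.name AS Name_ID,
  MAX(IF(p.name = 'Property_1', p.value.value, NULL)) AS Property_1,
  MAX(IF(p.name = 'Property_2', p.value.value, NULL)) AS Property_2
FROM `DataBase-xxxxxx`
CROSS JOIN UNNEST(categories) AS c
CROSS JOIN UNNEST(c.properties) AS p
GROUP BY c.id, c.name;
```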

SQL Query to get the count of Json Array

I have the JSON object below. How do I get the count of the Object array?
{
"Model": [
{
"ModelName": "Test Model",
"Object": [
{
"ID": 1,
"Name": "ABC"
},
{
"ID": 11,
"Name": "ABCD"
},
]
}]}
I tried the query below, but it seems JSON_LENGTH is not available.
SELECT ModelName,
JSON_LENGTH(JsonData, '$.Model[0].Object')
FROM TabA
The expected output should be
ModelName COUNT
Test Model 2
If you have valid JSON (at the moment you have a trailing comma (,) after one of your closing braces (})), then you can use OPENJSON and COUNT:
DECLARE @JSON nvarchar(MAX) = N'{
"Model": [
{
"ModelName": "Test Model",
"Object": [
{
"ID": 1,
"Name": "ABC"
},
{
"ID": 11,
"Name": "ABCD"
}
]
}]}';
SELECT M.ModelName,
COUNT(O.[key]) AS [Count]
FROM (VALUES(@JSON)) V(J)
CROSS APPLY OPENJSON(V.J)
WITH(ModelName varchar(20) '$.Model[0].ModelName',
[Object] nvarchar(MAX) '$.Model[0].Object' AS JSON) M
CROSS APPLY OPENJSON(M.[Object]) O
GROUP BY M.ModelName;
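If only the count for a single known path is needed, a shorter sketch (reusing the question's JSON) avoids the GROUP BY by counting the rows OPENJSON emits for the array:

```sql
-- Sketch: COUNT(*) over the default-schema OPENJSON rowset stands in for
-- MySQL's JSON_LENGTH, which SQL Server does not provide.
DECLARE @JSON nvarchar(MAX) = N'{"Model":[{"ModelName":"Test Model",
  "Object":[{"ID":1,"Name":"ABC"},{"ID":11,"Name":"ABCD"}]}]}';

SELECT
    JSON_VALUE(@JSON, '$.Model[0].ModelName') AS ModelName,
    (SELECT COUNT(*) FROM OPENJSON(@JSON, '$.Model[0].Object')) AS [Count];
```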

Get keys into JSON column in SQL Server 2017

I have this Json in a column of type NVARCHAR(MAX) in SQL Server 2017:
{
"coreTimes": {
"TMP_CLINICAL_TIMES_ANESTHESIOLOGY_OUT_DATETIME": {
"value": 1
},
"TMP_CLINICAL_TIMES_ANESTHESIOLOGY_IN_DATETIME": {
"value": 2
},
"TMP_CLINICAL_TIMES_SURGICAL_OUT_DATETIME": {
"value": 3
},
"TMP_CLINICAL_TIMES_ROOM_IN_DATETIME": {
"value": 4
},
"TMP_CLINICAL_TIMES_ROOM_OUT_DATETIME": {
"value": null
},
"TMP_CLINICAL_TIMES_BLOCK_OUT_DATETIME": {
"value": null
},
"TMP_CLINICAL_TIMES_BLOCK_IN_DATETIME": {
"value": null
},
"TMP_CLINICAL_TIMES_SURGICAL_IN_DATETIME": {
"value": null
}
}
}
I need this result:
Column
{"value":1}
{"value":2}
{"value":3}
{"value":4}
{"value":null}
{"value":null}
{"value":null}
{"value":null}
Which SQL Server 2017 function can I use to get this result when the JSON properties can change dynamically (only the key 'coreTimes' is fixed)?
In Oracle I have used:
SELECT res.*
FROM sopinterventionsaux ,
JSON_TABLE ( operating_times, '$.coreTimes.*'
COLUMNS (
value VARCHAR2 ( 2000 ) FORMAT JSON PATH '$'
)
) res
In PostgreSQL:
select value from table, jsonb_each(column-> 'coreTimes')
And in SQL Server?
If I understand you correctly, you need to use OPENJSON() with the default schema to parse the input JSON. In this case OPENJSON() returns a table with columns key, value, and type:
Table:
DECLARE @json nvarchar(max) = N'{
"coreTimes": {
"TMP_CLINICAL_TIMES_ANESTHESIOLOGY_OUT_DATETIME": {
"value": 1
},
"TMP_CLINICAL_TIMES_ANESTHESIOLOGY_IN_DATETIME": {
"value": 2
},
"TMP_CLINICAL_TIMES_SURGICAL_OUT_DATETIME": {
"value": 3
},
"TMP_CLINICAL_TIMES_ROOM_IN_DATETIME": {
"value": 4
},
"TMP_CLINICAL_TIMES_ROOM_OUT_DATETIME": {
"value": null
},
"TMP_CLINICAL_TIMES_BLOCK_OUT_DATETIME": {
"value": null
},
"TMP_CLINICAL_TIMES_BLOCK_IN_DATETIME": {
"value": null
},
"TMP_CLINICAL_TIMES_SURGICAL_IN_DATETIME": {
"value": null
}
}
}'
CREATE TABLE Data (JsonData nvarchar(max))
INSERT INTO Data (JsonData) VALUES (@json)
Statement:
SELECT j.[value]
FROM Data d
CROSS APPLY OPENJSON(d.JsonData, '$.coreTimes') j
Result:
---------------
value
---------------
{"value": 1}
{"value": 2}
{"value": 3}
{"value": 4}
{"value": null}
{"value": null}
{"value": null}
{"value": null}
If you want to get the exact values of the value keys, you need to use OPENJSON() with an explicit schema (column definitions) and an additional APPLY operator.
Statement:
SELECT j2.[value]
FROM Data d
CROSS APPLY OPENJSON(d.JsonData, '$.coreTimes') j1
CROSS APPLY OPENJSON(j1.[value]) WITH ([value] int '$.value') j2
Result:
-----
value
-----
1
2
3
4
NULL
NULL
NULL
NULL
Please try the following:
declare @j nvarchar(max) = '{
"coreTimes": {
"TMP_CLINICAL_TIMES_ANESTHESIOLOGY_OUT_DATETIME": {
"value": 1
},
"TMP_CLINICAL_TIMES_ANESTHESIOLOGY_IN_DATETIME": {
"value": 2
},
"TMP_CLINICAL_TIMES_SURGICAL_OUT_DATETIME": {
"value": 3
},
"TMP_CLINICAL_TIMES_ROOM_IN_DATETIME": {
"value": 4
},
"TMP_CLINICAL_TIMES_ROOM_OUT_DATETIME": {
"value": null
},
"TMP_CLINICAL_TIMES_BLOCK_OUT_DATETIME": {
"value": null
},
"TMP_CLINICAL_TIMES_BLOCK_IN_DATETIME": {
"value": null
},
"TMP_CLINICAL_TIMES_SURGICAL_IN_DATETIME": {
"value": null
}
}
}'
select L1.[value] from OPENJSON (@j, '$.coreTimes') AS L1
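If the dynamic key names are also needed alongside each JSON fragment, the default schema already exposes them (a small sketch with a trimmed copy of the question's JSON):

```sql
-- [key] carries the dynamic property name, [value] the JSON fragment under it.
declare @j nvarchar(max) = N'{
"coreTimes": {
  "TMP_CLINICAL_TIMES_ANESTHESIOLOGY_OUT_DATETIME": { "value": 1 },
  "TMP_CLINICAL_TIMES_ANESTHESIOLOGY_IN_DATETIME": { "value": 2 }
}
}';

select L1.[key], L1.[value]
from OPENJSON(@j, '$.coreTimes') AS L1;
```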

Transform Json Nested Object Array To Table Row

I have JSON like this:
[
{
"Id": "1234",
"stockDetail": [
{
"Number": "10022_1",
"Code": "500"
},
{
"Number": "10022_1",
"Code": "600"
}
]
},
{
"Id": "1235",
"stockDetail": [
{
"Number": "10023_1",
"Code": "100"
},
{
"Number": "10023_1",
"Code": "100"
}
]
}
]
How can I convert it into a SQL table like the one below?
+------+---------+------+
| Id | Number | Code |
+------+---------+------+
| 1234 | 10022_1 | 500 |
| 1234 | 10022_1 | 600 |
| 1235 | 10023_1 | 100 |
| 1235 | 10023_1 | 100 |
+------+---------+------+
If you need to define typed columns, you can use OPENJSON with a WITH clause:
DECLARE @j nvarchar(max) = N'[
{
"Id": "1234",
"stockDetail": [
{ "Number": "10022_1",
"Code": "500"
},
{ "Number": "10022_1",
"Code": "600"
}
]
},
{
"Id": "1235",
"stockDetail": [
{ "Number": "10023_1",
"Code": "100"
},
{ "Number": "10023_1",
"Code": "100"
}
]
}
]'
select father.Id, child.Number, child.Code
from openjson (@j)
with (
Id int,
stockDetail nvarchar(max) as json
) as father
cross apply openjson (father.stockDetail)
with (
Number nvarchar(100),
Code nvarchar(100)
) as child
Result:
+------+---------+------+
| Id   | Number  | Code |
+------+---------+------+
| 1234 | 10022_1 | 500  |
| 1234 | 10022_1 | 600  |
| 1235 | 10023_1 | 100  |
| 1235 | 10023_1 | 100  |
+------+---------+------+
In your case you may try to CROSS APPLY the JSON child node with the parent node:
DECLARE @json nvarchar(max)
SET @json = N'
[
{
"Id": "1234",
"stockDetail": [
{
"Number": "10022_1",
"Code": "500"
},
{
"Number": "10022_1",
"Code": "600"
}
]
},
{
"Id": "1235",
"stockDetail": [
{
"Number": "10023_1",
"Code": "100"
},
{
"Number": "10023_1",
"Code": "100"
}
]
}
]'
SELECT
JSON_Value (i.value, '$.Id') as ID,
JSON_Value (d.value, '$.Number') as [Number],
JSON_Value (d.value, '$.Code') as [Code]
FROM OPENJSON (@json, '$') as i
CROSS APPLY OPENJSON (i.value, '$.stockDetail') as d
Output:
ID Number Code
1234 10022_1 500
1234 10022_1 600
1235 10023_1 100
1235 10023_1 100

Convert table in SQL server into JSON string for migration into DocumentDB

I have a table called DimCompany in SQL Server like so:
+----+---------+--------+
| id | Company | Budget |
+----+---------+--------+
| 1 | abc | 111 |
| 2 | def | 444 |
+----+---------+--------+
I would like to convert this table into a JSON file like so:
{
"DimCompany":{
"id":1,
"companydetails": [{
"columnid": "1",
"columnfieldname": "Company",
"columnfieldvalue": "abc"
},
{
"columnid": "2",
"columnfieldname": "Budget",
"columnfieldvalue": "111"
}]
}
},
{
"DimCompany":{
"id":2,
"companydetails": [{
"columnid": "1",
"columnfieldname": "Company",
"columnfieldvalue": "def"
},
{
"columnid": "2",
"columnfieldname": "Budget",
"columnfieldvalue": "444"
}]
}
}
where columnid is a value from sys.columns matching the column field name. I've tried doing this by unpivoting the table, joining sys.columns on the field name where sys.objects.name = 'DimCompany', putting this in a view, and then querying the view to get JSON output for migration into DocumentDB.
However, I would like to avoid UNPIVOT and form a query directly to get the desired output.
I'm just curious whether this is possible in SQL Server or in any other tool.
Without you having to use UNPIVOT yourself, the following SQL:
if object_id(N'dbo.DimCompany') is not null drop table dbo.DimCompany;
create table dbo.DimCompany (
id int not null identity(1,1),
Company nvarchar(50) not null,
Budget float not null
);
insert dbo.DimCompany (Company, Budget) values
('abc', 111),
('def', 444);
go
select id as 'DimCompany.id',
(
select columnid=cast(sc.column_id as nvarchar), columnfieldname, columnfieldvalue
from (
select N'Company', Company from dbo.DimCompany DC2 where DC2.id = DC1.id
union
select N'Budget', cast(Budget as nvarchar) from dbo.DimCompany DC2 where DC2.id = DC1.id
) keyValues (columnfieldname, columnfieldvalue)
join sys.columns sc on sc.object_id=object_id(N'dbo.DimCompany') and sc.name=columnfieldname
for json path
) as 'DimCompany.companydetails'
from dbo.DimCompany DC1
for json path, without_array_wrapper;
Produces the following JSON as per your example:
{
"DimCompany": {
"id": 1,
"companydetails": [
{
"columnid": "2",
"columnfieldname": "Company",
"columnfieldvalue": "abc"
},
{
"columnid": "3",
"columnfieldname": "Budget",
"columnfieldvalue": "111"
}
]
}
},
{
"DimCompany": {
"id": 2,
"companydetails": [
{
"columnid": "2",
"columnfieldname": "Company",
"columnfieldvalue": "def"
},
{
"columnid": "3",
"columnfieldname": "Budget",
"columnfieldvalue": "444"
}
]
}
}
Things to note:
The sys.columns column_id values start at 1 for the dbo.DimCompany.id column, which is why Company and Budget appear as 2 and 3. Subtract 1 before casting if that's a requirement.
Using without_array_wrapper removes the surrounding [] characters, per your example, but the result isn't really valid JSON.
I doubt this would be scalable for tables with large numbers of columns.