I have the following statement that I am trying to execute in SQL Server 2016 CTP 3:
DECLARE @json nvarchar(max)
set @json = '[
    { "name": "John" },
    { "name": "Jane", "surname": "Doe" }
]'

select
    'othervalue' as o,
    @json as j
for json path
The problem is that when I execute these statements I get the following JSON string (with escaped characters):
[{"o":"othervalue","j":"[ \r\n { \"name\": \"John\" },\r\n { \"name\": \"Jane\", \"surname\": \"Doe\" }\r\n]"}]
My question is: how can I select a JSON string correctly with a SELECT statement (without escaped characters)?
Thanks
Wrap the @json variable in JSON_QUERY:
select
    'othervalue' as o,
    JSON_QUERY(@json) as j
for json path
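With the sample @json above, the array is now embedded as real JSON rather than an escaped string, so (whitespace from the variable aside) the result should look roughly like this:
[{"o":"othervalue","j":[{"name":"John"},{"name":"Jane","surname":"Doe"}]}]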
See frequently asked questions on MSDN https://msdn.microsoft.com/en-us/library/mt631706.aspx
Related
I am trying to convert a JSON file into a SQL table in SQL Server, and I am running into an error that I cannot seem to find a solution for.
In addition to resolving the error, I would also like to go a step further and break down the employee ID values so they are not in a single cell but rather split into separate rows.
Below is the code I am using, which includes a sample JSON structure:
drop table if exists newtable1;

declare @json NVARCHAR(MAX)
--select @json=BulkColumn
--from openrowset (bulk 'C:\Users\hamza\Documents\Price Transparency\banking_sample.json',single_clob) as j;
= N'{
    "comapny_name": "chase",
    "company_type": "banking",
    "last_updated_on": "2020-08-27",
    "institutional": [{
        "region_id": 1,
        "groups": [{
            "employee_id": [1111111111, 2222222222, 3333333333, 4444444444, 5555555555],
            "site": {
                "type": "atm",
                "id": "11-1111111"
            }
        },{
            "employee_id": [1111111111, 2222222222, 3333333333, 4444444444, 5555555555],
            "site": {
                "type": "branch",
                "id": "22-2222222"
            }
        }]
    },{
        "region_id": 2,
        "location": "new york city, ny"
    }]
}';

select
    JSON_VALUE(a.value,'$.company_name') as company_name,
    JSON_VALUE(a.value,'$.company_type') as reporting_entity_type,
    JSON_VALUE(a.value,'$.last_updated_on') as plan_name,
    JSON_VALUE(b.value,'$.region_id') as prov_grp_id,
    JSON_VALUE(b.value,'$.location') as loc,
    JSON_VALUE(c.value,'$.employee_id') as npi,
    JSON_VALUE(c.value,'$.site.type') as tin_type,
    JSON_VALUE(c.value,'$.site.id') as tin_value
into newtable1
from openjson (@json) as a
cross apply openjson(a.value,'$.institutional') as b
cross apply openjson(b.value,'$.groups') as c;
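A minimal sketch of one possible way to address both points (an editorial illustration, not an accepted answer): read the scalar root keys with JSON_VALUE directly, walk the institutional array with a JSON path, and add one more OPENJSON over employee_id so each ID lands in its own row. The likely cause of the error above is that OPENJSON(@json) without a path is then applied to scalar top-level values such as "chase"; note also that the paths below follow the sample's "comapny_name" spelling.
select
    JSON_VALUE(@json, '$.comapny_name')    as company_name,          -- key is misspelled in the sample JSON
    JSON_VALUE(@json, '$.company_type')    as reporting_entity_type,
    JSON_VALUE(@json, '$.last_updated_on') as plan_name,
    JSON_VALUE(b.value, '$.region_id')     as prov_grp_id,
    JSON_VALUE(b.value, '$.location')      as loc,
    d.value                                as npi,                   -- one row per employee_id value
    JSON_VALUE(c.value, '$.site.type')     as tin_type,
    JSON_VALUE(c.value, '$.site.id')       as tin_value
from openjson(@json, '$.institutional') as b
outer apply openjson(b.value, '$.groups') as c
outer apply openjson(c.value, '$.employee_id') as d;
-- OUTER APPLY keeps regions without a "groups" element (region_id 2) as rows with NULLs.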
I have nested JSON consisting of outer keys that are numbers, each of which contains an inner object that I need to import into a table in SQL Server. The JSON file is set up like so:
{
"121212": {
"name": name of item,
"subject": item subject
},
"343434": {
"name": name of item,
"subject": item subject
}
}
I can use the SQL Server function OPENJSON() to import a single array without issue like so:
DECLARE @arrayVariable VARCHAR(MAX)
SELECT @arrayVariable = BulkColumn FROM OPENROWSET(BULK 'array.json', SINGLE_BLOB) JSON

INSERT INTO ArrayTable (arrayName, arraySubject)
SELECT * FROM OPENJSON(@arrayVariable, '$."121212"')
WITH (
    arrayName VARCHAR(MAX) '$.name',
    arraySubject VARCHAR(MAX) '$.subject'
)
The above code successfully imports object 121212 into ArrayTable. However, I would like to know if there is a solution that can use wildcards as an argument for OPENJSON in order to import all of the numeric keys from the JSON, so that they don't have to be imported individually. I have tried using wildcards, but none of the formatting options I've tried have worked so far. For example:
OPENJSON(@arrayVariable, '$."[0-9]%"')
What would be the best way to import all of the numerically titled JSON arrays using OPENJSON()?
Try this
DECLARE @arrayVariable VARCHAR(MAX) = N'{
    "121212": {
        "name": "name of item1",
        "subject": "item subject1"
    },
    "343434": {
        "name": "name of item2",
        "subject": "item subject2"
    }
}'

SELECT v.arrayName, v.arraySubject
FROM OPENJSON(@arrayVariable) AS r
CROSS APPLY OPENJSON(r.value)
WITH (
    arrayName VARCHAR(MAX) '$.name',
    arraySubject VARCHAR(MAX) '$.subject'
) AS v
WHERE r.[key] LIKE '[0-9]%'
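If the end goal is still to load ArrayTable as in the question, the same pattern can feed the INSERT (a sketch, assuming the ArrayTable definition from the question):
INSERT INTO ArrayTable (arrayName, arraySubject)
SELECT v.arrayName, v.arraySubject
FROM OPENJSON(@arrayVariable) AS r
CROSS APPLY OPENJSON(r.value)
WITH (
    arrayName VARCHAR(MAX) '$.name',
    arraySubject VARCHAR(MAX) '$.subject'
) AS v
WHERE r.[key] LIKE '[0-9]%'   -- keep only keys that start with a digit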
I have an nvarchar field containing JSON like this:
{"BOARD":"QC_Reference_Phone","SERIAL":"LGM700c2eee454","VERSION.INCREMENTAL":"1901817521542","CPU_ABI2":"armeabi","HOST":"CLD-BLD3-VM1-16","TIME":"1547801577000","MODEL":"LG-M700","MANUFACTURER":"LGE","USER":"jenkins","CPU_ABI":"armeabi-v7a","BRAND":"lge","DISPLAY":"OPM1.171019.026","FINGERPRINT":"lge/mh_global_com/mh:8.1.0/OPM1.171019.026/1901817521542:user/release-keys","HARDWARE":"mh","PRODUCT":"mh_global_com","BOOTLOADER":"unknown","VERSION.RELEASE":"8.1.0","ID":"OPM1.171019.026","UNKNOWN":"unknown","TYPE":"user","VERSION.SDK.NUMBER":"27","TAGS":"release-keys"}
And so my syntax is:
select JSON_VALUE(DeviceHardwareData,'$.VERSION.SDK.NUMBER') SDKVersion_nbr
FROM MyTable
It works for all the other values within the JSON field except "VERSION.SDK.NUMBER", which returns a NULL result for every row in my table.
I can actually get the value with the OPENJSON function, but I would like to know why JSON_VALUE specifically does not return the value for that attribute.
It doesn't work as you expect because, in JSON path syntax, the full stop character means "go one level down to a nested element with the following name". To extract the value with your path expression, your JSON structure would have to resemble the following:
"VERSION": {
"SDK": {
"NUMBER": 14
}
}
However, enclosing the element name in double quotes in the path expression apparently does the trick:
declare @j nvarchar(max) = N'{
    "VERSION.SDK.NUMBER": "27",
    "VERSION": {
        "SDK": {
            "NUMBER": 14
        }
    }
}';

select json_value(@j, '$."VERSION.SDK.NUMBER"') as [TopValue],
       json_value(@j, '$.VERSION.SDK.NUMBER') as [NestedValue];
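With that sample document, the quoted path reads the top-level "VERSION.SDK.NUMBER" key while the unquoted path walks the nested objects, so the query should return TopValue = 27 and NestedValue = 14.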
I have an exercise to extract some data from a larger JSON object; however, the data is added as multiple objects, or perhaps an array of sorts.
An example is below:
DECLARE @json NVARCHAR(MAX) = '{
"N.data.-ce731645-e4ef-4784-bc02-bb90b4c9e9e6": "Some Data",
"N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f": [
{
"date_1": "2018-10-20T23:00:00.000Z"
},
{
"date_1": "2018-10-21T23:00:00.000Z"
}
]
}'
I need to extract these datetime entries from the "date_1" identifier, ideally into a CSV list. From there I can do my own manipulation.
2018-10-20T23:00:00.000Z, 2018-10-21T23:00:00.000Z
I am familiar with JSON_VALUE(), but not with its use beyond a simple piece of one-dimensional data.
What I have so far:
DECLARE @json NVARCHAR(MAX) = '{
"N.data.-ce731645-e4ef-4784-bc02-bb90b4c9e9e6": "Some Data",
"N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f": [
{
"date_1": "2018-10-20T23:00:00.000Z"
},
{
"date_1": "2018-10-21T23:00:00.000Z"
}
]
}'
SELECT value FROM OPENJSON(@json)
Is there a way to achieve the expected output without complex substring() and replace() usage?
Using SQL Server 2017
Microsoft SQL Server 2017 (RTM) - 14.0.1000.169 (X64) Aug 22 2017 17:04:49 Copyright (C) 2017 Microsoft Corporation Express Edition (64-bit) on Windows Server 2012 R2 Datacenter 6.3 <X64> (Build 9600: ) (Hypervisor)
Thanks
The extraction can be done with native OPENJSON (available since SQL Server 2016, so it works on your SQL Server 2017 instance):
DECLARE @json NVARCHAR(MAX) = '{
"N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f": [
{
"date_1": "2018-10-20T23:00:00.000Z"
},
{
"date_1": "2018-10-21T23:00:00.000Z"
}
]
}'
SELECT
    JSON_VALUE(child_value.value, '$.date_1') AS [key]
FROM OPENJSON(@json, '$') AS nda
cross apply openjson(nda.value, '$') as child_value
Results in:
key
2018-10-20T23:00:00.000Z
2018-10-21T23:00:00.000Z
Is there a way to adjust this to extract the values for a specific key, "N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f" in the example?
In this case, that query can be slightly simplified to:
DECLARE @id nvarchar(200) = 'N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f'

SELECT
    JSON_VALUE(nda.value, '$.date_1') AS [key]
FROM OPENJSON(@json, concat('$."', @id, '"')) AS nda
or without parametrization:
SELECT
    JSON_VALUE(nda.value, '$.date_1') AS [key]
FROM OPENJSON(@json, '$."N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f"') AS nda
Use a cross apply with OPENJSON() using a with_clause:
DECLARE @json NVARCHAR(MAX) = '{
"N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f": [
{
"date_1": "2018-10-20T23:00:00.000Z"
},
{
"date_1": "2018-10-21T23:00:00.000Z"
}
]
}';
SELECT [b].*
FROM OPENJSON(#json) [a]
CROSS APPLY
OPENJSON([a].[Value])
WITH (
[date_1] DATETIME '$.date_1'
) [b];
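If the comma-separated list from the question is the end goal, the same shape can be aggregated directly (a sketch, assuming SQL Server 2017's STRING_AGG and reading date_1 as text so the ISO format is preserved):
SELECT STRING_AGG([b].[date_1], ', ') AS [dates]
FROM OPENJSON(@json) [a]
CROSS APPLY
    OPENJSON([a].[Value])
    WITH (
        [date_1] NVARCHAR(30) '$.date_1'
    ) [b];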
Another possible approach uses OPENJSON() without a schema. With this approach you can get key/value pairs from the nested JSON array, even if the array items have different key names.
DECLARE @json nvarchar(max)
SET @json =
N'{"N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f": [
{
"date_1": "2018-10-20T23:00:00.000Z"
},
{
"date_1": "2018-10-21T23:00:00.000Z"
},
{
"date_2": "2019-10-21T23:00:00.000Z"
}
]
}'
SELECT
    x.[key] AS SessionData,
    z.[key],
    z.[value]
FROM OPENJSON(@json) x
CROSS APPLY (SELECT * FROM OPENJSON(x.[value])) y
CROSS APPLY (SELECT * FROM OPENJSON(y.[value])) z
--WHERE z.[key] = 'date_1'
Output:
SessionData key value
N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f date_1 2018-10-20T23:00:00.000Z
N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f date_1 2018-10-21T23:00:00.000Z
N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f date_2 2019-10-21T23:00:00.000Z
Update:
If you want to filter by key name, the next example may help:
DECLARE @json NVARCHAR(MAX) = '{
"N.data.-ce731645-e4ef-4784-bc02-bb90b4c9e9e6": "Some Data",
"N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f": [
{
"date_1": "2018-10-20T23:00:00.000Z"
},
{
"date_1": "2018-10-21T23:00:00.000Z"
}
]
}'
SELECT z.[value]
--SELECT STRING_AGG(z.[value], ', ') [Data] -- with string aggregation
FROM OPENJSON(@json) x
CROSS APPLY (SELECT * FROM OPENJSON(x.[value])) y
CROSS APPLY (SELECT * FROM OPENJSON(y.[value])) z
WHERE x.[key] = 'N.data.sessionDates-7f1790d3-9175-43aa-962b-161ee3b8615f'
Output:
value
2018-10-20T23:00:00.000Z
2018-10-21T23:00:00.000Z
-- With string aggregation
--Data
--2018-10-20T23:00:00.000Z, 2018-10-21T23:00:00.000Z
I am trying to concatenate values from nested JSON in SQL Server. How can I do that?
Please help me out with this.
Example: I have JSON like the below.
[
{
"Name": "Cards",
"Value": [
"Pack of 24 Dare Cards"
],
"SourceSpecified": false,
"Any": []
},
{
"Name": "Boppers and Shot Glasses",
"Value": [
"12 Willy Shot Glass and 12 Hen Boppers"
],
"SourceSpecified": false,
"Any": []
}
]
I want this output:
Pack of 24 Dare Cards-12 Willy Shot Glass and 12 Hen Boppers
If Value always contains one element:
DECLARE @JSON NVARCHAR(MAX) = '[
{
"Name": "Cards",
"Value": [
"Pack of 24 Dare Cards"
],
"SourceSpecified": false,
"Any": []
},
{
"Name": "Boppers and Shot Glasses",
"Value": [
"12 Willy Shot Glass and 12 Hen Boppers"
],
"SourceSpecified": false,
"Any": []
}
]'
SELECT STRING_AGG(Val, '-')
FROM OPENJSON(@JSON)
WITH (Val NVARCHAR(MAX) '$.Value[0]')
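With the sample above, this should return the requested string: Pack of 24 Dare Cards-12 Willy Shot Glass and 12 Hen Boppers. Note that STRING_AGG does not guarantee concatenation order without a WITHIN GROUP (ORDER BY ...) clause, so with more than a handful of rows an explicit order may be worth adding.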
Supporting multiple values is also easy:
SELECT STRING_AGG(Val, '-')
FROM OPENJSON(@JSON)
WITH (VALUE NVARCHAR(MAX) '$.Value' AS JSON)
OUTER APPLY OPENJSON(VALUE) WITH (Val NVARCHAR(MAX) '$')
I achieved this by creating a scalar-valued function that produces the concatenated value.
CREATE FUNCTION [dbo].[test]
(
    @VariationData nvarchar(max)
)
RETURNS nvarchar(max)
AS
BEGIN
    DECLARE @val nvarchar(max)

    SELECT @val = COALESCE(@val + ', ', '') + [Value]
    FROM OPENJSON(@VariationData)
    WITH ([Value] NVARCHAR(MAX) N'$.Value[0]')

    RETURN @val
END
Call the function with the nested JSON value, i.e.
SELECT [dbo].[test](@JSON);
This worked for me.