This is an awkward thing to word, so if this has been asked before, sorry; I couldn't find any answers.
I'm taking JSON that is a list of objects and storing the fields in a table. However, the JSON comes from a third party and is prone to gaining new fields, so I'd also like to save the raw JSON of each object. That way, if any unhandled fields are added, we still have the raw JSON, and can update the DB for those fields and re-process the stored JSON, so no data is lost.
However, I can't find or figure out how to get just the raw JSON of the current list item.
The basic OPENJSON query I have:
DECLARE @JSON NVARCHAR(MAX) = '{"data": [
{ "id": 1,
"name": "foo",
"type": "bar",
"random": "Potato"},
{ "id": 2,
"name": "cake",
"type": "special",
"random": "unhandled field"}]}'
SELECT *
FROM OPENJSON(@JSON, '$.data')
WITH (
[id] INT '$.id',
[type] NVARCHAR(50) '$.type',
[name] NVARCHAR(50) '$.name'
) jsonData
but I can't find anything documenting how to also get the entire JSON for the item as a field.
My ideal table output would be:
id  name  type     json
--  ----  -------  ----------------------------------------------------------------------------
1   foo   bar      { "id": 1, "name": "foo", "type": "bar", "random": "Potato"}
2   cake  special  { "id": 2, "name": "cake", "type": "special", "random": "unhandled field"}
I am working on a sensitive migration. The scenario is as follows:
I have a new table that I need to populate with data
There is an existing table which contains a column (of type json) that holds an array of objects such as:
[
{
"id": 0,
"name": "custom-field-0",
"label": "When is the deadline for a response?",
"type": "Date",
"options": "",
"value": "2020-10-02",
"index": 1
},
{
"id": 1,
"name": "custom-field-1",
"label": "What territory does this relate to?",
"type": "Dropdown",
"options": "UK, DE, SE, DK, BE, NL, IT, FR, ES, AT, CH, NO, US, SG, Other",
"value": " DE",
"index": 2
}
]
I essentially need to map the values in this column to my new table. I have worked with JSON data in PostgreSQL before, where I was dealing with a single object in the JSON, but never with arrays of objects, and never on such a large scale.
So, just to summarise: how does one iterate over every row and every object in each array, and insert that data into a new table?
EDIT
I have been experimenting with some functions, and I found a couple that seem promising: json_array_elements_text and json_array_elements, as these allowed me to add multiple rows to the new table from this array of objects.
However, my issue is that I need to map certain values to the new table.
INSERT INTO form_field_value ("name", "label", "inputType", "options", "form", "workspace")
SELECT <<HERE IS WHERE I NEED TO EXTRACT VALUES FROM THE JSON ARRAY>>, task.form, task.workspace
FROM task;
EDIT 2
I have been playing around some more with the above functions, but have run into a slight issue.
INSERT INTO form_field_value ("name", "label", "inputType", "options", "form", "workspace")
SELECT cf ->> 'name',
(cf ->> 'label')
...
FROM jsonb_array_elements(task."customFields") AS t(cf);
My issue lies in the FROM clause: customFields is the array of objects, but I also need to get the form and workspace attributes from the task table too. Plus, I am pretty sure that the FROM clause would not work anyway, as it will probably complain about task."customFields" not being specified or something.
Here is a SELECT statement that uses json_array_elements and a lateral join in the FROM clause to flatten the data:
select j ->> 'name' as "name", j ->> 'label' as "label",
j ->> 'type' as "inputType", j ->> 'options' as "options", form, workspace
from task
cross join lateral json_array_elements("customFields") as l(j);
The FROM clause can be less verbose:
from task, json_array_elements("customFields") as l(j)
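Putting that together with the INSERT from the question (a sketch assuming the form_field_value columns listed there):
insert into form_field_value ("name", "label", "inputType", "options", "form", "workspace")
select j ->> 'name', j ->> 'label', j ->> 'type', j ->> 'options', form, workspace
from task
cross join lateral json_array_elements("customFields") as l(j);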
You can try to use json_to_recordset:
select * from json_to_recordset('
[
{
"id": 0,
"name": "custom-field-0",
"label": "When is the deadline for a response?",
"type": "Date",
"options": "",
"value": "2020-10-02",
"index": 1
},
{
"id": 1,
"name": "custom-field-1",
"label": "What territory does this relate to?",
"type": "Dropdown",
"options": "UK, DE, SE, DK, BE, NL, IT, FR, ES, AT, CH, NO, US, SG, Other",
"value": " DE",
"index": 2
}
]
') as x(id int, name text, label text, type text, options text, value text, index int)
For inserting the records you can use SQL like this:
INSERT INTO form_field_value ("name", "label", "inputType", "options", "form", "workspace")
SELECT name, label, type, options, form, workspace
FROM
    task,
    json_to_recordset(task."customFields") AS
        x (id int, name text, label text, type text, options text, value text, index int)
Let's say I have a JSON input parameter called @ImageURL in my procedure, with the following structure, which is used for storing and manipulating my Uploader section.
[
{
"Title": "some title",
"Urls": "some url",
"Type": "application/pdf",
"IsPrimary": true
},
...
]
This JSON string can have more than one set of Title, Urls, etc., but there are some cases in which I need to check that it contains only one, since I can't be sure that the front-end developers will send me the JSON in the right format.
For checking this, I have used the following code, which seems to be working, but I feel like it's unclean and there might be a better way to write this than filling a table variable and checking its COUNT.
DECLARE @JSONImageFile TABLE(
Level1ID INT,
JSONImageFile NVARCHAR(MAX),
JSONImageFileTitle NVARCHAR(1000),
JSONImageFileType NVARCHAR(100),
JSONImageFileIsPrimary BIT
)
INSERT INTO @JSONImageFile(
Level1ID,
JSONImageFile,
JSONImageFileTitle,
JSONImageFileType,
JSONImageFileIsPrimary
)
SELECT
@Level1ID,
Urls,
Title,
Type,
IsPrimary
FROM OPENJSON(@ImageURL) WITH (
Urls NVARCHAR(MAX),
Title NVARCHAR(1000),
Type NVARCHAR(100),
IsPrimary BIT
)
IF (SELECT COUNT(*) FROM @JSONImageFile) = 1 /* the main code */
Thanks in advance for your answers.
In the case of a JSON array, you may try to parse the array with OPENJSON() and the default schema, and check the max value of the returned key column:
JSON:
DECLARE #ImageUrl nvarchar(max) =
N'[
{
"Title": "some title",
"Urls": "some url",
"Type": "application/pdf",
"IsPrimary": true
},
{
"Title": "some title",
"Urls": "some url",
"Type": "application/pdf",
"IsPrimary": true
}
]'
Statement:
IF EXISTS (
SELECT 1
FROM OPENJSON(#ImageUrl)
HAVING MAX(CONVERT(int, [key])) = 0
)
PRINT 'Success'
ELSE
PRINT 'Error'
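If you prefer the COUNT approach, the same check can also be done directly against OPENJSON() without the table variable; a sketch using the same @ImageUrl:
IF (SELECT COUNT(*) FROM OPENJSON(@ImageUrl)) = 1
    PRINT 'Success'
ELSE
    PRINT 'Error'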
I want to get a specific string from a column. How can I do that?
Here is the JSON, from the column named json in the table named my_table.
I want to fetch "extensionAttribute.simSerial": "310240000029929".
{
"name": "urn:imei:930000001801583",
"type": "DEVICE",
"sourceId": "P-n1000USCqT4",
"consumers": "CDM",
"crudStatus": {
"status": "SUCCESS",
"operation": "UPDATE"
},
"targetName": "urn:imei:930000001801583",
"deviceTypeId": "dgs11b74714f7020ctoogu3zfugc6",
"createdUserId": "a8aacc5d978d494eb54ae4243e714646",
"onboardStatus": "DONE",
"consrStatus": "",
"deviceTypeName": "performance_device_type_2",
"lwm2mPskSecret": "49443",
"createdUserName": "manager",
"bootstrapRequest": true,
"lwm2mPskIdentity": "urn:imei:930000001801583",
"boostrapPskSecret": "49443",
"deviceTypeVersion": 1,
"lastUpdatedUserId": "",
"lwm2mSecurityMode": "psk",
"consumersForUpdate": "",
"bootstrapPskIdentity": "urn:imei:930000001801583",
"pureCoapSecurityMode": "NONE",
"bootstrapSecurityMode": "psk",
"extensionAttribute.imsi": "310240000029929",
"extensionAttribute.msisdn": "310240000029929",
"extensionAttribute.simSerial": "310240000029929",
"extensionAttribute.msisdnStatus": "active"
}
Solution
You can do it as follows:
select column_name->>'extensionAttribute.simSerial' as simSerial from my_table;
where column_name is your column name in a table.
If you have JSON with greater depth, you can do something like:
column_name->'key_in_json'->'another_key_in_json'->>'last_key_in_json'
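For example, applied to the crudStatus object in the JSON above (a sketch using the column and table names from the question, and assuming the column is of type json or jsonb):
select "json"->'crudStatus'->>'status' as "status"
from my_table;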
Manual
JSON Functions and Operators
PostgreSQL JSON Tutorial
I am trying to build a query which combines rows of one table into a JSON array; I then want that array to be part of the returned result.
I know how to do a simple query like
SELECT *
FROM public.template
WHERE id=1
And I have worked out how to produce the JSON array that I want
SELECT array_to_json(array_agg(to_json(fields)))
FROM (
SELECT id, name, format, data
FROM public.field
WHERE template_id = 1
) fields
However, I cannot work out how to combine the two, so that the result is a number of fields from public.template with the output of the second query being one of the returned fields.
I am using PostgreSQL 9.6.6
Edit: as requested, here is more information: a definition of the field and template tables and a sample of each query's output.
Currently, I have a JSONB column on the template table which I am using to store an array of fields, but I want to move the fields to their own table so that I can more easily enforce a schema on them.
Template table contains:
id
name
data
organisation_id
But I would like to remove data and replace it with the field table which contains:
id
name
format
data
template_id
At the moment the output of the first query is:
{
"id": 1,
"name": "Test Template",
"data": [
{
"id": "1",
"data": null,
"name": "Assigned User",
"format": "String"
},
{
"id": "2",
"data": null,
"name": "Office",
"format": "String"
},
{
"id": "3",
"data": null,
"name": "Department",
"format": "String"
}
],
"id_organisation": 1
}
This output is what I would like to recreate using one query and both tables. The second query outputs this, but I do not know how to merge it into a single query:
[{
"id": 1,
"name": "Assigned User",
"format": "String",
"data": null
},{
"id": 2,
"name": "Office",
"format": "String",
"data": null
},{
"id": 3,
"name": "Department",
"format": "String",
"data": null
}]
The feature you're looking for is jsonb concatenation. You can do that by using the || operator, which has been available since PostgreSQL 9.5:
SELECT to_jsonb(template.*) || jsonb_build_object('data', (SELECT jsonb_agg(to_jsonb(field)) FROM field WHERE field.template_id = template.id))
FROM template;
Sorry for poorly phrasing what I was trying to achieve. After hours of Googling I have worked it out, and it was a lot simpler than I thought.
SELECT t.id, t.name, d.data
FROM public.template t, (
    SELECT array_to_json(array_agg(to_json(fields))) AS data
    FROM (
        SELECT id, name, format, data
        FROM public.field
        WHERE template_id = 1
    ) fields
) AS d
WHERE t.id = 1
I wanted the result of the subquery to be a column in the output rather than compiling the entire output table as JSON.
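A variation that keeps the array as a column for every template at once uses a correlated subquery; this is only a sketch against the same template and field tables, and it swaps in json_agg and json_build_object (both available in 9.6) for the array_to_json(array_agg(...)) construction:
SELECT t.id, t.name,
       (SELECT json_agg(json_build_object(
                   'id', f.id, 'name', f.name, 'format', f.format, 'data', f.data))
        FROM public.field f
        WHERE f.template_id = t.id) AS data
FROM public.template t;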
I have an existing database with an important column called InDays, of datatype nvarchar(150).
In the existing data there's an array that has objects inside and looks like this:
InDays
----------------------------------------------------------------------------------------------
[{ "day": 1, "from": "12:00am", "to": "2:00am"},{ "day": 4, "from": "2:00am", "to": "4:00am"}]
There can be more than one object inside.
I tried inserting it as it is, but I get [object Object] instead of the value.
EDIT
The insert code:
DECLARE @InDays nvarchar(150) = [{ "day": 1, "from": "12:00am", "to": "2:00am"},{ "day": 4, "from": "2:00am", "to": "4:00am"}]
INSERT INTO Course (
InDays
)
VALUES
(
#InDays
)
I have ... an important column that's ... nvarchar(150) datatype.
So use that type with your insert:
DECLARE @InDays nvarchar(150) = '[{ "day": 1, "from": "12:00am", "to": "2:00am"},{ "day": 4, "from": "2:00am", "to": "4:00am"}]'
Though I have my doubts that 150 will be large enough if you could end up with many of these. Judging by the first object, and assuming it's typical, you'll run out of space after only a few members of the array.
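For completeness, putting that declaration together with the INSERT from the question, a sketch of the whole statement would be:
DECLARE @InDays nvarchar(150) = '[{ "day": 1, "from": "12:00am", "to": "2:00am"},{ "day": 4, "from": "2:00am", "to": "4:00am"}]'
INSERT INTO Course (InDays)
VALUES (@InDays)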
For future explorers
I simply had to convert the JSON into a string, which is done by JSON.stringify([{"value": value}]), and then store it directly in the SQL database.
Then JSON.parse(value) converts it back to JSON again.