I'm having trouble extracting values from nested JSON stored in a column.
The column holds values that look almost like nested JSON, but some of the JSON has backslashes between values that I need to clean out.
JSON looks like this:
{"mopub_json":
"{\"currency\":\"USD\",
\"country\":\"US\",
\"publisher_revenue\":0.01824}
"}
I need to get the currency and publisher revenue as separate columns, so I tried this:
SET json_serialization_enable TO true;
SET json_serialization_parse_nested_strings TO true;
SELECT
JSON_EXTRACT_PATH_TEXT(column_name, 'mopub_json', 'publisher_revenue') as revenue_mopub,
JSON_EXTRACT_PATH_TEXT(column_name, 'mopub_json', 'currency') as currency_mopub
FROM(
SELECT replace(column_name, "\t", '')
FROM table_name)
I receive the next error:
[Amazon](500310) Invalid operation: column "\t" does not exist in events
When I'm trying this:
SET json_serialization_parse_nested_strings TO true;
SELECT
JSON_EXTRACT_PATH_TEXT(column_name, 'mopub_json', 'publisher_revenue') as revenue_mopub,
JSON_EXTRACT_PATH_TEXT(column_name, 'mopub_json', 'currency') as currency_mopub
FROM(
SELECT replace(column_name, chr(92), '')
FROM table_name)
I receive
Invalid operation: JSON parsing error
When I try to extract the values without replacing anything, I get empty columns.
Thank you for your help!
So your JSON isn't valid. JSON doesn't allow multiline text strings, and I expect that's the issue. Based on your query, I think you don't want a single key with a string value but the whole structure. The reason the quotes are backslashed is that they are inside a string. The JSON should look like this:
{
"mopub_json": {
"currency": "USD",
"country": "US",
"publisher_revenue": 0.01824
}
}
Then the SQL you have should work.
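If the inner value really does arrive as an escaped JSON string (as in your sample), another pattern I've seen used in Redshift is to extract the inner string first and parse it again with a second JSON_EXTRACT_PATH_TEXT call, since the function resolves the escape characters when it returns the value as text. A minimal sketch, assuming column_name holds the outer object and the unescaped inner string is itself valid JSON (no raw line breaks):
-- sketch: unwrap the escaped inner string, then extract from it
SELECT
    JSON_EXTRACT_PATH_TEXT(JSON_EXTRACT_PATH_TEXT(column_name, 'mopub_json'), 'publisher_revenue') AS revenue_mopub,
    JSON_EXTRACT_PATH_TEXT(JSON_EXTRACT_PATH_TEXT(column_name, 'mopub_json'), 'currency') AS currency_mopub
FROM table_name;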
I'm working with Presto SQL in Athena, and in a table I have a column named "data.input.additional_risk_data.basket" that contains JSON like this:
[
{
"data.input.additional_risk_data.basket.val.brand":null,
"data.input.additional_risk_data.basket.val.category":null,
"data.input.additional_risk_data.basket.val.item_reference":"26484651",
"data.input.additional_risk_data.basket.val.name":"Nike Force 1",
"data.input.additional_risk_data.basket.val.product_name":null,
"data.input.additional_risk_data.basket.val.published_date":null,
"data.input.additional_risk_data.basket.val.quantity":"1",
"data.input.additional_risk_data.basket.val.size":null,
"data.input.additional_risk_data.basket.val.subCategory":null,
"data.input.additional_risk_data.basket.val.unit_price":769.0,
"data.input.additional_risk_data.basket.val.upc":null,
"data.input.additional_risk_data.basket.val.url":null
}
]
I need to extract some of the data there, for example data.input.additional_risk_data.basket.val.item_reference. I'm not used to working with JSON, but I tried a few things:
json_extract("data.input.additional_risk_data.basket", '$.data.input.additional_risk_data.basket.val.item_reference')
json_extract_scalar("data.input.additional_risk_data.basket", '$.data.input.additional_risk_data.basket.val.item_reference')
They all returned null. I'm wondering what the correct way is to get the values from that JSON.
Thank you!
There are multiple "problems" with your data and your JSON path selector. The keys are not conventional (and I have not found a way to tell Athena to escape them), and your JSON is actually an array of JSON objects. What you can do is cast the data to an array and process it. For example:
-- sample data
WITH dataset (json_val) AS (
VALUES (json '[
{
"data.input.additional_risk_data.basket.val.brand":null,
"data.input.additional_risk_data.basket.val.category":null,
"data.input.additional_risk_data.basket.val.item_reference":"26484651",
"data.input.additional_risk_data.basket.val.name":"Nike Force 1",
"data.input.additional_risk_data.basket.val.product_name":null,
"data.input.additional_risk_data.basket.val.published_date":null,
"data.input.additional_risk_data.basket.val.quantity":"1",
"data.input.additional_risk_data.basket.val.size":null,
"data.input.additional_risk_data.basket.val.subCategory":null,
"data.input.additional_risk_data.basket.val.unit_price":769.0,
"data.input.additional_risk_data.basket.val.upc":null,
"data.input.additional_risk_data.basket.val.url":null
}
]')
)
--query
select arr[1]['data.input.additional_risk_data.basket.val.item_reference'] item_reference -- or use unnest if more than one element is expected in the array (see the sketch after the output)
from(
select cast(json_val as array(map(varchar, json))) arr
from dataset
)
Output:
item_reference
"26484651"
A proprietary third-party application stores JSON strings in its database like this one:
{"state":"complete","timestamp":1614776473000}
I need the timestamp and found out that DB2 offers JSON functions. Since it's stored as a string in the PROF_VALUE column, I guess that converting with SYSTOOLS.JSON2BSON is required before I can use JSON_VAL to fetch the timestamp:
SELECT SYSTOOLS.JSON_VAL(SYSTOOLS.JSON2BSON(PROF_VALUE), "timestamp", "f")
FROM EMPINST.PROFILE_EXTENSIONS ext
WHERE PROF_PROPERTY_ID = 'touchpointState'
This causes an error that timestamp is invalid in the used context (SQLCODE=-206, SQLSTATE=42703, DRIVER=4.26.14). The same error is thrown when I remove the JSON2BSON call, like this:
SELECT SYSTOOLS.JSON_VAL(PROF_VALUE, "timestamp", "f")
These also fail with the same error (for different data types):
SELECT SYSTOOLS.JSON_VAL(SYSTOOLS.JSON2BSON(PROF_VALUE), "state", "s:1000")
SELECT SYSTOOLS.JSON_VAL(PROF_VALUE, "state", "s:1000")
I don't understand this error. My syntax matches the documented JSON_VAL(json-value, search-string, result-type) and is the same as in the examples, where they show how to fetch the name field of an object.
I also played around a bit with JSON_TABLE to use raw input data for testing (instead of the database data), but it doesn't seem suitable for that.
SELECT *
FROM TABLE(SYSTOOLS.JSON_TABLE( SYSTOOLS.JSON2BSON('{"state":"complete","timestamp":1614776473000}'), 'state','s:32')) DATA
This gave me a table with one row: Type = 2 and Value = complete.
I had two problems in my query. First, it seems that double quotes " are for object references (identifiers); I wasn't aware that there is any difference, because in most databases I have used so far, single quotes ' and double quotes " are interchangeable. The second problem is that JSON_VAL needs to be called without the SYSTOOLS prefix, while the prefix is still needed on SYSTOOLS.JSON2BSON(PROF_VALUE).
With those changes, the following query worked:
SELECT JSON_VAL(SYSTOOLS.JSON2BSON(PROF_VALUE), 'timestamp', 'f')
FROM EMPINST.PROFILE_EXTENSIONS ext
WHERE PROF_PROPERTY_ID = 'touchpointState'
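By the same logic, the state example from the question should presumably work once the quoting and the schema prefix are corrected, for example:
-- same two fixes applied to the "state" attempt from the question
SELECT JSON_VAL(SYSTOOLS.JSON2BSON(PROF_VALUE), 'state', 's:1000')
FROM EMPINST.PROFILE_EXTENSIONS ext
WHERE PROF_PROPERTY_ID = 'touchpointState'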
I got this error.
ERROR: column "name" does not exist LINE 6:
SELECT JS, location, location->>Name
^ SQL state: 42703 Character: 278
from this query
WITH JsonTable AS
(
SELECT
'[
{
"Id":1,
"Name":"A01",
"Address":"aaa",
"SearchVector":null,
"ParentLocationId":null
},
{
"Id":4,
"Name":"B-01",
"Address":"bbb",
"SearchVector":null,
"ParentLocationId":null
}
]'::jsonb AS JS
)
SELECT JS, location, location->>Name
FROM JsonTable, jsonb_array_elements(JS) x (location)
How can I select the JSON value?
You are missing quotes around the name of the JSON attribute that you want to select. Just like object keys must always be quoted when the JSON object is declared, they need to be quoted when they are accessed.
See the Postgres documentation for the JSON datatype and JSON Functions and Operators.
You would need to change this:
SELECT JS, location, location->>Name
To:
SELECT JS, location, location->>'Name'
Demo on DB Fiddle.
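For reference, here is the complete query from the question with that single change applied (sample data condensed to one object per line):
-- corrected query: the only change is the quoted 'Name' key
WITH JsonTable AS (
  SELECT '[
    {"Id":1, "Name":"A01",  "Address":"aaa", "SearchVector":null, "ParentLocationId":null},
    {"Id":4, "Name":"B-01", "Address":"bbb", "SearchVector":null, "ParentLocationId":null}
  ]'::jsonb AS JS
)
SELECT JS, location, location->>'Name' AS name
FROM JsonTable, jsonb_array_elements(JS) x (location);
-- the last column now returns A01 and B-01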
In SQL Server 2017, I'd like to "SELECT" a JSON object embedded within another as a string so we can store/process them later.
eg JSON:
[
{"key1":"value1",
"level2_Obj":{"key2":"value12"}
},
{"key1":"value2",
"level2_Obj":{"key22":"value22"}
},
]
From the above JSON, I'd like to SELECT the whole level2_Obj JSON object; see below for the result I'd like the "selection" to produce.
value1 |{"key2" :"value12"}
value2 |{"key22":"value22"}
I tried below with no luck:
SELECT * FROM
OPENJSON(@json,'$."data1"')
WITH(
[key1] nvarchar(50),
[embedded_json] nvarchar(max) '$."level2Obj"'
) AS DAP
Can someone please help with how to select the contents of the 2nd-level JSON object as a string?
The idea is to write 1st-level JSON properties into individual columns and the rest of the JSON levels into a single column of type nvarchar(max) (i.e. the whole sub-level JSON object goes into a single column as a string for further processing in later stages).
Good day,
Firstly, your JSON text is not properly formatted: there is an extra comma after the last object in the array. I will remove this extra comma for the sake of the answer, but if this is the format you actually have, then the first step will be to clean up the text and make sure that it is well formed.
Please check if this solve your needs:
declare @json nvarchar(MAX) = '
[
{
"key1":"value1",
"level2_Obj":{"key2":"value12"}
}
,
{
"key1":"value2",
"level2_Obj":{"key22":"value22"}
}
]
'
SELECT JSON_VALUE (t1.[value], '$."key1"'), JSON_QUERY (t1.[value], '$."level2_Obj"')
FROM OPENJSON(@json,'$') t1
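If you prefer the WITH clause from your original attempt, the nested object has to be declared AS JSON (with type nvarchar(max)), otherwise OPENJSON returns NULL for it in the default lax mode. A sketch against the same @json variable, using the corrected key name level2_Obj and the root path '$':
-- WITH-clause variant: AS JSON returns the sub-object as a JSON string
SELECT key1, embedded_json
FROM OPENJSON(@json, '$')
WITH (
    [key1]          nvarchar(50)  '$.key1',
    [embedded_json] nvarchar(max) '$.level2_Obj' AS JSON
) AS DAP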
I have a JSON string in a column like the one below.
Oracle version: 12c
{
[
{
"Employee_id": 1
,"Salary":3206.93
}
]
}
How do I remove the outer curly braces and the square brackets?
The result should look like this:
{
"Employee_id": 1
,"Salary":3206.93
}
I tried using a regex, like SELECT regexp_substr('"abc{[{def}]}ghi"', '\[(.+)\]') match FROM dual;
but it didn't work.
select replace(replace(myColumn, '{[{', '{'), '}]}', '}')
from myTable
Note, this method will ONLY work if this column is always going to contain a single element of a JSON array, and if the JSON object doesn't contain any other JSON objects... otherwise it could break. Use at your own risk :)
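If the whitespace or line breaks around the braces can vary, a regexp-based variant of the original attempt may be more forgiving; a sketch, assuming the column always wraps exactly one object in {[ ... ]} and using the 'n' flag so the dot also matches line breaks:
-- rebuild the inner object from the first capture group
-- (the 6th subexpression argument needs Oracle 11g or later)
select '{' || regexp_substr(myColumn, '\[\s*\{(.*)\}\s*\]', 1, 1, 'n', 1) || '}' as cleaned_json
from myTable;
Like the replace approach, this assumes the array holds exactly one object with no nested objects inside it.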