I have a VARCHAR field like this:
[
{
"config": 0,
"type": "0
},
{
"config": x,
"type": "1"
},
{
"config": "",
"type": ""
},
{
"config": [
{
"address": {},
"category": "",
"merchant": {
"data": [
10,12,23
],
"file": 0
},
"range_id": 1,
"shop_id_info": null
}
],
"type": "new"
}
]
And I need to extract the merchant data from this. The desired output is:
10
12
23
Please advise. I keep getting "Cannot cast VARCHAR to array/unnest type VARCHAR".
You can try using the json path $.*.config.*.merchant.data.*, but if that does not work for you (as it did not for me on the Athena version, where arrays in json paths are not well supported), you can cast your JSON to ARRAY(JSON) and do some manipulations from there (your JSON needed a small fix first):
Test data:
WITH dataset AS (
SELECT * FROM (VALUES
(JSON '[
{
"config": {},
"type": "0"
},
{
"config": "x",
"type": "1"
},
{
"config": "",
"type": ""
},
{
"config": [
{
"address": {},
"category": "",
"merchant": {
"data": [
10,12,23
],
"file": 0
},
"range_id": 1,
"shop_id_info": null
}
],
"type": "new"
}
]')
) AS t (json_value))
And the query:
SELECT flatten(
transform(
flatten(
transform(
CAST(json_value AS ARRAY(JSON))
, json_object -> try(CAST(json_extract(json_object, '$.config') AS ARRAY(JSON))))),
json_config -> CAST(json_extract(json_config, '$.merchant.data') as ARRAY(INTEGER))))
FROM dataset
Which will give you an array of numbers:
_col0
[10, 12, 23]
And from there you can continue with unnest and so on if needed.
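For example, to get the 10 / 12 / 23 rows from the question, a minimal sketch (reusing the test-data CTE above; merchant_data, d, t and n are names introduced just for this sketch) could be:
-- Unnest the aggregated merchant numbers into one row per value.
SELECT n
FROM (
    SELECT flatten(
             transform(
               flatten(
                 transform(
                   CAST(json_value AS ARRAY(JSON)),
                   json_object -> try(CAST(json_extract(json_object, '$.config') AS ARRAY(JSON))))),
               json_config -> CAST(json_extract(json_config, '$.merchant.data') AS ARRAY(INTEGER)))) AS merchant_data
    FROM dataset
) AS d
CROSS JOIN UNNEST(d.merchant_data) AS t (n)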
Related
I have a table called raw_data with three columns: ID, timestamp, and payload. The payload column is a JSON type with values such as:
{
"data": {
"author_id": "1461871206425108480",
"created_at": "2022-08-17T23:19:14.000Z",
"geo": {
"coordinates": {
"type": "Point",
"coordinates": [
-0.1094,
51.5141
]
},
"place_id": "3eb2c704fe8a50cb"
},
"id": "1560043605762392066",
"text": " ALWAYS # London, United Kingdom"
},
"matching_rules": [
{
"id": "1560042248007458817",
"tag": "london-paris"
}
]
}
From this I want to select rows where the coordinates are available, such as [-0.1094, 51.5141] in this case.
SELECT *
FROM raw_data, json_each(payload)
WHERE json_extract(json_each.value, '$.data.geo.') IS NOT NULL
LIMIT 20;
Nothing was returned.
EDIT
Not all JSON objects have the coordinates node. For example, this value:
{
"data": {
"author_id": "1556031969062010881",
"created_at": "2022-08-18T01:42:21.000Z",
"geo": {
"place_id": "006c6743642cb09c"
},
"id": "1560079621017796609",
"text": "Dear Desperate sister say husband no dey oo."
},
"matching_rules": [
{
"id": "1560077018183630848",
"tag": "kaduna-kano-katsina-dutse-zaria"
}
]
}
The correct path is '$.data.geo.coordinates.coordinates' and there is no need for json_each():
SELECT *
FROM raw_data
WHERE json_extract(payload, '$.data.geo.coordinates.coordinates') IS NOT NULL;
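If you only need the coordinate pair rather than the whole row, a small extension of the same idea (a sketch, assuming this is SQLite's JSON1 extension, as the json_each() attempt suggests) would be:
-- Return just the coordinate pair for rows where it exists.
SELECT ID,
       json_extract(payload, '$.data.geo.coordinates.coordinates') AS coordinates
FROM raw_data
WHERE json_extract(payload, '$.data.geo.coordinates.coordinates') IS NOT NULL;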
I need to get the email address from this 'facets' table I created from my firehose logs (JSON).
Now, I am using Athena to get particular information.
I need to get the email addresses from this:
This is my output from 'facets' when I run:
SELECT * FROM "sampledb"."facets" limit 10
{email_channel={mail_event={mail={message_id=oadfosadu6237864237615, message_send_timestamp=1622696691764, from_address=abcd#jk.com, destination=[abcd#jk.com], headers_truncated=false, headers=[{name=From, value=abcd#jk.com}, {name=To, value=abcd#jk.com}, {name=MIME-Version, value=1.0}], common_headers={from=ghjk#li.com, to=[abcd#jk.com]}}, send={}, rendering_failure=null}}}
Assuming you have one column which stores JSON in the provided format, you can use json_extract with the needed paths (and maybe some casts):
with dataset1 as (
select * from (values(JSON
'{
"email_channel": {
"mail_event": {
"mail": {
"message_id": "oadfosadu6237864237615",
"message_send_timestamp": 1622696691764,
"from_address": "abcd#jk.com",
"destination": [
"abcd#jk.com"
],
"headers_truncated": false,
"headers": [
{
"name": "From",
"value": "abcd#jk.com"
},
{
"name": "To",
"value": "abcd#jk.com"
},
{
"name": "MIME-Version",
"value": "1.0"
}
],
"common_headers": {
"from": "ghjk#li.com",
"to": [
"abcd#jk.com"
]
}
},
"send": {},
"rendering_failure": null
}
}
}')) as facets(facet))
select
json_extract(facet, '$.email_channel.mail_event.mail.from_address') mail_from,
CAST(json_extract(facet, '$.email_channel.mail_event.mail.destination') AS ARRAY(VARCHAR)) destination
from dataset1
And output:
mail_from       destination
"abcd#jk.com"   {abcd#jk.com}
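If you prefer mail_from as a plain string rather than a JSON value with the surrounding quotes, json_extract_scalar returns a VARCHAR directly; a minimal sketch reusing the dataset1 CTE above:
-- json_extract_scalar returns the value as VARCHAR, without JSON quoting.
select
    json_extract_scalar(facet, '$.email_channel.mail_event.mail.from_address') mail_from
from dataset1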
Very similar to this post, but I struggle to adapt their solution.
My table: public.challenge, with a JSONB column lines.
My initial JSON in lines:
[
{
"line": 1,
"blocs": [
{
"size": 100,
"name": "abc"
},
{
"size": 100,
"name": "def"
},
{
"size": 100,
"name": "ghi"
}
]
},
{
"line": 2,
"blocs": [
{
"size": 100,
"name": "xyz"
}
]
}
]
Desired update:
[
{
"line": 1,
"blocs": [
{
"size": 100,
"name": "abc",
"type": "regular"
},
{
"size": 100,
"name": "def",
"type": "regular"
},
{
"size": 100,
"name": "ghi",
"type": "regular"
}
]
},
{
"line": 2,
"blocs": [
{
"size": 100,
"name": "xyz",
"type": "regular"
}
]
}
]
So basically I need to add the type key+value in every object of blocs, for each element of the root array.
My unsuccessful attempt looks like this:
UPDATE public.challenge SET lines = jsonb_set(lines, '{}', (
SELECT jsonb_set(line, '{blocs}', (
SELECT jsonb_agg( bloc || '{"type":"regular"}' )
FROM jsonb_array_elements(line->'{blocs}') bloc
))
FROM jsonb_array_elements(lines) line
))
;
(Currently it sets the whole column to null, maybe because of the jsonb_set(lines, '{}', ...) call while my JSON begins as an array?)
Thanks!
Use jsonb_array_elements to unnest all the array elements, add the required JSON to each, then use jsonb_agg to aggregate them again:
with cte as
(select id,
jsonb_agg(jsonb_set(val1,
'{blocs}',
(select jsonb_agg(arr2 || '{"type": "regular"}')
from jsonb_array_elements(arr1.val1 -> 'blocs') arr2)))
from challenge,
jsonb_array_elements(lines) arr1(val1)
group by 1)
update challenge
set lines = (cte.jsonb_agg)
from cte
where challenge.id = cte.id
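The building block doing the actual work here is the jsonb || merge operator; as a standalone illustration with literal values (not your table):
-- "||" merges two jsonb objects, which is what adds the type key to each bloc.
select '{"size": 100, "name": "abc"}'::jsonb || '{"type": "regular"}'::jsonb;
-- => {"name": "abc", "size": 100, "type": "regular"}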
I am still learning Snowflake, any help would be really appreciated.
I have a column, let's call it 'result'.
{
"catalog": [
{
"img_href": "https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179361.jpg",
"name": "ADITI HAND BLOCKED PRINT",
"price": 16
},
{
"img_href": "https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179330.jpg",
"name": "TORBAY HAND BLOCKED PRINT",
"price": 17
},
{
"img_href": "https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179362.jpg",
"name": "ADITI HAND BLOCKED PRINT",
"price": 18
}
],
"datetime": 161878993658
"catalog_id": 1
}
I would like to flatten it and reconstruct it as below:
[
{
"datetime": 161878993658,
"url": "https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179361.jpg"
},
{
"datetime": 161878993658,
"url": "https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179330.jpg"
},
{
"datetime": 161878993658,
"url": "https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179362.jpg"
}
]
The following will do this. You won't need the CTE, so delete it and replace uses of tbl with the name of your table and uses of json with your variant column.
/*delete this line*/ with tbl as (select parse_json($1) json from values('{"catalog":[{"img_href":"https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179361.jpg","name":"ADITI HAND BLOCKED PRINT","price":16},{"img_href":"https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179330.jpg","name":"TORBAY HAND BLOCKED PRINT","price":17},{"img_href":"https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179362.jpg","name":"ADITI HAND BLOCKED PRINT","price":18}],"datetime":161878993658,"catalog_id":1}'))
select array_agg(new_col) reconstructed
from (
/* replace json and tbl */ select object_construct('datetime', json:datetime, 'url', obj.value:img_href) new_col, json:catalog_id catalog_id
from tbl, lateral flatten(json:catalog) obj
) group by catalog_id;
It outputs
[
{
"datetime": 161878993658,
"url": "https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179361.jpg"
},
{
"datetime": 161878993658,
"url": "https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179330.jpg"
},
{
"datetime": 161878993658,
"url": "https://schumacher-webassets.s3.amazonaws.com/Web%20Catalog-600/179362.jpg"
}
]
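Adapted to the question's setup, the same query could look like this (a sketch: my_table is a placeholder table name, result is the VARIANT column mentioned in the question):
-- Same query against a hypothetical table my_table with VARIANT column result.
select array_agg(new_col) reconstructed
from (
    select object_construct('datetime', result:datetime, 'url', obj.value:img_href) new_col,
           result:catalog_id catalog_id
    from my_table, lateral flatten(result:catalog) obj
) group by catalog_id;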
I have a JSON string containing a list of data, and I want to write a query that gets a single value based on a condition, but it returns a list of values. Please help me write a valid query for an Oracle database.
My JSON string looks like:
[
{
"Key": [
{
"obj": {
"xyz":"cdf"
},
"Info": [
{
"Code": "",
"tax": "",
"rate": "",
"taxAmount": {
"formattedAmount": "",
"Amount": ""
}
},
{
"Code": "qwer",
"tax": "ggs",
"rate": "0",
"taxAmount": {
"formattedAmount": "10.00",
"Amount": "10.00"
},
"key": "qwer"
},
{
"Code": "poiu",
"tax": "ggs",
"rate": "0",
"taxAmount": {
"formattedAmount": "20.00",
"Amount": "20.00"
},
"key": "poiu"
},
{
"coverageCode": "zxcv",
"tax": "ggs",
"rate": "0",
"taxAmount": {
"formattedAmount": "30.00",
"Amount": "30.00"
},
"key": "zxcv"
}
]
},
{
"status": "S"
}
]
}
]
I want to get the formattedAmount value "10.00". I have written a query like:
SELECT json_query(details, '$.Info.taxAmount.formattedAmount' WITH WRAPPER)
FROM details_table where json_query(details, '$.Info.Code' WITH WRAPPER) = 'qwer';
It returns no value. Without the WHERE clause I get all the formattedAmount values in a list: [,"10.00","20.00","30.00"]
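One possible direction (a sketch, not a verified answer; it assumes Oracle 12c or later where JSON_TABLE is available, and that the column is details on details_table) is to project each Info entry as a relational row and filter on Code:
-- Expand every Info entry into a row, then filter on Code.
SELECT jt.formatted_amount
FROM   details_table d,
       JSON_TABLE(
         d.details, '$[*].Key[*].Info[*]'
         COLUMNS (
           code             VARCHAR2(20) PATH '$.Code',
           formatted_amount VARCHAR2(20) PATH '$.taxAmount.formattedAmount'
         )
       ) jt
WHERE  jt.code = 'qwer';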