How to extract a JSON value in Hive - sql

I have a JSON string that is stored in a single cell in the DB, corresponding to a parent ID:
{"profileState":"ACTIVE","isDefault":"true","joinedOn":"2019-03-24T15:19:52.639Z","profileType":"ADULT","id":"abc","signupDeviceId":"1"}||{"profileState":"ACTIVE","isDefault":"true","joinedOn":"2021-09-05T07:47:00.245Z","imageId":"19","profileType":"KIDS","name":"Kids","id":"efg","signupDeviceId":"1"}
Now I want to extract the id values from the above JSON. Let's say we have data like
Parent ID | Profile JSON
1 | {profile_json} (see above string)
I want the output to look like this
Parent ID | ID
1 | abc
1 | efg
Now, I've tried a couple of iterations to solve this
First Approach:
select
    get_json_object(p.profile, '$$.id') as id,
    test.parent_id
from (
    select split(
               regexp_replace(
                   regexp_extract(profiles, '^\\[(.+)\\]$$', 1),
                   '\\}\\,\\{', '\\}\\|\\|\\{'),
               '\\|\\|') as profile_list,
           parent_id
    from source_table) test
lateral view explode(test.profile_list) p as profile
But this returns NULL values for the id column. Is there something I'm missing here?
Second Approach:
with profiles as (
    select regexp_replace(
               regexp_extract(profiles, '^\\[(.+)\\]$$', 1),
               '\\}\\,\\{', '\\}\\|\\|\\{') as profile_list,
           parent_id
    from source_table
)
SELECT
    get_json_object(t1.profile_list, '$.id')
FROM profiles t1
The second approach is only returning the first id (abc) as per the above JSON string.

I tried to replicate this in Apache Hive v4.
Data
+----------------------------------------------------+------------------+
| data | parent_id |
+----------------------------------------------------+------------------+
| {"profileState":"ACTIVE","isDefault":"true","joinedOn":"2019-03-24T15:19:52.639Z","profileType":"ADULT","id":"abc","signupDeviceId":"1"}||{"profileState":"ACTIVE","isDefault":"true","joinedOn":"2021-09-05T07:47:00.245Z","imageId":"19","profileType":"KIDS","name":"Kids","id":"efg","signupDeviceId":"1"} | 1.0 |
+----------------------------------------------------+------------------+
SQL
select pid, get_json_object(expl_jid, '$.id') as json_id
from (select parent_id as pid, split(data, '\\|\\|') as jid from tabl1) a
lateral view explode(jid) exp_tab as expl_jid;
+------+----------+
| pid | json_id |
+------+----------+
| 1.0 | abc |
| 1.0 | efg |
+------+----------+
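If more than one field per profile is needed, Hive's json_tuple UDTF in a second lateral view parses each JSON string only once; a sketch against the same table and column names as above, with profileType taken from the sample JSON:
select a.pid, j.json_id, j.profile_type
from (select parent_id as pid, split(data, '\\|\\|') as jid from tabl1) a
lateral view explode(jid) exp_tab as expl_jid
lateral view json_tuple(expl_jid, 'id', 'profileType') j as json_id, profile_type;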

Solved this. I was using an extra $ in the first approach:
select
    get_json_object(p.profile, '$.id') as id,
    test.parent_id
from (
    select split(
               regexp_replace(
                   regexp_extract(profiles, '^\\[(.+)\\]$$', 1),
                   '\\}\\,\\{', '\\}\\|\\|\\{'),
               '\\|\\|') as profile_list,
           parent_id
    from source_table) test
lateral view explode(test.profile_list) p as profile
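A quick way to see what the extra $ does (the NULL matches what the first approach reported):
select get_json_object('{"id":"abc"}', '$.id');   -- returns abc
select get_json_object('{"id":"abc"}', '$$.id');  -- returns NULL: not a path get_json_object can resolve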

Related

How to get a value inside of a JSON that is inside a column in a table in Oracle sql?

Suppose that I have a table named agents_timesheet that has a structure like this:
ID | name | health_check_record | date | clock_in | clock_out
---------------------------------------------------------------------------------------------------------
1 | AAA | {"mental":{"stress":"no", "depression":"no"}, | 6-Dec-2021 | 08:25:07 |
| | "physical":{"other_symptoms":"headache", "flu":"no"}} | | |
---------------------------------------------------------------------------------------------------------
2 | BBB | {"mental":{"stress":"no", "depression":"no"}, | 6-Dec-2021 | 08:26:12 |
| | "physical":{"other_symptoms":"no", "flu":"yes"}} | | |
---------------------------------------------------------------------------------------------------------
3 | CCC | {"mental":{"stress":"no", "depression":"severe"}, | 6-Dec-2021 | 08:27:12 |
| | "physical":{"other_symptoms":"cancer", "flu":"yes"}} | | |
Now I need to get all agents having flu on that day. As for getting the flu value from a single JSON in Oracle SQL, I can already do it with this SQL statement:
SELECT * FROM JSON_TABLE(
'{"mental":{"stress":"no", "depression":"no"}, "physical":{"fever":"no", "flu":"yes"}}', '$'
COLUMNS (fever VARCHAR(2) PATH '$.physical.flu')
);
As for getting the values from the health_check_record column, I can do that with a plain SELECT statement.
But how do I get the value of flu from the JSON in the health_check_record column of that table?
Additional question
Based on the table, how can I retrieve the full list of other_symptoms, so that it gives me this kind of output:
ID | name | other_symptoms
-------------------------------
1 | AAA | headache
2 | BBB | no
3 | CCC | cancer
You can use the JSON_EXISTS() function.
SELECT *
FROM agents_timesheet
WHERE JSON_EXISTS(health_check_record, '$.physical.flu == "yes"');
There is also the "plain old way" without JSON parsing, treating the column like a standard VARCHAR one. This will not work in 100% of cases, but if your data looks the way you described it might be sufficient.
SELECT *
FROM agents_timesheet
WHERE health_check_record LIKE '%"flu":"yes"%';
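For the additional question (listing other_symptoms per agent), a minimal sketch using JSON_VALUE, assuming Oracle 12.1+ and the table exactly as described:
SELECT id,
       name,
       JSON_VALUE(health_check_record, '$.physical.other_symptoms') AS other_symptoms
FROM agents_timesheet;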
How to get the values of flu in the JSON in the health_check_record of that table?
From Oracle 12, to get the values you can use JSON_TABLE with a correlated CROSS JOIN to the table:
SELECT a.id,
       a.name,
       j.*,
       a."DATE",
       a.clock_in,
       a.clock_out
FROM   agents_timesheet a
       CROSS JOIN JSON_TABLE(
         a.health_check_record,
         '$'
         COLUMNS (
           mental_stress     VARCHAR2(3) PATH '$.mental.stress',
           mental_depression VARCHAR2(3) PATH '$.mental.depression',
           physical_fever    VARCHAR2(3) PATH '$.physical.fever',
           physical_flu      VARCHAR2(3) PATH '$.physical.flu'
         )
       ) j
WHERE  physical_flu = 'yes';
You can use "dot notation" to access data from a JSON column. Like this:
select "DATE", id, name
from agents_timesheet t
where t.health_check_record.physical.flu = 'yes'
;
DATE ID NAME
----------- --- ----
06-DEC-2021 2 BBB
Note that this approach requires that you use an alias for the table name (so you can use it in accessing the JSON data).
For testing I used the data posted by MT0 on dbfiddle. I am not a big fan of double-quoted column names; use something else for "DATE", such as dt or date_.
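The same dot notation can also cover the additional question; note the assumption that the column has an IS JSON check constraint, which simple dot-notation access requires:
select t.id, t.name, t.health_check_record.physical.other_symptoms as other_symptoms
from agents_timesheet t;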

Count string occurances within a list column - Snowflake/SQL

I have a table with a column that contains a list of strings like below:
EXAMPLE:
STRING User_ID [...]
"[""null"",""personal"",""Other""]" 2122213 ....
"[""Other"",""to_dos_and_thing""]" 2132214 ....
"[""getting_things_done"",""TO_dos_and_thing"",""Work!!!!!""]" 4342323 ....
QUESTION:
I want to get a count of the number of times each unique string appears (the strings are separated within the STRING column by commas), but I only know how to do the following:
SELECT u.STRING, count(u.USERID) as cnt
FROM table u
group by u.STRING
order by cnt desc;
However, the above method doesn't work, as it only counts the number of user IDs that use a specific grouping of strings.
The ideal output using the example above would look like this:
DESIRED OUTPUT:
STRING COUNT_Instances
"null" 1223
"personal" 543
"Other" 324
"to_dos_and_thing" 221
"getting_things_done" 146
"Work!!!!!" 22
Based on your description, here is my sample table:
create table u (user_id number, string varchar);
insert into u values
(2122213, '"[""null"",""personal"",""Other""]"'),
(2132214, '"[""Other"",""to_dos_and_thing""]"'),
(2132215, '"[""getting_things_done"",""TO_dos_and_thing"",""Work!!!!!""]"' );
I used SPLIT_TO_TABLE to split each string into rows, and then REGEXP_SUBSTR to clean the data. Here's the query and output:
select REGEXP_SUBSTR( s.VALUE, '""(.*)""', 1, 1, 'i', 1 ) extracted, count(*) from u,
lateral SPLIT_TO_TABLE( string , ',' ) s
GROUP BY extracted
order by count(*) DESC;
+---------------------+----------+
| EXTRACTED | COUNT(*) |
+---------------------+----------+
| Other | 2 |
| null | 1 |
| personal | 1 |
| to_dos_and_thing | 1 |
| getting_things_done | 1 |
| TO_dos_and_thing | 1 |
| Work!!!!! | 1 |
+---------------------+----------+
SPLIT_TO_TABLE https://docs.snowflake.com/en/sql-reference/functions/split_to_table.html
REGEXP_SUBSTR https://docs.snowflake.com/en/sql-reference/functions/regexp_substr.html
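One caveat: the sample rows contain both to_dos_and_thing and TO_dos_and_thing, which the query above counts as two different strings. If those should be folded together (an assumption about the desired output), lower-casing the extracted value before grouping is enough:
select lower(REGEXP_SUBSTR( s.VALUE, '""(.*)""', 1, 1, 'i', 1 )) extracted, count(*) as count_instances
from u,
lateral SPLIT_TO_TABLE( string , ',' ) s
GROUP BY extracted
order by count_instances DESC;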

Hive: merge or tag multiple rows based on neighboring rows

I have the following table and want to merge multiple rows based on neighboring rows.
INPUT (the node pairs, as used in the answer below)
node_1  | node_2
abc     | abcd
abcd    | abcde
abcde   | abcdef
bcd     | bcde
bcde    | bcdef
cdef    | cdefg
def     | defg
EXPECTED OUTPUT (one array of connected values per group)
["abc","abcd","abcde","abcdef"]
["bcd","bcde","bcdef"]
["cdef","cdefg"]
["def","defg"]
The logic is that since "abc" is connected to "abcd" in the first row, and "abcd" is connected to "abcde" in the second row, and so on, "abc", "abcd", "abcde", and "abcdef" are all connected and should be put into one array. The same applies to the remaining rows. The number of connected neighboring rows is arbitrary.
The question is how to do that using a Hive script without any UDF. Do I have to use Spark for this type of operation? Thanks very much.
One idea I had is to tag the rows first. How can I do that using a Hive script only?
This is an example of a CONNECT BY query, which is not supported in Hive or Spark, unlike DB2 or Oracle, et al.
You can simulate such a query with Spark Scala, but it is far from handy. If you tag the rows first, the question becomes less relevant, in my opinion.
Here is a workaround using a Hive script to build the intermediate tables.
drop table if exists step1;
create table step1 STORED as orc as
with src as
(
select split(u.tmp,",")[0] as node_1, split(u.tmp,",")[1] as node_2
from
(select stack (7,
"abc,abcd",
"abcd,abcde",
"abcde,abcdef",
"bcd,bcde",
"bcde,bcdef",
"cdef,cdefg",
"def,defg"
) as tmp
) u
)
select node_1, node_2, if(node_2 = lead(node_1, 1) over (order by node_1), 1, 0) as tag, row_number() OVER (order by node_1) as row_num
from src;
drop table if exists step2;
create table step2 STORED as orc as
SELECT tag, row_number() over (ORDER BY tag) as row_num
FROM (
SELECT cast(v.tag as int) as tag
FROM (
SELECT
split(regexp_replace(repeat(concat(cast(key as string), ","), end_idx-start_idx), ",$",""), ",") as tags --repeat the row number by the number of rows
FROM (
SELECT COALESCE(lag(row_num, 1) over(ORDER BY row_num), 0) as start_idx, row_num as end_idx, row_number() over (ORDER BY row_num) as key
FROM step1 where tag=0
) a
) b
LATERAL VIEW explode(tags) v as tag
) c ;
drop table if exists step3;
create table step3 STORED as orc as
SELECT
a.node_1, a.node_2, b.tag
FROM step1 a
JOIN step2 b
ON a.row_num=b.row_num;
The final table looks like this:
select * from step3;
+---------------+---------------+------------+
| step3.node_1 | step3.node_2 | step3.tag |
+---------------+---------------+------------+
| abc | abcd | 1 |
| abcd | abcde | 1 |
| abcde | abcdef | 1 |
| bcd | bcde | 2 |
| bcde | bcdef | 2 |
| cdef | cdefg | 3 |
| def | defg | 4 |
+---------------+---------------+------------+
The third column can be used to collect node pairs.
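To go from the tag back to the arrays described in the expected output, the node pairs can be unpivoted and collected per tag; a sketch on top of step3 (collect_set also removes the duplicated middle nodes):
select tag, collect_set(node) as connected_nodes
from (
  select tag, node_1 as node from step3
  union all
  select tag, node_2 as node from step3
) t
group by tag;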

Return an array(Repeated Field) from a query in BigQuery

I am new to BigQuery and SQL. I have a table with the following details:
Schema
ID : String : Nullable
BCats : String : Repeated
ID can be repeated
Preview
ID BCats
|-----------------------|
| ABCD | BCat25 |
| | BCat24 |
| | BCat23 |
|_______________________|
| PQRS | BCat8 |
| | BCat9 |
|_______________________|
| ABCD | BCat23 |
| | BCat25 |
| | BCat24 |
|_______________________|
| MNOP | BCat12 |
| | BCat13 |
|_______________________|
| PQRS | BCat8 |
| | BCat9 |
|-----------------------|
I am trying to group the table based on ID using the following query
Query
SELECT BCats, ID
FROM (
  SELECT GROUP_CONCAT(BCats) AS BCats, ID
  FROM (
    SELECT UNIQUE(BCats) AS BCats, ID
    FROM my_table
    GROUP BY ID
  )
  GROUP BY ID
)
Output from the query in JSON format:
{"BCats":"BCat25,BCat24,BCat23","ID":"ABCD"}
{"BCats":"BCat8,BCat9","ID":"PQRS"}
{"BCats":"BCat12,BCat13","ID":"MNOP"}
My question is: how can I output an array from the query, like this?
Expecting Output
{"BCats" : ["BCat25","BCat24","BCat23"],"ID":"ABCD"}
Currently I am getting BCats as a string.
I need to output this data into a new table with BCats as a REPEATED field.
Please help.
Try the query below. Note: in the Web UI you not only need to set a Destination Table, but also check the Allow Large Results checkbox and uncheck the Flatten Results checkbox.
SELECT NEST(UNIQUE(BCats)) AS BCats, ID
FROM my_table
GROUP BY ID
You should instead use standard SQL. If you are familiar with legacy SQL, there is a migration guide that talks about the differences between the two dialects. After enabling standard SQL (uncheck "Use Legacy SQL" under "Show Options" in the UI) you can run e.g.:
WITH my_table AS (
  SELECT 'ABCD' AS ID, ['BCat25', 'BCat24', 'BCat23'] AS BCats UNION ALL
  SELECT 'PQRS', ['BCat8', 'BCat9'] UNION ALL
  SELECT 'ABCD', ['BCat23', 'BCat25', 'BCat24'] UNION ALL
  SELECT 'MNOP', ['BCat12', 'BCat13'] UNION ALL
  SELECT 'PQRS', ['BCat8', 'BCat9']
)
SELECT
  ID,
  ARRAY_AGG(DISTINCT BCat) AS BCats
FROM my_table, UNNEST(BCats) AS BCat
GROUP BY ID;
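To land this result in a new table with BCats as a REPEATED field, the same standard SQL query can be wrapped in CREATE TABLE AS; a sketch with placeholder dataset and table names:
CREATE OR REPLACE TABLE my_dataset.my_table_grouped AS
SELECT
  ID,
  ARRAY_AGG(DISTINCT BCat) AS BCats
FROM my_dataset.my_table, UNNEST(BCats) AS BCat
GROUP BY ID;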

Search an SQL table that already contains wildcards?

I have a table that contains patterns for phone numbers, where x can match any digit.
+----+--------------+----------------------+
| ID | phone_number | phone_number_type_id |
+----+--------------+----------------------+
| 1 | 1234x000x | 1 |
| 2 | 87654311100x | 4 |
| 3 | x111x222x | 6 |
+----+--------------+----------------------+
Now, I might have 511132228, which will match row 3, and it should return its type. So it's kind of like SQL wildcards, but the other way around, and I'm confused about how to achieve this.
Give this a go:
select * from my_table
where '511132228' like replace(phone_number, 'x', '_')
select *
from yourtable
where '511132228' like (replace(phone_number, 'x','_'))
Try the query below:
SELECT ID,phone_number,phone_number_type_id
FROM TableName
WHERE '511132228' LIKE REPLACE(phone_number,'x','_');
Query with test data:
With TableName as
(
SELECT 3 ID, 'x111x222x' phone_number, 6 phone_number_type_id from dual
)
SELECT 'true' value_available
FROM TableName
WHERE '511132228' LIKE REPLACE(phone_number,'x','_');
The above query will return a row if a matching pattern exists, and no rows if there is no match.
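Since the test data uses dual, an Oracle-flavoured alternative is REGEXP_LIKE; just a sketch, but it avoids surprises if a pattern ever contains a literal _ or %, which LIKE would treat as wildcards:
SELECT ID, phone_number, phone_number_type_id
FROM TableName
WHERE REGEXP_LIKE('511132228', '^' || REPLACE(phone_number, 'x', '[0-9]') || '$');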