I'm using psql and I have a table that looks like this:
id | dashboard_settings
-----------------------
1 | {"query": {"year_end": 2018, "year_start": 2015, "category": ["123"]}}
There are numerous rows, but for every row the "category" value is an array with one integer (in string format).
Is there a way I can 'unpackage' the category object? So that it just has 123 as an integer?
I've tried this but had no success:
SELECT jsonb_extract_path_text(dashboard_settings->'query', 'category') from table
This returns:
jsonb_extract_path_text | ["123"]
when I want:
jsonb_extract_path_text | 123
You need to use the array element access operator, which is simply ->> followed by the array index:
select jsonb_extract_path(dashboard_settings->'query', 'category') ->> 0
from the_table
alternatively:
select dashboard_settings -> 'query' -> 'category' ->> 0
from the_table
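Outside the database, the same extraction can be sketched in Python, using the sample value from the question (a minimal illustration, not the actual SQL answer):

```python
import json

# A row's dashboard_settings value as stored in the jsonb column
dashboard_settings = '{"query": {"year_end": 2018, "year_start": 2015, "category": ["123"]}}'

settings = json.loads(dashboard_settings)
# Mirror dashboard_settings -> 'query' -> 'category' ->> 0, then cast to int
category = int(settings["query"]["category"][0])
print(category)  # 123
```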
Consider:
select dashboard_settings->'query'->'category'->>0 c from mytable
Demo on DB Fiddle:
| c |
| :-- |
| 123 |
Related
I struggle with unnesting an array in this format (btw, newbie alert!). Use case: I want to count all v=1234 in a table where custom_fields = {f=[{v=1234}, {v=[]}]}
I tried to use:
select custom_fields[safe_offset(1)]
from database
limit 10
it gives me the column, but still everything is nested.
Then I tried this:
SELECT tickets.id, cf
FROM db.tickets
CROSS JOIN UNNEST(tickets.custom_fields) AS cf
limit 10
Same behaviour as the first query.
Then I tried [][] indexing:
SELECT
custom_fields[1][1]
FROM db.tickets
limit 10
Array element access with array[position] is not supported. Use
array[OFFSET(zero_based_offset)] or array[ORDINAL(one_based_ordinal)]
But yeah, that's the query at the beginning of this message.
I am pretty lost. Does anyone have an idea?
Not sure I fully understood your question, but I replicated your example, adding an id column and a json_col column containing the JSON. The following statement extracts each v value into a separate row while keeping it related to the id:
with my_tbl as (
select 1 id, '{"f":[{"v":1234}, {"v":2345}, {"v":7777}]}'::jsonb as json_col UNION ALL
select 2 id, '{"f":[{"v":6789}, {"v":3333}]}'::jsonb as json_col
)
select * from my_tbl, jsonb_to_recordset(jsonb_extract_path(json_col, 'f')) as x(v int);
The SQL uses JSONB_EXTRACT_PATH to extract the f part, and JSONB_TO_RECORDSET to create a row for each v value. More info on JSON functions is in the documentation.
id | json_col | v
----+------------------------------------------------+------
1 | {"f": [{"v": 1234}, {"v": 2345}, {"v": 7777}]} | 1234
1 | {"f": [{"v": 1234}, {"v": 2345}, {"v": 7777}]} | 2345
1 | {"f": [{"v": 1234}, {"v": 2345}, {"v": 7777}]} | 7777
2 | {"f": [{"v": 6789}, {"v": 3333}]} | 6789
2 | {"f": [{"v": 6789}, {"v": 3333}]} | 3333
(5 rows)
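The unnesting that jsonb_to_recordset performs can be sketched in Python with the same sample data (an illustration of the logic, not a database query):

```python
import json

# The two sample rows from the answer's CTE: (id, json_col)
rows = [
    (1, '{"f":[{"v":1234}, {"v":2345}, {"v":7777}]}'),
    (2, '{"f":[{"v":6789}, {"v":3333}]}'),
]

# Mimic jsonb_to_recordset(jsonb_extract_path(json_col, 'f')):
# one output row per element of the "f" array, keeping the id
result = [
    (row_id, item["v"])
    for row_id, json_col in rows
    for item in json.loads(json_col)["f"]
]
print(result)  # [(1, 1234), (1, 2345), (1, 7777), (2, 6789), (2, 3333)]
```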
I have a table with a column that contains a list of strings like below:
EXAMPLE:
STRING User_ID [...]
"[""null"",""personal"",""Other""]" 2122213 ....
"[""Other"",""to_dos_and_thing""]" 2132214 ....
"[""getting_things_done"",""TO_dos_and_thing"",""Work!!!!!""]" 4342323 ....
QUESTION:
I want to get a count of the number of times each unique string appears (the strings are separated by commas within the STRING column), but I only know how to do the following:
SELECT u.STRING, count(u.USERID) as cnt
FROM table u
group by u.STRING
order by cnt desc;
However, the above method doesn't work, as it only counts the number of user ids that use a specific grouping of strings.
The ideal output, using the example above, would look like this:
DESIRED OUTPUT:
STRING COUNT_Instances
"null" 1223
"personal" 543
"Other" 324
"to_dos_and_thing" 221
"getting_things_done" 146
"Work!!!!!" 22
Based on your description, here is my sample table:
create table u (user_id number, string varchar);
insert into u values
(2122213, '"[""null"",""personal"",""Other""]"'),
(2132214, '"[""Other"",""to_dos_and_thing""]"'),
(2132215, '"[""getting_things_done"",""TO_dos_and_thing"",""Work!!!!!""]"' );
I used SPLIT_TO_TABLE to split each string as a row, and then REGEXP_SUBSTR to clean the data. So here's the query and output:
select REGEXP_SUBSTR( s.VALUE, '""(.*)""', 1, 1, 'i', 1 ) extracted, count(*) from u,
lateral SPLIT_TO_TABLE( string , ',' ) s
GROUP BY extracted
order by count(*) DESC;
+---------------------+----------+
| EXTRACTED | COUNT(*) |
+---------------------+----------+
| Other | 2 |
| null | 1 |
| personal | 1 |
| to_dos_and_thing | 1 |
| getting_things_done | 1 |
| TO_dos_and_thing | 1 |
| Work!!!!! | 1 |
+---------------------+----------+
SPLIT_TO_TABLE https://docs.snowflake.com/en/sql-reference/functions/split_to_table.html
REGEXP_SUBSTR https://docs.snowflake.com/en/sql-reference/functions/regexp_substr.html
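The split-then-extract logic can be sketched in Python with the same sample strings (a rough stand-in for SPLIT_TO_TABLE and REGEXP_SUBSTR, using the question's data):

```python
import re
from collections import Counter

# The three STRING values from the question
rows = [
    '"[""null"",""personal"",""Other""]"',
    '"[""Other"",""to_dos_and_thing""]"',
    '"[""getting_things_done"",""TO_dos_and_thing"",""Work!!!!!""]"',
]

counts = Counter()
for string in rows:
    for part in string.split(","):        # like SPLIT_TO_TABLE(string, ',')
        m = re.search(r'""(.*)""', part)  # like REGEXP_SUBSTR(..., '""(.*)""')
        if m:
            counts[m.group(1)] += 1

print(counts.most_common())
```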
If I have a table with a single jsonb column and the table has data like this:
[{"body": {"project-id": "111"}},
{"body": {"my-org.project-id": "222"}},
{"body": {"other-org.project-id": "333"}}]
Basically it stores project-id under a different key in different rows.
Now I need a query where the data->'body'->… value from each row coalesces into a single projectid field. How can I do that?
e.g.: if I do something like this:
select data->'body'->'project-id' projectid from mytable
it will return something like:
| projectid |
| 111 |
But I also want project-id's in other rows too, but I don't want additional columns in the results. i.e, I want this:
| projectid |
| 111 |
| 222 |
| 333 |
I understand that each of your rows contains a json object, with a nested object whose key varies over rows, and whose value you want to acquire.
Assuming the 'body' always has a single key, you could do:
select jsonb_extract_path_text(t.js -> 'body', x.k) projectid
from t
cross join lateral jsonb_object_keys(t.js -> 'body') as x(k)
The lateral join on jsonb_object_keys() extracts all keys in the object as rows. Then we use jsonb_extract_path_text() to get the corresponding value.
Demo on DB Fiddle:
with t as (
select '{"body": {"project-id": "111"}}'::jsonb js
union all select '{"body": {"my-org.project-id": "222"}}'::jsonb
union all select '{"body": {"other-org.project-id": "333"}}'::jsonb
)
select jsonb_extract_path_text(t.js -> 'body', x.k) projectid
from t
cross join lateral jsonb_object_keys(t.js -> 'body') as x(k)
| projectid |
| :--------- |
| 111 |
| 222 |
| 333 |
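The key enumeration that the lateral join performs can be sketched in Python with the question's sample objects (an illustration of the logic, assuming each body has a single key):

```python
import json

# The three sample rows from the question
rows = [
    '{"body": {"project-id": "111"}}',
    '{"body": {"my-org.project-id": "222"}}',
    '{"body": {"other-org.project-id": "333"}}',
]

# For each row, enumerate the keys of "body" (as jsonb_object_keys does)
# and look up the value for each key
project_ids = [
    body[k]
    for raw in rows
    for body in [json.loads(raw)["body"]]
    for k in body
]
print(project_ids)  # ['111', '222', '333']
```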
I have a postgres table with jsonb column, which has the value as follows:
id | messageStatus | payload
-----|----------------------|-------------
1 | 123 | {"commissionEvents":[{"id":1,"name1":"12","name2":15,"name4":"apple","name5":"fruit"},{"id":2,"name1":"22","name2":15,"name4":"sf","name5":"fdfjkd"}]}
2 | 124 | {"commissionEvents":[{"id":3,"name1":"32","name2":15,"name4":"sf","name5":"fdfjkd"},{"id":4,"name1":"42","name2":15,"name4":"apple","name5":"fruit"}]}
3 | 125 | {"commissionEvents":[{"id":5,"name1":"42","name2":15,"name4":"apple","name5":"fdfjkd"},{"id":6,"name1":"52","name2":15,"name4":"sf","name5":"fdfjkd"},{"id":7,"name1":"62","name2":15,"name4":"apple","name5":"fdfjkd"}]}
Here the payload column has the jsonb datatype. I want to write a Postgres query that fetches name1 from commissionEvents where name4 = 'apple', so my result would contain only those name1 values.
Since I'm new to jsonb, can anyone please suggest a solution?
You need to unnest all array elements, then you can apply a WHERE condition on that to filter out those with the desired name.
select t.id, x.o ->> 'name1'
from the_table t
cross join lateral jsonb_array_elements(t.payload -> 'commissionEvents') as x(o)
where x.o ->> 'name4' = 'apple'
Online example: https://rextester.com/XWHG26387
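The unnest-then-filter pattern can be sketched in Python (an illustration of the logic, with the question's payloads abridged to the relevant fields):

```python
import json

# Two rows, abridged from the question: (id, payload)
rows = [
    (1, '{"commissionEvents":[{"id":1,"name1":"12","name4":"apple"},'
        '{"id":2,"name1":"22","name4":"sf"}]}'),
    (2, '{"commissionEvents":[{"id":3,"name1":"32","name4":"sf"},'
        '{"id":4,"name1":"42","name4":"apple"}]}'),
]

# jsonb_array_elements unnests the array; the WHERE clause keeps
# only the elements with name4 = 'apple'
result = [
    (row_id, ev["name1"])
    for row_id, payload in rows
    for ev in json.loads(payload)["commissionEvents"]
    if ev.get("name4") == "apple"
]
print(result)  # [(1, '12'), (2, '42')]
```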
I want to check the existence of an attribute in a JSONB column using SQL.
Using this is I can check if attribute equals value:
SELECT count(*) AS "count" FROM "table" WHERE column->'node' #> '[{"Attribute":"value"}]'
What syntax do I use to check the existence of Attribute?
Usually you'll check for null:
SELECT count(*) AS "count" FROM "table"
WHERE column->'node'->'Attribute' is not null
The ? operator means "Does the string exist as a top-level key within the JSON value?". However, you want to check whether a key exists in a nested JSON array of objects, so you cannot use the operator directly. You have to unnest the arrays.
Sample data:
create table my_table(id serial primary key, json_column jsonb);
insert into my_table (json_column) values
('{"node": [{"Attribute":"value"}, {"other key": 0}]}'),
('{"node": [{"Attribute":"value", "other key": 0}]}'),
('{"node": [{"Not Attribute":"value"}]}');
Use jsonb_array_elements() in a lateral join to find out whether a key exists in any element of the array:
select
id,
value,
value ? 'Attribute' as key_exists_in_object
from my_table
cross join jsonb_array_elements(json_column->'node')
id | value | key_exists_in_object
----+----------------------------------------+----------------------
1 | {"Attribute": "value"} | t
1 | {"other key": 0} | f
2 | {"Attribute": "value", "other key": 0} | t
3 | {"Not Attribute": "value"} | f
(4 rows)
But this is not exactly what you are expecting. You need to aggregate results for arrays:
select
id,
json_column->'node' as array,
bool_or(value ? 'Attribute') as key_exists_in_array
from my_table
cross join jsonb_array_elements(json_column->'node')
group by id
order by id
id | array | key_exists_in_array
----+--------------------------------------------+---------------------
1 | [{"Attribute": "value"}, {"other key": 0}] | t
2 | [{"Attribute": "value", "other key": 0}] | t
3 | [{"Not Attribute": "value"}] | f
(3 rows)
Well, this looks a bit complex. You can make it easier by wrapping the logic in a function:
create or replace function key_exists_in_array(key text, arr jsonb)
returns boolean language sql immutable as $$
select bool_or(value ? key)
from jsonb_array_elements(arr)
$$;
select
id,
json_column->'node' as array,
key_exists_in_array('Attribute', json_column->'node')
from my_table
id | array | key_exists_in_array
----+--------------------------------------------+---------------------
1 | [{"Attribute": "value"}, {"other key": 0}] | t
2 | [{"Attribute": "value", "other key": 0}] | t
3 | [{"Not Attribute": "value"}] | f
(3 rows)
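The aggregated existence check (bool_or over the unnested array) reduces to any() in Python, sketched here with the answer's sample data:

```python
import json

# The three sample rows: (id, json_column)
rows = [
    (1, '{"node": [{"Attribute":"value"}, {"other key": 0}]}'),
    (2, '{"node": [{"Attribute":"value", "other key": 0}]}'),
    (3, '{"node": [{"Not Attribute":"value"}]}'),
]

# bool_or(value ? 'Attribute') over the array elements is any()
# over the objects in the list
flags = [
    (row_id, any("Attribute" in obj for obj in json.loads(col)["node"]))
    for row_id, col in rows
]
print(flags)  # [(1, True), (2, True), (3, False)]
```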