I need to extract only specific keys from Postgres JSON. Consider the following JSON:
{"aaa":1,"bbb":2,"ccc":3,"ddd":7}
From the above JSON I need to select the keys 'bbb' and 'ccc', that is:
{"bbb":2,"ccc":3}
I used the following query, but the - operator deletes keys instead of selecting them:
SELECT jsonb '{"aaa":1,"bbb":2,"ccc":3,"ddd":7}' - 'ddd'
How can I select only specified keys?
You can explicitly specify keys, like here:
t=# with c(j) as (SELECT jsonb '{"aaa":1,"bbb":2,"ccc":3,"ddd":7}')
select j,jsonb_build_object('aaa',j->'aaa','bbb',j->'bbb') from c;
j | jsonb_build_object
------------------------------------------+----------------------
{"aaa": 1, "bbb": 2, "ccc": 3, "ddd": 7} | {"aaa": 1, "bbb": 2}
(1 row)
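The demo builds the object from 'aaa' and 'bbb'; for the keys the question actually asks for, the same pattern would be (a minimal sketch):

with c(j) as (SELECT jsonb '{"aaa":1,"bbb":2,"ccc":3,"ddd":7}')
select jsonb_build_object('bbb', j->'bbb', 'ccc', j->'ccc') from c;
-- expected: {"bbb": 2, "ccc": 3}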
WITH data AS (
  SELECT jsonb '{"aaa":1,"bbb":2,"ccc":3,"ddd":7}' col
)
SELECT kv.*
FROM data,
LATERAL (
  SELECT jsonb_object(ARRAY_AGG(keyval.key::TEXT), ARRAY_AGG(keyval.value::TEXT))
  FROM jsonb_each(col) keyval
  WHERE keyval.key IN ('aaa', 'bbb', 'ccc')
) kv
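On the sample row this should produce {"aaa": "1", "bbb": "2", "ccc": "3"}; note that the values come out as JSON strings because of the ::TEXT casts feeding jsonb_object.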
The solution works by expanding the JSONB (or JSON) object, filtering the keys, and aggregating the filtered keys and values into the final JSONB (or JSON) object.
However, this solution does not preserve missing keys as NULLs: if data had a row where col held the value jsonb '{"aaa":1,"bbb":2,"ddd":7}', then the above solution would return jsonb '{"aaa":1,"bbb":2}', with no 'ccc' entry at all.
To preserve NULLs, the following form could be used:
WITH data AS (
  SELECT jsonb '{"aaa":1,"bbb":2,"ccc":3,"ddd":7}' col
), keys(k) AS (
  VALUES ('aaa'), ('bbb'), ('ccc')
)
SELECT col, jsonb_object(ARRAY_AGG(k), ARRAY_AGG(col->>k))
FROM data, keys
GROUP BY 1
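To see the NULL-preserving behaviour, run the same query against a hypothetical row that lacks 'ccc' (an untested sketch; jsonb_object renders all values as JSON strings, and the missing key should come through as a JSON null):

WITH data AS (
  SELECT jsonb '{"aaa":1,"bbb":2,"ddd":7}' col  -- no 'ccc' key
), keys(k) AS (
  VALUES ('aaa'), ('bbb'), ('ccc')
)
SELECT col, jsonb_object(ARRAY_AGG(k), ARRAY_AGG(col->>k))
FROM data, keys
GROUP BY 1
-- expected: {"aaa": "1", "bbb": "2", "ccc": null}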
I have a JSON value like the one below in a certain column of my table:
{"values":[1, 2, null, 4, null]}
What I want is to convert the value into a BigQuery array: ARRAY<INT64>
I tried JSON_VALUE_ARRAY but it throws an error because the final output cannot be an array with NULLs.
That said, what should be the correct approach for this?
You can UNNEST an array that contains NULL elements, and when building the new array you can use the IGNORE NULLS modifier of ARRAY_AGG to remove the NULL values; casting each element then yields the ARRAY<INT64> the question asks for.
with tbl as (
  select JSON '{"values":[1, 2, null, 4, null]}' as data
  union all
  select JSON '{"values":[]}'
)
select *,
  (select array_agg(cast(x as int64) ignore nulls)
   from unnest(json_value_array(data.values)) x) as int_values
from tbl
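With the two sample rows, this should return [1, 2, 4] for the first row; for the empty values array, the subquery aggregates zero rows, so it yields NULL rather than an empty array.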
I have a column with inconsistent data formats: some rows hold a JSON-style array [], others a brace-wrapped list {}:
 id | prices
----+---------------
  1 | [100,100,110]
  2 | {200,210,190}
create table test(id integer, prices varchar(255));
insert into test
values
(1,'[100,100,110]'),
(2,'{200,210,190}');
When I tried to unnest, my query worked fine for the first row, but it failed on the second row. Is there a way I can convert the {} form to a []-style array?
This is my query:
select id,prices,price from test
cross join UNNEST(cast(json_parse(prices) as array<varchar>)) as t (price)
You can use replace to turn the braces into brackets and then parse the data into an array:
select json_parse(replace(replace('{200,210,190}', '}', ']'), '{', '['))
Output:
     _col0
---------------
 [200,210,190]
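Folding that into the original query from the question (a sketch under the same Presto/Athena assumptions, untested):

select id, prices, price
from test
cross join unnest(
  cast(json_parse(replace(replace(prices, '{', '['), '}', ']')) as array<varchar>)
) as t (price)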
I'm trying to group BigQuery columns using an array like so:
with test as (
select 1 as A, 2 as B
union all
select 3, null
)
select *,
[A,B] as grouped_columns
from test
However, this won't work, since there is a null value in column B row 2.
In fact this won't work either:
select [1, null] as test_array
When reading the documentation on BigQuery though, it says Nulls should be allowed.
In BigQuery, an array is an ordered list consisting of zero or more
values of the same data type. You can construct arrays of simple data
types, such as INT64, and complex data types, such as STRUCTs. The
current exception to this is the ARRAY data type: arrays of arrays are
not supported. Arrays can include NULL values.
There don't seem to be any attributes or SAFE. prefix that can be used with ARRAY() to handle NULLs.
So what is the best approach for this?
Per the documentation for the ARRAY type:
Currently, BigQuery has two following limitations with respect to NULLs and ARRAYs:
BigQuery raises an error if query result has ARRAYs which contain NULL elements, although such ARRAYs can be used inside the query.
BigQuery translates NULL ARRAY into empty ARRAY in the query result, although inside the query NULL and empty ARRAYs are two distinct values.
So, for your example, you can use the "trick" below:
with test as (
select 1 as A, 2 as B union all
select 3, null
)
select *,
array(select cast(el as int64) el
from unnest(split(translate(format('%t', t), '()', ''), ', ')) el
where el != 'NULL'
) as grouped_columns
from test t
With the sample data, the above returns [1, 2] for the first row and [3] for the second.
Note: the above approach does not require explicitly referencing all of the involved columns!
My current solution, and I'm not a fan of it, is to use a combination of IFNULL(), UNNEST() and ARRAY() like so:
select
  *,
  array(
    select col
    from unnest(
      [
        -- A and B are INT64 in the test data, so cast to STRING
        -- to use '' as the sentinel for NULL
        ifnull(cast(A as string), ''),
        ifnull(cast(B as string), '')
      ]
    ) as col
    where col <> ''
  ) as grouped_columns
from test
An alternative way: you can replace the NULL value with some non-NULL figure using the IFNULL() function, as given below:
with test as (
select 1 as A, 2 as B
union all
select 3, IFNULL(null, 0)
)
select *,
[A,B] as grouped_columns
from test
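A variant of that idea leaves the test data untouched and applies IFNULL() at the point where the array is built (a sketch; the 0 sentinel is an arbitrary choice):

with test as (
  select 1 as A, 2 as B
  union all
  select 3, null
)
select *,
  [A, IFNULL(B, 0)] as grouped_columns
from test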
I have a column in jsonb storing a map, like {"a":1,"b":2,"c":3}, where the number of keys differs in each row.
I want to count the keys. jsonb_object_keys can retrieve the keys, but it returns a setof text.
Is there something like this?
select count(jsonb_object_keys(obj)) from xxx
(this won't work: ERROR: set-valued function called in context that cannot accept a set)
The Postgres JSON Functions and Operators documentation describes it:
json_object_keys(json)
jsonb_object_keys(jsonb)
→ setof text: returns the set of keys in the outermost JSON object.
json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}')
 json_object_keys
------------------
 f1
 f2
Crosstab isn't feasible as the number of keys could be large.
Shortest:
SELECT count(*) FROM jsonb_object_keys('{"a": 1, "b": 2, "c": 3}'::jsonb);
Returns 3
If you want the number of keys for every json value in a table, it gives:
SELECT (SELECT COUNT(*) FROM jsonb_object_keys(myJsonField)) nbr_keys FROM myTable;
You could convert keys to array and use array_length to get this:
select array_length(array_agg(A.key), 1) from (
select json_object_keys('{"f1":"abc","f2":{"f3":"a", "f4":"b"}}') as key
) A;
If you need to get this for the whole table, you can just group by primary key.
While a subselect must be used to convert the set of JSON keys to rows, the following tweaked query might run faster by skipping the construction of the temporary array:
SELECT count(*) FROM
(SELECT jsonb_object_keys('{"a": 1, "b": 2, "c": 3}'::jsonb)) v;
and it's a bit shorter ;)
To make it a function:
CREATE OR REPLACE FUNCTION public.count_jsonb_keys(j jsonb)
RETURNS bigint
LANGUAGE sql
AS $function$
SELECT count(*) from (SELECT jsonb_object_keys(j)) v;
$function$
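Calling the function then looks like this (a quick usage sketch):

SELECT count_jsonb_keys('{"a": 1, "b": 2, "c": 3}'::jsonb);
-- returns 3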
Alternately, you could simply return the upper bound of the array of keys:
SELECT
ARRAY_UPPER( -- Grab the upper bounds of the array
ARRAY( -- Convert rows into an array.
SELECT JSONB_OBJECT_KEYS(obj)
),
1 -- The array's dimension we're interested in retrieving the count for
) AS count
FROM
xxx
Using '{"a": 1, "b": 2, "c": 3}'::jsonb as obj, count would result in a value of three (3).
Pasteable example:
SELECT
ARRAY_UPPER( -- Grab the upper bounds of the array
ARRAY( -- Convert rows into an array.
SELECT JSONB_OBJECT_KEYS('{"a": 1, "b": 2, "c": 3}'::jsonb)
),
1 -- The array's dimension we're interested in retrieving the count for
) AS count
I have the following data:
 name | id | url
------+----+-------------------
 John |  1 | someurl.com
 Matt |  2 | cool.com
 Sam  |  3 | stackoverflow.com
How can I write an SQL statement in Postgres to select this data into a multi-dimensional array, i.e.:
{{John, 1, someurl.com}, {Matt, 2, cool.com}, {Sam, 3, stackoverflow.com}}
I've seen this kind of array usage before in Postgres but have no idea how to select data from a table into this array format.
Assuming here that all the columns are of type text.
You cannot use array_agg() to produce multi-dimensional arrays, at least not up to PostgreSQL 9.4.
(But the upcoming Postgres 9.5 ships a new variant of array_agg() that can!)
What you get out of @Matt Ball's query is an array of records (the_table[]).
An array can only hold elements of the same base type. You obviously have number and string types. Convert all columns (that aren't already) to text to make it work.
You can create an aggregate function for this like I demonstrated to you here before.
CREATE AGGREGATE array_agg_mult (anyarray) (
SFUNC = array_cat
,STYPE = anyarray
,INITCOND = '{}'
);
Call:
SELECT array_agg_mult(ARRAY[ARRAY[name, id::text, url]]) AS tbl_mult_arr
FROM tbl;
Note the additional ARRAY[] layer to make it a multidimensional array (2-dimensional, to be precise).
Instant demo:
WITH tbl(id, txt) AS (
VALUES
(1::int, 'foo'::text)
,(2, 'bar')
,(3, '}b",') -- txt has meta-characters
)
, x AS (
SELECT array_agg_mult(ARRAY[ARRAY[id::text,txt]]) AS t
FROM tbl
)
SELECT *, t[1][1] AS arr_element_1_1, t[3][2] AS arr_element_3_2
FROM x;
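On Postgres 9.5 or later, the built-in array_agg(anyarray) variant mentioned above should make the custom aggregate unnecessary (a sketch, untested):

SELECT array_agg(ARRAY[name, id::text, url]) AS tbl_mult_arr
FROM tbl;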
You need to use an aggregate function; array_agg should do what you need.
SELECT array_agg(s) FROM (SELECT name, id, url FROM the_table ORDER BY id) AS s;
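For reference, this aggregates whole rows into an array of composite values rather than a two-dimensional text array; on the sample data the result would look roughly like {"(John,1,someurl.com)","(Matt,2,cool.com)","(Sam,3,stackoverflow.com)"} (the shape is an assumption, not verified output).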