How to do group by on repeated field in BigQuery

In BigQuery, I have created a table with the following schema:
id INTEGER NULLABLE
visits INTEGER NULLABLE
dimensions RECORD REPEATED
dimensions.value STRING
dimensions.key STRING
How do I get sum(visits) grouped by the device and state values?
Example data:
{"id": 1, visits: 100, "dimensions": [{"key":"device","value":"mobile"}, {"key":"state","value":"CA"}]}
{"id": 1, visits: 500, "dimensions": [{"key":"device","value":"desktop"}, {"key":"state","value":"CA"}]}
{"id": 1, visits: 200, "dimensions": [{"key":"device","value":"mobile"}, {"key":"state","value":"NY"}]}
{"id": 2, visits: 100, "dimensions": [{"key":"device","value":"mobile"}, {"key":"state","value":"CA"}]}
{"id": 2, visits: 500, "dimensions": [{"key":"device","value":"desktop"}, {"key":"state","value":"CA"}]}
{"id": 2, visits: 200, "dimensions": [{"key":"device","value":"mobile"}, {"key":"state","value":"NY"}]}
{"id": 2, visits: 780, "dimensions": [{"key":"device","value":"desktop"}, {"key":"state","value":"NY"}]}
I want id, device, state, sum(visits) in the output.
I could do a group by on a single dimension with the query below, but I don't know how to do it for multiple dimensions.
SELECT id, d.value, SUM(visits)
FROM `dataset.table_name`, UNNEST(dimensions) AS d
WHERE d.key = "device"
GROUP BY id, d.value
LIMIT 1000
Also, is it possible to write a generic query when the key values are not known in advance?

Below is for BigQuery Standard SQL
#standardSQL
SELECT
id,
(SELECT value FROM UNNEST(dimensions) WHERE key = "device") AS device,
(SELECT value FROM UNNEST(dimensions) WHERE key = "state") AS state,
SUM(visits) AS visits
FROM `dataset.table_name`
GROUP BY id, device, state
LIMIT 1000
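With your example data, each (id, device, state) combination happens to appear exactly once, so the sums simply equal the original visits values:
id  device   state  visits
1   mobile   CA     100
1   desktop  CA     500
1   mobile   NY     200
2   mobile   CA     100
2   desktop  CA     500
2   mobile   NY     200
2   desktop  NY     780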
You can try / play with it using dummy data from your example, as below
#standardSQL
WITH data AS (
SELECT 1 AS id, 100 AS visits, [STRUCT<key STRING, value STRING>("device", "mobile"), ("state", "CA")] AS dimensions UNION ALL
SELECT 1, 500, [STRUCT<key STRING, value STRING>("device", "desktop"), ("state", "CA")] UNION ALL
SELECT 1, 200, [STRUCT<key STRING, value STRING>("device", "mobile"), ("state", "NY")] UNION ALL
SELECT 2, 100, [STRUCT<key STRING, value STRING>("device", "mobile"), ("state", "CA")] UNION ALL
SELECT 2, 500, [STRUCT<key STRING, value STRING>("device", "desktop"), ("state", "CA")] UNION ALL
SELECT 2, 200, [STRUCT<key STRING, value STRING>("device", "mobile"), ("state", "NY")] UNION ALL
SELECT 2, 780, [STRUCT<key STRING, value STRING>("device", "desktop"), ("state", "NY")]
)
SELECT
id,
(SELECT value FROM UNNEST(dimensions) WHERE key = "device") AS device,
(SELECT value FROM UNNEST(dimensions) WHERE key = "state") AS state,
SUM(visits) AS visits
FROM data
GROUP BY id, device, state
-- ORDER BY id, device, state
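As for a generic query when the key values are not known in advance: BigQuery requires column names to be fixed at query time, so you cannot produce one column per key without generating the SQL dynamically. What plain SQL can give you is one row per (id, key, value) with summed visits; a minimal sketch against the same table:
#standardSQL
SELECT id, d.key, d.value, SUM(visits) AS visits
FROM `dataset.table_name`, UNNEST(dimensions) AS d
GROUP BY id, d.key, d.value
To group by a combination of keys (such as device AND state together), you still have to name the keys, as in the query above.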

Related

Concatenate column by condition (oracle)

source_id | source_groupid | source_nm | category_id | level_id
----------+----------------+-----------+-------------+---------
    12345 |             34 | ABC       |           7 |        2
    67549 |                | GI        |           5 |        1
    24751 |                | BL        |             |        6
Result of my current query:
{"id": 12345, "groupid": 34, "name": ABC, "category_id": 7, "level_id": 2}
{"id": 67549, "groupid": , "name": GI, "category_id": 5, "level_id": 1}
SELECT CONCAT('{', '"id": ', source_id, ', ',
              '"groupid": ', source_groupid, ', ',
              '"name": ', source_nm, ', ',
              '"category_id": ', category_id, ', ',
              '"level_id": ', level_id, '}') AS full_info
FROM table
I need to do the column concatenation according to the following pattern: if, for example, there is no entry for group_id or category_id, how do I write the code so that the template changes and rows 2 and 3 look as follows?
{"id": 12345, "groupid": 34, "name": ABC, "category_id": 7, "level_id": 2}
{"id": 67549, "name": GI, "category_id": 5, "level_id": 1}
{"id": 24751, "name": BL, "level_id": 6}
Well, in Oracle (whose tag you've used), the CONCAT function accepts only two arguments, and therefore that code won't work.
Instead, use the double pipe || operator. As for your main problem, CASE solves it.
with test (source_id, source_groupid, source_nm, category_id, level_id) as
  (select 12345, 34,   'ABC', 7,    2 from dual union all
   select 67549, null, 'GI',  5,    1 from dual union all
   select 24751, null, 'BL',  null, 6 from dual
  )
select '{' || '"id": ' || source_id ||
       case when source_groupid is not null then ', "groupid": ' || source_groupid end ||
       case when source_nm is not null then ', "name": ' || source_nm end ||
       case when category_id is not null then ', "category_id": ' || category_id end ||
       case when level_id is not null then ', "level_id": ' || level_id end || '}'
       as result
from test;
RESULT
--------------------------------------------------------------------------------
{"id": 12345, "groupid": 34, "name": ABC, "category_id": 7, "level_id": 2}
{"id": 67549, "name": GI, "category_id": 5, "level_id": 1}
{"id": 24751, "name": BL, "level_id": 6}
From Oracle 12.2, don't build JSON by hand; use the JSON_OBJECT function, with which you can use ABSENT ON NULL:
SELECT JSON_OBJECT(
KEY 'id' VALUE source_id,
KEY 'groupid' VALUE source_groupid,
KEY 'name' VALUE source_nm,
KEY 'category_id' VALUE category_id,
KEY 'level_id' VALUE level_id
ABSENT ON NULL
) As json
FROM table_name;
Which, for the sample data:
CREATE TABLE table_name (source_id, source_groupid, source_nm, category_id, level_id) AS
SELECT 12345, 34, 'ABC', 7, 2 FROM DUAL UNION ALL
SELECT 67549, NULL, 'GI', 5, 1 FROM DUAL UNION ALL
SELECT 24751, NULL, 'BL', NULL, 6 FROM DUAL;
Outputs:
JSON
{"id":12345,"groupid":34,"name":"ABC","category_id":7,"level_id":2}
{"id":67549,"name":"GI","category_id":5,"level_id":1}
{"id":24751,"name":"BL","level_id":6}

Insert after TRUE in condition in PostgreSQL

I need to insert values only if they are not already present in my table.
I wrote the function:
do
$$
declare
  v_video_config_bundle_id bigint;
  v_are_records_exist boolean;
begin
  select id from config_bundle into v_video_config_bundle_id where code = 'video';

  select count(id) > 0 from config_bundle into v_are_records_exist
  where config_bundle_id = v_video_config_bundle_id
    and preference = 'true' and amount = 0 and repeatability in (1,7,14,21,30,45) and format = 'day';

  case
    when (v_are_records_exist = false) then
      insert into config_plan(config_bundle_id, amount, repeatability, format, payment_amount, preference_type, preference, trial, weight, status, is_default)
      values (v_video_config_bundle_id, 0, 7, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
             (v_video_config_bundle_id, 0, 14, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
             (v_video_config_bundle_id, 0, 21, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false);
  end;
end;
$$
But I still get an exception:
ERROR: syntax error at or near ";"
Position: 1420
How do I fix it?
The immediate syntax error is the CASE statement: in PL/pgSQL, a CASE statement must be closed with END CASE;, not END;. But rather than patching the block, let SQL make all the decisions; put all the determination logic into a single SQL statement. You can do this by converting the filtering logic into a NOT EXISTS (SELECT ...) structure. So, something like:
insert into config_plan(config_bundle_id, amount, repeatability, format, payment_amount, preference_type, preference, trial, weight, status, is_default)
with new_config (amount, repeatability, format, payment_amount, preference_type, preference, trial, weight, status, is_default) as
( values (0,  7, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
         (0, 14, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
         (0, 21, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false)
)
select (select id from config_bundle where code = 'video'),  -- replaces v_video_config_bundle_id
       amount, repeatability, format, payment_amount, preference_type, preference, trial, weight, status, is_default
from new_config nc
where not exists ( select null
                   from config_plan cp
                   where (cp.preference, cp.amount, cp.repeatability, cp.format) =
                         (nc.preference, nc.amount, nc.repeatability, nc.format)
                 );
The above is not tested, as you did not supply the table definitions and sample data, but it illustrates the technique.
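Alternatively, if config_plan has (or can be given) a unique constraint over the distinguishing columns, INSERT ... ON CONFLICT DO NOTHING expresses the same insert-if-absent intent. A minimal sketch, assuming a hypothetical unique constraint on (config_bundle_id, amount, repeatability, format, preference):
-- assumption: a unique constraint such as
--   alter table config_plan add constraint uq_config_plan
--     unique (config_bundle_id, amount, repeatability, format, preference);
insert into config_plan(config_bundle_id, amount, repeatability, format, payment_amount, preference_type, preference, trial, weight, status, is_default)
select (select id from config_bundle where code = 'video'), v.*
from ( values (0,  7, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
              (0, 14, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false),
              (0, 21, 'day', 0, 'personal', true, false, 2, 'ACTIVE', false)
     ) as v(amount, repeatability, format, payment_amount, preference_type, preference, trial, weight, status, is_default)
on conflict do nothing;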

Only extract json if field not null

I want to extract a key value from a (nullable) JSONB field. If the field is NULL, I want the record still present in my result set, but with a null field.
customer table:
id, name, phone_num, address
1, "john", 983, [ {"street":"23, johnson ave", "city":"Los Angeles", "state":"California", "current":true}, {"street":"12, marigold drive", "city":"Davis", "state":"California", "current":false}]
2, "jane", 9389, null
3, "sally", 352, [ "street":"90, park ave", "city":"Los Angeles", "state":"California", "current":true} ]
Current PostgreSQL query:
select id, name, phone_num, items.city
from customer,
jsonb_to_recordset(customer.address) as items(city text, current bool)
where items.current = true
It returns:
id, name, phone_num, city
1, "john", 983, "Los Angeles"
3, "sally", 352, "Los Angeles"
Required Output:
id, name, phone_num, city
1, "john", 983, "Los Angeles"
2, "jane", 9389, null
3, "sally", 352, "Los Angeles"
How do I achieve the above output?
Use a left join lateral instead of an implicit lateral join:
select c.id, c.name, c.phone_num, i.city
from customer c
left join lateral jsonb_to_recordset(c.address) as i(city text, current bool)
on i.current=true
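To try it end to end, here is a self-contained sketch with the sample rows inlined as a CTE (the column types are assumptions):
with customer (id, name, phone_num, address) as (
  values
  (1, 'john', 983, '[{"street":"23, johnson ave", "city":"Los Angeles", "state":"California", "current":true}, {"street":"12, marigold drive", "city":"Davis", "state":"California", "current":false}]'::jsonb),
  (2, 'jane', 9389, null::jsonb),
  (3, 'sally', 352, '[{"street":"90, park ave", "city":"Los Angeles", "state":"California", "current":true}]'::jsonb)
)
select c.id, c.name, c.phone_num, i.city
from customer c
left join lateral jsonb_to_recordset(c.address) as i(city text, current bool)
  on i.current = true;
Because jsonb_to_recordset is strict, a NULL address produces no lateral rows, so the LEFT JOIN keeps jane's row with a NULL city.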

folding over large BigQuery result

Is there any easy way for me to do something like OCaml's fold_left on the result of a BigQuery query, where each iteration corresponds to one row of the result?
What product or approach would be the easiest way? It would be great if:
all I need to do is to supply the initial state and the 'folder' function
preferably, I'd like to write the 'folder' function in a functional language
I don't need to install any GCP package
Since I don't know which product or language would work, I cannot be more specific, but the pseudocode would look like this:
let my_init = []
let my_folder = fun state row ->
  (* Append for now, but it will get complicated: I need to do some set operations here.
     The point is that I need some way of carrying "state" across rows while I iterate
     over them in a predefined order. *)
  row.col1 :: state
let query = "SELECT col1, col2, col3 FROM table1 ORDER BY timestamp"
query |> List.fold_left my_folder my_init
The result that I want to get from this simplified example is the final "state".
--- UPDATED ---
There is no bound on the number of rows: the more data we receive, the more rows there are. Typically it is more than a few million rows, but it can be larger than that.
Here's a simplified example that shows the major problem I'm encountering. We have a table with a few columns:
timestamp
user_id: a string id
operation_json: a stringified JSON array of operations, each of which corresponds to either:
add user_id to a set
remove user_id from a set
For example, the following are valid rows:
----------+---------+----------------------------------------------
timestamp | user_id | operation_json
----------+---------+----------------------------------------------
1 | id1 | [ { "op": "add", "set": "set1" } ]
2 | id2 | [ { "op": "add", "set": "set1" } ]
3 | id1 | [ { "op": "add", "set": "set2" } ]
4 | id3 | [ { "op": "add", "set": "set2" } ]
5 | id1 | [ { "op": "remove", "set": "set1" } ]
----------+---------+----------------------------------------------
As a result, I'd like to get sets of users; i.e.,
set1 |-> { id2 }
set2 |-> { id1, id3 }
I thought a fold_left-like operation would be convenient. The state would be a map from set name to a set of user ids (roughly Map<string, Set<string>>), and the initial state would be an empty map.
Below is a [quick and simple] example for BigQuery Standard SQL
#standardSQL
CREATE TEMP FUNCTION fold(arr ARRAY<INT64>, init INT64)
RETURNS FLOAT64
LANGUAGE js AS """
const reducer = (accumulator, currentValue) => accumulator + parseInt(currentValue);
return arr.reduce(reducer, parseInt(init));
""";
WITH `project.dataset.table` AS (
SELECT 1 id, [1, 2, 3, 4] arr, 5 initial_state UNION ALL
SELECT 2, [1, 2, 3, 4, 5, 6, 7], 10
)
SELECT id, fold(arr, initial_state) result
FROM `project.dataset.table`
output is
Row  id  result
1    1   15.0
2    2   38.0
I think it is self-explanatory enough. See the BigQuery documentation on JavaScript UDFs for more.
folding list of rows
See below for an extension of the above.
Here you assemble an array from the result's rows before applying the fold function (of course, there are UDF limits to keep in mind here, as well as how large your ARRAY of rows can grow, etc.).
#standardSQL
CREATE TEMP FUNCTION fold(arr ARRAY<INT64>, init INT64)
RETURNS FLOAT64
LANGUAGE js AS """
const reducer = (accumulator, currentValue) => accumulator + parseInt(currentValue);
return arr.reduce(reducer, parseInt(init));
""";
WITH `project.dataset.table` AS (
SELECT 1 id, 1 item UNION ALL
SELECT 1, 2 UNION ALL
SELECT 1, 3 UNION ALL
SELECT 1, 4 UNION ALL
SELECT 2, 1 UNION ALL
SELECT 2, 2 UNION ALL
SELECT 2, 3 UNION ALL
SELECT 2, 4 UNION ALL
SELECT 2, 5 UNION ALL
SELECT 2, 6 UNION ALL
SELECT 2, 7
)
SELECT id, fold(ARRAY_AGG(item), 5) result
FROM `project.dataset.table`
GROUP BY id
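One caveat worth adding: a fold is order-sensitive, and ARRAY_AGG without ORDER BY does not guarantee element order. If the rows must be folded in a predefined order (by timestamp, in your real table), aggregate with an explicit ORDER BY; a sketch against the dummy table above:
SELECT id, fold(ARRAY_AGG(item ORDER BY item), 5) result
FROM `project.dataset.table`
GROUP BY id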
Note: if you need to include more than one field from each row, you can use an ARRAY of STRUCTs, as in the example below:
ARRAY_AGG(STRUCT(id, item) ORDER BY id)
Of course, you will then need to adjust the signature of the fold UDF accordingly.
For example:
#standardSQL
CREATE TEMP FUNCTION fold(arr ARRAY<STRUCT<id INT64, item INT64>>, init INT64)
RETURNS FLOAT64
LANGUAGE js AS """
const reducer = (accumulator, currentValue) => accumulator + parseInt(currentValue.item);
return arr.reduce(reducer, parseInt(init));
""";
WITH `project.dataset.table` AS (
SELECT 1 id, 1 item UNION ALL
SELECT 1, 2 UNION ALL
SELECT 1, 3 UNION ALL
SELECT 1, 4 UNION ALL
SELECT 2, 1 UNION ALL
SELECT 2, 2 UNION ALL
SELECT 2, 3 UNION ALL
SELECT 2, 4 UNION ALL
SELECT 2, 5 UNION ALL
SELECT 2, 6 UNION ALL
SELECT 2, 7
)
SELECT id, fold(ARRAY_AGG(t), 5) result
FROM `project.dataset.table` t
GROUP BY id
The approach below has nothing to do with folding per se; rather, it attempts to translate your challenge into a set-based one (which is more natural when you are dealing with SQL) by identifying the latest op action for each user per set: if it is "remove", that user is eliminated from further consideration; if it is "add", the latest "add" for that user / set is used. This assumes there cannot be multiple consecutive "add" actions for the same user / set; rather, they alternate: add / remove / add and so on. Of course, this can be further adjusted based on the real use case.
So, with the above in mind, below is an example for BigQuery Standard SQL
#standardSQL
WITH `project.dataset.table` AS (
SELECT 1 ts, 'id1' user_id, '[ { "op": "add", "set": "set1" } ]' operation_json UNION ALL
SELECT 2, 'id2', '[ { "op": "add", "set": "set1" } ]' UNION ALL
SELECT 3, 'id1', '[ { "op": "add", "set": "set2" } ]' UNION ALL
SELECT 4, 'id3', '[ { "op": "add", "set": "set2" } ]' UNION ALL
SELECT 5, 'id1', '[ { "op": "remove", "set": "set1" } ]'
)
SELECT bin, STRING_AGG(user_id, ',' ORDER BY ts) result
FROM (
SELECT user_id, bin, ARRAY_AGG(ts ORDER BY ts DESC LIMIT 1)[OFFSET(0)] ts
FROM (
SELECT ts, user_id, op, bin, LAST_VALUE(op) OVER(win) fin
FROM (
SELECT ts, user_id,
JSON_EXTRACT_SCALAR(REGEXP_REPLACE(operation_json, r'^\[|\]$', ''), '$.op') op,
JSON_EXTRACT_SCALAR(REGEXP_REPLACE(operation_json, r'^\[|\]$', ''), '$.set') bin
FROM `project.dataset.table`
)
WINDOW win AS (
PARTITION BY user_id, bin
ORDER BY ts
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
)
WHERE fin = 'add'
GROUP BY user_id, bin
)
GROUP BY bin
-- ORDER BY bin
output is
Row  bin   result
1    set1  id2
2    set2  id1,id3
If applied to the dummy data below:
WITH `project.dataset.table` AS (
SELECT 1 ts, 'id1' user_id, '[ { "op": "add", "set": "set1" } ]' operation_json UNION ALL
SELECT 2, 'id2', '[ { "op": "add", "set": "set1" } ]' UNION ALL
SELECT 3, 'id1', '[ { "op": "add", "set": "set2" } ]' UNION ALL
SELECT 4, 'id3', '[ { "op": "add", "set": "set2" } ]' UNION ALL
SELECT 5, 'id1', '[ { "op": "remove", "set": "set1" } ]' UNION ALL
SELECT 6, 'id1', '[ { "op": "add", "set": "set1" } ]' UNION ALL
SELECT 7, 'id1', '[ { "op": "remove", "set": "set1" } ]' UNION ALL
SELECT 8, 'id1', '[ { "op": "add", "set": "set1" } ]' UNION ALL
SELECT 9, 'id1', '[ { "op": "remove", "set": "set2" } ]' UNION ALL
SELECT 10, 'id1', '[ { "op": "add", "set": "set2" } ]'
)
the result will be
Row  bin   result
1    set1  id2,id1
2    set2  id3,id1

In PostgreSQL, what's the best way to select an object from a JSONB array?

Right now, I have an array that I'm able to select off a table.
[{"_id": 1, "count": 3}, {"_id": 2, "count": 14}, {"_id": 3, "count": 5}]
From this, I only need the count for a particular _id. For example, I need the count for
_id: 3
I've read the documentation but I haven't been able to figure out the correct way to get the object.
WITH test_array(data) AS ( VALUES
('[
{"_id": 1, "count": 3},
{"_id": 2, "count": 14},
{"_id": 3, "count": 5}
]'::JSONB)
)
SELECT val->>'count' AS result
FROM
test_array ta,
jsonb_array_elements(ta.data) val
WHERE val @> '{"_id":3}'::JSONB;
Result:
result
--------
5
(1 row)
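On PostgreSQL 12 and later, a SQL/JSON path expression is an alternative; a sketch against the same test_array data:
SELECT jsonb_path_query(ta.data, '$[*] ? (@._id == 3).count') AS result
FROM test_array ta;
This returns the matching count as a jsonb value (5).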