How to create new columns from key:value pairs in BigQuery? - sql

I have a table A with the structure:

id | key   | value
1  | "abc" | "123"
1  | "def" | "346"
1  | "xyz" | "789"
2  | "abc" | "462"
2  | "def" | "464"
3  | "abc" | "362"
However, this representation proves to be a bit difficult to process in our infra pipelines, so I'm trying to create a new table/materialized view in BigQuery with the following structure:
id | "abc" | "def" | "xyz"
1  | "123" | "346" | "789"
2  | "462" | "464" | NULL
3  | "362" | NULL  | NULL
The number of keys is guaranteed to be small (<100). What's the best way to do this using only SQL/JavaScript UDFs in BigQuery?

Consider the approach below:
execute immediate format('''
select * from your_table
pivot (any_value(value) for key in (%s))
''', (select string_agg(distinct "'" || key || "'") from your_table))
If applied to the sample data in your question, the output is the desired table shown above.
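For reference, with the three keys from the sample data, the dynamically built statement expands to roughly this static query (a sketch; the key order produced by STRING_AGG is not guaranteed):

SELECT *
FROM your_table
PIVOT (ANY_VALUE(value) FOR key IN ('abc', 'def', 'xyz'))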

Related

SQL select all rows within a group by

With the following table design:
Table "devices"
model
serial_number
active
A
11111
1
A
22222
1
A
33333
1
A
44444
0
B
XXXXX
1
B
YYYYY
1
I would like to retrieve the model, a count of the number of active devices (active = 1) for each model, and a list of all serial numbers for each model.
Expected output would be something like this:
[{
    "model": "A",
    "count": 3,
    "serials": ["11111", "22222", "33333"]
}, {
    "model": "B",
    "count": 2,
    "serials": ["XXXXX", "YYYYY"]
}]
I am able to retrieve the (grouped) models and count but how do I get the serial numbers?
SELECT model, count(*) as count
FROM devices
WHERE active = 1
GROUP BY model
I suspect I need a sub-query but I can't wrap my head around this. Thanks.
You can try FOR JSON PATH with the STRING_AGG function, which concatenates the serial_number strings within each model group.
QUOTENAME will help you make the [] array brackets:
SELECT model,
count(*) 'count',
JSON_QUERY(QUOTENAME(STRING_AGG('"' + STRING_ESCAPE(serial_number, 'json') + '"', ','))) serial_number
FROM devices
WHERE active = 1
GROUP BY model
FOR JSON PATH
If you don't want a JSON result, you can use the STRING_AGG function directly:
SELECT model,
count(*) 'count',
STRING_AGG(serial_number, ',') serial_number
FROM devices
WHERE active = 1
GROUP BY model
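For the sample data, the JSON variant should return something like the array below, and the plain variant a comma-separated list (a sketch of the expected results):

[{"model":"A","count":3,"serial_number":["11111","22222","33333"]},
{"model":"B","count":2,"serial_number":["XXXXX","YYYYY"]}]

model | count | serial_number
A     | 3     | 11111,22222,33333
B     | 2     | XXXXX,YYYYY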

How to extract value from json in Postgres based on a key pattern?

In Postgres v9.6.15, I need to get the SUM of all the values using text patterns of certain keys.
For example, having table "TABLE1" with 4 rows:
|group |col1
======================================
Row #1:|group1 |{ "json_key1.id1" : 1 }
Row #2:|group1 |{ "json_key1.id2" : 1 }
Row #3:|group1 |{ "json_key2.idX" : 1 }
Row #4:|group1 |{ "not_me" : 2 }
I'd like to get the int values using a pattern of the first part of the keys ( "json_key1" and "json_key2" ) and SUM them all using a CASE block like so:
SELECT table1.group as group,
COALESCE(sum(
CASE
WHEN table1.col1 = 'col1_val1' THEN (table1.json_col->>'json_key1.%')::bigint
WHEN table1.col1 = 'col1_val2' THEN (table1.json_col->>'json_key2.%')::bigint
ELSE 0::bigint
END), 0)::bigint AS my_result
FROM table1 as table1
GROUP BY table1.group;
I need "my_result" to look like:
|group |my_result
======================================
Row #1:|group1 |3
Is there a way to collect the values using a regex or something like that? I'm not sure if I am checking the right documentation ( https://www.postgresql.org/docs/9.6/functions-json.html ), but I am not finding anything that can help me achieve the above, or whether it is actually possible.
Use jsonb_each_text() in a lateral join to get (key, value) pairs from the json objects:
select
group_col as "group",
coalesce(sum(case when key like 'json_k%' then value::numeric end), 0) as my_result
from table1
cross join jsonb_each_text(col1)
group by group_col
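A minimal end-to-end sketch of this approach (assuming col1 is stored as jsonb; for plain json use json_each_text instead):

create table table1 (group_col text, col1 jsonb);
insert into table1 values
('group1', '{"json_key1.id1": 1}'),
('group1', '{"json_key1.id2": 1}'),
('group1', '{"json_key2.idX": 1}'),
('group1', '{"not_me": 2}');

select
  group_col as "group",
  coalesce(sum(case when key like 'json_k%' then value::numeric end), 0) as my_result
from table1
cross join jsonb_each_text(col1)
group by group_col;
-- group  | my_result
-- group1 |         3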

SQL Rows to Columns if column values are unknown

I have a table that has demographic information about a set of users which looks like this:
User_id  Category  IsMember
1        College   1
1        Married   0
1        Employed  1
1        Has_Kids  1
2        College   0
2        Married   1
2        Employed  1
3        College   0
3        Employed  0
The result set I want is a table that looks like this:
User_Id | College | Married | Employed | Has_Kids
1       | 1       | 0       | 1        | 1
2       | 0       | 1       | 1        | 0
3       | 0       | 0       | 0        | 0
In other words, the table indicates the presence or absence of a category for each user. Sometimes the user will have a category row where the value is false; sometimes the user will have no row for a category at all, in which case IsMember is assumed to be false.
Also, from time to time additional categories will be added to the data set, and I'm wondering if it's possible to do this query without knowing up front all the possible category names; in other words, I won't be able to specify all the column names I want in the result. (Note: only user 1 has category "has_kids", and user 3 is missing a row for category "married".)
(using Postgres)
Thanks.
You can use jsonb functions.
with titles as (
  select jsonb_object_agg(Category, Category) as titles,
         jsonb_object_agg(Category, -1) as defaults
  from demog
),
the_rows as (
  select null::bigint as id, titles as data
  from titles
  union
  select User_id, defaults || jsonb_object_agg(Category, IsMember)
  from demog, titles
  group by User_id, defaults
)
select id, string_agg(value, '|' order by key)
from (
  select id, key, value
  from the_rows, jsonb_each_text(data)
) x
group by id
order by id nulls first
You can see a running example in http://rextester.com/QEGT70842
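With the sample data, the output should look roughly like this (the header row sorts first because of order by id nulls first; -1 marks a missing category):

id | string_agg
   | College|Employed|Has_Kids|Married
1  | 1|1|1|0
2  | 0|1|-1|1
3  | 0|0|-1|-1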
You can replace -1 with 0 for the default value and '|' with ',' for the separator.
You can install the tablefunc module and use the crosstab function:
https://www.postgresql.org/docs/9.1/static/tablefunc.html
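A crosstab sketch for this question's data (assuming a user_categories table as in the colpivot answer below; note that crosstab forces you to spell out the output column list by hand, which is exactly what the question wants to avoid, and that missing cells come back as NULL rather than 0):

CREATE EXTENSION IF NOT EXISTS tablefunc;

SELECT *
FROM crosstab(
  'SELECT user_id, category, is_member FROM user_categories ORDER BY 1, 2',
  'SELECT DISTINCT category FROM user_categories ORDER BY 1'
) AS ct(user_id int, "College" int, "Employed" int, "Has_Kids" int, "Married" int);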
I found a Postgres function script called colpivot here which does the trick. I ran the script to create the function, then created the table in one statement:
select colpivot ('_pivoted', 'select * from user_categories', array['user_id'],
array ['category'], '#.is_member', null);
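colpivot writes its result into the (temporary) table named by its first argument, so the pivoted data can then be read back with an ordinary query:

select * from _pivoted order by user_id;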

Transpose rows into columns in BigQuery (Pivot implementation) [duplicate]

This question already has answers here:
How to Pivot table in BigQuery
(7 answers)
Closed 2 years ago.
I want to generate a new table and place all key value pairs with keys as column names and values as their respective values using BigQuery.
Example:
**Key**        **Value**
channel_title  Mahendra Guru
youtube_id     ugEGMG4-MdA
channel_id     UCiDKcjKocimAO1tV
examId         72975611-4a5e-11e5
postId         1189e340-b08f
channel_title  Ab Live
youtube_id     3TNbtTwLY0U
channel_id     UCODeKM_D6JLf8jJt
examId         72975611-4a5e-11e5
postId         0c3e6590-afeb
I want to convert it to:
**channel_title  youtube_id   channel_id         examId              postId**
Mahendra Guru    ugEGMG4-MdA  UCiDKcjKocimAO1tV  72975611-4a5e-11e5  1189e340-b08f
Ab Live          3TNbtTwLY0U  UCODeKM_D6JLf8jJt  72975611-4a5e-11e5  0c3e6590-afeb
How to do it using BigQuery?
BigQuery does not yet support pivoting functions.
You can still do this in BigQuery using the approach below.
But first, in addition to the two columns in the input data, you must have one more column that specifies which groups of input rows need to be combined into one output row.
So, I assume your input table (yourTable) looks like this:
**id**  **Key**        **Value**
1       channel_title  Mahendra Guru
1       youtube_id     ugEGMG4-MdA
1       channel_id     UCiDKcjKocimAO1tV
1       examId         72975611-4a5e-11e5
1       postId         1189e340-b08f
2       channel_title  Ab Live
2       youtube_id     3TNbtTwLY0U
2       channel_id     UCODeKM_D6JLf8jJt
2       examId         72975611-4a5e-11e5
2       postId         0c3e6590-afeb
So, first you should run the query below:
SELECT 'SELECT id, ' +
GROUP_CONCAT_UNQUOTED(
'MAX(IF(key = "' + key + '", value, NULL)) as [' + key + ']'
)
+ ' FROM yourTable GROUP BY id ORDER BY id'
FROM (
SELECT key
FROM yourTable
GROUP BY key
ORDER BY key
)
The result of the above query will be a string that (if formatted) looks like this:
SELECT
id,
MAX(IF(key = "channel_id", value, NULL)) AS [channel_id],
MAX(IF(key = "channel_title", value, NULL)) AS [channel_title],
MAX(IF(key = "examId", value, NULL)) AS [examId],
MAX(IF(key = "postId", value, NULL)) AS [postId],
MAX(IF(key = "youtube_id", value, NULL)) AS [youtube_id]
FROM yourTable
GROUP BY id
ORDER BY id
You should now copy the above result (note: you don't really need to format it; I did so for presentation only) and run it as a normal query.
The result will be as you expected:
id  channel_id         channel_title  examId              postId         youtube_id
1   UCiDKcjKocimAO1tV  Mahendra Guru  72975611-4a5e-11e5  1189e340-b08f  ugEGMG4-MdA
2   UCODeKM_D6JLf8jJt  Ab Live        72975611-4a5e-11e5  0c3e6590-afeb  3TNbtTwLY0U
Please note: you can skip Step 1 if you can construct the proper query (as in Step 2) by yourself, if the number of fields is small and constant, or if it is a one-time deal. But Step 1 is just a helper step that builds the query for you, so you can create it quickly any time!
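For reference, in today's BigQuery Standard SQL the two steps can be collapsed into a single statement with EXECUTE IMMEDIATE and PIVOT, as in the approach at the top of this page (a sketch):

EXECUTE IMMEDIATE FORMAT('''
SELECT * FROM yourTable
PIVOT (ANY_VALUE(value) FOR key IN (%s))
''', (SELECT STRING_AGG(DISTINCT "'" || key || "'") FROM yourTable));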
If you are interested, you can see more about pivoting in my other posts:
How to scale Pivoting in BigQuery?
Please note: there is a limit of 10K columns per table, so you are limited to 10K distinct keys (columns).
You can also see below as simplified examples (if above one is too complex/verbose):
How to transpose rows to columns with large amount of the data in BigQuery/SQL?
How to create dummy variable columns for thousands of categories in Google BigQuery?
Pivot Repeated fields in BigQuery

How to label a big set of “transitive groups” with a constraint?

EDIT after #NealB's solution: #NealB's solution is very, very fast compared with any other one, and makes this new question about "adding a constraint to improve performance" unnecessary. #NealB's solution needs no improvement: it has O(n) time and is very simple.
The problem of "labeling transitive groups with SQL" has an elegant solution using recursion and a CTE... But this solution consumes exponential time (!). I need to work with 10000 items: with 1000 items it needs 1 second; with 2000, it needs 1 day...
Constraint: in my case it is possible to break the problem into pieces of ~100 items or less, but only to select one group of ~10 items and discard all the other ~90 labeled items...
Is there a generic algorithm to add and use this kind of "pre-selection" to reduce the quadratic, O(N^2), time? Perhaps, as suggested by the comments and #wildplasser, to O(N log(N)) time; but I expect, with "pre-selection", to reduce it to O(N) time.
(EDIT)
I tried to use an alternative algorithm, but it needs some improvement to be used as the solution here; or, to really increase performance (to O(N) time), it needs to use the "pre-selection".
The "pre-selection" (constraint) is based on a "super-set grouping"... Starting from the t1 table of the original "How to label 'transitive groups' with SQL?" question,
table T1
(original T1 augmented by the "super-set grouping label" ssg, and one more row)
ID1 | ID2 | ssg
1 | 2 | 1
1 | 5 | 1
4 | 7 | 1
7 | 8 | 1
9 | 1 | 1
10 | 11 | 2
So there are three groups,
g1: {1,2,5,9} because "1 t 2", "1 t 5" and "9 t 1"
g2: {4,7,8} because "4 t 7" and "7 t 8"
g3: {10,11} because "10 t 11"
The super-group is only an auxiliary grouping:
ssg1: {g1,g2}
ssg2: {g3}
If we have M super-group items and N total T1 items, the average group length will be less than N/M. We can also suppose (for my typical problem) that the maximum ssg length is ~N/M.
So the "label algorithm" needs to run only M times, with ~N/M items each, if it uses the ssg constraint.
An SQL-only solution appears to be a bit of a problem here. With the help of some procedural programming on top of SQL, the solution appears to be fairly simple and efficient. Here is a brief outline of a solution as it could be implemented using any procedural language invoking SQL.
Declare table R with primary key ID, where ID has the same domain as ID1 and ID2 of table T1. Table R contains one other non-key column, a Label number.
Populate table R with the range of ID values found in T1. Set Label to zero (no label).
Using your example data, the initial setup for R would look like:
Table R
ID Label
== =====
1 0
2 0
4 0
5 0
7 0
8 0
9 0
Using a host-language cursor plus an auxiliary counter, read each row from T1. Look up ID1 and ID2 in R. You will find one of four cases:
Case 1: ID1.Label == 0 and ID2.Label == 0
In this case neither of these IDs has been "seen" before: add 1 to the counter and then update both rows of R to the value of the counter: update R set R.Label = :counter where R.ID in (:ID1, :ID2)
Case 2: ID1.Label == 0 and ID2.Label <> 0
In this case, ID1 is new but ID2 has already been assigned a label. ID1 needs to be assigned the same label as ID2: update R set R.Label = :ID2.Label where R.ID = :ID1
Case 3: ID1.Label <> 0 and ID2.Label == 0
In this case, ID2 is new but ID1 has already been assigned a label. ID2 needs to be assigned the same label as ID1: update R set R.Label = :ID1.Label where R.ID = :ID2
Case 4: ID1.Label <> 0 and ID2.Label <> 0
In this case, the row contains redundant information. Both rows of R should contain the same Label value. If not, there is some sort of data integrity problem. Ahhhh... not quite, see edit...
EDIT I just realized that there are situations where both Label values here could be non-zero and different. If both are non-zero and different, then two Label groups need to be merged at this point. All you need to do is choose one Label and update the others to match, with something like: update R set R.Label = :ID1.Label where R.Label = :ID2.Label. Now both groups have been merged under the same Label value.
Upon completion of the cursor, table R will contain Label values needed to update T2.
Table R
ID Label
== =====
1 1
2 1
4 2
5 1
7 2
8 2
9 1
Process table T2 using something along the lines of: set T2.Label to R.Label where T2.ID1 = R.ID. The end result should be:
table T2
ID1 | ID2 | LABEL
1 | 2 | 1
1 | 5 | 1
4 | 7 | 2
7 | 8 | 2
9 | 1 | 1
This process is purely iterative and should scale to fairly large tables without difficulty.
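For readers who want something concrete, here is a plpgsql sketch of the outline above. The table and column names (t1, r, id1, id2, label) follow the answer's prose; this is an illustration of the algorithm, not the answer's own code.

-- Tables as described in the outline (a sketch):
-- create table t1 (id1 integer, id2 integer);
-- create table r (id integer primary key, label integer not null default 0);

create or replace function label_groups() returns void as $$
declare
  rec record;
  l1 integer;
  l2 integer;
  counter integer := 0;
begin
  -- Populate R with every distinct ID found in T1; label 0 means "no label yet".
  insert into r(id, label)
  select x, 0 from (select id1 as x from t1 union select id2 from t1) s;

  for rec in select id1, id2 from t1 loop
    select label into l1 from r where id = rec.id1;
    select label into l2 from r where id = rec.id2;
    if l1 = 0 and l2 = 0 then          -- case 1: both IDs are new
      counter := counter + 1;
      update r set label = counter where id in (rec.id1, rec.id2);
    elsif l1 = 0 then                  -- case 2: only ID1 is new
      update r set label = l2 where id = rec.id1;
    elsif l2 = 0 then                  -- case 3: only ID2 is new
      update r set label = l1 where id = rec.id2;
    elsif l1 <> l2 then                -- case 4 (edit): merge the two groups
      update r set label = l1 where label = l2;
    end if;
  end loop;
end;
$$ language plpgsql;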
I suggest you check this and use some general-purpose language for solving it: http://en.wikipedia.org/wiki/Disjoint-set_data_structure
Traverse the graph, maybe running DFS or BFS from each node, then use this disjoint-set hint. I think this should work.
The #NealB solution is the fastest(!). See an example PostgreSQL implementation here.
Below is an example of another "brute force algorithm", only for curiosity!
As #peter.petrov and #RBarryYoung suggested, some performance problems can be avoided by abandoning the CTE recursion... I fixed some issues in the basic labeler and, on top of it, added the constraint of grouping by a super-set label. This new transgroup1_loop() function is working!
PS: this solution still has performance limitations; please post your answer with a better one, or with some adaptation of this one.
-- DROP table transgroup1;
CREATE TABLE transgroup1 (
id serial NOT NULL PRIMARY KEY,
items integer[], -- two or more items in the transitive relationship
ssg_label varchar(12), -- the super-set grouping label
dels integer[] DEFAULT array[]::integer[]
);
INSERT INTO transgroup1(items,ssg_label) values
(array[1, 2],'1'),
(array[1, 5],'1'),
(array[4, 7],'1'),
(array[7, 8],'1'),
(array[9, 1],'1'),
(array[10, 11],'2');
-- or SELECT array[id1, id2],ssg_label FROM t1, with 10000 items
Then, with these two functions, we can solve the problem:
CREATE FUNCTION transgroup1_loop(p_ssg varchar, p_max_i integer DEFAULT 100)
RETURNS integer AS $funcBody$
DECLARE
cp_dels integer[];
i integer;
BEGIN
i:=1;
LOOP
UPDATE transgroup1
SET items = array_uunion(transgroup1.items,t2.items),
dels = transgroup1.dels || t2.id
FROM transgroup1 AS t1, transgroup1 AS t2
WHERE transgroup1.id=t1.id AND t1.ssg_label=$1 AND
t1.id>t2.id AND t1.items && t2.items;
cp_dels := array(
SELECT DISTINCT unnest(dels) FROM transgroup1
); -- collect all items to delete
RAISE NOTICE '-- bug, repeating dels, item-%; % dels! %', i, array_length(cp_dels,1), array_to_string(cp_dels,';','*');
EXIT WHEN i>p_max_i OR coalesce(array_length(cp_dels,1),0)=0; -- array_length() is NULL for an empty array
DELETE FROM transgroup1
WHERE ssg_label=$1 AND id IN (SELECT unnest(cp_dels));
UPDATE transgroup1 SET dels=array[]::integer[];
i:=i+1;
END LOOP;
UPDATE transgroup1 -- only to beautify
SET items = ARRAY(SELECT unnest(items) ORDER BY 1 desc);
RETURN i;
END;
$funcBody$ LANGUAGE plpgsql VOLATILE;
To run it and see the results, you can use:
SELECT transgroup1_loop('1'); -- run with ssg-1 items only
SELECT transgroup1_loop('2'); -- run with ssg-2 items only
-- show all with a sequential group label:
SELECT *, dense_rank() over (ORDER BY id) AS group_label from transgroup1;
results:
id | items | ssg_label | dels | group_label
----+-----------+-----------+------+-------------
4 | {8,7,4} | 1 | {} | 1
5 | {9,5,2,1} | 1 | {} | 2
6 | {11,10} | 2 | {} | 3
PS: the function array_uunion() is the same as the original:
CREATE FUNCTION array_uunion(anyarray,anyarray) RETURNS anyarray AS $$
-- ensures distinct items of a concatenation
SELECT ARRAY(SELECT unnest($1) UNION SELECT unnest($2))
$$ LANGUAGE sql immutable;
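For example, a quick sanity check (element order in the result is not guaranteed):

SELECT array_uunion(array[1,2,5], array[2,9]); -- {1,2,5,9}, in some order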