Dynamic Column Names in BigQuery SQL Query - sql

I have a BigQuery table in which every row is a visit of a user in a country. The schema is something like this:
UserID | Place | StartDate | EndDate | etc ...
---------------------------------------------------------------
134 | Paris | 234687432 | 23648949 | etc ...
153 | Bangkok | 289374897 | 2348709 | etc ...
134 | Paris | 9287324892 | 3435438 | etc ...
The "Place" column can take at most a few dozen distinct values, but I don't know them all in advance.
I want to query this table so that the columns of the result are all the distinct values of the Place column, and the values are the total number of visits per user in that place.
The end result should look like this:
UserID | Paris | Bangkok | Rome | London | Rivendell | Alderaan
----------------------------------------------------------------
134 | 2 | 0 | 0 | 0 | 0 | 0
153 | 0 | 1 | 0 | 0 | 0 | 0
I guess I can select all the possible values of "Place" with SELECT DISTINCT, but how can I achieve this structure in the result table?
Thanks

Below is for BigQuery Standard SQL
Step 1 - dynamically assemble a proper SQL statement with all possible values of the "Place" field
#standardSQL
SELECT '''
SELECT UserID,''' || STRING_AGG(DISTINCT
' COUNTIF(Place = "' || Place || '") AS ' || REPLACE(Place, ' ', '_')
) || ''' FROM `project.dataset.table`
GROUP BY UserID
'''
FROM `project.dataset.table`
Note: you will get a one-row output with text like below (already split into multiple rows for readability):
SELECT UserID,
COUNTIF(Place = "Paris") AS Paris,
COUNTIF(Place = "Los Angeles") AS Los_Angeles
FROM `project.dataset.table`
GROUP BY UserID
Note: I replaced Bangkok with Los Angeles so you can see why it is important to replace possible spaces with underscores.
Step 2 - just copy the output text of Step 1 and run it
Obviously, you can automate the above two steps using any client of your choice.
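For example, here is a minimal sketch of that automation using BigQuery scripting (this assumes scripting/EXECUTE IMMEDIATE is available in your environment; `project.dataset.table` is a placeholder):
DECLARE query STRING;

-- Step 1: assemble the pivot query from the distinct Place values
SET query = (
  SELECT 'SELECT UserID, ' || STRING_AGG(DISTINCT
           'COUNTIF(Place = "' || Place || '") AS ' || REPLACE(Place, ' ', '_'))
         || ' FROM `project.dataset.table` GROUP BY UserID'
  FROM `project.dataset.table`
);

-- Step 2: run the generated statement
EXECUTE IMMEDIATE query;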

If you just want to count the places, you can use countif():
select userid,
countif(place = 'Paris') as paris,
countif(place = 'Bangkok') as bangkok,
countif(place = 'Rome') as rome,
. . .
from t
group by userid;

Related

Distinct values between two SQL queries

I want to be able to find any differences between data entered on one day versus another.
The relevant columns from the table are shown below:
Name | Size | DateSale | Location | Comments | Date
The two current queries are:
Select Name, Size, DateSale, Location, Comments from [Table] where Date = '06/02/2022'
Select Name, Size, DateSale, Location, Comments from [Table] where Date = '06/01/2022'
How would I come up with a list of values that differ between these two result sets? I tried working with SELECT DISTINCT but could not figure it out.
Sample Data:
Name | Size | DateSale | Location | Comments | Date
john | 100 |06/05/2022| Houston | proj. | 06/02/2022
john | 100 |06/04/2022| Dallas | | 06/01/2022
jake | 90 |06/04/2022| Houston | proj. | 06/02/2022
jake | 90 |06/04/2022| Houston | proj. | 06/01/2022
Desired Result:
john | 100 |06/05/2022| Houston | proj. | 06/02/2022
Since the keys (Name + Size) are the same but there are differences in the other columns (DateSale, Location, or Comments), it should return
the row from the first query (the most recent date).
SELECT y.*
FROM (SELECT * FROM [Table] WHERE Date = '06/01/2022') AS x,
     (SELECT * FROM [Table] WHERE Date = '06/02/2022') AS y
WHERE x.Name = y.Name
  AND x.Size = y.Size
  AND (x.DateSale != y.DateSale OR x.Location != y.Location OR x.Comments != y.Comments)
This solution worked for me
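For the same sample data, an alternative sketch uses EXCEPT (assuming the same [Table] and dates as above); note that it also returns rows that exist only on the first date, not just changed ones:
SELECT Name, Size, DateSale, Location, Comments FROM [Table] WHERE Date = '06/02/2022'
EXCEPT
SELECT Name, Size, DateSale, Location, Comments FROM [Table] WHERE Date = '06/01/2022';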

How to loop over an array stored as a string in a WHERE clause

I have an information table with a column that holds an array in string format. The array length is unknown and may be 0. How can I use it in a WHERE clause in PostgreSQL?
* hospital_information_table
| ID | main_name | alternative_name |
| --- | ---------- | ----------------- |
| 111 | 'abc' | 'abe, abx' |
| 222 | 'bbc' | '' |
| 333 | 'cbc' | 'cbe,cbd,cbf,cbg' |
* record
| ID | name | hospital_id |
| --- | ------- | ------------ |
| 1 | 'abc-1' | |
| 2 | 'bbe+2' | |
| 3 | 'cbf*3' | |
e.g. this column holds alternative names of hospitals; say 'abc,abd,abe,abf' is the alternative_name for ID '111'. I have a record with the hospital name 'cbf*3' ('3' is the department name) and I would like to find its ID. How can I check the names in 'cbe,cbd,cbf,cbg' one by one and get the ID '333'?
--update--
In the example, in the record table, I used '-', '*', and '+' to show that I cannot split the name in the record table by any fixed pattern. But I can be sure that one of the alternative names may appear in the record name (as a substring), e.g. 'cbf' in 'cbf*3'. I would like to check all names: is 'abe' in 'cbf*3'? No. Is 'abx' in 'cbf*3'? No. Then move on to the next row, and so on.
--update--
Thanks for the answers! They are great!
For more detail: the original dataset is not in an alphabetic language, and the text in the record name is not separable; it is really hard to find one or more reliable separators. Therefore, solutions that rely on a regex like '[-*+]' will not work here.
Thanks in advance!
You could use regexp_split_to_array to convert the comma-delimited string to a proper array, and then use the ANY operator to search inside it:
SELECT r.*, h.id
FROM record r
JOIN hospital_information_table h
  ON SPLIT_PART(r.name, '-', 1) = ANY(REGEXP_SPLIT_TO_ARRAY(h.alternative_name, ','))
SQLFiddle demo
Substring can be used with a regular expression to get the hospital name from the record's name.
And String_to_array can transform a CSV string to an array.
SELECT
r.id as record_id
, r.name as record_name
, h.id as hospital_id
FROM record r
LEFT JOIN hospital_information h
ON SUBSTRING(r.name from '^(.*)[+*\-]\w+$') = ANY(STRING_TO_ARRAY(h.alternative_name,',')||h.main_name)
WHERE r.hospital_id IS NULL;
| record_id | record_name | hospital_id |
| --------- | ----------- | ----------- |
| 1 | abc-1 | 111 |
| 2 | bbe+2 | 222 |
| 3 | cbf*3 | 333 |
Demo on db<>fiddle here
Btw, text[] can be used as a datatype in a table.
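To address the update about the record name not being splittable, here is a minimal sketch (using the table and column names from the question) that checks each main/alternative name as a substring of record.name instead of splitting the record name; be aware that very short names can produce false positives:
SELECT r.id AS record_id, r.name AS record_name, h.id AS hospital_id
FROM record r
LEFT JOIN hospital_information_table h
  ON EXISTS (
       -- candidate names: each comma-separated alternative plus the main name
       SELECT 1
       FROM unnest(string_to_array(h.alternative_name, ',') || h.main_name) AS cand(nm)
       WHERE trim(nm) <> ''
         AND position(trim(nm) IN r.name) > 0
     );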

Count string occurrences within a list column - Snowflake/SQL

I have a table with a column that contains a list of strings like below:
EXAMPLE:
STRING User_ID [...]
"[""null"",""personal"",""Other""]" 2122213 ....
"[""Other"",""to_dos_and_thing""]" 2132214 ....
"[""getting_things_done"",""TO_dos_and_thing"",""Work!!!!!""]" 4342323 ....
QUESTION:
I want to get a count of the number of times each unique string appears (strings are separable within the STRING column by commas), but I only know how to do the following:
SELECT u.STRING, count(u.USERID) as cnt
FROM table u
group by u.STRING
order by cnt desc;
However, the above doesn't work, as it only counts the number of user IDs that share a specific grouping of strings.
The ideal output using the example above would look like this:
DESIRED OUTPUT:
STRING COUNT_Instances
"null" 1223
"personal" 543
"Other" 324
"to_dos_and_thing" 221
"getting_things_done" 146
"Work!!!!!" 22
Based on your description, here is my sample table:
create table u (user_id number, string varchar);
insert into u values
(2122213, '"[""null"",""personal"",""Other""]"'),
(2132214, '"[""Other"",""to_dos_and_thing""]"'),
(2132215, '"[""getting_things_done"",""TO_dos_and_thing"",""Work!!!!!""]"' );
I used SPLIT_TO_TABLE to split each string as a row, and then REGEXP_SUBSTR to clean the data. So here's the query and output:
select REGEXP_SUBSTR( s.VALUE, '""(.*)""', 1, 1, 'i', 1 ) extracted, count(*) from u,
lateral SPLIT_TO_TABLE( string , ',' ) s
GROUP BY extracted
order by count(*) DESC;
+---------------------+----------+
| EXTRACTED | COUNT(*) |
+---------------------+----------+
| Other | 2 |
| null | 1 |
| personal | 1 |
| to_dos_and_thing | 1 |
| getting_things_done | 1 |
| TO_dos_and_thing | 1 |
| Work!!!!! | 1 |
+---------------------+----------+
SPLIT_TO_TABLE https://docs.snowflake.com/en/sql-reference/functions/split_to_table.html
REGEXP_SUBSTR https://docs.snowflake.com/en/sql-reference/functions/regexp_substr.html
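If the column actually stores a clean JSON array, e.g. ["null","personal","Other"] without the doubled quotes shown above (an assumption about your real data), a LATERAL FLATTEN over PARSE_JSON is an alternative sketch:
-- only works if STRING parses as JSON; otherwise stay with SPLIT_TO_TABLE + REGEXP_SUBSTR
SELECT f.value::string AS extracted, COUNT(*) AS count_instances
FROM u,
     LATERAL FLATTEN(input => PARSE_JSON(string)) f
GROUP BY extracted
ORDER BY count_instances DESC;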

Pull NULL if a column is not present in the table while doing a UNION - SQL Server

I am currently building a dynamic SQL query. The tables and columns are sent as parameters, so a requested column may not be present in the table. Is there a way to return NULL in the result set when the column is not present in the table?
ex:
SELECT * FROM Table1
Output:
created date | Name | Salary | Married
-------------+-------+--------+----------
25-Jan-2016 | Chris | 2500 | Y
27-Jan-2016 | John | 4576 | N
30-Jan-2016 | June | 3401 | N
So when I run the query below
SELECT Created_date, Name, Age, Married
FROM Table1
I need to get
created date | Name | AGE | Married
-------------+-------+--------+----------
25-Jan-2016 | Chris | NULL | Y
27-Jan-2016 | John | NULL | N
30-Jan-2016 | June | NULL | N
Does anything like IF NOT EXISTS or ISNULL work in this?
I can't use extensive T-SQL in this segment, and it needs to stay simple since I am creating a UNION query across more than 50 tables (requirement :| ). Any advice would be of great help to me.
I can't think of an easy solution. Since you're using dynamic SQL, instead of
(previous dynamic string part) + ' fieldname ' + (next dynamic string part)
you could use
(previous dynamic string part)
+ case when exists (
      select 1
      from sys.tables t
      inner join sys.columns c on t.object_id = c.object_id
      where c.name = your_field_name and t.name = your_table_name
  ) then ' fieldname ' else ' NULL ' end
+ (next dynamic string part)
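Along the same lines, a hedged sketch using COL_LENGTH while assembling the dynamic string (the names dbo.Table1 and Age are just illustrative, matching the example above):
-- COL_LENGTH returns NULL when the column does not exist, so NULL is substituted
DECLARE @sql nvarchar(max) =
    N'SELECT Created_date, Name, '
    + CASE WHEN COL_LENGTH('dbo.Table1', 'Age') IS NOT NULL
           THEN N'Age' ELSE N'NULL AS Age' END
    + N', Married FROM dbo.Table1';
EXEC sp_executesql @sql;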

Splitting a string column in BigQuery

Let's say I have a table in BigQuery containing 2 columns. The first column represents a name, and the second is a delimited list of values, of arbitrary length. Example:
Name | Scores
-----+-------
Bob |10;20;20
Sue |14;12;19;90
Joe |30;15
I want to transform this into rows with two columns, where the first is the name and the second is a single score value, like so:
Name,Score
Bob,10
Bob,20
Bob,20
Sue,14
Sue,12
Sue,19
Sue,90
Joe,30
Joe,15
Can this be done in BigQuery alone?
Good news everyone! BigQuery can now SPLIT()!
Look at "find all two word phrases that appear in more than one row in a dataset".
There is no current way to split() a value in BigQuery to generate multiple rows from a string, but you could use a regular expression to look for the commas and find the first value. Then run a similar query to find the 2nd value, and so on. They can all be merged into only one query, using the pattern presented in the above example (UNION through commas).
Rewriting Elad Ben Akoune's answer in Standard SQL, the query becomes this:
WITH name_score AS (
SELECT Name, split(Scores,';') AS Score
FROM (
(SELECT * FROM (SELECT 'Bob' AS Name ,'10;20;20' AS Scores))
UNION ALL
(SELECT * FROM (SELECT 'Sue' AS Name ,'14;12;19;90' AS Scores))
UNION ALL
(SELECT * FROM (SELECT 'Joe' AS Name ,'30;15' AS Scores))
))
SELECT name, score
FROM name_score
CROSS JOIN UNNEST(name_score.score) AS score;
And this outputs:
+------+-------+
| name | score |
+------+-------+
| Bob | 10 |
| Bob | 20 |
| Bob | 20 |
| Sue | 14 |
| Sue | 12 |
| Sue | 19 |
| Sue | 90 |
| Joe | 30 |
| Joe | 15 |
+------+-------+
If someone is still looking for an answer, here is the legacy SQL version (the one the Standard SQL rewrite above is based on; the comma between the subqueries acts as UNION ALL):
select Name,split(Scores,';') as Score
from (
# replace the inner custom select with your source table
select *
from
(select 'Bob' as Name ,'10;20;20' as Scores),
(select 'Sue' as Name ,'14;12;19;90' as Scores),
(select 'Joe' as Name ,'30;15' as Scores)
);
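In today's Standard SQL this collapses to a single UNNEST over SPLIT; a minimal sketch against a hypothetical source table:
-- `project.dataset.table` stands in for your real table with Name and Scores columns
SELECT Name, Score
FROM `project.dataset.table`,
     UNNEST(SPLIT(Scores, ';')) AS Score;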