Find the nearest weather station for multiple points in BigQuery [duplicate] - google-bigquery

I have a data table on Google BigQuery with locations, call it TABLE_A.
This is what TABLE_A looks like:
ID,Lat,Lon
1,32.95,65.567
2,33.95,65.566
There's a second table with different items, call it TABLE_B. TABLE_B has the same schema as TABLE_A. This is a sample from TABLE_B:
ID,Lat,Lon
a,32.96,65.566
b,33.96,65.566
I want to create a new table, TABLE_C, in which every row pairs an item from TABLE_A with the item from TABLE_B that is closest to it (i.e. the distance between their lat/lon pairs is the minimum over the join). This would be an example of TABLE_C with the above sample data:
ID_A,ID_B
1,a
2,b
My actual data is a table of properties with lat/lon pairs on one hand and bigquery-public-data.noaa_gsod.stations on the other hand (I'm looking to find the closest weather station per property).

Below is for BigQuery Standard SQL
#standardSQL
SELECT AS VALUE ARRAY_AGG(STRUCT<id_a INT64, id_b STRING>(a.id, b.id) ORDER BY ST_DISTANCE(a.point, b.point) LIMIT 1)[OFFSET(0)]
FROM (SELECT id, ST_GEOGPOINT(lon, lat) point FROM `project.dataset.table_a`) a
CROSS JOIN (SELECT id, ST_GEOGPOINT(lon, lat) point FROM `project.dataset.table_b`) b
GROUP BY a.id
You can test and play with it using the dummy data from your question:
#standardSQL
WITH `project.dataset.table_a` AS (
SELECT 1 id, 32.95 lat, 65.567 lon UNION ALL
SELECT 2, 33.95, 65.566
), `project.dataset.table_b` AS (
SELECT 'a' id, 32.96 lat, 65.566 lon UNION ALL
SELECT 'b', 33.96, 65.566
)
SELECT AS VALUE ARRAY_AGG(STRUCT<id_a INT64, id_b STRING>(a.id, b.id) ORDER BY ST_DISTANCE(a.point, b.point) LIMIT 1)[OFFSET(0)]
FROM (SELECT id, ST_GEOGPOINT(lon, lat) point FROM `project.dataset.table_a`) a
CROSS JOIN (SELECT id, ST_GEOGPOINT(lon, lat) point FROM `project.dataset.table_b`) b
GROUP BY a.id
with the result:
Row id_a id_b
1 1 a
2 2 b
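As a rough sketch only (not run against the real data): the same pattern applied to your actual use case, assuming a hypothetical `project.dataset.properties` table with id, lat and lon columns, and using the lat, lon, usaf and wban columns of the public stations table:
#standardSQL
SELECT AS VALUE ARRAY_AGG(
  STRUCT(p.id AS property_id, s.usaf AS station_usaf, s.wban AS station_wban)
  ORDER BY ST_DISTANCE(p.point, s.point) LIMIT 1)[OFFSET(0)]
FROM (SELECT id, ST_GEOGPOINT(lon, lat) point FROM `project.dataset.properties`) p
CROSS JOIN (
  SELECT usaf, wban, ST_GEOGPOINT(lon, lat) point
  FROM `bigquery-public-data.noaa_gsod.stations`
  WHERE lat IS NOT NULL AND lon IS NOT NULL  -- stations without coordinates cannot be matched
) s
GROUP BY p.id
This is only a template; adjust the property table name and the station identifier columns to whatever your schema actually uses.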

Related

most efficient way to select duplicate rows with max timestamp

Suppose I have a table called t, which looks like this:
id content time
1 'a' 100
1 'a' 101
1 'b' 102
2 'c' 200
2 'c' 201
The id values are duplicated, and for the same id, content can also be duplicated. Now I want to select, for each id, the row with the max timestamp, which would be:
id content time
1 'b' 102
2 'c' 201
And this is my current solution:
select t1.id, t1.content, t1.time
from (
select id, content, time from t
) as t1
right join (
select id, max(time) as time from t group by id
) as t2
on t1.id = t2.id and t1.time = t2.time;
But this looks inefficient to me, because in theory, once select id, max(time) as time from t group by id has been executed, the rows I want have already been located. The right join seems to add an unnecessary extra cost (O(n^2) in the worst case).
So is there a more efficient way to do it, or is there something I misunderstand?
Use DISTINCT ON:
SELECT DISTINCT ON (id) id, content, time
FROM yourTable
ORDER BY id, time DESC;
On Postgres, this is usually the most performant way to write your query, and it should outperform ROW_NUMBER and other approaches.
The following index might speed up this query:
CREATE INDEX idx ON yourTable (id, time DESC, content);
This index, if used, would let Postgres rapidly find, for each id, the record having the latest time. This index also covers the content column.
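For comparison, here is a hedged sketch of the ROW_NUMBER variant mentioned above, in case you need something portable beyond Postgres (same table and columns assumed):
SELECT id, content, time
FROM (
  SELECT id, content, time,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY time DESC) AS rn  -- rank rows per id, latest first
  FROM yourTable
) ranked
WHERE rn = 1;
It returns one row per id (the one with the latest time); on Postgres, DISTINCT ON is usually still the simpler and faster choice.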
Try this:
SELECT a.id, a.content, a.time
FROM t AS a
INNER JOIN (
    SELECT id, MAX(time) AS time
    FROM t
    GROUP BY id
) AS b ON a.id = b.id AND a.time = b.time

Counting distinct values from two columns in SQL

I have a table in a database in which there are corresponding values for the primary key.
I want to count the distinct values from two columns.
I already know one method: using UNION ALL and then applying GROUP BY on the resulting table.
SELECT Id, Brand1
INTO #Temp
FROM data
UNION ALL
SELECT Id, Brand2
FROM data;

SELECT Id, COUNT(DISTINCT Brand1)
FROM #Temp
GROUP BY Id;
The same thing can be done in BigQuery as well, using a temp table.
Sample Table
ID Brand1 Brand2
1 A B
1 B C
2 D A
2 A D
Resultant Table
ID Distinct_Count_Brand
1 3
2 2
As you can see, the Distinct_Count_Brand column counts the unique brands across the two columns Brand1 and Brand2.
I already know one way (basically unpivoting), but I want to know whether there is another way to count unique values from two columns.
I don't know BigQuery's quirks, but perhaps you can just inline the union query:
SELECT ID, COUNT(DISTINCT Brand)
FROM
(
SELECT ID, Brand1 AS Brand FROM data
UNION ALL
SELECT ID, Brand2 FROM data
) t
GROUP BY ID;
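As a hedged check only, with dummy data in a WITH clause in the same style as the other BigQuery examples on this page, the inlined union should run as-is in BigQuery Standard SQL:
#standardSQL
WITH data AS (
  SELECT 1 AS ID, 'A' AS Brand1, 'B' AS Brand2 UNION ALL
  SELECT 1, 'B', 'C' UNION ALL
  SELECT 2, 'D', 'A' UNION ALL
  SELECT 2, 'A', 'D'
)
SELECT ID, COUNT(DISTINCT Brand) AS Distinct_Count_Brand
FROM (
  SELECT ID, Brand1 AS Brand FROM data
  UNION ALL
  SELECT ID, Brand2 FROM data
) t
GROUP BY ID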
In SQL Server, I would use:
Select b.id, count(distinct b.brand)
from data d cross apply
(values (id, brand1), (id, brand2)) b(id, brand)
group by b.id;
Here is a db<>fiddle.
In BigQuery, the equivalent would be expressed as:
select t.id, count(distinct brand)
from t cross join
unnest(array[brand1, brand2]) brand
group by t.id;
Here is a BQ query that demonstrates that this works:
with t as (
select 1 as id, 'A' as brand1, 'B' as brand2 union all
select 1, 'B', 'C' union all
select 2, 'D', 'A' union all
select 2, 'A', 'D'
)
select t.id, count(distinct brand)
from t cross join
unnest(array[brand1, brand2]) brand
group by t.id;

SQL: Finding the closest Lat/Lon record on Google BigQuery


How to return two values from PostgreSQL subquery?

I have a problem where I need to get the last item across various tables in PostgreSQL.
The following code works and returns me the type of the latest update and when it was last updated.
The problem is that this query needs to be used as a subquery: I want to select both the type and the last-updated value from it, and PostgreSQL does not seem to like that (a subquery must return only one column).
Any suggestions?
SELECT last.type, last.max FROM (
SELECT MAX(a.updated_at), 'a' AS type FROM table_a a WHERE a.ref = 5 UNION
SELECT MAX(b.updated_at), 'b' AS type FROM table_b b WHERE b.ref = 5
) AS last ORDER BY max LIMIT 1
The query is used like this inside a CTE:
WITH sql_query as (
SELECT id, name, address, (...other columns),
last.type, last.max FROM (
SELECT MAX(a.updated_at), 'a' AS type FROM table_a a WHERE a.ref = 5 UNION
SELECT MAX(b.updated_at), 'b' AS type FROM table_b b WHERE b.ref = 5
) AS last ORDER BY max LIMIT 1
FROM table_c
WHERE table_c.fk_id = 1
)
The inherent problem is that SQL (all SQL, not just Postgres) requires that a subquery used within a select clause return only a single value. If you think about that restriction for a while, it does make sense: the select clause returns rows and a fixed number of columns, and each row/column location is a single position within that grid. You can bend the rule a bit by putting a concatenation into a single position (or a single "complex type" such as a JSON value), but it remains a single position in that grid regardless.
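As a hedged illustration of that "single complex type" workaround (assuming the same tables as in the question and Postgres 9.5+ for to_jsonb), the whole last-update row can be packed into one JSON position:
SELECT c.id, c.name, c.address,  -- ...other columns
       (SELECT to_jsonb(last)    -- one column holding e.g. {"max_date": ..., "type": "a"}
        FROM (
            SELECT MAX(a.updated_at) AS max_date, 'a' AS type FROM table_a a WHERE a.ref = 5
            UNION ALL
            SELECT MAX(b.updated_at), 'b' FROM table_b b WHERE b.ref = 5
            ORDER BY max_date DESC
            LIMIT 1
        ) AS last) AS last_update
FROM table_c c
WHERE c.fk_id = 1;
That keeps the one-column rule intact, at the cost of unpacking the JSON afterwards.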
Here, however, you want 2 separate columns AND you need to return both columns from the same row, so instead of LIMIT 1 I suggest using ROW_NUMBER() to facilitate this:
WITH LastVals as (
SELECT type
, max_date
, row_number() over(order by max_date DESC) as rn
FROM (
SELECT MAX(a.updated_at) AS max_date, 'a' AS type FROM table_a a WHERE a.ref = 5
UNION ALL
SELECT MAX(b.updated_at) AS max_date, 'b' AS type FROM table_b b WHERE b.ref = 5
) AS vals
)
, sql_query as (
SELECT id
, name, address, (...other columns)
, (select type from lastVals where rn = 1) as last_type
, (select max_date from lastVals where rn = 1) as last_date
FROM table_c
WHERE table_c.fk_id = 1
)
----
By the way, in your subquery you should use UNION ALL. With type being a constant like 'a' or 'b', even if MAX(a.updated_at) were identical for 2 or more tables, the rows would still be unique because of the difference in type. UNION will attempt to remove duplicate rows, but here that just isn't going to help, so avoid the wasted effort by using UNION ALL.
----
For another way to skin this cat, consider using a LEFT JOIN instead
SELECT id
, name, address, (...other columns)
, LastVals.type
, LastVals.last_date
FROM table_c
LEFT JOIN (
SELECT type
, last_date
, row_number() over(order by last_date DESC) as rn
FROM (
SELECT MAX(a.updated_at) AS last_date, 'a' AS type FROM table_a a WHERE a.ref = 5
UNION ALL
SELECT MAX(b.updated_at) AS last_date, 'b' AS type FROM table_b b WHERE b.ref = 5
) AS vals
) LastVals ON LastVals.rn = 1
WHERE table_c.fk_id = 1
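Another pattern, shown only as a rough sketch (assuming Postgres 9.3+ and the same tables), is LEFT JOIN LATERAL, which returns both columns from the single latest row without the ROW_NUMBER bookkeeping:
SELECT c.id
     , c.name, c.address  -- ...other columns
     , lv.type     AS last_type
     , lv.max_date AS last_date
FROM table_c c
LEFT JOIN LATERAL (
    SELECT MAX(a.updated_at) AS max_date, 'a' AS type FROM table_a a WHERE a.ref = 5
    UNION ALL
    SELECT MAX(b.updated_at), 'b' FROM table_b b WHERE b.ref = 5
    ORDER BY max_date DESC NULLS LAST
    LIMIT 1
) lv ON true
WHERE c.fk_id = 1;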

Combine the most recent entries from a number of tables

I have a master table with a number of IDs in it:
ID ...
0 ...
1 ...
And multiple tables (say vtbl1, vtbl2, vtbl3) with a foreign key to master, a timestamp and a value:
ID Timestamp Value
0 01/01/01.. 2
1 01/01/02.. 7
0 01/01/03.. 5
I would like to get one or more entries for each ID in master, with each entry (or null where no entries exist) containing the most recent entry from each vtbl table (grouped by timestamp):
ID Timestamp vtbl1.Value vtbl2.Value vtbl3.value
0 01/01/03.. 5 2
0 01/01/01.. 4
1 01/01/02.. 7 4 9
I'm sure this is fairly simple but my SQL is rusty and I've been going in circles. Any help would be appreciated.
Clarification
These values come from one or more sensors able to read one or more of the values. So the latest value in each value table for the ID is to be considered the current system state for that ID. If the timestamps match they are considered one update.
I need the minimal set of updates required for each ID to give a full data set for the current state.
Also the values can be of different types.
If I understand your question correctly, one option is to use conditional aggregation and union all:
select id, timestamp,
max(case when tbl = 'tbl1' then value end) t1value,
max(case when tbl = 'tbl2' then value end) t2value,
max(case when tbl = 'tbl3' then value end) t3value
from (
select id, timestamp, value, 'tbl1' tbl
from tbl1
union all
select id, timestamp, value, 'tbl2' tbl
from tbl2
union all
select id, timestamp, value, 'tbl3' tbl
from tbl3
) t
group by id, timestamp
Or, if you have multiple records per id and you want the latest value per id by timestamp, you can include row_number() in your subquery:
select id, timestamp,
max(case when tbl = 'tbl1' then value end) t1value,
max(case when tbl = 'tbl2' then value end) t2value,
max(case when tbl = 'tbl3' then value end) t3value
from (
select id, timestamp, value, 'tbl1' tbl,
row_number() over (partition by id order by timestamp desc) rn
from tbl1
union all
select id, timestamp, value, 'tbl2' tbl,
row_number() over (partition by id order by timestamp desc) rn
from tbl2
union all
select id, timestamp, value, 'tbl3' tbl,
row_number() over (partition by id order by timestamp desc) rn
from tbl3
) t
where rn = 1
group by id, timestamp
This can get difficult though if max(timestamp) values aren't the same in each of the child tables. Which do you join on at that point?
select m.*, v1.value as t1_val, v2.value as t2_val, v3.value as t3_val
from master m
left join (select x.*
from vtbl1 x
join (select id, max(timestamp) as last_ts
from vtbl1
group by id) y
on x.id = y.id
and x.timestamp = y.last_ts) v1
on m.id = v1.id
left join (select x.*
from vtbl2 x
join (select id, max(timestamp) as last_ts
from vtbl2
group by id) y
on x.id = y.id
and x.timestamp = y.last_ts) v2
on m.id = v2.id
left join (select x.*
from vtbl3 x
join (select id, max(timestamp) as last_ts
from vtbl3
group by id) y
on x.id = y.id
and x.timestamp = y.last_ts) v3
on m.id = v3.id
The fastest query technique depends on the distribution of values. DISTINCT ON would be a simple solution in Postgres, ideal for just a few values per id in each child table. But guessing from your description I expect many rows per id, so I suggest a solution with LATERAL joins. Requires Postgres 9.3+:
Optimize GROUP BY query to retrieve latest record per user
One more complication for your already-not-so-simple case:
Also the values can be of different types
Alternative 1
Cast all values to text. Every data type can be cast to text.
Base query
SELECT m.id, v.timestamp, 1 AS tbl, v.value -- simple int as table id
FROM master m
, LATERAL (
SELECT timestamp, value::text -- cast to text
FROM vtbl1
WHERE id = m.id -- lateral reference
ORDER BY timestamp DESC NULLS LAST
LIMIT 1
) v
UNION ALL
SELECT m.id, v.timestamp, 2 AS tbl, v.value -- ascending without gaps
FROM master m
, LATERAL (
SELECT timestamp, value::text
FROM vtbl2
WHERE id = m.id
ORDER BY timestamp DESC NULLS LAST
LIMIT 1
) v
UNION ALL
SELECT m.id, v.timestamp, 3 AS tbl, v.value::text
FROM ...
;
All you need for this to be fast is an index on (id, timestamp) for each child table. Best in this form (adding value is only useful if you get index-only scans out of it):
CREATE INDEX vtbl1_combo_idx ON vtbl1 (id, timestamp DESC NULLS LAST, value)
1a. Aggregate (pseudo-crosstab)
To format as desired, use aggregate functions on CASE expressions in Postgres 9.3 or older (as demonstrated by @sgeddes) or, better, the aggregate FILTER clause in Postgres 9.4+:
How can I simplify this game statistics query?
SELECT id, timestamp
, max(value) FILTER (WHERE tbl = 1) AS val1
, max(value) FILTER (WHERE tbl = 2) AS val2
, ...
FROM ( <query from above> ) t
GROUP BY 1, 2;
1b. Crosstab
Actual cross tabulation (also called "pivot" in other RDBMS) should be considerably faster. You need the additional module tablefunc installed, instructions below.
The special difficulty here: we have a composite "row name" (id, timestamp), but the function expects a single column as row name. So we substitute with row_number(), but do not display that surrogate key in the result:
SELECT id, timestamp, val1, val2, val3, ...
-- normally SELECT * is enough; explicit list to filter rn
FROM crosstab(
$$
SELECT row_number() OVER (ORDER BY id, timestamp DESC NULLS LAST) AS rn
, id, timestamp, tbl, value
FROM ( <query from above> ) t
ORDER BY 1
$$
, 'SELECT generate_series(1,3)' -- replace 3 with highest table nr.
) AS ct (
rn int, id int, timestamp date
, val1 text, val2 text, val3 text, ...);
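The tablefunc module ships with the standard Postgres contrib packages; assuming those packages are present on the server, enabling it is a one-time command per database:
CREATE EXTENSION IF NOT EXISTS tablefunc;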
Closely related:
Postgres - Transpose Rows to Columns
Relevant basics:
PostgreSQL Crosstab Query
Pivot on Multiple Columns using Tablefunc
Alternative 2
Simple, but may be just as fast and preserves original data types:
SELECT id, timestamp
, max(val1) AS val1, max(val2) AS val2, max(val3) AS val3, ...
FROM (
SELECT m.id, v.timestamp
, v.value AS val1, NULL::int AS val2, NULL::numeric AS val3, ...
-- list all values with actual data type
FROM master m
, LATERAL (
SELECT timestamp, value
FROM vtbl1
WHERE id = m.id
ORDER BY timestamp DESC NULLS LAST
LIMIT 1
) v
UNION ALL
SELECT m.id, v.timestamp
, NULL, v.value, NULL, ... -- column names & data types defined in first SELECT
FROM master m
, LATERAL (
SELECT timestamp, value
FROM vtbl2
WHERE id = m.id
ORDER BY timestamp DESC NULLS LAST
LIMIT 1
) v
UNION ALL
SELECT m.id, v.timestamp
, NULL, NULL, v.value, ...
FROM ...
) t
GROUP BY 1, 2
ORDER BY 1, 2;
Aside: never use basic type names or reserved words (in standard SQL), such as timestamp, as identifiers.
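For instance, a hypothetical redefinition of one child table that avoids the type name as a column name:
CREATE TABLE vtbl1 (
  id    int REFERENCES master (id),
  ts    timestamptz NOT NULL,  -- "ts" instead of the type name "timestamp"
  value int
);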