PostgreSQL: selecting a window of rows based on conditions

I have car trip data covering multiple trips. The relevant columns are:
id - varchar(32)
sequence - integer - resets to 1 for each new car trip
timestamp - the time at which the device recorded the GPS fix (shown as a plain date in the sample data, but assume it is a timestamp)
latitude - numeric
longitude - numeric
I am trying to find the car trips that run between a particular origin and destination point. For example, with origin 40.34, 23.5 and destination 40.75, 23.9, I want every trip that passes through the origin and later through the destination, including the intermediate points between them.
The data contains two car trips, abc and def. abc took place on December 18th while def took place on December 15th, so def should appear first in the output, which is ordered by the timestamp and sequence columns and grouped by id.
I am unable to figure out how to find the trips that pass through particular points.

Assuming your car trip data table is named trips:
WITH starts_and_ends AS (
    SELECT
        starts.id,
        starts.sequence AS start_sequence,
        ends.sequence   AS end_sequence
    FROM trips AS starts
    JOIN trips AS ends
      ON starts.id = ends.id
     AND starts.sequence < ends.sequence
    WHERE
        starts.latitude = 40.34 AND
        starts.longitude = 23.50 AND
        ends.latitude = 40.75 AND
        ends.longitude = 23.90
)
SELECT
    trips.*
FROM trips
JOIN starts_and_ends
  ON trips.id = starts_and_ends.id
WHERE trips.sequence BETWEEN starts_and_ends.start_sequence
                         AND starts_and_ends.end_sequence
ORDER BY
    trips.id,
    trips.sequence,
    trips.timestamp;
In the WITH query I select the trip ids together with the start and end sequence numbers, then join back to the original table to show the full trip.
Output:
abc 2 2017-12-18 40.34 23.50
abc 3 2017-12-18 40.56 23.80
abc 4 2017-12-18 40.75 23.90
def 2 2017-12-15 40.34 23.50
def 3 2017-12-15 40.55 23.59
def 4 2017-12-15 40.80 23.99
def 5 2017-12-15 40.75 23.90
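One practical caveat: exact equality on numeric GPS coordinates only works if the origin/destination values are recorded verbatim. If the recorded points are approximate, a tolerance match is safer. A sketch of the same query with an arbitrary 0.005-degree tolerance (the tolerance value is my assumption, tune it to your data):
WITH starts_and_ends AS (
    SELECT
        starts.id,
        starts.sequence AS start_sequence,
        ends.sequence   AS end_sequence
    FROM trips AS starts
    JOIN trips AS ends
      ON starts.id = ends.id
     AND starts.sequence < ends.sequence
    WHERE
        abs(starts.latitude  - 40.34) < 0.005 AND
        abs(starts.longitude - 23.50) < 0.005 AND
        abs(ends.latitude    - 40.75) < 0.005 AND
        abs(ends.longitude   - 23.90) < 0.005
)
SELECT trips.*
FROM trips
JOIN starts_and_ends USING (id)
WHERE trips.sequence BETWEEN starts_and_ends.start_sequence
                         AND starts_and_ends.end_sequence
ORDER BY trips.id, trips.sequence, trips.timestamp;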

Try row_number() over()
SELECT *
FROM (
    SELECT
        t.*,
        ROW_NUMBER() OVER (PARTITION BY id ORDER BY sequence, timestamp) AS rn
    FROM yourtable t
    WHERE latitude = 40.34
      AND longitude = 23.5
) d
WHERE rn = 1
NB: I'm not sure whether timestamp is needed in the ordering, but it could perhaps be used as a tie-breaker.


How to turn a wide table into a long table

I have a wide table that looks like this:
Case REFERENCE | OUTCOME_EMP_SITUATION | MONTH1_EMP_SITUATION | MONTH1_REASON      | MONTH3_EMP_SITUATION | MONTH3_REASON      | MONTH6_EMP_SITUATION | MONTH6_REASON
12345          | Employed              | Employed             | Outcome at 1 month | Employed             | Outcome at 3 month | Employed             | Outcome at 6 month
These are survey results that people complete after finishing an employment program. They complete the survey 4 times: once immediately after finishing the program, and then after 1/3/6 months. The problem is that the results for immediately after program completion are in one table (Outcome table), while the 1/3/6-month checkpoint results are in another table (Checkpointinfo table). I would like to combine those tables into a long table, so that instead of having "Outcome" in 5 different columns, I would have it in one column, like this:
Case Reference | Outcome_emp_situation | Month_Reason
12345          | Employed              | NULL
12345          | Employed              | Outcome at 1 month
12345          | Employed              | Outcome at 3 month
12345          | Employed              | Outcome at 6 month
I was wondering if anyone could please help me out to turn this wide query into a long table query.
Here is the query for the wide table:
Select
ch.CASEREFERENCE, oc.OUTCOME_DATE, oc.OUTCOME_REFERENCE_ID, oc.OUTCOME_EMP_SITUATION, oc.OUTCOME_EMPLOYMENT_TYPE, oc.OUTCOME_NUM_JOBS, oc.OUTCOME_NAICS_DESC, oc.OUTCOME_JOB_NATURE,
oc.OUTCOME_WORK_HOURS, oc.OUTCOME_WAGE, oc.OUTCOME_STUDENT_STATUS, oc.OUTCOME_GOT_SERVICE, oc.OUTCOME_RIGHT_SERVICE, oc.OUTCOME_RECOMMEND_PROGRAM,
ck1.REASONCODE AS REASONCODE1,
CASE WHEN ck1.REASONCODE = 'OT1' THEN 'Outcome at 1 month' END MONTH1_REASON,
ck1.MONTH_START_DATE AS MONTH1_START_DATE, ck1.MONTH_END_DATE AS MONTH1_END_DATE, ck1.MONTH_OUTCOME_EMP_SITUATION AS MONTH1_OUTCOME_EMP_SITUATION,
ck1.MONTH_EMPLOYMENT_TYPE AS MONTH1_EMPLOYMENT_TYPE, ck1.MONTH_NUM_JOBS AS MONTH1_NUM_JOBS, ck1.MONTH_NAICS_DESC AS MONTH1_NAICS_DESC, ck1.MONTH_JOB_NATURE AS MONTH1_JOB_NATURE,
ck1.MONTH_WORK_HOURS AS MONTH1_WORK_HOURS, ck1.MONTH_WAGE AS MONTH1_WAGE, ck1.MONTH_STUDENT_STATUS AS MONTH1_STUDENT_STATUS, ck1.MONTH_GOT_SERVICE AS MONTH1_GOT_SERVICE,
ck1.MONTH_RIGHT_SERVICE AS MONTH1_RIGHT_SERVICE, ck1.MONTH_RECOMMEND_PROGRAM AS MONTH1_RECOMMEND_PROGRAM, ck1.MONTH_RESUBMIT_MILESTONE AS MONTH1_RESUBMIT_MILESTONE,
ck1.MONTH_MILESTONE_ACHIEVED AS MONTH1_MILESTONE_ACHIEVED, ck1.MONTH_APPROVED_DATE AS MONTH1_APPROVED_DATE,
ck3.REASONCODE AS REASONCODE3,
CASE WHEN ck3.REASONCODE = 'OT3' THEN 'Outcome at 3 month' END MONTH3_REASON,
ck3.MONTH_START_DATE AS MONTH3_START_DATE, ck3.MONTH_END_DATE AS MONTH3_END_DATE, ck3.MONTH_OUTCOME_EMP_SITUATION AS MONTH3_OUTCOME_EMP_SITUATION,
ck3.MONTH_EMPLOYMENT_TYPE AS MONTH3_EMPLOYMENT_TYPE, ck3.MONTH_NUM_JOBS AS MONTH3_NUM_JOBS, ck3.MONTH_NAICS_DESC AS MONTH3_NAICS_DESC, ck3.MONTH_JOB_NATURE AS MONTH3_JOB_NATURE,
ck3.MONTH_WORK_HOURS AS MONTH3_WORK_HOURS, ck3.MONTH_WAGE AS MONTH3_WAGE, ck3.MONTH_STUDENT_STATUS AS MONTH3_STUDENT_STATUS, ck3.MONTH_GOT_SERVICE AS MONTH3_GOT_SERVICE,
ck3.MONTH_RIGHT_SERVICE AS MONTH3_RIGHT_SERVICE, ck3.MONTH_RECOMMEND_PROGRAM AS MONTH3_RECOMMEND_PROGRAM, ck3.MONTH_RESUBMIT_MILESTONE AS MONTH3_RESUBMIT_MILESTONE,
ck3.MONTH_MILESTONE_ACHIEVED AS MONTH3_MILESTONE_ACHIEVED, ck3.MONTH_APPROVED_DATE AS MONTH3_APPROVED_DATE,
ck6.REASONCODE AS REASONCODE6,
CASE WHEN ck6.REASONCODE = 'OT6' THEN 'Outcome at 6 month' END MONTH6_REASON,
ck6.MONTH_START_DATE AS MONTH6_START_DATE, ck6.MONTH_END_DATE AS MONTH6_END_DATE, ck6.MONTH_OUTCOME_EMP_SITUATION AS MONTH6_OUTCOME_EMP_SITUATION,
ck6.MONTH_EMPLOYMENT_TYPE AS MONTH6_EMPLOYMENT_TYPE, ck6.MONTH_NUM_JOBS AS MONTH6_NUM_JOBS, ck6.MONTH_NAICS_DESC AS MONTH6_NAICS_DESC, ck6.MONTH_JOB_NATURE AS MONTH6_JOB_NATURE,
ck6.MONTH_WORK_HOURS AS MONTH6_WORK_HOURS, ck6.MONTH_WAGE AS MONTH6_WAGE, ck6.MONTH_STUDENT_STATUS AS MONTH6_STUDENT_STATUS, ck6.MONTH_GOT_SERVICE AS MONTH6_GOT_SERVICE,
ck6.MONTH_RIGHT_SERVICE AS MONTH6_RIGHT_SERVICE, ck6.MONTH_RECOMMEND_PROGRAM AS MONTH6_RECOMMEND_PROGRAM, ck6.MONTH_RESUBMIT_MILESTONE AS MONTH6_RESUBMIT_MILESTONE,
ck6.MONTH_MILESTONE_ACHIEVED AS MONTH6_MILESTONE_ACHIEVED, ck6.MONTH_APPROVED_DATE AS MONTH6_APPROVED_DATE
FROM PROGRAM as pg
LEFT JOIN CASEINFO as ch ON pg.CASEID = ch.CASEID
LEFT JOIN OUTCOME as oc ON pg.CASEID = oc.CASEID
LEFT JOIN ( SELECT cp.CASEID, cp.REASONCODE, cp.MONTH_OUTCOME_EMP_SITUATION, cpi.* FROM CHECKPOINT cp LEFT JOIN CHECKPOINTINFO cpi ON cp.CASEREVIEWID = cpi.CASEREVIEWID WHERE cpi.REASONCODE = 'OT1')ck1 ON pg.CASEID = ck1.CASEID
LEFT JOIN ( SELECT cp.CASEID, cp.REASONCODE, cp.MONTH_OUTCOME_EMP_SITUATION, cpi.* FROM CHECKPOINT cp LEFT JOIN CHECKPOINTINFO cpi ON cp.CASEREVIEWID = cpi.CASEREVIEWID WHERE cpi.REASONCODE = 'OT3')ck3 ON pg.CASEID = ck3.CASEID
LEFT JOIN ( SELECT cp.CASEID, cp.REASONCODE, cp.MONTH_OUTCOME_EMP_SITUATION, cpi.* FROM CHECKPOINT cp LEFT JOIN CHECKPOINTINFO cpi ON cp.CASEREVIEWID = cpi.CASEREVIEWID WHERE cpi.REASONCODE = 'OT6')ck6 ON pg.CASEID = ck6.CASEID
If someone could please help me turn this wide table into a long table, it would be much appreciated. Thank you!
You need to unpivot the outcome and reason columns. But first you need an extra column for the overall reason. This is the query:
with a as (
  select 12345 as case_reference,
         'Employed' as OUTCOME_EMP_SITUATION,
         'Employed' as MONTH1_EMP_SITUATION,
         'Outcome at 1 month' as MONTH1_REASON,
         'Employed' as MONTH3_EMP_SITUATION,
         'Outcome at 3 month' as MONTH3_REASON,
         'Employed' as MONTH6_EMP_SITUATION,
         'Outcome at 6 month' as MONTH6_REASON
  from dual
)
select
  case_reference,
  outcome_emp_situation,
  month_reason
from (
  select a.*,
         cast(null as varchar2(1000)) as reason
  from a
) a
unpivot(
  (Outcome_emp_situation, Month_Reason)
  for mon in (
    (OUTCOME_EMP_SITUATION, reason) as 0,
    (MONTH1_EMP_SITUATION, MONTH1_REASON) as 1,
    (MONTH3_EMP_SITUATION, MONTH3_REASON) as 3,
    (MONTH6_EMP_SITUATION, MONTH6_REASON) as 6
  )
)
order by mon asc
CASE_REFERENCE | OUTCOME_EMP_SITUATION | MONTH_REASON
-------------: | :-------------------- | :-----------------
12345 | Employed | null
12345 | Employed | Outcome at 1 month
12345 | Employed | Outcome at 3 month
12345 | Employed | Outcome at 6 month
UPD: explanation below.
The tuple just after the unpivot keyword names the result columns; the column after the for keyword identifies which column group produced the values. The tuples inside in define the column groups: for each group, the group's values are passed to the corresponding (by position) columns of the result tuple, and a new row is generated with the for column set to the value given after the as keyword.
So if you need more columns transferred to each row, add new columns to the result tuple (after unpivot) and to each column group inside in (a sketch follows the notes below). If for some group you do not have enough columns to pass, you can wrap your source query in an outer select and add dummy (or constant-valued) columns for that group.
Note:
The datatypes within each tuple must line up (or be convertible according to the default datatype precedence): members at the same position must share a type, while members at different positions may have different types.
You can reuse the same column in multiple groups and positions.
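For example, to also carry the employment type into each output row (reusing the EMPLOYMENT_TYPE columns that already exist in your wide query, not in the toy with-clause above), both the result tuple and every group would get a third member. A sketch of just the modified unpivot clause:
unpivot(
  (Outcome_emp_situation, Month_Reason, Employment_Type)
  for mon in (
    (OUTCOME_EMP_SITUATION, reason, OUTCOME_EMPLOYMENT_TYPE) as 0,
    (MONTH1_EMP_SITUATION, MONTH1_REASON, MONTH1_EMPLOYMENT_TYPE) as 1,
    (MONTH3_EMP_SITUATION, MONTH3_REASON, MONTH3_EMPLOYMENT_TYPE) as 3,
    (MONTH6_EMP_SITUATION, MONTH6_REASON, MONTH6_EMPLOYMENT_TYPE) as 6
  )
)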

How do I stop my query from pulling duplicates?

Yes, I know this seems simple:
SELECT DISTINCT(...)
Except, it apparently isn't
Here is my actual Query:
SELECT
DeclinationReasons.Reason,
EmployeeInformation.ID,
EmployeeInformation.Employee,
EmployeeInformation.Active,
CompletedTrainings.DecShotDate,
CompletedTrainings.DecShotLocation,
CompletedTrainings.DecReason,
CompletedTrainings.DecExplanation,
IIf([DecShotLocation]="MCS","Yes","No") AS YesMCS,
IIf([DecReason]=1,1,0) AS YesAllergy,
IIf([DecReason]=2,1,0) AS YesImmune,
IIf([DecReason]=3,1,0) AS YesAdverse,
IIf([DecReason]=4,1,0) AS YesMedical,
IIf([DecReason]=5,1,0) AS YesSpiritual,
IIf([DecReason]=6,1,0) AS YesOther,
IIf([DecReason]=7,1,0) AS YesAlready
FROM
EmployeeInformation
INNER JOIN (CompletedTrainings
LEFT JOIN DeclinationReasons ON CompletedTrainings.DecReason = DeclinationReasons.ReasonID)
ON EmployeeInformation.ID = CompletedTrainings.Employee
GROUP BY
DeclinationReasons.Reason,
EmployeeInformation.ID,
EmployeeInformation.Employee,
EmployeeInformation.Active,
CompletedTrainings.DecShotDate,
CompletedTrainings.DecShotLocation,
CompletedTrainings.DecReason,
CompletedTrainings.DecExplanation,
IIf([DecShotLocation]="MCS","Yes","No"),
IIf([DecReason]=1,1,0),
IIf([DecReason]=2,1,0),
IIf([DecReason]=3,1,0),
IIf([DecReason]=4,1,0),
IIf([DecReason]=5,1,0),
IIf([DecReason]=6,1,0),
IIf([DecReason]=7,1,0)
HAVING
((((EmployeeInformation.Active) Like -1)
AND ((CompletedTrainings.DecShotDate + 365 >= DATE())
OR (CompletedTrainings.DecShotDate IS NULL))));
This joins a few tables (obviously) in order to get a number of records. The problem is that if someone is duplicated in the table, with a NULL in the date field on one record and a date on another, it pulls both the NULL and the date; it also pulls multiple NULLs (and presumably could pull multiple dates, though none are present right at the moment).
I need the NULLs, they are actual data in this particular case, but if someone has both a date and a NULL I need to pull only the newest record. I thought I could add MAX(RecordID) from the table, but that didn't change the results of the query either.
That code:
SELECT
DeclinationReasons.Reason,
EmployeeInformation.ID,
EmployeeInformation.Employee,
EmployeeInformation.Active,
MAX(CompletedTrainings.RecordID),
CompletedTrainings.DecShotDate
...
And it returned the same issue: duplicated EmployeeInformation.ID values with different DecShotDate values.
Currently it returns:
ID | Active | DecShotDate | etc. x a bunch
1  | -1     | date        | whatever goes
2  | -1     |             | in these
2  | -1     | date        | columns
These results feed a report that determines the total number of employees who fit its criteria. The NULLs in DecShotDate are needed: they show people who did not refuse to get a flu vaccine in the current year, while the dates show people who did refuse.
Now I have come up with one simple solution: I could add a column to the CompletedTrainings table that contains a date or other value, and add that to the HAVING statement. This might be the right solution, as this is a yearly training questionnaire that employees have to fill out. But I am asking for advice before doing this.
Am I right in thinking I need to add a column to filter by so that older data isn't being pulled, or should I be able to do this by pulling RecordID, and did I just bork that part of the query up?
Edited to add raw table views:
EmployeeInformation Table:
ID | Last | First  | empID | Active | Termdate | DoH  | Title | PT/FT/PD | PI
1  | Doe  | Jane   | 982   | -1     |          | date | Sr    | PD       | X
2  | Roe  | John   | 278   | 0      | date     | date | Jr    | PD       | X
3  | Moe  | Larry  | 1232  | -1     |          | date | Sr    | FT       | X
4  | Zoe  | Debbie | 1424  | -1     |          | date | Sr    | PT       | X
DeclinationReasons Table:
ReasonID | Reason
1        | Allergy
2        | Already got it
3        | Illness
CompletedTrainings Table:
RecordID | Employee | Training | ... | DecShotdate | DecShotLocation | DecShotReason | DecExp
1        | 1        | 4        |     | date        | location        | 2             | text
2        | 1        | 4        |     |             |                 |               |
3        | 2        | 4        |     |             |                 |               |
4        | 3        | 4        |     | date        | location        | 3             | text
5        | 3        | 4        |     | date        | location        | 1             | text
6        | 4        | 4        |     |             |                 |               |
After some serious soul searching, I decided to use another column and filter by that.
In the end my query looks like this:
SELECT *
FROM (
(
SELECT RecordID, DecShotDate, DecShotLocation, DecReason, DecExplanation, Employee,
IIf([DecShotLocation]="MCS","Yes","No") AS YesMCS, IIf([DecReason]=1,1,0) AS YesAllergy,
IIf([DecReason]=2,1,0) AS YesImmune, IIf([DecReason]=3,1,0) AS YesAdverse,
IIf([DecReason]=4,1,0) AS YesMedical, IIf([DecReason]=5,1,0) AS YesSpiritual,
IIf([DecReason]=6,1,0) AS YesOther, IIf([DecReason]=7,1,0) AS YesAlready
FROM CompletedTrainings WHERE (CompletedDate > DATE() - 365 ) AND (Training = 69)) AS T1
LEFT JOIN
(
SELECT ID, Active FROM EmployeeInformation) AS T2 ON T1.Employee = T2.ID)
LEFT JOIN
(
SELECT Reason, ReasonID FROM DeclinationReasons) AS T3 ON T1.DecReason = T3.ReasonID;
This may not have been the best solution, but it did exactly what I needed: get the information by latest entry into the database.
Previously I had tried MAX(), DISTINCT(), etc., but always had the problem of multiple records being retrieved. In this case, I intentionally SELECT the most recent records first, then join them to the results of the next query, and so on, until I have all the required data for my report.
I write this in hopes someone else finds it useful. Or even better if someone tells me why this is wrong, so as to improve my own skills.
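For what it's worth, a more conventional way to keep only the newest record per employee is to join against a per-employee MAX(RecordID) derived table. A sketch in Access SQL, under the assumption that RecordID increases with entry order:
SELECT ct.*
FROM CompletedTrainings AS ct
INNER JOIN (
    SELECT Employee, MAX(RecordID) AS MaxRecordID
    FROM CompletedTrainings
    WHERE Training = 69
    GROUP BY Employee
) AS latest
    ON ct.Employee = latest.Employee
   AND ct.RecordID = latest.MaxRecordID;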

BigQuery full table to partition

I have 340 GB of data in one table (270 days' worth of data). I am now planning to move this data to a partitioned table.
That means I will have 270 partitions. What is the best way to move this data to the partitioned table?
I don't want to run 270 queries, which is a very costly operation, so I am looking for an optimized solution.
I have multiple tables like this, and I need to migrate all of them to partitioned tables.
Thanks,
I see three options:
Direct extraction out of the original table:
Actions (how many queries to run) = Days to extract = 270
Full scans (how much data is scanned, measured in full scans of the original table) = Days = 270
Cost = $5 x Table Size (TB) x Full Scans = $5 x 0.34 x 270 = $459.00
Hierarchical (recursive) extraction (described in Mosha's answer):
Actions = 2^ceil(log2(Days)) - 2 = 510
Full scans = 2 x ceil(log2(Days)) = 18
Cost = $5 x Table Size (TB) x Full Scans = $5 x 0.34 x 18 = $30.60
Clustered extraction (described below):
Actions = Days + 1 = 271
Full scans = 2 (always)
Cost = $5 x Table Size (TB) x Full Scans = $5 x 0.34 x 2 = $3.40
Summary:
Method                              | Actions | Total Full Scans | Total Cost
Direct Extraction                   | 270     | 270              | $459.00
Hierarchical (recursive) Extraction | 510     | 18               | $30.60
Clustered Extraction                | 271     | 2                | $3.40
Definitely, for most practical purposes Mosha's solution is the way to go (I use it in most such cases).
It is relatively simple and straightforward.
Even though you need to run the query 510 times, the query is "relatively" simple and the orchestration logic is easy to implement with whatever client you usually use.
And the cost saving is quite visible: from $460 down to $31, almost 15 times lower!
In case you:
a) want to lower the cost even further, by yet another factor of 9 (so 135x lower in total),
b) and like having fun and more challenges,
take a look at the third option.
“Clustered Extraction” Explanation
Idea / Goal:
Step 1
We want to transform the original table into another [single] table with 270 columns - one column per day.
Each column will hold one serialized row for the respective day from the original table.
The total number of rows in this new table will equal the row count of the "heaviest" day.
This requires just one query (see the example below) with one full scan.
Step 2
Once this new table is ready, we extract day by day, querying ONLY the respective column, and write into the final daily table (the daily tables share the original table's schema and can all be pre-created).
This requires 270 queries, with scans approximately equivalent in total to one full scan of the original table (this really depends on how complex your schema is, so it can vary).
While querying a column, we need to de-serialize the row's value and parse it back into the original schema.
Very simplified example (using BigQuery Standard SQL here):
The purpose of this example is just to give direction, in case you find the idea interesting.
Serialization / de-serialization here is extremely simplified, to keep the focus on the idea rather than on a particular implementation (which can differ from case to case, mostly depending on schema).
So, assume the original table (theTable) looks somewhat like below:
SELECT 1 AS id, "101" AS x, 1 AS ts UNION ALL
SELECT 2 AS id, "102" AS x, 1 AS ts UNION ALL
SELECT 3 AS id, "103" AS x, 1 AS ts UNION ALL
SELECT 4 AS id, "104" AS x, 1 AS ts UNION ALL
SELECT 5 AS id, "105" AS x, 1 AS ts UNION ALL
SELECT 6 AS id, "106" AS x, 2 AS ts UNION ALL
SELECT 7 AS id, "107" AS x, 2 AS ts UNION ALL
SELECT 8 AS id, "108" AS x, 2 AS ts UNION ALL
SELECT 9 AS id, "109" AS x, 2 AS ts UNION ALL
SELECT 10 AS id, "110" AS x, 3 AS ts UNION ALL
SELECT 11 AS id, "111" AS x, 3 AS ts UNION ALL
SELECT 12 AS id, "112" AS x, 3 AS ts UNION ALL
SELECT 13 AS id, "113" AS x, 3 AS ts UNION ALL
SELECT 14 AS id, "114" AS x, 3 AS ts UNION ALL
SELECT 15 AS id, "115" AS x, 3 AS ts UNION ALL
SELECT 16 AS id, "116" AS x, 3 AS ts UNION ALL
SELECT 17 AS id, "117" AS x, 3 AS ts UNION ALL
SELECT 18 AS id, "118" AS x, 3 AS ts UNION ALL
SELECT 19 AS id, "119" AS x, 4 AS ts UNION ALL
SELECT 20 AS id, "120" AS x, 4 AS ts
Step 1 – transform table and write result into tempTable
SELECT
num,
MAX(IF(ts=1, ser, NULL)) AS ts_1,
MAX(IF(ts=2, ser, NULL)) AS ts_2,
MAX(IF(ts=3, ser, NULL)) AS ts_3,
MAX(IF(ts=4, ser, NULL)) AS ts_4
FROM (
SELECT
ts,
CONCAT(CAST(id AS STRING), "|", x, "|", CAST(ts AS STRING)) AS ser,
ROW_NUMBER() OVER(PARTITION BY ts ORDER BY id) num
FROM theTable
)
GROUP BY num
tempTable will look like below:
num ts_1 ts_2 ts_3 ts_4
1 1|101|1 6|106|2 10|110|3 19|119|4
2 2|102|1 7|107|2 11|111|3 20|120|4
3 3|103|1 8|108|2 12|112|3 null
4 4|104|1 9|109|2 13|113|3 null
5 5|105|1 null 14|114|3 null
6 null null 15|115|3 null
7 null null 16|116|3 null
8 null null 17|117|3 null
9 null null 18|118|3 null
Here, I am using simple concatenation for serialization
Step 2 – extracting rows for a specific day and writing the output to the respective daily table
Please note: in the example below we extract rows for ts = 2, which corresponds to column ts_2:
SELECT
r[OFFSET(0)] AS id,
r[OFFSET(1)] AS x,
r[OFFSET(2)] AS ts
FROM (
SELECT SPLIT(ts_2, "|") AS r
FROM tempTable
WHERE NOT ts_2 IS NULL
)
The result will look like below (which is expected):
id x ts
6 106 2
7 107 2
8 108 2
9 109 2
I wish I had more time to write this down properly, so don't judge too harshly if something is missing; this is more of a directional answer. At the same time, the example is pretty reasonable, and if you have a plain, simple schema, almost no extra thinking is required. Of course, with records and nested stuff in the schema, the most challenging part is serialization / de-serialization, but that's where the fun is, along with the extra $ saving.
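For nested or record-heavy schemas, one possibility (my suggestion, not part of the answer above) is to let TO_JSON_STRING do the Step 1 serialization, so that any schema survives the round trip. A sketch on the same toy table:
#standardSQL
SELECT
  num,
  MAX(IF(ts = 1, ser, NULL)) AS ts_1,
  MAX(IF(ts = 2, ser, NULL)) AS ts_2,
  MAX(IF(ts = 3, ser, NULL)) AS ts_3,
  MAX(IF(ts = 4, ser, NULL)) AS ts_4
FROM (
  SELECT
    ts,
    TO_JSON_STRING(t) AS ser,  -- serializes the whole row, nested fields included
    ROW_NUMBER() OVER(PARTITION BY ts ORDER BY id) AS num
  FROM theTable t
)
GROUP BY num
In Step 2, JSON_EXTRACT_SCALAR(ser, '$.id') and friends would then replace the SPLIT-based parsing.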
I will add a fourth option to @Mikhail's answer:
DML query
Actions = 1 query to run
Full scans = 1
Cost = $5 x 0.34 = $1.70 (270x cheaper than option #1 \o/)
With the new DML feature of BigQuery, you can convert a non-partitioned table into a partitioned one while doing only one full scan of the source table.
To illustrate my solution I will use one of BQ's public tables, namely bigquery-public-data.hacker_news.comments. Below is the table's schema:
name    | type      | description
--------|-----------|-----------------------------------------------------
id      | INTEGER   | ...
by      | STRING    | ...
author  | STRING    | ...
...     |           |
time_ts | TIMESTAMP | human readable timestamp in UTC YYYY-MM-DD hh:mm:ss /!\
...     |           |
We are going to partition the comments table based on time_ts
#standardSQL
CREATE TABLE my_dataset.comments_partitioned
PARTITION BY DATE(time_ts)
AS
SELECT *
FROM `bigquery-public-data.hacker_news.comments`
I hope it helps :)
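Once comments_partitioned exists, any query that filters on the partitioning column scans only the matching partitions instead of the whole table, e.g.:
#standardSQL
SELECT id, author
FROM my_dataset.comments_partitioned
WHERE DATE(time_ts) = '2016-01-01'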
If your data was in sharded tables (i.e. with a YYYYmmdd suffix), you could have used the "bq partition" command. But with the data in a single table, you will have to scan it multiple times, applying different WHERE clauses on your partition key column.
The only optimization I can think of is to do it hierarchically: instead of 270 queries doing 270 full table scans, first split the table in half, then each half in half, etc. This way you will need to pay for 2 * log2(270) = 2 * 9 = 18 full scans.
Once the conversion is done - all the temporary tables can be deleted to eliminate extra storage costs.
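A sketch of the first level of that split (BigQuery Standard SQL; the project, dataset, table, and column names are placeholders), with each SELECT written to its own destination table via the query job's destination setting:
#standardSQL
-- First half of the 270 days goes to one temporary table ...
SELECT *
FROM `my_project.my_dataset.original_table`
WHERE DATE(ts) < '2017-05-16';

-- ... and the second half to another. Repeat the halving on each
-- temporary table until every table covers a single day.
SELECT *
FROM `my_project.my_dataset.original_table`
WHERE DATE(ts) >= '2017-05-16';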

Aggregate columns based off the same data

I have the following query -
SELECT d.PRD_YY,
Count(*)
FROM (SELECT CARD,
Min(TXN_DT) mindt
FROM db1.dbo.tblcards
GROUP BY CARD) a
JOIN db1.dbo.tlkpdates d
ON a.MINDT = d.GREG_DT
WHERE d.PRD_YY = 2016
AND d.PRD_NBR = 5
GROUP BY d.PRD_YY
Basically, this tells me how many cards from my tblcards table first appeared in a given date range; the range is taken from the tlkpdates table by joining on mindt from the inner query result.
What I want to do is also see how many cards showed up in that date range altogether, not just the cards whose first occurrence was in that date range.
It doesn't seem like this is possible, because I'm joining greg_dt (which is just a normal Gregorian date like 6/1/2016) on the minimum date, so how could I possibly join on the maximum date (most recent occurrence)?
I know I can just make another query return that same data set, but I'd rather have it in the same query.
Edit: what also has to be considered is that I'm going to want to group on more than just PRD_YY; there's also a period number, period week, and period day, for a more granular view.
sample data -
Card Date Store Transaction
10123131444 2014-05-08 25 141414
40999999999 2013-12-07 847 15154
30999999998 2015-02-05 96 234235
20999999997 2016-03-21 139 2342525
50999999996 2016-03-30 659 1234121515
70999999995 2016-03-04 659 52525
50999999994 2016-03-03 907 2362362
20999999993 2014-05-23 941 2623626
70999999992 2013-12-03 18 123124
40999999991 2014-01-18 107 1512515
current output
prd_yy new_cards
2016 22911
desired output
prd_yy new_cards total_cards
2016 22911 54992
Assuming that card is a PK in the tblCards table (or at least appears at most once per day), I think this will work for what you're trying to do:
SELECT
D.prd_yy,
SUM(CASE WHEN MD.card IS NOT NULL THEN 1 ELSE 0 END) AS new_cards,
COUNT(*) AS total_cards
FROM
dbo.tlkpDates D
INNER JOIN dbo.tblCards C ON C.txn_dt = D.greg_dt
LEFT OUTER JOIN (SELECT card, MIN(txn_dt) AS min_dt FROM dbo.tblCards GROUP BY card) MD ON
MD.card = C.card AND MD.min_dt = C.txn_dt
WHERE
D.prd_yy = '2016' AND
D.prd_nbr = 5
GROUP BY
D.prd_yy
Use the LEFT OUTER JOIN against the minimum dates as a flag for whether or not that particular day is the card's first appearance. Then you can use that with SUM(CASE...) to get your conditional count.
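Incidentally, since COUNT skips NULLs, the conditional sum could equivalently be written as a plain COUNT over the outer-joined column:
SUM(CASE WHEN MD.card IS NOT NULL THEN 1 ELSE 0 END) AS new_cards
-- is equivalent to:
COUNT(MD.card) AS new_cards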

Looking Up Value to Return Hourly Rate

I have a table storing hourly pay rates, with a start and end value associated with each. The idea is that your hourly pay depends on which start/end band your takings fall into.
Table Example - dbo.PayScales
PayScaleId Starting Ending HourlyRate
1 0.00 32.88 12.00
2 32.89 34.20 12.50
3 34.21 35.52 13.00
I have the takings stored in a separate table along with a person id, and I need to look up the hourly rate based on the takings (which I am having a complete mental block about).
Table Example - dbo.Employees
EmpId Takings HourlyRate
1 33.50
2 31.19
3 37.00
So my expected results would be:
EmpId 1 Hourly rate = 12.50
EmpId 2 Hourly rate = 12.00
EmpId 3 Hourly rate = 13.00, as the value is greater than the highest ending value.
You can use CROSS APPLY together with TOP:
SELECT *
FROM dbo.Employees e
CROSS APPLY(
SELECT TOP 1 p.HourlyRate
FROM dbo.PayScales p
WHERE
e.Takings BETWEEN p.Starting AND p.Ending
OR e.Takings > p.Ending
ORDER BY p.Ending DESC
) t
Of course @FelixPamittan's answer solves the problem, but with a small change to your data it comes down to a simple join.
Change the highest Ending to a really high value (999999999, greater than any Takings) or to NULL:
SELECT e.EmpId, e.Takings, p.HourlyRate
FROM #Employees AS e
JOIN #PayScales AS p
    ON e.Takings BETWEEN p.Starting AND p.Ending
-- or, if the highest Ending was set to NULL instead:
--  ON e.Takings BETWEEN p.Starting AND COALESCE(p.Ending, 999999999)
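If the end goal is to fill the empty HourlyRate column on the Employees table, the same lookup can drive an UPDATE. A sketch in SQL Server syntax, assuming the NULL-ended top band from above:
UPDATE e
SET e.HourlyRate = p.HourlyRate
FROM dbo.Employees AS e
JOIN dbo.PayScales AS p
    ON e.Takings BETWEEN p.Starting AND COALESCE(p.Ending, 999999999);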