I have data from two different sources. On the one hand, I have user data from our app, with a primary key of ID and UTC date; there are only rows for UTC dates on which a user uses the app. On the other hand, I have advertisement campaign attribution data for the users (there can be multiple advertisement campaigns per user). This table has a primary key of ID and campaign, and a column containing an advertisement attribution timestamp. I want to combine the two data sources so that I can compute whether a campaign is generating more revenue than it costs, among other campaign statistics.
App data example:
SELECT
*
FROM UNNEST(ARRAY<STRUCT<ID INT64, UTC_Date DATE, Revenue FLOAT64>>
[(1, DATE('2021-01-01'), 0),
(1, DATE('2021-01-05'), 5),
(1, DATE('2021-01-10'), 0),
(2, DATE('2021-01-03'), 10),
(2, DATE('2021-01-08'), 0),
(2, DATE('2021-01-09'), 0)])
Advertisement campaign attribution data example:
SELECT
*
FROM UNNEST(ARRAY<STRUCT<ID INT64, Attribution_Timestamp TIMESTAMP, campaign_name STRING>>
[(1, TIMESTAMP('2021-01-01 09:54:31'), "A"),
(1, TIMESTAMP('2021-01-09 22:32:51'), "B"),
(2, TIMESTAMP('2021-01-03 19:12:11'), "A")])
The end result I would like to get is:
SELECT
*
FROM UNNEST(ARRAY<STRUCT<ID INT64, UTC_Date DATE, Revenue FLOAT64, campaign_name STRING>>
[(1, DATE('2021-01-01'), 0, "A"),
(1, DATE('2021-01-05'), 5, "A"),
(1, DATE('2021-01-10'), 0, "B"),
(2, DATE('2021-01-03'), 10, "A"),
(2, DATE('2021-01-08'), 0, "A"),
(2, DATE('2021-01-09'), 0, "A")])
This can be achieved by somehow joining the campaign attribution data to the app data and then forward filling.
The problem I have is that the advertisement attribution timestamp can have a mismatch with the UTC dates in the app data table. This means I cannot use a plain left join, as it will not assign campaign_name B to ID 1. Does anyone know an elegant way to solve this problem?
Found a solution! Here is what I did (and a little bit more sample data):
WITH app_data AS
(
SELECT
*
FROM UNNEST(ARRAY<STRUCT<adid INT64, utc_date DATE, Revenue FLOAT64>>
[(1, DATE('2021-01-01'), 0),
(1, DATE('2021-01-05'), 5),
(1, DATE('2021-01-10'), 0),
(1, DATE('2021-01-12'), 0),
(1, DATE('2021-01-15'), 0),
(1, DATE('2021-01-16'), 15),
(1, DATE('2021-01-18'), 0),
(2, DATE('2021-01-03'), 10),
(2, DATE('2021-01-08'), 0),
(2, DATE('2021-01-09'), 0),
(2, DATE('2021-01-15'), 4),
(2, DATE('2021-02-01'), 0),
(2, DATE('2021-02-08'), 8),
(2, DATE('2021-02-15'), 0),
(2, DATE('2021-03-04'), 0),
(2, DATE('2021-03-06'), 12),
(3, DATE('2021-02-15'), 10),
(3, DATE('2021-02-23'), 5),
(3, DATE('2021-03-25'), 0),
(3, DATE('2021-03-30'), 0)])
),
advertisment_attribution_data AS
(
SELECT
*
FROM UNNEST(ARRAY<STRUCT<adid INT64, utc_date DATE, campaign_name STRING>>
[(1, DATE(TIMESTAMP('2021-01-01 09:54:31')), "A"),
(1, DATE(TIMESTAMP('2021-01-09 22:32:51')), "B"),
(1, DATE(TIMESTAMP('2021-01-17 14:30:05')), "C"),
(2, DATE(TIMESTAMP('2021-01-03 19:12:11')), "A"),
(1, DATE(TIMESTAMP('2021-01-15 18:17:57')), "B"),
(3, DATE(TIMESTAMP('2021-03-14 22:32:51')), "C")])
)
SELECT
t1.*,
IFNULL(LAST_VALUE(t2.campaign_name IGNORE NULLS) OVER (PARTITION BY t1.adid ORDER BY t1.utc_date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), "Organic") as campaign_name
FROM
app_data t1
LEFT JOIN
advertisment_attribution_data t2
ON t1.adid = t2.adid
AND t1.utc_date = (SELECT MIN(t3.utc_date) FROM app_data t3 WHERE t2.adid=t3.adid AND t2.utc_date <= t3.utc_date)
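For intuition, the same two steps (snap each attribution onto the user's next app date, then forward-fill with an "Organic" default) can be sketched in plain Python. This is an illustrative re-implementation of the query's logic, not the query itself, and the helper names are mine:

```python
from bisect import bisect_left
from collections import defaultdict

app_rows = [  # (adid, utc_date, revenue) -- the original sample data
    (1, "2021-01-01", 0), (1, "2021-01-05", 5), (1, "2021-01-10", 0),
    (2, "2021-01-03", 10), (2, "2021-01-08", 0), (2, "2021-01-09", 0),
]
attributions = [  # (adid, attribution_date, campaign_name)
    (1, "2021-01-01", "A"), (1, "2021-01-09", "B"), (2, "2021-01-03", "A"),
]

# Sorted app dates per user, mirroring the MIN(utc_date) subquery.
# ISO date strings compare correctly as strings.
dates_by_user = defaultdict(list)
for adid, d, _ in app_rows:
    dates_by_user[adid].append(d)
for ds in dates_by_user.values():
    ds.sort()

# Snap each attribution onto the next app date (the join key).
campaign_at = {}
for adid, d, campaign in attributions:
    ds = dates_by_user[adid]
    i = bisect_left(ds, d)  # first app date >= attribution date
    if i < len(ds):
        campaign_at[(adid, ds[i])] = campaign

# Forward-fill per user (LAST_VALUE ... IGNORE NULLS), default "Organic".
result, last = [], {}
for adid, d, rev in sorted(app_rows):
    if (adid, d) in campaign_at:
        last[adid] = campaign_at[(adid, d)]
    result.append((adid, d, rev, last.get(adid, "Organic")))
```

This reproduces the desired table above, including campaign B on ID 1's 2021-01-10 row.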
EDIT
It doesn't work when I select a real table in app_data. It says: Unsupported subquery with table in join predicate.
EDIT 2
Found a way to work around the restriction on subqueries in join predicates (apparently they are allowed when the joined tables are not selected from an existing table...). This version works in any case:
WITH app_data AS
(
SELECT
*
FROM UNNEST(ARRAY<STRUCT<adid INT64, utc_date DATE, Revenue FLOAT64>>
[(1, DATE('2021-01-01'), 0),
(1, DATE('2021-01-05'), 5),
(1, DATE('2021-01-10'), 0),
(1, DATE('2021-01-12'), 0),
(1, DATE('2021-01-15'), 0),
(1, DATE('2021-01-16'), 15),
(1, DATE('2021-01-18'), 0),
(2, DATE('2021-01-03'), 10),
(2, DATE('2021-01-08'), 0),
(2, DATE('2021-01-09'), 0),
(2, DATE('2021-01-15'), 4),
(2, DATE('2021-02-01'), 0),
(2, DATE('2021-02-08'), 8),
(2, DATE('2021-02-15'), 0),
(2, DATE('2021-03-04'), 0),
(2, DATE('2021-03-06'), 12),
(3, DATE('2021-02-15'), 10),
(3, DATE('2021-02-23'), 5),
(3, DATE('2021-03-25'), 0),
(3, DATE('2021-03-30'), 0)])
),
advertisment_attribution_data AS
(
SELECT
*,
(
SELECT
MIN(t2.utc_date)
FROM app_data t2
WHERE t1.adid=t2.adid
AND t1.utc_date <= t2.utc_date
) as attribution_join_date -- the closest app_data date for this adid on or after the attribution date. This ensures the join later on works.
FROM UNNEST(ARRAY<STRUCT<adid INT64, utc_date DATE, campaign_name STRING>>
[(1, DATE(TIMESTAMP('2021-01-01 09:54:31')), "A"),
(1, DATE(TIMESTAMP('2021-01-09 22:32:51')), "B"),
(1, DATE(TIMESTAMP('2021-01-17 14:30:05')), "C"),
(2, DATE(TIMESTAMP('2021-01-03 19:12:11')), "A"),
(1, DATE(TIMESTAMP('2021-01-15 18:17:57')), "B"),
(3, DATE(TIMESTAMP('2021-03-14 22:32:51')), "C")]) t1
)
SELECT
t1.*,
IFNULL(LAST_VALUE(t2.campaign_name IGNORE NULLS) OVER (PARTITION BY t1.adid ORDER BY t1.utc_date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), 'Organic') as campaign_name
FROM
app_data t1
LEFT JOIN
advertisment_attribution_data t2
ON t1.adid = t2.adid
AND t1.utc_date = t2.attribution_join_date
Related
There is a need to monitor the performance of a warehouse of goods. Please refer to the table containing data for one warehouse below:
WK_NO: Week number; Problem: Problem faced on that particular week. Empty cells are NULLs.
I need to create the 3rd column:
Weeks on list: A column indicating the number of weeks that a particular warehouse is being monitored as of that particular week.
Required Logic:
Initially the column's values are 0. If a warehouse encounters problems continuously for 4 weeks, it is put onto a "list" and a counter starts, indicating the number of weeks the warehouse has been problematic. If the warehouse is then problem-free for 4 continuous weeks, the counter resets to 0 and stays at 0 until there is another 4-week run of problems.
Code to generate data shown above:
CREATE TABLE warehouse (
WK_NO INT NOT NULL,
Problem STRING,
Weeks_on_list_ref INT
);
INSERT INTO warehouse
(WK_NO, Problem, Weeks_on_list_ref)
VALUES
(1, NULL, 0),
(2, NULL, 0),
(3, 'supply', 0),
(4, 'supply', 0),
(5, 'manpower', 0),
(6, 'supply', 0),
(7, 'manpower', 1),
(8, 'supply', 2),
(9, NULL, 3),
(10, NULL, 4),
(11, 'supply', 5),
(12, 'supply', 6),
(13, 'manpower', 7),
(14, NULL, 8),
(15, NULL, 9),
(16, NULL, 10),
(17, NULL, 11),
(18, NULL, 0),
(19, NULL, 0),
(20, NULL, 0);
Any help is much appreciated.
Update:
Some solutions are failing when bringing in data for multiple warehouses.
Updated the code generation script with W_NO which is the warehouse ID, for your consideration.
CREATE OR REPLACE TABLE warehouse (
W_NO INT NOT NULL,
WK_NO INT NOT NULL,
Problem STRING,
Weeks_on_list_ref INT
);
INSERT INTO warehouse
(W_NO, WK_NO, Problem, Weeks_on_list_ref)
VALUES
(1, 1, NULL, 0),
(1, 2, NULL, 0),
(1, 3, 'supply', 0),
(1, 4, 'supply', 0),
(1, 5, 'manpower', 0),
(1, 6, 'supply', 0),
(1, 7, 'manpower', 1),
(1, 8, 'supply', 2),
(1, 9, NULL, 3),
(1, 10, NULL, 4),
(1, 11, 'supply', 5),
(1, 12, 'supply', 6),
(1, 13, 'manpower', 7),
(1, 14, NULL, 8),
(1, 15, NULL, 9),
(1, 16, NULL, 10),
(1, 17, NULL, 11),
(1, 18, NULL, 0),
(1, 19, NULL, 0),
(1, 20, NULL, 0),
(2, 1, NULL, 0),
(2, 2, NULL, 0),
(2, 3, 'supply', 0),
(2, 4, 'supply', 0),
(2, 5, 'manpower', 0),
(2, 6, 'supply', 0),
(2, 7, 'manpower', 1),
(2, 8, 'supply', 2),
(2, 9, NULL, 3),
(2, 10, NULL, 4),
(2, 11, 'supply', 5),
(2, 12, 'supply', 6),
(2, 13, 'manpower', 7),
(2, 14, NULL, 8),
(2, 15, NULL, 9),
(2, 16, NULL, 10),
(2, 17, NULL, 11),
(2, 18, NULL, 0),
(2, 19, NULL, 0),
(2, 20, NULL, 0);
Consider the query below for the updated question:
SELECT W_NO, WK_NO, Problem, IF(MOD(div, 2) = 0, 0, RANK() OVER (PARTITION BY W_NO, div ORDER BY WK_NO)) AS Weeks_on_list
FROM (
SELECT *, COUNTIF(flag IS TRUE) OVER (PARTITION BY W_NO ORDER BY WK_NO) AS div FROM (
SELECT *,
LAG(Problem, 5) OVER w0 IS NULL AND COUNT(Problem) OVER w1 = 4 OR
LAG(Problem, 5) OVER w0 IS NOT NULL AND COUNT(Problem) OVER w1 = 0 AS flag
FROM warehouse
WINDOW w0 AS (PARTITION BY W_NO ORDER BY WK_NO), w1 AS (w0 ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING)
)
)
ORDER BY W_NO, WK_NO;
Consider the query below:
1. Using a window frame with fixed size 4, first find the boundaries where the warehouse turns into the abnormal state (and vice versa) in the innermost query.
2. Partition the weeks using the boundaries found in step 1.
3. Since the normal and abnormal states take turns, calculate RANK() only for the abnormal state in the outermost query.
SELECT WK_NO, Problem, IF(MOD(div, 2) = 0, 0, RANK() OVER (PARTITION BY div ORDER BY WK_NO)) AS Weeks_on_list
FROM (
SELECT *, COUNTIF(flag IS TRUE) OVER (ORDER BY WK_NO) AS div FROM (
SELECT *,
LAG(Problem, 5) OVER w0 IS NULL AND COUNT(Problem) OVER w1 = 4 OR
LAG(Problem, 5) OVER w0 IS NOT NULL AND COUNT(Problem) OVER w1 = 0 AS flag
FROM warehouse
WINDOW w0 AS (ORDER BY WK_NO), w1 AS (w0 ROWS BETWEEN 4 PRECEDING AND 1 PRECEDING)
)
)
ORDER BY WK_NO;
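Before translating the rules into window functions, the counter logic itself can be stated in a few lines of plain Python. This is an illustrative re-implementation of the required logic, not the query; the key subtlety is that the transition check uses the streaks up to the previous week, so the week that completes a 4-week problem run still shows 0 and counting starts the week after:

```python
def weeks_on_list(problems):
    """problems: Problem values per week in order (None = problem-free)."""
    out, counter, on_list = [], 0, False
    problem_streak = clear_streak = 0
    for p in problems:
        # Transition based on the previous 4 weeks, before counting this one.
        if not on_list and problem_streak >= 4:
            on_list = True
        if on_list and clear_streak >= 4:
            on_list = False
        counter = counter + 1 if on_list else 0
        out.append(counter)
        # Update the streaks with this week's status.
        if p is None:
            clear_streak += 1
            problem_streak = 0
        else:
            problem_streak += 1
            clear_streak = 0
    return out

# The 20 weeks of sample data for one warehouse.
problems = [None, None, "supply", "supply", "manpower", "supply", "manpower",
            "supply", None, None, "supply", "supply", "manpower",
            None, None, None, None, None, None, None]
counts = weeks_on_list(problems)
```

Running this on the sample data reproduces the Weeks_on_list_ref column exactly.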
I've got data on customers balance on a daily basis, with the following structure in BigQuery:
CREATE TABLE if not EXISTS balance (
user_id int,
updated_ag DATE,
value float64
);
INSERT INTO balance VALUES
(1, '2021-01-01', 0),
(1, '2021-01-02', 1),
(1, '2021-01-05', 2),
(1, '2021-01-07', 5),
(2, '2021-01-01', 5),
(2, '2021-01-03', 0),
(2, '2021-01-04', 1),
(2, '2021-01-06', 2);
I have one row for a user on a given day if the balance on that day changed.
I'd like to complete the data for each user by filling each missing day with the balance from the last day that had an update, for dates between the user's first day with a balance and the last date in the table.
So, the output table in this example would have the following values:
(1, '2021-01-01', 0),
(1, '2021-01-02', 1),
(1, '2021-01-03', 1),
(1, '2021-01-04', 1),
(1, '2021-01-05', 2),
(1, '2021-01-06', 2),
(1, '2021-01-07', 5),
(2, '2021-01-01', 5),
(2, '2021-01-02', 5),
(2, '2021-01-03', 0),
(2, '2021-01-04', 1),
(2, '2021-01-05', 1),
(2, '2021-01-06', 2),
(2, '2021-01-07', 2)
What's the simplest way of doing this in BigQuery?
Try this one:
select user_id, generated_date, value
from (
select
*,
ifnull(date_sub(lead(updated_ag) over(partition by user_id order by updated_ag), interval 1 day), max(updated_ag) over()) date_to
from balance
), unnest(generate_date_array(updated_ag, date_to, interval 1 day)) generated_date
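The same fill-forward logic can be sketched in plain Python, as an illustrative re-implementation (names are mine): each user's dates run from their own first update to the last date anywhere in the table, carrying the last known balance forward.

```python
from datetime import date, timedelta
from collections import defaultdict

rows = [  # (user_id, updated_ag, value) -- the sample data above
    (1, date(2021, 1, 1), 0), (1, date(2021, 1, 2), 1),
    (1, date(2021, 1, 5), 2), (1, date(2021, 1, 7), 5),
    (2, date(2021, 1, 1), 5), (2, date(2021, 1, 3), 0),
    (2, date(2021, 1, 4), 1), (2, date(2021, 1, 6), 2),
]

global_max = max(d for _, d, _ in rows)  # last date in the whole table
by_user = defaultdict(dict)
for uid, d, v in rows:
    by_user[uid][d] = v

filled = []
for uid, updates in sorted(by_user.items()):
    d, last = min(updates), None  # start at the user's first update
    while d <= global_max:
        last = updates.get(d, last)  # forward-fill the last known balance
        filled.append((uid, d, last))
        d += timedelta(days=1)
```

This yields the 14 rows of the desired output, including user 2 extended to 2021-01-07 with balance 2.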
I've got data on how a set of customers spend money on a daily basis, with the following structure in BigQuery:
CREATE TABLE if not EXISTS daily_spend (
user_id int,
created_at DATE,
value float64
);
INSERT INTO daily_spend VALUES
(1, '2021-01-01', 0),
(1, '2021-01-02', 1),
(1, '2021-01-04', 1),
(1, '2021-01-05', 2),
(1, '2021-01-07', 5),
(2, '2021-01-01', 5),
(2, '2021-01-03', 0),
(2, '2021-01-04', 1),
(2, '2021-01-06', 2);
I'd like to complete the data for each user by putting 0s on the days the users didn't spend any money, only including days between their first and last days spending money.
So, the output table in this example would have the following values:
(1, '2021-01-01', 0),
(1, '2021-01-02', 1),
(1, '2021-01-03', 0),
(1, '2021-01-04', 1),
(1, '2021-01-05', 2),
(1, '2021-01-06', 0),
(1, '2021-01-07', 5),
(2, '2021-01-01', 5),
(2, '2021-01-02', 0),
(2, '2021-01-03', 0),
(2, '2021-01-04', 1),
(2, '2021-01-05', 0),
(2, '2021-01-06', 2)
What's the simplest way of doing this in BigQuery?
Use below
select user_id, created_at, ifnull(value, 0) value
from (
select user_id, min(created_at) min_date, max(created_at) max_date
from daily_spend
group by user_id
), unnest(generate_date_array(min_date, max_date)) created_at
left join daily_spend
using(user_id, created_at)
If applied to the sample data in your question, this produces the desired output shown above.
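The zero-filling idea can also be sketched in plain Python, mirroring the generate_date_array plus left join (an illustrative re-implementation, names are mine): each user gets every date between their own first and last spend, with 0 where no row exists.

```python
from datetime import date, timedelta
from collections import defaultdict

rows = [  # (user_id, created_at, value) -- the sample data above
    (1, date(2021, 1, 1), 0), (1, date(2021, 1, 2), 1),
    (1, date(2021, 1, 4), 1), (1, date(2021, 1, 5), 2),
    (1, date(2021, 1, 7), 5),
    (2, date(2021, 1, 1), 5), (2, date(2021, 1, 3), 0),
    (2, date(2021, 1, 4), 1), (2, date(2021, 1, 6), 2),
]

by_user = defaultdict(dict)
for uid, d, v in rows:
    by_user[uid][d] = v

filled = []
for uid, spend in sorted(by_user.items()):
    d = min(spend)  # the user's own first spend date
    while d <= max(spend):  # up to the user's own last spend date
        filled.append((uid, d, spend.get(d, 0)))  # 0 for missing days
        d += timedelta(days=1)
```

Note the per-user max here, unlike the previous question's global last date.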
I have an account number and a reading number, and I want to take the cumulative sum of the value but reset it at the beginning of each new reading cycle (i.e., reset the running sum).
I am using a window function but cannot figure out how to set it when the new reading cycle exists.
Data has the following format:
The Reading cycle Volume value is what I am attempting to achieve.
Currently I have tried SUM(Value) OVER(PARTITION BY ACCOUNT ORDER BY OBS)
I do not know how to reset it when [Reading #] = 1.
I have tried:
Case
when [Reading #] = 1 THEN value
ELSE SUM(Value) OVER(PARTITION BY ACCOUNT ORDER BY OBS)
END AS [Running Total]
If I understand the question correctly, and the values stored in the Obs and [Reading #] columns are without gaps, the following approach is an option:
Table:
SELECT *
INTO Data
FROM (VALUES
(1, 1, 1, 5),
(1, 2, 2, 6),
(1, 3, 3, 5),
(1, 4, 4, 6),
(1, 5, 5, 5),
(1, 6, 6, 5),
(1, 7, 1, 5),
(1, 8, 2, 6),
(1, 9, 3, 5),
(1, 10, 4, 6),
(1, 11, 5, 5),
(1, 12, 6, 5),
(2, 1, 1, 7),
(2, 2, 2, 8),
(2, 3, 3, 9),
(2, 4, 4, 10),
(2, 5, 5, 11),
(2, 6, 6, 12),
(2, 7, 1, 7),
(2, 8, 2, 8),
(2, 9, 3, 9),
(2, 10, 4, 10),
(2, 11, 5, 11),
(2, 12, 6, 12)
) v (Account, Obs, [Reading #], [Value])
Statement:
SELECT
Account, Obs, [Reading #], [Value],
SUM([Value]) OVER (PARTITION BY Account, [Group] ORDER BY Account, Obs) AS [Reading Cycle Value]
FROM (
SELECT
*,
(Obs - [Reading #]) AS [Group]
FROM Data
) t
One additional option (as a more general approach) is to create groups when [Reading #] is equal to 1:
SELECT
Account, Obs, [Reading #], [Value],
SUM([Value]) OVER (PARTITION BY Account, [Group] ORDER BY Obs) AS [Reading Cycle Value]
FROM (
SELECT *, SUM([Change]) OVER (PARTITION BY Account ORDER BY Obs) AS [Group]
FROM (
SELECT *, CASE WHEN [Reading #] = 1 THEN 1 ELSE 0 END AS [Change]
FROM Data
) a
) b
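Both variants rely on the same gaps-and-islands idea: each [Reading #] = 1 row opens a new group, and the running sum restarts inside each (account, group). An illustrative plain-Python sketch of that idea, assuming rows are ordered by account and observation and each account's first row starts a cycle (names are mine):

```python
rows = [  # (account, obs, reading_no, value) -- a subset of the sample data
    (1, 1, 1, 5), (1, 2, 2, 6), (1, 3, 3, 5),
    (1, 7, 1, 5), (1, 8, 2, 6), (1, 9, 3, 5),
]

out, group, running = [], {}, {}
for account, obs, reading_no, value in rows:
    if reading_no == 1:              # CASE WHEN [Reading #] = 1: new group
        group[account] = group.get(account, 0) + 1
        running[(account, group[account])] = 0
    key = (account, group[account])  # PARTITION BY Account, [Group]
    running[key] += value            # the running SUM within the group
    out.append((account, obs, running[key]))
```

The running total climbs to 16, then restarts at 5 when the second cycle begins at obs 7.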
Help us to help you. Always include a minimal set of data and the code we need, which we can copy and paste to immediately be on the same page as you without wasting time & effort that is better spent helping others. Note how you can simply copy and paste our solutions and play with them? They are complete and stand alone. That is what we are looking for from you.
You are close. The piece you are missing is that you need some way to group your readings and then you can include that in your partitioning as well.
There are any number of ways to create the new derived value for "reading_group"; the following is just one way.
CREATE TABLE #t_customer_readings
( account_number INT,
observation INT,
reading_number INT,
reading_value INT
)
INSERT INTO #t_customer_readings
VALUES (1, 1 , 1, 3),
(1, 2 , 2, 6),
(1, 3 , 3, 9),
(1, 4 , 4, 5),
(1, 5 , 5, 5),
(1, 6 , 6, 8),
(1, 7 , 1, 1),
(1, 8 , 2, 4),
(1, 9 , 3, 7),
(1, 10, 4, 0),
(1, 11, 5, 3),
(1, 12, 6, 6),
(2, 1 , 1, 9),
(2, 2 , 2, 2),
(2, 3 , 3, 5),
(2, 4 , 4, 8),
(2, 5 , 5, 1),
(2, 6 , 6, 4),
(2, 7 , 1, 7),
(2, 8 , 2, 0),
(2, 9 , 3, 3),
(2, 10, 1, 6), -- note I have split this group into 2 to show that the reading numbers do not need to be sequential.
(2, 11, 5, 9),
(2, 12, 6, 2)
SELECT r.*,
-- reading_group = CASE WHEN r.reading_number = 1 THEN observation ELSE rg.reading_group END,
ready_cycle_volume = SUM(reading_value) OVER(PARTITION BY account_number,
CASE WHEN r.reading_number = 1 THEN observation
ELSE rg.reading_group
END
ORDER BY observation)
FROM #t_customer_readings r
CROSS APPLY
(SELECT reading_group = MAX(observation) -- I picked observation but you could use whatever value you like. we are just creating something we can group on.
FROM #t_customer_readings
WHERE account_number = r.account_number
AND observation < r.observation
AND reading_number = 1) rg
I have a table that has 3 columns:
Plate_Id, Prod_id and Location
You can say that plate_id is the "header" column.
The prod_id groups the 'locations' together for a particular plate_id.
Given a particular set of values, I only want to pick locations that are 'COMMON' among all prod_ids for a particular plate_id.
NOTE: My table can have multiple plate_ids.
I am close, but it's not perfect.
I tried to isolate the smallest group for a given plate_id and then inner join it with the original list, but it fails when I have 3 prod_ids and a location is common to only some of the groups (I need only locations that are strictly in every prod_id).
Following is the result I desire, based on the how far I have gotten so far,
-- DESIRED RESULT:
-- plate_id location
-- 100 1
-- 100 2
-- 200 3
-- 200 4
-- 300 1
-- 300 2
-- 300 5
create table #AllTab
(
plate_id int,
prod_id int,
location int
)
insert into #AllTab
values
(100,10, 1),
(100,10, 2),
(100,10, 3),
(100,10, 4),
(100,20, 1),
(100,20, 2),
(100,20, 3),
(100,20, 4),
(100,20, 5),
(100,20, 6),
(100,20, 9),
(100,30, 1),
(100,30, 2),
(100,30, 9),
(100,40, 1),
(100,40, 2),
(100,40, 12),
(100,40, 14),
(100,40, 1),
(100,40, 2),
(100,40, 25),
(100,40, 30),
-----------------
(200,10, 1),
(200,10, 2),
(200,10, 3),
(200,10, 4),
(200,20, 1),
(200,20, 2),
(200,20, 3),
(200,20, 4),
(200,20, 5),
(200,20, 6),
(200,20, 7),
(200,30, 3),
(200,30, 4),
(200,30, 9),
-----------------
(300,10, 1),
(300,10, 2),
(300,10, 3),
(300,10, 5),
(300,20, 1),
(300,20, 2),
(300,20, 3),
(300,20, 4),
(300,20, 5),
(300,20, 6),
(300,20, 7),
(300,20, 9),
(300,30, 1),
(300,30, 2),
(300,30, 5)
-- The #SubTab table isolates the smallest group from the above table
-- for a particular plate_id
create table #SubTab
(
plate_id int,
prod_id int,
location int
)
insert into #SubTab
values
(100,30, 1),
(100,30, 2),
(100,30, 9),
------------
(200,30, 3),
(200,30, 4),
(200,30, 9),
------------
(300,30, 1),
(300,30, 2),
(300,30, 5)
select distinct pr.plate_id, pr.prod_id, pr.location from #SubTab pr
inner join #AllTab pl on pr.plate_id = pl.plate_id
and pr.location = pl.location
where pr.Prod_Id <> pl.prod_id
group by pr.plate_id, pr.prod_id, pr.location
This query returns the locations that are in all the products for a given plate:
select plate_id, location
from #alltab a
group by plate_id, location
having count(distinct prod_id) = (select count(distinct prod_id) from #alltab a2 where a2.plate_id = a.plate_id);
This assumes no duplicates in the table -- a reasonable assumption given your data.
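The counting argument can be sketched in plain Python on a small subset of the sample data (an illustrative re-implementation, names are mine): a location is common to all prod_ids of a plate exactly when the set of distinct prod_ids it appears in equals the plate's full set of distinct prod_ids.

```python
from collections import defaultdict

rows = [  # (plate_id, prod_id, location) -- a subset of the sample data
    (100, 10, 1), (100, 10, 2), (100, 10, 3),
    (100, 20, 1), (100, 20, 2), (100, 20, 9),
    (100, 30, 1), (100, 30, 2),
]

prods_per_plate = defaultdict(set)       # all prod_ids of each plate
prods_per_location = defaultdict(set)    # prod_ids each location appears in
for plate, prod, loc in rows:
    prods_per_plate[plate].add(prod)
    prods_per_location[(plate, loc)].add(prod)

common = sorted(
    (plate, loc)
    for (plate, loc), prods in prods_per_location.items()
    if prods == prods_per_plate[plate]   # present in every prod_id
)
```

Using sets here also makes duplicate rows harmless, which is what COUNT(DISTINCT prod_id) achieves in the SQL.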
try this:
;with cte1
AS
(
Select Plate_Id,Count(DISTINCT prod_id) as ProdCount
From #AllTab
Group by Plate_Id
)
,cte2
AS
(
Select Plate_Id,Location,Count(DISTINCT prod_id) As LocCount
from #AllTab
Group by Plate_Id,Location
)
SELECT t1.plate_id ,t2.location
FROM cte1 t1 JOIN cte2 t2
ON t1.Plate_Id =t2.Plate_Id
Where LocCount>=ProdCount
I have a solution; it might be a tad lengthy, but it works:
SELECT
SubGroupCounts.plate_id,
LocationSubGroupCounts.location
FROM
(-- Number of sub-grouping relative to main grouping
SELECT
plate_id,
count(distinct prod_id) as num
FROM
AllTab
GROUP BY
plate_id) SubGroupCounts
INNER JOIN
(-- Count the number of sub-groups each location appears in
SELECT
plate_id,
Location,
COUNT(distinct prod_id) AS num
FROM
AllTab
GROUP BY
Location, plate_id) LocationSubGroupCounts ON LocationSubGroupCounts.plate_id = SubGroupCounts.plate_id
AND LocationSubGroupCounts.num = SubGroupCounts.num