BigQuery LAG returning only nulls - google-bigquery

I'm facing a problem in BigQuery where I'm not getting the desired output using the LAG function:
WITH
base AS (
SELECT "2022-11-01" month , 1100 icount union all
SELECT "2022-10-01" month , 1000 icount union all
SELECT "2022-09-01" month , 900 icount union all
SELECT "2022-08-01" month , 800 icount union all
SELECT "2022-07-01" month , 700 icount union all
SELECT "2022-06-01" month , 600 icount union all
SELECT "2022-05-01" month , 500 icount union all
SELECT "2022-04-01" month , 400 icount
)
SELECT
month,
icount,
LAG(icount) OVER w1 AS previous_icount
FROM base
WINDOW w1 AS (
PARTITION BY month ORDER BY icount)
ORDER BY
month DESC
which results in:
month       icount  previous_icount
2022-11-01  1100    null
2022-10-01  1000    null
2022-09-01  900     null
2022-08-01  800     null
2022-07-01  700     null
2022-06-01  600     null
2022-05-01  500     null
2022-04-01  400     null
but I was expecting to get the following result:
month       icount  previous_icount
2022-11-01  1100    1000
2022-10-01  1000    900
2022-09-01  900     800
2022-08-01  800     700
2022-07-01  700     600
2022-06-01  600     500
2022-05-01  500     400
2022-04-01  400     null
I went through the documentation but can't figure out what I'm missing to get this right.

As pointed out by @Jaytiger, the issue lies in your window definition.
When you partition by month, the window function is applied separately to every set of rows that shares the same month.
Since every month in your table has exactly one row, LAG is evaluated over partitions containing a single row, so there is no previous row in the partition and the lagged value is NULL.
What you want is a PARTITION BY that keeps the different records in the same partition:
if you had a common column (e.g. shop) you would partition by that column;
here you don't have such a column, so you could write PARTITION BY 1,
but in reality you don't even need to PARTITION: BigQuery will understand there is only one partition to consider:
WITH
base AS (
SELECT "2022-11-01" month , 1100 icount union all
SELECT "2022-10-01" month , 1000 icount union all
SELECT "2022-09-01" month , 900 icount union all
SELECT "2022-08-01" month , 800 icount union all
SELECT "2022-07-01" month , 700 icount union all
SELECT "2022-06-01" month , 600 icount union all
SELECT "2022-05-01" month , 500 icount union all
SELECT "2022-04-01" month , 400 icount
)
SELECT
month,
icount,
LAG(icount) OVER w1 AS previous_icount
FROM base
WINDOW w1 AS (
PARTITION BY 1 -- unnecessary
ORDER BY icount)
ORDER BY
month DESC
Which gives the desired result
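One caveat worth adding (my own note, not from the answer above): ORDER BY icount only happens to return the previous month's value because icount grows monotonically with month in this sample. If the counts can ever decrease, ordering the window by month expresses "previous month" directly. A minimal sketch with the same sample data:
WITH
base AS (
SELECT "2022-11-01" month , 1100 icount union all
SELECT "2022-10-01" month , 1000 icount union all
SELECT "2022-09-01" month , 900 icount union all
SELECT "2022-08-01" month , 800 icount union all
SELECT "2022-07-01" month , 700 icount union all
SELECT "2022-06-01" month , 600 icount union all
SELECT "2022-05-01" month , 500 icount union all
SELECT "2022-04-01" month , 400 icount
)
SELECT
month,
icount,
-- ordering by month: "previous" now means the previous month, not the next-lower count
LAG(icount) OVER (ORDER BY month) AS previous_icount
FROM base
ORDER BY
month DESC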

Related

Filling missing weekend rows with previous working day values

I have a data table like the one below. For each customer, missing days (weekends or holidays) should be inserted with the balance of the previous working day, and this should only be done between the dates that the customer has in the table. For dates outside the customer's date range in the table, the balance should be added as 0. So the customer with id 1 should be filled between 2022-07-01 and 2022-07-31. The customer with id 2 should be filled between 2022-07-07 and 2022-07-19; also, for the dates 2022-07-01 to 2022-07-07 and 2022-07-19 to 2022-07-31 the balance should be added as 0.
Data Table
date customer_id balance
2022-07-01 1 100
2022-07-04 1 150
2022-07-05 1 200
. 1 .
. 1 .
2022-07-31 1 650
2022-07-07 2 200
2022-07-08 2 300
2022-07-11 2 400
. 2 .
. 2 .
2022-07-19 2 750
Output table should look like this:
date customer_id balance
2022-07-01 1 100
2022-07-02 1 100
2022-07-03 1 100
2022-07-04 1 150
2022-07-05 1 200
. 1 .
. 1 .
2022-07-31 1 650
2022-07-01 2 0
2022-07-02 2 0
. 2 .
. 2 .
2022-07-07 2 200
2022-07-08 2 300
2022-07-09 2 300
2022-07-10 2 300
2022-07-11 2 400
. 2 .
. 2 .
2022-07-19 2 750
2022-07-20 2 0
. 2 .
. 2 .
2022-07-31 2 0
There are some solutions to similar questions on the site that use a cross join with a calendar table, but I couldn't implement them for my case.
Any help is much appreciated.
Below is a solution that uses recursion instead of a calendar table.
It essentially works by 'extending' your original data to create some extra rows with 0 balances for every customer at:
The min date in the table (if the customer didn't already have a record at the min date)
The max date in the table (if the customer didn't already have a record at the max date)
The day after the last record for the customer (as long as this doesn't go over the max date in the table)
It then uses recursion to plug the gaps between the dates for each customer.
With balances as (
-- This is a simplified version of the data already in your table
SELECT '2022-07-01' as dt, 1 as customer_id, 100 as balance
UNION ALL SELECT '2022-07-04' as dt, 1 as customer_id, 150 as balance
UNION ALL SELECT '2022-07-05' as dt, 1 as customer_id, 200 as balance
UNION ALL SELECT '2022-07-31' as dt, 1 as customer_id, 650 as balance
UNION ALL SELECT '2022-07-07' as dt, 2 as customer_id, 200 as balance
UNION ALL SELECT '2022-07-08' as dt, 2 as customer_id, 300 as balance
UNION ALL SELECT '2022-07-11' as dt, 2 as customer_id, 400 as balance
UNION ALL SELECT '2022-07-19' as dt, 2 as customer_id, 750 as balance
)
, min_records as (
-- This can create a 0 balance record for each customer at the min date in the table
SELECT dt, customer_id, 0 as balance
FROM (
SELECT min(dt) as dt
FROM balances
) as min_dt
CROSS JOIN (
SELECT DISTINCT customer_id
FROM balances
) as customers
)
, max_records as (
-- This can create a 0 balance record for each customer at the max date in the table
SELECT dt, customer_id, 0 as balance
FROM (
SELECT max(dt) as dt
FROM balances
) as max_dt
CROSS JOIN (
SELECT DISTINCT customer_id
FROM balances
) as customers
)
, max_customer_records as (
-- This creates a 0 balance record for each customer for the day after their last record,
-- so long as that date does not go beyond the max date in the table
SELECT dateadd(day, 1, max(dt)) as dt, customer_id, 0 as balance
FROM balances as a
CROSS JOIN (
SELECT max(dt) as max_dt
FROM balances
) as m
GROUP BY customer_id, max_dt
HAVING max(dt) < max_dt
)
, extended_balances as (
-- We then join all of the tables above to the original balances table.
-- Grouping to the dt + customer level and sum(balance) won't cause issues for customers
-- who already had a record on the min(dt) or max(dt) because x + 0 still = x
SELECT dt, customer_id, sum(balance) as balance
FROM (
SELECT *
FROM balances
UNION
SELECT dt, customer_id, balance
FROM min_records
UNION
SELECT dt, customer_id, balance
FROM max_records
UNION
SELECT dt, customer_id, balance
FROM max_customer_records
) AS A
GROUP BY dt, customer_id
)
, recursive_query as (
-- Now we use recursion to fill in the gaps between the dates
SELECT dt as original_dt
, dt
, customer_id
, balance
-- We use lead() to find the date when a new balance exists
, coalesce(lead(dt) over(partition by customer_id order by dt asc), dateadd(day, 1, dt)) as next_dt
FROM extended_balances
UNION ALL
SELECT original_dt
, dateadd(day, 1, dt)
, customer_id
, balance
, next_dt
FROM recursive_query
WHERE dateadd(day, 1, dt) < next_dt
)
SELECT dt, customer_id, balance
FROM recursive_query
ORDER BY customer_id, dt
To help illustrate the steps, I've included examples of key tables:
Balances:
dt          customer_id  balance
2022-07-01  1            100
2022-07-04  1            150
2022-07-05  1            200
2022-07-31  1            650
2022-07-07  2            200
2022-07-08  2            300
2022-07-11  2            400
2022-07-19  2            750
Extended Balances:
dt          customer_id  balance
2022-07-01  1            100
2022-07-04  1            150
2022-07-05  1            200
2022-07-31  1            650
2022-07-01  2            0
2022-07-07  2            200
2022-07-08  2            300
2022-07-11  2            400
2022-07-19  2            750
2022-07-20  2            0
2022-07-31  2            0
First 10 records of the recursive query:
original_dt  dt          customer_id  balance  next_dt
2022-07-01   2022-07-01  1            100      2022-07-04
2022-07-01   2022-07-02  1            100      2022-07-04
2022-07-01   2022-07-03  1            100      2022-07-04
2022-07-04   2022-07-04  1            150      2022-07-05
2022-07-05   2022-07-05  1            200      2022-07-31
2022-07-05   2022-07-06  1            200      2022-07-31
2022-07-05   2022-07-07  1            200      2022-07-31
2022-07-05   2022-07-08  1            200      2022-07-31
2022-07-05   2022-07-09  1            200      2022-07-31
2022-07-05   2022-07-10  1            200      2022-07-31

Closing balance of the previous day as an Opening balance of today

I am developing a database application for a small electronics business. I need a SQL query which takes the closing balance of the previous day as the opening balance of the current day. I have the following data tables:
Expensis
ExpenseID Date Expense
1 2019-03-01 2,000
2 2019-03-02 1,000
3 2019-03-03 500
Income
IncomeID Date Income
1 2019-03-01 10,000
2 2019-03-02 13,000
3 2019-03-03 10,000
Required result
Date Opening Balance Income Expense Closing Balance
2019-03-01 0 10,000 2,000 8,000
2019-03-02 8,000 13,000 1,000 20,000
2019-03-03 20,000 10,000 500 29,500
You can use the SUM aggregate function as a cumulative window sum (the LAG window analytic function cannot be used on SQL Server 2008):
with Expensis( ExpenseID, Date, Expense ) as
(
select 1, '2019-03-01', 2000 union all
select 2, '2019-03-02', 1000 union all
select 3, '2019-03-03', 500
), Income( IncomeID, Date, Income ) as
(
select 1, '2019-03-01', 10000 union all
select 2, '2019-03-02', 13000 union all
select 3, '2019-03-03', 10000
), t as
(
select i.date,
i.income,
e.expense,
sum(i.income-e.expense) over (order by i.date) as closing_balance
from income i
join expensis e on e.date = i.date
)
select date,
( closing_balance - income + expense ) as opening_balance,
income, expense, closing_balance
from t;
date opening balance income expense closing balance
---------- --------------- ------ ------- ---------------
2019-03-01 0 10000 2000 8000
2019-03-02 8000 13000 1000 20000
2019-03-03 20000 10000 500 29500
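For newer SQL Server versions the same opening balance can be taken directly with LAG. A sketch under the assumption of SQL Server 2012 or later (so it does not apply to the 2008 constraint mentioned above):
with Expensis( ExpenseID, Date, Expense ) as
(
select 1, '2019-03-01', 2000 union all
select 2, '2019-03-02', 1000 union all
select 3, '2019-03-03', 500
), Income( IncomeID, Date, Income ) as
(
select 1, '2019-03-01', 10000 union all
select 2, '2019-03-02', 13000 union all
select 3, '2019-03-03', 10000
), t as
(
select i.date,
i.income,
e.expense,
sum(i.income-e.expense) over (order by i.date) as closing_balance
from income i
join expensis e on e.date = i.date
)
select date,
-- opening balance = previous day's closing balance, defaulting to 0 on the first day
lag(closing_balance, 1, 0) over (order by date) as opening_balance,
income, expense, closing_balance
from t;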
Here is one way you could do it.
You have to treat income and expenses differently (the expenses are entered as negative amounts):
WITH INCOME AS
(
SELECT '2018-01-05' AS DT, 200 AS INC, 1 AS TP
UNION ALL
SELECT '2018-01-06' AS DT, 300 AS INC, 1 AS TP
UNION ALL
SELECT '2018-01-07' AS DT, 400 AS INC, 1 AS TP
)
, EXPENSES AS
(
SELECT '2018-01-05' AS DT, -100 AS EXPS, 2 AS TP
UNION ALL
SELECT '2018-01-06' AS DT, -500 AS EXPS, 2 AS TP
UNION ALL
SELECT '2018-01-07' AS DT, -30 AS EXPS, 2 AS TP
)
, UN AS
(
SELECT * FROM INCOME
UNION ALL
SELECT * FROM EXPENSES
)
SELECT *, [1]+[2] AS END_BALANCE FROM UN
PIVOT
(
SUM(INC)
FOR TP IN ([1],[2])
) AS P
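Note that END_BALANCE above is only each day's net movement, not a running balance. If running opening/closing balances are wanted, a window SUM over the pivoted rows would provide them; a sketch, again assuming SQL Server 2012 or later:
WITH INCOME AS
(
SELECT '2018-01-05' AS DT, 200 AS INC, 1 AS TP
UNION ALL
SELECT '2018-01-06' AS DT, 300 AS INC, 1 AS TP
UNION ALL
SELECT '2018-01-07' AS DT, 400 AS INC, 1 AS TP
)
, EXPENSES AS
(
SELECT '2018-01-05' AS DT, -100 AS EXPS, 2 AS TP
UNION ALL
SELECT '2018-01-06' AS DT, -500 AS EXPS, 2 AS TP
UNION ALL
SELECT '2018-01-07' AS DT, -30 AS EXPS, 2 AS TP
)
, UN AS
(
SELECT * FROM INCOME
UNION ALL
SELECT * FROM EXPENSES
)
, DAILY AS
(
SELECT DT, [1] AS INC, [2] AS EXPS, [1]+[2] AS NET
FROM UN
PIVOT
(
SUM(INC)
FOR TP IN ([1],[2])
) AS P
)
SELECT DT,
-- opening balance = running total up to, but excluding, the current day
SUM(NET) OVER (ORDER BY DT) - NET AS OPENING_BALANCE,
INC, EXPS,
SUM(NET) OVER (ORDER BY DT) AS CLOSING_BALANCE
FROM DAILY
ORDER BY DT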

Can this daily inventory balance calculation on BigQuery be improved?

I came up with the following query to calculate inventory balances per day. The query works and gives me the expected results, but it takes over 200 seconds to run on a subset of the transaction table with about 2 million rows.
Being new to BigQuery, I am wondering if there is a better/more efficient way to do this?
The code with some sample data is below.
Thanks in advance for any thoughts or tips.
#### Generate a continuous date range
WITH days AS
(
SELECT day
FROM UNNEST(
GENERATE_DATE_ARRAY(DATE('2011-01-01'), CURRENT_DATE(), INTERVAL 1 DAY)) AS day
),
#### Transactional information of inventory movements. Simple example
movements AS
(
SELECT 1 AS ItemID
,1 AS Location
,DATE('2017-12-01') AS TransactionDate
,0 AS Quantity
UNION ALL SELECT 1, 1, DATE('2017-12-03'), 10
UNION ALL SELECT 1, 1, DATE('2017-12-06'), 100
UNION ALL SELECT 1, 1, DATE('2017-12-12'), 1000
),
#### Calculate cumulative sum for each item and location based on the transaction date
cumsum AS
(
SELECT ItemID
,TransactionDate
,Location
,SUM(Quantity) OVER (PARTITION BY ItemID, Location ORDER BY TransactionDate ROWS UNBOUNDED PRECEDING) as cumulative_quantity
FROM movements
),
#### Cross join with the date range to backfill cumulative values for each day
#### This will return multiple lines for a day when there are multiple transaction date balances
cross_sum AS
(
SELECT m.ItemID
,m.Location
,d.day
,m.TransactionDate
,m.cumulative_quantity
FROM days d
CROSS JOIN cumsum m
WHERE m.TransactionDate <= d.day
),
#### Get just one line per day, based on the latest transaction date
filtered AS
(
SELECT ItemID
,Location
,CAST (day AS datetime) AS BalanceDate
,ARRAY_AGG(cumulative_quantity ORDER BY TransactionDate DESC LIMIT 1) AS InventoryBalance
FROM cross_sum
GROUP BY 1,2,3
)
#### Final result, flattened out
SELECT ItemID
,Location
,BalanceDate
,(SELECT SUM(InventoryBalance) FROM UNNEST(InventoryBalance) AS InventoryBalance) AS InventoryBalance
FROM filtered
ORDER BY 1,2,3
I am wondering if there is a better/more efficient way to do this?
Below is for BigQuery Standard SQL.
As you can see, days, cumsum and cross_sum are modified/optimized and the rest is simply eliminated. It has good potential to be more efficient, but it needs to be tested on real data, so you should try it and see.
#standardSQL
#### Transactional information of inventory movements. Simple example
WITH movements AS (
SELECT 1 AS ItemID, 1 AS Location, DATE('2017-12-01') AS TransactionDate, 0 AS Quantity UNION ALL
SELECT 1, 1, DATE('2017-12-03'), 10 UNION ALL
SELECT 1, 1, DATE('2017-12-06'), 100 UNION ALL
SELECT 1, 1, DATE('2017-12-12'), 1000
), days AS (
SELECT day, ItemID, Location
FROM UNNEST(GENERATE_DATE_ARRAY((SELECT MIN(TransactionDate) AS d FROM movements), CURRENT_DATE(), INTERVAL 1 DAY)) AS day
CROSS JOIN (SELECT DISTINCT ItemID, Location FROM movements)
), cumsum AS (
SELECT ItemID
,TransactionDate
,Location
,LEAD(TransactionDate) OVER(PARTITION BY ItemID, Location ORDER BY TransactionDate) AS NextTransactionDate
,SUM(Quantity) OVER(PARTITION BY ItemID, Location ORDER BY TransactionDate ROWS UNBOUNDED PRECEDING) AS cumulative_quantity
FROM movements
), cross_sum AS (
SELECT d.ItemID
,d.Location
,d.day AS BalanceDate
,m.cumulative_quantity
FROM days d
JOIN cumsum m
ON d.day >= IFNULL(m.TransactionDate, d.day)
AND d.day < IFNULL(m.NextTransactionDate, CURRENT_DATE())
)
SELECT ItemID
,Location
,BalanceDate
,cumulative_quantity
FROM cross_sum
ORDER BY 1,2,3
result is
ItemID Location BalanceDate cumulative_quantity
1 1 2017-12-01 0
1 1 2017-12-02 0
1 1 2017-12-03 10
1 1 2017-12-04 10
1 1 2017-12-05 10
1 1 2017-12-06 110
1 1 2017-12-07 110
1 1 2017-12-08 110
1 1 2017-12-09 110
1 1 2017-12-10 110
1 1 2017-12-11 110
1 1 2017-12-12 1110
1 1 2017-12-13 1110
1 1 2017-12-14 1110
1 1 2017-12-15 1110
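An alternative worth benchmarking (my suggestion, not part of the answer above) is to skip the calendar join entirely and expand each cumulative row into the dates it covers with GENERATE_DATE_ARRAY and UNNEST:
#standardSQL
WITH movements AS (
SELECT 1 AS ItemID, 1 AS Location, DATE('2017-12-01') AS TransactionDate, 0 AS Quantity UNION ALL
SELECT 1, 1, DATE('2017-12-03'), 10 UNION ALL
SELECT 1, 1, DATE('2017-12-06'), 100 UNION ALL
SELECT 1, 1, DATE('2017-12-12'), 1000
), cumsum AS (
SELECT ItemID
,Location
,TransactionDate
,LEAD(TransactionDate) OVER(PARTITION BY ItemID, Location ORDER BY TransactionDate) AS NextTransactionDate
,SUM(Quantity) OVER(PARTITION BY ItemID, Location ORDER BY TransactionDate ROWS UNBOUNDED PRECEDING) AS cumulative_quantity
FROM movements
)
SELECT ItemID
,Location
,BalanceDate
,cumulative_quantity
FROM cumsum
-- each row covers the days from its TransactionDate up to the day before the next transaction
,UNNEST(GENERATE_DATE_ARRAY(
TransactionDate,
DATE_SUB(IFNULL(NextTransactionDate, CURRENT_DATE()), INTERVAL 1 DAY)
)) AS BalanceDate
ORDER BY 1,2,3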

Oracle SQL How to break down income by month based on a date range?

Trying to find an efficient way of achieving the results in table B below based on data from table A. Is there an efficient way (i.e. not resource hungry) of doing so, assuming one has millions of such records in table A? Please note ID 1 has an end date of 12/31/2199 (not a typo), and we only list the income for each ID during the months of 09/2016 to 12/2016. Also note that ID 3 has two incomes in the month of 11/2016, with 600 representing the November income (since that's the income the ID had at the end of November 2016). As for IDs that started in, say, November 2016, their rows would be missing for Sept 16 and Oct 16 since they did not exist pre-November.
Table A:
ID INCOME EFFECTIVE_DATE END_DATE
1 700 07/01/2016 12/31/2199
2 500 08/20/2016 12/31/2017
3 600 11/11/2016 02/28/2017
3 100 09/01/2016 11/10/2016
4 400 11/21/2016 12/31/2016
Table B (Intended results):
ID INCOME MONTH
1 700 12/2016
1 700 11/2016
1 700 10/2016
1 700 09/2016
2 500 12/2016
2 500 11/2016
2 500 10/2016
2 500 09/2016
3 600 12/2016
3 600 11/2016
3 100 10/2016
3 100 09/2016
4 400 12/2016
4 400 11/2016
RESOLVED I used the answer provided by @mathguy below and it worked like a charm -- learned something new in this process: pivot and unpivot. Also thanks to @MTO (and everyone else) for taking the time to help.
Here is a solution that uses each row from the base table just once, and does not require joins, group by, etc. It uses the unpivot operation, available since Oracle 11.1, which is not an expensive operation.
with
table_a ( id, income, effective_date, end_date ) as (
select 1, 700, date '2016-07-01', date '2199-12-31' from dual union all
select 2, 500, date '2016-08-20', date '2017-12-31' from dual union all
select 3, 600, date '2016-11-11', date '2017-02-28' from dual union all
select 3, 100, date '2016-09-01', date '2016-11-10' from dual union all
select 4, 400, date '2016-11-21', date '2016-12-31' from dual
)
-- end of test data (not part of the solution): SQL query begins BELOW THIS LINE
select id, income, mth
from (
select id,
case when date '2016-09-30' between effective_date and end_date
then income end as sep16,
case when date '2016-10-31' between effective_date and end_date
then income end as oct16,
case when date '2016-11-30' between effective_date and end_date
then income end as nov16,
case when date '2016-12-31' between effective_date and end_date
then income end as dec16
from table_a
)
unpivot ( income for mth in ( sep16 as '09/2016', oct16 as '10/2016', nov16 as '11/2016',
dec16 as '12/2016' )
)
order by id, mth desc
;
Output:
ID INCOME MTH
-- ------ -------
1 700 12/2016
1 700 11/2016
1 700 10/2016
1 700 09/2016
2 500 12/2016
2 500 11/2016
2 500 10/2016
2 500 09/2016
3 600 12/2016
3 600 11/2016
3 100 10/2016
3 100 09/2016
4 400 12/2016
4 400 11/2016
14 rows selected.
A solution using a recursive sub-query factoring clause. This does not rely on hard-coding the bounds into the query, as they can be passed as the bind variables :lower_bound and :upper_bound; in the example below they are set to DATE '2016-09-01' and DATE '2016-12-31' respectively.
Query:
WITH months ( id, income, month, end_dt ) AS (
SELECT id,
income,
CAST( TRUNC( GREATEST( a.effective_date, :lower_bound ), 'MM' ) AS DATE ),
LEAST( a.end_date, :upper_bound )
FROM TableA a
WHERE :lower_bound <= a.end_date
AND a.effective_date <= :upper_bound
UNION ALL
SELECT id,
income,
CAST( ADD_MONTHS( month, 1 ) AS DATE ),
end_dt
FROM months
WHERE ADD_MONTHS( month, 1 ) <= end_dt
)
SELECT id,
income,
LAST_DAY( month ) AS month
FROM months
WHERE LAST_DAY( month ) <= end_dt
ORDER BY id, month;
Output:
ID INCOME MONTH
-- ------ ----------
1 700 2016-09-30
1 700 2016-10-31
1 700 2016-11-30
1 700 2016-12-31
2 500 2016-09-30
2 500 2016-10-31
2 500 2016-11-30
2 500 2016-12-31
3 100 2016-09-30
3 100 2016-10-31
3 600 2016-11-30
3 600 2016-12-31
4 400 2016-11-30
4 400 2016-12-31
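A third pattern often used for this kind of problem (not taken from either answer above, so treat it as a sketch) is to generate the month-end dates once with CONNECT BY and join them to the ranges. With the same test data and the bounds hard-coded for illustration:
with
table_a ( id, income, effective_date, end_date ) as (
select 1, 700, date '2016-07-01', date '2199-12-31' from dual union all
select 2, 500, date '2016-08-20', date '2017-12-31' from dual union all
select 3, 600, date '2016-11-11', date '2017-02-28' from dual union all
select 3, 100, date '2016-09-01', date '2016-11-10' from dual union all
select 4, 400, date '2016-11-21', date '2016-12-31' from dual
),
month_ends as (
-- month-end dates 2016-09-30 .. 2016-12-31
select add_months(date '2016-08-31', level) as month_end
from dual
connect by level <= 4
)
select a.id,
a.income,
to_char(m.month_end, 'MM/YYYY') as mth
from table_a a
join month_ends m
on m.month_end between a.effective_date and a.end_date
order by a.id, m.month_end desc;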

How to make a time dependent distribution in SQL?

I have an SQL table in which I keep project information coming from Primavera.
Suppose that I have columns for Start Date, End Date, Duration, and Total Qty, as shown below.
How can I distribute Total Qty over months using this information? What kind of additional columns or SQL queries do I need in order to get a correct monthly distribution?
Thanks in advance.
Columns in order:
itemname,quantity,startdate,duration,enddate
item1 -- 108 -- 2013-03-25 -- 720 -- 2013-07-26
item2 -- 640 -- 2013-03-25 -- 720 -- 2013-07-26
.
.
I think the key is to break the records apart by month. Here is an example of how to do it:
with months as (
select 1 as mon union all select 2 union all select 3 union all
select 4 as mon union all select 5 union all select 6 union all
select 7 as mon union all select 8 union all select 9 union all
select 10 as mon union all select 11 union all select 12
)
select item, m.mon, quantity / nummonths
from (select t.*, (month(enddate) - month(startdate) + 1) as nummonths
from t
) t join
months m
on month(t.startDate) <= m.mon and
month(t.endDate) >= m.mon;
This works because all the months are within the same year -- as in your example. You are quite vague on how the split should be calculated. So, I assumed that every month from the start to the end gets an equal amount.
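For reference, a minimal runnable version of the query above, using the two sample rows from the question and T-SQL date functions (the original dialect isn't stated, so treat the table t and the syntax as assumptions):
with t as (
-- the two sample items from the question
select 'item1' as item, 108 as quantity,
cast('2013-03-25' as date) as startdate, cast('2013-07-26' as date) as enddate
union all
select 'item2', 640, cast('2013-03-25' as date), cast('2013-07-26' as date)
), months as (
select 1 as mon union all select 2 union all select 3 union all
select 4 as mon union all select 5 union all select 6 union all
select 7 as mon union all select 8 union all select 9 union all
select 10 as mon union all select 11 union all select 12
)
select item, m.mon, quantity * 1.0 / nummonths as monthly_qty
from (select t.*, (month(enddate) - month(startdate) + 1) as nummonths
from t
) t join
months m
on month(t.startdate) <= m.mon and
month(t.enddate) >= m.mon
order by item, m.mon;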