Grouping by the last day of each month runs inefficiently - SQL

I am attempting to pull month end balances from all accounts a customer has for every month. Here is what I've written. This runs correctly and gives me what I want—but it also runs extremely slowly. How would you recommend speeding it up?
SELECT DISTINCT
     [AccountNo]
    ,SourceDate
    ,[AccountType]
    ,[AccountSub]
    ,[Balance]
FROM [DW].[dbo].[dwFactAccount] AS fact
WHERE SourceDate IN (
    SELECT MAX(SourceDate)
    FROM [DW].[dbo].[dwFactAccount]
    GROUP BY MONTH(SourceDate), YEAR(SourceDate)
)
ORDER BY SourceDate DESC

I'd try a window function:
SELECT * FROM (
    SELECT
         [AccountNo]
        ,[SourceDate]
        ,[AccountType]
        ,[AccountSub]
        ,[Balance]
        ,ROW_NUMBER() OVER(
            PARTITION BY AccountNo, EOMONTH(SourceDate)
            ORDER BY SourceDate DESC
        ) AS rn
    FROM [DW].[dbo].[dwFactAccount]
) x
WHERE x.rn = 1
ROW_NUMBER() establishes an incrementing counter in order of SourceDate descending. The counter restarts from 1 whenever the month of SourceDate changes (or the account number changes), because EOMONTH quantises any date in a given month to the last date of that month (2020-03-09 12:34:56 becomes 2020-03-31, as do all other datetimes in March). Any similar tactic that quantises to a fixed date in the month would also work, such as partitioning by YEAR(sourcedate), MONTH(sourcedate), as in the sketch below.
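A variant of the same query using that year/month partitioning (a sketch over the same table) would be:
SELECT * FROM (
    SELECT
         [AccountNo]
        ,[SourceDate]
        ,[AccountType]
        ,[AccountSub]
        ,[Balance]
        ,ROW_NUMBER() OVER(
            PARTITION BY AccountNo, YEAR(SourceDate), MONTH(SourceDate)
            ORDER BY SourceDate DESC
        ) AS rn
    FROM [DW].[dbo].[dwFactAccount]
) x
WHERE x.rn = 1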

You need to build a date dimension table with Date as its primary key, and have SourceDate in the fact table reference that dimension table.
The date dimension table can have month, year, week, is_weekend, is_holiday, etc. columns. You join your fact table with the date dimension table, and you can then group the data using any columns in the date table you want.
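A minimal sketch of that shape (the dimension table and flag column here are illustrative, not an existing schema):
CREATE TABLE dbo.dimDate (
    [Date]       date NOT NULL PRIMARY KEY,
    [Year]       int  NOT NULL,
    [Month]      int  NOT NULL,
    [IsMonthEnd] bit  NOT NULL  -- 1 only on the last day of a month
);

-- month-end balances then become a join plus a filter
SELECT f.AccountNo, f.SourceDate, f.AccountType, f.AccountSub, f.Balance
FROM [DW].[dbo].[dwFactAccount] AS f
JOIN dbo.dimDate AS d
    ON d.[Date] = CAST(f.SourceDate AS date)
WHERE d.IsMonthEnd = 1;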

Your absolute first step should be to view the execution plan for the query and determine why the query is slow.
The following explains how to see a graphical execution plan:
Display an Actual Execution Plan
The steps to interpreting the plan and optimizing the query are too much for an SO answer, but you should be able to find some good articles on the topic by Googling. You could also post the plan in an edit to your question and get some real feedback on what steps to take to improve query performance.
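As a quick start (a sketch using standard SSMS/SQL Server options):
-- In SSMS, Ctrl+M toggles "Include Actual Execution Plan" before running the query.
-- Alternatively, print I/O and timing statistics to the Messages tab:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- ... run the slow query here ...
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;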

Related

SQL: Apply an aggregate result per day using window functions

Consider a time-series table that contains three fields: time of type timestamptz, balance of type numeric, and is_spent_column of type text.
The following query generates a valid result for the last day of the given interval.
SELECT
    MAX(DATE_TRUNC('DAY', time)) AS last_day,
    SUM(balance) FILTER (WHERE is_spent_column IS NULL) AS value_at_last_day
FROM tbl
2010-07-12 18681.800775017498741407984000
However, I need an equivalent query based on window functions that reports the total value of the balance column for all days up to and including the given date.
Here is what I've tried so far, without any valid result:
SELECT
    DATE_TRUNC('DAY', time) AS daily,
    SUM(SUM(balance) FILTER (WHERE is_spent_column IS NULL)) OVER (ORDER BY DATE_TRUNC('DAY', time)) AS total_value_per_day
FROM tbl
GROUP BY 1
ORDER BY 1 DESC
2010-07-12 16050.496339044977568391974000
2010-07-11 13103.159119670350269890284000
2010-07-10 12594.525752964512456914454000
2010-07-09 12380.159588711091681327014000
2010-07-08 12178.119542536668113577014000
2010-07-07 11995.943973804127033140014000
EDIT:
Here is a sample dataset:
LINK REMOVED
The running total can be computed by applying the first query above to the entire dataset up to and including the desired day. For example, for day 2009-01-31 the result is 97.13522530000000000000, and for day 2009-01-15, when we filter with time < '2009-01-16 00:00:00', it returns 24.446144000000000000.
What I need is an alternative query that computes the running total for each day in a single query.
EDIT 2:
Thank you all so very much for your participation and support.
The reason for the differences between the queries' result sets lay in the preceding ETL pipelines. Sorry for my ignorance!
Below I've provided a sample schema to test the queries.
https://www.db-fiddle.com/f/veUiRauLs23s3WUfXQu3WE/2
Now both queries given above and the query given in the answer below return the same result.
Consider calculating the running total via a window function after aggregating the data to the day level. Since you aggregate with a single condition, the FILTER condition can be converted to a basic WHERE:
SELECT daily,
       SUM(total_balance) OVER (ORDER BY daily) AS total_value_per_day
FROM (
    SELECT
        DATE_TRUNC('DAY', time) AS daily,
        SUM(balance) AS total_balance
    FROM tbl
    WHERE is_spent_column IS NULL
    GROUP BY 1
) AS daily_agg
ORDER BY daily
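If tbl is large, a partial covering index matching the WHERE condition may speed up the inner aggregate (a sketch; assumes Postgres 11+ for INCLUDE, and the index name is illustrative):
CREATE INDEX tbl_unspent_time_idx
    ON tbl (time) INCLUDE (balance)
    WHERE is_spent_column IS NULL;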

SELECT if exists where DATE=TODAY, if not where DATE=YESTERDAY

I have a table with some columns and a date column (which I partitioned the table by).
For example
[Amount, Date ]
[4 , 2020-4-1]
[3 , 2020-4-2]
[5 , 2020-4-4]
I want to get the latest Amount based on the Date.
I thought about doing a LIMIT 1 with ORDER BY, but is that optimized by BigQuery, or will it scan my entire table?
I want to avoid costs wherever possible. I thought about querying based on today's date and, if nothing is found, searching for yesterday, but I don't know how to do that in only one query.
Below is for BigQuery Standard SQL
#standardSQL
SELECT ARRAY_AGG(amount ORDER BY `date` DESC LIMIT 1)[SAFE_OFFSET(0)]
FROM `project.dataset.table`
WHERE `date` >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
Note: above assumes your date field is of DATE data type.
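If the field is instead a TIMESTAMP, a variant (a sketch; ts is a hypothetical column name) is to compare on its date part, which BigQuery can typically still use for partition pruning:
#standardSQL
SELECT ARRAY_AGG(amount ORDER BY ts DESC LIMIT 1)[SAFE_OFFSET(0)]
FROM `project.dataset.table`
WHERE DATE(ts) >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)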
If your table is partitioned on the date field, you can use that field in the WHERE clause to filter which partitions are read by your query.
In your case, you could do something like:
SELECT Amount
FROM <your-table>
WHERE Date >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
ORDER BY Date DESC
LIMIT 1
This query basically will:
Filter only today's and yesterday's partitions
Order the rows by your Date field, from the most recent to the oldest
Select the first element of the ordered list
If the table has a row with today's date, the query will return the data for today. If it doesn't, the query will return the data for yesterday.
Finally, I would like to attach here this reference regarding querying partitioned tables.
I hope it helps
The LIMIT clause stops the query when it has produced the indicated number of results.
I think the query should be something like this. I'm not sure if "today()-1" returns yesterday's date, so I compute yesterday with makedate instead:
SELECT Amount
FROM <table> AS t
WHERE date(t.Date) = current_date()
   OR date(t.Date) = makedate(year(current_date()), dayofyear(current_date()) - 1);
Edited: Sorry, my answer is for MariaDB. I now see you asked about Google BigQuery, which I didn't even know; but it looks like SQL, so I hope it has functions like the ones I posted.

SQL calculate time difference with previous row

I have code that provides the order number, the estimated time of delivery, the actual time of delivery and the difference between the two times.
If the order is late I need to take that difference and add it to the next order to display the new estimated time of delivery.
How can I have SQL reach back to the previous row and get the calculated difference to add to the estimated time of delivery? LAG is not available since we are using 2012 SQL Shell.
This gets the DATEDIFF between the current record and the previous record (note the join condition: prev must have the lower row number to actually be the previous row):
WITH orders AS
(SELECT *, ROW_NUMBER() OVER (ORDER BY datetimecolumn) AS rownum
 FROM mytable
)
SELECT DATEDIFF(second, curr.est_tod, prev.act_tod)
FROM orders curr
INNER JOIN orders prev
    ON prev.rownum = curr.rownum - 1
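Building on that, one way to push a late order's delay into the next order's estimate (a sketch reusing the same CTE; new_est_tod is an illustrative name):
WITH orders AS
(SELECT *, ROW_NUMBER() OVER (ORDER BY datetimecolumn) AS rownum
 FROM mytable
)
SELECT curr.*,
       DATEADD(second,
               CASE WHEN prev.act_tod > prev.est_tod  -- previous order was late
                    THEN DATEDIFF(second, prev.est_tod, prev.act_tod)
                    ELSE 0
               END,
               curr.est_tod) AS new_est_tod
FROM orders curr
LEFT JOIN orders prev
    ON prev.rownum = curr.rownum - 1;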

Display a rolling 12 weeks chart in SSRS report

I am calling the data query in ssrs like this:
SELECT * FROM [DATABASE].[dbo].[mytable]
So the current week is the last week in the query (e.g. 3/31 - 4/4), and each number represents one week earlier, back to 12 weeks prior to the current week; the result is displayed in a point chart.
How can I accomplish grouping all the visits for all locations by weeks and adding it to the chart?
I suggest updating your SQL query to Group by a descending Dense_Rank of DatePart(Week,ARRIVED_DATE). In this example, I have one column for Visits because I couldn't tell which columns you were using to get your Visit count:
-- load some test data
if object_id('tempdb..#MyTable') is not null
    drop table #MyTable
create table #MyTable(ARRIVED_DATE datetime, Visits int)

while (select count(*) from #MyTable) < 1000
begin
    insert into #MyTable values
        (dateadd(day, round(rand()*100, 0), '2014-01-01'), round(rand()*1000, 0))
end

-- Sum Visits by WeekNumber relative to today's WeekNumber
select
    dense_rank() over(order by datepart(week, ARRIVED_DATE) desc) [Week],
    sum(Visits) Visits
from #MyTable
where datepart(week, ARRIVED_DATE) >= datepart(week, getdate()) - 11
group by datepart(week, ARRIVED_DATE)
order by datepart(week, ARRIVED_DATE)
Let me know if I can provide any more detail to help you out.
You are going to want to do the grouping of the visits within SQL. You should be able to add a calculated column to your table, something like WorkWeek, computed from the difference in days from a fixed day of the week, such as a Sunday. This column will then be your X value rather than the date field you were using; see the sketch after the link below.
Here is a good article that goes into first day of week: First Day of Week
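A sketch of that calculated column, reusing the #MyTable test data above and assuming weeks start on Sunday (1899-12-31 was a Sunday, so integer division by 7 buckets dates into Sunday-to-Saturday weeks):
SELECT DATEDIFF(day, '18991231', ARRIVED_DATE) / 7 AS WorkWeek,
       SUM(Visits) AS Visits
FROM #MyTable
GROUP BY DATEDIFF(day, '18991231', ARRIVED_DATE) / 7
ORDER BY WorkWeek;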

How can I make this query run efficiently?

In BigQuery, we're trying to run:
SELECT day, AVG(value)/(1024*1024) FROM (
  SELECT value, UTC_USEC_TO_DAY(timestamp) AS day,
      PERCENTILE_RANK() OVER (PARTITION BY day ORDER BY value ASC) AS rank
  FROM [Datastore.PerformanceDatum]
  WHERE type = "MemoryPerf"
) WHERE rank >= 0.9 AND rank <= 0.91
GROUP BY day
ORDER BY day DESC;
which returns a relatively small amount of data. But we're getting the message:
Error: Resources exceeded during query execution. The query contained a GROUP BY operator, consider using GROUP EACH BY instead. For more details, please see https://developers.google.com/bigquery/docs/query-reference#groupby
What is making this query fail, the size of the subquery? Is there some equivalent query we can do which avoids the problem?
Edit in response to comments: if I add GROUP EACH BY (and drop the outer ORDER BY), the query still fails, this time claiming that GROUP EACH BY is not parallelizable here.
I wrote an equivalent query that works for me:
SELECT day, AVG(value)/(1024*1024) FROM (
  SELECT data value, UTC_USEC_TO_DAY(dtimestamp) AS day,
      PERCENTILE_RANK() OVER (PARTITION BY day ORDER BY value ASC) AS rank
  FROM [io_sensor_data.moscone_io13]
  WHERE sensortype = "humidity"
) WHERE rank >= 0.9 AND rank <= 0.91
GROUP BY day
ORDER BY day DESC;
If I run only the inner query, I get 3,660,624 results. Is your dataset bigger than that?
The outer select gives me only 4 results when grouped by day. I'll try a different grouping to see if I can hit a limit there:
SELECT day, AVG(value)/(1024*1024) FROM (
  SELECT data value, dtimestamp / 1000 AS day,
      PERCENTILE_RANK() OVER (PARTITION BY day ORDER BY value ASC) AS rank
  FROM [io_sensor_data.moscone_io13]
  WHERE sensortype = "humidity"
) WHERE rank >= 0.9 AND rank <= 0.91
GROUP BY day
ORDER BY day DESC;
Runs too, now with 57,862 different groups.
I tried different combinations to reproduce the same error, and I was able to get it by doubling the amount of initial data. An easy "hack" to double the data is changing:
FROM [io_sensor_data.moscone_io13]
To:
FROM [io_sensor_data.moscone_io13], [io_sensor_data.moscone_io13]
Then I get the same error. How much data do you have? Can you apply an additional filter? As you are already partitioning the percentile_rank by day, you could add a filter to only analyze a fraction of the days (for example, only the last month), as in the sketch below.
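For example, a filtered version of your original query (a sketch; the cutoff date is only an illustration, and legacy SQL's PARSE_UTC_USEC converts a date string to microseconds):
SELECT day, AVG(value)/(1024*1024) FROM (
  SELECT value, UTC_USEC_TO_DAY(timestamp) AS day,
      PERCENTILE_RANK() OVER (PARTITION BY day ORDER BY value ASC) AS rank
  FROM [Datastore.PerformanceDatum]
  WHERE type = "MemoryPerf"
    AND timestamp >= PARSE_UTC_USEC('2014-03-01 00:00:00')
) WHERE rank >= 0.9 AND rank <= 0.91
GROUP BY day
ORDER BY day DESC;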