There are questions like this all over the place, so let me specify exactly where I need help.
I have seen moving averages done in SQL with Oracle analytic functions, MSSQL APPLY, and a variety of other methods. I have also seen this done with self joins (one join for each day of the average, as in How do you create a Moving Average Method in SQL?).
I am curious whether there is a way (using self joins only) to do this in SQL (preferably Oracle, but since my question is about joins alone it should be possible in any RDBMS). The approach would have to be scalable (to a 20 or 100 day moving average, in contrast to the link above, which requires a join for each day in the average).
My thoughts are:
select customer, a.tradedate, a.shares, avg(b.shares)
from trades a, trades b
where b.tradedate between a.tradedate-20 and a.tradedate
group by customer, a.tradedate
But when I tried it in the past it didn't work. To be more specific, I am trying a smaller but similar example (a 5 day average instead of 20) in this fiddle demo and can't find where I am going wrong: http://sqlfiddle.com/#!6/ed008/41
select a.ticker, a.dt_date, a.volume, avg(b.volume)
from yourtable a, yourtable b
where b.dt_date between a.dt_date-5 and a.dt_date
and a.ticker=b.ticker
group by a.ticker, a.dt_date, a.volume
I don't see anything wrong with your second query. I think the only reason it's not what you're expecting is that the volume field is an integer data type, so when you calculate the average the resulting output will also be an integer. You have to cast it, because an average won't necessarily be an integer (whole number):
select a.ticker, a.dt_date, a.volume, avg(cast(b.volume as float))
from yourtable a
join yourtable b
on a.ticker = b.ticker
where b.dt_date between a.dt_date - 5 and a.dt_date
group by a.ticker, a.dt_date, a.volume
Fiddle:
http://sqlfiddle.com/#!6/ed008/48/0 (thanks to @DaleM for the DDL)
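To see why the cast matters, here is a minimal illustration of the integer-average behaviour (T-SQL, hypothetical values):

select avg(v) as int_avg,                  -- returns 1 (integer division of 3/2)
       avg(cast(v as float)) as float_avg  -- returns 1.5
from (values (1), (2)) as t(v)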
I don't know why you would ever do this vs. an analytic function though, especially since you mention wanting to do this in Oracle (which has analytic functions). It would be different if your preferred database were MySQL or a database without analytic functions.
Just to add to the answer, this is how you would achieve the same result in Oracle using analytic functions. Notice how the PARTITION BY acts as the join you're using on ticker. That splits up the results so that the same date shared across multiple tickers doesn't interfere.
select ticker,
dt_date,
volume,
avg(cast(volume as decimal)) over( partition by ticker
order by dt_date
rows between 5 preceding
and current row ) as mov_avg
from yourtable
order by ticker, dt_date, volume
Fiddle:
http://sqlfiddle.com/#!4/0d06b/4/0
Analytic functions will likely run much faster.
http://sqlfiddle.com/#!6/ed008/45 would appear to be what you need.
select a.ticker,
a.dt_date,
a.volume,
(select avg(cast(b.volume as float))
from yourtable b
where b.dt_date between a.dt_date-5 and a.dt_date
and a.ticker=b.ticker)
from yourtable a
order by a.ticker, a.dt_date
Not a join, but a correlated subquery that computes the average for each row.
Related
I have a table with about 3 million rows of Customer Sales by Date.
For each CustomerID row I need to get the sum of the Spend_Value
WHERE Order_Date BETWEEN Order_Date_m365 AND Order_Date
Order_Date_m365 = OrderDate minus 365 days.
I just tried a self join but of course, this gave the wrong results due to rows overlapping dates.
If there were a way with window functions that would be ideal, but I tried and can't express the between-dates logic in the function, unless I missed a way.
The only way I can think of now is to loop: process all rank 1 rows into a table, then rank 2 into a table, etc., but this will be really inefficient on 3 million rows.
Any ideas on how this is usually handled in SQL?
SELECT CustomerID,Order_Date_m365,Order_Date,Spend_Value
FROM dbo.CustomerSales
Window functions likely won't help you here, so you are going to need to reference the table again. I would suggest you use an APPLY with a subquery to do this. Provided you have relevant indexes, this will likely be the more efficient approach:
SELECT CS.CustomerID,
CS.Order_Date_m365,
CS.Order_Date,
CS.Spend_Value,
o.output
FROM dbo.CustomerSales CS
CROSS APPLY (SELECT SUM(Spend_Value) AS output
FROM dbo.CustomerSales ca
WHERE ca.CustomerID = CS.CustomerID
AND ca.Order_Date >= CS.Order_Date_m365 --Or should this be >?
AND ca.Order_Date <= CS.Order_Date) o;
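On the indexing point above, something along these lines would let the subquery seek rather than scan (a sketch, with a hypothetical index name, assuming the column names shown):

CREATE INDEX IX_CustomerSales_Customer_Date
    ON dbo.CustomerSales (CustomerID, Order_Date)
    INCLUDE (Spend_Value);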
I have a Postgres database schema groceries.
There are two tables, purchases_2019 and purchases_2020, connected through a third one, categories.
I can join each table on its own with categories without a problem.
To calculate the year change I need 2019 and 2020 together.
It seems the problem is that the third table, categories, has only one foreign key column for both tables. Thus it returns a column of zeros every time, because there is no match for one of the tables. Maybe I am wrong.
Any suggestions to query the tables?
More info below.
The groceries database has a subset dairies: 'whole milk', 'yogurt', 'domestic eggs'.
There are no clear primary keys.
I share the database file with this link:
https://drive.google.com/drive/folders/1BBXr-il7rmDkHAukETUle_ZYcDC7t44v?usp=sharing
I want to answer:
For each month of 2020, what was the percentage increase or decrease in total monthly dairy purchases compared to the same month in 2019 (i.e., the year_change)?
How can I do this?
I have tried different queries along this line:
SELECT
a.month,
COUNT(a.purchaseid) as sales_2020,
COUNT(b.purchase_id) as sales_2019,
ROUND(((CAST(COUNT(purchaseid) as decimal) /
(SELECT COUNT(purchaseid) FROM purchases_2020)) * 100), 2)
as market_share,
(COUNT(a.purchaseid) - COUNT(b.purchase_id) ) as year_change
FROM purchases_2020 as a
Left Outer Join categories as cat ON a.purchaseid = cat.purchase_id
Left Outer Join purchases_2019 as b ON cat.purchase_id = b.purchase_id
WHERE cat.category in ('whole milk','yogurt', 'domestic eggs')
GROUP BY a.month
ORDER BY a.month
;
It gives me either no result or a result with an empty sales_2019 column.
The expected result is a table
with the monthly dairy sales for 2020, the monthly market share of dairies among all products in 2020, and the monthly year change between 2019 and 2020 as a percentage.
How can I calculate the year change?
Thanks for your help.
%%sql
postgresql:///groceries
with p2019Sales as (
select
month,
count(p.purchase_id) as total_sales
from purchases_2019 p
left join categories c
using (purchase_id)
where c.category in ('whole milk', 'yogurt' ,'domestic eggs')
group by month
order by month
),
mkS as (
select
cast(extract(month from fulldate::date)as int) as month,
count(*) as total_share
from purchases_2020
group by month
order by month
),
p2020Sales as (
select
cast(extract(month from fulldate::date)as int) as month,
count(p.purchaseid) as total_sales,
round(count(p.purchaseid)*100::numeric/ m.total_share,2) as market_share,
sum(count(*)) over() as tos
from purchases_2020 p
left join categories c
on p.purchaseid = c.purchase_id
left join mkS m
on cast(extract(month from p.fulldate::date)as int) = m.month
where c.category in ('whole milk', 'yogurt' ,'domestic eggs')
group by 1,m.total_share
order by 1,m.total_share
),
finalSale as (
select
month,
p2.total_sales,
p2.market_share,
round((p2.total_sales - p1.total_sales)*100::numeric/p1.total_sales,2) as year_change
from p2019Sales p1
inner join p2020Sales p2
using(month)
)
select *
from finalSale
The answer of user18262778 is excellent, but as Jeremy Caney states:
"add additional details that will help others understand how this addresses the question asked."
So here are some details.
My goal:
get the output I want in one query
My problem:
The query is long and complicated.
There are several approaches to the problem:
joins
subqueries
All are prone to circular dependencies.
The subqueries and joins produce results, but discard data necessary to move further towards the final result.
The solution:
The WITH statement allows you to compute the aggregation once and reference it by name within the query.
Once you know it is the WITH statement you need, there is of course a lot of info on the web. The description below summarises the general benefits of the given solution.
"In PostgreSQL, the WITH query provides a way to write auxiliary statements for use in a larger query. It helps in breaking down complicated and large queries into simpler forms, which are easily readable. These statements often referred to as Common Table Expressions or CTEs, can be thought of as defining temporary tables that exist just for one query.
The WITH query being CTE query, is particularly useful when subquery is executed multiple times. It is equally helpful in place of temporary tables. It computes the aggregation once and allows us to reference it by its name (may be multiple times) in the queries.
The WITH clause must be defined before it is used in the query."
PostgreSQL - WITH Clause
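As a minimal sketch of the pattern (hypothetical table and column names, not the groceries schema):

with monthly_totals as (
    select month, count(*) as total_sales
    from purchases          -- hypothetical table
    group by month
)
select month, total_sales
from monthly_totals
where total_sales > 100;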
I have a list of dates in a SQL Server table, and need to figure out a few separate themes about them:
Firstly, are the dates monthly or quarterly? The dates always start on the first of the month.
E.g. one sequence may be 01/01/13, 01/02/13, 01/03/13, 01/04/13, 01/05/13 therefore monthly (UK)
E.g. another sequence may be 01/12/12, 01/03/13, 01/06/13, 01/09/13, 01/12/13 therefore quarterly (UK)
And secondly (which may be solved by the first), are all the dates present, i.e. no gaps? One way I went about solving this was to say it is either monthly, quarterly, or no idea, but that was in C#.
Thanks
You can use the DATEDIFF() function to compare two dates, and you can use a self-join and the ROW_NUMBER() function to compare dates from different rows:
;WITH cte AS (SELECT *, ROW_NUMBER() OVER (ORDER BY dt) RN
FROM Table1)
SELECT DATEDIFF(day,a.dt,b.dt)
FROM cte a
JOIN cte b
ON a.RN = b.RN-1
If you are using SQL 2012 you can use the LEAD() function to compare values from different rows:
SELECT DATEDIFF(day,dt,LEAD(dt,1) OVER(ORDER BY dt)) AS Days
,DATEDIFF(quarter,dt,LEAD(dt,1) OVER(ORDER BY dt)) AS Quarters
FROM Table2
Demo: SQL Fiddle
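To turn those gaps into the monthly/quarterly answer, one possible follow-up (building on the SQL 2012 LEAD() version, same Table2/dt names, a sketch rather than a tested solution) is to check whether every gap is exactly one or three months:

;WITH gaps AS (SELECT DATEDIFF(month, dt, LEAD(dt,1) OVER (ORDER BY dt)) AS MonthGap
               FROM Table2)
SELECT CASE WHEN MIN(MonthGap) = 1 AND MAX(MonthGap) = 1 THEN 'monthly'
            WHEN MIN(MonthGap) = 3 AND MAX(MonthGap) = 3 THEN 'quarterly'
            ELSE 'gaps or mixed'
       END AS frequency
FROM gaps
WHERE MonthGap IS NOT NULL;

Since the dates always start on the first of the month, DATEDIFF(month, ...) gives an exact gap here.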
I am having trouble calculating the maximum of a row_number in my SQL case.
I will explain it directly on the SQL Fiddle example, as I think it will be faster to understand: SQL Fiddle
Columns 'OrderNumber', 'HourMinute' and 'Code' are just there to represent my table and hence should not be relevant for coding purposes.
Column 'DateOnly' contains the dates
Column 'Phone' contains the phones of my customers
Column 'Purchases' contains the number of times customers have bought in the last 12 months. Note that this value is provided for each date, so the 12 months time period is relative to the date we're evaluating.
Finally, the column I am trying to produce is the 'PREVIOUSPURCHASES' which counts the number of times the figure provided in the column 'Purchases' has appeared in the previous 12 months (for each phone).
You can see on the SQL Fiddle example what I have achieved so far. The column 'PREVIOUSPURCHASES' is producing what I want, however, it is also producing lower values (e.g. only the maximum one is the one I need).
For instance, you can see that rows 4 and 5 are duplicated, one with a 'PREVIOUSPURCHASES' of 1 and the other with 2. I don't want to have the 4th row, in this case.
I have thought about replacing the row_number with something like max(row_number) but I haven't been able to produce it (I already looked at similar posts on Stack Overflow...).
This should be implemented in SQL Server 2012.
Thanks in advance.
I'm not sure what kind of result set you want to see, but is there anything wrong with what's returned by this?
SELECT c.OrderNumber, c.DateOnly, c.HourMinute, c.Code, c.Phone, c.Purchases, MAX(o.PreviousPurchases)
FROM cte c CROSS APPLY (
SELECT t2.DateOnly, t2.Phone,t2.ordernumber, t2.Purchases, ROW_NUMBER() OVER(PARTITION BY c.DateOnly ORDER BY t2.DateOnly) AS PreviousPurchases
FROM CurrentCustomers_v2 t2
WHERE c.Phone = t2.Phone AND t2.purchases<=c.purchases AND DATEDIFF(DAY, t2.DateOnly, c.DateOnly) BETWEEN 0 AND 365
) o
WHERE c.OrderNumber = o.OrderNumber
GROUP BY c.OrderNumber, c.DateOnly, c.HourMinute, c.Code, c.Phone, c.Purchases
ORDER BY c.DateOnly
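The MAX(o.PreviousPurchases) together with the GROUP BY is what collapses the duplicated rows: the APPLY still generates one row per qualifying prior purchase, but the aggregation keeps only the highest row number per order, which is effectively the max(row_number) you were after.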
I have a table with sequential timestamps:
2011-03-17 10:31:19
2011-03-17 10:45:49
2011-03-17 10:47:49
...
I need to find the average time difference between each of these (there could be dozens) in seconds, or whatever is easiest; I can work with it from there. So, for example, the inter-arrival time for only the first two times above would be 870 (14m 30s). For all three times it would be (870 + 120)/2 = 495 (8m 15s).
A note: I am using PostgreSQL 8.1.22.
EDIT: The table I mention above is from a different query that is literally just a one-column list of timestamps
Not sure I understood your question completely, but this might be what you are looking for:
SELECT avg(difference)
FROM (
SELECT timestamp_col - lag(timestamp_col) over (order by timestamp_col) as difference
FROM your_table
) t
The inner query calculates the distance between each row and the preceding row. The result is an interval for each row in the table.
The outer query simply does an average over all differences.
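If you need the result in seconds rather than as an interval (and assuming you are on a version with window functions), you could wrap the average in extract:

SELECT extract(epoch FROM avg(difference)) AS avg_seconds
FROM (
  SELECT timestamp_col - lag(timestamp_col) over (order by timestamp_col) as difference
  FROM your_table
) t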
I think you want to find the average timestamptz.
My solution is avg(current - min value), but since the result is an interval, add it back to the min value again.
SELECT avg(target_col - (select min(target_col) from your_table))
+ (select min(target_col) from your_table)
FROM your_table
If you cannot upgrade to a version of PG that supports window functions, you
may compute your table's sequential steps "the slow way."
Assuming your table is "tbl" and your timestamp column is "ts":
SELECT AVG(t1 - t0)
FROM (
-- All this silliness would be moot if we could use
-- `` lead(ts) over (order by ts) ''
SELECT tbl.ts AS t0,
next.ts AS t1
FROM tbl
CROSS JOIN
tbl next
WHERE next.ts = (
SELECT MIN(ts)
FROM tbl subquery
WHERE subquery.ts > tbl.ts
)
) derived;
But don't do that. Its performance will be terrible. Please do what
a_horse_with_no_name suggests, and use window functions.
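For reference, once you are on a version with window functions (PostgreSQL 8.4+), the comment in the query above becomes the whole solution, something like:

SELECT AVG(t1 - t0)
FROM (
  SELECT ts AS t0,
         lead(ts) OVER (ORDER BY ts) AS t1
  FROM tbl
) derived;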