Create weighted average in SQL using dates

I have a SQL query that lists details about a certain item. Everything works as it should except for the last column. I want the weight of transaction column to report back a difference in days.
So for example the 4th row in the txdate column is 05/21/2014 and the 3rd row is 05/12/2014. The weight of transaction column in the 4th row should say 9.
I read about the LAG and LEAD functions, but I'm not sure how to implement those with dates (if it's even possible). If it isn't possible, is there another way to accomplish this?
Select t.txNumber,
       t.item,
       t.txCode,
       t.txdate,
       (t.onhandlocold + t.stockQty) as 'Ending Quantity',
       tmax.maxtnumber 'Latest Transaction Code',
       tmax.maxdate 'Latest Transaction Date',
       tmin.mindate 'First Transaction Date',
       (t.txdate - tmin.mindate) 'weight of transaction'
From tbliminvtxhistory t
Left Outer Join
    (Select t.item, max(t.txnumber) as maxtnumber, max(t.txdate) as maxdate
     From tbliminvtxHistory t
     Where t.txCode != 'PAWAY'
     Group By Item) tmax
    on t.item = tmax.item
Left Outer Join
    (Select t.item, min(t.txdate) as mindate
     From tbliminvtxHistory t
     Where t.txCode != 'PAWAY'
     and t.txdate > DateAdd(Year, -1, GetDate())
     Group By Item) tmin
    on t.item = tmin.item
Where t.item = 'LR50M'
and t.txCode != 'PAWAY'
and t.txdate > DateAdd(Year, -1, GetDate())

Check out the DATEDIFF function, which will return the difference between two dates.
I think this is what you are looking for:
DATEDIFF(dd,tmin.mindate,t.txdate)
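In the context of your query that would simply replace the (t.txdate - tmin.mindate) column; here is a trimmed-down sketch against your own table and column names (untested, since I don't have your schema):
SELECT t.txNumber,
       t.txdate,
       tmin.mindate AS 'First Transaction Date',
       DATEDIFF(dd, tmin.mindate, t.txdate) AS 'weight of transaction'
FROM tbliminvtxhistory t
LEFT OUTER JOIN
    (SELECT item, MIN(txdate) AS mindate
     FROM tbliminvtxHistory
     WHERE txCode != 'PAWAY'
       AND txdate > DATEADD(YEAR, -1, GETDATE())
     GROUP BY item) tmin
    ON t.item = tmin.item
WHERE t.item = 'LR50M'
  AND t.txCode != 'PAWAY'
  AND t.txdate > DATEADD(YEAR, -1, GETDATE())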
UPDATE:
Now that I understand your question a little better, here is an update. As mentioned in a comment on the above post, the LAG function is only supported in SQL 2012 and up. An alternative is to use ROW_NUMBER and store the results in a temp table. Then you can left join back to the same table on the next ROW_NUMBER in the results, and use your DATEDIFF to compare the dates. This will do the exact same thing as the LAG function.
Example:
SELECT ROW_NUMBER() OVER (ORDER BY txdate) AS RowNumber,*
INTO #Rows
FROM tbliminvtxhistory
SELECT DATEDIFF(dd,r2.txdate,r.txdate),*
FROM #Rows r
LEFT JOIN #Rows r2 ON r.RowNumber=r2.RowNumber+1
DROP TABLE #Rows

I think you are looking for this expression:
Select . . . ,
datediff(day, lag(txdate) over (order by txnumber), txdate)
This assumes that the rows are ordered by the first column, which seems reasonable given your explanation and the sample data.
EDIT:
Without lag() you can use outer apply. For simplicity, let me assume that your query is defined as a CTE:
with cte as (<your query here>)
select . . . ,
datediff(day, prev.txdate, cte.txdate)
from cte outer apply
     (select top 1 cte2.*
      from cte cte2
      where cte2.txnumber < cte.txnumber
      order by cte2.txnumber desc
     ) prev

Related

Delete the records repeated by date, and keep the oldest

I have this query, and it returns the following result. I need to delete the records repeated by date and keep the oldest. How could I do this?
select
a.EMP_ID, a.EMP_DATE,
from
EMPLOYES a
inner join
TABLE2 b on a.table2ID = b.table2ID and b.ID_TYPE = 'E'
where
a.ID = 'VJAHAJHSJHDAJHSJDH'
and year(a.DATE) = 2021
and month(a.DATE) = 1
and a.ID <> 31
order by
a.DATE;
Additionally, I would like to fill in the missing days of the month ... and leave them empty if I don't have that data. Can this be done?
I would appreciate it if you could guide me to solve this problem.
Thank you!
The other answers miss some of the requirements.
Initial step - do this once only. Make a calendar table. This will come in handy for all sorts of things over time:
DECLARE @Year INT = '2000';
DECLARE @YearCnt INT = 50;
DECLARE @StartDate DATE = DATEFROMPARTS(@Year, '01', '01');
DECLARE @EndDate DATE = DATEADD(DAY, -1, DATEADD(YEAR, @YearCnt, @StartDate));
;WITH Cal(n) AS
(
    SELECT 0 UNION ALL SELECT n + 1 FROM Cal
    WHERE n < DATEDIFF(DAY, @StartDate, @EndDate)
),
FnlDt(d, n) AS
(
    SELECT DATEADD(DAY, n, @StartDate), n FROM Cal
),
FinalCte AS
(
    SELECT
        [D]  = CONVERT(DATE, d),
        [Dy] = DATEPART(DAY, d),
        [Mo] = DATENAME(MONTH, d),
        [Yr] = DATEPART(YEAR, d),
        [DN] = DATENAME(WEEKDAY, d),
        [N]  = n
    FROM FnlDt
)
SELECT * INTO Cal FROM FinalCte
ORDER BY [D]
OPTION (MAXRECURSION 0);
credit: mostly this site
Now we can write a simple query to stick your data (with one small addition) onto it:
--your query, minus the date bits in the WHERE, and with a ROW_NUMBER
WITH yourQuery AS(
SELECT a.emp_id, a.emp_date,
ROW_NUMBER() OVER(PARTITION BY CAST(a.emp_date AS DATE) ORDER BY a.emp_date) rn
FROM EMPLOYES a
INNER JOIN TABLE2 b on a.table2ID = b.table2ID
WHERE a.emp_id = 'VJAHAJHSJHDAJHSJDH' AND a.id <> 31 AND b.id_type = 'E'
)
--your query, left joined onto the cal table so that you get a row for every day even if there is no emp data for that day
SELECT c.d, yq.*
FROM
Cal c
LEFT JOIN yourQuery yq
ON
c.d = CAST(yq.emp_date AS DATE) AND --cut the time off
yq.rn = 1 --keep only the earliest time per day
WHERE
c.d BETWEEN '2021-01-01' AND EOMONTH('2021-01-01')
We add a row numbering to your table; it restarts every time the date changes and counts up in order of time. We make this into a CTE (or a subquery; a CTE is cleaner), then we simply left join it to the calendar table. This means that for any date where you don't have data, you still have the calendar date. For any days where you do have data, making the row number rn a condition of the join means that only the first datetime from each day is present in the results.
Note: something is wonky about your question. You said you SELECT a.emp_id and your results show 'VJAHAJHSJHDAJHSJDH' is the emp id, but your where clause says a.id twice, once as a string and once as a number - this can't be right, so I've guessed at fixing it, but I suspect you have translated your query into something for SO, perhaps to hide real column names. Also your SELECT has a dangling comma that is a syntax error.
If you have translated/obscured your real query, make absolutely sure you understand any answer here when translating it back. It's very frustrating when someone comes back and says "hi, your query doesn't work" and it turns out that they damaged it trying to translate it back to their own db, because they hid the real column names in the question.
Finally, do not use functions on table data in a where clause; it generally kills indexing. Always try to find a way of leaving table data alone. Want all of January? Do like I did, and say table.datecolumn BETWEEN firstofjan AND endofjan etc - SQL Server at least stands a chance of using an index for this, rather than calling a function on every date in the table, every time the query is run.
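For example, using the date column from the question (purely illustrative):
-- index-unfriendly: the functions must be evaluated for every row, so an index on a.DATE cannot be seeked
where year(a.DATE) = 2021 and month(a.DATE) = 1
-- index-friendly: the same January filter written as a plain range comparison against constants
where a.DATE >= '2021-01-01' and a.DATE < '2021-02-01'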
You can use ROW_NUMBER
WITH CTE AS
(
SELECT a.EMP_ID, a.EMP_DATE,
RN = ROW_NUMBER() OVER (PARTITION BY a.EMP_ID, CAST(a.DATE as Date) ORDER BY a.DATE ASC)
from EMPLOYES a INNER JOIN TABLE2 b
on a.table2ID = b.table2ID
and b.ID_TYPE = 'E'
where a.ID = 'VJAHAJHSJHDAJHSJDH'
and year(a.DATE) = 2021
and MONTH(a.DATE) = 1
and a.ID <> 31
)
SELECT * FROM CTE
WHERE RN = 1
Try with an aggregate function MAX or MIN
create table #tmp(dt datetime, val numeric(4,2))
insert into #tmp values ('2021-01-01 10:30:35', 1)
insert into #tmp values ('2021-01-02 10:30:35', 2)
insert into #tmp values ('2021-01-02 11:30:35', 3)
insert into #tmp values ('2021-01-03 10:35:35', 4)
select * from #tmp
select tmp.*
from #tmp tmp
inner join
(select max(dt) as dt, cast(dt as date) as dt_aux from #tmp group by cast(dt as date)) compressed_rows on
tmp.dt = compressed_rows.dt
drop table #tmp
results:

SQL with as expression shows multiple results

I am writing a SQL query using a WITH ... AS expression (CTE). I always get the square of the number of rows I expect.
This is my query:
DECLARE @MAX_DATE AS INT
SET @MAX_DATE = (SELECT DATEPART(MONTH,FECHA) FROM ALBVENTACAB WHERE NUMALBARAN IN (SELECT DISTINCT MAX(NUMALBARAN) FROM ALBVENTACAB));
;WITH TABLE_LAST AS (
SELECT CONCAT(DATEPART(MONTH,FECHA),'-',DATEPART(YEAR,FECHA)) as LAST_YEAR_MONTH
,SUM(TOTALNETO) AS LAST_YEAR_VALUE
FROM ALBVENTACAB
WHERE DATEPART(YEAR,CURRENT_TIMESTAMP) -1 = DATEPART(YEAR,FECHA) AND NUMSERIE LIKE 'A%'
AND DATEPART(MONTH,FECHA) <= @MAX_DATE
GROUP BY CONCAT(DATEPART(MONTH,FECHA),'-',DATEPART(YEAR,FECHA))
)
,TABLE_CURRENT AS(
SELECT CONCAT(DATEPART(MONTH,FECHA),'-',DATEPART(YEAR,FECHA)) as CURR_YEAR_MONTH
,SUM(TOTALNETO) AS CURR_YEAR_VALUE
FROM ALBVENTACAB
WHERE DATEPART(YEAR,CURRENT_TIMESTAMP) <= DATEPART(YEAR,FECHA) AND NUMSERIE LIKE 'A%'
GROUP BY CONCAT(DATEPART(MONTH,FECHA),'-',DATEPART(YEAR,FECHA))
)
SELECT *
FROM TABLE_CURRENT, TABLE_LAST
When I run the query, I get exactly the square of the expected number of rows.
I want to compare monthly sales with last year's.
2-2020 814053.3 2-2019 840295.1
1-2020 1094993.65 2-2019 840295.1
3-2020 293927.3 2-2019 840295.1
2-2020 814053.3 1-2019 1050701.68
1-2020 1094993.65 1-2019 1050701.68
3-2020 293927.3 1-2019 1050701.68
2-2020 814053.3 3-2019 887776.1
1-2020 1094993.65 3-2019 887776.1
3-2020 293927.3 3-2019 887776.1
I should get only 3 rows instead of 9 rows.
You need to properly join your two CTEs - the way you're doing it now, you're getting a Cartesian product of every row in one CTE with every row in the other.
Do something like:
;WITH TABLE_LAST AS
( ....
),
TABLE_CURRENT AS
( ....
)
SELECT *
FROM TABLE_CURRENT curr
INNER JOIN TABLE_LAST last ON (some join condition here)
What that join condition is going to be - I have no idea, and cannot tell from your question - but you have to define how these two sets of data "connect" ....
It could be something like:
SELECT *
FROM TABLE_CURRENT curr
INNER JOIN TABLE_LAST last ON curr.CURR_YEAR_MONTH = last.LAST_YEAR_MONTH
or whatever else makes sense in your situation - but basically, you need to somehow "tie together" these two sets of data and get only those rows that make sense - not just every row from "last" combined with every row from "curr" ....
While you already got the answer on how to join the two results, I thought I'd tell you how to typically approach such problems.
From the same table, you want two sums on different conditions (different years that is). You solve this with conditional aggregation, which does just that: aggregate (sum) based on a condition (year).
select
datepart(month, fecha) as month,
sum(case when datepart(year, fecha) = datepart(year, getdate()) then totalneto end) as this_year,
sum(case when datepart(year, fecha) = datepart(year, getdate()) -1 then totalneto end) as last_year
from albventacab
where numserie like 'A%'
and fecha > dateadd(year, -2, getdate())
group by datepart(month, fecha)
order by datepart(month, fecha);

Teradata spool space issue on running a sub query with Count

I am using the below query to calculate business days between two dates for all the order numbers. Business days are already available in the Teradata table Common_WorkingCalendar. But I'm facing a spool space issue when I execute the query, even though I have ample space available in my data lab. I need to optimize the query. I'd appreciate any inputs.
SELECT
tx."OrderNumber",
(SELECT COUNT(1) FROM Common_WorkingCalendar
WHERE CalDate between Cast(tx."TimeStamp" as date) and Cast(mf.ShipDate as date)) as BusDays
from StoreFulfillment ff
inner join StoreTransmission tx
on tx.OrderNumber = ff.OrderNumber
inner join StoreMerchandiseFulfillment mf
on mf.OrderNumber = ff.OrderNumber
This is a very inefficient way to get this count, as it results in a product join.
The recommended approach is adding a sequential number to your calendar which increases only on business days (calculated using SUM(CASE WHEN businessDay THEN 1 ELSE 0 END) OVER (ORDER BY CalDate ROWS UNBOUNDED PRECEDING)); then it's just two joins, one for the start date and one for the end date.
If this calculation is needed a lot, you had better add a new column; otherwise you can do it on the fly:
WITH cte AS
(
SELECT CalDate,
-- as this table only contains business days you can use this instead
ROW_NUMBER() OVER (ORDER BY CalDate) AS DayNo
FROM Common_WorkingCalendar
)
SELECT
tx."OrderNumber",
to_dt.DayNo - from_dt.DayNo AS BusDays
FROM StoreFulfillment ff
INNER JOIN StoreTransmission tx
ON tx.OrderNumber = ff.OrderNumber
INNER JOIN StoreMerchandiseFulfillment mf
ON mf.OrderNumber = ff.OrderNumber
JOIN cte AS from_dt
ON from_dt.CalDate = Cast(tx."TimeStamp" AS DATE)
JOIN cte AS to_dt
ON to_dt.CalDate = Cast(mf.ShipDate AS DATE)
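For completeness, if the calendar table instead held every date plus a business-day flag, the persistent day number described above could be kept with the windowed SUM; IsBusinessDay is an assumed column name here, so adjust it to whatever flag your calendar actually has:
SELECT CalDate,
       -- running count of business days; non-business days just repeat the previous number
       SUM(CASE WHEN IsBusinessDay = 1 THEN 1 ELSE 0 END)
           OVER (ORDER BY CalDate ROWS UNBOUNDED PRECEDING) AS DayNo
FROM Common_WorkingCalendar;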

First date when certain condition was met

I'm trying to find the first date when a condition was met. The logic is below:
use
[AdventureWorksDW2012]
go
;WITH sales AS (
select d.OrderDateKey,
SalesAmount = SUM(d.SalesAmount)
from [dbo].[FactInternetSales] d
group by d.OrderDateKey
having SUM(d.SalesAmount)>10000
)
select FirstOrderDateKey = MIN(OrderDateKey)
from sales
The only problem is that my data is too complex and too huge to calculate the value for each date and then choose the min date when the condition is met. Is there any quick way of finding the first date when the Internet sales amount exceeded 10000? Is some kind of loop required?
You can do it in a single statement as well. The performance can be improved if you have a proper index on OrderDateKey.
select MIN(s.OrderDateKey) as FirstOrderDateKey
from (
    select d.OrderDateKey
    from [dbo].[FactInternetSales] d
    group by d.OrderDateKey
    having SUM(d.SalesAmount) > 10000
) s;
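The "proper index" mentioned above might look something like this (the name and INCLUDE column are only a suggestion):
CREATE INDEX IX_FactInternetSales_OrderDateKey
    ON dbo.FactInternetSales (OrderDateKey) INCLUDE (SalesAmount);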
SELECT TOP 1
d.OrderDateKey
,S.RunningTotal
FROM [dbo].[FactInternetSales] d
CROSS APPLY ( SELECT SUM(SalesAmount) AS RunningTotal
FROM [dbo].[FactInternetSales]
WHERE OrderDateKey <= d.OrderDateKey
) S
WHERE S.RunningTotal < -- Condition
ORDER BY d.OrderDateKey DESC

How to self-join table in a way that every record is joined with the "previous" record?

I have a MS SQL table that contains stock data with the following columns: Id, Symbol, Date, Open, High, Low, Close.
I would like to self-join the table, so I can get a day-to-day % change for Close.
I must create a query that will join the table with itself in a way that every record also contains the data from the previous session (be aware that I cannot use yesterday's date).
My idea is to do something like this:
select * from quotes t1
inner join quotes t2
on t1.symbol = t2.symbol and
t2.date = (select max(date) from quotes where symbol = t1.symbol and date < t1.date)
However, I do not know if that's the correct/fastest way. What should I take into account when thinking about performance? (E.g. will putting a UNIQUE index on the (Symbol, Date) pair improve performance?)
There will be around 100,000 new records every year in this table. I am using MS SQL Server 2008.
One option is to use a recursive cte (if I'm understanding your requirements correctly):
WITH RNCTE AS (
SELECT *, ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY date) rn
FROM quotes
),
CTE AS (
SELECT symbol, date, rn, cast(0 as decimal(10,2)) perc, closed
FROM RNCTE
WHERE rn = 1
UNION ALL
SELECT r.symbol, r.date, r.rn, cast(c.closed/r.closed as decimal(10,2)) perc, r.closed
FROM CTE c
JOIN RNCTE r on c.symbol = r.symbol AND c.rn+1 = r.rn
)
SELECT * FROM CTE
ORDER BY symbol, date
SQL Fiddle Demo
If you need a running total for each symbol to use as the percentage change, it's easy enough to add an additional column for that amount -- I wasn't completely sure what your intentions were, so the above just divides the previous closed amount by the current one.
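If a true day-over-day percent change is what you were after instead of that ratio, only the recursive member's expression needs to change; a sketch with the same aliases as above:
SELECT r.symbol, r.date, r.rn,
       -- (current - previous) / previous, expressed as a percentage
       cast((r.closed - c.closed) / c.closed * 100 as decimal(10,2)) perc,
       r.closed
FROM CTE c
JOIN RNCTE r on c.symbol = r.symbol AND c.rn+1 = r.rn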
Something like this would work in SQLite:
SELECT ..
FROM quotes t1, quotes t2
WHERE t1.symbol = t2.symbol
AND t1.date < t2.date
GROUP BY t2.ID
HAVING t1.date = MAX(t1.date)
Given that SQLite is about the simplest of the bunch, maybe in MSSQL this will also work with minimal changes.
Index on (symbol, date)
SELECT *
FROM quotes q_curr
CROSS APPLY (
SELECT TOP(1) *
FROM quotes
WHERE symbol = q_curr.symbol
AND date < q_curr.date
ORDER BY date DESC
) q_prev
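The index hinted at above (and the UNIQUE variant you asked about) could be created along these lines; the index name is just illustrative:
CREATE UNIQUE INDEX UX_quotes_symbol_date ON quotes (symbol, date);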
You can do something like this:
with OrderedQuotes as
(
select
row_number() over(order by Symbol, Date) RowNum,
ID,
Symbol,
Date,
Open,
High,
Low,
Close
from Quotes
)
select
a.Symbol,
a.Date,
a.Open,
a.High,
a.Low,
a.Close,
b.Date PrevDate,
b.Open PrevOpen,
b.High PrevHigh,
b.Low PrevLow,
b.Close PrevClose,
(a.Close - b.Close) / b.Close PctChange
from OrderedQuotes a
join OrderedQuotes b on a.Symbol = b.Symbol and a.RowNum = b.RowNum + 1
If you change the last join to a left join, you get a row for the first date for each symbol; not sure if you need that.
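That left-join variant would only change the final join; the previous-day columns simply come back NULL for each symbol's first row:
from OrderedQuotes a
left join OrderedQuotes b on a.Symbol = b.Symbol and a.RowNum = b.RowNum + 1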
You can use an option with a CTE and the ROW_NUMBER ranking function:
;WITH cte AS
(
SELECT symbol, date, [Open], [High], [Low], [Close],
ROW_NUMBER() OVER(PARTITION BY symbol ORDER BY date) AS Id
FROM quotes
)
SELECT c1.Id, c1.symbol, c1.date, c1.[Open], c1.[High], c1.[Low], c1.[Close],
ISNULL(c2.[Close] / c1.[Close], 0) AS perc
FROM cte c1 LEFT JOIN cte c2 ON c1.symbol = c2.symbol AND c1.Id = c2.Id + 1
ORDER BY c1.symbol, c1.date
To improve performance (avoiding sorting and RID Lookups), use this index:
CREATE INDEX ix_symbol$date_quotes ON quotes(symbol, date) INCLUDE([Open], [High], [Low], [Close])
Simple demo on SQLFiddle
What you had is fine. I don't know if translating the sub-query into the join will help. However, you asked for it, so the way to do it might be to join the table to itself once more.
select *
from quotes t1
inner join quotes t2
on t1.symbol = t2.symbol and t1.date > t2.date
left outer join quotes t3
on t1.symbol = t3.symbol and t3.date > t2.date and t3.date < t1.date
where t3.date is null
You could do something like this:
DECLARE @Today DATETIME
SELECT @Today = DATEADD(DAY, 0, DATEDIFF(DAY, 0, CURRENT_TIMESTAMP))
;WITH today AS
(
SELECT Id ,
Symbol ,
Date ,
[OPEN] ,
High ,
LOW ,
[CLOSE],
DATEADD(DAY, -1, Date) AS yesterday
FROM quotes
WHERE date = @Today
)
SELECT *
FROM today
LEFT JOIN quotes yesterday ON today.Symbol = yesterday.Symbol
AND today.yesterday = yesterday.Date
That way you limit your "today" results, if that's an option.
EDIT: The CTEs listed in other answers may work well, but I tend to be hesitant to use ROW_NUMBER when dealing with 100K rows or more. If the previous day may not always be yesterday, I prefer to pull the check for the previous day out into its own query and then use it for reference:
DECLARE @Today DATETIME, @PreviousDay DATETIME
SELECT @Today = DATEADD(DAY, 0, DATEDIFF(DAY, 0, CURRENT_TIMESTAMP));
SELECT @PreviousDay = MAX(Date) FROM quotes WHERE Date < @Today;
WITH today AS
(
SELECT Id ,
Symbol ,
Date ,
[OPEN] ,
High ,
LOW ,
[CLOSE]
FROM quotes
WHERE date = @Today
)
SELECT *
FROM today
LEFT JOIN quotes AS previousday
ON today.Symbol = previousday.Symbol
AND previousday.Date = #PreviousDay