I have a temp table #StatusInfo with the following data
+---------+--------------+-------+-------------------------+
| OrderNo | GroupLineNum | Type1 | UpdateDate              |
+---------+--------------+-------+-------------------------+
| Order85 | NULL         | 1     | 2019-11-25 05:15:55.000 |
| Order86 | NULL         | 1     | 2019-11-25 05:15:55.000 |
| Order86 | 2            | 2     | 2019-11-25 05:32:23.773 |
| Order87 | NULL         | 1     | 2019-11-25 05:15:55.000 |
| Order87 | 1            | 2     | 2019-11-25 05:43:37.637 |  B
| Order87 | 2            | 2     | 2019-11-25 05:42:32.390 |  A
| Order88 | NULL         | 1     | 2019-11-25 06:35:13.000 |
| Order88 | 1            | 2     | 2019-11-25 06:39:16.170 |
+---------+--------------+-------+-------------------------+
Any update the user makes to an order is pulled into this temp table. A Type1 value of 2 denotes a 'Required Date' field change by the user, and UpdateDate (the last column) is the timestamp of that change.
I have another temp table #LineInfo with the following data. It is built by joining other tables, including a left join to the table above. The LineNum column below matches the GroupLineNum column above for rows where Type1 = 2.
+---------+-----------+---------+------------+-------------------------+-------+
| OrderNo | RowNumber | LineNum | TotalCost  | ReqDate                 | Type1 |
+---------+-----------+---------+------------+-------------------------+-------+
| Order85 | 1         | 1       | 309.110000 | 2019-10-30 23:59:00.000 | 1     |
| Order85 | 2         | 2       | 265.560000 | 2019-10-30 23:59:00.000 | 1     |
| Order86 | 1         | 1       | 309.110000 | 2019-10-30 23:59:00.000 | 1     |
| Order86 | 2         | 2       | 265.560000 | 2019-12-28 23:59:00.000 | 2     |
| Order87 | 1         | 1       | 309.110000 | 2020-01-31 23:59:00.000 | 2     |
| Order87 | 2         | 2       | 265.560000 | 2020-01-01 23:59:00.000 | 2     |
| Order88 | 1         | 1       | 309.110000 | 2019-11-29 23:59:00.000 | 2     |
| Order88 | 2         | 2       | 265.560000 | 2019-12-31 23:59:00.000 | 2     |
+---------+-----------+---------+------------+-------------------------+-------+
I will be joining #LineInfo with other tables to generate a new table with only one record per OrderNo; it is grouped by OrderNo.
What I need is for the new select query to have a ReqDate column holding the latest ReqDate value for the order.
For example, Order87 has two lines. The user updated Line 2 first at '2019-11-25 05:42:32.390' (the row marked A in the first table), followed by Line 1 at '2019-11-25 05:43:37.637' (marked B).
The new query should return the data from #LineInfo grouped by OrderNo, with the ReqDate taken from the LineNum whose Type1 = 2 row has the maximum UpdateDate.
So in this example, the output for Order87 should have the ReqDate value '2020-01-31 23:59:00.000'.
In short, an order should carry the most recently updated required date; an order can have multiple line items whose ReqDate was updated. If an order has no Type1 = 2 entry in #StatusInfo, then any one ReqDate value from #LineInfo will suffice, e.g. the first line's.
I wrote something like this, but it doesn't pull orders that have no entry in #StatusInfo. Those orders will have a default value even though the user didn't update anything, and I am not sure how to join this result back to #LineInfo to set the latest value.
SELECT SIT.OrderNo, a.max_date, SIT.GroupLineNum
FROM #StatusInfo SIT
INNER JOIN
    (SELECT OrderNo, MAX(UpdateDate) AS max_date
     FROM #StatusInfo SI
     WHERE SI.Type1 = 2
     GROUP BY SI.OrderNo) a
    ON a.OrderNo = SIT.OrderNo AND a.max_date = SIT.UpdateDate
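For reference, here is a minimal sketch of the full requirement in one query, assuming the #StatusInfo and #LineInfo tables exactly as shown above (the aliases o, latest and anyline are just illustrative names). It picks the ReqDate of the line with the newest Type1 = 2 update and falls back to the first line when an order has no such entry:

SELECT o.OrderNo,
       COALESCE(latest.ReqDate, anyline.ReqDate) AS LatestReqDate
FROM (SELECT DISTINCT OrderNo FROM #LineInfo) o
OUTER APPLY (
    -- Line whose Type1 = 2 status row has the newest UpdateDate for this order
    SELECT TOP (1) LLI.ReqDate
    FROM #StatusInfo SIT
    INNER JOIN #LineInfo LLI
        ON LLI.OrderNo = SIT.OrderNo
       AND LLI.LineNum = SIT.GroupLineNum
    WHERE SIT.OrderNo = o.OrderNo
      AND SIT.Type1 = 2
    ORDER BY SIT.UpdateDate DESC
) latest
OUTER APPLY (
    -- Fallback when no Type1 = 2 entry exists: "any one" ReqDate, here the first line's
    SELECT TOP (1) LLI.ReqDate
    FROM #LineInfo LLI
    WHERE LLI.OrderNo = o.OrderNo
    ORDER BY LLI.LineNum
) anyline;

For Order87 this returns '2020-01-31 23:59:00.000', as expected above; for Order85, which has no Type1 = 2 entry, it falls back to Line 1's ReqDate.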
This is what I did. I created the below CTE to load orders with a required-date change, ordered by UpdateDate, and assigned a row number. The record with row number 1 holds the most recently updated date.
;WITH cteLatestReqDate AS ( -- Pull the latest ReqDate the user set: order #StatusInfo by UpdateDate, assign a row number, and carry the matching line's required date
    SELECT SIT.OrderNo, SIT.UpdateDate, SIT.GroupLineNum, LLI.ReqDate,
           ROW_NUMBER() OVER (PARTITION BY SIT.OrderNo ORDER BY SIT.UpdateDate DESC) AS RowNum
    FROM #StatusInfo SIT
    INNER JOIN #LineInfo LLI ON SIT.OrderNo = LLI.OrderNo AND SIT.GroupLineNum = LLI.LineNum
    WHERE SIT.Type1 = 2
)
and then I added the below expression to my select query. The select query below is partial.
SELECT
    CASE WHEN MAX(LRD.ReqDate) IS NULL THEN CAST(FORMAT(MAX(LLI.ReqDate), 'yyMMdd') AS NVARCHAR(10))
         ELSE CAST(FORMAT(MAX(LRD.ReqDate), 'yyMMdd') AS NVARCHAR(10)) END AS LatestReqDate
FROM #LineInfo LLI
LEFT JOIN (SELECT * FROM cteLatestReqDate WHERE RowNum = 1) LRD
    ON LRD.OrderNo = LLI.OrderNo AND LRD.GroupLineNum = LLI.LineNum
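As a side note, since both CASE branches apply the same FORMAT and CAST, the expression can be collapsed with ISNULL (or COALESCE). An equivalent sketch of the same partial query:

SELECT
    -- Prefer the user-updated date, else fall back to the line's own ReqDate
    CAST(FORMAT(ISNULL(MAX(LRD.ReqDate), MAX(LLI.ReqDate)), 'yyMMdd') AS NVARCHAR(10)) AS LatestReqDate
FROM #LineInfo LLI
LEFT JOIN (SELECT * FROM cteLatestReqDate WHERE RowNum = 1) LRD
    ON LRD.OrderNo = LLI.OrderNo AND LRD.GroupLineNum = LLI.LineNum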
I have a table with the following data:
ID | TYPE_ID | CREATED_DT | ROW_NUM
=====================================
123 | 485 | 2019-08-31 | 1
123 | 485 | 2019-05-31 | 2
123 | 485 | 2019-02-28 | 3
123 | 485 | 2018-11-30 | 4
123 | 485 | 2018-08-31 | 5
123 | 485 | 2018-05-31 | 6
123 | 487 | 2019-05-31 | 1
123 | 487 | 2018-05-31 | 2
I would like to select 6 ROW_NUMs for each TYPE_ID; if data is missing, I need to return a NULL value for CREATED_DT. The final result set should look like:
ID | TYPE_ID | CREATED_DT | ROW_NUM
=====================================
123 | 485 | 2019-08-31 | 1
123 | 485 | 2019-05-31 | 2
123 | 485 | 2019-02-28 | 3
123 | 485 | 2018-11-30 | 4
123 | 485 | 2018-08-31 | 5
123 | 485 | 2018-05-31 | 6
123 | 487 | 2019-05-31 | 1
123 | 487 | 2018-05-31 | 2
123 | 487 | NULL | 3
123 | 487 | NULL | 4
123 | 487 | NULL | 5
123 | 487 | NULL | 6
Query:
SELECT
A.*
FROM TBL AS A
WHERE A.ROW_NUM <= 6
UNION ALL
SELECT
B.*
FROM TBL AS B
WHERE B.ROW_NUM NOT IN (SELECT ROW_NUM FROM TBL)
AND B.ROW_NUM <= 6
I tried using UNION ALL and ISNULL to backfill the missing data, but it still returns only the existing rows, not the expected result. I think this can be done in an easy way using a CTE, but I'm not sure how to get it working. Can anyone help me with this?
Assuming at least one record in tbl has all six row numbers (1, 2, 3, 4, 5, 6), and there are no fractions, zeros, or negative numbers...
We get a list of all the distinct Type_IDs and IDs (alias A).
Then we get a distinct list of row numbers less than 7, giving us 6 records (alias B).
We cross join these to ensure each ID & Type_ID combination has all 6 rows.
We then left join back to the base set (tbl) to pick up the dates where they exist; because it is a left join, the rows without a date still persist.
SELECT A.ID, A.Type_ID, C.Created_DT, B.Row_Num
FROM (SELECT DISTINCT ID, Type_ID FROM tbl) A
CROSS JOIN (SELECT distinct row_num from tbl where Row_num < 7) B
LEFT JOIN tbl C
on C.ID = A.ID
and C.Type_ID = A.Type_ID
and C.Row_num = B.Row_num
Giving us:
+----+-----+---------+------------+---------+
| | ID | Type_ID | Created_DT | Row_Num |
+----+-----+---------+------------+---------+
| 1 | 123 | 485 | 2019-08-31 | 1 |
| 2 | 123 | 485 | 2019-05-31 | 2 |
| 3 | 123 | 485 | 2019-02-28 | 3 |
| 4 | 123 | 485 | 2018-11-30 | 4 |
| 5 | 123 | 485 | 2018-08-31 | 5 |
| 6 | 123 | 485 | 2018-05-31 | 6 |
| 7 | 123 | 487 | 2019-05-31 | 1 |
| 8 | 123 | 487 | 2018-05-31 | 2 |
| 9 | 123 | 487 | NULL | 3 |
| 10 | 123 | 487 | NULL | 4 |
| 11 | 123 | 487 | NULL | 5 |
| 12 | 123 | 487 | NULL | 6 |
+----+-----+---------+------------+---------+
Rextester: Example
This also assumes you want rows 1-6 for each combination of Type_ID and ID. If ID is irrelevant, simply exclude it from the join criteria; I included it because it is an ID and looks like part of the key.
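If the assumption above does not hold (no group in tbl actually contains all six row numbers), the middle derived table can be swapped for a fixed series built with a table value constructor; the rest of the query is unchanged. A sketch:

SELECT A.ID, A.Type_ID, C.Created_DT, B.Row_Num
FROM (SELECT DISTINCT ID, Type_ID FROM tbl) A
CROSS JOIN (VALUES (1), (2), (3), (4), (5), (6)) B (Row_Num)  -- fixed 1-6, independent of what exists in tbl
LEFT JOIN tbl C
    ON  C.ID      = A.ID
    AND C.Type_ID = A.Type_ID
    AND C.Row_Num = B.Row_Num;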
Please reference the other answer for how you can do this using a CROSS JOIN - which is pretty neat. Alternatively, we can utilize the programming logic available in MS-SQL to achieve the desired results. The following approach stores distinct ID and TYPE_ID combinations inside a SQL cursor. Then it iterates through the cursor entries to ensure the appropriate amount of data is stored into a temp table. Finally, the SELECT is performed on the temp table and the cursor is closed. Here is a proof of concept that I validated on https://rextester.com/l/sql_server_online_compiler.
-- Create schema for testing
CREATE TABLE Test (
ID INT,
TYPE_ID INT,
CREATED_DT DATE
)
-- Populate data
INSERT INTO Test(ID, TYPE_ID, CREATED_DT)
VALUES
(123,485,'2019-08-31')
,(123,485,'2019-05-31')
,(123,485,'2019-02-28')
,(123,485,'2018-11-30')
,(123,485,'2018-08-31')
,(123,485,'2018-05-31')
,(123,487,'2019-05-31')
,(123,487,'2018-05-31');
-- Create TempTable for output
CREATE TABLE #OutputTable (
ID INT,
TYPE_ID INT,
CREATED_DT DATE,
ROW_NUM INT
)
-- Declare local variables
DECLARE @tempID INT, @tempType INT;
-- Create cursor to iterate ID and TYPE_ID
DECLARE mycursor CURSOR FOR
    SELECT DISTINCT ID, TYPE_ID FROM Test;
OPEN mycursor
-- Populate cursor
FETCH NEXT FROM mycursor
INTO @tempID, @tempType;
-- Loop
WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @count INT = (SELECT COUNT(*) FROM Test WHERE ID = @tempID AND TYPE_ID = @tempType);
    INSERT INTO #OutputTable (ID, TYPE_ID, CREATED_DT, ROW_NUM)
    SELECT ID, TYPE_ID, CREATED_DT, ROW_NUMBER() OVER(ORDER BY ID ASC)
    FROM Test
    WHERE ID = @tempID AND TYPE_ID = @tempType;
    WHILE @count < 6
    BEGIN
        SET @count = @count + 1
        INSERT INTO #OutputTable
        VALUES (@tempID, @tempType, NULL, @count);
    END
    FETCH NEXT FROM mycursor
    INTO @tempID, @tempType;
END
-- Close cursor
CLOSE mycursor;
-- View results
SELECT * FROM #OutputTable;
Note: if a unique combination of ID and TYPE_ID occurs more than 6 times, the additional rows will be included in your final result. If you must show exactly 6 rows per combination, you can change that part of the query to SELECT TOP (6) ..., as sketched below.
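For instance, the first INSERT inside the loop could be capped like this (a sketch; ordering by CREATED_DT descending is an assumption here, to make TOP deterministic and to match the ROW_NUM order in the question's data):

-- Keep at most six rows per ID/TYPE_ID combination, newest dates first
INSERT INTO #OutputTable (ID, TYPE_ID, CREATED_DT, ROW_NUM)
SELECT TOP (6) ID, TYPE_ID, CREATED_DT,
       ROW_NUMBER() OVER (ORDER BY CREATED_DT DESC)
FROM Test
WHERE ID = @tempID AND TYPE_ID = @tempType
ORDER BY CREATED_DT DESC;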
Create a CTE with a number series and CROSS APPLY it:
CREATE TABLE Test (
ID INT,
TYPE_ID INT,
CREATED_DT DATE
)
INSERT INTO Test(ID, TYPE_ID, CREATED_DT)
VALUES
(123,485,'2019-08-31')
,(123,485,'2019-05-31')
,(123,485,'2019-02-28')
,(123,485,'2018-11-30')
,(123,485,'2018-08-31')
,(123,485,'2018-05-31')
,(123,487,'2019-05-31')
,(123,487,'2018-05-31')
;
WITH n(n) AS
(
SELECT 1
UNION ALL
SELECT n+1 FROM n WHERE n < 6
)
,id_n as (
SELECT
DISTINCT
ID
,TYPE_ID
,n
FROM
Test
cross apply n
)
SELECT
id_n.ID
,id_n.TYPE_ID
,test.CREATED_DT
,id_n.n row_num
FROM
id_n
left join
(
select
ID
,TYPE_ID
,CREATED_DT
,ROW_NUMBER() over(partition by id, type_id order by created_dt) rn
from
Test
) Test on Test.ID = id_n.ID and Test.TYPE_ID = id_n.TYPE_ID and id_n.n = test.rn
drop table Test
I have a table in a db that stores Incoming, Outgoing and Net values for various Account Codes over time. Although there is a date field, the sequence of events per Account Code is based on the "Version" number, where 0 is the original record for each Account Code and it increments by 1 after each change to that Account Code.
The Outgoing and Incoming values are stored in the db as cumulative values rather than individual transaction values, but I am looking for a way to SELECT * FROM this table and return the individual amounts as opposed to the cumulative ones.
Below are test scripts of table and data, and also 2 examples.
If I select where Code = '123' in the test table I currently get this (values are cumulative):
+------+------------+---------+---------+---------+-----+
| Code | Date | Version | Incoming| Outgoing| Net |
+------+------------+---------+---------+---------+-----+
| 123 | 01/01/2018 | 0 | 100 | 0 | 100 |
| 123 | 07/01/2018 | 1 | 150 | 0 | 150 |
| 123 | 09/01/2018 | 2 | 150 | 100 | 50 |
| 123 | 14/01/2018 | 3 | 200 | 100 | 100 |
| 123 | 18/01/2018 | 4 | 200 | 175 | 25 |
| 123 | 23/01/2018 | 5 | 225 | 175 | 50 |
| 123 | 30/01/2018 | 6 | 225 | 225 | 0 |
+------+------------+---------+---------+---------+-----+
This is what I would like to see (each individual transaction):
+------+------------+---------+----------+----------+------+
| Code | Date | Version | Incoming | Outgoing | Net |
+------+------------+---------+----------+----------+------+
| 123 | 01/01/2018 | 0 | 100 | 0 | 100 |
| 123 | 07/01/2018 | 1 | 50 | 0 | 50 |
| 123 | 09/01/2018 | 2 | 0 | 100 | -100 |
| 123 | 14/01/2018 | 3 | 50 | 0 | 50 |
| 123 | 18/01/2018 | 4 | 0 | 75 | -75 |
| 123 | 23/01/2018 | 5 | 25 | 0 | 25 |
| 123 | 30/01/2018 | 6 | 0 | 50 | -50 |
+------+------------+---------+----------+----------+------+
If I had the individual transaction values and wanted to report on the cumulative, I would use a SUM() OVER (PARTITION BY ...), but is there an opposite to that?
I am not looking to redesign the table or the process by which it is populated; I am just looking for a way to report on this from our MI environment.
Note: I've added other random Account Codes to emphasise how the data is ordered by Date rather than by Code or Version.
Thanks in advance for any help.
USE [tempdb];
IF EXISTS ( SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'Table1'
AND TABLE_SCHEMA = 'dbo')
DROP TABLE [dbo].[Table1];
GO
CREATE TABLE [dbo].[Table1]
(
[Code] CHAR(3)
,[Date] DATE
,[Version] CHAR(3)
,[Incoming] DECIMAL(20,2)
,[Outgoing] DECIMAL(20,2)
,[Net] DECIMAL(20,2)
);
GO
INSERT INTO [dbo].[Table1] VALUES
('123','2018-01-01','0','100','0','100'),
('456','2018-01-02','0','50','0','50'),
('789','2018-01-03','0','0','0','0'),
('456','2018-01-04','1','100','0','100'),
('456','2018-01-05','2','150','0','150'),
('789','2018-01-06','1','50','50','0'),
('123','2018-01-07','1','150','0','150'),
('456','2018-01-08','3','200','0','200'),
('123','2018-01-09','2','150','100','50'),
('789','2018-01-10','2','0','0','0'),
('456','2018-01-11','4','225','0','225'),
('789','2018-01-12','3','75','25','50'),
('987','2018-01-13','0','0','50','-50'),
('123','2018-01-14','3','200','100','100'),
('654','2018-01-15','0','100','0','100'),
('456','2018-01-16','5','250','0','250'),
('987','2018-01-17','1','50','50','0'),
('123','2018-01-18','4','200','175','25'),
('789','2018-01-19','4','100','25','75'),
('987','2018-01-20','2','150','125','25'),
('321','2018-01-21','0','100','0','100'),
('654','2018-01-22','1','0','0','0'),
('123','2018-01-23','5','225','175','50'),
('321','2018-01-24','1','100','50','50'),
('789','2018-01-25','5','100','50','50'),
('987','2018-01-26','3','150','150','0'),
('456','2018-01-27','6','250','250','0'),
('456','2018-01-28','7','270','250','20'),
('321','2018-01-29','2','100','100','0'),
('123','2018-01-30','6','225','225','0'),
('987','2018-01-31','4','175','150','25')
;
GO
SELECT *
FROM [dbo].[Table1]
WHERE [Code] = '123'
GO
USE [tempdb];
IF EXISTS ( SELECT *
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'Table1'
AND TABLE_SCHEMA = 'dbo')
DROP TABLE [dbo].[Table1];
GO
Just use lag():
select Code, [Date], [Version],
       (Incoming - lag(Incoming, 1, 0) over (partition by Code order by [Date])) as incoming,
       (Outgoing - lag(Outgoing, 1, 0) over (partition by Code order by [Date])) as outgoing,
       (Net - lag(Net, 1, 0) over (partition by Code order by [Date])) as net
from [dbo].[Table1];
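To sanity-check the result, you can run the "opposite" direction the question mentions, a running SUM() OVER (PARTITION BY ...), on top of the lag() output and confirm it reproduces the original cumulative columns. A sketch against the same test table:

-- Re-accumulate the per-transaction deltas; the running totals should match
-- the original cumulative Incoming/Outgoing/Net values row for row.
WITH deltas AS (
    SELECT Code, [Date], [Version],
           Incoming - LAG(Incoming, 1, 0) OVER (PARTITION BY Code ORDER BY [Date]) AS incoming,
           Outgoing - LAG(Outgoing, 1, 0) OVER (PARTITION BY Code ORDER BY [Date]) AS outgoing,
           Net      - LAG(Net, 1, 0)      OVER (PARTITION BY Code ORDER BY [Date]) AS net
    FROM [dbo].[Table1]
)
SELECT Code, [Date], [Version],
       SUM(incoming) OVER (PARTITION BY Code ORDER BY [Date]) AS cumulative_incoming,
       SUM(outgoing) OVER (PARTITION BY Code ORDER BY [Date]) AS cumulative_outgoing,
       SUM(net)      OVER (PARTITION BY Code ORDER BY [Date]) AS cumulative_net
FROM deltas
ORDER BY Code, [Date];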