Table design:
| PeriodStart | Person | Day1 | Day2 | Day3 | Day4 | Day5 | Day6 | Day7 |
-------------------------------------------------------------------------
| 01/01/2018  | 123    | 2    | 4    | 6    | 8    | 10   | 12   | 14   |
| 01/15/2018  | 246    | 1    | 3    | 5    | 7    | 9    | 11   | 13   |
I am trying to create a pivot statement that dynamically transposes each of these rows into one row per day.
Desired output:
| Date       | Person | Values |
--------------------------------
| 01/01/2018 | 123    | 2      |
| 01/02/2018 | 123    | 4      |
| 01/03/2018 | 123    | 6      |
| 01/04/2018 | 123    | 8      |
| 01/05/2018 | 123    | 10     |
| 01/06/2018 | 123    | 12     |
| 01/15/2018 | 246    | 1      |
| 01/16/2018 | 246    | 3      |
| 01/17/2018 | 246    | 5      |
... and so on. Date order is not important.
The following query will help initialize things:
DECLARE @WeekTable TABLE (
[PeriodStart] datetime
, [Person] int
, [Day1] int
, [Day2] int
, [Day3] int
, [Day4] int
, [Day5] int
, [Day6] int
, [Day7] int
)
INSERT INTO @WeekTable (
[PeriodStart],[Person],[Day1],[Day2],[Day3],[Day4],[Day5],[Day6],[Day7]
)
VALUES ('01/01/2018','123','2','4','6','8','10','12','14')
,('01/15/2018','246','1','3','5','7','9','11','13')
Another option is to do it with the APPLY operator:
SELECT DATEADD(DAY, Days - 1, Dates) Date, a.Person, Value
FROM @WeekTable t CROSS APPLY (
    VALUES (PeriodStart, Person, 1, Day1), (PeriodStart, Person, 2, Day2),
           (PeriodStart, Person, 3, Day3), (PeriodStart, Person, 4, Day4),
           (PeriodStart, Person, 5, Day5), (PeriodStart, Person, 6, Day6),
           (PeriodStart, Person, 7, Day7)
) a (Dates, Person, Days, Value)
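Note that, unlike UNPIVOT in the next approach, CROSS APPLY (VALUES ...) also keeps rows where a day column is NULL, which can matter if the week table has gaps.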
You can also use UNPIVOT to turn the day columns back into rows and then parse the day number out of the column name:
with periods as (
    select * from @WeekTable
    unpivot ([Values] for [Day] in (Day1, Day2, Day3, Day4, Day5, Day6, Day7)) x
)
select dateadd(day, convert(int, substring([Day], 4, 1)) - 1, PeriodStart) as [Date], Person, [Values]
from periods
I'm trying to get the successive differences between rows of data in SQL, including a difference from 0 for the first row and a difference back to 0 after the last date, where the rows are grouped by multiple columns.
I have two tables that look like this:
Date                   Value
+------------+-------+ +------------+-------+------+------+
| Date       | Name  | | Date       | Value | Name | Type |
+------------+-------+ +------------+-------+------+------+
| 2019-10-10 | A     | | 2019-10-11 | 10    | A    | X    |
| 2019-10-11 | A     | | 2019-10-12 | 11    | A    | X    |
| 2019-10-12 | A     | | 2019-10-14 | 20    | A    | X    |
| 2019-10-13 | A     | | 2019-10-11 | 10    | A    | Y    |
| 2019-10-14 | A     | | 2019-10-12 | 22    | A    | Y    |
| 2019-10-15 | A     | | 2019-10-14 | 30    | A    | Y    |
| 2019-10-10 | B     | | 2019-10-11 | 10    | B    | X    |
| 2019-10-11 | B     | | 2019-10-12 | 33    | B    | X    |
| 2019-10-12 | B     | | 2019-10-14 | 40    | B    | X    |
| 2019-10-13 | B     | | 2019-10-11 | 10    | B    | Y    |
| 2019-10-14 | B     | | 2019-10-12 | 44    | B    | Y    |
| 2019-10-15 | B     | | 2019-10-15 | 50    | B    | Y    |
+------------+-------+ +------------+-------+------+------+
The Date table holds the universe of dates for the different names. The Value table has values of different types for each name. I'd like to get a set of successive differences for every value, grouped by Name and Type.
The end result I'm looking for is:
+------------+------+------+-------+---------------+------------+
| Date       | Name | Type | Value | PreviousValue | Difference |
+------------+------+------+-------+---------------+------------+
| 2019-10-11 | A    | X    | 10    | 0             | 10         |
| 2019-10-12 | A    | X    | 11    | 10            | 1          |
| 2019-10-14 | A    | X    | 20    | 11            | 9          |
| 2019-10-15 | A    | X    | 0     | 20            | -20        |
| 2019-10-11 | A    | Y    | 10    | 0             | 10         |
| 2019-10-12 | A    | Y    | 22    | 10            | 12         |
| 2019-10-14 | A    | Y    | 30    | 22            | 8          |
| 2019-10-15 | A    | Y    | 0     | 30            | -30        |
| 2019-10-11 | B    | X    | 10    | 0             | 10         |
| 2019-10-12 | B    | X    | 33    | 10            | 23         |
| 2019-10-14 | B    | X    | 40    | 33            | 7          |
| 2019-10-15 | B    | X    | 0     | 40            | -40        |
| 2019-10-11 | B    | Y    | 10    | 0             | 10         |
| 2019-10-12 | B    | Y    | 44    | 10            | 34         |
| 2019-10-15 | B    | Y    | 50    | 44            | 6          |
+------------+------+------+-------+---------------+------------+
Note that the B–Y set of rows illustrates an important point—we might have a value for the last date, in which case there's no need for an "extra" row for that set.
The closest I can get right now is
SELECT
d.[Date],
d.[Name],
v.[Type],
v.[Value],
[PreviousValue] = COALESCE(LAG(v.[Value]) OVER (PARTITION BY d.[Name], v.[Type] ORDER BY d.[Date]), 0),
[Difference] = v.[Value] - COALESCE(LAG(v.[Value]) OVER (PARTITION BY d.[Name], v.[Type] ORDER BY d.[Date]), 0)
FROM
[Dates] d
LEFT JOIN
[Values] v
ON
d.[Date] = v.[Date]
AND d.[Name] = v.[Name]
But this doesn't produce the difference for the last row.
Since some data is missing on either side, you have to make up for it somehow.
One trick is to create the missing data by careful joining.
The example below first joins the types to the Dates data, so that a FULL JOIN with the Values data can also be done on the type.
Then, after adding enough COALESCEs or ISNULLs, calculating the metrics becomes easy.
CREATE TABLE [Dates](
[Date] DATE NOT NULL,
[Name] VARCHAR(8) NOT NULL,
PRIMARY KEY ([Date], [Name])
);
INSERT INTO [Dates]
([Date], [Name]) VALUES
('2019-10-10','A')
, ('2019-10-11','A')
, ('2019-10-12','A')
, ('2019-10-13','A')
, ('2019-10-14','A')
, ('2019-10-15','A')
, ('2019-10-10','B')
, ('2019-10-11','B')
, ('2019-10-12','B')
, ('2019-10-13','B')
, ('2019-10-14','B')
, ('2019-10-15','B')
;
CREATE TABLE [Values](
[Id] INT IDENTITY(1,1) PRIMARY KEY,
[Date] DATE NOT NULL,
[Name] VARCHAR(8) NOT NULL,
[Value] INTEGER NOT NULL,
[Type] VARCHAR(8) NOT NULL
);
INSERT INTO [Values]
([Date], [Value], [Name], [Type]) VALUES
('2019-10-11', 10, 'A', 'X')
, ('2019-10-12', 11, 'A', 'X')
, ('2019-10-14', 20, 'A', 'X')
, ('2019-10-11', 10, 'A', 'Y')
, ('2019-10-12', 22, 'A', 'Y')
, ('2019-10-14', 30, 'A', 'Y')
, ('2019-10-11', 10, 'B', 'X')
, ('2019-10-12', 33, 'B', 'X')
, ('2019-10-14', 40, 'B', 'X')
, ('2019-10-11', 10, 'B', 'Y')
, ('2019-10-12', 44, 'B', 'Y')
, ('2019-10-15', 50, 'B', 'Y')
;
WITH CTE_DATA AS
(
SELECT
[Name] = COALESCE(d.[Name],v.[Name])
, [Type] = COALESCE(tp.[Type],v.[Type])
, [Date] = COALESCE(d.[Date],v.[Date])
, [Value] = ISNULL(v.[Value], 0)
FROM [Dates] AS d
INNER JOIN
(
SELECT [Name], [Type], MAX([Date]) AS [Date]
FROM [Values]
GROUP BY [Name], [Type]
) AS tp
ON tp.[Name] = d.[Name]
FULL JOIN [Values] AS v
ON v.[Date] = d.[Date]
AND v.[Name] = d.[Name]
AND v.[Type] = tp.[Type]
WHERE v.[Type] IS NOT NULL
OR d.[Date] > tp.[Date]
)
SELECT
[Name], [Type], [Date], [Value]
, [PreviousValue] = ISNULL(LAG([Value]) OVER (PARTITION BY [Name], [Type] ORDER BY [Date]), 0)
, [Difference] = [Value] - ISNULL(LAG([Value]) OVER (PARTITION BY [Name], [Type] ORDER BY [Date]), 0)
FROM CTE_DATA
ORDER BY [Name], [Type], [Date]
Name | Type | Date       | Value | PreviousValue | Difference
:--- | :--- | :--------- | ----: | ------------: | ---------:
A    | X    | 2019-10-11 | 10    | 0             | 10
A    | X    | 2019-10-12 | 11    | 10            | 1
A    | X    | 2019-10-14 | 20    | 11            | 9
A    | X    | 2019-10-15 | 0     | 20            | -20
A    | Y    | 2019-10-11 | 10    | 0             | 10
A    | Y    | 2019-10-12 | 22    | 10            | 12
A    | Y    | 2019-10-14 | 30    | 22            | 8
A    | Y    | 2019-10-15 | 0     | 30            | -30
B    | X    | 2019-10-11 | 10    | 0             | 10
B    | X    | 2019-10-12 | 33    | 10            | 23
B    | X    | 2019-10-14 | 40    | 33            | 7
B    | X    | 2019-10-15 | 0     | 40            | -40
B    | Y    | 2019-10-11 | 10    | 0             | 10
B    | Y    | 2019-10-12 | 44    | 10            | 34
B    | Y    | 2019-10-15 | 50    | 44            | 6
Just use lag() with the default value argument:
[PreviousValue] = COALESCE(LAG(v.[Value], 1, 0) OVER (PARTITION BY d.[Name], v.[Type] ORDER BY d.[Date]), 0),
[Difference]    = v.[Value] - COALESCE(LAG(v.[Value], 1, 0) OVER (PARTITION BY d.[Name], v.[Type] ORDER BY d.[Date]), 0)
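For context, a minimal sketch of the full query with those expressions plugged in (same tables and aliases as in the question; the third argument of LAG is the default returned when there is no preceding row in the partition, and the outer COALESCE still guards against lagged NULLs coming from the LEFT JOIN):
SELECT
    d.[Date],
    d.[Name],
    v.[Type],
    v.[Value],
    [PreviousValue] = COALESCE(LAG(v.[Value], 1, 0) OVER (PARTITION BY d.[Name], v.[Type] ORDER BY d.[Date]), 0),
    [Difference]    = v.[Value] - COALESCE(LAG(v.[Value], 1, 0) OVER (PARTITION BY d.[Name], v.[Type] ORDER BY d.[Date]), 0)
FROM [Dates] d
LEFT JOIN [Values] v
    ON d.[Date] = v.[Date]
   AND d.[Name] = v.[Name];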
I have a simple Invoice table that has each item sold and the date it was sold.
Here is some sample data, taken from the base database, counting how many times each item was sold per week:
+------+---------------+------------+---------+
| Week | Item_Number   | Color_Code | Touches |
+------+---------------+------------+---------+
| 1    | 11073900LRGMO | 02000      | 7       |
| 1    | 11073900MEDMO | 02000      | 9       |
| 2    | 1114900011BMO | 38301      | 62      |
| 2    | 1114910012BMO | 21701      | 147     |
| 2    | 1114910012BMO | 38301      | 147     |
| 2    | 1114910012BMO | 46260      | 147     |
| 3    | 13MK430R03R   | 00101      | 2       |
| 3    | 13MK430R03R   | 10001      | 2       |
| 3    | 13MK430R03R   | 65004      | 8       |
| 3    | 13MK430R03S   | 00101      | 2       |
| 3    | 13MK430R03S   | 10001      | 2       |
+------+---------------+------------+---------+
Then I created a matrix out of this data using a dynamic query and the PIVOT operator. Here is how I did that.
First, I create a temporary table:
DECLARE @cols AS NVARCHAR(MAX)
DECLARE @query AS NVARCHAR(MAX)
IF OBJECT_ID('tempdb..#VTable') IS NOT NULL
DROP TABLE #VTable
CREATE TABLE #VTable
(
[Item_Number] NVARCHAR(100),
[Color_Code] NVARCHAR(100),
[Item_Cost] NVARCHAR(100),
[Week] NVARCHAR(10),
[xCount] int
);
Then I insert my data into that table:
INSERT INTO #VTable
(
[Item_Number],
[Color_Code],
[Item_Cost],
[Week],
[xCount]
)
SELECT
*
FROM (
SELECT
Item_Number
,Color_Code
,Item_Cost
,Week
,Count(Item_Number) Touches
FROM (
SELECT
DATEPART (year, I.Date_Invoiced) Year
,DATEPART (month, I.Date_Invoiced) Month
,Concat(CASE WHEN DATEPART (week, I.Date_Invoiced) <10 THEN CONCAT('0',DATEPART (week, I.Date_Invoiced)) ELSE CAST(DATEPART (week, I.Date_Invoiced) AS NVARCHAR) END,'-',RIGHT(DATEPART (year, I.Date_Invoiced),2) ) WEEK
,DATEPART (day, I.Date_Invoiced) Day
,I.Invoice_Number
,I.Customer_Number
,I.Warehouse_Code
,S.Pack_Type
,S.Quantity_Per_Carton
,S.Inner_Pack_Quantity
,LTRIM(RTRIM(ID.Item_Number)) Item_Number
,LTRIM(RTRIM(ID.Color_Code)) Color_Code
,CASE
WHEN ISNULL(s.Actual_Cost, 0) = 0
THEN ISNULL(s.Standard_Cost, 0)
ELSE s.Actual_Cost
END Item_Cost
,ID.Quantity
,case when s.Pack_Type='carton' then id.Quantity/s.Quantity_Per_Carton when s.Pack_Type='Inner Poly' then id.Quantity/s.Inner_Pack_Quantity end qty
,ID.Line_Number
FROM Invoices I
LEFT JOIN Invoices_Detail ID on I.Company_Code = ID.Company_Code and I.Division_Code = ID.Division_Code and I.Invoice_Number = ID.Invoice_Number
LEFT JOIN Style S on I.Company_Code = S.Company_Code and I.Division_Code = S.Division_Code and ID.Item_Number = S.Item_Number and ID.Color_Code = S.Color_Code
WHERE 1=1
AND (I.Company_Code = @LocalCompanyCode OR @LocalCompanyCode IS NULL)
AND (I.Division_Code = @LocalDivisionCode OR @LocalDivisionCode IS NULL)
AND (I.Warehouse_Code = @LocalWarehouse OR @LocalWarehouse IS NULL)
AND (S.Pack_Type = @LocalPackType OR @LocalPackType IS NULL)
AND (I.Customer_Number = @LocalCustomerNumber OR @LocalCustomerNumber IS NULL)
AND (I.Date_Invoiced Between @LocalFromDate And @LocalToDate)
) T
GROUP BY Item_Number,Color_Code,Item_Cost,Week
) TT
Then I use a dynamic query to create the matrix:
select @cols = STUFF((SELECT ',' + QUOTENAME(Week)
from #VTable
group by Week
order by (Right(Week,2) + LEFT(Week,2))
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = '
SELECT
*
FROM (
SELECT Item_Number,Color_Code, Item_Cost,' + @cols + ' from
(
select Item_Number, Color_Code, Item_Cost, week, xCount
from #VTable
) x
pivot
(
sum(xCount)
for week in (' + @cols + ')
) p
)T
'
execute(@query);
This gives me what I am looking for; here is what the matrix looks like:
+---------------+------------+-----------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
| Item_Number | Color_Code | Item_Cost | 36-18 | 37-18 | 38-18 | 39-18 | 40-18 | 41-18 | 42-18 | 43-18 | 44-18 | 45-18 | 46-18 | 47-18 | 48-18 | 49-18 | 50-18 | 51-18 | 52-18 | 53-18 | 01-19 | 02-19 | 03-19 | 04-19 | 05-19 | 06-19 | 07-19 | 08-19 | 09-19 | 10-19 | 11-19 | 12-19 | 13-19 | 14-19 | 15-19 | 16-19 | 17-19 | 18-19 | 19-19 | 20-19 | 21-19 | 22-19 | 23-19 | 24-19 | 25-19 | 26-19 | 27-19 | 28-19 | 29-19 | 30-19 | 31-19 | 32-19 | 33-19 | 34-19 | 35-19 |
+---------------+------------+-----------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
| 11073900LRGMO | 02000 | 8.51 | 1 | NULL | 13 | NULL | 3 | NULL | NULL | 3 | 3 | NULL | 4 | 3 | 6 | NULL | 4 | NULL | NULL | NULL | 7 | 4 | NULL | 3 | 2 | 5 | 30 | 7 | 3 | 10 | NULL | 9 | 19 | 5 | NULL | 10 | 9 | 5 | 2 | 3 | 5 | 4 | 3 | 9 | 7 | NULL | 5 | 1 | 3 | 5 | NULL | NULL | 11 | 7 | 3 |
| 11073900MEDMO | 02000 | 8.49 | 11 | NULL | 22 | NULL | 5 | NULL | NULL | 14 | 4 | NULL | 4 | 3 | 8 | NULL | 9 | NULL | NULL | NULL | 9 | 3 | NULL | 7 | 6 | 4 | 37 | 10 | 8 | 9 | NULL | 7 | 30 | 14 | NULL | 12 | 5 | 7 | 8 | 7 | 2 | 4 | 6 | 15 | 4 | NULL | 2 | 7 | 3 | 7 | NULL | NULL | 11 | 9 | 3 |
| 11073900SMLMO | 02000 | 8.50 | 6 | NULL | 18 | NULL | 3 | NULL | NULL | 3 | 7 | NULL | 5 | NULL | 7 | NULL | 9 | NULL | NULL | NULL | 7 | 4 | NULL | 7 | 2 | 6 | 37 | 9 | 4 | 7 | NULL | 7 | 19 | 7 | NULL | 11 | 5 | 7 | 7 | 2 | 3 | 8 | 8 | 9 | 2 | NULL | 2 | 2 | 2 | 4 | NULL | NULL | 8 | 5 | 4 |
| 11073900XLGMO | 02000 | 8.51 | 2 | NULL | 6 | NULL | 3 | NULL | NULL | 2 | 4 | NULL | 3 | 1 | 3 | NULL | 4 | NULL | NULL | NULL | 4 | 4 | NULL | NULL | 3 | 1 | 27 | 4 | 3 | 4 | NULL | 8 | 11 | 9 | NULL | 7 | 2 | 4 | 1 | 5 | 1 | 6 | 5 | 6 | 1 | NULL | 1 | 3 | NULL | 3 | NULL | NULL | 3 | 4 | 2 |
+---------------+------------+-----------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
The last thing I want to do is find a good way to sort this table. I think the best way to do that would be to sort by which item numbers are picked the most across all weeks. Doing a column-wise sum will give me the total number of touches per week for all items, but I want to do a row-wise sum, with another column at the end that has the touches per item. Does anyone know how I would do this? I've tried messing around with another dynamic query from this link -> (calculate Row Wise Sum - Sql server )
but I couldn't get it to work.
Here is a quick-and-dirty solution based on this answer to do the "coalesce sum" over your wk-yr columns. This does not modify your existing code, but for efficiency it may be better to do as @Sean Lange suggests.
Tested on SQL Server 2017 latest (Linux Docker image).
Input dataset:
(Only 3 wk-yr columns here for simplicity; the code should work on an arbitrary number of columns):
create table WeeklySum (
Item_Number varchar(50),
Color_Code varchar(10),
Item_Cost float,
[36-18] float,
[37-18] float,
[38-18] float
)
insert into WeeklySum (Item_Number, Color_Code, Item_Cost, [36-18], [37-18], [38-18])
values ('11073900LRGMO', '02000', 8.51, 1, NULL, 13),
('11073900MEDMO', '02000', 8.49, 11, NULL, 22),
('11073900SMLMO', '02000', 8.50, 6, NULL, 18),
('11073900XLGMO', '02000', 8.51, 2, NULL, 6);
select * from WeeklySum;
Sample Code:
/* 1. Expression of the sum of coalesce(wk-yr, 0) */
declare @s varchar(max);
-- In short, this query selects the wanted columns by exclusion in sys.columns
-- and then builds the "coalesce sum" over the selected columns in a row.
-- The "@s = coalesce()" expression avoids a redundant '+' at the beginning.
-- NOTE: may have to change sys.columns -> syscolumns for SQL Server 2000
-- or earlier versions
select @s = coalesce(@s + ' + coalesce([' + C.name + '], 0)', 'coalesce([' + C.name + '], 0)')
from sys.columns as C
where C.object_id = (select top 1 object_id from sys.objects
                     where name = 'WeeklySum')
  and C.name not in ('Item_Number', 'Color_Code', 'Item_Cost');
print @s;
/* 2. Perform the sorting query */
declare @sql varchar(max);
set @sql = 'select *, ' + @s + ' as totalCount ' +
           'from WeeklySum ' +
           'order by totalCount desc';
print @sql;
execute(@sql);
Output:
| Item_Number   | Color_Code | Item_Cost | 36-18 | 37-18 | 38-18 | totalCount |
|---------------|------------|-----------|-------|-------|-------|------------|
| 11073900MEDMO | 02000      | 8.49      | 11    | NULL  | 22    | 33         |
| 11073900SMLMO | 02000      | 8.5       | 6     | NULL  | 18    | 24         |
| 11073900LRGMO | 02000      | 8.51      | 1     | NULL  | 13    | 14         |
| 11073900XLGMO | 02000      | 8.51      | 2     | NULL  | 6     | 8          |
Also check the generated expressions in the Messages window:
@s:
coalesce([36-18], 0) + coalesce([37-18], 0) + coalesce([38-18], 0)
@sql:
select *, coalesce([36-18], 0) + coalesce([37-18], 0) + coalesce([38-18], 0) as totalCount from WeeklySum order by totalCount desc
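If you would rather avoid the second dynamic query, here is a sketch (assuming the #VTable and @cols from the question) that carries the row total through the PIVOT itself; because TotalTouches is constant per Item_Number/Color_Code/Item_Cost group, PIVOT treats it as one more grouping column, so it survives into the output and can drive the sort:
set @query = '
SELECT *
FROM (
    select Item_Number, Color_Code, Item_Cost, week, xCount,
           TotalTouches = sum(xCount) over (partition by Item_Number, Color_Code, Item_Cost)
    from #VTable
) x
pivot
(
    sum(xCount)
    for week in (' + @cols + ')
) p
ORDER BY TotalTouches DESC'
execute(@query);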
I have a table that looks like the following.
What I want is for the rows that are continuations of each other to be grouped together, for each ID.
The column IsContinued marks whether the next row should be combined with the current row.
My data looks like this:
+-----+--------+-------------+-----------+----------+
| ID  | Period | IsContinued | StartDate | EndDate  |
+-----+--------+-------------+-----------+----------+
| 123 | 1      | 1           | 20180101  | 20180404 |
+-----+--------+-------------+-----------+----------+
| 123 | 2      | 1           | 20180501  | 20180910 |
+-----+--------+-------------+-----------+----------+
| 123 | 3      | 0           | 20181001  | 20181201 |
+-----+--------+-------------+-----------+----------+
| 123 | 4      | 1           | 20190105  | 20190228 |
+-----+--------+-------------+-----------+----------+
| 123 | 5      | 0           | 20190401  | 20190430 |
+-----+--------+-------------+-----------+----------+
| 456 | 2      | 1           | 20180201  | 20180215 |
+-----+--------+-------------+-----------+----------+
| 456 | 3      | 0           | 20180301  | 20180401 |
+-----+--------+-------------+-----------+----------+
| 456 | 4      | 0           | 20180501  | 20180530 |
+-----+--------+-------------+-----------+----------+
| 456 | 5      | 0           | 20180701  | 20180705 |
+-----+--------+-------------+-----------+----------+
The end result I want is this:
+-----+-------------+-----------+-----------+----------+
| ID  | PeriodStart | PeriodEnd | StartDate | EndDate  |
+-----+-------------+-----------+-----------+----------+
| 123 | 1           | 3         | 20180101  | 20181201 |
+-----+-------------+-----------+-----------+----------+
| 123 | 4           | 5         | 20190105  | 20190430 |
+-----+-------------+-----------+-----------+----------+
| 456 | 2           | 3         | 20180201  | 20180401 |
+-----+-------------+-----------+-----------+----------+
| 456 | 4           | 4         | 20180501  | 20180530 |
+-----+-------------+-----------+-----------+----------+
| 456 | 5           | 5         | 20180701  | 20180705 |
+-----+-------------+-----------+-----------+----------+
DDL Statement:
CREATE TABLE #Period (ID INT, PeriodNr INT, IsContinued INT, STARTDATE DATE, ENDDATE DATE)
INSERT INTO #Period VALUES (123,1,1,'20180101', '20180404'),
(123,2,1,'20180501', '20180910'),
(123,3,0,'20181001', '20181201'),
(123,4,1,'20190105', '20190228'),
(123,5,0,'20190401', '20190430'),
(456,2,1,'20180201', '20180215'),
(456,3,0,'20180301', '20180401'),
(456,4,0,'20180501', '20180530'),
(456,5,0,'20180701', '20180705')
The code should be run on SQL Server 2016.
Thanks!
Here is one approach:
with removeFluff as
(
SELECT *
FROM (
SELECT ID, PeriodNr, IsContinued, STARTDATE, ENDDATE, LAG(IsContinued,1,2) OVER (PARTITION BY ID ORDER BY PERIODNR) Lag
FROM #Period
) A
WHERE (IsContinued <> Lag) OR (IsContinued + Lag = 0)
)
,getValues as
(
SELECT ID,
CASE WHEN LAG(IsContinued) OVER (PARTITION BY ID ORDER BY PeriodNr) = 1 THEN LAG(PeriodNr) OVER (PARTITION BY ID ORDER BY PeriodNr) ELSE PeriodNr END PeriodStart,
PeriodNr PeriodEnd,
CASE WHEN LAG(IsContinued) OVER (PARTITION BY ID ORDER BY PeriodNr) = 1 THEN LAG(STARTDATE) OVER (PARTITION BY ID ORDER BY PeriodNr) ELSE STARTDATE END StartDate,
EndDate,
IsContinued
FROM removeFluff r
)
SELECT ID, PeriodStart, PeriodEnd, StartDate, EndDate
FROM getValues
WHERE IsContinued = 0
Output:
ID   PeriodStart  PeriodEnd  StartDate   EndDate
123  1            3          2018-01-01  2018-12-01
123  4            5          2019-01-05  2019-04-30
456  2            3          2018-02-01  2018-04-01
456  4            4          2018-05-01  2018-05-30
456  5            5          2018-07-01  2018-07-05
Method:
The removeFluff cte removes lines that are unimportant. These are the records that don't start or end a segment (line 2 in your sample data).
Now that the fluff is removed, we know that either:
A.) the line is complete on its own (LAG(IsContinued) ... = 0), i.e. the previous line is complete, or
B.) the line needs the "start" info from the previous line (LAG(IsContinued) ... = 1).
We apply these two cases in the CASE expressions of the getValues cte.
Last, the results are narrowed to only the important rows in the final select with IsContinued = 0. This is because we have used LAG to pull the "start" data onto the "end" row, so we only want to select the end rows.
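For comparison, here is a minimal gaps-and-islands sketch over the same #Period table (an alternative approach, assuming StartDate/EndDate increase with PeriodNr): a running sum starts a new group at every row whose previous row did not continue, and each group is then collapsed with MIN/MAX:
;WITH marked AS
(
    -- remember whether the previous row asked to be continued
    SELECT *, LAG(IsContinued) OVER (PARTITION BY ID ORDER BY PeriodNr) AS PrevCont
    FROM #Period
)
, grouped AS
(
    -- start a new group whenever the previous row ended a chain (or there is no previous row)
    SELECT *, SUM(CASE WHEN PrevCont = 1 THEN 0 ELSE 1 END)
                  OVER (PARTITION BY ID ORDER BY PeriodNr) AS GrpId
    FROM marked
)
SELECT ID
     , MIN(PeriodNr)  AS PeriodStart
     , MAX(PeriodNr)  AS PeriodEnd
     , MIN(STARTDATE) AS StartDate
     , MAX(ENDDATE)   AS EndDate
FROM grouped
GROUP BY ID, GrpId
ORDER BY ID, PeriodStart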
I want to pivot date and time from two joined tables.
Table: Shipping
+----+---------+--------------+--------------+----------------+
| ID | PartNum | ForecastTime | ForecastDate | ForecastNetqty |
+----+---------+--------------+--------------+----------------+
| 1  | x001    | 8:00         | 20180101     | 5              |
| 2  | x001    | 12:00        | 20180101     | 10             |
| 3  | x002    | 12:00        | 20180102     | 15             |
| 4  | x003    | 08:00        | 20180101     | 13             |
| 5  | x003    | 12:00        | 20180103     | 12             |
| 6  | x004    | 8:00         | 20180104     | 10             |
| 7  | x004    | 12:00        | 20180104     | 5              |
| 8  | x005    |              | 20180103     | 5              |
| 9  | x005    | 8:00         | 20180104     | 13             |
| 10 | x005    | 12:00        | 20180104     | 15             |
+----+---------+--------------+--------------+----------------+
Table: Masterdata
+----+----------+--------+------+----------------+
| ID | Material | Shipto | DV   | CusMaterialNum |
+----+----------+--------+------+----------------+
| 1  | 12345    | 11200  | 0101 | x001           |
| 2  | 98765    | 11201  | 0202 | x002           |
| 3  | 45678    | 11202  | 0303 | x003           |
| 4  | 12354    | 11203  | 0404 | x004           |
| 5  | 12365    | 11204  | 0505 | x005           |
+----+----------+--------+------+----------------+
I want to build this report, looping from the min to the max date in ForecastDate:
+---------+----------+--------+------+--------------+----------+----------+----------+----------+
| PartNum | Material | Shipto | DV   | ForecastTime | 20180101 | 20180102 | 20180103 | 20180104 |
+---------+----------+--------+------+--------------+----------+----------+----------+----------+
| x001    | 12345    | 11200  | 0101 | 08:00        | 5        |          |          |          |
|         | 12345    | 11200  | 0101 | 12:00        | 10       |          |          |          |
| x002    | 98765    | 11201  | 0202 | 12:00        |          | 15       |          |          |
| x003    | 45678    | 11202  | 0303 | 08:00        | 13       |          |          |          |
|         | 45678    | 11202  | 0303 | 12:00        |          |          | 12       |          |
| x004    | 12354    | 11203  | 0404 | 08:00        |          |          |          | 10       |
|         | 12354    | 11203  | 0404 | 12:00        |          |          |          | 5        |
| x005    | 12365    | 11204  | 0505 |              |          |          | 5        |          |
|         | 12365    | 11204  | 0505 | 08:00        |          |          |          | 13       |
|         | 12365    | 11204  | 0505 | 12:00        |          |          |          | 15       |
+---------+----------+--------+------+--------------+----------+----------+----------+----------+
If you know all your dates you can use this code:
declare @Shipping table( ID int ,PartNum varchar(100),ForecastTime time, ForecastDate date, ForecastNetqty int);
insert into @Shipping values
( 1 , 'x001' , '8:00' , '20180101' , 5 ),
( 2 , 'x001' , '12:00' , '20180101' , 10 ),
( 3 , 'x002' , '12:00' , '20180102' , 15 ),
( 4 , 'x003' , '8:00' , '20180101' , 13 ),
( 5 , 'x003' , '12:00' , '20180103' , 12 ),
( 6 , 'x004' , '8:00' , '20180104' , 10 ),
( 7 , 'x004' , '12:00' , '20180104' , 5 ),
( 8 , 'x005' , null , '20180103' , 5 ),
( 9 , 'x005' , '8:00' , '20180104' , 13 ),
( 10 , 'x005' , '12:00' , '20180104' , 15 );
declare @Masterdata table( ID int ,Material int, Shipto int, DV varchar(100), CusMaterialNum varchar(100));
insert into @Masterdata values
( 1 , 12345 , 11200 , '0101' , 'x001' ),
( 2 , 98765 , 11201 , '0202' , 'x002' ),
( 3 , 45678 , 11202 , '0303' , 'x003' ),
( 4 , 12354 , 11203 , '0404' , 'x004' ),
( 5 , 12365 , 11204 , '0505' , 'x005' );
with d as
(
select s.PartNum,
m.Material,
m.Shipto,
m.DV,
s.ForecastTime,
s.ForecastDate,
s.ForecastNetqty
from @Shipping s join @Masterdata m
on s.PartNum = m.CusMaterialNum
)
select *
from d pivot (sum(ForecastNetqty) for ForecastDate in ([2018-01-01], [2018-01-02], [2018-01-03], [2018-01-04]))p;
If your dates are dynamic and you cannot directly write in ([2018-01-01], [2018-01-02], [2018-01-03], [2018-01-04]), you should use dynamic SQL to build your IN list.
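Here is a sketch of that dynamic variant, assuming the data sits in permanent Shipping and Masterdata tables (table variables are not visible inside EXECUTE), built with the same STUFF/FOR XML technique used earlier on this page:
declare @cols nvarchar(max), @sql nvarchar(max);
-- build the [yyyy-mm-dd] column list from the distinct dates
select @cols = STUFF((SELECT ',' + QUOTENAME(CONVERT(char(10), ForecastDate, 120))
                      FROM Shipping
                      GROUP BY ForecastDate
                      ORDER BY ForecastDate
                      FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '');
set @sql = N'
with d as
(
    select s.PartNum, m.Material, m.Shipto, m.DV,
           s.ForecastTime, s.ForecastDate, s.ForecastNetqty
    from Shipping s join Masterdata m
      on s.PartNum = m.CusMaterialNum
)
select *
from d pivot (sum(ForecastNetqty) for ForecastDate in (' + @cols + '))p;';
execute(@sql);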
I want to convert columns to rows in SQL Server:
Id  Value  Jan1  Jan2
---------------------
1   2      25    35
2   5      45    45
The result should be:
Id  Value  Month  1   2
-----------------------
1   2      Jan    25  35
2   5      Jan    45  45
How can I get this result? Can anyone please help?
What you are asking seems a little strange. If I extend your example to include columns for Feb1 and Feb2, then I see two options for transposing your columns from this:
+----+-------+------+------+------+------+
| Id | Value | Jan1 | Jan2 | Feb1 | Feb2 |
+----+-------+------+------+------+------+
| 1  | 2     | 25   | 35   | 15   | 28   |
| 2  | 5     | 45   | 45   | 60   | 60   |
+----+-------+------+------+------+------+
Transpose just the month part:
select Id, Value, MonthName, MonthValue1, MonthValue2
from t
cross apply (values ('Jan',Jan1,Jan2),('Feb',Feb1,Feb2)
) v (MonthName,MonthValue1,MonthValue2)
returns:
+----+-------+-----------+-------------+-------------+
| Id | Value | MonthName | MonthValue1 | MonthValue2 |
+----+-------+-----------+-------------+-------------+
| 1  | 2     | Jan       | 25          | 35          |
| 1  | 2     | Feb       | 15          | 28          |
| 2  | 5     | Jan       | 45          | 45          |
| 2  | 5     | Feb       | 60          | 60          |
+----+-------+-----------+-------------+-------------+
Or completely transpose the month columns like so:
select Id, Value, MonthName, MonthValue
from t
cross apply (values ('Jan1',Jan1),('Jan2',Jan2),('Feb1',Feb1),('Feb2',Feb2)
) v (MonthName,MonthValue)
returns:
+----+-------+-----------+------------+
| Id | Value | MonthName | MonthValue |
+----+-------+-----------+------------+
| 1  | 2     | Jan1      | 25         |
| 1  | 2     | Jan2      | 35         |
| 1  | 2     | Feb1      | 15         |
| 1  | 2     | Feb2      | 28         |
| 2  | 5     | Jan1      | 45         |
| 2  | 5     | Jan2      | 45         |
| 2  | 5     | Feb1      | 60         |
| 2  | 5     | Feb2      | 60         |
+----+-------+-----------+------------+
rextester demo: http://rextester.com/KZV45690
This would appear to be:
select Id, Value, 'Jan' as [month], Jan1 as [1], Jan2 as [2]
from t;
You are basically just adding another column to the output.
I don't recommend using numbers as column names, nor SQL Server keywords such as month.
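For example, a variant with friendlier names (the names here are illustrative only):
select Id, Value, 'Jan' as MonthName, Jan1 as Value1, Jan2 as Value2
from t;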
Here is an option where you won't have to specify up to 365 fields:
Declare @YourTable table (Id int,Value int,Jan1 int,Jan2 int,Feb1 int, Feb2 int)
Insert Into @YourTable values
(1, 2, 25, 35, 100, 101),
(2, 5, 45, 45, 200, 201)
Select [Id],[Value],[Month],[1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[13],[14],[15],[16],[17],[18],[19],[20],[21],[22],[23],[24],[25],[26],[27],[28],[29],[30],[31]
From (
Select A.Id
,A.Value
,[Month] = Left(C.Item,3)
,[Col] = substring(C.Item,4,5)
,[Measure] = C.Value
From @YourTable A
Cross Apply (Select XMLData = cast((Select A.* for XML Raw) as xml)) B
Cross Apply (
Select Item = attr.value('local-name(.)','varchar(100)')
,Value = attr.value('.','int')
From B.XMLData.nodes('/row') as A(r)
Cross Apply A.r.nodes('./#*') AS B(attr)
Where attr.value('local-name(.)','varchar(100)') not in ('ID','Value')
) C
) A
Pivot (sum(Measure) For [Col] in ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[13],[14],[15],[16],[17],[18],[19],[20],[21],[22],[23],[24],[25],[26],[27],[28],[29],[30],[31]) ) p
Returns, for the sample rows above (day columns [3] through [31] are all NULL and omitted):
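Id  Value  Month  1    2
1   2      Feb    100  101
1   2      Jan    25   35
2   5      Feb    200  201
2   5      Jan    45   45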