Converting rows from a table into days of the week - SQL
What I thought was going to be a fairly easy task is turning out to be a lot more difficult than I expected. We have several tasks that get performed, sometimes several times per day, so we have a table that gets a row added whenever a user performs the task. What I need is a snapshot of the month with the initials and time of the person that did the task, like this:
The 'activity log' table is pretty simple, it just has the date/time the task was performed along with the user that did it and the scheduled time (the "Pass Time" column in the image); this is the table I need to flatten out into days of the week.
Each 'order' can have one or more 'pass times' and each pass time can have zero or more initials for that day. For example, for pass time 8:00, it can be done several times during that day or not at all.
I have tried standard joins to get the orders and the scheduled pass times with no issues, but getting the days of the week is escaping me. I have tried creating a function to get all the initials for the day and just creating
'select FuncCall() as 1, FuncCall() as 2', etc. for each day of the week but that is a real performance suck.
Does anyone know of a better technique?
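For reference, the per-day function approach looked roughly like this. It is only a hypothetical sketch: dbo.ActivityLog and dbo.GetInitialsForDay are invented names standing in for the real table and function, not actual code from our system.

-- Hypothetical sketch of the slow approach: one scalar function call per day column.
CREATE TABLE dbo.ActivityLog (OrderName nvarchar(10), DateDone date, TimeDone time,
                              Initials nvarchar(4), PassTime nvarchar(8));
GO
CREATE FUNCTION dbo.GetInitialsForDay (@OrderName nvarchar(10), @PassTime nvarchar(8), @Day int)
RETURNS nvarchar(max)
AS
BEGIN
    -- Concatenate the initials logged for this order/pass time on the given day of the month
    -- (month/year filtering omitted to keep the sketch short).
    RETURN STUFF((SELECT ', ' + Initials
                  FROM dbo.ActivityLog
                  WHERE OrderName = @OrderName
                    AND PassTime  = @PassTime
                    AND DAY(DateDone) = @Day
                  FOR XML PATH('')), 1, 2, '');
END;
GO
-- 31 function calls per order/pass-time row, each running its own query,
-- which is why this performs so badly.
SELECT OrderName, PassTime,
       dbo.GetInitialsForDay(OrderName, PassTime, 1) AS [1],
       dbo.GetInitialsForDay(OrderName, PassTime, 2) AS [2],
       dbo.GetInitialsForDay(OrderName, PassTime, 3) AS [3]  -- ... and so on up to [31]
FROM dbo.ActivityLog
GROUP BY OrderName, PassTime;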
Update: I think the comment about PIVOT looks promising, but not quite sure because everything I can find uses an aggregate function in the PIVOT part. So if I have the following table:
create table #MyTable (OrderName nvarchar(10),DateDone date, TimeDone time, Initials nvarchar(4), PassTime nvarchar(8))
insert into #MyTable values('Order 1','2018/6/1','2:00','ABC','1st Pass')
insert into #MyTable values('Order 1','2018/6/1','2:20','DEF','1st Pass')
insert into #MyTable values('Order 1','2018/6/1','4:40','XYZ','2nd Pass')
insert into #MyTable values('Order 1','2018/6/3','5:00','ABC','1st Pass')
insert into #MyTable values('Order 1','2018/6/4','4:00','QXY','2nd Pass')
insert into #MyTable values('Order 1','2018/6/10','2:00','ABC','1st Pass')
select * from #MyTable
pivot () -- Can't figure out what goes here since all examples I see have an aggregate function call such as AVG...
drop table #MyTable
I don't see how to get this output since I am not aggregating anything other than the initials column:
Something like this?
DECLARE @taskTable TABLE(ID INT IDENTITY,Task VARCHAR(100),TaskPerson VARCHAR(100),TaskDate DATETIME);
INSERT INTO @taskTable VALUES
('Task before June 2018','AB','2018-05-15T12:00:00')
,('Task 1','AB','2018-06-03T13:00:00')
,('Task 1','CD','2018-06-04T14:00:00')
,('Task 2','AB','2018-06-05T15:00:00')
,('Task 1','CD','2018-06-06T16:00:00')
,('Task 1','EF','2018-06-06T17:00:00')
,('Task 1','EF','2018-06-06T18:00:00')
,('Task 2','GH','2018-06-07T19:00:00')
,('Task 1','CD','2018-06-07T20:00:00')
,('After June 2018','CD','2018-07-15T21:00:00');
SELECT p.*
FROM
(
SELECT t.Task
,ROW_NUMBER() OVER(PARTITION BY t.Task,CAST(t.TaskDate AS DATE) ORDER BY t.TaskDate) AS Taskindex
,CONCAT(t.TaskPerson,' ',CONVERT(VARCHAR(5),t.TaskDate,114)) AS Content
,DAY(TaskDate) AS ColumnName
FROM @taskTable t
WHERE YEAR(t.TaskDate)=2018 AND MONTH(t.TaskDate)=6
) tbl
PIVOT
(
MAX(Content) FOR ColumnName IN([1],[2],[3],[4],[5],[6],[7],[8],[9],[10]
,[11],[12],[13],[14],[15],[16],[17],[18],[19],[20]
,[21],[22],[23],[24],[25],[26],[27],[28],[29],[30],[31])
) P
ORDER BY P.Task,Taskindex;
The result
+--------+-----------+------+------+----------+----------+----------+----------+----------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+
| Task | Taskindex | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 |
+--------+-----------+------+------+----------+----------+----------+----------+----------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+
| Task 1 | 1 | NULL | NULL | AB 13:00 | CD 14:00 | NULL | CD 16:00 | CD 20:00 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
+--------+-----------+------+------+----------+----------+----------+----------+----------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+
| Task 1 | 2 | NULL | NULL | NULL | NULL | NULL | EF 17:00 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
+--------+-----------+------+------+----------+----------+----------+----------+----------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+
| Task 1 | 3 | NULL | NULL | NULL | NULL | NULL | EF 18:00 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
+--------+-----------+------+------+----------+----------+----------+----------+----------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+
| Task 2 | 1 | NULL | NULL | NULL | NULL | AB 15:00 | NULL | GH 19:00 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
+--------+-----------+------+------+----------+----------+----------+----------+----------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+------+
The first trick is to use the day's index (DAY()) as the column name. The second trick is the ROW_NUMBER(): it adds a running index per task and day, thus replicating the rows per index; otherwise you'd get just one entry per day.
Your input tables will be more complex, but I think this shows the principles...
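To see the second trick in isolation, you can run the inner query on its own (same sample data as above, no PIVOT yet); a sketch:

-- Inner derived table only: on 2018-06-06 'Task 1' has three rows, so ROW_NUMBER()
-- yields Taskindex 1..3 and the later PIVOT keeps all three entries instead of one.
SELECT t.Task
      ,ROW_NUMBER() OVER(PARTITION BY t.Task, CAST(t.TaskDate AS DATE) ORDER BY t.TaskDate) AS Taskindex
      ,CONCAT(t.TaskPerson, ' ', CONVERT(VARCHAR(5), t.TaskDate, 114)) AS Content
      ,DAY(t.TaskDate) AS ColumnName
FROM @taskTable t
WHERE YEAR(t.TaskDate) = 2018 AND MONTH(t.TaskDate) = 6;
-- For ColumnName 6: ('Task 1', 1, 'CD 16:00'), ('Task 1', 2, 'EF 17:00'), ('Task 1', 3, 'EF 18:00')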
UPDATE: So we have to get it even slicker :-D
WITH prepareData AS
(
SELECT t.Task
,t.TaskPerson
,t.TaskDate
,CONVERT(VARCHAR(10),t.TaskDate,126) AS TaskDay
,DAY(t.TaskDate) AS TaskDayIndex
,CONVERT(VARCHAR(5),t.TaskDate,114) AS TimeContent
FROM @taskTable t
WHERE YEAR(t.TaskDate)=2018 AND MONTH(t.TaskDate)=6
)
SELECT p.*
FROM
(
SELECT t.Task
,STUFF((
SELECT ', ' + CONCAT(x.TaskPerson,' ',TimeContent)
FROM prepareData AS x
WHERE x.Task=t.Task
AND x.TaskDay= t.TaskDay
ORDER BY x.TaskDate
FOR XML PATH(''),TYPE
).value(N'.',N'nvarchar(max)'),1,2,'') AS Content
,t.TaskDayIndex
FROM prepareData t
GROUP BY t.Task, t.TaskDay,t.TaskDayIndex
) p
PIVOT
(
MAX(Content) FOR TaskDayIndex IN([1],[2],[3],[4],[5],[6],[7],[8],[9],[10]
,[11],[12],[13],[14],[15],[16],[17],[18],[19],[20]
,[21],[22],[23],[24],[25],[26],[27],[28],[29],[30],[31])
) P
ORDER BY P.Task;
The result
+--------+------+------+----------+----------+----------+------------------------------+----------+------+
| Task | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
+--------+------+------+----------+----------+----------+------------------------------+----------+------+
| Task 1 | NULL | NULL | AB 13:00 | CD 14:00 | NULL | CD 16:00, EF 17:00, EF 18:00 | CD 20:00 | NULL |
+--------+------+------+----------+----------+----------+------------------------------+----------+------+
| Task 2 | NULL | NULL | NULL | NULL | AB 15:00 | NULL | GH 19:00 | NULL |
+--------+------+------+----------+----------+----------+------------------------------+----------+------+
This uses a well-discussed XML trick (STUFF with FOR XML PATH) within a correlated sub-query to get all entries of the same day together as one value. With this combined content you can follow the normal PIVOT path. The MAX() aggregate does not really compute anything, as there is only one value per cell.
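As an aside (not part of the original answer): on SQL Server 2017 and later, STRING_AGG can replace the STUFF/FOR XML PATH construct. A minimal sketch against the same @taskTable:

-- SQL Server 2017+ alternative to STUFF(... FOR XML PATH ...):
SELECT t.Task
      ,DAY(t.TaskDate) AS TaskDayIndex
      ,STRING_AGG(CONCAT(t.TaskPerson, ' ', CONVERT(VARCHAR(5), t.TaskDate, 114)), ', ')
           WITHIN GROUP (ORDER BY t.TaskDate) AS Content
FROM @taskTable t
WHERE YEAR(t.TaskDate) = 2018 AND MONTH(t.TaskDate) = 6
GROUP BY t.Task, CAST(t.TaskDate AS DATE), DAY(t.TaskDate);
-- Feed this into the same PIVOT(MAX(Content) FOR TaskDayIndex IN ([1],...,[31])) as above.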
Related
SQL select rows if contains any keys but not all of keys are null
I have a table like this:

+------+-------+-------+-------+
| row  |   a   |   b   |   c   |
+------+-------+-------+-------+
|   1  |   1   |       |   1   |
|   2  |   2   |       |   2   |
|   3  |       |   3   |   3   |
|   4  |       |   4   |   4   |
|   5  | null  | null  | null  |
+------+-------+-------+-------+

I want to get rid of row 5. My logic so far is

where not (a is null and b is null and c is null)

but it does not remove row 5. If I do

where (a is not null and b is not null and c is not null)

it removes all the rows. I've tried all the possible combinations of and & or that crossed my mind but still cannot get what I'm trying to achieve. Can someone help me?
SELECT yt.*
FROM your_table yt
WHERE COALESCE(a,b,c) IS NOT NULL;

COALESCE returns the first non-NULL value in the list. If all of the columns are NULL it returns NULL, otherwise it returns the first non-NULL value, so the WHERE clause keeps every row that has at least one non-NULL column.
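A minimal, runnable sketch of that filter; the temp table and sample data below are reconstructed from the question (blanks treated as NULLs):

-- Sample data mirroring the question's table
CREATE TABLE #t (row_id int, a int, b int, c int);
INSERT INTO #t VALUES (1,1,NULL,1),(2,2,NULL,2),(3,NULL,3,3),(4,NULL,4,4),(5,NULL,NULL,NULL);

SELECT *
FROM #t
WHERE COALESCE(a, b, c) IS NOT NULL;  -- row 5 is the only row filtered out

DROP TABLE #t;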
Transposing SQL Table Columns to Rows with Count on Each Category
I have a table with 12,000 rows of data. The table is comprised of 7 columns of data (PIDA, NIDA, SIDA, IIPA, RPRP, IORS, DDSN), each column with 4 entry types ("Supported", "Not Supported", "Uncatalogued", or "NULL" entries):

+--------------+-----------+--------------+-----------+
| PIDA         | NIDA      | SIDA         | IIPA      |
+--------------+-----------+--------------+-----------+
| Null         | Supported | Null         | Null      |
| Uncatalogued | Supported | Null         | Null      |
| Supported    | Supported | Uncatalogued | Supported |
| Supported    | Null      | Uncatalogued | Null      |
+--------------+-----------+--------------+-----------+

I would like to generate an output where each entry type is counted for each column, like a column-to-row transpose:

+---------------+------+------+------+------+
| Categories    | PIDA | NIDA | SIDA | IIPA |
+---------------+------+------+------+------+
| Supported     |   10 |   20 |   50 |    1 |
| Non Supported |   30 |   50 |   22 |    5 |
| Uncatalogued  |    5 |   10 |   22 |   22 |
| NULL          |   10 |   11 |   22 |   22 |
+---------------+------+------+------+------+

I'm not having any luck with inline selects or case statements. I have a feeling a little bit of both would be needed to first count and then list each category as a row in the output. Thanks all.
One option is to UNPIVOT your data and then PIVOT the results.

Example:

Select *
 From (
        Select B.*
         From  YourTable A
         Cross Apply ( values (PIDA,'PIDA',1)
                             ,(NIDA,'NIDA',1)
                             ,(SIDA,'SIDA',1)
                             ,(IIPA,'IIPA',1)
                     ) B(Categories,Item,Value)
      ) src
 Pivot ( sum(Value) for Item in ([PIDA],[NIDA],[SIDA],[IIPA]) ) pvt

Results (with small sample size):

Categories      PIDA    NIDA    SIDA    IIPA
NULL            1       1       2       3
Supported       2       3       NULL    1
Uncatalogued    1       NULL    2       NULL
Linear extrapolate values down to 0 from variable starting points
I want to build a query which allows me to flexibly linear-extrapolate a number down to Age 0, starting from the last known value. The table (see below) has two columns, Age and Volume. My last known volume is 321.60 at age 11; how can I linearly extrapolate the 321.60 down to age 0 in annual steps? Also, I would like to design the query in a way which allows the age to change. For example, in another scenario the last volume is at age 27. I have been experimenting with the LEAD function; as a result I can extrapolate the volume at age 10, but the function does not allow me to extrapolate down to 0. How can I design a query which (A) allows me to linearly extrapolate to age 0 and (B) is flexible and allows different starting points for the linear extrapolation?

SELECT [age],
       [volume],
       Concat(CASE
                WHEN volume IS NULL
                THEN ( Lead(volume, 1, 0) OVER (ORDER BY age) ) / ( age + 1 ) * age
              END, volume) AS 'Extrapolate'
FROM   tbl_volume

+-----+--------+-------------+
| Age | Volume | Extrapolate |
+-----+--------+-------------+
|  0  | NULL   | NULL        |
|  1  | NULL   | NULL        |
|  2  | NULL   | NULL        |
|  3  | NULL   | NULL        |
|  4  | NULL   | NULL        |
|  5  | NULL   | NULL        |
|  6  | NULL   | NULL        |
|  7  | NULL   | NULL        |
|  8  | NULL   | NULL        |
|  9  | NULL   | NULL        |
| 10  | NULL   | 292.363     |
| 11  | 321.60 | 321.60      |
| 12  | 329.80 | 329.80      |
| 13  | 337.16 | 337.16      |
| 13  | 343.96 | 343.96      |
| 14  | 349.74 | 349.74      |
+-----+--------+-------------+
If I assume that the value is 0 at age 0, then you can use simple arithmetic. This seems to work in your case:

select t.*,
       coalesce(t.volume, t.age * (t2.volume / t2.age)) as extrapolated_volume
from t cross join
     (select top (1) t2.*
      from t t2
      where t2.volume is not null
      order by t2.age asc
     ) t2;

Here is a db<>fiddle.
You can use a windowing function with an empty OVER() for this kind of thing. As a trivial example:

create table t(j int, k decimal(3,2));
insert t values (1, null), (2, null), (3, 3), (4, 4);

select j, j * avg(k / j) over ()
from t

Note that avg() ignores nulls.
Repeating ID based on
I have a very simple requirement but I'm struggling to find a way around this. I have a very simple query:

SELECT ServiceCode, StartDate, Available, Nights, BookingID
FROM #tmpAvailability
LEFT JOIN vwRSBooking B ON B.Depart = A.StartDate
    AND B.ServiceCode = A.SupplierCode
    AND B.StatusID IN (2640, 2621)
ORDER BY StartDate;

It is made up of two tables. #tmpAvailability consists of the following fields:

SupplierCode
StartDate
Available

vwRSBooking consists of the following fields:

BookingID
DepartDate
Code
Nights
StatusID

Depart and StartDate can be joined to link the first day, and ServiceCode and SupplierCode can be joined to make sure that the availability is linked to the same supplier. This produces an output like this:

Code | Dates      | Available | Nights | BookingID
TEST | 2018-01-04 | 1         | NULL   | NULL
TEST | 2018-01-05 | 1         | NULL   | NULL
TEST | 2018-01-06 | 0         | 4      | 123456
TEST | 2018-01-07 | 0         | NULL   | NULL
TEST | 2018-01-08 | 0         | NULL   | NULL
TEST | 2018-01-09 | 0         | NULL   | NULL
TEST | 2018-01-10 | 1         | NULL   | NULL
TEST | 2018-01-11 | 1         | NULL   | NULL
TEST | 2018-01-12 | 1         | NULL   | NULL
TEST | 2018-01-13 | 0         | NULL   | 234567
TEST | 2018-01-14 | 0         | NULL   | NULL
TEST | 2018-01-15 | 0         | NULL   | NULL

What I need is, when the booking runs for 4 days, for the BookingID and the Nights to be spread across those days, for example:

Code | Dates      | Available | Nights | BookingID
TEST | 2018-01-04 | 1         | NULL   | NULL
TEST | 2018-01-05 | 1         | NULL   | NULL
TEST | 2018-01-06 | 0         | 4      | 123456
TEST | 2018-01-07 | 0         | 4      | 123456
TEST | 2018-01-08 | 0         | 4      | 123456
TEST | 2018-01-09 | 0         | 4      | 123456
TEST | 2018-01-10 | 1         | NULL   | NULL
TEST | 2018-01-11 | 1         | NULL   | NULL
TEST | 2018-01-12 | 1         | NULL   | NULL
TEST | 2018-01-13 | 0         | 3      | 234567
TEST | 2018-01-14 | 0         | 3      | 234567
TEST | 2018-01-15 | 0         | 3      | 234567
TEST | 2018-01-16 | 1         | NULL   | NULL

If anyone has any ideas on how to solve this it would be most appreciated.

Andrew
You could replace your vwRSBooking with another view which uses a CTE to obtain all the dates the booking covers. Then use the view's coverdate for joining to the #tmpAvailability table:

CREATE VIEW vwRSBookingFull
AS
WITH cte (bookingid, nights, depart, code, coverdate)
AS (SELECT bookingid, nights, depart, code, depart
    FROM vwRSBooking
    UNION ALL
    SELECT c.bookingid, c.nights, c.depart, c.code, DATEADD(d, 1, c.coverdate)
    FROM cte c
    WHERE DATEDIFF(d, c.depart, c.coverdate) < (c.nights - 1))
SELECT c.bookingid, c.nights, c.depart, c.code, c.coverdate
FROM cte c
GO
You will need a calendar table with all the dates in the date range your dates may fall into. For this example, I built one for January 2018. We can then join onto this table to create the additional rows. Here is the sample code I used (you can see it at SQL Fiddle):

CREATE TABLE code (
    code varchar(max),
    dates date,
    available int,
    nights int,
    bookingid int
)

INSERT INTO code VALUES
('TEST','2018-01-04','1',NULL,NULL),
('TEST','2018-01-05','1',NULL,NULL),
('TEST','2018-01-06','0',4,123456),
('TEST','2018-01-07','0',NULL,NULL),
('TEST','2018-01-08','0',NULL,NULL),
('TEST','2018-01-09','0',NULL,NULL),
('TEST','2018-01-10','1',NULL,NULL),
('TEST','2018-01-11','1',NULL,NULL),
('TEST','2018-01-12','1',NULL,NULL),
('TEST','2018-01-13','0',3,234567),
('TEST','2018-01-14','0',NULL,NULL),
('TEST','2018-01-15','0',NULL,NULL)

CREATE TABLE dates (
    dates date
)

INSERT INTO dates VALUES
('2018-01-01'),('2018-01-02'),('2018-01-03'),('2018-01-04'),('2018-01-05'),
('2018-01-06'),('2018-01-07'),('2018-01-08'),('2018-01-09'),('2018-01-10'),
('2018-01-11'),('2018-01-12'),('2018-01-13'),('2018-01-14'),('2018-01-15'),
('2018-01-16'),('2018-01-17'),('2018-01-18'),('2018-01-19'),('2018-01-20'),
('2018-01-21'),('2018-01-22'),('2018-01-23'),('2018-01-24'),('2018-01-25'),
('2018-01-26'),('2018-01-27'),('2018-01-28'),('2018-01-29'),('2018-01-30'),
('2018-01-31')

Here is the query based on this dataset:

SELECT code.code, dates.dates, code.available, code.nights, code.bookingid
FROM code
LEFT JOIN dates
    ON dates.dates >= code.dates
    AND dates.dates < DATEADD(DAY, nights, code.dates)

Edit: Here is an example using your initial query as a subquery to join your result set onto the dates table, if you want a copy & paste. It still requires creating the dates table.

SELECT ServiceCode, StartDate, Available, Nights, BookingID
FROM (
    SELECT ServiceCode, StartDate, Available, Nights, BookingID
    FROM #tmpAvailability
    LEFT JOIN vwRSBooking B ON B.Depart = A.StartDate
        AND B.ServiceCode = A.SupplierCode
        AND B.StatusID IN (2640, 2621)
) code
LEFT JOIN dates
    ON dates.dates >= code.dates
    AND dates.dates < DATEADD(DAY, nights, code.dates)
ORDER BY StartDate;
SQL Server: flatten PIVOT result
A PIVOT function I wrote produces the following result set:

Date       | User  | Hour | Result | FIELD1 | FIELD2 | FIELD3 | FIELD4 | FIELD5 | FIELD6
-----------------------------------------------------------------------------------------
2015-06-23 | Pippo | 1    | OK     | NULL   | NULL   | 10     | NULL   | NULL   | NULL
2015-06-23 | Pippo | 1    | OK     | NULL   | 5      | NULL   | NULL   | NULL   | NULL
2015-06-23 | Pippo | 1    | OK     | 1      | NULL   | NULL   | NULL   | NULL   | NULL

Is there a way, for the rows having the same Date, User, Hour, Result values, to aggregate all the FIELD columns into one row as follows:

2015-06-23 | Pippo | 1    | OK     | 1      | 5      | 10     | NULL   | NULL   | NULL

I have tried GROUP BY on (Date, User, Hour, Result) but the PIVOT operator keeps on disaggregating; the same holds for MAX over any of the FIELD# columns. Any idea?
You can use your PIVOT as a subselect and consolidate your results in the main query:

SELECT Date, User, Hour, Result,
       SUM(ISNULL(Field1, 0)) AS Field1,
       SUM(ISNULL(Field2, 0)) AS Field2,
       ...
FROM (
      SELECT ... FROM ... PIVOT ...
     ) Subquery
GROUP BY Date, User, Hour, Result
You have to leave only the necessary columns in your subquery. The PIVOT function produces one output row for each unique combination of ALL the columns in its source, not only the ones used in the pivot.
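A minimal sketch (invented table and data, not from the question) showing why extra columns in the PIVOT source split the output into several rows:

-- Hypothetical demo: RowID in the pivot source forces one output row per source row.
CREATE TABLE #src (RowID int, [Date] date, [User] nvarchar(10), FieldName nvarchar(10), Val int);
INSERT INTO #src VALUES
 (1, '2015-06-23', 'Pippo', 'FIELD1', 1),
 (2, '2015-06-23', 'Pippo', 'FIELD2', 5),
 (3, '2015-06-23', 'Pippo', 'FIELD3', 10);

-- Source keeps RowID -> three pivoted rows (the behaviour described in the question)
SELECT [Date], [User], [FIELD1], [FIELD2], [FIELD3]
FROM (SELECT RowID, [Date], [User], FieldName, Val FROM #src) s
PIVOT (MAX(Val) FOR FieldName IN ([FIELD1], [FIELD2], [FIELD3])) p;

-- Source keeps only the needed columns -> one consolidated row
SELECT [Date], [User], [FIELD1], [FIELD2], [FIELD3]
FROM (SELECT [Date], [User], FieldName, Val FROM #src) s
PIVOT (MAX(Val) FOR FieldName IN ([FIELD1], [FIELD2], [FIELD3])) p;

DROP TABLE #src;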