PowerPivot Rolling Total in Column

I have been asked to look into Response and Resolution times for incidents. Unfortunately the tool we have in place is not helpful in this area. Below is an extract of the data for 2 tickets:
Incident_Id | Status |Begin_Time |End_Time | RowTotal
------------|------------------|------------------|------------------|--------
IM3415346 | Open | 10/03/2017 11:20 | 10/03/2017 11:33 | 787
IM3415346 | Work In Progress | 10/03/2017 11:33 | 10/03/2017 11:55 | 1325
IM3415346 | Work In Progress | 10/03/2017 11:55 | 10/03/2017 13:20 | 5099
IM3415346 | Work In Progress | 10/03/2017 13:20 | 10/03/2017 13:56 | 2133
IM3415346 | Closed | 10/03/2017 13:56 | 10/03/2017 13:56 | 0
IM3415483 | Open | 10/03/2017 12:30 | 10/03/2017 12:39 | 530
IM3415483 | Work In Progress | 10/03/2017 12:39 | 10/03/2017 12:53 | 848
IM3415483 | Work In Progress | 10/03/2017 12:53 | 10/03/2017 14:10 | 4579
IM3415483 | Work In Progress | 10/03/2017 14:10 | 10/03/2017 14:30 | 1199
IM3415483 | Work In Progress | 10/03/2017 14:30 | 10/03/2017 16:55 | 8700
IM3415483 | Closed | 10/03/2017 16:55 | 10/03/2017 16:55 | 0
The tool only reports that a ticket is out of SLA, and the breach gets attributed to the team that closes the ticket, even if they were not the ones responsible for the delay.
Ideally I am trying to get a running total column for each incident next to the RowTotal:
RowTotal | RunningTotal |
---------|--------------|
787 | 786 |
1325 | 2111 |
5099 | 7210 |
2133 | 9343 |
0 | 9343 |
530 | 530 |
848 | 1377 |
4579 | 5956 |
1199 | 7155 |
8700 | 15855 |
0 | 15855 |
I have seen how to make a cumulative total adding up all the times for each incident in turn...
=CALCULATE(SUM(Query[TotalSeconds]),Query[Incident_Id]=EARLIER(Query[Incident_Id]),Query[TotalSeconds]>0)
But I am really struggling with this running total approach. Is there anybody who might have come across this before?

I couldn't see TotalSeconds in the data provided, so I'll use RowTotal instead to illustrate. Feel free to adapt it to your exact use case.
RunningTotal =
CALCULATE (
    SUM ( Query[RowTotal] ),
    FILTER (
        Query,
        Query[Incident_Id] = EARLIER ( Query[Incident_Id] )
            && Query[Begin_Time] <= EARLIER ( Query[Begin_Time] )
    )
)
You're missing the FILTER function needed for the row context to work properly. You also need a filter on Begin_Time so that only the time logged up to the current row is added, rather than all rows for the incident.
Depending on the business logic, you may also need filters on Status or End_Time, but that's beyond this discussion.
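A side note for readers more comfortable with SQL than DAX: the same per-incident running total is just a windowed SUM partitioned by Incident_Id and ordered by Begin_Time. A minimal sketch (not PowerPivot; it assumes the extract has been loaded into a database table also named Query, on an engine that supports window functions):
SELECT Incident_Id, Status, Begin_Time, End_Time, RowTotal,
       SUM(RowTotal) OVER (
           PARTITION BY Incident_Id   -- restart the total for each incident
           ORDER BY Begin_Time        -- accumulate in time order
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
       ) AS RunningTotal
FROM Query
ORDER BY Incident_Id, Begin_Time;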

Related

How can I get a count of all previous records for a user in a table, for each time this user shows up?

Ok, so let's say I have a table like this:
client_calls
+------+-------+---------------------+
| id | userId| last_call_to_client |
+------+-------+---------------------+
| 3004 | 664 | 2013-04-01 |
| 3005 | 664 | 2014-05-09 |
| 3006 | 664 | 2015-12-11 |
| 3007 | 664 | 2021-11-24 |
| 3008 | 664 | 2022-03-05 |
+------+-------+---------------------+
And I need this result, a table that counts how many calls a client got before the date in a specific row:
client_calls_so_far
+------+-------+---------------------+-----------------+
| id | userId| last_call_to_client | calls_so_far |
+------+-------+---------------------+-----------------+
| 3004 | 664 | 2013-04-01 | 0 |
| 3005 | 664 | 2014-05-09 | 1 |
| 3006 | 664 | 2015-12-11 | 2 |
| 3007 | 664 | 2021-11-24 | 3 |
| 3008 | 664 | 2022-03-05 | 4 |
+------+-------+---------------------+-----------------+
How can I do that?
Here is an example; note the order by inside the window, which the frame needs so that each row only counts the calls up to its own date:
select *,
       count(last_call_to_client) over (
           partition by userId
           order by last_call_to_client
           rows between unbounded preceding and current row
       ) - 1 as calls_so_far
from client_calls;
demo : https://dbfiddle.uk/ceEDVUg-

Create summary of data in specified format

I have an SQL query which produces the following table:
+----------------------+-------+------+------+-----+------+
| PAYCODE | HOURS | COST | FROM | TO | ROLE |
+----------------------+-------+------+------+-----+------+
| EU All Paid Time | 5 | 66.5 | 583 | 471 | ASSC |
| DE Basic P | 5 | 66.5 | 583 | 471 | ASSC |
| EU All Hours Average | 4 | 53.2 | 583 | 470 | ASSC |
| EU All Hours Average | 5 | 66.5 | 583 | 471 | ASSC |
| EU All Worked TIme | 5 | 66.5 | 583 | 471 | ASSC |
| DE Basic P | 4 | 53.2 | 583 | 470 | ASSC |
| EU All Paid Time | 4 | 53.2 | 583 | 470 | ASSC |
| EU All Regular | 4 | 53.2 | 583 | 470 | ASSC |
| EU All Regular | 5 | 66.5 | 583 | 471 | ASSC |
| EU All Worked TIme | 4 | 53.2 | 583 | 470 | ASSC |
+----------------------+-------+------+------+-----+------+
I want to change the layout so that the FROM, TO and ROLE columns stay as-is on the left, followed by one set of columns with the paycode names as headers and HOURS underneath, then another set of columns with the paycode names as headers and COST underneath.
How can this be done? I managed to produce separate hours and costs tables using PIVOT, but I want the combined view described above instead.
It's worth noting that the paycode list will vary depending on which ones are earned, so I don't want to hard-code the names in a standard PIVOT.
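Since no answer was posted here: one way to handle the varying paycode list is dynamic SQL with conditional aggregation. Below is a sketch, assuming SQL Server (PIVOT was mentioned) and that the query result has been dumped into a temp table named #pay with the columns shown above; the #pay name and the generated column headers are illustrative only:
DECLARE @hourCols nvarchar(max), @costCols nvarchar(max), @sql nvarchar(max);

-- build one "SUM(CASE ...) AS [paycode Hours]" expression per distinct paycode
SELECT @hourCols = STUFF((
    SELECT ', SUM(CASE WHEN PAYCODE = ' + QUOTENAME(PAYCODE, '''')
         + ' THEN HOURS END) AS ' + QUOTENAME(PAYCODE + ' Hours')
    FROM (SELECT DISTINCT PAYCODE FROM #pay) p
    ORDER BY PAYCODE
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, '');

-- same again for the cost columns
SELECT @costCols = STUFF((
    SELECT ', SUM(CASE WHEN PAYCODE = ' + QUOTENAME(PAYCODE, '''')
         + ' THEN COST END) AS ' + QUOTENAME(PAYCODE + ' Cost')
    FROM (SELECT DISTINCT PAYCODE FROM #pay) p
    ORDER BY PAYCODE
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, '');

SET @sql = N'SELECT [FROM], [TO], ROLE, ' + @hourCols + ', ' + @costCols
         + N' FROM #pay GROUP BY [FROM], [TO], ROLE;';

EXEC sp_executesql @sql;
This produces one row per FROM/TO/ROLE combination, with an hours column and a cost column for every paycode that actually appears in the data.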

Select min and max values while grouped by a third column

I have a table with campaign data and need to get a list of 'spend_perc' min and max values while grouping by the client_id AND timing of these campaigns.
sample data being:
camp_id | client_id | start_date | end_date | spend_perc
7257 | 35224 | 2017-01-16 | 2017-02-11 | 100.05
7284 | 35224 | 2017-01-16 | 2017-02-11 | 101.08
7308 | 35224 | 2017-01-16 | 2017-02-11 | 101.3
7309 | 35224 | 2017-01-16 | 2017-02-11 | 5.8
6643 | 35224 | 2017-02-08 | 2017-02-24 | 79.38
6645 | 35224 | 2017-02-08 | 2017-02-24 | 6.84
6648 | 35224 | 2017-02-08 | 2017-02-24 | 100.01
6649 | 78554 | 2017-02-09 | 2017-02-27 | 2.5
6650 | 78554 | 2017-02-09 | 2017-02-27 | 18.5
6651 | 78554 | 2017-02-09 | 2017-02-27 | 98.5
What I'm trying to get is the rows with the min and max 'spend_perc' values for each client_id AND within the same campaign timing (identical start_date/end_date):
camp_id | client_id | start_date | end_date | spend_perc
7308 | 35224 | 2017-01-16 | 2017-02-11 | 101.3
7309 | 35224 | 2017-01-16 | 2017-02-11 | 5.8
6645 | 35224 | 2017-02-08 | 2017-02-24 | 6.84
6648 | 35224 | 2017-02-08 | 2017-02-24 | 100.01
6649 | 78554 | 2017-02-09 | 2017-02-27 | 2.5
6651 | 78554 | 2017-02-09 | 2017-02-27 | 98.5
Something like this?
with a as
(
  select distinct
         camp_id, client_id, start_date, end_date, spend_perc,
         max(spend_perc) over (partition by client_id, start_date, end_date) as max_sp,
         min(spend_perc) over (partition by client_id, start_date, end_date) as min_sp
  from tn
)
select camp_id, client_id, start_date, end_date,
       case when spend_perc = max_sp then max_sp
            when spend_perc = min_sp then min_sp end as spend_perc
from a
order by camp_id, client_id, start_date, end_date, spend_perc
I think you will want to get rid of the camp_id field because that will be meaningless in this case. So you want something like:
SELECT client_id, start_date, end_date,
min(spend_perc) as min_spend_perc, max(spend_perc) as max_spend_perc
FROM mytable
GROUP BY client_id, start_date, end_date;
Group by the criteria you want, and select the min and max as columns for each unique combination of those values (i.e. one row per combination).
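If you also need the rows themselves (including camp_id), as shown in the expected output, one sketch is to compare each row against the windowed MIN/MAX of its group; this assumes a database with window functions and uses the table name tn from the question:
SELECT camp_id, client_id, start_date, end_date, spend_perc
FROM (
    SELECT t.*,
           MIN(spend_perc) OVER (PARTITION BY client_id, start_date, end_date) AS min_sp,
           MAX(spend_perc) OVER (PARTITION BY client_id, start_date, end_date) AS max_sp
    FROM tn t
) x
WHERE spend_perc = min_sp   -- keep the lowest spend_perc in each group
   OR spend_perc = max_sp   -- and the highest
ORDER BY client_id, start_date, end_date, camp_id;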

SQL consecutive rows with a fixed time span between each row

Good people at StackOverflow,
please be so kind as to provide some help...
So what we have here is, let's say, a table of sorts... containing phone calls from customers to some Contact Center (HelpDesk or whatever).
-----------------------------------------------------------------------
| DateD | DateM | Date_Time |EMPL_ID| PHONE_NO |FIRST_REP |
|----------------------------------------------------------------------
|2016-12-12| 2016-12-01| 2016-12-12 15:55| 16652 | 123456789| First |
|2016-12-22| 2016-12-01| 2016-12-22 10:42| 18178 | 123456789| First |
|2016-12-22| 2016-12-01| 2016-12-22 10:54|112981 | 123456789| Repeat |
|2016-12-22| 2016-12-01| 2016-12-22 10:57| 18179 | 123456789| Repeat |
|2016-12-23| 2016-12-01| 2016-12-23 12:27| 16653 | 123456789| Repeat |
|2017-01-05| 2017-01-01| 2017-01-05 15:20| 17896 | 123456789| First |
|2017-01-11| 2017-01-01| 2017-01-11 15:48| 17909 | 123456789| Repeat |
|2017-01-18| 2017-01-01| 2017-01-18 10:07| 18175 | 123456789| Repeat |
|2016-12-03| 2016-12-01| 2016-12-03 20:32| 17745 | 111222333| First |
|2016-12-21| 2016-12-01| 2016-12-21 18:47| 10982 | 111222333| First |
|2016-12-22| 2016-12-01| 2016-12-22 15:53| 17820 | 111222333| Repeat |
|2016-12-28| 2016-12-01| 2016-12-28 13:07| 15976 | 111222333| Repeat |
|2016-12-29| 2016-12-01| 2016-12-29 21:35| 17896 | 111222333| Repeat |
|2016-12-29| 2016-12-01| 2016-12-29 21:46| 15498 | 111222333| Repeat |
|2017-01-02| 2017-01-01| 2017-01-02 16:24| 13117 | 111222333| Repeat |
-----------------------------------------------------------------------
What I would like to do is figure out how many calls are repeated, meaning that the customer called again.
Now the tricky part is that a repeated call is defined as a call that originates from the 'first call' and is repeated consecutively within a time span of 7 days from each interaction after the first call, so for instance:
----------------------------------------------------------------------------
| DateM | DateD | Date_Time |EMPL_ID| PHONE_NO |FIRST_REP |
|---------------------------------------------------------------------------
|2016-12-01 | 2016-12-12 | 2016-12-12 15:55 | 16652 | 123456789 | First |
|2016-12-01 | 2016-12-22 | 2016-12-22 10:42 | 18178 | 123456789 | First |
|2016-12-01 | 2016-12-22 | 2016-12-22 10:54 | 112981| 123456789 | Repeat |
|2016-12-01 | 2016-12-22 | 2016-12-22 10:57 | 18179 | 123456789 | Repeat |
|2016-12-01 | 2016-12-23 | 2016-12-23 12:27 | 16653 | 123456789 | Repeat |
|2017-01-01 | 2017-01-05 | 2017-01-05 15:20 | 17896 | 123456789 | First |
|2017-01-01 | 2017-01-11 | 2017-01-11 15:48 | 17909 | 123456789 | Repeat |
|2017-01-01 | 2017-01-18 | 2017-01-18 10:07 | 18175 | 123456789 | Repeat |
----------------------------------------------------------------------------
we've got:
the 1st 'First' row, which is a First Call with no repeated calls,
the 2nd 'First' row, which is a First Call with 3 repeated calls, as every interaction is within the time span of 7 days from the previous one, starting from the First Call,
the 3rd 'First' row, which is a First Call with 2 repeated calls, just like above.
So what we want to say is that the employee in the 1st row, with ID 16652, generated 0 repeated calls, but the employee with ID 18178, on the other hand, generated 3 repeated calls.
Finally it would be great to have some method that would allow me to create an output like this:
| DateM | DateD | Date_Time |EMP_ID | PHONE_NO |FIRST_REP | DateM_REP | DateD_REP | Date_Time_REP | EMP_ID_REP | PHONE_NO_REP | FIRST_REP_REP
|------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|2016-12-01 | 2016-12-12 | 2016-12-12 15:55 | 16652 | 123456789 | First | null | null | null | null | null | null
|2016-12-01 | 2016-12-22 | 2016-12-22 10:42 | 18178 | 123456789 | First | 2016-12-01 | 2016-12-22 | 2016-12-22 10:54 | 112981 | 123456789 | Repeat
|2016-12-01 | 2016-12-22 | 2016-12-22 10:42 | 18178 | 123456789 | First | 2016-12-01 | 2016-12-22 | 2016-12-22 10:57 | 18179 | 123456789 | Repeat
|2016-12-01 | 2016-12-22 | 2016-12-22 10:42 | 18178 | 123456789 | First | 2016-12-01 | 2016-12-23 | 2016-12-23 12:27 | 16653 | 123456789 | Repeat
|2017-01-01 | 2017-01-05 | 2017-01-05 15:20 | 17896 | 123456789 | First | 2017-01-01 | 2017-01-11 | 2017-01-11 15:48 | 17909 | 123456789 | Repeat
|2017-01-01 | 2017-01-05 | 2017-01-05 15:20 | 17896 | 123456789 | First | 2017-01-01 | 2017-01-18 | 2017-01-18 10:07 | 18175 | 123456789 | Repeat
Please help; I'm not that good at writing CTEs, and I imagine this is the kind of problem that has the potential to be solved with a CTE.
Much obliged,
LuKI.
edit:
CREATE TABLE t_calls
(
[DateM] date,
[DateD] date,
[Date_Time] datetime2(7),
[EMPL_ID] int,
[INTERACTION_ID] numeric(25,0),
[PHONE_NO] numeric(9,0),
[FIRST_REP] varchar(10)
)
Insert Into t_calls
([DateM],[DateD],[Date_Time],[EMPL_ID],[INTERACTION_ID],[PHONE_NO],[FIRST_REP])
Values
('2016-12-01 00:00:00','2016-12-12 00:00:00','2016-12-12 15:55:36',16652,340680165,123456789,'First')
,('2016-12-01 00:00:00','2016-12-22 00:00:00','2016-12-22 10:42:45',18178,343736497,123456789,'First')
,('2016-12-01 00:00:00','2016-12-22 00:00:00','2016-12-22 10:54:46',112981,343750151,123456789,'Repeat')
,('2016-12-01 00:00:00','2016-12-22 00:00:00','2016-12-22 10:57:29',18179,343750151,123456789,'Repeat')
,('2016-12-01 00:00:00','2016-12-23 00:00:00','2016-12-23 12:27:56',16653,344071359,123456789,'Repeat')
,('2017-01-01 00:00:00','2017-01-05 00:00:00','2017-01-05 15:20:47',17896,347063121,123456789,'First')
,('2017-01-01 00:00:00','2017-01-11 00:00:00','2017-01-11 15:48:20',17909,348429965,123456789,'Repeat')
,('2017-01-01 00:00:00','2017-01-18 00:00:00','2017-01-18 10:07:45',18175,350243945,123456789,'Repeat')
,('2016-12-01 00:00:00','2016-12-03 00:00:00','2016-12-03 20:32:37',17745,338392721,111222333,'First')
,('2016-12-01 00:00:00','2016-12-21 00:00:00','2016-12-21 18:47:12',10982,343633967,111222333,'First')
,('2016-12-01 00:00:00','2016-12-22 00:00:00','2016-12-22 15:53:59',17820,343885389,111222333,'Repeat')
,('2016-12-01 00:00:00','2016-12-28 00:00:00','2016-12-28 13:07:19',15976,344944219,111222333,'Repeat')
,('2016-12-01 00:00:00','2016-12-29 00:00:00','2016-12-29 21:35:44',17896,345396945,111222333,'Repeat')
,('2016-12-01 00:00:00','2016-12-29 00:00:00','2016-12-29 21:46:43',15498,345398005,111222333,'Repeat')
,('2017-01-01 00:00:00','2017-01-02 00:00:00','2017-01-02 16:24:12',13117,346045147,111222333,'Repeat')
If I understand your question correctly, you want to know, for every call, how many calls were repeated within 7 days.
SELECT
     a.Date_Time
    ,a.EMPL_ID
    ,a.PHONE_NO
    ,count(b.PHONE_NO) AS repeat_calls          --count any non-null field
    ,min(b.Date_Time)  AS first_repeat_call_at
FROM t_calls a
LEFT JOIN t_calls b
       ON a.PHONE_NO = b.PHONE_NO                                  --same phone
      AND datediff(d, a.Date_Time, b.Date_Time) BETWEEN 0 AND 6    --a repeat comes in within today + 6 days
      AND a.Date_Time < b.Date_Time                                --excludes the row itself and earlier calls
GROUP BY
     a.Date_Time
    ,a.EMPL_ID
    ,a.PHONE_NO
For any call with 0 repeats, there'll be nothing to join, thus nothing to count so repeat_calls = 0 and first_repeat_call_at is NULL.
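Note that the self-join above counts any repeat within 7 days of each call; it does not walk the consecutive 7-day chain described in the question (each repeat within 7 days of the previous interaction, not of the first call). If that chaining is required, here is a recursive-CTE sketch against the t_calls table from the edit; treat it as a starting point rather than a drop-in solution:
;WITH ordered AS (
    -- number the calls per phone number in time order
    SELECT t.*,
           ROW_NUMBER() OVER (PARTITION BY PHONE_NO ORDER BY Date_Time) AS seq
    FROM t_calls t
),
chain AS (
    -- anchor: every 'First' call starts its own chain
    SELECT PHONE_NO,
           seq       AS first_seq,
           EMPL_ID   AS first_empl,
           Date_Time AS first_time,
           seq,
           Date_Time
    FROM ordered
    WHERE FIRST_REP = 'First'
    UNION ALL
    -- recursive step: move to the very next call on the same number and keep
    -- it only if it is a 'Repeat' within 7 days of the previous interaction
    SELECT o.PHONE_NO, c.first_seq, c.first_empl, c.first_time, o.seq, o.Date_Time
    FROM chain c
    JOIN ordered o
      ON  o.PHONE_NO  = c.PHONE_NO
      AND o.seq       = c.seq + 1
      AND o.FIRST_REP = 'Repeat'
      AND o.Date_Time <= DATEADD(day, 7, c.Date_Time)
)
SELECT first_empl      AS EMPL_ID,
       PHONE_NO,
       MIN(first_time) AS first_call_at,
       COUNT(*) - 1    AS repeat_calls        -- subtract the 'First' call itself
FROM chain
GROUP BY PHONE_NO, first_seq, first_empl
ORDER BY PHONE_NO, first_call_at;
For the sample data this should attribute 0 repeats to 16652, 3 to 18178 and 2 to 17896 on 123456789, which matches the description in the question.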

Interpolate every value up to 5 years - SQL Server 2008

I have the below table with Swedish swap rates.
I am making a forward-looking PnL report and therefore want to know what my basis rate would be on the maturity date of a bond, so I can evaluate whether we should reinvest immediately on the maturity date or not.
Let's say bond X matures on 2017-03-01; I would then like to know the interpolated value for that day, based on the 3 month and 6 month rates.
I have around 130 different bonds with different maturity dates up to 5 years.
Is there a smooth way, based on the values below, to interpolate every single day up to 5 years?
name | ccy | price | datedays | timeband | Rate_Date
STIBOR | SEK | -0.562 | 1 | OVERNIGHT | 2016-10-07
STIBOR | SEK | -0.559 | 7 | 1 WEEK | 2016-10-13
STIBOR | SEK | -0.631 | 32 | 1 MONTH | 2016-11-07
STIBOR | SEK | -0.577 | 61 | 2 MONTHS | 2016-12-06
STIBOR | SEK | -0.741 | 95 | 3 MONTHS | 2017-01-09
STIBOR | SEK | -0.349 | 182 | 6 MONTHS | 2017-04-06
SWAP | SEK | -0.499 | 369 | 1 YEAR | 2017-10-10
SWAP | SEK | -0.403 | 734 | 2 YEARS | 2018-10-10
SWAP | SEK | -0.285 | 1099 | 3 YEARS | 2019-10-10
SWAP | SEK | -0.151 | 1467 | 4 YEARS | 2020-10-12
SWAP | SEK | 0.003 | 1831 | 5 YEARS | 2021-10-11
See Value1 and Value2 below, which are interpolated.
Value1 is calculated like this:
Interpolating factor = ( Price(Stibor2M) - Price(Stibor1M) ) / ( datedays(2M) - datedays(1M) )
-->
(-0.577 - -0.631) / (61 - 32) = 0.0018621
So, between Stibor 1M and Stibor 2M, we add this amount every day.
To see how many days there are between the first measure and the desired measure:
Datedays(Value1) - Datedays(Stibor 1M) = 50 - 32 = 18
The interpolated value for 2016-11-25 will then be:
Price(Stibor1M) + 18 * interpolating factor = -0.631 + 18 * 0.0018621 = -0.597483
name | ccy | price | datedays | timeband | Rate_Date
STIBOR | SEK | -0.562 | 1 | OVERNIGHT | 2016-10-07
STIBOR | SEK | -0.559 | 7 | 1 WEEK | 2016-10-13
STIBOR | SEK | -0.631 | 32 | 1 MONTH | 2016-11-07
Value1 | SEK | -0.597 | 50 | - | 2016-11-25
STIBOR | SEK | -0.577 | 61 | 2 MONTHS | 2016-12-06
STIBOR | SEK | -0.741 | 95 | 3 MONTHS | 2017-01-09
Value2 | SEK | -0.607 | 146 | - | 2017-03-01
STIBOR | SEK | -0.349 | 182 | 6 MONTHS | 2017-04-06
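No answer was recorded for this one, but the linear interpolation above maps fairly directly onto SQL. Here is a sketch for SQL Server 2008, assuming the quotes are stored in a table I'll call swap_rates with the columns shown (the table name is an assumption): pair each tenor with the next one, compute that segment's daily factor, and expand each segment into one row per day.
;WITH ordered AS (
    -- number the quoted points by tenor length
    SELECT price, datedays,
           ROW_NUMBER() OVER (ORDER BY datedays) AS rn
    FROM swap_rates
    WHERE ccy = 'SEK'
),
segments AS (
    -- each segment runs from one quoted tenor to the next;
    -- daily_factor is (price difference) / (day difference), as in the worked example
    SELECT a.datedays AS d_from,
           b.datedays AS d_to,
           a.price    AS p_from,
           (b.price - a.price) / ((b.datedays - a.datedays) * 1.0) AS daily_factor
    FROM ordered a
    JOIN ordered b ON b.rn = a.rn + 1
),
days AS (
    -- day offsets 0, 1, 2, ... up to the longest gap between two tenors
    SELECT 0 AS n
    UNION ALL
    SELECT n + 1 FROM days WHERE n < 400
)
SELECT s.d_from + d.n                  AS datedays,
       s.p_from + d.n * s.daily_factor AS price
FROM segments s
JOIN days d ON d.n < s.d_to - s.d_from          -- stay inside the segment
UNION ALL
SELECT datedays, price                          -- keep the final quoted point (5 years)
FROM swap_rates
WHERE ccy = 'SEK'
  AND datedays = (SELECT MAX(datedays) FROM swap_rates WHERE ccy = 'SEK')
ORDER BY datedays
OPTION (MAXRECURSION 500);
For 2016-11-25 (datedays 50) this gives -0.631 + 18 * 0.0018621 = -0.597, matching Value1 above; datedays can be turned back into a calendar date with DATEADD if needed.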