Calculate Consecutive Concurrent Calls SQL Server - sql

I just have basic SQL skills, and I'm hoping someone can help me out. I am using SQL Server and trying to come up with a query to calculate the number of concurrent calls happening at the same time per day. My company only has a license for 300 concurrent calls, and we're trying to find the maximum point we reach each day. Basically, if 3 people are on a call at 9:00 AM and all 3 calls end at 9:15, the count would be 3. If another call starts at 9:05 AM and ends at 9:20 AM, the count is now 4, but at 9:16 AM the count would only be 1.
I have a table (conferencecall2) with the following columns:
CallID, UniqueCallID, Jointime, Leavetime
We get about 5,000-6,000 calls per day.
Below is a sample of the data.

The key here is to have (or generate) a table with one row for each time period. Then it's a simple APPLY or scalar subquery:
select t.minute, c.calls
from time_table_with_one_row_per_minute t
cross apply
(
select count(*) calls
from calls c
where t.Minute >= c.JoinTime
and t.Minute <= c.LeaveTime
) c
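If you don't already have such a table, here is a minimal sketch of generating one row per minute for a single day and applying the same count, using the question's conferencecall2 table (the @day variable is just an assumed parameter):

declare @day date = '2021-09-01';  -- assumed parameter: the day to report on

;with minutes as (
    select cast(@day as datetime) as minute_start
    union all
    select dateadd(minute, 1, minute_start)
    from minutes
    where minute_start < dateadd(day, 1, cast(@day as datetime))
)
select m.minute_start, c.calls
from minutes m
cross apply (
    select count(*) as calls
    from conferencecall2 cc
    where m.minute_start >= cc.Jointime
      and m.minute_start <= cc.Leavetime
) c
option (maxrecursion 0);  -- 1,440 minutes exceeds the default recursion limit of 100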

You can do this by unpivoting the columns, then using window functions:
select x.call_time, sum(sum(x.cnt_calls)) over(order by x.call_time) as cnt
from conferencecall2 c
cross apply (values (c.jointime, 1), (c.leavetime, -1)) as x(call_time, cnt_calls)
group by x.call_time
This solution scans the table only once, so I would expect it to perform efficiently over a large dataset.
Edit: you can get the peak of concurrent calls per day with another level of subquery:
select convert(date, call_time) as call_day, max(cnt) as peak_cnt
from (
select x.call_time, sum(sum(x.cnt_calls)) over(order by x.call_time) as cnt
from conferencecall2 c
cross apply (values (c.jointime, 1), (c.leavetime, -1)) as x(call_time, cnt_calls)
group by x.call_time
) c
group by convert(date, call_time)
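To illustrate with the example from the question (three calls from 9:00 to 9:15 and a fourth from 9:05 to 9:20), the CROSS APPLY turns each call into a +1 event at its join time and a -1 event at its leave time, and the running sum of those events is the concurrency at each moment:

-- call_time   events   running cnt
-- 09:00         +3        3
-- 09:05         +1        4
-- 09:15         -3        1
-- 09:20         -1        0
-- The daily MAX of cnt (4 here) is the peak returned by the query above.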
Edit 2
If you want to filter, then you need to do that in the outer query:
select convert(date, call_time) as call_day, max(cnt) as peak_cnt
from (
select x.call_time, sum(sum(x.cnt_calls)) over(order by x.call_time) as cnt
from conferencecall2 c
cross apply (values (c.jointime, 1), (c.leavetime, -1)) as x(call_time, cnt_calls)
group by x.call_time
) c
where call_time >= @starttime and call_time < @endtime
group by convert(date, call_time)

Related

How to have different restrictions to calculate max(Date) and min(Date) in one SELECT statement

I need a query that will return the earliest and latest hour of the transaction for a specific day.
The issue is that I often get an earliest transaction before 5 AM, and I want to include those only if they are later than 5 AM. For the latest transaction, however, I want to include every transaction, even ones that happened before 5 AM (some shops are open overnight).
Below is what my script looks like. Is there any way to give different restrictions to how I calculate max(s.Date) and min(s.Date)? I thought of creating two select statements, but I'm not sure how to connect them within one FROM.
from (
select l.Name,
s.ShopID,
Day,
Time,
s.Date,
max(s.Date) over (partition by s.Day) as max_date ,
min(s.Date) over (partition by s.Day) as min_date
from [Shops].[Transaction].[Transactions] s
INNER JOIN [Shops].[Location].[Locations] l ON s.ShopID= l.ShopID
WHERE s.ShopID IN (1, 2, 3, 4, 5) AND Day > 20210131 AND Time <> 4
) t
You can implement this in several ways. The easiest in my opinion is to set the condition directly in the aggregate. Like:
from (
select l.Name,
s.ShopID,
Day,
Time,
s.Date,
max(s.Date) over (partition by s.Day) as max_date ,
min(IIF(DATEPART(HOUR, s.Date) > 5, s.date, NULL)) over (partition by s.Day) as min_date
from [Shops].[Transaction].[Transactions] s
INNER JOIN [Shops].[Location].[Locations] l ON s.ShopID= l.ShopID
WHERE s.ShopID IN (1, 2, 3, 4, 5) AND Day > 20210131 AND Time <> 4
) t
UPDATE: A little clarification. Only rows whose hour is greater than 5 are included in the aggregation here; for rows with an hour of 5 or less, the value is set to NULL, and MIN ignores NULLs.
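A minimal self-contained illustration of that behavior (the values below are made up):

select min(iif(datepart(hour, d) > 5, d, null)) as min_after_5,
       min(d) as min_overall,
       max(d) as max_overall
from (values (cast('2021-02-01 03:10' as datetime)),
             (cast('2021-02-01 06:45' as datetime)),
             (cast('2021-02-01 22:30' as datetime))) as v(d);
-- min_after_5 = 2021-02-01 06:45, min_overall = 2021-02-01 03:10, max_overall = 2021-02-01 22:30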

Past 7 days running amounts average as progress per each date

So the requirement is simple, but I am facing issues implementing the SQL logic. Here's the setup: suppose I have records like
Phoneno  Company  Date      Amount
83838    xyz      20210901  100
87337    abc      20210902  500
47473    cde      20210903  600
The expected output is the past 7 days' progress as a running average of amount for each date (the current date and the 6 days before it):
Date      amount  avg
20210901  100     100
20210902  500     300
20210903  600     400
I tried
Select date, amount, select
avg(lg) from (
Select case when lag(amount)
Over (order by NULL) IS NULL
THEN AMOUNT
ELSE
lag(amount)
Over (order by NULL) END AS LG)
From table
WHERE DATE>=t.date-7) as avg
From table t;
But I am getting wrong avg values. Could anyone please help?
Note: I've tried without lag too; it also gives the wrong averages.
You could use a self join to group the dates
select distinct
a.dt,
b.dt as preceding_dt, --just for QA purpose
a.amt,
b.amt as preceding_amt,--just for QA purpose
avg(b.amt) over (partition by a.dt) as avg_amt
from t a
join t b on a.dt-b.dt between 0 and 6
group by a.dt, b.dt, a.amt, b.amt; --to dedupe the data after the join
If you want to make your correlated subquery approach work, you don't really need the lag.
select dt,
amt,
(select avg(b.amt) from t b where a.dt-b.dt between 0 and 6) as avg_lg
from t a;
If you don't have multiple rows per date, this gets even simpler
select dt,
amt,
avg(amt) over (order by dt rows between 6 preceding and current row) as avg_lg
from t;
Also, the condition DATE >= t.date-7 you used is only bounded on one side, so it will qualify many dates that shouldn't have been included.
DEMO
You can use an analytic function with a windowing clause to get your results:
SELECT DISTINCT BillingDate,
AVG(amount) OVER (ORDER BY BillingDate
RANGE BETWEEN TO_DSINTERVAL('7 00:00:00') PRECEDING
AND TO_DSINTERVAL('0 00:00:00') FOLLOWING) AS RUNNING_AVG
FROM accounts
ORDER BY BillingDate;
Here is a DBFiddle showing the query in action (LINK)

adding a row for missing data

Over the date range 2017-02-01 to 2017-02-10, I'm calculating a running balance.
I have days with missing data; how would I include these missing dates with the previous day's balance?
Example data:
We are missing data for 2017-02-04, 2017-02-05 and 2017-02-06; how would I add a row in the query with the previous balance?
The date range is a parameter, so it could change.
Can I use something like the lag function?
I would be inclined to use a recursive CTE and then fill in the values. Here is one approach using outer apply:
with dates as (
select mind as dte, mind, maxd
from (select min(date) as mind, max(date) as maxd from t) t
union all
select dateadd(day, 1, dte), mind, maxd
from dates
where dte < maxd
)
select d.dte, t.balance
from dates d outer apply
(select top 1 t.*
from t
where t.date <= d.dte
order by t.date desc
) t;
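One caveat, since the date range is a parameter: a recursive CTE in SQL Server stops after 100 recursions by default, so if the span between min(date) and max(date) can exceed 100 days, append a MAXRECURSION hint to the statement:

select d.dte, t.balance
from dates d outer apply
    (select top 1 t.*
     from t
     where t.date <= d.dte
     order by t.date desc
    ) t
option (maxrecursion 0);  -- lift the default limit of 100 recursion levels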
You can generate dates using a tally table as below:
Declare @d1 date ='2017-02-01'
Declare @d2 date ='2017-02-10'
;with cte_dates as (
Select top (datediff(D, @d1, @d2)+1) Dates = Dateadd(day, Row_Number() over (order by (Select NULL))-1, @d1) from
master..spt_values s1, master..spt_values s2
)
Select * from cte_dates left join ....
Then do a left join to your table to get the running total.
Adding to the date range & CTE solutions: I have created Date Dimension tables in numerous databases and simply left join to them.
There are free scripts online to create date dimension tables for SQL Server, and I highly recommend them. A date dimension also makes aggregation by other time periods much easier (e.g. quarter, month, year).
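As a rough illustration of the idea (the table and column names below are placeholders, not taken from any particular script):

-- Minimal calendar/date-dimension sketch; real scripts add many more attributes.
create table dbo.DimDate (
    DateKey   date    primary key,
    [Year]    int     not null,
    [Quarter] tinyint not null,
    [Month]   tinyint not null,
    [Day]     tinyint not null
);

declare @d date = '2017-01-01';
while @d < '2027-01-01'
begin
    insert into dbo.DimDate (DateKey, [Year], [Quarter], [Month], [Day])
    values (@d, year(@d), datepart(quarter, @d), month(@d), day(@d));
    set @d = dateadd(day, 1, @d);
end;

-- A query can then left join dbo.DimDate to the transactional data to surface missing days.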

SQL: transposing a time series table into a start-end time table if an event occurs

I am trying to use a select statement to create a view that transposes a table keyed by datetime into a table with one record per row, giving the start and end times of each run where the consecutive values by time (partitioned by station) in the 'record' field are not 0.
Here is a sample of the initial table.
And here is how it should look after transposing.
Can anyone help?
You can use the conditional_change_event analytical function to create a special grouping identifier to split these out in a simple query:
select row_number() over () unique_id,
station,
min(datetime) startdate,
max(datetime) enddate
from (
select t.*, CONDITIONAL_CHANGE_EVENT(decode(recording,0,0,1))
over (partition by station order by datetime) chg
from mytable t
) x
where recording > 0
group by station, chg
order by 1, 2
The decode is just to set up your islands and gaps (where gaps are recording <= 0 and islands are recording > 0). Then the change event on that will generate a new identifier for grouping. Also note that I am grouping on the change event even though it isn't part of the output.
ROW_NUMBER() works well for the partitioning. Next, you can do a self join on the partitioned rows to see whether the difference between times is greater than five minutes. I think the best approach is to partition on the rolling sum of the timestamp differences, offset by 5 minutes based on your pattern. If the five-minute interval is not a regular pattern, then there is probably a generalized approach that can be used with the zeroes.
Solution written as a CTE below for easy view creation (it's a slow view though).
WITH partitioned as (
SELECT datetime, station, recording,
ROW_NUMBER() OVER(PARTITION BY station
ORDER BY datetime ASC) rn
FROM table --Not sure what the tablename is
WHERE recording != 0),
diffed as (
SELECT a.datetime, a.station,
DATEDIFF(mi, ISNULL(b.datetime, a.datetime), a.datetime) - 5 AS Difference
--The ISNULL logic is for when a.datetime is the beginning of the block,
--we want a 0
FROM partitioned a
LEFT JOIN partitioned b on a.rn = b.rn + 1 and a.station = b.station
GROUP BY a.datetime, a.station, b.datetime),
cumulative as (
SELECT a.datetime, a.station, SUM(b.Difference) offset_grouping
FROM diffed a
LEFT JOIN diffed b on a.datetime >= b.datetime and a.station = b.station
GROUP BY a.datetime, a.station),
ordered as (SELECT datetime,station,
ROW_NUMBER() OVER(PARTITION BY station,offset_grouping ORDER BY datetime asc) starter,
ROW_NUMBER() OVER(PARTITION BY station,offset_grouping ORDER BY datetime desc) ender
FROM cumulative)
SELECT ROW_NUMBER() OVER(ORDER BY a.datetime) unique_id,a.station,a.datetime startdate, b.datetime enddate
FROM ordered a
JOIN ordered b on a.starter = b.ender and a.station=b.station and a.starter=1
This is the only solution I can think of but again, it's slow depending on the amount of data you have.
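As a side note, in plain T-SQL the same start/end islands can be formed with the classic difference-of-row-numbers (gaps-and-islands) trick, which avoids both CONDITIONAL_CHANGE_EVENT and the self joins above. This is a sketch under the table and column names assumed in the answers (mytable, station, datetime, recording):

-- Consecutive rows per station where recording <> 0 share the same value of
-- rn_all - rn_val, which identifies each island.
WITH numbered AS (
    SELECT station, [datetime], recording,
           ROW_NUMBER() OVER (PARTITION BY station ORDER BY [datetime]) AS rn_all,
           ROW_NUMBER() OVER (PARTITION BY station, CASE WHEN recording <> 0 THEN 1 ELSE 0 END
                              ORDER BY [datetime]) AS rn_val
    FROM mytable
)
SELECT ROW_NUMBER() OVER (ORDER BY station, MIN([datetime])) AS unique_id,
       station,
       MIN([datetime]) AS startdate,
       MAX([datetime]) AS enddate
FROM numbered
WHERE recording <> 0
GROUP BY station, rn_all - rn_val
ORDER BY station, startdate;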

Select Consecutive Numbers in SQL

This feels simple, but I can't find an answer anywhere.
I'm trying to run a query by time of day for each hour. So I'm doing a Group By on the hour part, but not all hours have data, so there are some gaps. I'd like to display every hour, regardless of whether or not there's data.
Here's a sample query:
SELECT DATEPART(HOUR, DATEADD(HH,-5, CreationDate)) As Hour,
COUNT(*) AS Count
FROM Comments
WHERE UserId = ##UserId##
GROUP BY DATEPART(HOUR, DATEADD(HH,-5, CreationDate))
My thought was to join to a table that already had numbers 1 through 24 so that the incoming data would get put in its place.
Can I do this with a CTE?
WITH Hours AS (
SELECT i As Hour --Not Sure on this
FROM [1,2,3...24]), --Not Sure on this
CommentTimes AS (
SELECT DATEPART(HOUR, DATEADD(HH,-5, CreationDate)) AS Hour,
COUNT(*) AS Count
FROM Comments
WHERE UserId = ##UserId##
GROUP BY DATEPART(HOUR, DATEADD(HH,-5, CreationDate))
)
SELECT h.Hour, c.Count
FROM Hours h
JOIN CommentTimes c ON h.Hour = c.Hour
Here's a sample query from Stack Exchange Data Explorer.
You can use a recursive query to build up a table of whatever numbers you want. Here we stop at 24. Then left join that to your comments to ensure every hour is represented. You can turn these into times easily if you wanted. I also changed your use of hour as a column name as it is a keyword.
;with dayHours as (
select 1 as HourValue
union all select hourvalue + 1
from dayHours
where hourValue < 24
)
,
CommentTimes As (
SELECT DATEPART(HOUR, DATEADD(HH,-5, CreationDate)) As HourValue,
COUNT(*) AS Count
FROM Comments
WHERE UserId = ##UserId##
GROUP BY DATEPART(HOUR, DATEADD(HH,-5, CreationDate)))
SELECT h.HourValue, c.Count
FROM dayHours h
left JOIN CommentTimes c ON h.HourValue = c.HourValue
You can use a table value constructor:
with hours as (
SELECT hr
FROM (VALUES (1), (2), (3), (4), (5), (6), (7), (8), (9), (10), (11), (12)) AS b(hr)
)
etc..
You can also use a permanent auxiliary numbers table.
http://dataeducation.com/you-require-a-numbers-table/
Use a recursive CTE to generate the hours:
with hours as (
select 1 as hour
union all
select hour + 1
from hours
where hour < 24
)
. . .
Then your full query needs a left outer join:
with hours as (
select 1 as hour
union all
select hour + 1
from hours
where hour < 24
),
CommentTimes As (
SELECT DATEPART(HOUR, DATEADD(HH,-5, CreationDate)) As Hour,
COUNT(*) AS Count
FROM Comments
WHERE UserId = ##UserId##
GROUP BY DATEPART(HOUR, DATEADD(HH,-5, CreationDate))
)
SELECT h.Hour, c.Count
FROM Hours h LEFT OUTER JOIN
CommentTimes c
ON h.Hour = c.Hour;
Below is a demo without using a recursive CTE for SQL Server:
select h.hour ,c.count
from (
select top 24 number + 1 as hour from master..spt_values
where type = 'P'
) h
left join (
select datepart(hour, creationdate) as hour,count(1) count
from comments
where userid = '9131476'
group by datepart(hour, creationdate)
) c on h.hour = c.hour
order by h.hour;
online demo link : consecutive number query demo - Stack Exchange Data Explorer
The basic idea is correct, but you will want to perform a left join instead of a standard join, so that every hour from the left-hand side is kept even when it has no matching rows.
With respect to how to create the original hours table, you can either directly create it with something like:
SELECT 1 as hour
UNION ALL
SELECT 2 as hour
...
UNION ALL
SELECT 24 as hour
or, you can create a permanent table populated with these values. (I do not recall immediately whether SQL Server has a better way to do this, or whether selecting a value without a FROM clause is allowed. On Oracle, you could select from the built-in table 'dual', which contains a single row.)
As a more general abstraction of this issue, you can create the consecutive numbers Brad and Gordon have suggested with a recursive CTE like this:
WITH Numbers AS (
SELECT 1 AS Number
UNION ALL SELECT Number + 1
FROM Numbers
WHERE Number < 1000
)
SELECT * FROM Numbers
OPTION (MaxRecursion 0)
As a note, if you plan to go over 100 numbers, you'll need to add OPTION (MaxRecursion 0) to the end of your query to prevent the error "The maximum recursion 100 has been exhausted before statement completion".
This technique is commonly seen when populating or using a Tally Table in T-SQL.
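For reference, a common way to materialize a permanent tally table without recursion is to number the rows of a cross join of system views (the name dbo.Tally is just an example):

-- Build a 10,000-row tally table; sys.all_objects is only used as a row source.
SELECT TOP (10000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
INTO   dbo.Tally
FROM   sys.all_objects a
CROSS JOIN sys.all_objects b;

CREATE UNIQUE CLUSTERED INDEX CIX_Tally_N ON dbo.Tally (N);

-- Example: the 24 hours of a day
SELECT N AS HourValue FROM dbo.Tally WHERE N <= 24 ORDER BY N;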