TSQL: Loop through a table by given time intervals and count

Source table
dbo.sourcetable
The table has more columns than shown here:
| ID | TrackingID | TrackingTime |....
|--------|----------------|-----------------------|
| 001 | 10 |2017-03-08 10:12:20.240|
| 003 | 50 |2017-03-08 12:30:23.240|
| 001 | 10 |2017-03-03 09:10:23.240|
| 002 | 10 |2017-03-06 10:12:23.240|
| 001 | 15 |2017-03-05 10:12:23.240|
| 001 | 20 |2017-03-08 17:12:23.240|
| 002 | 15 |2017-03-04 00:12:23.240|
| 003 | 10 |2017-03-06 01:18:23.240|
....
I also have a table which provides all possible TrackingIDs and their descriptions, in case that is of any use.
Query
I count the last given TrackingID for each ID within a time range with:
--Initializing---------
DECLARE @Begin datetime;
DECLARE @End datetime;
SET @Begin = '2017-03-05 00:00:00';
SET @End = '2017-03-06 00:00:00';
--Coding CTE----------
WITH CTE AS
(
SELECT *
,ROW_NUMBER() OVER (PARTITION BY ID ORDER BY TrackingTime DESC) rn
FROM dbo.sourcetable
WHERE (1 = 1)
AND (TrackingTime BETWEEN @Begin AND @End)
)
--Select CTE----------
SELECT TrackingID
,COUNT(TrackingID) AS TotalIDs
FROM CTE
WHERE (1 = 1)
AND rn = 1
GROUP BY TrackingID
ORDER BY TrackingID ASC
I receive a table which shows the total of IDs for each TrackingID:
| TrackingID | TotalIDs |
|------------------|----------------|
| 10 | 12 |
| 15 | 3 |
| 20 | 10 |
...
What I want
In order to create a chart in SSRS I want to split a time range into many intervals, then use the beginning and end of each interval as the variable input to my query, and in the end receive a table which shows the total IDs for each TrackingID in each interval.
| TrackingTime | 10 | 15 | 20 |...
|-----------------------|----------|----------|----------|
|2017-03-05 00:00:00.000| 13 | 10 | 3 |
|2017-03-05 00:00:02.000| 11 | 8 | 5 |
....
|2017-03-06 00:00:00.000| 20 | 11 | 7 |
Query
DECLARE @TotalTable TABLE
(
TrackingTime datetime PRIMARY KEY CLUSTERED
----------------------------
-- plus one column for each
-- of the TrackingIDs
----------------------------
)
WHILE @Begin <= @End
BEGIN
INSERT INTO @TotalTable (TrackingTime) VALUES (@Begin)
----------------------------------------
-- I don't know, so maybe insert
-- magic code here?!
----------------------------------------
SET @Begin = DATEADD(SS, 120, @Begin)
END;
SELECT *
FROM @TotalTable AS TT
I think I have to work with dynamic SQL somehow, as not every interval will have something to count. I am sorry if this is too much to ask. Thank you for your help.
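A rough sketch of what I imagine the final query could look like without the WHILE loop (the interval length and the TrackingID columns 10/15/20 are hard-coded here for illustration; an open-ended list of TrackingIDs would still need dynamic SQL or a dynamically built PIVOT):
DECLARE @Begin datetime = '2017-03-05 00:00:00';
DECLARE @End datetime = '2017-03-06 00:00:00';
--Build the interval starts, then take the latest row per ID inside each interval
;WITH Intervals AS
(
SELECT @Begin AS IntervalStart
UNION ALL
SELECT DATEADD(SS, 120, IntervalStart)
FROM Intervals
WHERE IntervalStart < @End
),
Ranked AS
(
SELECT i.IntervalStart, s.TrackingID
,ROW_NUMBER() OVER (PARTITION BY i.IntervalStart, s.ID ORDER BY s.TrackingTime DESC) rn
FROM Intervals i
LEFT JOIN dbo.sourcetable s
ON s.TrackingTime >= i.IntervalStart
AND s.TrackingTime < DATEADD(SS, 120, i.IntervalStart)
)
SELECT IntervalStart AS TrackingTime
,SUM(CASE WHEN TrackingID = 10 THEN 1 ELSE 0 END) AS [10]
,SUM(CASE WHEN TrackingID = 15 THEN 1 ELSE 0 END) AS [15]
,SUM(CASE WHEN TrackingID = 20 THEN 1 ELSE 0 END) AS [20]
FROM Ranked
WHERE rn = 1
GROUP BY IntervalStart
ORDER BY IntervalStart
OPTION (MAXRECURSION 0); -- more than 100 intervals, so lift the recursion limit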

Related

SQL SUM label summary

Is there any possibility in SQL to add a row with a summary, for example the sum and the mean? Something like this:
| 2021-01 | 16 |
| 2020-12 | 15 |
| -------- | -------------- |
| SUM | 31 |
| Mean | 15.5 |
My code:
proc sql;
create table diff as
select today.policy_vintage
, today.number_policy as POLICY_TODAY
, prior.number_policy as POLICY_PRIOR
, today.number_policy - prior.number_policy as DIFFERENCE
, avg(prior.number_policy) as POLICY_MEAN_PRIOR
, today.number_policy - mean(prior.number_policy) as DIFFRENCE_MEAN
from policy_vintage_weekly today
LEFT JOIN
(select *
from _work.POLICY_VINTAGE_WEEKLY
where run_date < today()
having run_date = max(run_date)
) prior
ON today.policy_vintage = prior.policy_vintage
;
quit;
If your table contains:
| Date       | value |
|------------|-------|
| 2021-01-00 | 16    |
| 2020-12-00 | 15    |
Then this query will get you the result you want:
SELECT * FROM test.test
union
select "SUM", sum(value) from test.test
union
select "Mean", avg(value) from test.test;
+------------+---------+
| date | value |
+------------+---------+
| 2021-01-00 | 16.0000 |
| 2020-12-00 | 15.0000 |
| SUM | 31.0000 |
| Mean | 15.5000 |
+------------+---------+
4 rows in set (0.000 sec)
Tested on MariaDB 10.6.4.
But having said that, this is more something that should be calculated in the client software you are using.

Calculating a running total of when a value changes over a partition

I am having trouble figuring out how to write a window function that solves my problem. I am quite the novice at window functions, but I think one could be written to meet my needs.
Problem Statement:
I want to calculate a transfer sequence showing when a person has changed locations, based on the corresponding location ID over time.
Sample Data (Table1)
+----------+------------+-----------+---------+
| PersonID | LocationID | Date | Time |
+----------+------------+-----------+---------+
| 12 | A | 6/17/2020 | 12:00PM |
+----------+------------+-----------+---------+
| 12 | A | 6/18/2020 | 1:00PM |
+----------+------------+-----------+---------+
| 12 | B | 6/18/2020 | 6:00AM |
+----------+------------+-----------+---------+
| 12 | C | 6/19/2020 | 3:00PM |
+----------+------------+-----------+---------+
| 13 | A | 6/16/2020 | 8:00AM |
+----------+------------+-----------+---------+
| 13 | A | 6/16/2020 | 11:00AM |
+----------+------------+-----------+---------+
| 13 | A | 6/16/2020 | 12:00AM |
+----------+------------+-----------+---------+
| 13 | B | 6/16/2020 | 4:00PM |
+----------+------------+-----------+---------+
Expected Results
+----------+------------+-----------+---------+-------------------+
| PersonID | LocationID | Date | Time | Transfer Sequence |
+----------+------------+-----------+---------+-------------------+
| 12 | A | 6/17/2020 | 12:00PM | 1 |
+----------+------------+-----------+---------+-------------------+
| 12 | A | 6/18/2020 | 1:00PM | 1 |
+----------+------------+-----------+---------+-------------------+
| 12 | B | 6/18/2020 | 6:00AM | 2 |
+----------+------------+-----------+---------+-------------------+
| 12 | C | 6/19/2020 | 3:00PM | 3 |
+----------+------------+-----------+---------+-------------------+
| 13 | A | 6/16/2020 | 8:00AM | 1 |
+----------+------------+-----------+---------+-------------------+
| 13 | A | 6/16/2020 | 11:00AM | 1 |
+----------+------------+-----------+---------+-------------------+
| 13 | A | 6/16/2020 | 12:00AM | 1 |
+----------+------------+-----------+---------+-------------------+
| 13 | B | 6/16/2020 | 4:00PM | 2 |
+----------+------------+-----------+---------+-------------------+
What I Tried
SELECT
[t1].[PersonID]
,[t1].[LocationID]
,[t1].[Date]
,[t1].[Time]
,DENSE_RANK()
OVER(
partition BY [t1].[PersonID], [t1].[LocationID]
ORDER BY [t1].[Date] ASC, [t1].[Time] ASC) AS
[Transfer Sequence]
FROM Table1 [t1]
Unfortunately, I believe DENSE_RANK() is assigning a rank regardless of whether the value of LocationID has changed. I need a function that will only add one to the sequence when the LocationID has changed.
Any help would be greatly appreciated.
Thank you!
You want to put "adjacent" rows in the same group. Straight window functions cannot do that for you - we need to use a gaps-and-islands technique:
select
t.*,
sum(case when locationID = lagLocationID then 0 else 1 end)
over(partition by personID order by date, time)
as transfert_sequence
from (
select
t.*,
lag(locationID)
over(partition by personID order by date, time)
as lagLocationID
from mytable t
) t
The idea is to compute a window sum that increments every time the locationID changes.
Note that this properly handles the case when a person comes back to a location they have already been to before.
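In case it is useful, here is a runnable sketch of the same query against the sample data from the question (the table variable name and the column types are my assumptions):
DECLARE @Table1 table (PersonID int, LocationID varchar(5), [Date] date, [Time] time);
INSERT INTO @Table1 VALUES
(12, 'A', '2020-06-17', '12:00'), (12, 'A', '2020-06-18', '13:00'),
(12, 'B', '2020-06-18', '06:00'), (12, 'C', '2020-06-19', '15:00'),
(13, 'A', '2020-06-16', '08:00'), (13, 'A', '2020-06-16', '11:00'),
(13, 'A', '2020-06-16', '00:00'), (13, 'B', '2020-06-16', '16:00');
select
t.*,
sum(case when locationID = lagLocationID then 0 else 1 end)
over(partition by personID order by [Date], [Time])
as transfer_sequence
from (
select
t.*,
lag(locationID)
over(partition by personID order by [Date], [Time])
as lagLocationID
from @Table1 t
) t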
What I would do (and I'm sure it's not the best way) is create a second table ordered by PersonID, LocationID, Date, Time with an empty field for the transfer sequence (sequence), then a cursor:
DECLARE @person int, @location varchar(10), @date date, @time time
DECLARE @person_saved int, @location_saved varchar(10), @count int
DECLARE transfer_cursor CURSOR
FOR select PersonID, LocationID, Date, Time from table1;
Then a loop:
OPEN transfer_cursor
set @count = 0
set @person_saved = ''
set @location_saved = ''
FETCH NEXT FROM transfer_cursor INTO @person, @location, @date, @time
WHILE @@FETCH_STATUS = 0
BEGIN
if @person_saved <> @person -- changing PersonID, reset count
begin
set @count = 0
set @person_saved = @person
set @location_saved = '' -- also forget the previous person's location
end
if @location_saved <> @location -- changing location, add to count
begin
set @count = @count + 1
set @location_saved = @location
end
update table1 set sequence = @count where PersonId = @person and locationId = @location and date = @date and time = @time
FETCH NEXT FROM transfer_cursor INTO @person, @location, @date, @time
END
CLOSE transfer_cursor
DEALLOCATE transfer_cursor

SQL: Get an aggregate (SUM) of a calculation of two fields (DATEDIFF) that has conditional logic (CASE WHEN)

I have a dataset that includes a bunch of stay data (at a hotel). Each row contains a start date and an end date, but no duration field. I need to get a sum of the durations.
Sample Data:
| Stay ID | Client ID | Start Date | End Date |
| 1 | 38 | 01/01/2018 | 01/31/2019 |
| 2 | 16 | 01/03/2019 | 01/07/2019 |
| 3 | 27 | 01/10/2019 | 01/12/2019 |
| 4 | 27 | 05/15/2019 | NULL |
| 5 | 38 | 05/17/2019 | NULL |
There are some added complications:
I am using Crystal Reports and this is a SQL Expression, which obeys slightly different rules. Basically, it returns a single scalar value. Here is some more info: http://www.cogniza.com/wordpress/2005/11/07/crystal-reports-using-sql-expression-fields/
Sometimes, the end date field is blank (they haven't booked out yet). If blank, I would like to replace it with the current timestamp.
I only want to count nights that have occurred in the past year. If the start date of a given stay is more than a year ago, I need to adjust it.
I need to get a sum by Client ID
I'm not actually any good at SQL so all I have is guesswork.
The proper syntax for a Crystal Reports SQL Expression is something like this:
(
SELECT (CASE
WHEN StayDateStart < DATEADD(year,-1,CURRENT_TIMESTAMP) THEN DATEDIFF(day,DATEADD(year,-1,CURRENT_TIMESTAMP),ISNULL(StayDateEnd,CURRENT_TIMESTAMP))
ELSE DATEDIFF(day,StayDateStart,ISNULL(StayDateEnd,CURRENT_TIMESTAMP))
END)
)
And that's giving me the correct value for a single row, if I wanted to do this:
| Stay ID | Client ID | Start Date | End Date | Duration |
| 1 | 38 | 01/01/2018 | 01/31/2019 | 210 | // only days since June 4 2018 are counted
| 2 | 16 | 01/03/2019 | 01/07/2019 | 4 |
| 3 | 27 | 01/10/2019 | 01/12/2019 | 2 |
| 4 | 27 | 05/15/2019 | NULL | 21 |
| 5 | 38 | 05/17/2019 | NULL | 19 |
But I want to get the SUM of Duration per client, so I want this:
| Stay ID | Client ID | Start Date | End Date | Duration |
| 1 | 38 | 01/01/2018 | 01/31/2019 | 229 | // 210+19
| 2 | 16 | 01/03/2019 | 01/07/2019 | 4 |
| 3 | 27 | 01/10/2019 | 01/12/2019 | 23 | // 2+21
| 4 | 27 | 05/15/2019 | NULL | 23 |
| 5 | 38 | 05/17/2019 | NULL | 229 |
I've tried to just wrap a SUM() around my CASE but that doesn't work:
(
SELECT SUM(CASE
WHEN StayDateStart < DATEADD(year,-1,CURRENT_TIMESTAMP) THEN DATEDIFF(day,DATEADD(year,-1,CURRENT_TIMESTAMP),ISNULL(StayDateEnd,CURRENT_TIMESTAMP))
ELSE DATEDIFF(day,StayDateStart,ISNULL(StayDateEnd,CURRENT_TIMESTAMP))
END)
)
It gives me an error that the StayDateEnd is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause. But I don't even know what that means, so I'm not sure how to troubleshoot, or where to go from here. And then the next step is to get the SUM by Client ID.
Any help would be greatly appreciated!
Although the explanation and the data set are almost impossible to match up, I think this is an approximation of what you want.
declare @your_data table (StayId int, ClientId int, StartDate date, EndDate date)
insert into @your_data values
(1,38,'2018-01-01','2019-01-31'),
(2,16,'2019-01-03','2019-01-07'),
(3,27,'2019-01-10','2019-01-12'),
(4,27,'2019-05-15',NULL),
(5,38,'2019-05-17',NULL)
;with data as (
select *,
datediff(day,
case
when datediff(day,StartDate,getdate())>365 then dateadd(year,-1,getdate())
else StartDate
end,
isnull(EndDate,getdate())
) days
from @your_data
)
select *,
sum(days) over (partition by ClientId)
from data
https://rextester.com/HCKOR53440
You need a subquery for the sum based on a GROUP BY on client_id, and a join between your table and the subquery, e.g.:
select Stay_id, client_id, Start_date, End_date, t.sum_duration
from your_table
inner join (
select Client_id,
SUM(CASE
WHEN StayDateStart < DATEADD(year,-1,CURRENT_TIMESTAMP) THEN DATEDIFF(day,DATEADD(year,-1,CURRENT_TIMESTAMP),ISNULL(StayDateEnd,CURRENT_TIMESTAMP))
ELSE DATEDIFF(day,StayDateStart,ISNULL(StayDateEnd,CURRENT_TIMESTAMP))
END) sum_duration
from your_table
group by Client_id
) t on t.Client_id = your_table.client_id

Add extra column in sql to show ratio with previous row

I have a SQL table with a format like this:
SELECT period_id, amount FROM table;
+--------------------+
| period_id | amount |
+-----------+--------+
| 1 | 12 |
| 2 | 11 |
| 3 | 15 |
| 4 | 20 |
| .. | .. |
+-----------+--------+
I'd like to add an extra column (just in my select statement) that calculates the growth ratio with the previous amount, like so:
SELECT period_id, amount, [insert formula here] AS growth FROM table;
+-----------------------------+
| period_id | amount | growth |
+-----------+-----------------+
| 1 | 12 | |
| 2 | 11 | 0.91 | <-- 11/12
| 3 | 15 | 1.36 | <-- 15/11
| 4 | 20 | 1.33 | <-- 20/15
| .. | .. | .. |
+-----------+-----------------+
Just need to work out how to perform the operation with the line before. Not interested in adding to the table. Any help appreciated :)
** also want to point out that period_id is in order but not necessarily increasing incrementally
The window function Lag() would be a good fit here.
You may notice that we use (amount+0.0). This is done just in case amount is an INT, and NullIf() is used to avoid the dreaded divide by zero.
Declare @YourTable table (period_id int,amount int)
Insert Into @YourTable values
( 1,12),
( 2,11),
( 3,15),
( 4,20)
Select period_id
,amount
,growth = cast((amount+0.0) / NullIf(lag(amount,1) over (Order By Period_ID),0) as decimal(10,2))
From @YourTable
Returns
period_id amount growth
1 12 NULL
2 11 0.92
3 15 1.36
4 20 1.33
If you are using SQL Server 2012+ then go for John Cappelletti's answer.
And if you are less blessed like me, then the code below works for you on the 2008 version too.
Declare @YourTable table (period_id int,amount int)
Insert Into @YourTable values
( 1,12),
( 2,11),
( 3,15),
( 4,20)
;WITH CTE AS (
SELECT ROW_NUMBER() OVER (
ORDER BY period_id
) SNO
,period_id
,amount
FROM @YourTable
)
SELECT C1.period_id
,C1.amount
,CASE
WHEN C2.amount IS NOT NULL AND C2.amount<>0
THEN CAST(C1.amount / CAST(C2.amount AS FLOAT) AS DECIMAL(18, 2))
END AS growth
FROM CTE C1
LEFT JOIN CTE C2 ON C1.SNO = C2.SNO + 1
This works the same as LAG.
+-----------+--------+--------+
| period_id | amount | growth |
+-----------+--------+--------+
| 1 | 12 | NULL |
| 2 | 11 | 0.92 |
| 3 | 15 | 1.36 |
| 4 | 20 | 1.33 |
+-----------+--------+--------+

Finding simultaneous events in a database between times

I have a database that stores phone call records. Each phone call record has a start time and an end time. I want to find out what is the maximum amount of phone calls that are simultaneously happening in order to know if we have exceed the amount of available phone lines in our phone bank. How could I go about solving this problem?
Disclaimer: I'm writing my answer based on the following (excellent) post:
https://www.itprotoday.com/sql-server/calculating-concurrent-sessions-part-3 (Parts 1 and 2 are also recommended)
The first thing to understand about this problem is that most of the solutions found on the internet can have basically two issues:
The result is not the correct answer (for example, if range A overlaps with B and C but B doesn't overlap with C, they count as 3 overlapping ranges).
The way it is computed is very inefficient (because it is O(n^2) and/or it cycles through every second in the period).
The common performance problem in solutions like the one proposed by Unreasons is that they are quadratic: for each call you need to check all the other calls to see whether they overlap.
There is a common linear algorithmic solution: list all the "events" (call start and call end) ordered by date, add 1 for a start and subtract 1 for a hang-up, and remember the max. That can be implemented easily with a cursor (the solution proposed by Hafhor seems to go that way), but cursors are not the most efficient way to solve problems.
The referenced article has excellent examples, different solutions, and a performance comparison of them. The proposed solution is:
WITH C1 AS
(
SELECT starttime AS ts, +1 AS TYPE,
ROW_NUMBER() OVER(ORDER BY starttime) AS start_ordinal
FROM Calls
UNION ALL
SELECT endtime, -1, NULL
FROM Calls
),
C2 AS
(
SELECT *,
ROW_NUMBER() OVER( ORDER BY ts, TYPE) AS start_or_end_ordinal
FROM C1
)
SELECT MAX(2 * start_ordinal - start_or_end_ordinal) AS mx
FROM C2
WHERE TYPE = 1
Explanation
suppose this set of data
+-------------------------+-------------------------+
| starttime | endtime |
+-------------------------+-------------------------+
| 2009-01-01 00:02:10.000 | 2009-01-01 00:05:24.000 |
| 2009-01-01 00:02:19.000 | 2009-01-01 00:02:35.000 |
| 2009-01-01 00:02:57.000 | 2009-01-01 00:04:04.000 |
| 2009-01-01 00:04:12.000 | 2009-01-01 00:04:52.000 |
+-------------------------+-------------------------+
This is a way to implement the same idea with a query, adding 1 for each start of a call and subtracting 1 for each ending.
SELECT starttime AS ts, +1 AS TYPE,
ROW_NUMBER() OVER(ORDER BY starttime) AS start_ordinal
FROM Calls
this part of the C1 CTE will take each starttime of each call and number it
+-------------------------+------+---------------+
| ts | TYPE | start_ordinal |
+-------------------------+------+---------------+
| 2009-01-01 00:02:10.000 | 1 | 1 |
| 2009-01-01 00:02:19.000 | 1 | 2 |
| 2009-01-01 00:02:57.000 | 1 | 3 |
| 2009-01-01 00:04:12.000 | 1 | 4 |
+-------------------------+------+---------------+
Now this code
SELECT endtime, -1, NULL
FROM Calls
Will generate all the "endtimes" without row numbering
+-------------------------+----+------+
| endtime | | |
+-------------------------+----+------+
| 2009-01-01 00:02:35.000 | -1 | NULL |
| 2009-01-01 00:04:04.000 | -1 | NULL |
| 2009-01-01 00:04:52.000 | -1 | NULL |
| 2009-01-01 00:05:24.000 | -1 | NULL |
+-------------------------+----+------+
Now making the UNION to have the full C1 CTE definition, you will have both tables mixed
+-------------------------+------+---------------+
| ts | TYPE | start_ordinal |
+-------------------------+------+---------------+
| 2009-01-01 00:02:10.000 | 1 | 1 |
| 2009-01-01 00:02:19.000 | 1 | 2 |
| 2009-01-01 00:02:57.000 | 1 | 3 |
| 2009-01-01 00:04:12.000 | 1 | 4 |
| 2009-01-01 00:02:35.000 | -1 | NULL |
| 2009-01-01 00:04:04.000 | -1 | NULL |
| 2009-01-01 00:04:52.000 | -1 | NULL |
| 2009-01-01 00:05:24.000 | -1 | NULL |
+-------------------------+------+---------------+
C2 is computed sorting and numbering C1 with a new column
C2 AS
(
SELECT *,
ROW_NUMBER() OVER( ORDER BY ts, TYPE) AS start_or_end_ordinal
FROM C1
)
+-------------------------+------+-------+--------------+
| ts | TYPE | start | start_or_end |
+-------------------------+------+-------+--------------+
| 2009-01-01 00:02:10.000 | 1 | 1 | 1 |
| 2009-01-01 00:02:19.000 | 1 | 2 | 2 |
| 2009-01-01 00:02:35.000 | -1 | NULL | 3 |
| 2009-01-01 00:02:57.000 | 1 | 3 | 4 |
| 2009-01-01 00:04:04.000 | -1 | NULL | 5 |
| 2009-01-01 00:04:12.000 | 1 | 4 | 6 |
| 2009-01-01 00:04:52.000 | -1 | NULL | 7 |
| 2009-01-01 00:05:24.000 | -1 | NULL | 8 |
+-------------------------+------+-------+--------------+
And here is where the magic occurs: at any time, the result of #start - #end is the number of concurrent calls at that moment.
For each TYPE = 1 (start event) we have the #start value in the 3rd column, and we also have #start + #end (in the 4th column):
#start_or_end = #start + #end
#end = (#start_or_end - #start)
#start - #end = #start - (#start_or_end - #start)
#start - #end = 2 * #start - #start_or_end
so in SQL:
SELECT MAX(2 * start_ordinal - start_or_end_ordinal) AS mx
FROM C2
WHERE TYPE = 1
In this case, with the proposed set of calls, the result is 2.
In the referenced article there is a little improvement to get the result grouped by, for example, a service or a "phone company" or "phone central", and the same idea can also be used to group by time slot and get the maximum concurrency hour by hour in a given day.
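For instance, here is a rough sketch of the time-slot idea (my own adaptation, not taken from the article), assuming the same Calls table: truncate each event timestamp to the hour and take the MAX per hour. Concurrency is only sampled at start events here, so an hour with no new calls simply produces no row.
WITH C1 AS
(
SELECT starttime AS ts, +1 AS TYPE,
ROW_NUMBER() OVER(ORDER BY starttime) AS start_ordinal
FROM Calls
UNION ALL
SELECT endtime, -1, NULL
FROM Calls
),
C2 AS
(
SELECT *,
ROW_NUMBER() OVER( ORDER BY ts, TYPE) AS start_or_end_ordinal
FROM C1
)
SELECT DATEADD(hour, DATEDIFF(hour, 0, ts), 0) AS hour_slot, -- truncate to the hour
MAX(2 * start_ordinal - start_or_end_ordinal) AS max_concurrent
FROM C2
WHERE TYPE = 1
GROUP BY DATEADD(hour, DATEDIFF(hour, 0, ts), 0)
ORDER BY hour_slot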
Given the fact that the maximum number of connections is going to occur at a StartTime point, you can:
SELECT TOP 1 count(*) as CountSimultaneous
FROM PhoneCalls T1, PhoneCalls T2
WHERE
T1.StartTime between T2.StartTime and T2.EndTime
GROUP BY
T1.CallID
ORDER BY CountSimultaneous DESC
The query will return for each call the number of simultaneous calls. Either order them descending and select the first one, or SELECT MAX(CountSimultaneous) from the above (as a subquery, without ordering and without TOP).
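For example, the MAX variant could look roughly like this (a CallID column is assumed, as in the query above):
SELECT MAX(CountSimultaneous) AS MaxSimultaneous
FROM (
SELECT COUNT(*) AS CountSimultaneous
FROM PhoneCalls T1, PhoneCalls T2
WHERE T1.StartTime between T2.StartTime and T2.EndTime
GROUP BY T1.CallID
) x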
try this:
DECLARE @Calls table (callid int identity(1,1), starttime datetime, endtime datetime)
INSERT @Calls (starttime,endtime) values ('6/12/2010 10:10am','6/12/2010 10:15am')
INSERT @Calls (starttime,endtime) values ('6/12/2010 11:10am','6/12/2010 10:25am')
INSERT @Calls (starttime,endtime) values ('6/12/2010 12:10am','6/12/2010 01:15pm')
INSERT @Calls (starttime,endtime) values ('6/12/2010 11:10am','6/12/2010 10:35am')
INSERT @Calls (starttime,endtime) values ('6/12/2010 12:10am','6/12/2010 12:15am')
INSERT @Calls (starttime,endtime) values ('6/12/2010 10:10am','6/12/2010 10:15am')
DECLARE @StartDate datetime
,@EndDate datetime
SELECT @StartDate='6/12/2010'
,@EndDate='6/13/2010'
;with AllDates AS
(
SELECT @StartDate AS DateOf
UNION ALL
SELECT DATEADD(second,1,DateOf) AS DateOf
FROM AllDates
WHERE DateOf<@EndDate
)
SELECT
a.DateOf,COUNT(c.callid) AS CountOfCalls
FROM AllDates a
INNER JOIN @Calls c ON a.DateOf>=c.starttime and a.DateOf<=c.endtime
GROUP BY a.DateOf
ORDER BY 2 DESC
OPTION (MAXRECURSION 0)
OUTPUT:
DateOf CountOfCalls
----------------------- ------------
2010-06-12 10:10:00.000 3
2010-06-12 10:10:01.000 3
2010-06-12 10:10:02.000 3
2010-06-12 10:10:03.000 3
2010-06-12 10:10:04.000 3
2010-06-12 10:10:05.000 3
2010-06-12 10:10:06.000 3
2010-06-12 10:10:07.000 3
2010-06-12 10:10:08.000 3
2010-06-12 10:10:09.000 3
2010-06-12 10:10:10.000 3
2010-06-12 10:10:11.000 3
2010-06-12 10:10:12.000 3
2010-06-12 10:10:13.000 3
2010-06-12 10:10:14.000 3
2010-06-12 10:10:15.000 3
2010-06-12 10:10:16.000 3
2010-06-12 10:10:17.000 3
2010-06-12 10:10:18.000 3
2010-06-12 10:10:19.000 3
2010-06-12 10:10:20.000 3
2010-06-12 10:10:21.000 3
2010-06-12 10:10:22.000 3
2010-06-12 10:10:23.000 3
2010-06-12 10:10:24.000 3
2010-06-12 10:10:25.000 3
2010-06-12 10:10:26.000 3
2010-06-12 10:10:27.000 3
....
Add a TOP 1, or put this query in a derived table and further aggregate it if necessary.
SELECT COUNT(*) FROM calls
WHERE '2010-06-15 15:00:00' BETWEEN calls.starttime AND calls.endtime
and repeat this for every second.
The only practical method I can think of is as follows:
Split the period you want to analyze into arbitrary "buckets", say, 24 one-hour buckets over the day. For each bucket, count how many calls either started or finished between the start and the end of the interval.
Note that the 1-hour limit is not a hard-and-fast rule. You could make this shorter or longer, depending on how precise you want the calculation to be.
You could make the actual "length" of the bucket a function of the average call duration.
So, let's assume that your average call is 3 minutes. If it is not too expensive in terms of calculations, use buckets that are 3 times longer than your average call (9 minutes); this should be granular enough to give precise results.
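A minimal sketch of the bucket idea, assuming a Calls table with starttime/endtime columns and one-hour buckets over a single day (table, column and variable names are illustrative); this version counts every call that overlaps a bucket at all, including calls that span the whole bucket:
DECLARE @DayStart datetime
SET @DayStart = '2010-06-12'
;with Buckets AS
(
SELECT @DayStart AS BucketStart, DATEADD(hour, 1, @DayStart) AS BucketEnd
UNION ALL
SELECT BucketEnd, DATEADD(hour, 1, BucketEnd)
FROM Buckets
WHERE BucketEnd < DATEADD(day, 1, @DayStart)
)
SELECT
b.BucketStart, COUNT(c.starttime) AS CallsInBucket -- calls overlapping this bucket
FROM Buckets b
LEFT JOIN Calls c ON c.starttime < b.BucketEnd AND c.endtime >= b.BucketStart
GROUP BY b.BucketStart
ORDER BY b.BucketStart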
-- assuming a calls table with columns starttime and endtime
declare @s datetime, @e datetime;
declare @t table(d datetime); -- one row per "phone line" in use; d = when that line becomes free
declare c cursor for select starttime,endtime from calls order by starttime;
open c
while(1=1) begin
fetch next from c into @s,@e
if @@FETCH_STATUS<>0 break;
-- reuse a line that is already free (its last call ended at or before this call starts)...
update top(1) @t set d=@e where d<=@s;
-- ...otherwise open a new line
if @@ROWCOUNT=0 insert @t(d) values(@e);
end
close c
deallocate c
-- the number of lines used is the maximum number of concurrent calls
select COUNT(*) as MaxConcurrentCalls from @t