Finding concurrent users in a sessions table created from log entries - google-bigquery

We are exploring using BigQuery to store and analyze hundreds of millions of log entries representing user sessions. The raw source log entries contain a "connect" log type and a "disconnect" log type.
We have the option of processing the logs before they are ingested into BigQuery so that we have one entry per session, containing the session start TIMESTAMP and a "duration" value, or of inserting each log entry individually and calculating session times at the analysis stage. Let's imagine our table schema is of the form:
sessionStartTime: TIMESTAMP,
clientId: STRING,
duration: INTEGER
or (in the case where we store two log entries per session: one connect and one disconnect):
time: TIMESTAMP,
type: INTEGER, //enum, 0 for connect, 1 for disconnect
clientId: STRING
Our problem is that we cannot find a way to get concurrent users using BigQuery: ideally we would be able to write a query that partitions the sessions table into timestamp "buckets" (let's say every minute) and gives us concurrents per minute over a certain time range.
The simple way to think about concurrents with respect to log entries is that at any moment in time they are given by f(t) = x0 + connects(t) - disconnects(t), where x0 is the initial concurrent user count (at time t0), connects(t) and disconnects(t) are the cumulative connects and disconnects up to t, and t is the "timestamp" bucket (in minutes in this example).
Can anybody recommend a way to do this?
Thanks!

Thanks for the sample data! (Available at https://bigquery.cloud.google.com/table/imgdge:sopub.sessions)
I'll take you up on the offer that "we have the option of processing the logs before they are ingested to bigquery so that we have one entry per session, containing the session start TIMESTAMP and a 'duration' value". This time, I'll do the processing with BigQuery, and leave the results in a table of my own with:
SELECT u, start, MIN(end) end
FROM (
  SELECT a.f0_ u, a.time start, b.time end
  FROM [imgdge:sopub.sessions] a
  JOIN EACH [imgdge:sopub.sessions] b
  ON a.f0_ = b.f0_
  WHERE a.type = 'connect'
  AND b.type = 'disconnect'
  AND a.time < b.time
)
GROUP BY 1, 2
That gives me 819,321 rows. Not a big number for BigQuery, but since we are going to be doing combinations of it, it might explode. We'll limit the date range for calculating the concurrent sessions to keep it sane. I'll save the results of this query to [fh-bigquery:public_dump.imgdge_sopub_sessions_startend].
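(As an aside: in today's BigQuery standard SQL the same pairing can be done with a window function instead of the self-join. A sketch, assuming connects and disconnects strictly alternate per client, and a hypothetical standard-SQL table name:)
SELECT clientId, time AS session_start, next_time AS session_end
FROM (
  SELECT clientId, type, time,
         LEAD(time) OVER (PARTITION BY clientId ORDER BY time) AS next_time
  FROM mydataset.sessions  -- hypothetical; one row per connect/disconnect entry
) AS s
WHERE type = 'connect';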
Once I have all the sessions with start and end times, I can go find how many concurrent sessions there are at each interesting instant. By minute, you said?
All the interesting minutes happen to be:
SELECT SEC_TO_TIMESTAMP(FLOOR(TIMESTAMP_TO_SEC(time)/60)*60) time
FROM [imgdge:sopub.sessions]
GROUP BY 1
Now let's combine this list of interesting times with all the sessions in my new table. For each minute we'll count all the sessions that started before that time and ended after it:
SELECT time, COUNT(*) concurrent
FROM (
  SELECT u, start, end, 99 x
  FROM [fh-bigquery:public_dump.imgdge_sopub_sessions_startend]
  WHERE start < '2013-09-30 00:00:00'
) a
JOIN (
  SELECT SEC_TO_TIMESTAMP(FLOOR(TIMESTAMP_TO_SEC(time)/60)*60) time, 99 x
  FROM [imgdge:sopub.sessions]
  GROUP BY 1
) b
ON a.x = b.x
WHERE b.time < a.end
AND b.time >= a.start
GROUP BY 1
Notice the 99 x. It could be any number; I'm just using it to generate the cross product of all the sessions with all the times. There are too many sessions for this kind of combinatorial game, so I'm limiting them with the WHERE start < '2013-09-30 00:00:00'.
And that's how you can count concurrent users.
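Footnote: the f(t) = x0 + connects(t) - disconnects(t) idea from the question also maps directly onto a running sum in today's BigQuery standard SQL. A sketch, assuming the two-rows-per-session schema from the question (table name hypothetical; minutes with no events at all would additionally need a minute spine generated with, e.g., GENERATE_TIMESTAMP_ARRAY):
SELECT minute,
       SUM(net_change) OVER (ORDER BY minute) AS concurrent
FROM (
  SELECT TIMESTAMP_TRUNC(time, MINUTE) AS minute,
         SUM(IF(type = 0, 1, -1)) AS net_change  -- 0 = connect, 1 = disconnect, per the question's enum
  FROM mydataset.sessions
  GROUP BY minute
) AS per_minute
ORDER BY minute;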

Could you, instead of sessionStartTime, store sessionEndTime (or just add duration to sessionStartTime)? If you can do that, something like this can be made. It is not perfect, but it will give you somewhat relevant data.
SELECT AVG(perMinute) as avgUsersMin FROM
(
  SELECT COUNT(DISTINCT clientId, 1000000) as perMinute,
         YEAR(sessionEndTime) as y, MONTH(sessionEndTime) as m,
         DAY(sessionEndTime) as d, HOUR(sessionEndTime) as h,
         MINUTE(sessionEndTime) as mn
  FROM [MyProject:MyTable]
  WHERE sessionEndTime BETWEEN someDate AND someOtherDate
  GROUP BY y, m, d, h, mn
);

Related

Group timestamps into sessions with a defined minimum gap between sessions

I'm trying to group my timestamp data into user sessions of various lengths where a session end is defined by a minimum time gap between sessions.
So if the threshold is e.g. 5 minutes, two timestamps with a 2 minute gap would be considered the same session, while two timestamps with a gap of 6 minutes would be considered two sessions.
I've seen several examples where people try to group into sessions of a certain length, which is kinda easy. But my case is too "online" and I can't figure out what trick to use.
I have a base query that defines the timestamps and creates sessions with 1-minute granularity, but I get one long session for each object, merging several sessions into one.
How could I split my long merged session into several ones, with a defined gap of e.g. 5 minutes?
SELECT
  count(distinct timestamps.created_min) as min_spent,
  timestamps.object_id,
  timestamps.user_id,
  min(created_min) as session_start,
  max(created_min) as session_end
FROM
(
  SELECT
    date_trunc('minute', datetime) as created_min,
    object_id,
    user_id
  FROM timestamp_metrics
  GROUP BY created_min, object_id, user_id
) as timestamps
LEFT JOIN objects o ON o.id = timestamps.object_id
LEFT JOIN users u ON u.id = timestamps.user_id
GROUP BY timestamps.object_id, timestamps.user_id
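No answer is attached here, but the standard trick is "gaps and islands": flag each timestamp whose gap to the previous one exceeds the threshold, then turn the flags into session numbers with a running sum. A sketch against the schema assumed above (timestamp_metrics(datetime, object_id, user_id)), with a 5-minute threshold:
with marked as (
  select object_id,
         user_id,
         datetime,
         -- 1 when the gap to the previous timestamp exceeds the threshold
         case when datetime - lag(datetime) over (partition by object_id, user_id order by datetime)
                   > interval '5 minutes'
              then 1 else 0 end as new_session
  from timestamp_metrics
),
numbered as (
  select *,
         -- running sum of flags = session number within each object/user
         sum(new_session) over (partition by object_id, user_id order by datetime) as session_id
  from marked
)
select object_id, user_id, session_id,
       min(datetime) as session_start,
       max(datetime) as session_end
from numbered
group by object_id, user_id, session_id;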

Analysis of sparse time series events with Spark or SQL

I have a set of status change events for users, for simplicity let's say ACTIVATED (A) and DEACTIVATED (D).
The scenario is similar to e.g. a YouTube Premium subscription, where a user might activate or deactivate their subscription multiple times. Hence, both events can occur multiple times for the same user, with multiple time points (e.g. days, months) in-between.
I want to calculate from the event history the number of users with an ACTIVATED status per month.
An example timeline could be:
t: Time point (end of month) of aggregation
u: One user
A: ACTIVATED event
D: DEACTIVATED event
t:        Jan  Feb  Mar  Apr  May
u1:        A
u2:        A    D              A
u3:             A         D    A
Expected:  2    2    2    1    3
The data itself is available in a CSV / table with columns user-id, event-type, time-stamp. For the example above the raw data would be:
user-id  event-type  time-stamp
u1       A           2020-Jan-01
u2       A           2020-Jan-15
u2       D           2020-Feb-05
u2       A           2020-May-17
u3       A           2020-Feb-04
u3       D           2020-Apr-10
u3       A           2020-May-09
Note that even though I want the count at the end of each month, the events of course do not all happen at the same time. One user could also have more than one event in the same month.
The absolute count is not problematic: "count all users whose last event is A".
The tricky thing is to calculate it for the individual months where I have no change event, e.g. Mar in the example timeline.
I cannot group by month, since in Mar no event happened, but I need to be aware that an ACTIVATION or DEACTIVATION happened at earlier time points.
I can come up with two approaches:
Calculate for each time point with an increasing partitioning window in some loop. Hence "for tCursor in Jan to May do: count all users whose last event in range 'Jan - tCursor' is ACTIVATED".
Populate the history with redundant events in the time granularity of interest with some pre-processing loop for each user. Then I can avoid the iteratively increased time window.
Both approaches seem somewhat rough (though they would work).
Is there some good alternative? Maybe some magic Spark function that I should be aware of?
Happy to get some input here. I am not 100% sure what to google for, either. I would think there might even be a name for this general issue since, as said, all on/off subscription services with sparse events should have the same problem.
Thanks
You can unpivot the data, aggregate, and use window functions:
with t as (
      select userid, 't1' as t,
             (case when t1 = 'A' then 1 else -1 end) as inc
      from t
      where t1 in ('A', 'D')
      union all
      select userid, 't2' as t,
             (case when t2 = 'A' then 1 else -1 end) as inc
      from t
      where t2 in ('A', 'D')
      union all
      . . . -- need to repeat for all times
     )
select t, sum(inc) as change_at_time,
       sum(sum(inc)) over (order by t) as active_on_day
from t
group by t
order by t;
The 't1' is whatever time is represented by the column. It might really be a number (your question is not clear on the representation of the data).
The query would be simpler if you simply had rows with userid, time, and 'A'/'D' rather than having the values splayed across many columns.
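If the data is already in that long format (as the CSV in the question suggests), a hedged sketch: aggregate the net +1/-1 change per month, left join onto a month spine so that gap months like Mar still appear, and take a running sum. Names are assumed (events(user_id, event_type, ts)); generate_series is PostgreSQL, Spark SQL would use sequence() plus explode() for the spine:
with changes as (
  select date_trunc('month', ts) as month,
         sum(case when event_type = 'A' then 1 else -1 end) as net
  from events
  group by 1
),
months as (
  -- month spine, so event-free months such as Mar still get a row
  select generate_series(min(month), max(month), interval '1 month') as month
  from changes
)
select m.month,
       sum(coalesce(c.net, 0)) over (order by m.month) as active_users
from months m
left join changes c using (month)
order by m.month;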

SQL question: count of occurrence greater than N in any given hour

I'm looking through login logs (in Netezza) and trying to find users who have greater than a certain number of logins in any 1 hour time period (any consecutive 60 minute period, as opposed to strictly a clock hour) since December 1st. I've viewed the following posts, but most seem to address searching within a specific time range, not ANY given time period. Thanks.
https://dba.stackexchange.com/questions/137660/counting-number-of-occurences-in-a-time-period
https://dba.stackexchange.com/questions/67881/calculating-the-maximum-seen-so-far-for-each-point-in-time
Count records per hour within a time span
You could use the analytic function lag to look back in a sorted sequence of time stamps to see whether the record that came 19 entries earlier is within an hour difference:
with cte as (
  select user_id,
         login_time,
         lag(login_time, 19) over (partition by user_id order by login_time) as lag_time
  from userlog
  order by user_id, login_time
)
select user_id,
       min(login_time) as login_time
from cte
where extract(epoch from (login_time - lag_time)) < 3600
group by user_id
The output will show the matching users, with the first occurrence of their twentieth login within an hour.
I think you might do something like this (I'll use a login table with user and datetime columns for the sake of simplicity):
with connections as (
  select ua.user
       , ua.datetime
  from user_logons ua
  where ua.datetime >= timestamp'2018-12-01 00:00:00'
)
select ua.user
     , ua.datetime
     , (select count(*)
        from connections ut
        where ut.user = ua.user
          and ut.datetime between ua.datetime and (ua.datetime + 1 hour)
       ) as consecutive_logons
from connections ua
It is up to you to complete it with your columns (user, datetime).
It is up to you to find the date-add facilities (ua.datetime + 1 hour won't work); this is more or less dependent on the DB implementation, for example it is DATE_ADD in MySQL (https://www.w3schools.com/SQl/func_mysql_date_add.asp).
Due to the subquery (select count(*) ...), the whole query will not be the fastest, because it is a correlated subquery: it needs to be re-evaluated for each row.
The with is simply there to compute a subset of user_logons to minimize its cost. This might not be useful; however, it lessens the complexity of the query.
You might have better performance using a stored function or a language-driven (e.g. Java, PHP, ...) function.
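As a footnote to the date-add placeholder above: Netezza descends from PostgreSQL, so the addition is typically spelled with interval arithmetic (hedged; verify on your Netezza version):
and ut.datetime between ua.datetime and ua.datetime + interval '1 hour'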

Postgres SQL select a range of records spaced out by a given interval

I am trying to determine if it is possible, using only SQL for Postgres, to select a range of time-ordered records at a given interval.
Let's say I have 60 records, one record for each minute in a given hour. I want to select records at 5-minute intervals for that hour. The resulting rows should be 12 records, each one 5 minutes apart.
This is currently accomplished by selecting the full range of records and then looping through the results and pulling out the records at the given interval. I am trying to see if I can do this purely in SQL, as our db is large and we may be dealing with tens of thousands of records.
Any thoughts?
Yes you can. It's really easy once you get the hang of it. I think it's one of the jewels of SQL, and it's especially easy in PostgreSQL because of its excellent temporal support. Often, complex functions turn into very simple queries in SQL that can scale and be indexed properly.
This uses generate_series to draw up sample time stamps that are spaced 1 minute apart. The outer query then extracts the minute and uses modulo to find the values that are 5 minutes apart.
select
ts,
extract(minute from ts)::integer as minute
from
( -- generate some time stamps - one minute apart
select
current_time + (n || ' minute')::interval as ts
from generate_series(1, 30) as n
) as timestamps
-- extract the minute check if its on a 5 minute interval
where extract(minute from ts)::integer % 5 = 0
-- only pick this hour
and extract(hour from ts) = extract(hour from current_time)
;
ts | minute
--------------------+--------
19:40:53.508836-07 | 40
19:45:53.508836-07 | 45
19:50:53.508836-07 | 50
19:55:53.508836-07 | 55
Notice how a computed index on the WHERE-clause expression (where the value of the expression would make up the index) could lead to major speed improvements. Maybe not very selective in this case, but good to be aware of.
I wrote a reservation system once in PostgreSQL (which had lots of temporal logic where date intervals could not overlap) and never had to resort to iterative methods.
http://www.amazon.com/SQL-Design-Patterns-Programming-Focus/dp/0977671542 is an excellent book that has lots of interval examples. Hard to find in book stores now, but well worth it.
Extract the minutes, convert to int4, and see, if the remainder from dividing by 5 is 0:
select *
from TABLE
where int4 (date_part ('minute', COLUMN)) % 5 = 0;
If the intervals are not time based and you just want every 5th row, or if the times are regular and you always have one record per minute, then the below gives you one record per every 5:
select *
from
(
  select *, row_number() over (order by timecolumn) as rown
  from tbl
) X
where mod(rown, 5) = 1
If your time records are not regular, then you need to generate a time series (given in another answer) and left join that into your table, group by the time column (from the series) and pick the MAX time from your table that is less than the time column.
Pseudo
select thetimeinterval, max(timecolumn)
from ( < the time series subquery > ) X
left join tbl on tbl.timecolumn <= thetimeinterval
group by thetimeinterval
And further join it back to the table for the full record (assuming unique times)
select t.* from
tbl inner join
(
select thetimeinterval, max(timecolumn) timecolumn
from ( < the time series subquery > ) X
left join tbl on tbl.timecolumn <= thetimeinterval
group by thetimeinterval
) y on tbl.timecolumn = y.timecolumn
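As one possible shape for "the time series subquery" (hypothetical names; this generates 12 stamps, 5 minutes apart, covering the current hour):
select date_trunc('hour', now()) + (n || ' minutes')::interval as thetimeinterval
from generate_series(0, 55, 5) as n;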
How about this:
select min(ts), extract(minute from ts)::integer / 5 as bucket
from your_table  -- table name assumed; the FROM clause was missing
group by bucket
order by bucket;
This has the advantage of doing the right thing if you have two readings for the same minute, or if your readings skip a minute. Instead of using min, even better would be to use one of the first() aggregate functions, code for which you can find here:
http://wiki.postgresql.org/wiki/First_%28aggregate%29
This assumes that your five minute intervals are "on the fives", so to speak. That is, that you want 07:00, 07:05, 07:10, not 07:02, 07:07, 07:12. It also assumes you don't have two rows within the same minute, which might not be a safe assumption.
select your_timestamp
from your_table
where cast(extract(minute from your_timestamp) as integer) in (0,5);
If you might have two rows with timestamps within the same minute, like
2011-01-01 07:00:02
2011-01-01 07:00:59
then this version is safer.
select min(your_timestamp)
from your_table
group by (cast(extract(minute from your_timestamp) as integer) / 5)
Wrap either of those in a view, and you can join it to your base table.

Calculating different tariff-periods for a call in SQL Server

For a call-rating system, I'm trying to split a telephone call duration into sub-durations for different tariff-periods. The calls are stored in a SQL Server database and have a starttime and total duration. Rates are different for night (0000 - 0800), peak (0800 - 1900) and offpeak (1900-235959) periods.
For example:
A call starts at 18:50:00 and has a duration of 1000 seconds. This would make the call end at 19:06:40, making it 10 minutes / 600 seconds in the peak-tariff and 400 seconds in the off-peak tariff.
Obviously, a call can wrap over an unlimited number of periods (we do not enforce a maximum call duration). A call lasting > 24 h can wrap all 3 periods, starting in peak, going through off-peak, night and back into peak tariff.
Currently, we are calculating the different tariff-periods using recursion in VB. We calculate how much of the call goes in the same tariff-period the call starts in, change the start time and duration of the call accordingly, and repeat this process till the full duration of the call has been reached (peakDuration + offpeakDuration + nightDuration == callDuration).
Regarding this issue, I have 2 questions:
Is it possible to do this effectively in a SQL Server statement? (I can think of subqueries or lots of coding in stored procedures, but that would not generate any performance improvement)
Will SQL Server be able to do such calculations in a way more resource-effective than the current VB scripts are doing it?
It seems to me that this is an operation with two phases.
Determine which parts of the phone call use which rates at which time.
Sum the times in each of the rates.
Phase 1 is trickier than Phase 2. I've worked the example in IBM Informix Dynamic Server (IDS) because I don't have MS SQL Server. The ideas should translate easily enough. The INTO TEMP clause creates a temporary table with an appropriate schema; the table is private to the session and vanishes when the session ends (or you explicitly drop it). In IDS, you can also use an explicit CREATE TEMP TABLE statement and then INSERT INTO temp-table SELECT ... as a more verbose way of doing the same job as INTO TEMP.
As so often in SQL questions on SO, you've not provided us with a schema, so everyone has to invent a schema that might, or might not, match what you describe.
Let's assume your data is in two tables. The first table has the call log records, the basic information about the calls made, such as the phone making the call, the number called, the time when the call started and the duration of the call:
CREATE TABLE clr -- call log record
(
phone_id VARCHAR(24) NOT NULL, -- billing plan
called_number VARCHAR(24) NOT NULL, -- needed to validate call
start_time TIMESTAMP NOT NULL, -- date and time when call started
duration INTEGER NOT NULL -- duration of call in seconds
CHECK(duration > 0),
PRIMARY KEY(phone_id, start_time)
-- other complicated range-based constraints omitted!
-- foreign keys omitted
-- there would probably be an auto-generated number here too.
);
INSERT INTO clr(phone_id, called_number, start_time, duration)
VALUES('650-656-3180', '650-794-3714', '2009-02-26 15:17:19', 186234);
For convenience (mainly to save writing the addition multiple times), I want a copy of the clr table with the actual end time:
SELECT phone_id, called_number, start_time AS call_start, duration,
start_time + duration UNITS SECOND AS call_end
FROM clr
INTO TEMP clr_end;
The tariff data is stored in a simple table:
CREATE TABLE tariff
(
tariff_code CHAR(1) NOT NULL -- code for the tariff
CHECK(tariff_code IN ('P','N','O'))
PRIMARY KEY,
rate_start TIME NOT NULL, -- time when rate starts
rate_end TIME NOT NULL, -- time when rate ends
rate_charged DECIMAL(7,4) NOT NULL -- rate charged (cents per second)
);
INSERT INTO tariff(tariff_code, rate_start, rate_end, rate_charged)
VALUES('N', '00:00:00', '08:00:00', 0.9876);
INSERT INTO tariff(tariff_code, rate_start, rate_end, rate_charged)
VALUES('P', '08:00:00', '19:00:00', 2.3456);
INSERT INTO tariff(tariff_code, rate_start, rate_end, rate_charged)
VALUES('O', '19:00:00', '23:59:59', 1.2345);
I debated whether the tariff table should use TIME or INTERVAL values; in this context, the times are very similar to intervals relative to midnight, but intervals can be added to timestamps where times cannot. I stuck with TIME, but it made things messy.
The tricky part of this query is generating the relevant date and time ranges for each tariff without loops. In fact, I ended up using a loop embedded in a stored procedure to generate a list of integers. (I also used a technique that is specific to IBM Informix Dynamic Server, IDS, using the table ID numbers from the system catalog as a source of contiguous integers in the range 1..N, which works for numbers from 1 to 60 in version 11.50.)
CREATE PROCEDURE integers(lo INTEGER DEFAULT 0, hi INTEGER DEFAULT 0)
RETURNING INT AS number;
DEFINE i INTEGER;
FOR i = lo TO hi STEP 1
RETURN i WITH RESUME;
END FOR;
END PROCEDURE;
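If you are porting this to an engine without such procedures, a recursive CTE can stand in for integers() (a sketch in the standard-SQL flavour; T-SQL spells it without the RECURSIVE keyword, as a later answer here does):
WITH RECURSIVE integers(number) AS (
  SELECT 0                                           -- lo
  UNION ALL
  SELECT number + 1 FROM integers WHERE number < 59  -- hi
)
SELECT number FROM integers;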
In the simple case (and the most common case), the call falls in a single-tariff period; the multi-period calls add the excitement.
Let's assume we can create a table expression that matches this schema and covers all the timestamp values we might need:
CREATE TEMP TABLE tariff_date_time
(
tariff_code CHAR(1) NOT NULL,
rate_start TIMESTAMP NOT NULL,
rate_end TIMESTAMP NOT NULL,
rate_charged DECIMAL(7,4) NOT NULL
);
Fortunately, you haven't mentioned weekend rates, so you charge the customers the same rates at the weekend as during the week. However, the answer should adapt to such situations if at all possible. If you were to get as complex as giving weekend rates on public holidays, except that at Christmas or New Year you charge peak rate instead of weekend rate because of the high demand, then you would be best off storing the rates in a permanent tariff_date_time table.
The first step in populating tariff_date_time is to generate a list of dates which are relevant to the calls:
SELECT DISTINCT EXTEND(DATE(call_start) + number, YEAR TO SECOND) AS call_date
FROM clr_end,
TABLE(integers(0, (SELECT DATE(call_end) - DATE(call_start) FROM clr_end)))
AS date_list(number)
INTO TEMP call_dates;
The difference between the two date values is an integer number of days (in IDS).
The procedure integers generates values from 0 to the number of days covered by the call and stores the result in a temp table. For the more general case of multiple records, it might be better to calculate the minimum and maximum dates and generate the dates in between rather than generate dates multiple times and then eliminate them with the DISTINCT clause.
Now use a cartesian product of the tariff table with the call_dates table to generate the rate information for each day. This is where the tariff times would be neater as intervals.
SELECT r.tariff_code,
d.call_date + (r.rate_start - TIME '00:00:00') AS rate_start,
d.call_date + (r.rate_end - TIME '00:00:00') AS rate_end,
r.rate_charged
FROM call_dates AS d, tariff AS r
INTO TEMP tariff_date_time;
Now we need to match the call log record with the tariffs that apply. The condition is a standard way of dealing with overlaps - two time periods overlap if the end of the first is later than the start of the second and if the start of the first is before the end of the second:
SELECT tdt.*, clr_end.*
FROM tariff_date_time tdt, clr_end
WHERE tdt.rate_end > clr_end.call_start
AND tdt.rate_start < clr_end.call_end
INTO TEMP call_time_tariff;
Then we need to establish the start and end times for the rate. The start time for the rate is the later of the start time for the tariff and the start time of the call. The end time for the rate is the earlier of the end time for the tariff and the end time of the call:
SELECT phone_id, called_number, tariff_code, rate_charged,
call_start, duration,
CASE WHEN rate_start < call_start THEN call_start
ELSE rate_start END AS rate_start,
CASE WHEN rate_end >= call_end THEN call_end
ELSE rate_end END AS rate_end
FROM call_time_tariff
INTO TEMP call_time_tariff_times;
Finally, we need to sum the times spent at each tariff rate, and take that time (in seconds) and multiply by the rate charged. Since the result of SUM(rate_end - rate_start) is an INTERVAL, not a number, I had to invoke a conversion function to convert the INTERVAL into a DECIMAL number of seconds, and that (non-standard) function is iv_seconds:
SELECT phone_id, called_number, tariff_code, rate_charged,
call_start, duration,
SUM(rate_end - rate_start) AS tariff_time,
rate_charged * iv_seconds(SUM(rate_end - rate_start)) AS tariff_cost
FROM call_time_tariff_times
GROUP BY phone_id, called_number, tariff_code, rate_charged,
call_start, duration;
For the sample data, this yielded the data (where I'm not printing the phone number and called number for compactness):
tariff  rate    call_start           duration  tariff_time  tariff_cost
N       0.9876  2009-02-26 15:17:19  186234    0 16:00:00   56885.760000000
O       1.2345  2009-02-26 15:17:19  186234    0 10:01:11   44529.649500000
P       2.3456  2009-02-26 15:17:19  186234    1 01:42:41   217111.081600000
That's a very expensive call, but the telco will be happy with that. You can poke at any of the intermediate results to see how the answer is derived. You can use fewer temporary tables at the cost of some clarity.
For a single call, this will not be much different than running the code in VB in the client. For a lot of calls, this has the potential to be more efficient. I'm far from convinced that recursion is necessary in VB - straight iteration should be sufficient.
-- table: kar_vasile(id, vid, datein, timein, timeout, bikari, tozihat)
-- the bikari field is unemployment (idle) time; you can remove it anywhere
select
    id,
    vid,
    datein,
    timein,
    timeout,
    bikari,
    hourwork =
        case when timein <= timeout
        then sum(abs(datediff(mi, timein, timeout)) - bikari) / 60
             -- calculate hours
        else sum(abs(datediff(mi, timein, '23:59:00') + datediff(mi, '00:00:00', timeout) + 1) - bikari) / 60
             -- calculate hours when the start time is later than the end time (spans midnight)
        end,
    minwork =
        case when timein <= timeout
        then sum(abs(datediff(mi, timein, timeout)) - bikari) % 60
             -- calculate remaining minutes
        else sum(abs(datediff(mi, timein, '23:59:00') + datediff(mi, '00:00:00', timeout) + 1) - bikari) % 60
             -- calculate remaining minutes when the start time is later than the end time
        end,
    tozihat
from kar_vasile
group by id, vid, datein, timein, timeout, tozihat, bikari
Effectively in T-SQL? I suspect not, with the schema as described at present.
It might be possible, however, if your rate table stores the three tariffs for each date. There is at least one reason why you might do this, apart from the problem at hand: it's likely at some point that rates for one period or another might change and you may need to have the historic rates available.
So say we have these tables:
CREATE TABLE rates (
from_date_time DATETIME
, to_date_time DATETIME
, rate MONEY
)
CREATE TABLE calls (
id INT
, started DATETIME
, ended DATETIME
)
I think there are three cases to consider (may be more, I'm making this up as I go):
a call occurs entirely within one rate period
a call starts in one rate period (a) and ends in the next (b)
a call spans at least one complete rate period
Assuming the rate is per second, I think you might produce something like the following (completely untested) query:
SELECT id, DATEDIFF(ss, started, ended) * rate /* case 1 */
FROM rates JOIN calls ON started > from_date_time AND ended < to_date_time
UNION
SELECT id, DATEDIFF(ss, started, to_date_time) * rate /* case 2a and the start of case 3 */
FROM rates JOIN calls ON started > from_date_time AND ended > to_date_time
UNION
SELECT id, DATEDIFF(ss, from_date_time, ended) * rate /* case 2b and the last part of case 3 */
FROM rates JOIN calls ON started < from_date_time AND ended < to_date_time
UNION
SELECT id, DATEDIFF(ss, from_date_time, to_date_time) * rate /* case 3 for entire rate periods, should pick up all complete periods */
FROM rates JOIN calls ON started < from_date_time AND ended > to_date_time
You could apply a SUM..GROUP BY over that in SQL or handle it in your code. Alternatively, with carefully-constructed logic, you could probably merge the UNIONed parts into a single WHERE clause with lots of ANDs and ORs. I thought the UNION showed the intent rather more clearly.
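For instance, the wrapping SUM..GROUP BY might look like this (sketch; each UNIONed branch would need to alias its computed column, say AS amount):
SELECT id, SUM(amount) AS call_cost
FROM (
    ... -- the four UNIONed SELECTs above, with the rate expression aliased AS amount
) AS parts
GROUP BY id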
HTH & HIW (Hope It Works...)
This is a thread about your problem that we had over at sqlteam.com. Take a look, because it includes some pretty slick solutions.
Following on from Mike Woodhouse's answer, this may work for you:
SELECT id, SUM(DATEDIFF(ss,
           CASE WHEN started < from_date_time THEN from_date_time ELSE started END,  -- clamp call start to the rate period
           CASE WHEN ended > to_date_time THEN to_date_time ELSE ended END           -- clamp call end to the rate period
       ) * rate)
FROM rates
JOIN calls
  ON started < to_date_time
 AND ended > from_date_time  -- call overlaps the rate period
GROUP BY id
An actual schema for the relevant tables in your database would have been very helpful. I'll take my best guesses. I've assumed that the Rates table has start_time and end_time as the number of minutes past midnight.
Using a calendar table (a VERY useful table to have in most databases):
SELECT
    C.id,
    R.rate,
    SUM(DATEDIFF(ss,
        CASE WHEN C.start_time < R.rate_start_time THEN R.rate_start_time
             ELSE C.start_time END,
        CASE WHEN C.end_time > R.rate_end_time THEN R.rate_end_time
             ELSE C.end_time END)) AS rated_seconds  -- alias assumed; it was missing
FROM
    Calls C
    CROSS APPLY  -- APPLY rather than JOIN: the subquery references C
    (
        SELECT
            DATEADD(mi, Rates.start_time, CAL.calendar_date) AS rate_start_time,
            DATEADD(mi, Rates.end_time, CAL.calendar_date) AS rate_end_time,
            Rates.rate
        FROM
            Calendar CAL
            CROSS JOIN Rates
        WHERE
            CAL.calendar_date >= DATEADD(dy, -1, C.start_time) AND
            CAL.calendar_date <= C.start_time
    ) AS R
WHERE
    R.rate_start_time < C.end_time AND
    R.rate_end_time > C.start_time
GROUP BY
    C.id,
    R.rate
I just came up with this as I was typing, so it's untested and you will very likely need to tweak it, but hopefully you can see the general idea.
I also just realized that you use a start_time and a duration for your calls. You can just replace C.end_time wherever you see it with DATEADD(ss, C.duration, C.start_time), assuming that the duration is in seconds.
This should perform pretty quickly in any decent RDBMS assuming proper indexes, etc.
Provided that your calls last less than 100 days:
WITH generate_range(item) AS
(
SELECT 0
UNION ALL
SELECT item + 1
FROM generate_range
WHERE item < 100
)
SELECT tday, id, span
FROM (
SELECT tday, id,
DATEDIFF(minute,
CASE WHEN tbegin < clbegin THEN clbegin ELSE tbegin END,
CASE WHEN tend < clend THEN tend ELSE clend END
) AS span
FROM (
SELECT DATEADD(day, item, DATEDIFF(day, 0, clbegin)) AS tday,
ti.id,
DATEADD(minute, rangestart, DATEADD(day, item, DATEDIFF(day, 0, clbegin))) AS tbegin,
DATEADD(minute, rangeend, DATEADD(day, item, DATEDIFF(day, 0, clbegin))) AS tend
FROM calls, generate_range, tariff ti
WHERE DATEADD(day, 1, DATEDIFF(day, 0, clend)) > DATEADD(day, item, DATEDIFF(day, 0, clbegin))
) t1
) t2
WHERE span > 0
I'm assuming you keep your tariff ranges in minutes from midnight and count lengths in minutes too.
The big problem with performing this kind of calculation at the database level is that it takes resource away from your database while it's going on, both in terms of CPU and availability of rows and tables via locking. If you were calculating 1,000,000 tariffs as part of a batch operation, then that might run on the database for a long time and during that time you'd be unable to use the database for anything else.
If you have the resource, retrieve all the data you need with one transaction and do all the logic calculations outside the database, in a language of your choice. Then insert all the results. Databases are for storing and retrieving data, and any business logic they perform should be kept to an absolute bare minimum at all times. Whilst brilliant at some things, SQL isn't the best language for date or string manipulation work.
I suspect you're already on the right lines with your VB work, and without knowing more, it certainly feels like a recursive, or at least an iterative, problem to me. When done correctly, recursion can be a powerful and elegant solution to a problem. Tying up the resources of your database very rarely is.