A question again on cursors in SQL Server

I am reading data over Modbus. The data contains the status of 250 registers in a PLC, each either off or on, with the time of reading as the timestamp. The raw data received is stored in a table as below, where the column Register identifies the register read and the column Value gives its status as 0 or 1, with a timestamp. In the sample I am showing data for just one register (250). SlaveID identifies the PLC from which the data was obtained.
I need to populate another table, Table_signal_on_log, from the raw data table. It should contain the time at which the value changed to 1 as the start time and the time at which it changed back to 0 as the end time. This table is also given below.
I am able to do it with a cursor, but it is slow, and if the number of signals increases it could slow down the processing. How can I do this without a cursor? I tried set-based operations but couldn't get one working. I need to avoid repeated values: after recording 13:30:30 as the time the signal becomes 1, I have to ignore all entries until it becomes 0 and record that as the end time, then ignore all values until it becomes 1 again. This process runs once every 20 seconds (it can run at any interval, but presently 20), so I may have around 500 rows to loop through each time. This will grow as the number of connected PLCs increases, and the cursor operation is bound to become an issue.
Raw data table
SlaveID Register Value Timestamp ProcessTime
-------------------------------------------------------
3 250 0 13:30:10 NULL
3 250 0 13:30:20 NULL
3 250 1 13:30:30 NULL
3 250 1 13:30:40 NULL
3 250 1 13:30:50 NULL
3 250 1 13:31:00 NULL
3 250 0 13:31:10 NULL
3 250 0 13:31:20 NULL
3 250 0 13:32:30 NULL
3 250 0 13:32:40 NULL
3 250 1 13:32:50 NULL
Table_signal_on_log
SlaveID Register StartTime Endtime
3 250 13:30:30 13:31:10
3 250 13:32:50 NULL //value is still 1

This is a classic gaps-and-islands problem; there are a number of solutions. Here is one:
Get the previous Value for each row using LAG.
Filter so we only have rows where the previous Value is different or non-existent, in other words the beginning of an "island" of rows.
Of those rows, get the next Timestamp for each row using LEAD.
Filter so we only have Value = 1.
WITH cte1 AS (
    SELECT *,
        PrevValue = LAG(t.Value) OVER (PARTITION BY t.SlaveID, t.Register ORDER BY t.Timestamp)
    FROM YourTable t
),
cte2 AS (
    SELECT *,
        NextTime = LEAD(t.Timestamp) OVER (PARTITION BY t.SlaveID, t.Register ORDER BY t.Timestamp)
    FROM cte1 t
    WHERE (t.Value <> t.PrevValue OR t.PrevValue IS NULL)
)
SELECT
    t.SlaveID,
    t.Register,
    StartTime = t.Timestamp,
    Endtime = t.NextTime
FROM cte2 t
WHERE t.Value = 1;
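If you want to test this locally, a minimal setup that reproduces the sample data could look like the following (the table name YourTable is taken from the query above; the column types are an assumption):

CREATE TABLE YourTable (
    SlaveID     int      NOT NULL,
    Register    int      NOT NULL,
    [Value]     tinyint  NOT NULL,
    [Timestamp] time(0)  NOT NULL,
    ProcessTime datetime NULL
);

INSERT INTO YourTable (SlaveID, Register, [Value], [Timestamp])
VALUES (3, 250, 0, '13:30:10'),
       (3, 250, 0, '13:30:20'),
       (3, 250, 1, '13:30:30'),
       (3, 250, 1, '13:30:40'),
       (3, 250, 1, '13:30:50'),
       (3, 250, 1, '13:31:00'),
       (3, 250, 0, '13:31:10'),
       (3, 250, 0, '13:31:20'),
       (3, 250, 0, '13:32:30'),
       (3, 250, 0, '13:32:40'),
       (3, 250, 1, '13:32:50');

Running the query against this returns the two rows shown in Table_signal_on_log above.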


A follow up question on Gaps and Islands solution

This is a continuation of my previous question, A question again on cursors in SQL Server.
To reiterate, I get values from a sensor as 0 (off) or 1 (on) every 10 seconds. I need to log the on times, i.e. when the sensor value is 1, in another table.
I will process the data every minute (which means I will have 6 rows of data). I needed a way to do this without using cursors and was answered by @Charlieface.
WITH cte1 AS (
    SELECT *,
        PrevValue = LAG(t.Value) OVER (PARTITION BY t.SlaveID, t.Register ORDER BY t.Timestamp)
    FROM YourTable t
),
cte2 AS (
    SELECT *,
        NextTime = LEAD(t.Timestamp) OVER (PARTITION BY t.SlaveID, t.Register ORDER BY t.Timestamp)
    FROM cte1 t
    WHERE (t.Value <> t.PrevValue OR t.PrevValue IS NULL)
)
SELECT
    t.SlaveID,
    t.Register,
    StartTime = t.Timestamp,
    Endtime = t.NextTime
FROM cte2 t
WHERE t.Value = 1;
The raw data set and desired outcome are as below. Here Register 250 identifies the sensor, Value is the reading as 0 or 1, and Timestamp is the time of the reading.
SlaveID Register Value Timestamp ProcessTime
-------------------------------------------------------
3 250 0 13:30:10 NULL
3 250 0 13:30:20 NULL
3 250 1 13:30:30 NULL
3 250 1 13:30:40 NULL
3 250 1 13:30:50 NULL
3 250 1 13:31:00 NULL
3 250 0 13:31:10 NULL
3 250 0 13:31:20 NULL
3 250 0 13:32:30 NULL
3 250 0 13:32:40 NULL
3 250 1 13:32:50 NULL
The required entry in the logging table is:
SlaveID Register StartTime Endtime
3 250 13:30:30 13:31:10
3 250 13:32:50 NULL //value is still 1
The solution given works fine, but when the next set of data is processed, the existing open entry (end time is NULL) has to be considered.
If the next set contains only 1s (i.e. all values are 1), then no entry is to be made in the log table, since the value was 1 in the previous set and continues to be 1. When the value changes to 0 in one of the sets, the end time should be updated with that time. A fresh row is to be inserted in the log table when it becomes 1 again.
I solved the issue by using a 'hybrid'. I get 250 rows (the values of 250 sensors polled) every 10 seconds and process the data once every 180 seconds, so I have about 4500 records, which I process using the CTE. That yields a result set of around 250 records (a few more than 250 if some signals have changed state). I insert this into a #table and use a cursor on that #table to check and insert into the log table. Since the number of rows is only around 250, the cursor runs without issue.
Thanks to @Charlieface for the original answer.
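For completeness, that last cursor step can also be made set-based if it ever becomes a bottleneck. One way (a rough sketch, not the answer's code; #batch and #islands are invented names for the staging table and the CTE output): seed each batch with one synthetic reading per open log row before running the CTE, so LAG sees the prior state and a batch that starts with 1s does not produce a spurious new island. A continuation island then carries the open row's own StartTime, which reduces the merge to two plain statements:

-- Synthetic seed rows, UNION ALLed into #batch before the CTE runs:
--   SELECT SlaveID, Register, 1 AS [Value], StartTime AS [Timestamp]
--   FROM Table_signal_on_log WHERE Endtime IS NULL

-- 1. An island that continues an open row shares its StartTime with it:
--    extend or close the open row in place (a NULL Endtime keeps it open).
UPDATE l
SET    l.Endtime = i.Endtime
FROM   Table_signal_on_log AS l
JOIN   #islands AS i
  ON   i.SlaveID = l.SlaveID
  AND  i.Register = l.Register
  AND  i.StartTime = l.StartTime
WHERE  l.Endtime IS NULL;

-- 2. Every other island is a genuinely new "on" period.
INSERT INTO Table_signal_on_log (SlaveID, Register, StartTime, Endtime)
SELECT i.SlaveID, i.Register, i.StartTime, i.Endtime
FROM   #islands AS i
WHERE  NOT EXISTS (SELECT 1
                   FROM   Table_signal_on_log AS l
                   WHERE  l.SlaveID = i.SlaveID
                     AND  l.Register = i.Register
                     AND  l.StartTime = i.StartTime);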

Misleading count of 1 on JOIN in Postgres 11.7

I've run into a subtlety around count(*) and join, and am hoping to get some confirmation that I've figured out what's going on correctly. For background, we commonly convert continuous timeline data into discrete bins, such as hours. And since we don't want gaps for bins with no content, we use generate_series to synthesize the buckets we want values for. If there's no entry for, say, 10AM, fine, we still get a result. However, I noticed that I'm sometimes getting 1 instead of 0. Here's what I'm trying to confirm:
The count is 1 if you count the "grid" series, and 0 if you count the data table.
This only has to do with count, and no other aggregate.
The code below sets up some sample data to show what I'm talking about:
DROP TABLE IF EXISTS analytics.measurement_table CASCADE;
CREATE TABLE IF NOT EXISTS analytics.measurement_table (
    hour smallint NOT NULL,
    measurement smallint NOT NULL
);
INSERT INTO analytics.measurement_table (hour, measurement)
VALUES ( 0, 1),
       ( 1, 1), ( 1, 1),
       (10, 2), (10, 3), (10, 5);
Here are the goal results for the query. I'm using 12 hours to keep the example results shorter.
Hour Count sum
0 1 1
1 2 2
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0
9 0 0
10 3 10
11 0 0
12 0 0
This works correctly:
WITH hour_series AS (
select * from generate_series (0,12) AS hour
)
SELECT hour_series.hour,
count(measurement_table.hour) AS frequency,
COALESCE(sum(measurement_table.measurement), 0) AS total
FROM hour_series
LEFT JOIN measurement_table ON (measurement_table.hour = hour_series.hour)
GROUP BY 1
ORDER BY 1
This returns misleading 1's for the hours with no match:
WITH hour_series AS (
select * from generate_series (0,12) AS hour
)
SELECT hour_series.hour,
count(*) AS frequency,
COALESCE(sum(measurement_table.measurement), 0) AS total
FROM hour_series
LEFT JOIN measurement_table ON (hour_series.hour = measurement_table.hour)
GROUP BY 1
ORDER BY 1
0 1 1
1 2 2
2 1 0
3 1 0
4 1 0
5 1 0
6 1 0
7 1 0
8 1 0
9 1 0
10 3 10
11 1 0
12 1 0
The only difference between these two examples is the count term:
count(*) -- A result of 1 on no match, and a correct count otherwise.
count(joined-to table's field) -- 0 on no match, and a correct count otherwise.
That seems to be it: you've got to make it explicit that you're counting the data table. Otherwise, you get a count of 1, since the group still contains the grid row itself. Is this a nuance of joining, or a nuance of count in Postgres?
Does this impact any other aggregate? It seems like it shouldn't.
P.S. generate_series is just about the best thing ever.
You figured out the problem correctly: count() behaves differently depending on the argument it is given.
count(*) counts how many rows belong to the group. This can never be 0, since there is always at least one row in a group (otherwise, there would be no group).
On the other hand, when given a column name or expression as an argument, count() takes into account any non-null value and ignores null values. For your query, this lets you distinguish groups that have no match in the left-joined table from groups that have matches.
Note that this behavior is not Postgres-specific; it belongs to the ANSI SQL standard (all databases that I know of conform to it).
Bottom line:
in the general case, use count(*); it is more efficient, since the database does not need to check for nulls (and it makes clear to the reader that you just want to know how many rows belong to the group)
in specific cases such as yours, put the relevant expression inside count()
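The null-skipping behavior is easy to see in miniature, without any join (a throwaway VALUES list, nothing from the question assumed):

SELECT count(*) AS count_star,  -- 3: counts rows, NULL or not
       count(x) AS count_x      -- 2: the NULL is ignored
FROM (VALUES (1), (NULL), (3)) AS t(x);

On the unmatched side of a left join, every column of measurement_table is NULL, so count(measurement_table.hour) returns 0 while count(*) still sees the one joined row.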

"Sessioning" an event stream

I have a problem that should be solved outside of SQL, but due to business constraints needs to be solved within SQL.
So please don't tell me to do this at data ingestion, outside of SQL. I want to, but it's not an option...
I have a stream of events with four principal properties:
The source device
The event's timestamp
The event's "type"
The event's "payload" (a dreaded VARCHAR representing various data-types)
What I need to do is break the stream up into pieces (which I will refer to as "sessions").
Each session is specific to a device (effectively, PARTITION BY device_id)
No one session may contain more than one event of the same type
To shorten the examples, I'll limit them to include just the timestamp and the event_type...
timestamp | event_type desired_session_id
-----------+------------ --------------------
0 | 1 0
1 | 4 0
2 | 2 0
3 | 3 0
4 | 2 1
5 | 1 1
6 | 3 1
7 | 4 1
8 | 4 2
9 | 4 3
10 | 1 3
11 | 1 4
12 | 2 4
An idealised final output may be to pivot the final results...
device_id | session_id | event_type_1_timestamp | event_type_1_payload | event_type_2_timestamp | event_type_2_payload ...
(But that is not yet set in stone. I will need to "know" which events make up a session, what their timestamps are, and what their payloads are. It is possible that just appending the session_id column to the input is sufficient, as long as I don't "lose" the other properties.)
There are:
12 discrete event types
hundreds of thousands of devices
hundreds of thousands of events per device
a "norm" of around 6-8 events per "session"
but sometimes a session may have just 1 or all 12
These factors mean that half-cartesian products and the like are, umm, less than desirable, but possibly may be "the only way".
I've played (in my head) with analytic functions and gaps-and-islands type processes, but can never quite get there. I always fall back to a place where I "want" some flags that I can carry forward from row to row and reset them as needed...
Pseudo-code that doesn't work in SQL...
flags = [0,0,0,0,0,0,0,0,0,0,0,0]   // one flag per event type
session_id = 0
for each row in stream
    if flags[row.event_id] == 0 then
        flags[row.event_id] = 1
    else
        session_id++
        flags = [0,0,0,0,0,0,0,0,0,0,0,0]
        flags[row.event_id] = 1     // the colliding event opens the new session
    row.session_id = session_id
Any SQL solution to that is appreciated, but you get "bonus points" if you can also take account of events "happening at the same time"...
If multiple events happen at the same timestamp
If ANY of those events are in the "current" session
ALL of those events go in to a new session
Else
ALL of those events go in to the "current" session
If such a group of events includes the same event type multiple times
Do whatever you like
I'll have had enough by that point...
But set the session as "ambiguous" or "corrupt" with some kind of flag?
I'm not 100% sure this can be done in SQL. But I have an idea for an algorithm that might work:
enumerate the counts for each event
take the maximum count up to each point as the "grouping" for the events (this is the session)
So:
select t.*,
(max(seqnum) over (partition by device order by timestamp) - 1) as desired_session_id
from (select t.*,
row_number() over (partition by device, event_type order by timestamp) as seqnum
from t
) t;
EDIT:
This is too long for a comment. I have a sense that this requires a recursive CTE (RBAR). This is because you cannot land at a single row and look at the cumulative information or neighboring information to determine if the row should start a new session.
Of course, there are some situations where it is obvious (say, the previous row has the same event). And, it is also possible that there is some clever method of aggregating the previous data that makes it possible.
EDIT II:
I don't think this is possible without recursive CTEs (RBAR). This isn't quite a mathematical proof, but this is where my intuition comes from.
Imagine you are looking back 4 rows from the current and you have:
1
2
1
2
1 <-- current row
What is the session for this? It is not determinate. Consider:
e  s   vs   e  s
1  1        2  1   <-- row not in look back
1  2        1  1
2  2        2  2
1  3        1  2
2  3        2  3
1  4        1  3
The value depends on going further back. Obviously, this example can be extended all the way back to the first event. I don't think there is a way to "aggregate" the earlier values to distinguish between these two cases.
The problem is solvable if you can deterministically say that a given event is the start of a new session. That seems to require complete prior knowledge, at least in some cases. There are obviously cases where this is easy -- such as two events in a row. I suspect, though, that these are the "minority" of such sequences.
That said, you are not quite stuck with RBAR through the entire table, because you have device_id for parallelization. I'm not sure if your environment can do this, but in BQ or Postgres, I would:
Aggregate along each device to create an array of structs with the time and event information.
Loop through the arrays once, perhaps using custom code.
Reassign the sessions by joining back to the original table or unnesting the logic.
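A rough Postgres sketch of that recipe, for illustration only: assume a table t(device_id, ts, event_type) with event types 1..12, and a helper function (names invented here) that replays one device's events and hands back a session number per event:

-- Helper: walk one device's event types in order, resetting the "seen"
-- flags whenever a type repeats, as in the question's pseudo-code.
CREATE OR REPLACE FUNCTION session_ids(event_types int[])
RETURNS int[] LANGUAGE plpgsql IMMUTABLE AS $$
DECLARE
    seen   boolean[] := array_fill(false, ARRAY[12]);
    sess   int := 0;
    result int[] := '{}';
    e      int;
BEGIN
    FOREACH e IN ARRAY event_types LOOP
        IF seen[e] THEN                        -- collision: start a new session
            sess := sess + 1;
            seen := array_fill(false, ARRAY[12]);
        END IF;
        seen[e] := true;
        result  := result || sess;
    END LOOP;
    RETURN result;
END $$;

-- Aggregate per device, sessionise the array, then unnest back to rows.
SELECT d.device_id, u.ts, u.event_type, u.session_id
FROM (
    SELECT device_id,
           array_agg(ts ORDER BY ts)         AS tss,
           array_agg(event_type ORDER BY ts) AS types
    FROM t
    GROUP BY device_id
) AS d
CROSS JOIN LATERAL unnest(d.tss, d.types, session_ids(d.types))
           AS u(ts, event_type, session_id);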
UPD based on discussion (not checked/tested, rough idea):
WITH
trailing_events as (
    select *,
           listagg(event_type::varchar, ',') over (partition by device_id order by ts
               rows between 12 preceding and current row) as events
    from tbl
),
session_flags as (
    select *,
           f_get_session_flag(events) as session_flag
    from trailing_events
)
SELECT
    *,
    sum(session_flag::int) over (partition by device_id order by ts
                                 rows unbounded preceding) as session_id
FROM session_flags
where f_get_session_flag is
create or replace function f_get_session_flag(arr varchar(max))
returns boolean
stable as $$
    # arr is the comma-delimited trail of event types, ending at the current row
    stream = [int(x) for x in arr.split(',')]
    flags = [0] * 12            # one flag per event type (types are 1..12)
    is_new_session = False
    for event_id in stream:
        if flags[event_id - 1] == 0:
            flags[event_id - 1] = 1
            is_new_session = False
        else:
            # collision: reset the flags and start a new session with this event
            flags = [0] * 12
            flags[event_id - 1] = 1
            is_new_session = True
    return is_new_session
$$ language plpythonu;
prev answer:
The flags can be replicated as the remainder of the running count of each event type divided by 2:
1 -> 1%2 = 1
2 -> 2%2 = 0
3 -> 3%2 = 1
4 -> 4%2 = 0
5 -> 5%2 = 1
6 -> 6%2 = 0
and concatenated into a bit mask (similar to the flags array in the pseudocode). The only tricky point is exactly when to reset all flags to zeros and start a new session ID, but I could get quite close. If your sample table is called t and has ts and type columns, the script could look like this:
with
-- running count of the events
t1 as (
select
*
,sum(case when type=1 then 1 else 0 end) over (order by ts) as type_1_cnt
,sum(case when type=2 then 1 else 0 end) over (order by ts) as type_2_cnt
,sum(case when type=3 then 1 else 0 end) over (order by ts) as type_3_cnt
,sum(case when type=4 then 1 else 0 end) over (order by ts) as type_4_cnt
from t
)
-- mask
,t2 as (
select
*
,case when type_1_cnt%2=0 then '0' else '1' end ||
case when type_2_cnt%2=0 then '0' else '1' end ||
case when type_3_cnt%2=0 then '0' else '1' end ||
case when type_4_cnt%2=0 then '0' else '1' end as flags
from t1
)
-- previous row's mask
,t3 as (
select
*
,lag(flags) over (order by ts) as flags_prev
from t2
)
-- reset the mask if there is a switch from 1 to 0 at any position
,t4 as (
select *
,case
when (substring(flags from 1 for 1)='0' and substring(flags_prev from 1 for 1)='1')
or (substring(flags from 2 for 1)='0' and substring(flags_prev from 2 for 1)='1')
or (substring(flags from 3 for 1)='0' and substring(flags_prev from 3 for 1)='1')
or (substring(flags from 4 for 1)='0' and substring(flags_prev from 4 for 1)='1')
then '0000'
else flags
end as flags_override
from t3
)
-- get the previous value of the reset mask and same event type flag for corner case
,t5 as (
select *
,lag(flags_override) over (order by ts) as flags_override_prev
,type=lag(type) over (order by ts) as same_event_type
from t4
)
-- again, session ID is a switch from 1 to 0 OR same event type (that can be a switch from 0 to 1)
select
ts
,type
,sum(case
when (substring(flags_override from 1 for 1)='0' and substring(flags_override_prev from 1 for 1)='1')
or (substring(flags_override from 2 for 1)='0' and substring(flags_override_prev from 2 for 1)='1')
or (substring(flags_override from 3 for 1)='0' and substring(flags_override_prev from 3 for 1)='1')
or (substring(flags_override from 4 for 1)='0' and substring(flags_override_prev from 4 for 1)='1')
or same_event_type
then 1
else 0 end
) over (order by ts) as session_id
from t5
order by ts
;
You can add the necessary partitions and extend this to 12 event types; the code is intended to work on the sample table that you provided. It's not perfect: if you run the subqueries you'll see that the flags are reset more often than needed, but overall it works, except for the corner case of session ID 2, where a single event of type=4 follows the end of another session that also ended with type=4. So I added a simple lookup in same_event_type and used it as another condition for a new session ID. Hopefully this will work on a bigger dataset.
The solution I decided to live with is effectively "don't do it in SQL" by deferring the actual sessionising to a scalar function written in python.
--
-- The input parameter should be a comma-delimited list of identifiers.
-- Each identifier should be a "power of 2" value, no lower than 1
-- (1, 2, 4, 8, 16, 32, 64, 128, etc, etc)
--
-- The input '1,2,4,2,1,1,4' will give the output '0001010'
--
CREATE OR REPLACE FUNCTION public.f_indentify_collision_indexes(arr varchar(max))
RETURNS VARCHAR(MAX)
STABLE AS
$$
    stream = map(int, arr.split(','))
    state = 0
    collisions = []
    for item in stream:
        if (state & item) == item:
            # collision: this event type is already in the current session,
            # so this event becomes the start of a new session
            collisions.append('1')
            state = item
        else:
            state |= item
            collisions.append('0')
    return ''.join(collisions)
$$
LANGUAGE plpythonu;
NOTE : I wouldn't use this if there are hundreds of event types ;)
Effectively I pass in a data structure of events in sequence, and the return is a data structure of where the new sessions start.
I chose the actual data structures to make the SQL side of things as simple as I could. (Might not be the best; very open to other ideas.)
INSERT INTO
sessionised_event_stream
SELECT
device_id,
REGEXP_COUNT(
LEFT(
public.f_indentify_collision_indexes(
LISTAGG(event_type_id, ',')
WITHIN GROUP (ORDER BY session_event_sequence_id)
OVER (PARTITION BY device_id)
),
session_event_sequence_id::INT
),
'1',
1
) + 1
AS session_login_attempt_id,
session_event_sequence_id,
event_timestamp,
event_type_id,
event_data
FROM
(
SELECT
*,
ROW_NUMBER()
OVER (PARTITION BY device_id
ORDER BY event_timestamp, event_type_id, event_data)
AS session_event_sequence_id
FROM
event_stream
) AS ordered_events
Assert a deterministic order to the events (in case of events happening at the same time, etc.)
ROW_NUMBER() OVER (stuff) AS session_event_sequence_id
Create a comma-delimited list of event_type_ids
LISTAGG(event_type_id, ',') => '1,2,4,8,2,1,4,1,4,4,1,1'
Use Python to work out the boundaries
public.f_magic('1,2,4,8,2,1,4,1,4,4,1,1') => '000010010101'
For the first event in the sequence, count the number of 1's up to and including the first character in the 'boundaries'. For the second event in the sequence, count the number of 1's up to and including the second character in the boundaries, etc, etc.
event 01 = 1 => boundaries = '0' => session_id = 0
event 02 = 2 => boundaries = '00' => session_id = 0
event 03 = 4 => boundaries = '000' => session_id = 0
event 04 = 8 => boundaries = '0000' => session_id = 0
event 05 = 2 => boundaries = '00001' => session_id = 1
event 06 = 1 => boundaries = '000010' => session_id = 1
event 07 = 4 => boundaries = '0000100' => session_id = 1
event 08 = 1 => boundaries = '00001001' => session_id = 2
event 09 = 4 => boundaries = '000010010' => session_id = 2
event 10 = 4 => boundaries = '0000100101' => session_id = 3
event 11 = 1 => boundaries = '00001001010' => session_id = 3
event 12 = 1 => boundaries = '000010010101' => session_id = 4
REGEXP_COUNT( LEFT('000010010101', session_event_sequence_id), '1', 1 )
The result is something that's not very speedy, but it is robust and still better than the other options I've tried. What it "feels like" is that (perhaps, maybe, I'm not sure, caveat, caveat) if there are 100 items in a stream then LISTAGG() is called once and the Python UDF is called 100 times. I might be wrong. I've seen Redshift do worse things ;)
Pseudo-code for what turned out to be a worse option:
Write some SQL that can find "the next session" from any given stream.
Run that SQL once, storing the results in a temp table.
=> Now we have the first session from every stream
Run it again using the temp table as an input
=> We now also have the second session from every stream
Keep repeating this until the SQL inserts 0 rows into the temp table
=> We now have all the sessions from every stream
The time taken to calculate each session was relatively low, and was actually dominated by the overhead of making repeated requests to Redshift. It also meant that the dominant factor was "how many sessions are in the longest stream" (in my case, 0.0000001% of the streams were 1000x longer than the average).
The Python version is actually slower in most individual cases, but is not dominated by those annoying outliers. This meant that overall the Python version completed about 10x sooner than the "external loop" version described here. It also used a bucket-load more CPU resources in total, but elapsed time is the more important factor right now :)

sql to read n number of rows and display in fixed number of columns

We are currently using SQL2005. I have a SQL table that stores serial numbers in a single column; thus 10,000 serial numbers mean 10,000 rows. When these are printed on an invoice, one serial number per row is printed, due to how the information is stored. We currently use the built-in invoice in our ERP system but will change to SSRS if I can get the printing of serials sorted.
How can I read the serial numbers and display them (either in a view or an sp) maybe 10 at a time per row? Thus 18 serials would give two rows (the 1st row with 10 serials and the 2nd with 8), and 53 serials would give 6 rows. Getting this right will cut the paper needed for invoice printing to roughly a tenth of what is currently required!
Just to be clear...the serials are currently are stored and print like this :
Ser1
Ser2
Ser3
Ser4
Ser5
I would prefer them to print like this :
Ser1 Ser2 Ser3 Ser4 Ser5 Ser6 Ser7 Ser8 Ser9 Ser10
Ser11 Ser12 Ser13 Ser14 Ser15 Ser16....etc
Thanks
You can use row_number() to assign a unique number to each row. That allows you to group by rn / 10, giving you groups of 10 rows.
Here's an example with 3 columns instead of 10:
select max(case when rn % 3 = 0 then serialno end) as sn1
, max(case when rn % 3 = 1 then serialno end) as sn2
, max(case when rn % 3 = 2 then serialno end) as sn3
from (
select row_number() over (order by serialno) -1 as rn
, serialno
from #t
) as SubQueryAlias
group by
rn / 3
See it working at SQL Fiddle.
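Widened to the 10 serials per row the invoice needs, the same pattern becomes (still assuming the serials are in #t, as in the example above):

select max(case when rn % 10 = 0 then serialno end) as sn1
     , max(case when rn % 10 = 1 then serialno end) as sn2
     , max(case when rn % 10 = 2 then serialno end) as sn3
     , max(case when rn % 10 = 3 then serialno end) as sn4
     , max(case when rn % 10 = 4 then serialno end) as sn5
     , max(case when rn % 10 = 5 then serialno end) as sn6
     , max(case when rn % 10 = 6 then serialno end) as sn7
     , max(case when rn % 10 = 7 then serialno end) as sn8
     , max(case when rn % 10 = 8 then serialno end) as sn9
     , max(case when rn % 10 = 9 then serialno end) as sn10
from (
    select row_number() over (order by serialno) - 1 as rn
         , serialno
    from #t
) as SubQueryAlias
group by rn / 10
order by rn / 10;

Rows come out in serial order, and the last row simply has NULLs in any unused columns (18 serials give two rows, the second with sn9 and sn10 empty).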

create variable for unique sessions

I have some data about when, for how long, and on what channel people are listening to the radio. I need to make a variable called sessions that groups all entries which occur while the radio is on. Because the data may contain some errors, I would like to say that if less than five minutes passes from the end of one channel period to the start of the next, then it is still the same session. Hopefully a brief example will clarify.
obs Entry_date Entry_time duration(in secs) channel
1 01/01/12 23:25:21 6000 2
2 01/03/12 01:05:64 300 5
3 01/05/12 12:12:35 456 5
4 01/05/12 16:45:21 657 8
I want to create the variable sessions so that
obs Entry_date Entry_time duration(in secs) channel session
1 01/01/12 23:25:21 6000 2 1
2 01/03/12 01:05:64 300 5 1
3 01/05/12 12:12:35 456 5 2
4 01/05/12 16:45:21 657 8 3
For defining a session I need to use entry_time (and entry_date, if it goes from 11pm into the next morning), so that if entry_time + duration + 5 minutes < entry_time (of the next entry), then the session changes. This has been killing me, and simple arrays won't do the trick; at least my attempt using arrays has not worked. Thanks in advance.
Aside from the comments I made in the OP, here's how I would do it using a SAS data step. I've changed the date and time values for row 2 to what I suspect they should be (in order to get the same result as in the OP). This avoids having to perform a self join, which is likely to be performance intensive on a large dataset.
I've used the DIF and LAG functions, so care needs to be taken if you're adding in extra code (particularly IF statements).
data have;
input entry_date :mmddyy10. entry_time :time. duration channel;
format entry_date date9. entry_time time.;
datalines;
01/01/2012 23:25:21 6000 2
01/02/2012 01:05:54 300 5
01/05/2012 12:12:35 456 5
01/05/2012 16:45:21 657 8
;
run;
data want;
set have;
by entry_date entry_time; /* put in to check data is sorted correctly */
retain session 1; /* initialise session with value 1 */
session+(dif(dhms(entry_date,0,0,entry_time))-lag(duration)>300); /* increment session by 1 if time difference > 5 minutes */
run;
hopefully I got your requirements right!
Since you need to base result on adjoining rows, there is a need to join a table to itself.
The Session #s are not consecutive, but you should get the point.
create table #temp
(obs int not null,
entry_date datetime not null,
duration int not null,
channel int not null)
--obs Entry_date Entry_time duration(in secs) channel
insert #temp
select 1, '01/01/12 23:25:21', 6000, 2
union all select 2, '01/03/12 01:05:54', 300, 5
union all select 3, '01/05/12 12:12:35', 456, 5
union all select 4, '01/05/12 16:45:21', 657, 8
select a.obs,
       a.entry_date,
       a.duration,
       -- duration is in seconds, so add it with ss; the grace period is 5 minutes
       endSession = dateadd(mi, 5, dateadd(ss, a.duration, a.entry_date)),
       a.channel,
       b.entry_date,
       minOverlapping = datediff(mi, b.entry_date,
                                 dateadd(mi, 5, dateadd(ss, a.duration, a.entry_date))),
       anotherSession = case
                            when dateadd(mi, 5, dateadd(ss, a.duration, a.entry_date)) < b.entry_date
                            then b.obs
                            else a.obs
                        end
from #temp a
left join #temp b on a.obs = b.obs - 1
hope this helps a bit
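On SQL Server 2012 or later, the self-join can also be avoided with LAG and a running sum. A sketch against the same #temp table (treating duration as seconds and the grace period as 5 minutes, as above):

with ordered as (
    select *,
           -- when the previous entry finished playing
           lag(dateadd(ss, duration, entry_date)) over (order by entry_date) as prev_end
    from #temp
),
flagged as (
    select *,
           -- new session when there is no previous entry, or the gap exceeds 5 minutes
           case when prev_end is null
                  or entry_date > dateadd(mi, 5, prev_end)
                then 1 else 0 end as new_session
    from ordered
)
select obs, entry_date, duration, channel,
       sum(new_session) over (order by entry_date
                              rows unbounded preceding) as [session]
from flagged;

With the corrected row-2 date from the SAS answer, this numbers the sample rows 1, 1, 2, 3, matching the desired output.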