Oracle SQL - finding entries whose dates (start/end columns) overlap

So data is something like this:
ID | START_DATE | END_DATE | UID | CANCELED
-------------------------------------------------
44 | 2015-10-20 22:30 | 2015-10-20 23:10 | 'one' |
52 | 2015-10-20 23:00 | 2015-10-20 23:30 | 'one' |
66 | 2015-10-21 13:00 | 2015-10-20 13:30 | 'two' |
There are more than 100k of these entries.
We can see that the start_date of the second entry overlaps the end_date of the first. When dates overlap, the entry with the lower id should be marked as true in the CANCELED column.
I tried some queries, but they take so long that I'm not even sure whether they work. I also want to cover all overlapping cases, which seems to slow things down further.
I am the one responsible for inserting/updating these entries using PL/SQL:
update table set column = 'value' where ID = '44';
if sql%rowcount = 0 then
  insert into table values (...);
end if;
so I could maybe set the flag in this step. But all tables are updated/inserted through one big, dynamically created PL/SQL block in which every row either gets updated or inserted, so once again this seems slow.
And of all the SQL 'dialects', Oracle's is the most cryptic I have had the chance to work with. Ideas?
EDIT: I forgot one important detail: there is one more column (UID) that must also match; see the updated sample above.

I would start with this query:
update table t
   set cancelled = 'TRUE'
 where exists (select 1
                 from table t2
                where t.end_date > t2.start_date
                  and t.uid = t2.uid
                  and t.id < t2.id
              );
An index on table(uid, start_date, id) might help.
As a note: this is probably much easier to do when you create the table, because you can use lag().
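For illustration, a minimal sketch of that idea, assuming a table named tbl with the columns from the question and using LEAD (the forward-looking counterpart of LAG) so each row can see the next row's start_date; every object name here is hypothetical:
-- hypothetical index matching the suggestion above
create index tbl_uid_start_id on tbl (uid, start_date, id);

-- build a flagged copy: a row is canceled when the next row (by id, per uid)
-- starts before this row ends
create table tbl_flagged as
select id,
       start_date,
       end_date,
       uid,
       case
         when lead(start_date) over (partition by uid order by id) < end_date
         then 'TRUE'
       end as canceled
from   tbl;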

I think the following update should work:
update tbl
   set cancelled = 'TRUE'
 where t_id in (select t_id
                  from tbl t
                 where exists (select 1
                                 from tbl x
                                where x.t_id > t.t_id
                                  and x.start_date <= t.end_date));
Fiddle: http://sqlfiddle.com/#!4/06447/1/0
If the table is extremely large, you might be better off creating a new table with a CTAS (create table as select) query, where you can use the nologging option so that very little redo is generated; a direct-path CTAS also produces almost no undo. When you execute an update like the one you have now, Oracle writes undo (so that, prior to committing the transaction, you still have the option to roll back) as well as redo, and that adds overhead. As a result, a CTAS query with nologging might run faster. Here is one way to take that approach:
create table new_table nologging as
with sub as
  (select t_id,
          start_date,
          end_date,
          'TRUE' as cancelled
     from tbl t
    where exists (select 1
                    from tbl x
                   where x.t_id > t.t_id
                     and x.start_date <= t.end_date))
select *
  from sub
union all
select t.*
  from tbl t
  left join sub s
    on t.t_id = s.t_id
 where s.t_id is null;
Fiddle: http://sqlfiddle.com/#!4/c6a29/1

This will do the trick without a dynamic query or correlated subqueries, but it consumes some memory for the with clauses:
MERGE INTO Table1
USING
(
  with q0 as (
    select rownum fid, id, start_date from (
      select id, start_date from table1
      union all
      select 999999 id, null start_date from dual
      order by id
    )
  ), q1 as (
    select rownum fid, id, end_date from (
      select -1 id, null end_date from dual
      union all
      select id, end_date from table1
      order by id
    )
  )
  select q0.fid, q1.id, q0.start_date, q1.END_DATE,
         case when (q0.start_date < q1.END_DATE) then 1 else 0 end canceled
    from q0
    join q1
      on (q0.fid = q1.fid)
) ta ON (ta.id = Table1.id)
WHEN MATCHED THEN UPDATE
SET Table1.canceled = ta.canceled;
The inner select, aliased ta, will produce this result:
"FID"|"ID"|"START_DATE" |"END_DATE" |"CANCELED"
---------------------------------------------------------
1 |-1 |20/10/15 22:30:00| |0
2 |44 |20/10/15 23:00:00|20/10/15 23:10:00|1
3 |52 |21/10/15 13:00:00|20/10/15 23:30:00|0
4 |66 | |20/10/15 13:30:00|0
Then it is used in the merge without any correlated queries. Tested and working fine in SQL Developer.

You can use BULK COLLECT INTO and FORALL to reduce context switching within a procedure:
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE test ( ID, START_DATE, END_DATE, CANCELED ) AS
SELECT 44, TO_DATE( '2015-10-20 22:30', 'YYYY-MM-DD HH24:MI' ), TO_DATE( '2015-10-20 23:10', 'YYYY-MM-DD HH24:MI' ), 'N' FROM DUAL
UNION ALL SELECT 52, TO_DATE( '2015-10-20 23:00', 'YYYY-MM-DD HH24:MI' ), TO_DATE( '2015-10-20 23:30', 'YYYY-MM-DD HH24:MI' ), 'N' FROM DUAL
UNION ALL SELECT 66, TO_DATE( '2015-10-21 13:00', 'YYYY-MM-DD HH24:MI' ), TO_DATE( '2015-10-21 12:30', 'YYYY-MM-DD HH24:MI' ), 'N' FROM DUAL
/
CREATE PROCEDURE updateCancelled
AS
  TYPE ids_t IS TABLE OF test.id%TYPE INDEX BY PLS_INTEGER;
  t_ids ids_t;
BEGIN
  SELECT ID
  BULK COLLECT INTO t_ids
  FROM   ( SELECT ID,
                  END_DATE,
                  LEAD( START_DATE ) OVER ( ORDER BY START_DATE ) AS NEXT_START_DATE
           FROM   TEST )
  WHERE  END_DATE > NEXT_START_DATE;

  FORALL i IN 1 .. t_ids.COUNT
    UPDATE TEST
    SET    CANCELED = 'Y'
    WHERE  ID = t_ids(i);
END;
/
BEGIN
  updateCancelled();
END;
/
Query 1:
SELECT * FROM TEST
Results:
| ID | START_DATE | END_DATE | CANCELED |
|----|---------------------------|---------------------------|----------|
| 44 | October, 20 2015 22:30:00 | October, 20 2015 23:10:00 | Y |
| 52 | October, 20 2015 23:00:00 | October, 20 2015 23:30:00 | N |
| 66 | October, 21 2015 13:00:00 | October, 21 2015 12:30:00 | N |
Or as a single SQL statement:
UPDATE TEST
SET    CANCELED = 'Y'
WHERE  ID IN ( SELECT ID
               FROM   ( SELECT ID,
                               END_DATE,
                               LEAD( START_DATE )
                                 OVER ( ORDER BY START_DATE )
                                 AS NEXT_START_DATE
                        FROM   TEST )
               WHERE  END_DATE > NEXT_START_DATE )

Related

Getting the last 4 months' data from a given date column when some months' data is missing

I have the below data:
Record_date ID
28-feb-2022 xyz
31-Jan-2022 ABC
30-nov-2022 jkl
31-oct-2022 dcs
I want to get the last 3 months' data from the given date column. We don't have to consider the missing month.
Output should be:
Record_date ID
28-feb-2022 xyz
31-Jan-2022 ABC
30-nov-2022 jkl
In the last 3 months Dec is missing, but we have to ignore it as the data is not available. I tried many things but nothing worked.
Any suggestions?
Assuming you are using Oracle, you can use Oracle's ADD_MONTHS function and filter the data.
-- untested
-- Assumption: Record_date is a DATE column
SELECT *
FROM   table1
WHERE  Record_date > ADD_MONTHS(SYSDATE, -3)
To get the data for the three months that are latest in the table, you can use:
SELECT record_date,
id
FROM (
SELECT t.*,
DENSE_RANK() OVER (ORDER BY TRUNC(Record_date, 'MM') DESC) AS rnk
FROM table_name t
)
WHERE rnk <= 3;
Which, for the sample data:
CREATE TABLE table_name (Record_date, ID) AS
SELECT DATE '2022-02-28', 'xyz' FROM DUAL UNION ALL
SELECT DATE '2022-01-31', 'ABC' FROM DUAL UNION ALL
SELECT DATE '2022-11-30', 'jkl' FROM DUAL UNION ALL
SELECT DATE '2022-10-31', 'dcs' FROM DUAL;
Outputs:
RECORD_DATE         | ID
:------------------ | :--
2022-11-30 00:00:00 | jkl
2022-10-31 00:00:00 | dcs
2022-02-28 00:00:00 | xyz
db<>fiddle here

Row for each date from start date to end date

What I'm trying to do is take a record that looks like this:
Start_DT End_DT ID
4/5/2013 4/9/2013 1
and change it to look like this:
DT ID
4/5/2013 1
4/6/2013 1
4/7/2013 1
4/8/2013 1
4/9/2013 1
It can be done in Python, but I am not sure if it is possible in Oracle SQL. I am having a difficult time making this work. Any help would be appreciated.
Thanks
Use a recursive subquery-factoring clause:
WITH ranges ( start_dt, end_dt, id ) AS (
  SELECT start_dt, end_dt, id
  FROM   table_name
UNION ALL
  SELECT start_dt + INTERVAL '1' DAY, end_dt, id
  FROM   ranges
  WHERE  start_dt + INTERVAL '1' DAY <= end_dt
)
SELECT start_dt,
       id
FROM   ranges;
Which for your sample data:
CREATE TABLE table_name ( start_dt, end_dt, id ) AS
SELECT DATE '2013-04-05', DATE '2013-04-09', 1 FROM DUAL
Outputs:
START_DT | ID
:------------------ | -:
2013-04-05 00:00:00 | 1
2013-04-06 00:00:00 | 1
2013-04-07 00:00:00 | 1
2013-04-08 00:00:00 | 1
2013-04-09 00:00:00 | 1
db<>fiddle here
connect by level is useful for these problems. Suppose the first CTE, named "table_DT", is your table; then you can use the select statement that follows it.
with table_DT as (
  select
    to_date('4/5/2013', 'mm/dd/yyyy') as Start_DT,
    to_date('4/9/2013', 'mm/dd/yyyy') as End_DT,
    1 as ID
  from dual
)
select
  Start_DT + (level - 1) as DT,
  ID
from table_DT
connect by level <= End_DT - Start_DT + 1;

How to join two tables to determine date ranges when one table contains (id, start_date) and another contains (id, end_date)

I'm new to SQL; I hope you guys don't find this silly. I'm working with two tables here: one contains start dates and the other contains end dates. Entries do not follow a sequence, and duplicates are possible.
**TABLE 1**
id start_date
1 2019-04-23
1 2019-06-05
1 2019-06-05
1 2019-10-29
1 2019-12-16
2 2019-01-05
3 2020-02-01
**TABLE 2**
id end_date
1 2019-04-23
1 2019-06-05
1 2019-06-06
1 2019-06-06
1 2019-07-24
1 2019-10-16
2 2020-01-04
**EXPECTED OUTPUT**
id start_date end_date
1 2019-04-23 2019-06-05
1 2019-10-29 null
2 2019-01-05 2020-01-04
3 2020-02-01 null
You can use union all and aggregation with some window functions:
with table1 as (
  select 1 as id, date('2019-04-23') as start_date union all
  select 1, '2019-06-05' union all
  select 1, '2019-06-05' union all
  select 1, '2019-10-29' union all
  select 1, '2019-12-16' union all
  select 2, '2019-01-05' union all
  select 3, '2020-02-01'
),
table2 as (
  select 1 as id, date('2019-04-23') as end_date union all
  select 1, '2019-06-05' union all
  select 1, '2019-06-06' union all
  select 1, '2019-06-06' union all
  select 1, '2019-07-24' union all
  select 1, '2019-10-16' union all
  select 2, '2020-01-04'
)
select id, min(start_date), end_date
from (select id, start_date,
             first_value(end_date ignore nulls) over (
               partition by id
               order by DATE_DIFF(coalesce(start_date, end_date), CURRENT_DATE, day)
               RANGE between 1 following and unbounded following) as end_date
      from ((select id, start_date, null as end_date
             from table1
            ) union all
            (select id, null as start_date, end_date
             from table2
            )
           ) se
     )
group by id, end_date
having min(start_date) is not null;
Why do you have multiple records with the same id (I am assuming id is a primary key)? My suggestion would be to make the ids unique and create a foreign key constraint in the end-dates table (since there can't be an end date without a start date), then use the foreign key relationship to retrieve the desired results, e.g. SELECT s.start_date, e.end_date FROM table1 s JOIN table2 e ON s.id = e.table1_fk
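A minimal sketch of that suggestion, with hypothetical constraint and column names (it assumes table1.id has been made unique and that table2 carries a table1_fk column pointing back to it):
-- hypothetical names; table1.id must be unique before the constraints can be added
ALTER TABLE table1 ADD CONSTRAINT table1_pk PRIMARY KEY (id);

ALTER TABLE table2 ADD CONSTRAINT table2_table1_fk
  FOREIGN KEY (table1_fk) REFERENCES table1 (id);

-- then the join suggested above
SELECT s.id, s.start_date, e.end_date
FROM   table1 s
JOIN   table2 e ON e.table1_fk = s.id;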
Below is for BigQuery Standard SQL
#standardSQL
SELECT id, start_date, IF(end_date = '9999-01-01', NULL, end_date) end_date
FROM (
SELECT id, start_date, ARRAY_AGG(end_date ORDER BY end_date LIMIT 1)[OFFSET(0)] end_date
FROM (
SELECT id, start_date, IF(start_date < end_date, end_date, '9999-01-01') end_date
FROM `project.dataset.table1`
LEFT JOIN `project.dataset.table2`
USING (id)
)
GROUP BY id, start_date
)
If applied to the sample data from your question, the result is:
Row id start_date end_date
1 1 2019-04-23 2019-06-05
2 1 2019-06-05 2019-06-06
3 1 2019-10-29 null
4 1 2019-12-16 null
5 2 2019-01-05 2020-01-04
6 3 2020-02-01 null
Note: quick and not optimized, but it looks like it produces the desired result.

Count if date in date column is between start and end date [ Oracle SQL ]

This is my first post, so I hope I've posted this one correctly.
My problem:
I want to count the number of active customers per day, the last 30 days.
What I have so far:
In the first column I want to print today, and the last 29 days. This I have done with
select distinct trunc(sysdate-dayincrement, 'DD') AS DATES
from (
select level as dayincrement
from dual
connect by level <= 30
)
I picked it up here on Stack Overflow, and it works perfectly. I can even extend the number of days returned to e.g. 365 days. Perfect!
I also have a table that looks like this
|Cust# | Start date | End date |
| 1000 | 01.01.2015 | 31.12.2015|
| 1001 | 02.01.2015 | 31.12.2016|
| 1002 | 02.01.2015 | 31.03.2015|
| 1003 | 03.01.2015 | 31.08.2015|
This is where I feel the problem starts
I would like to get this result:
| Dates | # of cust |
|04.01.2015| 4 |
|03.01.2015| 4 |
|02.01.2015| 3 |
|01.01.2015| 1 |
Here the query would count 1 if:
Start date <= DATES
End date >= DATES
Else count 0.
I just don't know how to structure the query.
I tried this, but it didn't work.
count(
IF ENDDATE <= DATES THEN
IF STARTDATE >= DATES THEN 1 ELSE 0 END IF
ELSE
0
END IF
) AS CUST
Any ideas?
The following produces the results you're looking for. I had to change the date generator to start on 04-JAN-2015 instead of SYSDATE (which is, of course, in the year 2016), and to use LEVEL-1 to include the 'current' day:
WITH CUSTS AS (SELECT 1000 AS CUST_NO, TO_DATE('01-JAN-2015', 'DD-MON-YYYY') AS START_DATE, TO_DATE('31-DEC-2015', 'DD-MON-YYYY') AS END_DATE FROM DUAL UNION ALL
SELECT 1001 AS CUST_NO, TO_DATE('02-JAN-2015', 'DD-MON-YYYY') AS START_DATE, TO_DATE('31-DEC-2016', 'DD-MON-YYYY') AS END_DATE FROM DUAL UNION ALL
SELECT 1002 AS CUST_NO, TO_DATE('02-JAN-2015', 'DD-MON-YYYY') AS START_DATE, TO_DATE('31-MAR-2015', 'DD-MON-YYYY') AS END_DATE FROM DUAL UNION ALL
SELECT 1003 AS CUST_NO, TO_DATE('03-JAN-2015', 'DD-MON-YYYY') AS START_DATE, TO_DATE('31-AUG-2015', 'DD-MON-YYYY') AS END_DATE FROM DUAL ),
DATES AS (SELECT DISTINCT TRUNC(TO_DATE('04-JAN-2015', 'DD-MON-YYYY') - DAYINCREMENT, 'DD') AS DT
FROM (SELECT LEVEL-1 AS DAYINCREMENT
FROM DUAL
CONNECT BY LEVEL <= 30))
SELECT d.DT, COUNT(*)
FROM CUSTS c
CROSS JOIN DATES d
WHERE d.DT BETWEEN c.START_DATE AND c.END_DATE
GROUP BY d.DT
ORDER BY DT DESC
Best of luck.
You could write a CASE expression equivalent to your IF-ELSE construct.
For example,
SQL> SELECT COUNT(
  2           CASE
  3             WHEN hiredate <= sysdate
  4             THEN 1
  5           END ) AS CUST
  6    FROM emp;

      CUST
----------
        14

SQL>
However, looking at your desired output, it seems you just need to use COUNT and GROUP BY. The date conditions should be in the filter predicate.
For example,
SELECT dates, COUNT(*)
FROM table_name
WHERE dates BETWEEN start_date AND end_date
GROUP BY dates;

Select records all within 10 minutes from each other

I have some data coming from a source in my Oracle database.
If a particular Office_ID has been deactivated and it has all three clients (A, B, C) for a particular day, then we have to check whether all clients have gone. If yes, then we need to check whether the timeframe for all clients is within 10 minutes.
If this repeats three times in a day for a particular office, we declare the office closed.
Here is some sample data:
+-----------+-----------+--------------+--------+
| OFFICE_ID | FAIL_TIME | ACTIVITY_DAY | CLIENT |
| 1002 | 5:39:00 | 23/01/2015 | A |
| 1002 | 17:49:00 | 23/12/2014 | A |
| 1002 | 18:41:57 | 1/5/2014 | B |
| 1002 | 10:32:00 | 1/7/2014 | A |
| 1002 | 10:34:23 | 1/7/2014 | B |
| 1002 | 10:35:03 | 1/7/2014 | C |
| 1002 | 12:08:52 | 1/7/2014 | B |
| 1002 | 12:09:00 | 1/7/2014 | A |
| 1002 | 12:26:10 | 1/7/2014 | B |
| 1002 | 13:31:32 | 1/7/2014 | B |
| 1002 | 15:24:06 | 1/7/2014 | B |
| 1002 | 15:55:06 | 1/7/2014 | C |
+-----------+-----------+--------------+--------+
The result should be like this:
1002 10:32:00 A
1002 10:34:23 B
1002 10:35:03 C
Any help would be appreciated. I am looking for a SQL query or a PL/SQL procedure.
A solution using the COUNT analytic function with a RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND INTERVAL '10' MINUTE FOLLOWING that avoids self-joins:
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE Test ( OFFICE_ID, FAIL_TIME, ACTIVITY_DAY, CLIENT ) AS
SELECT 1002, '5:39:00', '23/01/2015', 'A' FROM DUAL
UNION ALL SELECT 1002, '17:49:00', '23/12/2014', 'A' FROM DUAL
UNION ALL SELECT 1002, '18:41:57', '1/5/2014', 'B' FROM DUAL
UNION ALL SELECT 1002, '10:32:00', '1/7/2014', 'A' FROM DUAL
UNION ALL SELECT 1002, '10:34:23', '1/7/2014', 'B' FROM DUAL
UNION ALL SELECT 1002, '10:35:03', '1/7/2014', 'C' FROM DUAL
UNION ALL SELECT 1002, '12:08:52', '1/7/2014', 'B' FROM DUAL
UNION ALL SELECT 1002, '12:09:00', '1/7/2014', 'A' FROM DUAL
UNION ALL SELECT 1002, '12:26:10', '1/7/2014', 'B' FROM DUAL
UNION ALL SELECT 1002, '13:31:32', '1/7/2014', 'B' FROM DUAL
UNION ALL SELECT 1002, '15:24:06', '1/7/2014', 'B' FROM DUAL
UNION ALL SELECT 1002, '15:55:06', '1/7/2014', 'C' FROM DUAL
Query 1:
WITH Times AS (
SELECT OFFICE_ID,
TO_DATE( ACTIVITY_DAY || ' ' || FAIL_TIME, 'DD/MM/YYYY HH24:MI:SS' ) AS FAIL_DATETIME,
CLIENT
FROM Test
),
Next_Times As (
SELECT OFFICE_ID,
FAIL_DATETIME,
COUNT( CASE CLIENT WHEN 'A' THEN 1 END ) OVER ( PARTITION BY OFFICE_ID ORDER BY FAIL_DATETIME RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND INTERVAL '10' MINUTE FOLLOWING ) AS COUNT_A,
COUNT( CASE CLIENT WHEN 'B' THEN 1 END ) OVER ( PARTITION BY OFFICE_ID ORDER BY FAIL_DATETIME RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND INTERVAL '10' MINUTE FOLLOWING ) AS COUNT_B,
COUNT( CASE CLIENT WHEN 'C' THEN 1 END ) OVER ( PARTITION BY OFFICE_ID ORDER BY FAIL_DATETIME RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND INTERVAL '10' MINUTE FOLLOWING ) AS COUNT_C
FROM Times
)
SELECT OFFICE_ID,
TO_CHAR( FAIL_DATETIME, 'HH24:MI:SS' ) AS FAIL_TIME,
TO_CHAR( FAIL_DATETIME, 'DD/MM/YYYY' ) AS ACTIVITY_DAY
FROM Next_Times
WHERE COUNT_A > 0
AND COUNT_B > 0
AND COUNT_C > 0
ORDER BY FAIL_DATETIME
Results:
| OFFICE_ID | FAIL_TIME | ACTIVITY_DAY |
|-----------|-----------|--------------|
| 1002 | 10:32:00 | 01/07/2014 |
| 1002 | 10:34:23 | 01/07/2014 |
| 1002 | 10:35:03 | 01/07/2014 |
To identify the records you can join the table to itself three times, like this:
SELECT a.*, b.*, c.*
FROM   FailLog a
INNER JOIN FailLog b
        ON b.OFFICE_ID = a.OFFICE_ID
       AND a.CLIENT = 'A'
       AND b.CLIENT = 'B'
       AND b.ACTIVITY_DAY = a.ACTIVITY_DAY
INNER JOIN FailLog c
        ON c.OFFICE_ID = a.OFFICE_ID
       AND c.CLIENT = 'C'
       AND c.ACTIVITY_DAY = a.ACTIVITY_DAY
       -- need to calculate the difference in minutes here
       AND GREATEST(a.FAIL_TIME, b.FAIL_TIME, c.FAIL_TIME) -
           LEAST(a.FAIL_TIME, b.FAIL_TIME, c.FAIL_TIME) <= 10
The output will give you one row instead of three as requested in the question, but that will be the right level for the fault data, as all three clients should have it.
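For the placeholder comment in that join, here is a hedged sketch of one way to compute the gap in minutes, assuming ACTIVITY_DAY and FAIL_TIME are stored as strings in the formats shown in the sample data (the same TO_DATE conversion used in the other answers); subtracting two DATE values yields days, so multiplying by 24 * 60 gives minutes:
-- possible replacement for the GREATEST/LEAST comparison above
AND ( GREATEST( TO_DATE(a.ACTIVITY_DAY || ' ' || a.FAIL_TIME, 'DD/MM/YYYY HH24:MI:SS'),
                TO_DATE(b.ACTIVITY_DAY || ' ' || b.FAIL_TIME, 'DD/MM/YYYY HH24:MI:SS'),
                TO_DATE(c.ACTIVITY_DAY || ' ' || c.FAIL_TIME, 'DD/MM/YYYY HH24:MI:SS') )
    - LEAST( TO_DATE(a.ACTIVITY_DAY || ' ' || a.FAIL_TIME, 'DD/MM/YYYY HH24:MI:SS'),
             TO_DATE(b.ACTIVITY_DAY || ' ' || b.FAIL_TIME, 'DD/MM/YYYY HH24:MI:SS'),
             TO_DATE(c.ACTIVITY_DAY || ' ' || c.FAIL_TIME, 'DD/MM/YYYY HH24:MI:SS') )
    ) * 24 * 60 <= 10

-- self-contained sanity check of the arithmetic, runnable on its own:
SELECT ( TO_DATE('1/7/2014 10:35:03', 'DD/MM/YYYY HH24:MI:SS')
       - TO_DATE('1/7/2014 10:32:00', 'DD/MM/YYYY HH24:MI:SS') ) * 24 * 60 AS diff_minutes
FROM   dual;   -- 3.05 minutes, i.e. within the 10-minute window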
The first thing we need is a way of comparing FAIL_TIME. As you haven't posted a table structure let's assume we're dealing with strings.
Oracle has some neat built-ins for casting dates and strings. If we concatenate ACTIVITY_DATE and FAIL_TIME we can convert them to a DATE data type:
to_date(ACTIVITY_DAY||' '||FAIL_TIME, 'dd/mm/yyyy hh24:mi:ss')
We can cast that to a string representing the number of seconds past midnight:
to_char(to_date(ACTIVITY_DAY||' '||FAIL_TIME, 'dd/mm/yyyy hh24:mi:ss'), 'sssss')
Then we can cast that to a number, which we can use in some arithmetic to compare with other rows; ten minutes = 600 seconds.
Next we can use subquery factoring (the WITH clause). One of the neat features of this syntax is that we can pass the output of one subquery into another one, so we only need to write that gnarly nested cast expression once.
with t as
( select OFFICE_ID
, ACTIVITY_DAY
, FAIL_TIME
, to_number(to_char(to_date(ACTIVITY_DAY||' '||FAIL_TIME, 'dd/mm/yyyy hh24:mi:ss'), 'sssss')) FAIL_TIME_SSSSS
, CLIENT
from faillog
)
We can use this sub-query to build other subqueries which separate the table's rows into sets for each CLIENT for use in our main query.
Finally we can use an analytic COUNT() function to track how many bunches of FAIL_TIME we have for each OFFICE and ACTIVITY_DATE combo.
count(*) over (partition by a.OFFICE_ID, a.ACTIVITY_DAY)
Putting it all together in an in-line view allows us to test for whether we can "declare the office as closed".
select * from (
with t as ( select OFFICE_ID
, ACTIVITY_DAY
, FAIL_TIME
, to_number(to_char(to_date(ACTIVITY_DAY||' '||FAIL_TIME, 'dd/mm/yyyy hh24:mi:ss'), 'sssss')) FAIL_TIME_SSSSS
, CLIENT
from faillog
)
, a as (select *
from t
where CLIENT = 'A' )
, b as (select *
from t
where CLIENT = 'B' )
, c as (select *
from t
where CLIENT = 'C' )
select a.OFFICE_ID
, a.ACTIVITY_DAY
, a.FAIL_TIME as a_fail_time
, b.FAIL_TIME as b_fail_time
, c.FAIL_TIME as c_fail_time
, count(*) over (partition by a.OFFICE_ID, a.ACTIVITY_DAY) as fail_count
from a
join b on a.OFFICE_ID = b.OFFICE_ID and a.ACTIVITY_DAY = b.ACTIVITY_DAY
join c on a.OFFICE_ID = c.OFFICE_ID and a.ACTIVITY_DAY = c.ACTIVITY_DAY
where a.FAIL_TIME_SSSSS between b.FAIL_TIME_SSSSS - 600 and b.FAIL_TIME_SSSSS + 600
and a.FAIL_TIME_SSSSS between c.FAIL_TIME_SSSSS - 600 and c.FAIL_TIME_SSSSS + 600
and b.FAIL_TIME_SSSSS between a.FAIL_TIME_SSSSS - 600 and a.FAIL_TIME_SSSSS + 600
and b.FAIL_TIME_SSSSS between c.FAIL_TIME_SSSSS - 600 and c.FAIL_TIME_SSSSS + 600
and c.FAIL_TIME_SSSSS between a.FAIL_TIME_SSSSS - 600 and a.FAIL_TIME_SSSSS + 600
and c.FAIL_TIME_SSSSS between b.FAIL_TIME_SSSSS - 600 and b.FAIL_TIME_SSSSS + 600
)
where fail_count >= 3
/
Notes
Obviously I have hard-coded the CLIENT identifier in the subqueries. It would be possible to avoid the hard-coding, but the sample query is already complicated enough.
This query doesn't search for triplets. Providing there is one failure for each of A, B and C within a ten-minute window, it doesn't matter how many instances of each CLIENT occur within the window. There's nothing in your business rules to say this is wrong.
Similarly, the same instance of one CLIENT can be matched with instances of other CLIENTs in overlapping windows. Now this may be undesirable: double or triple counting may inflate the FAIL_COUNT. But again, handling this will make the final query more complicated.
The query as presented has one row for each distinct combo of A, B and C FAIL_TIME values. The result set can be pivoted if you really need a row for each CLIENT/FAIL_TIME.
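If you do want a row per CLIENT again, here is a minimal sketch using UNPIVOT, assuming the final query above has been wrapped in a hypothetical view named office_fail_windows exposing the a_fail_time, b_fail_time and c_fail_time columns:
-- hypothetical view name; turns the three FAIL_TIME columns back into one row per client
SELECT office_id,
       activity_day,
       client,
       fail_time
FROM   office_fail_windows
UNPIVOT ( fail_time
          FOR client IN ( a_fail_time AS 'A',
                          b_fail_time AS 'B',
                          c_fail_time AS 'C' ) );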