Monthly Snapshot using Date Dimension (SQL)

I have some columns that I need to bring into a consolidated (fact) table. I have a change-capture table that captures changes to records every day, which looks like this:
CHG_TABLE:
+-----+-----------------+------------------+-------+-------------+
| Key | Start_Date      | End_Date         | Value | Record_Type |
+-----+-----------------+------------------+-------+-------------+
| 1   | 5/25/2019 2.05  | 12/31/9999 00.00 | 800   | Insert      |
| 1   | 5/25/2019 2.05  | 5/31/2019 11.12  | 800   | Update      |
| 1   | 5/31/2019 11.12 | 12/31/9999 00.00 | 900   | Insert      |
| 1   | 5/31/2019 11.12 | 6/15/2019 12.05  | 900   | Update      |
| 1   | 6/15/2019 12.05 | 12/31/9999 00.00 | 1000  | Insert      |
| 1   | 6/15/2019 12.05 | 6/25/2019 10.20  | 1000  | Update      |
| 1   | 6/25/2019 10.20 | 12/31/9999 00.00 | 500   | Insert      |
| 1   | 6/25/2019 10.20 | 6/30/2019 11.12  | 500   | Update      |
| 1   | 6/30/2019 11.12 | 12/31/9999 00.00 | 3000  | Insert      |
| 1   | 6/30/2019 11.12 | 7/15/2019 1.20   | 3000  | Update      |
| 1   | 7/15/2019 1.20  | 12/31/9999 00.00 | 7000  | Insert      |
+-----+-----------------+------------------+-------+-------------+
During the first insert, End_Date is the end of time. When a new record with a new Start_Date and Value arrives from the source, it is captured as a new entry, and the previous record with the same Key is updated so that its End_Date equals the Start_Date of the new record.
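(As a side note on that convention: the current value of each Key can be read straight off the single open-ended row. A minimal sketch, assuming the end-of-time sentinel is exactly 12/31/9999 00.00:)

```sql
-- Sketch only: the "current" value per Key is the row whose End_Date
-- still holds the end-of-time sentinel.
SELECT key, value
FROM   chg_table
WHERE  end_date = TO_DATE('12/31/9999 00.00', 'MM/DD/YYYY HH24.MI');
```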
DIM_DATE:
+---------+------------------+----------------+
| DateKey | Month_Start_Date | Month_End_Date |
+---------+------------------+----------------+
| 1       | 6/1/2019         | 6/30/2019      |
| 2       | 7/1/2019         | 7/31/2019      |
+---------+------------------+----------------+
I am struggling because I am using a DATE dimension that has only Month_Start_Date and Month_End_Date.
I want to create a monthly snapshot from this change table that would look like this:
RESULT:
+-----+------------------+----------------+-------------+-----------+
| Key | Month_Start_Date | Month_End_Date | Begin_Value | End_Value |
+-----+------------------+----------------+-------------+-----------+
| 1   | 6/1/2019         | 6/30/2019      | 800         | 500       |
| 1   | 7/1/2019         | 7/31/2019      | 500         | 3000      |
+-----+------------------+----------------+-------------+-----------+
Begin_Value: the Value of the row with Max(End_Date) < Month_Start_Date
End_Value: the Value of the row with Max(End_Date) <= Month_End_Date
The Begin_Value should be the most recent value from the previous month (excluding the end-of-time row), and the End_Value should be the most recent value as of Month_End_Date.
How can I produce the above result?

I think you should rethink your logic a bit.
If CHG_TABLE has an 'Update' record on July 15th and there is no later change, then that new value should be the end value for July.
Assuming (big if) that's correct, you can ignore the END_DATE column altogether. If you're able, drop it from your data model. You don't need it.
Instead, create a descending index on CHG_TABLE.START_DATE, like so:
create index chg_table_n1 on chg_table (start_date desc);
Then, you should be able to create your snapshot fairly efficiently like this:
SELECT ct.key,
       dd.month_start_date,
       dd.month_end_date,
       ( SELECT value
         FROM   chg_table ct2
         WHERE  ct2.key = ct.key
         AND    ct2.start_date < dd.month_start_date
         ORDER BY ct2.start_date DESC
         FETCH FIRST 1 ROW ONLY ) first_value,
       MAX(ct.value) KEEP ( DENSE_RANK LAST ORDER BY ct.start_date ) last_value
FROM   dim_date dd
       INNER JOIN chg_table ct
               ON ct.start_date BETWEEN dd.month_start_date AND dd.month_end_date
GROUP BY ct.key, dd.month_start_date, dd.month_end_date;
Hopefully you are on release 12.1 or later for the FETCH FIRST syntax. Otherwise, you'll need to tweak that part to the pre-12.1 equivalent.
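For reference, one pre-12.1 rewrite of that scalar subquery is a sketch like the following, reusing the same KEEP (DENSE_RANK LAST) trick as the main query (a ROWNUM-wrapped inline view is awkward here because the correlation on ct.key cannot reach inside a nested view pre-12c):

```sql
-- Hypothetical pre-12.1 replacement for the FETCH FIRST scalar subquery:
-- the value with the latest start_date strictly before the month start.
( SELECT MAX(ct2.value) KEEP ( DENSE_RANK LAST ORDER BY ct2.start_date )
  FROM   chg_table ct2
  WHERE  ct2.key = ct.key
  AND    ct2.start_date < dd.month_start_date ) first_value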
FULL EXAMPLE WITH TEST DATA
WITH chg_table ( key, start_date, end_date, value, record_type ) AS
(
SELECT 1,TO_DATE('5/25/2019 2.05','MM/DD/YYYY HH24.MI'),TO_DATE('12/31/9999 00.00','MM/DD/YYYY HH24.MI'), 800, 'Insert' FROM DUAL UNION ALL
SELECT 1,TO_DATE('5/25/2019 2.05','MM/DD/YYYY HH24.MI'),TO_DATE('5/31/2019 11.12','MM/DD/YYYY HH24.MI'), 800, 'Update' FROM DUAL UNION ALL
SELECT 1,TO_DATE('5/31/2019 11.12','MM/DD/YYYY HH24.MI'),TO_DATE('12/31/9999 00.00','MM/DD/YYYY HH24.MI'), 900, 'Insert' FROM DUAL UNION ALL
SELECT 1,TO_DATE('5/31/2019 11.12','MM/DD/YYYY HH24.MI'),TO_DATE('6/15/2019 12.05','MM/DD/YYYY HH24.MI'), 900, 'Update' FROM DUAL UNION ALL
SELECT 1,TO_DATE('6/15/2019 12.05','MM/DD/YYYY HH24.MI'),TO_DATE('12/31/9999 00.00','MM/DD/YYYY HH24.MI'), 1000, 'Insert' FROM DUAL UNION ALL
SELECT 1,TO_DATE('6/15/2019 12.05','MM/DD/YYYY HH24.MI'),TO_DATE('6/25/2019 10.20','MM/DD/YYYY HH24.MI'), 1000, 'Update' FROM DUAL UNION ALL
SELECT 1,TO_DATE('6/25/2019 10.20','MM/DD/YYYY HH24.MI'),TO_DATE('12/31/9999 00.00','MM/DD/YYYY HH24.MI'), 500, 'Insert' FROM DUAL UNION ALL
SELECT 1,TO_DATE('6/25/2019 10.20','MM/DD/YYYY HH24.MI'),TO_DATE('6/30/2019 11.12','MM/DD/YYYY HH24.MI'), 500, 'Update' FROM DUAL UNION ALL
SELECT 1,TO_DATE('6/30/2019 11.12','MM/DD/YYYY HH24.MI'),TO_DATE('12/31/9999 00.00','MM/DD/YYYY HH24.MI'),3000, 'Insert' FROM DUAL UNION ALL
SELECT 1,TO_DATE('6/30/2019 11.12','MM/DD/YYYY HH24.MI'),TO_DATE('7/15/2019 1.20','MM/DD/YYYY HH24.MI'), 3000, 'Update' FROM DUAL UNION ALL
SELECT 1,TO_DATE('7/15/2019 1.20','MM/DD/YYYY HH24.MI'),TO_DATE('12/31/9999 00.00','MM/DD/YYYY HH24.MI'),7000, 'Insert' FROM DUAL ),
dim_date ( datekey, month_start_date, month_end_date ) AS (
SELECT 1, DATE'2019-05-01', DATE'2019-06-01' - INTERVAL '1' SECOND FROM DUAL UNION ALL
SELECT 2, DATE'2019-06-01', DATE'2019-07-01' - INTERVAL '1' SECOND FROM DUAL UNION ALL
SELECT 3, DATE'2019-07-01', DATE'2019-08-01' - INTERVAL '1' SECOND FROM DUAL )
SELECT ct.key,
       dd.month_start_date,
       dd.month_end_date,
       ( SELECT value
         FROM   chg_table ct2
         WHERE  ct2.key = ct.key
         AND    ct2.start_date < dd.month_start_date
         ORDER BY ct2.start_date DESC
         FETCH FIRST 1 ROW ONLY ) first_value,
       MAX(ct.value) KEEP ( DENSE_RANK LAST ORDER BY ct.start_date ) last_value
FROM   dim_date dd
       INNER JOIN chg_table ct
               ON ct.start_date BETWEEN dd.month_start_date AND dd.month_end_date
GROUP BY ct.key, dd.month_start_date, dd.month_end_date;
+-----+------------------+----------------+-------------+------------+
| KEY | MONTH_START_DATE | MONTH_END_DATE | FIRST_VALUE | LAST_VALUE |
+-----+------------------+----------------+-------------+------------+
| 1 | 01-MAY-19 | 31-MAY-19 | | 900 |
| 1 | 01-JUN-19 | 30-JUN-19 | 900 | 3000 |
| 1 | 01-JUL-19 | 31-JUL-19 | 3000 | 7000 |
+-----+------------------+----------------+-------------+------------+
Update - a version without MAX() KEEP(), assuming the existence of a DIM_PERSON table:
SELECT k.key,
       dd.month_start_date,
       dd.month_end_date,
       ( SELECT value
         FROM   chg_table ct2
         WHERE  ct2.key = k.key
         AND    ct2.start_date < dd.month_start_date
         ORDER BY ct2.start_date DESC
         FETCH FIRST 1 ROW ONLY ) first_value,
       ( SELECT value
         FROM   chg_table ct2
         WHERE  ct2.key = k.key
         AND    ct2.start_date <= dd.month_end_date
         ORDER BY ct2.start_date DESC
         FETCH FIRST 1 ROW ONLY ) last_value
FROM   dim_date dd
       CROSS JOIN dim_person k
GROUP BY k.key, dd.month_start_date, dd.month_end_date;

Related

Oracle SQL: How can I sum every x number of subsequent rows for each row

I have a data table that looks like this:
| Contract_Date | Settlement_Price |
|---------------|------------------|
| 01/10/2020    | 50               |
| 01/11/2020    | 10               |
| 01/01/2021    | 20               |
| 01/02/2021    | 30               |
| 01/03/2021    | 50               |
I would like to write a query that sums every two rows beneath each row. For example, on the first row (contract date 01/10/2020), the sum column would add 10 and 20 to give 30. On the next row, the sum column would add 20 and 30 to give 50, and so on. The resulting table would look like this:
| Contract_Date | Settlement_Price | Sum_Column |
|---------------|------------------|------------|
| 01/10/2020    | 50               | 30         |
| 01/11/2020    | 10               | 50         |
| 01/01/2021    | 20               | 80         |
| 01/02/2021    | 30               |            |
| 01/03/2021    | 50               |            |
Could anyone please help me with the query to do this, not just for 2 subsequent rows but for x subsequent rows?
So far I had tried SUM(Settlement_Price) OVER (ORDER BY Contract_date ROWS BETWEEN 3 PRECEDING AND CURRENT ROW). Of course a frame anchored at the current row was not right, but that is as far as I had gotten.
You can use the SUM analytic function:
SELECT contract_date,
settlement_price,
CASE COUNT(*) OVER (
ORDER BY contract_date ROWS BETWEEN 1 FOLLOWING AND 2 FOLLOWING
)
WHEN 2
THEN SUM( settlement_price ) OVER (
ORDER BY contract_date ROWS BETWEEN 1 FOLLOWING AND 2 FOLLOWING
)
END AS sum_column
FROM table_name;
Or you can use LEAD:
SELECT contract_date,
settlement_price,
LEAD( settlement_price, 1 , NULL ) OVER ( ORDER BY contract_date )
+ LEAD( settlement_price, 2 , NULL ) OVER ( ORDER BY contract_date )
AS sum_column
FROM table_name;
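To generalize to x subsequent rows, only the window frame bound and the guard value change. A sketch with a hypothetical x = 3:

```sql
-- Sketch: sum the next 3 rows after each row; the COUNT(*) guard leaves
-- the sum NULL when fewer than 3 rows follow, mirroring the 2-row version.
SELECT contract_date,
       settlement_price,
       CASE COUNT(*) OVER (
              ORDER BY contract_date ROWS BETWEEN 1 FOLLOWING AND 3 FOLLOWING
            )
       WHEN 3
       THEN SUM( settlement_price ) OVER (
              ORDER BY contract_date ROWS BETWEEN 1 FOLLOWING AND 3 FOLLOWING
            )
       END AS sum_column
FROM   table_name;
```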
So, for the test data:
CREATE TABLE table_name ( contract_date, settlement_price ) AS
SELECT DATE '2020-10-01', 50 FROM DUAL UNION ALL
SELECT DATE '2020-11-01', 10 FROM DUAL UNION ALL
SELECT DATE '2020-12-01', 20 FROM DUAL UNION ALL
SELECT DATE '2021-01-01', 30 FROM DUAL UNION ALL
SELECT DATE '2021-02-01', 50 FROM DUAL;
Both queries output:
CONTRACT_DATE | SETTLEMENT_PRICE | SUM_COLUMN
:------------ | ---------------: | ---------:
01-OCT-20 | 50 | 30
01-NOV-20 | 10 | 50
01-DEC-20 | 20 | 80
01-JAN-21 | 30 | null
01-FEB-21 | 50 | null
In other words, the fix to your original attempt is simply:
SUM (Settlement_Price) OVER (ORDER BY Contract_date ROWS BETWEEN 1 FOLLOWING AND 2 FOLLOWING)

creating complete historical timeline from overlapping intervals

I have the table below, which contains a code, from, to, and hours. The problem is that I have overlapping dates in the intervals. Instead, I want to create a complete historical timeline: when the code is identical and there is an overlap, it should sum the hours, as in the desired result.
Table:
+------+------------+------------+-------+
| code | from       | to         | hours |
+------+------------+------------+-------+
| 1    | 2013-05-01 | 2013-09-30 | 37    |
| 1    | 2013-05-01 | 2014-02-28 | 10    |
| 1    | 2013-10-01 | 9999-12-31 | 5     |
+------+------------+------------+-------+
Desired result:
+------+------------+------------+-------+
| code | from       | to         | hours |
+------+------------+------------+-------+
| 1    | 2013-05-01 | 2013-09-30 | 47    |
| 1    | 2013-10-01 | 2014-02-28 | 15    |
| 1    | 2014-02-29 | 9999-12-31 | 5     |
+------+------------+------------+-------+
Oracle Setup:
CREATE TABLE Table1 ( code, "FROM", "TO", hours ) AS
SELECT 1, DATE '2013-05-01', DATE '2013-09-30', 37 FROM DUAL UNION ALL
SELECT 1, DATE '2013-05-01', DATE '2014-02-28', 10 FROM DUAL UNION ALL
SELECT 1, DATE '2013-10-01', DATE '9999-12-31', 5 FROM DUAL;
Query:
SELECT *
FROM (
SELECT code,
dt AS "FROM",
LEAD( dt ) OVER ( PARTITION BY code ORDER BY dt ASC, value DESC, ROWNUM ) AS "TO",
hours
FROM (
SELECT code,
dt,
SUM( hours * value ) OVER ( PARTITION BY code ORDER BY dt ASC, VALUE DESC ) AS hours,
value
FROM table1
UNPIVOT ( dt FOR value IN ( "FROM" AS 1, "TO" AS -1 ) )
)
)
WHERE "FROM" + 1 < "TO";
Results:
CODE FROM TO HOURS
---- ---------- ---------- -----
1 2013-05-01 2013-09-30 47
1 2013-10-01 2014-02-28 15
1 2014-02-28 9999-12-31 5
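To see why the UNPIVOT query works, it helps to trace the intermediate rows by hand (a worked sketch of the inner query over the test data, written as comments, not extra query output):

```sql
-- Each interval becomes two "events": +1 at "FROM" and -1 at "TO".
-- The running SUM(hours * value), ordered by date (ties: +1 before -1),
-- gives the total hours of all intervals covering that point in time:
--
--   DT          VALUE   RUNNING HOURS
--   2013-05-01   +1     37 + 10 = 47    -- both May intervals open
--   2013-09-30   -1     47 - 37 = 10    -- the 37-hour interval closes
--   2013-10-01   +1     10 + 5  = 15    -- the 5-hour interval opens
--   2014-02-28   -1     15 - 10 =  5    -- the 10-hour interval closes
--   9999-12-31   -1      5 - 5  =  0
--
-- LEAD(dt) then pairs each event date with the next one, and the final
-- filter ("FROM" + 1 < "TO") drops the zero-width slice 09-30 .. 10-01.
```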

How to use GROUP BY and MAX(date) with multiple records

I want to GROUP BY and take the MAX(datetime) for each record, but my query returns duplicate records, and I don't want duplicates.
SQL:
select a.pmn_code,
a.ref_period,
a.SERVICE_TYPE,
min(a.status) keep (dense_rank last order by a.updated_dtm) as status,
max(a.updated_dtm) as updated_dtm
from tempChkStatus a
group by a.pmn_code, a.ref_period, a.SERVICE_TYPE
Data Table tempChkStatus:
PMN_CODE | REF_PERIOD | SERVICE_TYPE | STATUS | UPDATED_DTM
A | 01/2016 | OI | I | 19/08/2016 10:54:44
A | 01/2016 | OP | N | 06/06/2017 15:09:55
A | 02/2016 | OT | I | 31/08/2016 08:37:45
A | 02/2016 | OT | N | 12/10/2016 11:13:56
A | 04/2016 | OI | I | 19/08/2016 10:54:44
A | 04/2016 | OP | N | 06/06/2017 15:09:55
Result SQL:
PMN_CODE | REF_PERIOD | SERVICE_TYPE | STATUS | UPDATED_DTM
A | 01/2016 | OI | I | 19/08/2016 10:54:44
A | 01/2016 | OP | N | 06/06/2017 15:09:55
A | 02/2016 | OT | N | 12/10/2016 11:13:56
A | 04/2016 | OI | I | 19/08/2016 10:54:44
A | 04/2016 | OP | N | 06/06/2017 15:09:55
But I want Result:
PMN_CODE | REF_PERIOD | SERVICE_TYPE | STATUS | UPDATED_DTM
A | 01/2016 | OP | N | 06/06/2017 15:09:55
A | 02/2016 | OT | N | 12/10/2016 11:13:56
A | 04/2016 | OP | N | 06/06/2017 15:09:55
Help me please. Thanks in advance ;)
with tempChkStatus (
PMN_CODE, REF_PERIOD , SERVICE_TYPE , STATUS , UPDATED_DTM) as
(
select 'A', '01/2016' ,'OI', 'I', to_date('19/08/2016 10:54:44', 'dd/mm/yyyy hh24:mi:ss') from dual union all
select 'A', '01/2016' ,'OP', 'N', to_date('06/06/2017 15:09:55', 'dd/mm/yyyy hh24:mi:ss') from dual union all
select 'A', '02/2016' ,'OT', 'I', to_date('31/08/2016 08:37:45', 'dd/mm/yyyy hh24:mi:ss') from dual union all
select 'A', '02/2016' ,'OT', 'N', to_date('12/10/2016 11:13:56', 'dd/mm/yyyy hh24:mi:ss') from dual union all
select 'A', '04/2016' ,'OI', 'I', to_date('19/08/2016 10:54:44', 'dd/mm/yyyy hh24:mi:ss') from dual union all
select 'A', '04/2016' ,'OP', 'N', to_date('06/06/2017 15:09:55', 'dd/mm/yyyy hh24:mi:ss') from dual
)
select * from (
select e.*, max(updated_dtm) over (partition by ref_period) md from tempchkstatus e
)
where updated_dtm = md
;
You just need to remove SERVICE_TYPE from the GROUP BY:
select s.pmn_code, s.ref_period,
min(s.SERVICE_TYPE) as service_type,
min(s.status) keep (dense_rank last order by s.updated_dtm) as status,
max(s.updated_dtm) as updated_dtm
from tempChkStatus s
group by s.pmn_code, s.ref_period;
The GROUP BY expressions define the rows returned by an aggregation query.
This version uses MIN() on SERVICE_TYPE; it is not clear what logic you want for the result set.
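If the intent is to return the SERVICE_TYPE of the most recent row (a guess at the requirement), the same KEEP (DENSE_RANK LAST) trick used for STATUS works for it too:

```sql
-- Sketch: take both STATUS and SERVICE_TYPE from the row with the
-- latest UPDATED_DTM in each (pmn_code, ref_period) group.
select s.pmn_code, s.ref_period,
       min(s.SERVICE_TYPE) keep (dense_rank last order by s.updated_dtm) as service_type,
       min(s.status)       keep (dense_rank last order by s.updated_dtm) as status,
       max(s.updated_dtm) as updated_dtm
from tempChkStatus s
group by s.pmn_code, s.ref_period;
```

For the sample data this picks OP, OT, and OP for the three periods, which matches the desired result shown in the question.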

Generate the rank/number if the difference between consecutive rows is less than 10 days

I need a Hive query that calculates the date difference between consecutive records of the same txn type and generates the same number if the difference is less than 10 days, else generates a new number.
Input table
+--------+----------+-------------+
| Txn_id | Txn_type | Txn_date |
+--------+----------+-------------+
| 1 | T100 | 26-Aug-2015 |
| 2 | T100 | 03-Nov-2015 |
| 3 | T100 | 05-Dec-2015 |
| 4 | T100 | 08-Dec-2015 |
| 5 | T100 | 25-Jan-2016 |
| 6 | T111 | 26-Jan-2016 |
| 7 | T200 | 02-Feb-2016 |
| 8 | T200 | 07-May-2016 |
| 9 | T200 | 12-May-2016 |
| 10 | T200 | 20-May-2016 |
+--------+----------+-------------+
Expected output
+--------+----------+-------------+--------+
| Txn_id | Txn_type | Txn_date | Number |
+--------+----------+-------------+--------+
| 1 | T100 | 26-Aug-2015 | 1 |
| 2 | T100 | 03-Nov-2015 | 2 |
| 3 | T100 | 05-Dec-2015 | 3 |
| 4 | T100 | 08-Dec-2015 | 3 |
| 5 | T100 | 25-Jan-2016 | 4 |
| 6 | T111 | 26-Jan-2016 | 1 |
| 7 | T200 | 02-Feb-2016 | 1 |
| 8 | T200 | 07-May-2016 | 2 |
| 9 | T200 | 12-May-2016 | 2 |
| 10 | T200 | 20-May-2016 | 2 |
+--------+----------+-------------+--------+
Not sure if "less than 10 days" means strict or non-strict inequality, but otherwise:
with
inputs ( txn_id, txn_type, txn_date ) as (
select 1, 'T100', to_date('26-Aug-2015', 'dd-Mon-yy') from dual union all
select 2, 'T100', to_date('03-Nov-2015', 'dd-Mon-yy') from dual union all
select 3, 'T100', to_date('05-Dec-2015', 'dd-Mon-yy') from dual union all
select 4, 'T100', to_date('08-Dec-2015', 'dd-Mon-yy') from dual union all
select 5, 'T100', to_date('25-Jan-2016', 'dd-Mon-yy') from dual union all
select 6, 'T111', to_date('26-Jan-2016', 'dd-Mon-yy') from dual union all
select 7, 'T200', to_date('02-Feb-2016', 'dd-Mon-yy') from dual union all
select 8, 'T200', to_date('07-May-2016', 'dd-Mon-yy') from dual union all
select 9, 'T200', to_date('12-May-2016', 'dd-Mon-yy') from dual union all
select 10, 'T200', to_date('20-May-2016', 'dd-Mon-yy') from dual
),
prep ( txn_id, txn_type, txn_date, ct ) as (
select txn_id, txn_type, txn_date,
case when txn_date < lag(txn_date) over (partition by txn_type
order by txn_date) + 10 then 0 else 1 end
from inputs
)
select txn_id, txn_type, txn_date,
sum(ct) over (partition by txn_type order by txn_date) as number_
from prep;
I used number_ as a column name; don't use reserved Oracle words for table or column names unless your life depends on it, and not even then.
Use a common table expression to mark the rows that have a difference of more than 10 days and then count those to get the new number.
with test_data as (
SELECT 1 txn_id, 'T100' txn_type, to_date('26-AUG-2015','DD-MON-YYYY') txn_date from dual union all
SELECT 2 txn_id, 'T100', to_date('03-NOV-2015','DD-MON-YYYY') from dual union all
SELECT 3 txn_id, 'T100', to_date('05-DEC-2015','DD-MON-YYYY') from dual union all
SELECT 4 txn_id, 'T100', to_date('08-DEC-2015','DD-MON-YYYY') from dual union all
SELECT 5 txn_id, 'T100', to_date('25-JAN-2016','DD-MON-YYYY') from dual union all
SELECT 6 txn_id, 'T111', to_date('26-JAN-2016','DD-MON-YYYY') from dual union all
SELECT 7 txn_id, 'T200', to_date('02-FEB-2016','DD-MON-YYYY') from dual union all
SELECT 8 txn_id, 'T200', to_date('07-MAY-2016','DD-MON-YYYY') from dual union all
SELECT 9 txn_id, 'T200', to_date('12-MAY-2016','DD-MON-YYYY') from dual union all
SELECT 10 txn_id, 'T200', to_date('20-MAY-2016','DD-MON-YYYY') from dual),
markers as (
select td.*,
case when td.txn_date - nvl(lag(td.txn_date)
over ( partition by txn_type order by txn_id ), td.txn_date-9999) > 10
THEN 'Y' ELSE NULL end new_txn_marker from test_data td )
SELECT txn_id, txn_type,txn_date,
count(new_txn_marker) over ( partition by txn_type order by txn_id ) "NUMBER"
FROM markers;

Get first and last of serial group in oracle

I'm trying to select from table (sorted):
+--------+-------+
| Serial | Group |
+--------+-------+
| 0100 | 99 |
| 0101 | 99 |
| 0102 | 99 |
| 096 | 92 |
| 097 | 92 |
| 099 | 93 |
| 23 | 16 |
| 95 | 87 |
| 99 | 90 |
| 100 | 90 |
| 101 | 90 |
| 102 | 90 |
| a | a |
| b | b |
| c | c |
+--------+-------+
and I would like this table (first, last, and quantity by group):
+------------+----------+----------+
| fromSerial | toSerial | quantity |
+------------+----------+----------+
| 0100 | 0102 | 3 |
| 096 | 097 | 2 |
| 099 | 099 | 1 |
| 99 | 102 | 4 |
| 23 | 23 | 1 |
| 95 | 95 | 1 |
| a | a | 1 |
| b | b | 1 |
| c | c | 1 |
+------------+----------+----------+
Thanks.
You can use the ROW_NUMBER analytic function to partition the data based on the Group column.
You can also get the number of elements in each partition.
You can then do CASE-based aggregation to get the from and to serial number values.
with cte
as
(
select "Serial", "Group", row_number() over ( partition by "Group" order by "Serial" ) as rn,
count(*) over ( partition by "Group") as cnt
from Table1
)
select max(case when rn =1 then "Serial" end) as "FromSerial",
max(case when rn =cnt then "Serial" end) as "ToSerial",
max(cnt) as quantity
from cte
group by "Group"
Use MIN, MAX and GROUP BY.
And, DO NOT use keyword GROUP as column name.
WITH DATA AS (
  SELECT '0100' Serial, '99' "GROUP_1" FROM dual UNION ALL
  SELECT '0101', '99' FROM dual UNION ALL
  SELECT '0102', '99' FROM dual UNION ALL
  SELECT '096', '92' FROM dual UNION ALL
  SELECT '097', '92' FROM dual UNION ALL
  SELECT '099', '93' FROM dual UNION ALL
  SELECT '23', '16' FROM dual UNION ALL
  SELECT '95', '87' FROM dual UNION ALL
  SELECT '99', '90' FROM dual UNION ALL
  SELECT '100', '90' FROM dual UNION ALL
  SELECT '101', '90' FROM dual UNION ALL
  SELECT '102', '90' FROM dual UNION ALL
  SELECT 'A', 'A' FROM dual UNION ALL
  SELECT 'b', 'b' FROM dual UNION ALL
  SELECT 'c', 'c' FROM dual
)
SELECT MIN(serial) fromserial,
       MAX(Serial) toserial,
       COUNT(*) quantity
FROM DATA
GROUP BY group_1
ORDER BY fromserial;
FROM TOSE QUANTITY
---- ---- ----------
0100 0102 3
096 097 2
099 099 1
100 99 4
23 23 1
95 95 1
A A 1
b b 1
c c 1
9 rows selected.
Try this query :
SELECT grp,
Cast(Min(Cast(serial AS INT)) AS VARCHAR2(30)) fromserial,
Cast(Max(Cast(serial AS INT)) AS VARCHAR2(30)) toserial,
Count(*) quantity
FROM yourtable
WHERE NVL(LENGTH(TRIM(TRANSLATE(serial, '0123456789', ' '))), 0) = 0
GROUP BY grp
UNION
SELECT grp,
Cast(Min(serial) AS VARCHAR2(30)) fromserial,
Cast(Max(serial) AS VARCHAR2(30)) toserial,
Count(*) quantity
FROM yourtable
WHERE NVL(LENGTH(TRIM(TRANSLATE(serial, '0123456789', ' '))), 0) != 0
GROUP BY grp
ORDER BY grp