Passing data from one table to a PL/SQL block - Oracle SQL

I have a table dates_2019 with all the weekday dates for 2019, as below:
TS_RANGE_BEGIN |TS_RANGE_END
2019-01-01 17:00:00 |2019-01-02 17:00:00
2019-01-02 17:00:00 |2019-01-03 17:00:00
2019-01-03 17:00:00 |2019-01-04 17:00:00
2019-01-04 17:00:00 |2019-01-07 17:00:00
2019-01-07 17:00:00 |2019-01-08 17:00:00
2019-01-08 17:00:00 |2019-01-09 17:00:00
My insert query is as below:
insert into report_2019 (ab, app_name, status, sub_count, category, last_modified_timestamp)
with T as (
  select id, ab, app_name, status, trunc(last_modified_timestamp),
         row_number() over (partition by id, ab order by p_message_id desc, message_id desc) lastest_status_order_id,
         p_message_id,
         last_modified_timestamp, reporting_purpose
  from (
    select id,
           ab,
           app_name,
           status,
           p.message_id p_message_id,
           s.message_id,
           s.last_modified_timestamp,
           reporting_purpose
    from table_a s, table_b d, table_c k, table_d t, table_e p
    where s.last_modified_timestamp > to_timestamp('2019-01-01 17', 'YYYY-MM-DD HH24')
      and s.last_modified_timestamp <= to_timestamp('2019-01-02 17', 'YYYY-MM-DD HH24')
      and .....
  ) a
)
select * from (
  select ab, app_name, status, count(*) subtotal,
         'Reporting' as v_category, trunc(last_modified_timestamp)
  from T
  where lastest_status_order_id = 1
    and last_modified_timestamp <= to_timestamp('2019-01-02 17', 'YYYY-MM-DD HH24')
  group by ab, app_name, status, trunc(last_modified_timestamp)
) a
order by ab, app_name;
The original idea was to join the two tables dates_2019 and T and get the results for the whole year, as below:
where s.LAST_MODIFIED_TIMESTAMP > c.ts_range_begin
and s.LAST_MODIFIED_TIMESTAMP <= c.ts_range_end
....
However, the temp tablespace is low and I got the errors below:
[Error Code: 12801, SQL State: 72000] ORA-12801: error signaled in
parallel query server P004 ORA-01555: snapshot too old: rollback
segment number 30 with name "_SYSSMU30_326584413$" too small
The option now is to insert the data one day at a time in a PL/SQL block.
Could you please help me with the solution?
Thanks,
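One possible approach, as a minimal sketch only (not a tested solution): loop over dates_2019 in a PL/SQL block and run the daily insert once per range, committing after each day so each transaction stays small. The inner query here is abbreviated; the full join list, the remaining predicates, and the lastest_status_order_id filter from the statement above would go where the comments indicate, with the hard-coded timestamps replaced by the loop variables.
begin
  for r in (select ts_range_begin, ts_range_end
              from dates_2019
             order by ts_range_begin)
  loop
    insert into report_2019 (ab, app_name, status, sub_count, category, last_modified_timestamp)
    select ab, app_name, status, count(*), 'Reporting', trunc(last_modified_timestamp)
      from (
            -- abbreviated: the full inner query from above goes here,
            -- including the other joins and the row_number() filter
            select s.ab, s.app_name, s.status, s.last_modified_timestamp
              from table_a s
             where s.last_modified_timestamp >  r.ts_range_begin
               and s.last_modified_timestamp <= r.ts_range_end
               -- and ..... (remaining join conditions)
           )
     group by ab, app_name, status, trunc(last_modified_timestamp);
    commit;  -- commit per day to keep undo/temp usage small
  end loop;
end;
/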

Related

PostgreSQL Attendance and night shift

I have the following table:
dt                  | type
2022-09-12 21:36:26 | WORK_START
2022-09-13 02:00:00 | BREAK_START
2022-09-20 06:00:00 | WORK_START
2022-09-20 10:00:00 | BREAK_START
2022-09-20 10:27:00 | BREAK_END
2022-09-20 13:00:00 | WORK_END
2022-09-13 06:00:00 | WORK_END
2022-09-13 02:30:00 | BREAK_END
and the query:
SELECT g.tempDatum::date as datum,
       MAX(att.dt::time) FILTER (WHERE att.type = 'WORK_START') as work_start,
       MAX(att.dt::time) FILTER (WHERE att.type = 'BREAK_START') as break_start,
       MAX(att.dt::time) FILTER (WHERE att.type = 'BREAK_END') as break_end,
       MAX(att.dt::time) FILTER (WHERE att.type = 'WORK_END') as work_end
FROM generate_series('2022-09-01', '2022-09-30', '1 day'::interval) AS g(tempDatum)
LEFT JOIN att ON att.dt::date = g.tempDatum::date
GROUP BY g.tempDatum
ORDER BY g.tempDatum;
Result is pretty good:
Result photo
except for 2022-09-12, because it is a night shift. I want to move BREAK_START, BREAK_END and WORK_END to the day 2022-09-12 for a better result as an attendance log.
How can I achieve this? Big thanks for any help.
By grouping each work day (start, break start, break end, end) as one unit, we can use crosstab to pivot it, using the first work date of each group as the date for the entire day, as requested.
select *
from crosstab(
  'select min(dte) over (partition by grp), type, tme
   from (
     select dt::date as dte,
            dt::time as tme,
            type,
            row_number() over (order by dt, type)
              - case when row_number() over (order by dt, type) <= 4
                     then row_number() over (order by dt, type)
                     else row_number() over (order by dt, type) - 4
                end as grp
     from t
   ) t'
) as ct(dt date, WORK_START time, BREAK_START time, BREAK_END time, WORK_END time)
dt         | work_start | break_start | break_end | work_end
2022-09-12 | 21:36:26   | 02:00:00    | 02:30:00  | 06:00:00
2022-09-20 | 06:00:00   | 10:00:00    | 10:27:00  | 13:00:00
Fiddle
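Note that crosstab comes from PostgreSQL's tablefunc extension, so if it is not already installed in the database it may need to be enabled first (assuming you have the privilege to do so):
-- enables the crosstab() function
CREATE EXTENSION IF NOT EXISTS tablefunc;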

Merge consecutive time records in SQL Server 2008

Say I have data like this; the timeslots are basically times 30 minutes apart.
Note there is a gap on 2021-12-24 between 15:30 and 16:30.
calender_date | timeslot | timeslot_end
2021-12-24    | 14:00:00 | 14:30:00
2021-12-24    | 14:30:00 | 15:00:00
2021-12-24    | 15:00:00 | 15:30:00
2021-12-24    | 16:30:00 | 17:00:00
2021-12-24    | 17:00:00 | 17:30:00
2021-12-24    | 17:30:00 | 18:00:00
2021-12-30    | 09:00:00 | 09:30:00
2021-12-30    | 09:30:00 | 10:00:00
I want to merge rows where timeslot_end = the next row's timeslot within the same day, so the data would look like this:
calender_date | timeslot | timeslot_end
2021-12-24    | 14:00:00 | 15:30:00
2021-12-24    | 16:30:00 | 18:00:00
2021-12-30    | 9:00:00  | 10:00:00
I have tried row numbering with a self join:
WITH cte AS
(
SELECT
calender_date
,timeslot
,timeslot_end
,ROW_NUMBER() OVER (ORDER BY [timeslot]) rn
FROM #tmp_leave tl
)
SELECT
MIN(a.timeslot) OVER(PARTITION BY a.calender_date, DATEDIFF(minute,a.timeslot_end,
ISNULL(b.timeslot, a.timeslot_end))) AS 'StartTime',
MAX(a.timeslot_end ) OVER(PARTITION BY a.calender_date,
DATEDIFF(minute,a.timeslot_end, ISNULL(b.timeslot, a.timeslot_end))) AS 'EndTime'
FROM cte a
LEFT JOIN cte b
ON a.rn + 1 = b.rn AND a.timeslot_end = b.timeslot
ORDER BY calender_date
But the result isn't quite right; it ignored the gap on 2021-12-24 and returned the rows below.
calender_date | timeslot | timeslot_end
2021-12-24    | 14:00:00 | 18:00:00
2021-12-30    | 9:00:00  | 10:00:00
I have been searching and trying to solve this for a while now; any help is very much appreciated!
This is a gaps and islands problem. We can approach this by creating pseudo groups for each island of continuous dates/times.
WITH cte AS (
SELECT *, LAG(timeslot_end) OVER
(ORDER BY calendar_date, timeslot) timeslot_end_lag
FROM yourTable
),
cte2 AS (
SELECT *, COUNT(CASE WHEN timeslot_end_lag <> timeslot THEN 1 END)
OVER (ORDER BY calendar_date, timeslot) AS grp
FROM cte
)
SELECT calendar_date,
MIN(timeslot) AS timeslot,
MAX(timeslot_end) AS timeslot_end
FROM cte2
GROUP BY calendar_date, grp
ORDER BY calendar_date;
Demo

Find peaks of data

So I have a table Integrations.
Inte | Start Date       | End Date         | Total_Duration
INT1 | 1/7/2021 7:16:00 | 1/7/2021 9:22:00 | 02:06:00
INt2 | 2/7/2021 3:48:00 | 2/7/2021 5:10:00 | 01:22:00
Output I need:
Running Time     | No of Inte.
1/7/2021 7:00:00 | 1
1/7/2021 8:00:00 | 1
1/7/2021 9:00:00 | 1
2/7/2021 4:00:00 | 1
2/7/2021 5:00:00 | 1
Basically, I want to plot the peak hours, when the most Integrations were running.
The SQL query I wrote:
select time, sum(value) as No_of_Inte
from(
select round(Start_Date, 'HH24') as time, count(*) as value
from Integrations
group by Start_Date
)
group by time
order by time asc
But this does not consider Total Duration.
Output:
Running Time     | No of Inte.
1/7/2021 7:00:00 | 1
2/7/2021 4:00:00 | 1
Also, new Integrations are added every day.
This can be done using a recursive query. First, create the test data:
CREATE TABLE integrations (inte,start_date, end_date)
AS
(
SELECT 'INT1', TO_DATE('1/7/2021 7:16:00','DD/MM/YYYY HH24:MI:SS'), TO_DATE('1/7/2021 9:22:00','DD/MM/YYYY HH24:MI:SS') FROM dual UNION ALL
SELECT 'INT2', TO_DATE('2/7/2021 3:48:00','DD/MM/YYYY HH24:MI:SS'), TO_DATE('2/7/2021 5:10:00','DD/MM/YYYY HH24:MI:SS') FROM dual
);
Now use a recursive query to loop through the hours between start and end date. Then group by hour to get the correct counts per hour.
WITH row_per_hours (id, run_hour, end_date) AS
(
SELECT inte,
TRUNC(start_date,'HH24'),
end_date
FROM integrations
UNION ALL
SELECT id,
run_hour + INTERVAL '1' HOUR,
end_date
FROM row_per_hours
WHERE run_hour + INTERVAL '1' HOUR < end_date
)
SELECT TO_CHAR(run_hour,'DD/MM/YYYY HH24:MI:SS') as running_time,
COUNT(id) as integration_count
FROM row_per_hours
GROUP BY TO_CHAR(run_hour,'DD/MM/YYYY HH24:MI:SS') ORDER BY 1;
RUNNING_TIME INTEGRATION_COUNT
------------------- -----------------
01/07/2021 07:00:00 1
01/07/2021 08:00:00 1
01/07/2021 09:00:00 1
02/07/2021 03:00:00 1
02/07/2021 04:00:00 1
02/07/2021 05:00:00 1
For 12C and above:
You may use a lateral join to generate the required number of rows for each interval. Since it looks like you need some rounding of dates towards the nearest hour, I've used round instead of trunc. Or is there some other reason why the first interval treats 7:00 as included?
with a(Inte, start_dt, end_dt) as (
select
'INT1'
, to_date('1/7/2021 07:16:00', 'dd/mm/yyyy hh24:mi:ss')
, to_date('1/7/2021 09:22:00', 'dd/mm/yyyy hh24:mi:ss')
from dual union all
select
'INt2'
, to_date('2/7/2021 03:48:00', 'dd/mm/yyyy hh24:mi:ss')
, to_date('2/7/2021 05:10:00', 'dd/mm/yyyy hh24:mi:ss')
from dual
)
select /*+ gather_plan_statistics */
b.hour_
, count(1) as int_cnt
from a
outer apply (
select
round(a.start_dt + numtodsinterval(level - 1, 'HOUR'), 'hh24') as hour_
from dual
connect by round(start_dt, 'hh24') + numtodsinterval(level - 1, 'HOUR') <= trunc(end_dt, 'hh24')
) b
group by b.hour_
order by 1
HOUR_               | INT_CNT
------------------- | -------
2021-07-01 07:00:00 | 1
2021-07-01 08:00:00 | 1
2021-07-01 09:00:00 | 1
2021-07-02 04:00:00 | 1
2021-07-02 05:00:00 | 1
db<>fiddle here

Generate a table with a range of timestamps - Oracle SQL

I am trying to create a table with 2 columns in the below format, with all the dates of 2019:
START_TIME END_TIME
2019-01-01 17:00:00|2019-01-02 17:00:00
2019-01-02 17:00:00|2019-01-03 17:00:00
2019-01-03 17:00:00|2019-01-04 17:00:00
...
...
2019-12-31 17:00:00|2020-01-01 17:00:00
Could you please help troubleshoot the error in this?
Please suggest any optimized way of achieving this.
CREATE TABLE s.dates_2019
(
ts_range_begin timestamp(6),
ts_range_end timestamp(6),
);
insert into s.dates_2019 (ts_range_begin)
select
to_timestamp('12/31/2018 05:00 PM', 'YYYY-MM-DD HH24:MI:SS') + n.n
from
(select rownum n
from ( select 1 just_a_column
from dual
connect by level <=
to_timestamp('12/31/2019 05:00 PM', 'YYYY-MM-DD HH24:MI:SS')
- to_timestamp('12/31/2018 05:00 PM', 'YYYY-MM-DD HH24:MI:SS')
+ 1
) t
) n
where
to_timestamp('12/31/2018 05:00 PM','YYYY-MM-DD HH24:MI:SS') + n.n <= to_timestamp('12/31/2019 05:00 PM','YYYY-MM-DD HH24:MI:SS')
insert into s.dates_2019 (ts_range_end)
select
to_timestamp('2019-01-01 05:00 PM', 'YYYY-MM-DD HH24:MI:SS') + n.n
from
(select rownum n
from ( select 1 just_a_column
from dual
connect by level <=
to_timestamp('2020-01-01 05:00 PM', 'YYYY-MM-DD HH24:MI:SS')
- to_timestamp('2019-01-01 05:00 PM', 'YYYY-MM-DD HH24:MI:SS')
+ 1
) t
) n
where
to_timestamp('2019-01-01 05:00 PM','YYYY-MM-DD HH24:MI:SS') + n.n <= to_timestamp('2020-01-01 05:00 PM','YYYY-MM-DD HH24:MI:SS')
The error is:
[Error Code: 30081, SQL State: 99999] ORA-30081: invalid data type for datetime/interval arithmetic
How about this?
SQL> alter session set nls_date_format = 'yyyy-mm-dd hh24:mi';
Session altered.
SQL> with dates as
2 (select date '2019-01-01' + 17/24 + level - 1 datum
3 from dual
4 connect by level <= date '2020-01-01' - date '2019-01-01' + 1
5 ),
6 staend as
7 (select datum as start_time,
8 lead(datum) over (order by datum) as end_time
9 from dates
10 )
11 select start_time,
12 end_time
13 from staend
14 where end_time is not null
15 order by start_time;
START_TIME END_TIME
---------------- ----------------
2019-01-01 17:00 2019-01-02 17:00
2019-01-02 17:00 2019-01-03 17:00
2019-01-03 17:00 2019-01-04 17:00
2019-01-04 17:00 2019-01-05 17:00
<snip>
2019-12-30 17:00 2019-12-31 17:00
2019-12-31 17:00 2020-01-01 17:00
365 rows selected.
SQL>
If you want to insert dates into a table, you don't really need a timestamp - date will do.
SQL> create table dates_2019
2 (ts_range_begin date,
3 ts_range_end date
4 );
Table created.
SQL> insert into dates_2019 (ts_range_begin, ts_range_end)
2 with dates as
3 (select date '2019-01-01' + 17/24 + level - 1 datum
4 from dual
5 connect by level <= date '2020-01-01' - date '2019-01-01' + 1
6 ),
7 staend as
8 (select datum as start_time,
9 lead(datum) over (order by datum) as end_time
10 from dates
11 )
12 select start_time,
13 end_time
14 from staend
15 where end_time is not null
16 order by start_time;
365 rows created.
SQL>
If you want to aggregate weekends, consider using an offset in the lead analytic function. That offset depends on the day name (Friday). Also, remove weekend days from the result set (line #21, where day not in ('sat', 'sun')).
SQL> insert into dates_2019 (ts_range_begin, ts_range_end)
2 with dates as
3 (select date '2019-01-01' + 17/24 + level - 1 datum,
4 --
5 to_char(date '2019-01-01' + 17/24 + level - 1,
6 'fmdy', 'nls_date_language = english') day
7 from dual
8 connect by level <= date '2020-01-01' - date '2019-01-01' + 1
9 ),
10 staend as
11 (select datum as start_time,
12 day,
13 lead(datum, case when day = 'fri' then 3
14 else 1
15 end) over (order by datum) as end_time
16 from dates
17 )
18 select start_time,
19 end_time
20 from staend
21 where day not in ('sat', 'sun')
22 and end_time is not null;
261 rows created.
SQL> select * from dates_2019 order by ts_range_begin;
TS_RANGE_BEGIN TS_RANGE_END
---------------- ----------------
2019-01-01 17:00 2019-01-02 17:00
2019-01-02 17:00 2019-01-03 17:00
2019-01-03 17:00 2019-01-04 17:00
2019-01-04 17:00 2019-01-07 17:00 --> aggregated
2019-01-07 17:00 2019-01-08 17:00
2019-01-08 17:00 2019-01-09 17:00
2019-01-09 17:00 2019-01-10 17:00
2019-01-10 17:00 2019-01-11 17:00
2019-01-11 17:00 2019-01-14 17:00 --> aggregated
2019-01-14 17:00 2019-01-15 17:00
2019-01-15 17:00 2019-01-16 17:00
<snip>
I think your actual error is because subtracting timestamps returns an interval, whereas you're using the result as a number in CONNECT BY LEVEL. You could cast the timestamps as dates (you might find the answers here useful) or use an interval expression to get the day component between the timestamps.
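For example, a minimal sketch of the interval approach, using the same 17:00 boundaries:
-- EXTRACT(DAY FROM ...) turns the INTERVAL returned by timestamp
-- subtraction into a number that CONNECT BY LEVEL can compare against
select level
from dual
connect by level <= extract(day from (
      to_timestamp('2019-12-31 17', 'YYYY-MM-DD HH24')
    - to_timestamp('2018-12-31 17', 'YYYY-MM-DD HH24')));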
But if this is your actual SQL and not a simplification, I suggest just using dates in the CONNECT BY (you can still keep timestamps in your table if that's what you want) and doing something like...
CREATE TABLE dates_2019
(
ts_range_begin timestamp(6),
ts_range_end timestamp(6)
);
insert into dates_2019 (ts_range_begin)
select
to_timestamp('2018-12-31 17', 'YYYY-MM-DD HH24') + rownum
from
dual
connect by level <= to_date('2019-12-31 17', 'YYYY-MM-DD HH24') - to_date('2018-12-31 17', 'YYYY-MM-DD HH24')
;
update dates_2019 SET ts_range_end = ts_range_begin + 1;
... which I tested in Oracle 18c, but probably works back to 10g.

Counting records and grouping them by the hour

I'm trying to count the records in my table and group them by hour. I'm getting results with my query, but I want it to return every hour even if there are no records.
My current query is:
SELECT nvl(count(*),0) AS transactioncount, trunc(date_modified, 'HH') as TRANSACTIONDATE
FROM TABLE
WHERE date_modified between to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') and to_date('24-Jan-19 06:59:59','dd-MON-yy hh24:mi:ss')
group by trunc(date_modified, 'HH');
This returns a result like this:
TRANSACTIONCOUNT | TRANSACTIONDATE
43 | 23-Jan-19 07:00:00
47 | 23-Jan-19 08:00:00
156 | 23-Jan-19 14:00:00
558 | 23-Jan-19 15:00:00
What I want is for it to return every hour between my two dates, so:
TRANSACTIONCOUNT | TRANSACTIONDATE
43 | 23-Jan-19 07:00:00
47 | 23-Jan-19 08:00:00
0 | 23-Jan-19 09:00:00
0 | 23-Jan-19 10:00:00
0 | 23-Jan-19 11:00:00
0 | 23-Jan-19 12:00:00
0 | 23-Jan-19 13:00:00
156 | 23-Jan-19 14:00:00
558 | 23-Jan-19 15:00:00
--......
0 | 24-Jan-19 00:00:00
0 | 24-Jan-19 01:00:00
0 | 24-Jan-19 02:00:00
--and so on
To fill the holes in the transaction hours, you first create a complete table of hours.
You may use Recursive Subquery Factoring to do it:
WITH hour_table(TRANSACTIONDATE) AS (
SELECT to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') /* init hour here */
FROM DUAL
UNION ALL
SELECT TRANSACTIONDATE + 1/24
FROM hour_table
WHERE TRANSACTIONDATE + 1/24 < to_date('24-JAN-19 06:59:59','dd-MON-yy hh24:mi:ss') /* limit here */
)
select * from hour_table;
TRANSACTIONDATE
-------------------
23.01.2019 07:00:00
23.01.2019 08:00:00
...
24.01.2019 05:00:00
24.01.2019 06:00:00
Note that you use the starting and ending dates in this query; the starting date must be an exact hour.
Next step is as simple as to outer join this hour table to your aggregation and set the default value for the missing hours with NVL.
with hour_table(TRANSACTIONDATE) AS (
SELECT to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') /* init hour here */
FROM DUAL
UNION ALL
SELECT TRANSACTIONDATE + 1/24
FROM hour_table
WHERE TRANSACTIONDATE + 1/24 < to_date('24-JAN-19 06:59:59','dd-MON-yy hh24:mi:ss') /* limit */
),
agg as (
SELECT nvl(count(*),0) AS transactioncount, trunc(date_modified, 'HH') as TRANSACTIONDATE
FROM "TABLE"
WHERE date_modified between to_date('23-JAN-19 07:00:00','dd-MON-yy hh24:mi:ss') and to_date('24-Jan-19 06:59:59','dd-MON-yy hh24:mi:ss')
group by trunc(date_modified, 'HH')
)
select t.TRANSACTIONDATE, nvl(transactioncount,0) transactioncount
from hour_table t
left outer join agg a
on t.TRANSACTIONDATE = a.TRANSACTIONDATE
order by 1;
You might consider using the following CONNECT BY level logic:
SELECT sum(transactioncount) as transactioncount, transactiondate
FROM
(
with "TABLE"(date_modified) as
(
SELECT timestamp'2019-01-23 08:00:00' FROM dual union all
SELECT timestamp'2019-01-23 08:30:00' FROM dual union all
SELECT timestamp'2019-01-23 09:00:00' FROM dual union all
SELECT timestamp'2019-01-24 05:01:00' FROM dual
)
SELECT nvl(count(*),0) AS transactioncount, trunc(date_modified, 'hh24') as transactiondate
FROM "TABLE" t
GROUP BY trunc(date_modified, 'HH24')
UNION ALL
SELECT 0, timestamp'2019-01-23 07:00:00' + ( level - 1 )/24
FROM dual
CONNECT BY level <= 24 * extract( day from
timestamp'2019-01-24 06:59:59'-
timestamp'2019-01-23 07:00:00') +
extract( hour from
timestamp'2019-01-24 06:59:59'-
timestamp'2019-01-23 07:00:00') + 1
)
GROUP BY transactiondate
ORDER BY transactiondate
Rextester Demo