Loop over data block extremely slow (Oracle Forms, PL/SQL)

I created a small loop to find the maximum value of a column in a data block in Oracle Forms. I have to do it this way because the block sometimes gets global parameters from another form, and sometimes it has a different default WHERE clause, etc. It gets populated from different sources, so I can't create a plain cursor; I would have to build it dynamically.
The loop I have is declared like this:
loop
  exit when :system.last_record = 'TRUE';
  if (:block.number > v_max) then
    v_max := :block.number;
  end if;
  next_record;
end loop;
Why is it so slow? It takes a long time to even check a block with 10 records.
Or is there an easier way to select the maximum from a column in a block?
Thanks in advance,

You probably have a lot of calculations and additional fetches in your POST-QUERY trigger? That trigger gets executed for each fetched row.
Alternatively you could set the block property "Query All Records" to Yes (so all rows are fetched), and in the POST-QUERY trigger update a global variable (which you have initialized with 0 in the PRE-QUERY trigger).
e.g.
pre_query:
:global.maxvalue := 0;
post_query (note this gets executed for each row):
if :block.number > :global.maxvalue then
  :global.maxvalue := :block.number;
end if;
if :system.last_record = 'TRUE' then
  -- we are on the last record of the query, so do something with :global.maxvalue here
  null;
end if;
After that you can use the global variable.
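For example, a minimal sketch of reading it back in whatever trigger needs the maximum (the variable name v_max is taken from the question; remember that Forms globals are character data):
declare
  v_max number;
begin
  -- :global.maxvalue was filled by the POST-QUERY trigger above
  v_max := to_number(nvl(:global.maxvalue, '0'));
  -- use v_max here, e.g. copy it into a control item
end;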

Related

PL/SQL bulk collect fetch all not completing

I made this procedure to bulk delete data (35 million records). Can you see why this PL/SQL procedure runs without exiting, and why rows are not getting deleted?
create or replace procedure clear_logs
as
  CURSOR c_logstodel IS SELECT * FROM test where id=23;
  TYPE typ_log is table of test%ROWTYPE;
  v_log_del typ_log;
BEGIN
  OPEN c_logstodel;
  LOOP
    FETCH c_logstodel BULK COLLECT INTO v_log_del LIMIT 5000;
    EXIT WHEN c_logstodel%NOTFOUND;
    FORALL i IN v_log_del.FIRST..v_log_del.LAST
      DELETE FROM test WHERE id = v_log_del(i).id;
    COMMIT;
  END LOOP;
  CLOSE c_logstodel;
END clear_logs;
Using rowid instead of the column name, exit when v_log_del.count = 0; instead of EXIT WHEN c_logstodel%NOTFOUND;, and raising the chunk limit to 50,000 allowed the script to clear 35 million rows in 15 minutes:
create or replace procedure clear_logs
as
  CURSOR c_logstodel IS SELECT rowid FROM test where id=23;
  TYPE typ_log is table of rowid index by binary_integer;
  v_log_del typ_log;
BEGIN
  OPEN c_logstodel;
  LOOP
    FETCH c_logstodel BULK COLLECT INTO v_log_del LIMIT 50000;
    exit when v_log_del.count = 0;
    FORALL i IN v_log_del.FIRST..v_log_del.LAST
      DELETE FROM test WHERE rowid = v_log_del(i);
    COMMIT;
  END LOOP;
  COMMIT;
  CLOSE c_logstodel;
END clear_logs;
First off, when using BULK COLLECT ... LIMIT x, %NOTFOUND takes on a slightly unexpected meaning. In this case %NOTFOUND actually means Oracle could not retrieve x rows. (Technically that is what it always means: with a plain fetch you ask for the next 1 row, and it says it could not fill that 1-row buffer.) Just move the EXIT WHEN %NOTFOUND to after the FORALL. But there is actually no reason to retrieve the data and then delete the retrieved rows. While one statement would be considerably faster, deleting 35M rows in one go would require significant rollback space. There is an intermediate solution.
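A minimal sketch of that reordering, reusing the cursor, collection and table from the question (the empty-batch guard is my addition):
OPEN c_logstodel;
LOOP
  FETCH c_logstodel BULK COLLECT INTO v_log_del LIMIT 5000;
  IF v_log_del.COUNT > 0 THEN
    FORALL i IN v_log_del.FIRST..v_log_del.LAST
      DELETE FROM test WHERE id = v_log_del(i).id;
    COMMIT;
  END IF;
  -- safe here: the final, short batch has already been deleted
  EXIT WHEN c_logstodel%NOTFOUND;
END LOOP;
CLOSE c_logstodel;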
Although not commonly used, DELETE statements generate ROWNUM just as SELECTs do. This value can be used to limit the number of rows processed. So to break the work into a given commit size, just limit ROWNUM on the delete:
create or replace procedure clear_logs
as
  k_max_rows_per_iteration constant integer := 50000;
begin
  loop
    delete
      from test
     where id = 23
       and rownum <= k_max_rows_per_iteration;
    exit when sql%rowcount < k_max_rows_per_iteration;
    commit;
  end loop;
  commit;
end;
As @Stilgar points out, deletes are expensive (meaning slow), so their solution may be better. But this approach has the advantage that it does not essentially take the table out of service for the entire operation. NOTE: I tend to use a much larger commit interval, generally around 300,000 - 400,000 rows. I suggest you talk with your DBA to see what they think this limit should be. Remember, it is their job to properly size rollback space for typical operations. If this is a normal operation for you, they need to size it accordingly. If you can get enough rollback space for a single 35M-row delete, then that is the fastest you are going to get.

Performance analysis (select in loop vs function call in loop)

I would like to ask a performance-related question: which approach is best?
1. Putting the SQL statement in the loop directly:
declare
test varchar2(50);
FOR Lcntr IN 1..20
LOOP
update emp set empno='50' where empname=test;
END LOOP;
2. Making a function out of the above update and calling that function in the loop:
declare
test varchar2(50);
FOR Lcntr IN 1..20
LOOP
temp:=update('argument');
END LOOP;
If your function just executes the same SQL update, it doesn't matter whether you call the update directly or from a stored function.
In general, the best way is to use a single SQL statement (UPDATE or MERGE) to update the whole dataset you need.
But your updates look strange:
In the first PL/SQL block you declare the variable test, so test is NULL. You then try to update the table by comparing with NULL, so no rows will be affected.
In the second PL/SQL block you declare the variable test too, but then use the variable temp; that will raise an error at compilation.
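To illustrate the single-statement approach, here is a rough sketch using the emp columns from the question (the bind variable name is just a placeholder):
-- one statement updates every matching row in a single pass,
-- instead of issuing 20 separate updates from a PL/SQL loop
update emp
   set empno = 50
 where empname = :target_name;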

SQL Oracle - Procedure Syntax (School assignment)

I'm currently learning SQL and I'm having trouble with a procedure of mine. The procedure should calculate the average of a column called 'INDICE_METABO_PAT'. The information I need is spread across 3-4 different tables. Once I have the average calculated, I update a table to set this average on the corresponding entries. Here is the procedure. Note that everything else in my .sql file works: inserts, updates, selects, views, etc.
create or replace Procedure SP_UPDATE_INDICE_METABO_DV (P_NO_ETUDE in number)
is
V_SOMME number := 0;
V_NBPATIENT number := 0;
V_NO_ETUDE number := P_NO_ETUDE;
cursor curseur is
select PATIENT.INDICE_EFFICACITE_METABO_PAT
from ETUDE, DROGUE_VARIANT, ETUDE_PATIENT, PATIENT
where ETUDE.NO_DROGUE = DROGUE_VARIANT.NO_DROGUE
and ETUDE.NO_VAR_GEN = DROGUE_VARIANT.NO_VAR_GEN
and V_NO_ETUDE = ETUDE_PATIENT.NO_ETUDE
and ETUDE_PATIENT.NO_PATIENT = PATIENT.NO_PATIENT;
begin
open curseur;
fetch curseur into V_SOMME;
V_NBPATIENT := V_NBPATIENT + 1;
exit when curseur%NOTFOUND;
update DROGUE_VARIANT
set INDICE_EFFICACITE_METABO_DV = V_SOMME / V_NBPATIENT
where exists(select * from ETUDE, DROGUE_VARIANT, ETUDE_PATIENT, PATIENT
where ETUDE.NO_DROGUE = DROGUE_VARIANT.NO_DROGUE
and ETUDE.NO_VAR_GEN = DROGUE_VARIANT.NO_VAR_GEN
and V_NO_ETUDE = ETUDE_PATIENT.NO_ETUDE
and ETUDE_PATIENT.NO_PATIENT = PATIENT.NO_PATIENT);
end SP_UPDATE_INDICE_METABO_DV;
/
I'm getting an error: "Procedure compiled, error check compiler log".
But I can't open the compiler log, and when my friend opens it, it points to weird places, like my CREATE TABLE statements and such.
This is school stuff by the way, so it would be nice if you could give some insight instead of a direct solution.
Thanks a lot in advance for your kind help!
To see the error you can do show errors after your procedure creation statement, or you can query the user_errors or all_errors views.
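For example, using the procedure name from the question (show errors is a SQL*Plus / SQL Developer command):
show errors procedure SP_UPDATE_INDICE_METABO_DV

select line, position, text
  from user_errors
 where name = 'SP_UPDATE_INDICE_METABO_DV'
 order by sequence;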
That will show something like:
LINE/COL ERROR
-------- ------------------------------------------------------------------------
20/4 PLS-00376: illegal EXIT/CONTINUE statement; it must appear inside a loop
20/4 PL/SQL: Statement ignored
You mentioned that when you checked the compiler log (which shows the same information), "it points to weird places". Presumably you're looking at line 20 in your script. But this message is referring to line 20 of the PL/SQL code block, which is the exit when curseur%NOTFOUND;, and that makes sense for this error message.
And as the message also says, and as @ammoQ said in a comment, that statement should be inside a loop. If you're trying to manually calculate the average in a procedure as an exercise, instead of using the built-in aggregate functions, then you need to loop over all of the rows from your cursor:
open curseur;
loop
  fetch curseur into V_SOMME;
  exit when curseur%NOTFOUND;
  V_NBPATIENT := V_NBPATIENT + 1;
end loop;
close curseur;
But as you'll quickly realise, you'll end up with the v_somme variable holding the last value retrieved, not the sum of all the values. You need a separate variable to keep track of the sum - fetch each value into a variable and add it to your running total, all within the loop. But as requested, I'm not giving a complete solution.
As you're starting out, you should really use ANSI join syntax, not the old from/where syntax you have now. It's a shame that it is still being taught. Your cursor query should be something like:
select PATIENT.INDICE_EFFICACITE_METABO_PAT
from ETUDE_PATIENT
join ETUDE
-- missing on clause !
join DROGUE_VARIANT
on DROGUE_VARIANT.NO_DROGUE = ETUDE.NO_DROGUE
and DROGUE_VARIANT.NO_VAR_GEN = ETUDE.NO_VAR_GEN
join PATIENT
on PATIENT.NO_PATIENT = ETUDE_PATIENT.NO_PATIENT
where ETUDE_PATIENT.NO_ETUDE = P_NO_ETUDE;
... which shows you that you are missing a join condition between ETUDE_PATIENT and ETUDE - it's unlikely you want a cartesian product, and it's much easier to spot that missing join using this syntax than with what you had.
You need to look at your update statement carefully too, particularly the exists clause. That will basically always return true if the cursor found anything, so it will update every row in DROGUE_VARIANT with your calculated average, which presumably isn't what you want.
There is no correlation between the rows in the table you're updating and the subquery in that clause - the DROGUE_VARIANT in the subquery is independent of the DROGUE_VARIANT you're updating. By which I mean, it's the same table, obviously; but the update and the subquery are looking at the table separately and so are looking at different rows. It also has the same missing join condition as the cursor query.
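As a generic illustration of that correlation point (deliberately not using the assignment's tables - t1, t2 and their columns here are made up):
-- uncorrelated: the EXISTS result is the same for every row of t1,
-- so either every row is updated or none are
update t1
   set val = 0
 where exists (select * from t2 where t2.flag = 'Y');

-- correlated: the subquery refers back to the t1 row being updated,
-- so only matching rows are updated
update t1
   set val = 0
 where exists (select * from t2 where t2.t1_id = t1.id and t2.flag = 'Y');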

HSQL Iterated FOR Statement not working

Using HSQL 2.2.5, I need to (shudder) process one row at a time in a stored procedure, so I thought the iterated FOR statement might do the trick for me. Unfortunately I don't seem to be able to make it work. It's supposed to look something like:
FOR SELECT somestuff FROM sometable DO
some random SQL statements
END FOR;
That leaves off a bit of the syntax, but it's close enough for now.
The problem seems to be that the statements inside the loop never execute. I've verified that my SELECT statement does indeed return something.
So let's get concrete. When I execute this stored procedure:
CREATE PROCEDURE b()
MODIFIES SQL DATA
BEGIN ATOMIC
DECLARE count_var INTEGER;
SET count_var = 0;
WHILE count_var < 10 DO
INSERT INTO TTP2 VALUES(count_var);
SET count_var = count_var + 1;
END WHILE;
END;
I get 10 rows inserted into table TTP2, with values 0 through 9. (TTP2 has just one column defined, of type INTEGER.)
But when I substitute a FOR statement for the WHILE like so:
CREATE PROCEDURE c()
MODIFIES SQL DATA
BEGIN ATOMIC
DECLARE count_var INTEGER;
SET count_var = 0;
FOR SELECT id FROM ttp_by_session FETCH 10 ROWS ONLY DO
INSERT INTO TTP2 VALUES(count_var);
SET count_var = count_var + 1;
END FOR;
END;
I get nothing inserted into TTP2. (I have verified that the SELECT statement returns 10 rows, one column of integers.)
When I leave the FETCH clause off I still get no results. ttp_by_session is a view, but the same thing happens with a bare table.
What am I doing wrong?
Thanks for the help.
This works fine with the latest version of HSQLDB. Try with the 2.3.0 release candidate snapshot from the HSQLDB web site.
When the FOR statement was initially added about two years ago, it had limited functionality. The functionality was extended in later versions.

What is wrong with my Oracle Trigger?

CREATE OR REPLACE TRIGGER Net_winnings_trigger
AFTER UPDATE OF total_winnings ON Players
FOR EACH ROW
DECLARE
OldTuple OLD
NewTuple NEW
BEGIN
IF(OldTuple.total_winnings > NewTuple.total_winnings)
THEN
UPDATE Players
SET total_winnings = OldTuple.total_winnings
WHERE player_no = NewTuple.player_no;
END IF;
END;
/
I am trying to write a trigger that will only allow the total_winnings field to be updated to a value greater than the current value.
If an update to a smaller value occurs, the trigger should just set the value back to the old value (as if the update never occurred).
Since you want to override the value that is specified in the UPDATE statement, you'd need to use a BEFORE UPDATE trigger. Something like this:
CREATE OR REPLACE TRIGGER Net_winnings_trigger
BEFORE UPDATE OF total_winnings ON Players
FOR EACH ROW
BEGIN
  IF :old.total_winnings > :new.total_winnings THEN
    :new.total_winnings := :old.total_winnings;
  END IF;
END;
But overriding the value specified in an UPDATE statement is a dangerous game. If this is something that shouldn't happen, you really ought to raise an error so that the application can be made aware that there was a problem. Otherwise, you're creating all sorts of potential for the application to make incorrect decisions down the line.
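If you go the error route instead, a minimal sketch might look like this (the error number and message text are arbitrary placeholders from the user-defined -20000..-20999 range):
CREATE OR REPLACE TRIGGER Net_winnings_trigger
BEFORE UPDATE OF total_winnings ON Players
FOR EACH ROW
BEGIN
  IF :new.total_winnings < :old.total_winnings THEN
    raise_application_error(-20001,
      'total_winnings may not be decreased for player ' || :old.player_no);
  END IF;
END;
/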
Something like this should work, although it will hide the fact that an update is not taking place when you try to update to a smaller value. To the user, everything will look like it worked, but the data will remain unchanged.
CREATE OR REPLACE TRIGGER Net_winnings_trigger
BEFORE UPDATE OF total_winnings
ON Players
FOR EACH ROW
BEGIN
  :new.total_winnings := greatest(:old.total_winnings, :new.total_winnings);
END;