Rollback under condition in MariaDB - sql

I have a transaction that reduces a money variable inside a loop. If the variable drops below 0, the amount should go back to its value from before the transaction. How can I appropriately use ROLLBACK in MariaDB in this case?
---edit
I have something like the code below, and it doesn't work. Look at the lines around IF(budget<0): if the money drops below 0 after some, but not all, of the iterations have run and inserted into the temp table, the table still shows those rows.
BEGIN
    DECLARE temppesel text;
    DECLARE tempsalary int;
    DECLARE budget int DEFAULT cash;
    DECLARE done bool DEFAULT false;
    DECLARE occ CURSOR FOR (SELECT pesel, pensja FROM pracownicy WHERE zawod=occupation);
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = true;
    START TRANSACTION;
    DROP TABLE IF EXISTS temp;
    CREATE TABLE temp ( Result text );
    OPEN occ;
    occ : LOOP
        FETCH occ INTO temppesel, tempsalary;
        SET budget = budget - tempsalary;
        IF(done) THEN
            LEAVE occ;
        END IF;
        IF(budget<0) THEN
            ROLLBACK;
            LEAVE occ;
        END IF;
        INSERT INTO temp VALUES (concat('********',substr(temppesel,9,3), ', wyplacono'));
    END LOOP;
    CLOSE occ;
    SELECT * FROM temp;
    DROP TABLE temp;
    COMMIT;
END

I believe that the DROP TABLE and CREATE TABLE statements are causing the transaction to be committed. Here is a list of commands that cause an implicit COMMIT.
As the aforementioned link describes, you can either move the START TRANSACTION statement after the DROP and CREATE commands or use the CREATE TEMPORARY TABLE syntax to create a temporary table:
CREATE TEMPORARY TABLE temp ( Result text );
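Alternatively, a minimal sketch of the first option, reusing the names from the question (only the opening of the procedure changes):

DROP TABLE IF EXISTS temp;           -- DDL: causes an implicit COMMIT
CREATE TABLE temp ( Result text );   -- DDL: causes an implicit COMMIT
START TRANSACTION;                   -- nothing inside the transaction forces a COMMIT now
-- OPEN occ; ... loop, ROLLBACK and LEAVE when budget < 0 ... COMMIT;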

BEGIN;
do some SQL
Loop:
    do some SQL
    if something is wrong, ROLLBACK and exit the loop and transaction
    do some SQL
    if something, go back to Loop
do some SQL
COMMIT;
That is, let ROLLBACK undo everything since the BEGIN.
More
Now that the SP is visible...
What engine is temp? If it is MyISAM, then it is not rolled back. Check with SHOW VARIABLES LIKE 'default_storage_engine';.
Please don't use occ for two different things (the cursor and the loop label); it confuses the reader.
Do you want the output to be part of the rows of pracownicy when the budget is blown? Or do you want no rows?
If you have multiple connections doing the same thing, there is a serious problem -- temp is visible to all connections, and they could step on each other. Change to CREATE TEMPORARY TABLE temp ...
However, with the pre-test (below), you can completely avoid the use of temp. First test for its need, then (if needed) simply do a single SELECT for all the rows.
If you want no rows at all, then a simple test like this would pre-check whether the budget will overflow, thereby obviating the need for testing in the loop:
..
IF ( SELECT SUM(pensja)
       FROM pracownicy
      WHERE zawod = occupation ) > budget THEN
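A fuller sketch of that approach, assuming the goal is all-or-nothing output (the failure message is hypothetical):

IF ( SELECT SUM(pensja)
       FROM pracownicy
      WHERE zawod = occupation ) > budget THEN
    SELECT 'budget exceeded' AS Result;   -- hypothetical failure output
ELSE
    SELECT CONCAT('********', SUBSTR(pesel, 9, 3), ', wyplacono') AS Result
      FROM pracownicy
     WHERE zawod = occupation;
END IF;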

Related

PL/SQL table level trigger running after second update

I have a problem with a trigger that calls a procedure to update one table after a row in another table is updated. The problem is that table STAVKARACUNA has to be updated twice before table RACUN is updated, and even then it uses the old values. Here is the code of both.
Here is the code of the procedure:
create or replace PROCEDURE ukupnaCenaRacun (SIF IN VARCHAR2) AS
    SUMA   float := 0;
    suma2  float := 0;
    Mesec  NUMBER;
    popust float := 0.1;
BEGIN
    SELECT SUM(iznos) INTO SUMA
      FROM STAVKARACUNA
     WHERE SIF = SIFRARAC;
    SELECT SUM(vredrobe*pdv) INTO SUMA2
      FROM STAVKARACUNA
     WHERE SIF = SIFRARAC;
    SELECT EXTRACT (MONTH FROM DATUM) INTO Mesec FROM RACUN WHERE SIF = SIFRARAC;
    IF (Mesec = 1) THEN
        UPDATE RACUN
           SET PDVIZNOS = SUMA2, ukupnozanaplatu = suma*(1-popust)
         WHERE SIFRARAC = SIF;
    END IF;
    IF (MESEC != 1) THEN
        UPDATE RACUN
           SET PDVIZNOS = SUMA2, ukupnozanaplatu = suma
         WHERE SIFRARAC = SIF;
    END IF;
END;
Here is the trigger:
create or replace TRIGGER "UKUPNACENA_RACUN_UKUPNO"
AFTER INSERT OR UPDATE OR DELETE OF CENA, KOL, PDV ON STAVKARACUNA
DECLARE
    SIF VARCHAR2(20) := PACKAGE_STAVKARACUNA.SIFRARAC;
BEGIN
    PACKAGE_STAVKARACUNA.ISKLJUCI_TRIGER('FORBID_UPDATING');
    ukupnaCenaRacun(SIF);
    PACKAGE_STAVKARACUNA.UKLJUCI_TRIGER('FORBID_UPDATING');
END;
The problem is that when table STAVKARACUNA is updated, nothing happens to table RACUN; the next time STAVKARACUNA is updated, the data in RACUN is updated, but with the old values.
Thank you very much.
Are you aware that a trigger for an event on a table should not directly access that table? The code runs inside a DML event: the table is right in the middle of being altered in some way, so any query back to the same table could well attempt to read data that is in the process of being changed. It could try to read data that does not quite exist yet before a commit is performed, or that is one value now but will be a different value once a commit is performed. The table is mutating.
This goes for any code outside the trigger that the trigger calls. So the ukupnaCenaRacun procedure is executed in the context of the trigger, yet it goes out and queries table STAVKARACUNA in two places (which could be combined into a single query, but that is neither here nor there).
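As an aside, a sketch of that single-query combination, using the procedure's own names:

SELECT SUM(iznos), SUM(vredrobe*pdv)
  INTO SUMA, SUMA2
  FROM STAVKARACUNA
 WHERE SIF = SIFRARAC;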
Since you're not getting a mutating-table error, I can only assume that the update is not taking place until after the triggering event is committed; but then you won't see the results until that is committed sometime later, such as when the second update is committed.
That explanation actually sounds hollow to me as I have always thought that all activity performed by a trigger is committed or rolled back as part of one transaction. But that is the action you are describing.
It appears that SIF is a package variable defined in the package spec. Since everything in the procedure keys off that value and the trigger doesn't change the value, can't SUMA and SUMA2 also be defined as variables, values to be updated whenever SIF changes?

History table referencing other values in the table / accessing package table variables

I have a system for tracking usage of computers in a lab. Slightly simplified, it works out to:
Machines are associated with a lab.
Machines have a binary logged_in state, which gets updated automatically when users log in and out.
There is a view keyed on the lab which gathers the total number of seats associated with the lab, and the current number in use for that lab.
What I would like to do is add a history or audit table, which would track changes to lab population over time. I had a trigger on the machine table to store the time and the total lab population in my lab history table every time the machine table changed. The problem is that, in order to get the new total for the lab, I have to examine the other values in the machine table. This results in a table mutating error.
Some things I found here and elsewhere suggested that I should create a package to track the labs being changed: use a before trigger to clear the list, a row trigger to store each labid being changed, and an after trigger to update the history table with new values for only those labs whose ids are in the list. I've tried that, but I can't figure out how to access the values I've stored in the package table (or even whether it is storing them properly in the first place). As will no doubt be obvious, I'm unfamiliar with PL/SQL packages and table variables; the whole syntax of referring to table entries like arrays struck me as vaguely heretical, though incredibly useful if it works. So most of the below is just copied and adapted from other solutions I've found, but they didn't stretch as far as how to actually use my table of changed lablocids, assuming it's being created properly in the first place. The following simply tells me that pg_machine_in_use_pkg.changedlablocids does not exist when I try to compile the final trigger.
create or replace package labstats_adm.pg_machine_in_use_pkg
as
    type arr is table of number index by binary_integer;
    changedlablocids arr;
    empty arr;
end;
/
create or replace trigger labstats_adm.pg_machine_in_use_init
    before insert or update
    on labstats_adm.pg_machine
begin
    -- begin each update with a blank list of changed lablocids
    pg_machine_in_use_pkg.changedlablocids := pg_machine_in_use_pkg.empty;
end;
/
--
create or replace trigger labstats_adm.pg_machine_in_use_update
    after insert or update of in_use, lablocid
    on labstats_adm.pg_machine
    for each row
begin
    -- record lablocids - old and new - of changed machines
    if :new.lablocid is not null then
        pg_machine_in_use_pkg.changedlablocids( pg_machine_in_use_pkg.changedlablocids.count+1 ) := :new.lablocid;
    end if;
    if :old.lablocid is not null and :old.lablocid != :new.lablocid then
        pg_machine_in_use_pkg.changedlablocids( pg_machine_in_use_pkg.changedlablocids.count+1 ) := :old.lablocid;
    end if;
end;
/
create or replace trigger labstats_adm.pg_machine_lab_history
    after insert or update of in_use, lablocid
    on labstats_adm.pg_machine
begin
    -- for each lab location we just logged a change to, update that lab's history
    insert into labstats_adm.pg_lab_history (labid, time, total_seats, used_seats)
        select labid, systimestamp, total_seats, used_seats
          from labstats_adm.lab_usage
         where labid in (
             select distinct labid from pg_machine_in_use_pkg.changedlablocids
         );
end;
/
On the other hand, if there is a better overall approach than the package, I'm all ears.
After some reflection I've got to go with @tbone on this one. In my experience a history table should be a copy of the data in the "real" table, with fields added to show when a particular 'version' of the data shown by a row in the history table was in effect. So if the "real" table is something like
CREATE TABLE REAL_TABLE
    (ID_REAL_TABLE NUMBER PRIMARY KEY,
     COL2 NUMBER,
     COL3 VARCHAR2(50));
then I'd create the history table as
CREATE TABLE HIST_TABLE
    (ID_HIST_TABLE NUMBER PRIMARY KEY,
     ID_REAL_TABLE NUMBER,
     COL2 NUMBER,
     COL3 VARCHAR2(50),
     EFFECTIVE_START_DATE TIMESTAMP(9) NOT NULL,
     EFFECTIVE_END_DATE TIMESTAMP(9));
and I'd have the following triggers to get everything populated:
CREATE TRIGGER REAL_TABLE_BI
    BEFORE INSERT ON REAL_TABLE
    REFERENCING OLD AS OLD
                NEW AS NEW
    FOR EACH ROW
BEGIN
    IF :NEW.ID_REAL_TABLE IS NULL THEN
        :NEW.ID_REAL_TABLE := REAL_TABLE_SEQUENCE.NEXTVAL;
    END IF;
END REAL_TABLE_BI;

CREATE TRIGGER HIST_TABLE_BI
    BEFORE INSERT ON HIST_TABLE
    FOR EACH ROW
BEGIN
    IF :NEW.ID_HIST_TABLE IS NULL THEN
        :NEW.ID_HIST_TABLE := HIST_TABLE_SEQUENCE.NEXTVAL;
    END IF;
END HIST_TABLE_BI;

CREATE TRIGGER REAL_TABLE_AIUD
    AFTER INSERT OR UPDATE OR DELETE ON REAL_TABLE
    FOR EACH ROW
DECLARE
    tsEffective_start_date TIMESTAMP(9) := SYSTIMESTAMP;
    tsEffective_end_date   TIMESTAMP(9) := tsEffective_start_date - INTERVAL '0.000000001' SECOND;
BEGIN
    IF UPDATING OR DELETING THEN
        UPDATE HIST_TABLE
           SET EFFECTIVE_END_DATE = tsEffective_end_date
         WHERE ID_REAL_TABLE = NVL(:NEW.ID_REAL_TABLE, :OLD.ID_REAL_TABLE)  -- :NEW is NULL when deleting
           AND EFFECTIVE_END_DATE IS NULL;
    END IF;
    IF INSERTING OR UPDATING THEN
        INSERT INTO HIST_TABLE (ID_REAL_TABLE, COL2, COL3, EFFECTIVE_START_DATE)
        VALUES (:NEW.ID_REAL_TABLE, :NEW.COL2, :NEW.COL3, tsEffective_start_date);
    END IF;
END REAL_TABLE_AIUD;
Using this method the "history" table has all historical versions of the data in the "real" table PLUS a complete copy of the "current" data from the "real" table; this is done to simplify queries which need to report on all versions of the data in the table up to and including present values.
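For example, a point-in-time lookup against the history table is then a simple range test (the ID value and the :as_of bind are hypothetical):

SELECT *
  FROM HIST_TABLE
 WHERE ID_REAL_TABLE = 42
   AND EFFECTIVE_START_DATE <= :as_of
   AND (EFFECTIVE_END_DATE IS NULL OR EFFECTIVE_END_DATE >= :as_of);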
The advantage of using triggers to do all this is that the maintenance of the primary keys and the history table becomes automatic and can't be easily circumvented or forgotten.
Share and enjoy.
Sorry to be so slow getting back; it's taken me a bit of fiddling, and I haven't had a lot of time to work on it.
Thanks to Bob Jarvis for pointing me at the compound triggers, which cleaned up the overall structure significantly. After that, I just had to sanitise the way I'm getting values back out of my table variable. On the odd chance that someone else stumbles over this looking for the answer to the same problem, I'll post my final solution here:
create or replace trigger pg_machine_in_use_update
    for insert or update or delete of in_use, lablocid
    on labstats_adm.pg_machine
    compound trigger

    type arr is table of number index by binary_integer;
    changedlabids arr;
    idx binary_integer;

    after each row is
        newlabid labstats_adm.pg_labs.labid%TYPE;
        oldlabid labstats_adm.pg_labs.labid%TYPE;
    begin
        -- store the labids of any changed locations
        -- PL/SQL does not like us testing for the existence of something that isn't there, so just set it twice if necessary
        if ( :new.lablocid is not null ) then
            select labid into newlabid from labstats_adm.pg_lablocation where lablocid = :new.lablocid;
            changedlabids( newlabid ) := 1;
        end if;
        if ( :old.lablocid is not null ) then
            select labid into oldlabid from labstats_adm.pg_lablocation where lablocid = :old.lablocid;
            changedlabids( oldlabid ) := 1;
        end if;
    end after each row;

    after statement is
    begin
        idx := changedlabids.FIRST;
        while idx is not null loop
            insert into labstats_adm.pg_lab_history (labid, time, total_seats, used_seats)
                select labid, systimestamp, total_seats, used_seats
                  from labstats_adm.lab_usage
                 where labid = idx;
            idx := changedlabids.NEXT(idx);
        end loop;
    end after statement;
end pg_machine_in_use_update;

Fast Update database with more than 10 million records

I am fairly new to SQL and was wondering if someone can help me.
I have a database table with around 10 million rows.
I need to make a script that finds the records that have some NULL fields, and then updates it to a certain value.
The problem with doing a simple update statement is that it will blow out the rollback space.
I was reading around that I need to use BULK COLLECT AND FETCH.
My idea was to fetch 10,000 records at a time, update, commit, and continue fetching.
I tried looking for examples on Google but I have not found anything yet.
Any help?
Thanks!!
This is what I have so far:
DECLARE
    CURSOR rec_cur IS
        SELECT DATE_ORIGIN
          FROM MAIN_TBL
         WHERE DATE_ORIGIN IS NULL;
    TYPE date_tab_t IS TABLE OF DATE;
    date_tab date_tab_t;
BEGIN
    OPEN rec_cur;
    LOOP
        FETCH rec_cur BULK COLLECT INTO date_tab LIMIT 1000;
        EXIT WHEN date_tab.COUNT() = 0;
        FORALL i IN 1 .. date_tab.COUNT
            UPDATE MAIN_TBL SET DATE_ORIGIN = '23-JAN-2012'
             WHERE DATE_ORIGIN IS NULL;
    END LOOP;
    CLOSE rec_cur;
END;
I think I see what you're trying to do. There are a number of points I want to make about the differences between the code below and yours.
Your forall loop will not use an index. This is easy to get round by using rowid to update your table.
By committing after each forall you reduce the amount of undo needed, but you make it more difficult to roll back if something goes wrong. That said, your query could logically be re-started in the middle easily and without detriment to your objective.
rowids are small; collect at least 25k at a time, if not 100k.
You cannot index a NULL in Oracle. There are plenty of questions on Stack Overflow about this if you need more information. A function-based index on something like nvl(date_origin,'x'), as a loose example, would increase the speed at which you select data. It also means you never actually have to touch the table itself: you only select from the index.
Your date data-type seems to be a string. I've kept this, but it's not wise.
If you can get someone to increase your undo tablespace size, then a straight-up update will be quicker.
Assuming, as per your comments, that date_origin is a date, then the index should be on something like:
nvl(date_origin,to_date('absolute_minimum_date_in_Oracle_as_a_string','yyyymmdd'))
I don't have access to a DB at the moment, but to find out the absolute_minimum_date_in_Oracle_as_a_string, run the following query:
select to_date('0001','yyyy') from dual;
It should raise a useful error for you.
Working example in PL/SQL Developer.
create table main_tbl as
    select cast( null as date ) as date_origin
      from all_objects
;
create index i_main_tbl
    on main_tbl ( nvl( date_origin
                     , to_date('0001-01-01','yyyy-mm-dd') ) )
;
declare
    cursor c_rec is
        select rowid
          from main_tbl
         where nvl(date_origin, to_date('0001-01-01','yyyy-mm-dd'))
             = to_date('0001-01-01','yyyy-mm-dd')
    ;
    type t__rec is table of rowid index by binary_integer;
    t_rec t__rec;
begin
    open c_rec;
    loop
        fetch c_rec bulk collect into t_rec limit 50000;
        exit when t_rec.count = 0;
        forall i in t_rec.first .. t_rec.last
            update main_tbl
               set date_origin = to_date('23-JAN-2012','DD-MON-YYYY')
             where rowid = t_rec(i)
            ;
        commit;
    end loop;
    close c_rec;
end;
/

Efficient way to update all rows in a table

I have a table with a lot of records (could be more than 500 000 or 1 000 000). I added a new column in this table and I need to fill a value for every row in the column, using the corresponding row value of another column in this table.
I tried to use separate transactions for selecting every next chunk of 100 records and update the value for them, but still this takes hours to update all records in Oracle10 for example.
What is the most efficient way to do this in SQL, without using dialect-specific features, so it works everywhere (Oracle, MSSQL, MySQL, Postgres, etc.)?
ADDITIONAL INFO: There are no calculated fields. There are indexes. I used generated SQL statements that update the table row by row.
The usual way is to use UPDATE:
UPDATE mytable
SET new_column = <expr containing old_column>
You should be able to do this in a single transaction.
As Marcelo suggests:
UPDATE mytable
SET new_column = <expr containing old_column>;
If this takes too long and fails due to "snapshot too old" errors (e.g. if the expression queries another highly-active table), and if the new value for the column is always NOT NULL, you could update the table in batches:
UPDATE mytable
SET new_column = <expr containing old_column>
WHERE new_column IS NULL
AND ROWNUM <= 100000;
Just run this statement, COMMIT, then run it again; rinse, repeat until it reports "0 rows updated". It'll take longer but each update is less likely to fail.
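If you'd rather not re-run the statement by hand, here is a minimal PL/SQL sketch of the same rinse-and-repeat (the SET expression is a placeholder; this assumes Oracle, as does the ROWNUM trick above):

DECLARE
    l_updated PLS_INTEGER;
BEGIN
    LOOP
        UPDATE mytable
           SET new_column = old_column   -- placeholder expression
         WHERE new_column IS NULL
           AND ROWNUM <= 100000;
        l_updated := SQL%ROWCOUNT;
        COMMIT;
        EXIT WHEN l_updated = 0;
    END LOOP;
END;
/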
EDIT:
A better alternative that should be more efficient is to use the DBMS_PARALLEL_EXECUTE API.
Sample code (from Oracle docs):
DECLARE
    l_sql_stmt VARCHAR2(1000);
    l_try NUMBER;
    l_status NUMBER;
BEGIN
    -- Create the TASK
    DBMS_PARALLEL_EXECUTE.CREATE_TASK ('mytask');
    -- Chunk the table by ROWID
    DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID('mytask', 'HR', 'EMPLOYEES', true, 100);
    -- Execute the DML in parallel
    l_sql_stmt := 'update EMPLOYEES e
        SET e.salary = e.salary + 10
        WHERE rowid BETWEEN :start_id AND :end_id';
    DBMS_PARALLEL_EXECUTE.RUN_TASK('mytask', l_sql_stmt, DBMS_SQL.NATIVE,
                                   parallel_level => 10);
    -- If there is an error, RESUME it for at most 2 times.
    l_try := 0;
    l_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS('mytask');
    WHILE(l_try < 2 and l_status != DBMS_PARALLEL_EXECUTE.FINISHED)
    LOOP
        l_try := l_try + 1;
        DBMS_PARALLEL_EXECUTE.RESUME_TASK('mytask');
        l_status := DBMS_PARALLEL_EXECUTE.TASK_STATUS('mytask');
    END LOOP;
    -- Done with processing; drop the task
    DBMS_PARALLEL_EXECUTE.DROP_TASK('mytask');
END;
/
Oracle Docs: https://docs.oracle.com/database/121/ARPLS/d_parallel_ex.htm#ARPLS67333
You could drop any indexes on the table, then do your insert, and then recreate the indexes.
Might not work for you, but it's a technique I've used a couple of times in similar circumstances.
Create updated_{table_name}, then select-insert into that table in batches. Once finished, swap the names; this hinges on Oracle (which I don't know or use) supporting the ability to rename tables in an atomic fashion: updated_{table_name} becomes {table_name}, while {table_name} becomes original_{table_name}.
Last time I had to do this was for a heavily indexed table with several million rows that absolutely, positively could not be locked for the duration needed to make some serious changes to it.
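A sketch of that swap in Oracle syntax (table names are placeholders; note that RENAME is DDL, so each step commits):

CREATE TABLE updated_mytable AS
    SELECT * FROM mytable;            -- or a batched INSERT ... SELECT
-- ... transform/backfill updated_mytable in batches here ...
RENAME mytable TO original_mytable;
RENAME updated_mytable TO mytable;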
What is the database version? Check out virtual columns in 11g:
Adding Columns with a Default Value
http://www.oracle.com/technology/pub/articles/oracle-database-11g-top-features/11g-schemamanagement.html
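For instance, on 11g, adding a NOT NULL column with a DEFAULT is a metadata-only change rather than a row-by-row update (a minimal sketch; the column name and default are placeholders):

ALTER TABLE mytable ADD (new_column VARCHAR2(1) DEFAULT 'N' NOT NULL);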
update Hotels set Discount=30 where Hotelid >= 1 and Hotelid <= 5504
For Postgresql I do something like this (if we are sure no more updates/inserts take place):
create table new_table as table orig_table with data;
update new_table set column = <expr>;
start transaction;
drop table orig_table;
alter table new_table rename to orig_table;
commit;
Update:
One advantage is that if your table is very large, you do not lock it while the new values are computed; the update on the copy could take minutes, but the swap itself is quick.
Do this only if you are sure that no inserts and/or updates take place in the process.

life span of temp table

I have the following procedure:
CREATE PROCEDURE foo ()
    SELECT * FROM fooBar INTO TEMP tempTable;
    -- do something with tempTable here
    DROP TABLE tempTable;
END PROCEDURE;
What happens if there is an exception before the DROP TABLE is called? Will tempTable still be around after foo exits?
If so, foo could fail the next time it is called, because tempTable would already exist. How should that be handled?
EDIT: I am using Informix 11.5.
According to the documentation, temporary tables are dropped when the session ends.
As others stated, temporary tables last until you drop them explicitly or the session ends.
If the stored procedure fails because the table already exists, SPL generates an exception.
You can deal with exceptions by adding an ON EXCEPTION clause, but you are entering one of the more baroque parts of SPL, the Stored Procedure Language.
Here is a mildly modified version of your stored procedure - one that generates a divide by zero exception (SQL -1202):
CREATE PROCEDURE foo ()
    define i integer;
    SELECT * FROM 'informix'.systables INTO TEMP tempTable;
    -- do something with tempTable here
    let i = 1 / 0;
    DROP TABLE tempTable;
END PROCEDURE;
execute procedure foo();
SQL -1202: An attempt was made to divide by zero.
execute procedure foo();
SQL -958: Temp table temptable already exists in session.
This shows that the first time through the code executed the SELECT, creating the table, and then ran foul of the divide by zero. The second time, though, the SELECT failed because the temp table already existed, hence the different error message.
drop procedure foo;
CREATE PROCEDURE foo()
    define i integer;
    BEGIN
        ON EXCEPTION
            DROP TABLE tempTable;
            SELECT * FROM 'informix'.systables INTO TEMP tempTable;
        END EXCEPTION WITH RESUME;
        SELECT * FROM 'informix'.systables INTO TEMP tempTable;
    END;
    -- do something with tempTable here
    let i = 1 / 0;
    DROP TABLE tempTable;
END PROCEDURE;
The BEGIN/END block limits the exception handling to the trapped statement. Without the BEGIN/END, the exception handling covers the entire procedure, reacting to the divide-by-zero error too (and therefore letting the DROP TABLE work, so the procedure appears to run successfully).
Note that temptable still exists at this point:
+ execute procedure foo();
SQL -1202: An attempt was made to divide by zero.
+ execute procedure foo();
SQL -1202: An attempt was made to divide by zero.
This shows that the procedure no longer fails because the temp table is present.
You can limit the ON EXCEPTION block to selected error codes (-958 seems plausible for this one) by:
ON EXCEPTION IN (-958) ...
See the IBM Informix Guide to SQL: Syntax manual, chapter 3 'SPL Statements'.
For Informix 12.10 SPL Statements
For Informix 11.70 SPL Statements
For Informix 11.50 SPL Statements
Note that Informix 11.70 added the 'IF EXISTS' and 'IF NOT EXISTS' clauses to CREATE and DROP statements. Thus, you might use the modified DROP TABLE statement:
DROP TABLE IF EXISTS tempTable;
Thus, with Informix 11.70 or later, the easiest way to write the procedure is:
DROP PROCEDURE IF EXISTS foo;
CREATE PROCEDURE foo()
    define i integer;
    DROP TABLE IF EXISTS tempTable;
    SELECT * FROM 'informix'.systables INTO TEMP tempTable;
    -- do something with tempTable here
    let i = 1 / 0;
    DROP TABLE tempTable; -- Still a good idea
END PROCEDURE;
You could also use this, but then you get the previous definition of the procedure, whatever it was, and it might not be what you expected.
CREATE PROCEDURE IF NOT EXISTS foo()
    define i integer;
    DROP TABLE IF EXISTS tempTable;
    SELECT * FROM 'informix'.systables INTO TEMP tempTable;
    -- do something with tempTable here
    let i = 1 / 0;
    DROP TABLE tempTable; -- Still a good idea
END PROCEDURE;
I finally used a variation of Jonathan's and RET's solution:
CREATE PROCEDURE foo ()
    ON EXCEPTION IN (-206)
    END EXCEPTION WITH RESUME;
    DROP TABLE tempTable;
    SELECT * FROM fooBar INTO TEMP tempTable;
    -- do something with tempTable here
    DROP TABLE tempTable;
END PROCEDURE;
Yes, the temp table will still exist. Temp tables by definition have a lifetime of the session that created them, unless explicitly dropped.
The temp table can only be seen by the session that created it, and there is no impediment to the same procedure being run in parallel by multiple users. Adam's answer to test for the existence of the temp table will return a non-zero result if any user is running the procedure. You need to test that the session that owns the temp table is the current session as well. Given that this question is within the scope of a stored procedure, it might be simpler to add an explicit DROP, wrapped in some exception handling.
SELECT count(*)
  INTO w_count
  FROM sysmaster:systabnames s, sysmaster:systabinfo i
 WHERE i.ti_partnum = s.partnum
   AND sysmaster:BITVAL(i.ti_flags, '0x0020') = 1
   AND s.tabname = 'tempTable';
If w_count is 1, drop the table before the SELECT ... INTO. Guard the final DROP TABLE the same way.
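A sketch of that guard inside the procedure (w_count as computed above; note the check-and-drop is not atomic, so the ON EXCEPTION approach shown earlier remains the safer option):

IF w_count = 1 THEN
    DROP TABLE tempTable;
END IF;
SELECT * FROM fooBar INTO TEMP tempTable;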