Temporarily Disabling Constraints in Oracle Database - sql

I have an Oracle SQL delete statement that is taking a very long time to run. It's very simple, something like:
delete from table
where fy = 2016
I also manually tried deleting one of the ~2500 rows I need to get rid of in TOAD and it's taking about 6 minutes to delete a single row.
I think the reason for this is that the table has a lot of constraints on it, so I'm wondering: can I change my delete statement in some way so that it bypasses the constraints, or disable them temporarily, to make it run much faster? If so, how can I re-enable them afterwards?

6 minutes to delete a single row does not sound like an issue with constraints on the table. There are three reasonably likely mechanisms that would cause that level of performance problem, from most to least likely in my experience:
The table is the parent of one or more large child tables. The child tables have an enforced foreign key reference to the parent table, but the foreign key column is not indexed in the child table. If this is the case, the issue is that Oracle has to do a full scan of the child table every time you delete a row from the parent in order to verify that there are no orphans. You could go to each child table and drop the foreign key, but it would almost certainly make more sense to index the foreign key column in the child table(s) (see the sketch after this list). It is very rare that you want to have unindexed foreign keys.
There is a trigger on the table and the trigger is doing something that takes 6 minutes. You'd have to look at the trigger code to see exactly what was taking so long there.
You are doing a single-row delete as a transaction and you have an on commit materialized view that needs to be updated as a result of the change. It's hard to come up with a way to build something like this that would take 6 minutes to run but there are certainly ways to make this slow.
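For the unindexed-foreign-key case, something along these lines can help locate and fix the problem (a sketch; the parent name MY_TABLE and the child names CHILD_T / PARENT_FY are hypothetical placeholders):
-- Find the foreign key constraints (and their columns) that reference the parent table,
-- then check whether each referencing column is indexed in its child table.
select c.table_name, cc.column_name, c.constraint_name
  from user_constraints c
  join user_cons_columns cc on cc.constraint_name = c.constraint_name
  join user_constraints p on p.constraint_name = c.r_constraint_name
 where c.constraint_type = 'R'
   and p.table_name = 'MY_TABLE';

-- Index any unindexed foreign key column in the child table, for example:
create index child_t_parent_fy_ix on child_t (parent_fy);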

As you stated, the reason is the foreign keys referencing that table. You can disable those keys before the delete and enable them again afterwards. You can use the following script to enable/disable the foreign keys that reference a specific table.
declare
  l_constraint_sql varchar2(1024);
  l_op_type        varchar2(10) := 'ENABLE';  -- or 'DISABLE'
begin
  -- Loop over all foreign keys (constraint_type = 'R') that reference
  -- constraints on the target table.
  for r in (select a.*
              from user_constraints a, user_constraints b
             where a.constraint_type = 'R'
               and a.r_constraint_name = b.constraint_name
               and a.r_owner = b.owner
               and a.owner = 'FAYDIN'       -- schema
               and b.table_name = 'GEN_KOD' -- table whose referencing constraints you want to enable/disable
           ) loop
    l_constraint_sql := 'alter table ' || r.table_name || ' ' || l_op_type ||
                        ' constraint ' || r.constraint_name;
    -- uncomment the following line to execute
    -- execute immediate l_constraint_sql;
    dbms_output.put_line(l_constraint_sql);
  end loop;
end;
/
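For the specific delete in the question, the workflow would look roughly like this (a sketch; your_table, child_table and child_table_fk are hypothetical names):
-- Run the script above with l_op_type = 'DISABLE' (and execute immediate uncommented),
-- or disable an individual referencing constraint by hand:
alter table child_table disable constraint child_table_fk;

delete from your_table where fy = 2016;
commit;

-- ENABLE defaults to VALIDATE, so Oracle rechecks existing rows when you re-enable.
alter table child_table enable constraint child_table_fk;
Note that re-enabling will fail if the delete has left orphaned child rows behind.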

Related

How to list all tables that have data in them?

I have around 2000 tables, most of which are not in use and do not have any data in them.
I know how to list all tables as below
SELECT owner, table_name FROM ALL_TABLES
But I do not know how to list the ones that have at least one row of data in them.
Is there any way to do that?
There are a few ways you could do this:
Brute-force and count the rows in every table
Check the table stats
Check if there is any storage allocated
Brute force
This loops through the tables, counts the rows, and spits out those that are empty:
declare
  c integer;
begin
  for t in (
    select table_name
      from user_tables
     where external = 'NO'
       and temporary = 'N'
  ) loop
    execute immediate
      'select count(*) from ' || t.table_name
      into c;
    if c = 0 then
      dbms_output.put_line ( t.table_name );
    end if;
  end loop;
end;
/
This is the only way to be sure there are no rows in the table now. The main drawback is that it could take a looooong time if you have many tables with millions of rows or more.
I've excluded:
Temporary tables. You can only see data inserted in your session; if they're in use in another session you can't see that data
External tables. These point to files on the database server's file system. The files could be temporarily missing/blank/etc.
There may be other table types with issues like these - make sure you double check any that are reported as empty.
Check the stats
If all the table stats are up-to-date, you can check the num_rows:
select table_name
from user_tables ut
where external = 'NO'
and temporary = 'N'
and num_rows = 0;
The caveat with this is that these figures may be out-of-date. You can force a regather now by running:
exec dbms_stats.gather_schema_stats ( user );
Though this is likely to take a while and - if gathering has been disabled/deferred - might result in unwanted plan changes. Avoid doing this on your production database!
Check storage allocation
You can look for tables with no segments allocated with:
select table_name
from user_tables ut
where external = 'NO'
and temporary = 'N'
and segment_created = 'NO';
As there's no space allocated to these, there are definitely no rows in them! But a table could have space allocated yet contain no rows, so this may omit some of the empty tables - this is particularly likely for tables that did have rows in the past but are empty now.
Final thoughts
It's worth remembering that a table with no rows now could still be in use. Staging tables used for daily/weekly/monthly loads may be purged at the end of the process; removing these will still break your apps!
There could also be code which refers to empty tables and works as-is, but would error if you drop the table.
A better approach would be to enable auditing, then run this for "a while". Any tables with no audited access in the time period are probably safe to remove.
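As a sketch of that approach (assuming traditional auditing is available, i.e. the AUDIT_TRAIL parameter is set to DB, and using a hypothetical candidate table MY_SCHEMA.MY_TABLE):
-- Audit access to a candidate table, then check back later.
audit select, insert, update, delete on my_schema.my_table by access;

-- After "a while", see whether anything touched it:
select obj_name, action_name, count(*)
  from dba_audit_trail
 where owner = 'MY_SCHEMA'
   and obj_name = 'MY_TABLE'
 group by obj_name, action_name;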

Delete SQL - Taking forever

Delete SQL scripts are taking a very long time and even hanging forever in Oracle 12c. We have hundreds of delete scripts like the ones below, and we even tried running them with the parallel hint /*+ PARALLEL (a,4) */, but saw no performance improvement.
Is there any way to tune the delete scripts?
Can we use a PL/SQL FOR loop to get any performance improvement?
If yes, please share your thoughts and advice.
Some sample SQL scripts:
DELETE FROM E_PROJ_DETAIL
 WHERE CATEGORY_ID IN (SELECT PRIMARY_KEY
                         FROM Y_OBJ_CATEGORY
                        WHERE TREE_POSITION = 'VEN$_MADD');
COMMIT;

delete from e_proj_group_access
 where enterprise_object_id in (select primary_key
                                  from t_project
                                 where application_id in (select application_id
                                                            from y_object_definition
                                                           where unique_code = 'VEN$_MADD'));
commit;
I don't know of any possibility to 'tune' DELETE statements, except maybe dropping any useless (= unused) indexes and constraints upfront and recreating them afterwards.
In these cases (deleting many rows) I used FOR loops with commits inside, something like this:
declare
  i pls_integer := 0;
begin
  for c in (select id from table where [conditions to delete])
  loop
    delete from table where id = c.id; /* id = primary key */
    i := i + 1;
    if i > 1000 then
      commit;
      i := 0;
    end if;
  end loop;
  commit; -- commit the final partial batch
end;
But here you can occasionally run into ORA-01555: snapshot too old, because you are deleting rows from the same table on which you opened the cursor in the FOR loop.
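A common way to avoid holding a long-running cursor open is to delete in fixed-size chunks instead, exiting when nothing is left. A sketch, reusing the first delete from the question (the 10000 chunk size is arbitrary):
begin
  loop
    delete from e_proj_detail
     where category_id in (select primary_key
                             from y_obj_category
                            where tree_position = 'VEN$_MADD')
       and rownum <= 10000;  -- chunk size
    exit when sql%rowcount = 0;
    commit;
  end loop;
  commit;
end;
/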
In other situations, you could do CREATE TABLE newtable AS SELECT * FROM oldtable WHERE [conditions for rows I want to keep] and then do TRUNCATE oldtable and INSERT /*+ APPEND */ INTO oldtable SELECT * from newtable; to write correct data back.
It really depends on the situation you are in (as others commented - how many rows do you have in the table, how many rows do you want to delete, etc.).
hth :-)
Depending on whether it is a one-shot deal or not, creating a new table with only the rows you want to keep is often much faster:
CREATE TABLE E_PROJ_DETAIL_NEW AS
SELECT *
  FROM E_PROJ_DETAIL
 WHERE CATEGORY_ID NOT IN (SELECT PRIMARY_KEY
                             FROM Y_OBJ_CATEGORY
                            WHERE TREE_POSITION = 'VEN$_MADD');
Then drop the old table and rename the new one.
You may need to re-create indexes / FKs if you had any.
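The swap itself would then look something like this (a sketch; as noted, any indexes and foreign keys have to be re-created on the new table, since they do not follow the rename):
DROP TABLE E_PROJ_DETAIL;
ALTER TABLE E_PROJ_DETAIL_NEW RENAME TO E_PROJ_DETAIL;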

Oracle query performance degrades when inserting many rows in a single transaction

In a single transaction I am inserting many rows into a table; before inserting each row I perform a query to see if there is already a row with the key I am about to insert.
What I see is that the query to check whether the key exists gets very slow within my transaction, but from another transaction it is fast, and in the next transaction it is fast.
I can't break this work down into smaller transactions as the request I am processing needs to be in a single transaction.
Is there anything I can do to make the select query in this transaction fast?
So, please add a constraint / primary key. This will allow you to remove all your selects.
Maybe consider using MERGE, as @Egor_Skriptunoff recommended.
Or add indexes on the columns you are selecting by.
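A minimal MERGE sketch for this pattern (assuming a hypothetical target table target_t with key column id and value column val, bound per row):
merge into target_t t
using (select :id as id, :val as val from dual) s
   on (t.id = s.id)
 when not matched then
   insert (id, val) values (s.id, s.val);
This lets Oracle do the existence check and the insert in a single statement instead of a separate SELECT before every INSERT.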
If you are inserting millions of records, the first thing to do is incremental commits, as you are likely hitting temp space fragmentation or limits, which results in slowdowns. That can be done in a BEGIN ... END block.
Also, this allows you to add an index via
create index index_name on table_name (col1, col2, col3);
MERGE is faster, as the previous answer states.
Alternatively, add everything ignoring duplicates, then remove the duplicates afterwards. For example:
begin
  insert into table_name select * from source_table; -- if pulling from another table; otherwise use VALUES with a column map
  delete from table_name a
   where rowid > (select min(rowid) from table_name b where a.key_value = b.key_value);
end;
If this runs inside a procedure, the insert and delete can go in a BEGIN ... END block as-is; any DDL (like the CREATE INDEX above) would need EXECUTE IMMEDIATE 'your ddl statement here';

update x set y = null takes a long time

At work, I have a large table (some 3 million rows, like 40-50 columns). I sometimes need to empty some of the columns and fill them with new data. What I did not expect is that
UPDATE table1 SET y = null
takes much more time than filling the column with data which is generated, for example, in the SQL query from other columns of the same table or queried from other tables in a subquery. It does not matter if I go through all table rows at once (as in the update query above) or if I use a cursor to go through the table row by row (using the PK). It does not matter whether I use the large table at work or whether I create a small test table and fill it with a few hundred thousand test rows. Setting the column to null always takes far longer (throughout the tests, I encountered factors of 2 to 10) than updating the column with some dynamic data (which is different for each row).
What's the reason for this? What does Oracle do when setting a column to null? Or, what is my error in reasoning?
Thanks for your help!
P.S.: I am using Oracle 11gR2, and found these results using both PL/SQL Developer and Oracle SQL Developer.
Is column Y indexed? It could be that setting the column to null means Oracle has to delete from the index, rather than just update it. If that's the case, you could drop the index and rebuild it after updating the data.
EDIT:
Is it just column Y that exhibits the issue, or is it independent of the column being updated? Can you post the table definition, including constraints?
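For example (a sketch; table1 and y are from the question, the index name is hypothetical):
-- Hypothetical index on the column being nulled out.
drop index table1_y_ix;

update table1 set y = null;
commit;

-- Re-create the index afterwards.
create index table1_y_ix on table1 (y);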
Summary
I think updating to null is slower because Oracle (incorrectly) tries to take advantage of the way it stores nulls, causing it to frequently re-organize the rows in the block ("heap block compress"), creating a lot of extra UNDO and REDO.
What's so special about null?
From the Oracle Database Concepts:
"Nulls are stored in the database if they fall between columns with data values. In these cases they require 1 byte to store the length of the column (zero).
Trailing nulls in a row require no storage because a new row header signals that the remaining columns in the previous row are null. For example, if the last three columns of a table are null, no information is stored for those columns. In tables with many columns,
the columns more likely to contain nulls should be defined last to conserve disk space."
Test
Benchmarking updates is very difficult because the true cost of an update cannot be measured just from the update statement. For example, log switches will not happen with every update, and delayed block cleanout will happen later. To accurately test an update, there should be multiple runs, objects should be recreated for each run, and the high and low values should be discarded.
For simplicity the script below does not throw out high and low results, and only tests a table with a single column. But the problem still occurs regardless of the number of columns, their data, and which column is updated.
I used the RunStats utility from http://www.oracle-developer.net/utilities.php to compare the resource consumption of updating-to-a-value with updating-to-a-null.
create table test1 (col1 number);

BEGIN
  dbms_output.enable(1000000);
  runstats_pkg.rs_start;

  -- Run 1: update the column to a value
  for i in 1 .. 10 loop
    execute immediate 'drop table test1 purge';
    execute immediate 'create table test1 (col1 number)';
    execute immediate 'insert /*+ append */ into test1
      select 1 col1 from dual connect by level <= 100000';
    commit;
    execute immediate 'update test1 set col1 = 1';
    commit;
  end loop;

  runstats_pkg.rs_pause;
  runstats_pkg.rs_resume;

  -- Run 2: update the column to null
  for i in 1 .. 10 loop
    execute immediate 'drop table test1 purge';
    execute immediate 'create table test1 (col1 number)';
    execute immediate 'insert /*+ append */ into test1
      select 1 col1 from dual connect by level <= 100000';
    commit;
    execute immediate 'update test1 set col1 = null';
    commit;
  end loop;

  runstats_pkg.rs_stop();
END;
/
Result
There are dozens of differences; these are the four I think are most relevant:
Type  Name                          Run1          Run2          Diff
----- ----------------------------- ------------- ------------- -------------
TIMER elapsed time (hsecs)                  1,269         4,738         3,469
STAT  heap block compress                       1         2,028         2,027
STAT  undo change vector size          55,855,008   181,387,456   125,532,448
STAT  redo size                       133,260,596   581,641,084   448,380,488
Solutions?
The only possible solution I can think of is to enable table compression. The trailing-null storage trick doesn't happen for compressed tables.
So even though the "heap block compress" number gets even higher for Run2, from 2028 to 23208, I guess it doesn't actually do anything.
The redo, undo, and elapsed time between the two runs are almost identical with table compression enabled.
However, there are lots of potential downsides to table compression. Updating to a null will run much faster, but every other update will run at least slightly slower.
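If you want to try that on the test table, something along these lines should work (a sketch; COMPRESS FOR OLTP is 11gR2 syntax and requires the Advanced Compression option, while basic COMPRESS only compresses direct-path loads):
create table test1 (col1 number) compress for oltp;

-- Or compress an existing table's current rows (this marks its indexes UNUSABLE,
-- so they must be rebuilt afterwards):
alter table table1 move compress for oltp;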
That's because setting the column to null deletes that data from the blocks.
And delete is the hardest operation. If you can avoid a delete, do it.
I recommend you create another table with that column null (CREATE TABLE AS SELECT, for example, or INSERT ... SELECT), fill the column with your procedure, drop the old table, and then rename the new table to the current name.
UPDATE:
Another important thing is that you should update the column directly with the new values. It is useless to set it to null first and refill it afterwards.
If you do not have values for all rows, you can do the update like this:
update table1
   set y = (select new_value from source where source.key = table1.key)
and it will set to null those rows that do not exist in source.
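If you only want to touch the rows that do have a match in source, a common variant (a sketch using the same table names as above) is to add a WHERE EXISTS:
update table1
   set y = (select new_value from source where source.key = table1.key)
 where exists (select 1 from source where source.key = table1.key);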
I would try what Tom Kyte suggested on large updates.
When it comes to huge tables, it's best to go like this: take a few rows, update them, take some more, update those, etc. Don't try to issue an update on the whole table. That's a killer move right from the start.
Basically, create a binary_integer-indexed collection, fetch a batch of rows at a time, and update them.
Here is a piece of code that I have used on large tables with success. Because I'm lazy and it's like 2 AM now, I'll just copy-paste it here and let you figure it out, but let me know if you need help:
DECLARE
  TYPE BookingRecord IS RECORD (
    bprice    number,
    bevent_id number,
    book_id   number
  );
  TYPE array is TABLE of BookingRecord index by binary_integer;
  l_data array;

  CURSOR c1 is
    SELECT LVC_USD_PRICE_V2(ev.activity_version_id, ev.course_start_date,
                            t.local_update_date, ev.currency,
                            nvl(t.delegate_country, ev.sponsor_org_country),
                            ev.price, ev.currency, t.ota_status,
                            ev.location_type) x,
           ev.title,
           t.ota_booking_id
      FROM ota_gsi_delegate_bookings_t#diseulprod t,
           inted_parted_events_t#diseulprod ev
     WHERE t.event_id = ev.event_id
       and t.ota_booking_id =
BEGIN
  open c1;
  loop
    fetch c1 bulk collect into l_data limit 20;
    for i in 1 .. l_data.count
    loop
      update ou_inc_int_t_01
         set price = l_data(i).bprice,
             updated = 'Y'
       where booking_id = l_data(i).book_id;
    end loop;
    exit when c1%notfound;
  end loop;
  close c1;
END;
What can also help speed up updates is to use ALTER TABLE table1 NOLOGGING, although that only affects direct-path operations, so an ordinary UPDATE will still generate redo. Another possibility is to drop the column and re-add it; since this is a DDL operation it will generate neither redo nor undo.

Oracle - drop table constraints without dropping tables

I'm doing some bulk migration of a large Oracle database. The first step of this involves renaming a whole load of tables as a preparation for dropping them later (but I need to keep the data in them around for now). Any foreign key constraints on them need to be dropped - they shouldn't be connected to the rest of the database at all. If I were dropping them now I could CASCADE CONSTRAINTS, but rename simply alters the constraints.
Is there a way I can drop all of the constraints that CASCADE CONSTRAINTS would drop without dropping the table itself?
You can do it with dynamic SQL and the data dictionary:
begin
for r in ( select table_name, constraint_name
from user_constraints
where constraint_type = 'R' )
loop
execute immediate 'alter table '|| r.table_name
||' drop constraint '|| r.constraint_name;
end loop;
end;
If the tables are owned by more than one user you'll need to drive from DBA_CONSTRAINTS and include OWNER in the projection and the executed statement. If you want to touch less than all the tables I'm afraid you'll need to specify the list in the WHERE clause, unless there's some pattern to their names.
You can disable/re-enable constraints without dropping them. Take a look at this article.
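As a sketch, the same dynamic-SQL loop from the answer above can be adapted to disable rather than drop the foreign keys; swap 'disable' for 'enable' to turn them back on:
begin
  for r in ( select table_name, constraint_name
               from user_constraints
              where constraint_type = 'R'
                and status = 'ENABLED' )
  loop
    execute immediate 'alter table '|| r.table_name
                   ||' disable constraint '|| r.constraint_name;
  end loop;
end;
/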