At work, I have a large table (some 3 million rows, like 40-50 columns). I sometimes need to empty some of the columns and fill them with new data. What I did not expect is that
UPDATE table1 SET y = null
takes much more time than filling the column with data that is generated, for example, in the SQL query from other columns of the same table or queried from other tables in a subquery. It does not matter whether I go through all table rows at once (as in the update query above) or use a cursor to go through the table row by row (using the PK). It does not matter whether I use the large table at work or create a small test table and fill it with a few hundred thousand test rows. Setting the column to null always takes far longer (throughout the tests I saw factors of 2 to 10) than updating the column with some dynamic data (which is different for each row).
What's the reason for this? What does Oracle do when setting a column to null? Or, where is my error in reasoning?
Thanks for your help!
P.S.: I am using Oracle 11gR2, and found these results using both PL/SQL Developer and Oracle SQL Developer.
Is column Y indexed? It could be that setting the column to null means Oracle has to delete from the index, rather than just update it. If that's the case, you could drop and rebuild it after updating the data.
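If an index on Y does turn out to be the cause, a minimal sketch of the drop-and-rebuild approach might look like this (the index name is an assumption):
-- Hypothetical index name; drop it, do the mass update (and any refill), then rebuild once.
DROP INDEX table1_y_idx;

UPDATE table1 SET y = NULL;
COMMIT;

CREATE INDEX table1_y_idx ON table1 (y);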
EDIT:
Is it just column Y that exhibits the issue, or is it independent of the column being updated? Can you post the table definition, including constraints?
Summary
I think updating to null is slower because Oracle (incorrectly) tries to take advantage of the way it stores nulls, causing it to frequently re-organize the rows in the block ("heap block compress"), creating a lot of extra UNDO and REDO.
What's so special about null?
From the Oracle Database Concepts:
"Nulls are stored in the database if they fall between columns with data values. In these cases they require 1 byte to store the length of the column (zero).
Trailing nulls in a row require no storage because a new row header signals that the remaining columns in the previous row are null. For example, if the last three columns of a table are null, no information is stored for those columns. In tables with many columns,
the columns more likely to contain nulls should be defined last to conserve disk space."
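As a hypothetical illustration of that advice (the table and columns below are made up), columns that are usually null are best declared last so that their trailing nulls take no storage:
CREATE TABLE orders_example (
    order_id    NUMBER,         -- always populated
    customer_id NUMBER,         -- always populated
    promo_code  VARCHAR2(20),   -- often null: declare near the end
    notes       VARCHAR2(200)   -- often null: declare last
);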
Test
Benchmarking updates is very difficult because the true cost of an update cannot be measured just from the update statement. For example, log switches will
not happen with every update, and delayed block cleanout will happen later. To accurately test an update, there should be multiple runs,
objects should be recreated for each run, and the high and low values should be discarded.
For simplicity the script below does not throw out high and low results, and only tests a table with a single column. But the problem still occurs regardless of the number of columns, their data, and which column is updated.
I used the RunStats utility from http://www.oracle-developer.net/utilities.php to compare the resource consumption of updating-to-a-value with updating-to-a-null.
create table test1(col1 number);
BEGIN
    dbms_output.enable(1000000);
    runstats_pkg.rs_start;

    -- Run 1: update the column to a value
    for i in 1 .. 10 loop
        execute immediate 'drop table test1 purge';
        execute immediate 'create table test1 (col1 number)';
        execute immediate 'insert /*+ append */ into test1 select 1 col1
                           from dual connect by level <= 100000';
        commit;
        execute immediate 'update test1 set col1 = 1';
        commit;
    end loop;

    runstats_pkg.rs_pause;
    runstats_pkg.rs_resume;

    -- Run 2: update the column to null
    for i in 1 .. 10 loop
        execute immediate 'drop table test1 purge';
        execute immediate 'create table test1 (col1 number)';
        execute immediate 'insert /*+ append */ into test1 select 1 col1
                           from dual connect by level <= 100000';
        commit;
        execute immediate 'update test1 set col1 = null';
        commit;
    end loop;

    runstats_pkg.rs_stop();
END;
/
Result
There are dozens of differences; these are the four I think are most relevant:
Type Name Run1 Run2 Diff
----- ---------------------------- ------------ ------------ ------------
TIMER elapsed time (hsecs) 1,269 4,738 3,469
STAT heap block compress 1 2,028 2,027
STAT undo change vector size 55,855,008 181,387,456 125,532,448
STAT redo size 133,260,596 581,641,084 448,380,488
Solutions?
The only possible solution I can think of is to enable table compression. The trailing-null storage trick doesn't happen for compressed tables.
So even though the "heap block compress" number gets even higher for Run2, from 2028 to 23208, I guess it doesn't actually do anything.
The redo, undo, and elapsed time between the two runs are almost identical with table compression enabled.
However, there are lots of potential downsides to table compression. Updating to a null will run much faster, but every other update will run at least slightly slower.
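If those trade-offs are acceptable, a minimal sketch of turning compression on for the question's table might look like this (basic compression is shown; which compression type was used in the test above is an assumption):
ALTER TABLE table1 MOVE COMPRESS;

-- Moving the table invalidates its indexes, so rebuild them afterwards, e.g.:
-- ALTER INDEX table1_y_idx REBUILD;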
That's because it deletes that data from the blocks.
And delete is the hardest operation. If you can avoid a delete, do it.
I recommend that you create another table with that column null (CREATE TABLE AS SELECT, for example, or INSERT ... SELECT) and fill the column with your procedure. Then drop the old table and rename the new table to the current name, as sketched below.
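A rough sketch of that approach (the pk/col_a/col_b column names are made up, and indexes, constraints and grants must be recreated afterwards):
CREATE TABLE table1_new AS
SELECT t.pk,
       t.col_a,
       t.col_b,                    -- all columns that are kept as-is
       CAST(NULL AS NUMBER) AS y   -- or the recomputed expression instead of NULL
FROM   table1 t;

DROP TABLE table1;
RENAME table1_new TO table1;
-- Recreate indexes, constraints, grants and triggers here.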
UPDATE:
Another important thing is that you should update the column with the new values directly. It is useless to set it to null and refill it afterwards.
If you do not have values for all rows, you can do the update like this:
update table1
set y = (select new_value from source where source.key = table1.key)
and it will set to null those rows that do not exist in source.
I would try what Tom Kyte suggested for large updates.
When it comes to huge tables, it is best to go like this: take a few rows, update them, take some more, update those, etc. Don't try to issue an update on the whole table. That's a killer move right from the start.
Basically, create a binary_integer-indexed table, fetch 10 rows at a time, and update them.
Here is a piece of code that I have used on large tables with success. Because I'm lazy and it's like 2 AM now, I'll just copy-paste it here and let you figure it out, but let me know if you need help:
DECLARE
    TYPE BookingRecord IS RECORD (
        bprice    number,
        bevent_id number,
        book_id   number
    );

    TYPE array is TABLE of BookingRecord index by binary_integer;
    l_data array;

    CURSOR c1 is
        SELECT LVC_USD_PRICE_V2(ev.activity_version_id, ev.course_start_date,
                                t.local_update_date, ev.currency,
                                nvl(t.delegate_country, ev.sponsor_org_country),
                                ev.price, ev.currency, t.ota_status,
                                ev.location_type) x,
               ev.title,
               t.ota_booking_id
        FROM ota_gsi_delegate_bookings_t#diseulprod t,
             inted_parted_events_t#diseulprod ev
        WHERE t.event_id = ev.event_id
          and t.ota_booking_id =
BEGIN
    open c1;
    loop
        fetch c1 bulk collect into l_data limit 20;

        for i in 1 .. l_data.count
        loop
            update ou_inc_int_t_01
               set price   = l_data(i).bprice,
                   updated = 'Y'
             where booking_id = l_data(i).book_id;
        end loop;

        exit when c1%notfound;
    end loop;
    close c1;
END;
What can also help speed up updates is to use ALTER TABLE table1 NOLOGGING so that the update won't generate redo logs. Another possibility is to drop the column and re-add it; since this is a DDL operation, it generates neither redo nor undo.
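A minimal sketch of the drop-and-re-add variant for the question's column (the NUMBER data type is an assumption):
ALTER TABLE table1 DROP COLUMN y;
ALTER TABLE table1 ADD (y NUMBER);   -- data type is assumed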
Related
Below is the update statement that I am running 32k times, and it has been taking more than 15 hours and is still running.
I have to update the value in TABLE_2 for 32k different M_DISPLAY values.
UPDATE TABLE_2 T2 SET T2.M_VALUE = 'COL_ANC'
WHERE EXISTS (SELECT 1 FROM TABLE_2 T1 WHERE TRIM(T1.M_DISPLAY) = 'ANCHORTST' AND T1.M_LABEL=T2.M_LABEL );
I am not sure why it is taking such a long time, as I have tuned the query.
I have copied the 32,000 update statements into an Update.sql file and am running the SQL from the command line.
Though it is updating the table, it is a never-ending process.
Please advise if I have gone wrong anywhere.
Regards
Using FORALL
If you cannot rewrite the query to run a single bulk-update instead of 32k individual updates, you might still get lucky by using PL/SQL's FORALL. An example:
DECLARE
TYPE rec_t IS RECORD (
m_value table_2.m_value%TYPE,
m_display table_2.m_display%TYPE
);
TYPE tab_t IS TABLE OF rec_t;
data tab_t := tab_t();
BEGIN
-- Fill in data object. Replace this by whatever your logic for matching
-- m_value to m_display is
data.extend(1);
data(1).m_value := 'COL_ANC';
data(1).m_display := 'ANCHORTST';
-- Then, run the 32k updates using FORALL
FORALL i IN 1 .. data.COUNT
UPDATE table_2 t2
SET t2.m_value = data(i).m_value
WHERE EXISTS (
SELECT 1
FROM table_2 t1
WHERE trim(t1.m_display) = data(i).m_display
AND t1.m_label = t2.m_label
);
END;
/
Concurrency
If you're not the only process on the system, 32k updates in a single transaction can hurt. It's definitely worth committing a few thousand rows in sub-transactions to reduce concurrency effects with other processes that might read the same table while you're updating.
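A hedged sketch of such batched commits for the question's update (the 5,000-row batch size and the extra predicate that skips rows already set to 'COL_ANC' are assumptions):
BEGIN
  LOOP
    UPDATE table_2 t2
    SET    t2.m_value = 'COL_ANC'
    WHERE  EXISTS (SELECT 1
                   FROM   table_2 t1
                   WHERE  trim(t1.m_display) = 'ANCHORTST'
                   AND    t1.m_label = t2.m_label)
    AND    (t2.m_value IS NULL OR t2.m_value <> 'COL_ANC')
    AND    ROWNUM <= 5000;

    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/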
Bulk update
Really, the goal of any improvement should be bulk updating the entire data set in one go (or perhaps split in a few bulks, see concurrency).
If you had a staging table containing the update instructions:
CREATE TABLE update_instructions (
m_value VARCHAR2(..),
m_display VARCHAR2(..)
);
Then you could pull off something along the lines of:
MERGE INTO table_2 t2
USING (
SELECT u.*, t1.m_label
FROM update_instructions u
JOIN table_2 t1 ON trim(t1.m_display) = u.m_display
) t1
ON t2.m_label = t1.m_label
WHEN MATCHED THEN UPDATE SET t2.m_value = t1.m_value;
This should be even faster than FORALL (but might have more concurrency implications).
Indexing and data sanitisation
Of course, one thing that can definitely hurt you when running 32k individual update statements is the TRIM() function, which prevents efficient use of an index on M_DISPLAY. If you could sanitise your data so it doesn't need trimming first, that would definitely help. Otherwise, you could add a function-based index just for the update (and then drop it again):
CREATE INDEX i ON table_2 (trim (m_display));
The query and subquery query the same table: TABLE_2. Assuming that M_LABEL is unique, the subquery returns 1s for all rows in TABLE_2 where M_DISPLAY is ANCHORTST. Then the update query updates the same (!) TABLE_2 for all 1s returned from the subquery - so for all rows where M_DISPLAY is ANCHORTST.
Therefore, the query could be simplified, exploiting the fact that both update and select work on the same table - TABLE_2:
UPDATE TABLE_2 T2 SET T2.M_VALUE = 'COL_ANC' WHERE TRIM(T2.M_DISPLAY) = 'ANCHORTST'
If M_LABEL is not unique, then the above is not going to work - thanks to commentators for pointing that out!
For significantly faster execution:
Ensure that you have created an index on the M_DISPLAY and M_LABEL columns that are in your WHERE clause.
Ensure that M_DISPLAY has a function-based index. If it does not, then do not pass it to the TRIM function, because the function will prevent the database from using the index that you have created on the M_DISPLAY column. TRIM the data before storing it in the table; see the sketch below.
That's it.
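A hedged sketch of both points (the index name is an assumption):
CREATE INDEX table_2_disp_label_ix ON table_2 (m_display, m_label);

-- One-off sanitisation so the WHERE clause no longer needs TRIM():
UPDATE table_2 SET m_display = TRIM(m_display) WHERE m_display <> TRIM(m_display);
COMMIT;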
By the way, as has been mentioned, you shouldn't need 32k queries for meeting your objective. One will probably suffice. Look into query based update. As an example, see the accepted answer here:
Oracle SQL: Update a table with data from another table
I need to delete about 5 million records from an Oracle table.
Due to performance concerns (REDO logs) I would like to remove 100,000 records per transaction, like this:
DECLARE
v_limit PLS_INTEGER :=100000;
CURSOR person_deleted_cur
IS
SELECT rowid
FROM Persons p
WHERE City = 'ABC'
AND NOT EXISTS
(SELECT O_Id
FROM Orders o
WHERE p.P_Id = o.P_Id);
TYPE person_deleted_nt_t IS TABLE OF person_deleted_cur%ROWTYPE
INDEX BY PLS_INTEGER;
person_deleted_nt person_deleted_nt_t;
BEGIN
OPEN person_deleted_cur;
LOOP
FETCH person_deleted_cur
BULK COLLECT INTO person_deleted_nt LIMIT v_limit;
FORALL indx IN 1 .. person_deleted_nt.COUNT
DELETE FROM Persons WHERE rowid=person_deleted_nt(indx);
EXIT WHEN person_deleted_cur%NOTFOUND;
END LOOP;
CLOSE person_deleted_cur;
COMMIT;
END;
But Liquibase runs a changeSet in one transaction and rolls it back if there are any errors. Is it good practice to use explicit COMMITs in Liquibase scripts?
What would a well-written script look like?
In the book "Oracle for professionals" Tom Kyte writes about updates in other transactions. The point is: if you can change the table with one query, then do so, because one query will be faster than separate transactions or a PL/SQL loop with partitioned deletes.
Another approach would be to use CREATE TABLE with NOLOGGING instead of UPDATE/DELETE. It is the best solution for changing many rows.
So create a NOLOGGING table with your query, then drop the original table, recreate indexes, constraints and so on, and rename the temp table to the original name.
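A hedged sketch of that approach for this particular delete (the new table name is an assumption; indexes, constraints and grants must be recreated afterwards):
CREATE TABLE persons_keep NOLOGGING AS
SELECT p.*
FROM   persons p
WHERE  NOT (    p.city = 'ABC'
            AND NOT EXISTS (SELECT 1 FROM orders o WHERE o.p_id = p.p_id));

DROP TABLE persons;
RENAME persons_keep TO persons;
-- Recreate indexes, constraints, grants and triggers here.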
Agree with @jimmbraddock, but a simpler solution with lower impact on an OLTP system might be to repeatedly run this query until it affects no more rows:
DELETE FROM Persons p
WHERE City = 'ABC'
AND NOT EXISTS
(SELECT O_Id
FROM Orders o
WHERE p.P_Id = o.P_Id)
AND ROWNUM <= 100000;
The total resource usage would be higher than a single delete, and thus a single delete would still be better if your system can accommodate it, but this would be pretty robust, and with an index on persons(city,p_id) and one on orders(p_id) it should be very performant.
Hi, I have two tables with a million rows in each. I am on Oracle 11gR1.
I am sure many of us must have gone through this situation.
What is the most efficient and fast way to update from one table to another where the values are DIFFERENT?
E.g.: Table 1 has 4 NUMBER columns with high precision, e.g. 0.2212454215454212.
Table 2 has 6 columns.
Update table 2's four columns based on a common column in both tables, and only the rows where the values differ.
I have something like this
DECLARE
    TYPE test1_t IS TABLE OF test.score%TYPE INDEX BY PLS_INTEGER;
    TYPE test2_t IS TABLE OF test.id%TYPE INDEX BY PLS_INTEGER;
    TYPE test3_t IS TABLE OF test.Crank%TYPE INDEX BY PLS_INTEGER;

    vscore test1_t;
    vid    test2_t;
    vurank test3_t;
BEGIN
    SELECT id, score, urank
    BULK COLLECT INTO vid, vscore, vurank
    FROM test;

    FORALL i IN 1 .. vid.COUNT
        MERGE INTO final T
        USING (SELECT vid (i)    AS o_id,
                      vurank (i) AS o_urank,
                      vscore (i) AS o_score
               FROM DUAL) S
        ON (S.o_id = T.id)
        WHEN MATCHED THEN
            UPDATE SET T.crank = S.o_urank
            WHERE T.crank <> S.o_urank;
END;
Since the numbers have high precision, is that slowing it down?
I tried the Bulk Collect and MERGE combination and it still takes time: ~30 minutes in the worst-case scenario where I have to update 1 million rows.
Is there something that can be done with rowid?
Help will be appreciated.
If you want to update all the rows, then just use update:
update table_1
set (col1,
col2) = (
select col1,
col2
from table2
where table2.col_a = table1.col_a and
table2.col_b = table1.col_b)
Bulk collect or any PL/SQL technique will always be slower than a pure SQL technique.
The numeric precision is probably not significant, and rowid is not relevant as there is no common value between the two tables.
When dealing with millions of rows, parallel DML is a game changer. Of course you need to have Enterprise Edition to use parallel, but it's really the only thing which will make much difference.
I recommend you read an article on OraFAQ by rleishman comparing 8 Bulk Update Methods. His key finding is that "the cost of disk reads so far outweighs the context switches that that they are barely noticable (sic)". In other words, unless your data is already cached in memory there really isn't a significant difference between SQL and PL/SQL approaches.
The article does have some neat suggestions on employing parallel. The surprising outcome is that a parallel pipelined function offers the best performance.
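As a rough illustration of the parallel DML idea (not the pipelined-function approach from the article), here is a sketch with an assumed degree of 8 and the id/urank/crank columns from the question; parallel DML requires Enterprise Edition and must be enabled per session:
ALTER SESSION ENABLE PARALLEL DML;

MERGE /*+ PARALLEL(t 8) */ INTO final t
USING (SELECT id, urank FROM test) s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.crank = s.urank
  WHERE t.crank <> s.urank;

COMMIT;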
Focusing on the syntax that has been used and skipping the logic (a pure UPDATE plus a pure INSERT may solve the problem; also consider the cost of MERGE, indexes, a possible full scan on MERGE, and so on):
You should use LIMIT in the BULK COLLECT syntax.
A bulk collect with no limit will cause all records to be loaded into memory, and with no partially committed merges you will create a large redo log that must be applied at the end of the process.
Both will result in low performance.
DECLARE
v_fetchSize NUMBER := 1000; -- based on hardware, design and .... could be scaled
CURSOR a_cur IS
SELECT id,score,urank FROM test;
TYPE myarray IS TABLE OF a_cur%ROWTYPE;
cur_array myarray;
BEGIN
OPEN a_cur;
LOOP
FETCH a_cur BULK COLLECT INTO cur_array LIMIT v_fetchSize;
FORALL i IN 1 .. cur_array.COUNT
-- do the per-row operation (e.g. the UPDATE or MERGE) here
COMMIT;
EXIT WHEN a_cur%NOTFOUND;
END LOOP;
CLOSE a_cur;
END;
Just to be sure: test.id and final.id must be indexed.
With the first select ... from test you get too many records from Table 1, and after that you need to compare all of them with the records in Table 2. Try to select only what you need to update. So, there are at least 2 variants:
a) select only changed records:
SELECT source_table.id, source_table.score, source_table.urank
BULK COLLECT INTO vid,vscore,vurank
FROM
test source_table,
final destination_table
where
source_table.id = destination_table.id
and
source_table.crank <> destination_table.crank
;
b) Add a new datetime field to the source table and fill it with the current time in a trigger. While synchronizing, pick only the records changed during the last day. This field needs to be indexed.
After such a change, in the update phase you don't need to compare other fields, only match IDs:
FORALL i IN 1 .. vid.COUNT
MERGE INTO FINAL T
USING (
SELECT vid (i) AS o_id,
vurank (i) AS o_urank,
vscore (i) AS o_score FROM DUAL
) S
ON (S.o_id = T.id)
WHEN MATCHED
THEN UPDATE SET T.crank = S.o_urank
If you worry about the size of the undo/redo segments, then variant b) is more useful, because you can take records from source Table 1 divided into time slices and commit the changes after updating every slice, e.g. from 00:00 to 01:00, from 01:00 to 02:00, etc.
In this variant the update can be done by a plain SQL statement, without selecting data into collections, while maintaining acceptable redo/undo log sizes.
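A rough sketch of variant b), with assumed names for the timestamp column, index and trigger:
ALTER TABLE test ADD (last_changed DATE);
CREATE INDEX test_last_changed_ix ON test (last_changed);

CREATE OR REPLACE TRIGGER test_set_last_changed
BEFORE INSERT OR UPDATE ON test
FOR EACH ROW
BEGIN
  :NEW.last_changed := SYSDATE;
END;
/

-- Synchronize only the rows changed in the last hour, one slice at a time:
MERGE INTO final t
USING (SELECT id, urank
       FROM   test
       WHERE  last_changed >= SYSDATE - 1/24) s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.crank = s.urank;
COMMIT;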
I want to insert rows with a MERGE statement in a specified order to avoid deadlocks. Deadlocks could otherwise happen because multiple transactions will call this statement with overlapping sets of keys. Note that this code is also sensitive to duplicate-value exceptions, but I handle that by retrying, so that is not my question. I was doing the following:
MERGE INTO targetTable
USING (
SELECT ...
FROM sourceCollection
ORDER BY <desiredUpdateOrder>
)
WHEN MATCHED THEN
UPDATE ...
WHEN NOT MATCHED THEN
INSERT ...
Now I'm still getting the deadlock, so I'm becoming unsure whether Oracle maintains the order of the sub-query. Does anyone know how best to make sure that Oracle locks the rows in targetTable in the same order in this case? Do I have to do a SELECT FOR UPDATE before the merge? In which order does the SELECT FOR UPDATE lock the rows? Oracle's UPDATE statement has an ORDER BY clause that MERGE seems to be missing. Is there another way to avoid deadlocks other than locking the rows in the same order every time?
[Edit]
This query is used to maintain a count of how often a certain action has taken place. When the action happens the first time a row is inserted, when it happens a second time the "count" column is incremented. There are millions of different actions and they happen very often. A table lock wouldn't work.
Controlling the order in which the target table rows are modified requires that you control the query execution plan of the USING subquery. That's a tricky business, and depends on what sort of execution plans your query is likely to be getting.
If you're getting deadlocks then I'd guess that you're getting a nested loop join from the source collection to the target table, as a hash join would probably be based on hashing the source collection and would modify the target table roughly in target-table rowid order because that would be full scanned -- in any case, the access order would be consistent across all of the query executions.
Likewise, if there was a sort-merge between the two data sets you'd get consistency in the order in which target table rows are accessed.
Ordering of the source collection seems to be desirable, but the optimiser might not be applying it, so check the execution plan. If it is not, then try inserting your data into a global temporary table using APPEND and with an ORDER BY clause, then selecting from there without an order by clause, and explore the use of hints to entrench a nested loop join.
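A hedged sketch of that idea, applied to the counter table described in the question's edit (all object names here — merge_src, action_counts, action_key, cnt, source_collection — are made up):
CREATE GLOBAL TEMPORARY TABLE merge_src (
  action_key VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

INSERT /*+ APPEND */ INTO merge_src (action_key)
SELECT action_key FROM source_collection ORDER BY action_key;
COMMIT;   -- direct-path inserted rows cannot be queried before a commit

MERGE /*+ USE_NL(t) */ INTO action_counts t
USING (SELECT action_key FROM merge_src) s
ON (t.action_key = s.action_key)
WHEN MATCHED THEN
  UPDATE SET t.cnt = t.cnt + 1
WHEN NOT MATCHED THEN
  INSERT (action_key, cnt) VALUES (s.action_key, 1);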
I don't believe the ORDER BY will affect anything (though I'm more than willing to be proven wrong); I think MERGE will lock everything it needs to.
Assume I'm completely wrong, assume that you get row-by-row locks with MERGE. Your problem still isn't solved as you have no guarantees that your two MERGE statements won't hit the same row simultaneously. In fact, from the information given, you have no guarantees that an ORDER BY improves the situation; it might make it worse.
Despite there being no skip locked rows syntax as there is with UPDATE there is still a simple answer, stop trying to update the same row from within different transactions. If feasible, you can use some form of parallel execution, for instance the DBMS_PARALLEL_EXECUTE subprogram CREATE_CHUNKS_BY_ROWID and ensure that your transactions only work on a specific sub-set of the rows in the table.
As an aside I'm a little worried by your description of the problem. You say there's some duplicate erroring that you fix by rerunning the MERGE. If the data in these duplicates is different you need to ensure that the ORDER BY is done not only on the data to be merged but the data being merged into. If you don't then there's no guarantee that you don't overwrite the correct data with older, incorrect, data.
First, locks are not really managed at the row level but at the block level. You may encounter an ORA-00060 error even without modifying the same row. This can be tricky. Managing this is the application developer's job.
One possible workaround is to use an index-organized table (never do that on huge tables or tables with heavy change rates):
https://use-the-index-luke.com/sql/clustering/index-organized-clustered-index
Rather than doing a merge, I suggest that you try to lock the row. If successful, update it; if not, insert a new row. By default the lock will wait if another process has a lock on the same thing.
CREATE TABLE brianl.deleteme_table
(
id INTEGER PRIMARY KEY
, cnt INTEGER NOT NULL
);
CREATE OR REPLACE PROCEDURE brianl.deleteme_table_proc (
p_id IN deleteme_table.id%TYPE)
AUTHID DEFINER
AS
l_id deleteme_table.id%TYPE;
-- This isolates this procedure so that it doesn't commit
-- anything outside of the procedure.
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
-- select the row for update
-- this will pause if someone already has the row locked.
SELECT id
INTO l_id
FROM deleteme_table
WHERE id = p_id
FOR UPDATE;
-- Row was locked, update it.
UPDATE deleteme_table
SET cnt = cnt + 1
WHERE id = p_id;
COMMIT;
EXCEPTION
WHEN NO_DATA_FOUND
THEN
-- we were unable to lock the record, insert a new row
INSERT INTO deleteme_table (id, cnt)
VALUES (p_id, 1);
COMMIT;
END deleteme_table_proc;
CREATE OR REPLACE PROCEDURE brianl.deleteme_proc_test
AUTHID CURRENT_USER
AS
BEGIN
-- This resets the table to empty for the test
EXECUTE IMMEDIATE 'TRUNCATE TABLE brianl.deleteme_table';
brianl.deleteme_table_proc (p_id => 1);
brianl.deleteme_table_proc (p_id => 2);
brianl.deleteme_table_proc (p_id => 3);
brianl.deleteme_table_proc (p_id => 2);
FOR eachrec IN ( SELECT id, cnt
FROM brianl.deleteme_table
ORDER BY id)
LOOP
DBMS_OUTPUT.put_line (
a => 'id: ' || eachrec.id || ', cnt:' || eachrec.cnt);
END LOOP;
END;
BEGIN
-- runs the test;
brianl.deleteme_proc_test;
END;
In a single transaction I am inserting many rows into a table. Before inserting each row, I run a query to see if there is already a row with the key I am about to insert.
What I see is that the query to check that the key exists gets very slow within my transaction, but from another transaction it is fast, and in the next transaction it is fast.
I can't break this work down into smaller transactions, as the request I am processing needs to be in a single transaction.
Is there anything I can do to make the select query in this transaction fast?
So, please add a constraint / primary key. This will allow you to remove all your selects.
Maybe consider using MERGE, as @Egor_Skriptunoff recommended.
Or add indexes on the columns you are selecting by.
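A minimal sketch of the MERGE variant (table, column and bind-variable names are assumptions):
MERGE INTO target_table t
USING (SELECT :key AS key_col, :val AS val_col FROM dual) s
ON (t.key_col = s.key_col)
WHEN MATCHED THEN
  UPDATE SET t.val_col = s.val_col
WHEN NOT MATCHED THEN
  INSERT (key_col, val_col) VALUES (s.key_col, s.val_col);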
If you are inserting hundreds of thousands or millions of records, the first thing to do is incremental commits, as you are likely hitting temp space fragmentation or limits, which results in slowdowns. That can be done in a BEGIN ... END block.
This also allows you to add an index, via:
create index index_name on table_name (col1, col2, col3);
MERGE is faster, as the previous answer states.
Alternatively, insert everything while ignoring duplicates, then remove the duplicates afterwards.
This can be done, for example, like this:
begin
    insert into table_name select * from source_table;  -- if pulling from another table; otherwise use VALUES with a column mapping
    delete from table_name A
    where rowid > (select min(rowid) from table_name B where A.key_value = B.key_value);
end;
If this is inside a procedure, any DDL has to go through EXECUTE IMMEDIATE. Both the insert and the delete can sit in a BEGIN ... END block, and DDL can be run with execute immediate('your ddl statement here');