Oracle MERGE SQL with insert into destination and delete from source

I have a scenario with two tables: a source (my_data) and a destination (my_data_backup). I want to archive the actual data by moving it into the backup table on a daily basis and deleting it from the source table, using a MERGE statement in Oracle.
i.e.
my_data and my_data_backup have the same schema.
my_data contains 10 rows and my_data_backup contains 0 rows; I want to insert those 10 records into my_data_backup and delete them from my_data.

MERGE can only manipulate the destination table, not the source, so it cannot also delete the archived rows from my_data.
You can use an anonymous PL/SQL block instead:
begin
  -- clear out the previous backup
  delete from my_data_backup;
  -- copy everything from the source into the backup
  insert into my_data_backup
  select *
  from my_data;
  -- remove the archived rows from the source
  delete from my_data;
  commit;
exception
  when others then
    rollback;
    -- handle the error here
end;
/
You can also put the above in a procedure and call the procedure.
You can also consider using a TRUNCATE statement instead of DELETE, which will be faster when the table is large, but be careful: TRUNCATE is DDL, so it performs an implicit commit.
execute immediate 'truncate table tablename';
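As a sketch of the procedure approach mentioned above, assuming the procedure name archive_my_data (the name is an assumption; only the table names come from the question):
create or replace procedure archive_my_data is
begin
  -- refresh the backup with the current contents of the source
  delete from my_data_backup;
  insert into my_data_backup
  select * from my_data;
  -- remove the archived rows from the source
  delete from my_data;
  commit;
end archive_my_data;
/
You could then call the procedure manually or from a daily scheduler job.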

Related

Should I commit at the end of a procedure which is called by an Oracle scheduler job?

I am running an Oracle JOB which runs a PROCEDURE that does CREATE, TRUNCATE, INSERT and DROP on some relevant tables.
Is this the best way to implement functionality like this?
Should I commit at the end of the procedure or not?
CREATE OR REPLACE Procedure PR_NAME
IS
BEGIN
  CREATE TABLE TABLE_1_BAC AS SELECT * FROM TABLE1_VIA_DBLINK;
  TRUNCATE TABLE TABLE_1;
  INSERT INTO TABLE_1 SELECT * FROM TABLE_1_BAC;
  DROP TABLE TABLE_1_BAC;
  --COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    raise_application_error(-20001, 'An error was encountered - ' || SQLCODE || ' -ERROR- ' || SQLERRM);
END;
To create TABLE_1 only once for the present data:
CREATE TABLE TABLE_1 AS SELECT * FROM TABLE1_VIA_DBLINK;
Then creating an insert trigger for TABLE1_VIA_DBLINK that populates TABLE_1 with the new data, and getting rid of this job and procedure altogether, seems more feasible.
If you stay with this job, you may end up waiting for huge amounts of data to be inserted on each run.
By the way, if you insist on using this job, you don't need to issue a commit, since there is already an implicit commit inside the job mechanism.
What kind of job do you use: JOB or SCHEDULER JOB?
I don't see any reason to DROP/CREATE the table, and I don't see any reason to use the intermediate table at all.
Simply do:
CREATE OR REPLACE Procedure PR_NAME
IS
BEGIN
  EXECUTE IMMEDIATE 'TRUNCATE TABLE TABLE_1';
  INSERT INTO TABLE_1 SELECT * FROM TABLE1_VIA_DBLINK;
  COMMIT;
END;
You don't need any exception handler. In case of a JOB you will not see the exception anyway. In case of a SCHEDULER JOB you can see the exception in the views
*_SCHEDULER_JOB_LOG
*_SCHEDULER_JOB_RUN_DETAILS
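For example, a quick way to check the outcome of recent scheduler runs (the column selection here is just an assumption of what is usually useful):
select job_name, status, error#, additional_info
from   user_scheduler_job_run_details
order  by log_date desc;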
If this is the only kind of operation you do, you should also consider a MATERIALIZED VIEW, which basically does the same thing: TRUNCATE and INSERT INTO ... SELECT * FROM ...
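A minimal sketch of the materialized view approach (the view name and the refresh options are assumptions):
CREATE MATERIALIZED VIEW mv_table_1
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  AS SELECT * FROM TABLE1_VIA_DBLINK;

-- refresh on your schedule; atomic_refresh => FALSE lets a complete refresh
-- use TRUNCATE + direct-path INSERT instead of DELETE + INSERT
BEGIN
  DBMS_MVIEW.REFRESH(list => 'MV_TABLE_1', method => 'C', atomic_refresh => FALSE);
END;
/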
No, you don't need to commit.
CREATE, TRUNCATE and DROP are DDL (Data Definition Language) commands, so Oracle Database issues a commit together with each of them.
DML (Data Manipulation Language) statements like UPDATE, INSERT and DELETE require a commit.
In a scenario where you update, delete and insert records and then run a CREATE TABLE command, the records you inserted, updated and deleted will be committed (saved) to the database by that DDL.
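A small illustration of that implicit commit (the table names here are hypothetical):
-- assume demo_src is an existing single-column table
INSERT INTO demo_src VALUES (1);                   -- not yet committed
CREATE TABLE demo_copy AS SELECT * FROM demo_src;  -- DDL: implicit commit happens here
ROLLBACK;                                          -- too late, the INSERT is already permanent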

Using EXECUTE IMMEDIATE to create a table in PL/SQL

I would just like to ask why this code does not create the table?
BEGIN
EXECUTE IMMEDIATE 'create table temp1 as (select * from table)';
COMMIT;
END;
When I try this, it creates the table but with no records/data:
BEGIN
EXECUTE IMMEDIATE 'drop table temp1';
EXECUTE IMMEDIATE 'create table temp1 as (select * from table)';
COMMIT;
END;
The table I am selecting from is a global temp table, which is why there is no data when I select, but when I run the report the output has data.
I am trying to fix a duplicate-data error, which is why I need to create the temp table.
Data in a global temporary table (GTT) is only accessible to the session that inserted it; other sessions will only see their own data. A common misunderstanding is that the data in a global temporary table is "global": only the table definition is global. The data in that table is local to the session that inserted it and is automatically cleared when the session ends (or commits, depending on what type of GTT it is).
If your report queries a GTT and finds data, there must have been a process which populated the global temp table first.
When you ran your "create table" statement (which, by the way, does not need to be explicitly committed), it queried the GTT and found no records, because the session in which you created the table had not first populated the GTT with the data needed.
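To illustrate the session-scoped behaviour (the table name and the ON COMMIT option here are only an example):
-- the definition is visible to every session; the data is private to each session
CREATE GLOBAL TEMPORARY TABLE gtt_demo (
  id NUMBER
) ON COMMIT PRESERVE ROWS;

-- session A
INSERT INTO gtt_demo VALUES (1);
SELECT COUNT(*) FROM gtt_demo;   -- 1

-- session B
SELECT COUNT(*) FROM gtt_demo;   -- 0: session B cannot see session A's rows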

BULK COLLECT/FORALL statements with dynamic query and table name - Oracle PL/SQL

I need help optimizing this code to use BULK COLLECT and FORALL statements. I have created backup tables (BCK_xxxx) to copy all data from the original tables (ORIG_xxx), but I am having problems converting this to BULK COLLECT. Most BULK COLLECT examples I have seen define the table name and structure up front using %ROWTYPE; however, I have hundreds of tables to back up, so the table name needs to be dynamic. This is my original script, which inserts/deletes data without BULK COLLECT and takes a lot of time:
DECLARE
  -- select all table names from backup tables (ex: BCK_tablename)
  CURSOR cur_temp_tbl IS
    SELECT table_name
    FROM   all_tables
    WHERE  owner = 'BCKUP'
    ORDER  BY 1;
  -- select all table names from original tables (ex: ORIG_tablename)
  CURSOR cur_original_tbl IS
    SELECT table_name
    FROM   all_tables
    WHERE  owner = 'ORIG'
    ORDER  BY 1;
  l_tbl_nm       VARCHAR2(30 CHAR);
  l_deleted_cnt  PLS_INTEGER := 0;
  l_inserted_cnt PLS_INTEGER := 0;
BEGIN
  -- first loop: delete all rows from the backup tables
  FOR a IN cur_temp_tbl LOOP
    l_tbl_nm := a.table_name;
    EXECUTE IMMEDIATE 'DELETE FROM ' || l_tbl_nm;
    l_deleted_cnt := l_deleted_cnt + 1;
  END LOOP;
  -- second loop: insert data from the original tables into the backup tables
  FOR b IN cur_original_tbl LOOP
    l_tbl_nm := b.table_name;
    CASE
      WHEN INSTR(l_tbl_nm, 'ORIG_') > 0 THEN
        l_tbl_nm := REPLACE(l_tbl_nm, 'ORIG_', 'BCK_');
      ELSE
        l_tbl_nm := 'BCK_' || l_tbl_nm;
    END CASE;
    EXECUTE IMMEDIATE 'INSERT INTO ' || l_tbl_nm || ' SELECT * FROM ' || b.table_name;
    l_inserted_cnt := l_inserted_cnt + 1;
  END LOOP;
  dbms_output.put_line('Deleted/truncated tables from backup: ' || l_deleted_cnt);
  dbms_output.put_line('No of tables inserted with data from original to backup: ' || l_inserted_cnt);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(SQLERRM);
    dbms_output.put_line(l_tbl_nm);
END;
I am thinking of adding the code below after my second loop, but I am having problems with how to declare the 'cur_tbl' cursor and the 'l_tbl_data' TABLE type. I cannot use %ROWTYPE, since the table name has to be dynamic and changes in each iteration of my second loop over the original table names:
TYPE CurTblTyp IS REF CURSOR;
cur_tbl CurTblTyp;
TYPE l_tbl_t IS TABLE OF tablename.%ROWTYPE;
l_tbl_data l_tbl_t ;
OPEN cur_tbl FOR 'SELECT * FROM :s ' USING b.table_name;
FETCH cur_tbl BULK COLLECT INTO l_tbl_data LIMIT 5000;
EXIT WHEN cur_tbl%NOTFOUND;
CLOSE cur_tbl;
FORALL i IN 1 .. l_tbl_data .count
EXECUTE IMMEDIATE 'insert into '||l_tbl_nm||' values (:1)' USING
l_tbl_data(i);
Hope you can help me and suggest how I can make this code much simpler. Thanks a lot.
It looks like you want to remove all rows from the existing backup tables, then re-copy the entire contents from the original tables to the backup tables. If this is correct, using DELETE for the deletion and any loop operation for the insert will be slow.
First, to remove the data, use TRUNCATE. Since you are going to repopulate, use the REUSE STORAGE option. This is the most efficient way to delete all rows from the table.
TRUNCATE TABLE <backup table> REUSE STORAGE;
Second, to repopulate, just INSERT with a SELECT.
INSERT INTO <backup table> SELECT * FROM <orig table>;
You can use these statements in your loops as you loop table by table. There is no need to cursor through the table rows; the set-based statements will be faster.
If you have a new table, you can do something similar with a CTAS...
CREATE TABLE <backup table> AS SELECT * FROM <orig_table>;
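Putting those two pieces together, a rough sketch of the per-table loop described above (the ORIG/BCKUP owners and the BCK_ naming rule come from the question; everything else is an assumption and error handling is omitted):
BEGIN
  FOR t IN (SELECT table_name
            FROM   all_tables
            WHERE  owner = 'ORIG'
            ORDER  BY 1) LOOP
    DECLARE
      -- derive the backup table name the same way the original script does
      l_bck_tbl VARCHAR2(128) :=
        CASE
          WHEN INSTR(t.table_name, 'ORIG_') > 0 THEN REPLACE(t.table_name, 'ORIG_', 'BCK_')
          ELSE 'BCK_' || t.table_name
        END;
    BEGIN
      -- empty the backup table but keep its allocated extents (note: TRUNCATE commits implicitly)
      EXECUTE IMMEDIATE 'TRUNCATE TABLE ' || l_bck_tbl || ' REUSE STORAGE';
      -- repopulate with a single set-based insert; no row-by-row processing needed
      EXECUTE IMMEDIATE 'INSERT INTO ' || l_bck_tbl || ' SELECT * FROM ' || t.table_name;
      COMMIT;
    END;
  END LOOP;
END;
/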
There is a third option in addition to the DELETE and TRUNCATE options: rename/drop. You rename the old backup tables and recreate the new backups with CTAS. If the create/insert is successful you drop the renamed tables; if the new backup fails you rename the prior old backups back to the initial backup names. You essentially trade temporary disk space usage for reduced redo generation.
You don't need bulk processing; CTAS is still faster than bulk processing.
Have you used FORCE DELETE?
It was first introduced by the Oracle Master J.B.E.
It is used to delete the data while ignoring any constraints the table may have, and it is a lot faster than other delete statements.
FORCE DELETE FROM <table_name>;

Delete all data from a table after selecting all data from the same table

All I want is to select all rows from a table, and once they have been selected and displayed, the data in the table must be completely deleted. The main constraint is that this must be done using SQL only, not PL/SQL. Is there a way to do this inside a package and call that package in a select statement? Please enlighten me here.
The dummy table is as follows:
ID   NAME   SALARY   DEPT
=========================
1    Sam    50000    HR
2    Max    45000    SALES
3    Lex    51000    HR
4    Nate   66000    DEV
Any help would be greatly appreciated.
select * from Table_Name;
delete from Table_Name;
To select the data from a SQL query, try using a pipelined function.
The function can define a cursor for the data you want (or all the data in the table) and loop through the cursor, piping each row as it goes.
When the cursor loop ends, i.e. all data has been consumed by your query, the function can perform a TRUNCATE of the table.
To select from the function, use the following syntax:
SELECT *
FROM TABLE(my_function)
See the following Oracle documentation for information pipelined functions - https://docs.oracle.com/cd/B28359_01/appdev.111/b28425/pipe_paral_tbl.htm
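A rough sketch of that idea, assuming a source table my_table with columns (id, name) and object/collection types created for it; the PRAGMA AUTONOMOUS_TRANSACTION is used because DDL such as TRUNCATE is not otherwise allowed inside a function called from a query:
CREATE TYPE my_row_t AS OBJECT (id NUMBER, name VARCHAR2(30));
/
CREATE TYPE my_tab_t IS TABLE OF my_row_t;
/
CREATE OR REPLACE FUNCTION select_and_truncate RETURN my_tab_t PIPELINED AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  -- pipe every row out to the caller
  FOR rec IN (SELECT id, name FROM my_table) LOOP
    PIPE ROW (my_row_t(rec.id, rec.name));
  END LOOP;
  -- all rows have been piped; clear the table
  EXECUTE IMMEDIATE 'TRUNCATE TABLE my_table';
  RETURN;
END;
/
SELECT * FROM TABLE(select_and_truncate);   -- returns the data and empties my_table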
This cannot be done inside a package, because "this must be done using sql only and not plsql", and a package is PL/SQL.
However, it is very simple. You want two things: select the table data and delete it. Two things, two commands.
select * from mytable;
truncate table mytable;
(You could replace truncate table mytable; with delete from mytable;, but this is slower and needs to be followed by commit; to confirm the deletion and end the transaction.)
Without PL/SQL it's not possible.
Using PL/SQL you can create a pipelined function which pipes each row and then deletes the data.
Here is an example:
drop table tempdate;
create table tempdate as
  select '1' id from dual
  UNION
  select '2' id from dual;

CREATE TYPE t_tf_row AS OBJECT (
  id NUMBER
);
/
CREATE TYPE t_tf_tab IS TABLE OF t_tf_row;
/
CREATE OR REPLACE FUNCTION get_tab_tf RETURN t_tf_tab PIPELINED AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  FOR rec IN (SELECT * FROM tempdate) LOOP
    PIPE ROW(t_tf_row(rec.id));
  END LOOP;
  -- all rows piped: now remove them from the table
  DELETE FROM tempdate;
  COMMIT;
  RETURN;
END;
/
select * from table(get_tab_tf);  -- returns the rows and then deletes them
select * from tempdate;           -- check here: the rows have been deleted
You can use the queries below:
select * from Table_demo;
delete from Table_demo;
The feature you seek is the SERIALIZABLE isolation level. This feature enables repeatable reads, which in particular guarantees that both the SELECT and the DELETE will read and process the same identical data.
Example
Alter session set isolation_level=serializable;
select * from tempdate;
-- now insert a new record from another session
delete from tempdate;
commit;
-- re-query the table: the old records are deleted, the new record is preserved.

Is it possible to move a record from one table to another using a single SQL statement?

I need a query to move a record from one table to another without using multiple statements.
No, you cannot move records in one SQL statement. You have to use an INSERT followed by a DELETE statement, and you should wrap the two statements in a transaction to make sure the move remains atomic.
START TRANSACTION;
INSERT INTO new_table
  SELECT *
  FROM   old_table
  WHERE  some_field = 'your_criteria';
DELETE FROM old_table
  WHERE some_field = 'your_criteria';
COMMIT;
If you really want to do this with a single SQL statement, one way is to create an "after delete" trigger on the source table that inserts the row into the target table. That way you can move a row from the source table to the target table simply by deleting it from the source table. Of course, this only works if you want an insert into the target table for every delete on the source table.
DELIMITER $$
DROP TRIGGER IF EXISTS TR_A_DEL_SOURCE_TABLE $$
CREATE TRIGGER TR_A_DEL_SOURCE_TABLE AFTER DELETE ON SOURCE_TABLE
FOR EACH ROW
BEGIN
  INSERT IGNORE INTO TARGET_TABLE (id, val1, val2) VALUES (old.id, old.val1, old.val2);
END $$
DELIMITER ;
So to move the row with id 42 from source table to target table:
delete from source_table where id = 42;
No - you might be able to do the INSERT in one nested statement, but you still need to remove the record.
There is no way to make it a single query, but if you HAVE to do it with a single statement from your application, you can create a stored procedure to do it for you.
DELIMITER $$
DROP PROCEDURE IF EXISTS `copydelete` $$
CREATE PROCEDURE `copydelete` (id INT)
BEGIN
  INSERT INTO New_Table SELECT * FROM Old_Table WHERE Old_Table.IdField = id;
  DELETE FROM Old_Table WHERE IdField = id;
END $$
DELIMITER ;
Then your new query is just:
CALL copydelete(4);
which will copy and then delete the rows WHERE IdField = 4.
Please note that the time delay between the INSERT ... SELECT and the DELETE can cause you to delete too much.
For a safer route, you could use a flag column:
update old_table set move_flag = 1 where your_criteria;
insert into ... select from ... where move_flag = 1;
delete from old_table where move_flag = 1;
Or use a transaction that locks old_table so no data can be added between the INSERT ... SELECT and the DELETE.