Restoring a Truncated Table from a Backup

I am restoring the data of a truncated table in an Oracle database from an exported CSV file. However, I find that the primary key column auto-increments instead of inserting the actual primary key values from the backed-up file.
I intend to do the following:
1. drop the primary key
2. import the table data
3. add primary key constraints on the required column
Is this a good approach? If not, what is recommended? Thanks.
EDIT: After more investigation, I found that a trigger generates NEXTVAL from a sequence and inserts it into the primary key column. That is the source of the problem, so following the procedure above would not solve it; the issue lies in the trigger (and/or sequence) on the table. This is solved!

It is easier to use your .csv as an external table and then:
1. create table your_table_temp as select * from the external table
2. examine the data in the new temp table to ensure you know what range of primary keys is present
3. do a MERGE from the temp table into the original table
Samples (from here and here):
CREATE TABLE countries_ext (
  country_code      VARCHAR2(5),
  country_name      VARCHAR2(50),
  country_language  VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_tab_data
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (
      country_code      CHAR(5),
      country_name      CHAR(50),
      country_language  CHAR(50)
    )
  )
  LOCATION ('Countries1.txt','Countries2.txt')
)
PARALLEL 5
REJECT LIMIT UNLIMITED;
and the merge
MERGE INTO employees e
USING hr_records h
ON (e.id = h.emp_id)
WHEN MATCHED THEN
UPDATE SET e.address = h.address
WHEN NOT MATCHED THEN
INSERT (id, address)
VALUES (h.emp_id, h.address);
Edit: after you have merged the data you can drop the temp table; the result is your previous table with the old data and the new data together.
Edit: you mention "During imports, the primary key column does not insert from the file, but auto-increments". This can only happen when there is a trigger on the table, most likely a BEFORE INSERT ... FOR EACH ROW trigger. Disable the trigger, do your import, then re-enable the trigger after committing your inserts.
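For example (trigger and table names here are placeholders):
alter trigger your_table_bi_trg disable;
-- run the import, then
commit;
alter trigger your_table_bi_trg enable;
-- or disable/re-enable every trigger on the table at once:
alter table your_table disable all triggers;
alter table your_table enable all triggers;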

I used the following procedure to solve it:
1. Drop the trigger:
drop trigger trigger_name;
2. Import the table data into the target table.
3. Drop the sequence:
drop sequence sequence_name;
4. Recreate the sequence with START WITH set just past the highest imported key:
CREATE SEQUENCE seq_name INCREMENT BY 1 START WITH start_index_for_next_val MAXVALUE max_val MINVALUE 1 NOCYCLE CACHE 20 NOORDER;
5. Recreate the trigger:
CREATE OR REPLACE TRIGGER "schema_name"."trigger_name"
before insert on target_table
for each row
begin
  select seq_name.nextval
  into :new.unique_column_name
  from dual;
end;
/
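For step 4, start_index_for_next_val should be one more than the highest key that was imported; a quick way to get it (assuming unique_column_name is the imported key column) is:
select max(unique_column_name) + 1 as start_index_for_next_val
from target_table;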


Missing Keyword Error in Oracle SQL Database

I was wondering how I can add an identity column to an existing Oracle table. I am using Oracle 11g. Suppose I have a table named DEGREE and I want to add an identity column to it.
FYI, the table is not empty.
You cannot do it in one step. Instead:
Alter the table and add the column (without primary key constraint)
ALTER TABLE DEGREE ADD (Ident NUMBER(10));
Fill the new column with data that will fulfil the primary key constraint (unique/not null), e.g.:
UPDATE DEGREE SET Ident=ROWNUM;
Alter the table and add the constraint to the column
ALTER TABLE DEGREE MODIFY (Ident PRIMARY KEY);
After that is done, you can set up a SEQUENCE and a BEFORE INSERT trigger to automatically set the id value for new records.
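For example, a minimal sketch of that sequence/trigger pair for DEGREE (object names are made up; choose a START WITH value above the current MAX(Ident)):
CREATE SEQUENCE degree_seq START WITH 1001 INCREMENT BY 1;  -- 1001 is just a placeholder

CREATE OR REPLACE TRIGGER degree_bi_trg
BEFORE INSERT ON DEGREE
FOR EACH ROW
BEGIN
  SELECT degree_seq.NEXTVAL INTO :new.Ident FROM dual;
END;
/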
From Oracle 12c you would use an identity column.
For example, say your table is called demo and has 3 columns and 100 rows:
create table demo (col1, col2, col3)
as
select dbms_random.value(1,10), dbms_random.value(1,10), dbms_random.value(1,10)
from dual connect by rownum <= 100;
You could add an identity column using:
alter table demo add demo_id integer generated by default on null as identity;
update demo set demo_id = rownum;
Then reset the internal sequence to match the data and prevent manual inserts:
alter table demo modify demo_id generated always as identity start with limit value;
and define it as the primary key:
alter table demo add constraint demo_pk primary key (demo_id);
This leaves the new column at the end of the column list, which shouldn’t normally matter (except for tables with a large number of columns and row chaining issues), but it looks odd when you describe the table. However, we can at least tidy up the dictionary order using the invisible/visible hack:
SQL> desc demo
Name                             Null?    Type
-------------------------------- -------- ----------------------
COL1                                      NUMBER
COL2                                      NUMBER
COL3                                      NUMBER
DEMO_ID                          NOT NULL NUMBER(38)
begin
  for r in (
    select column_name
    from   user_tab_columns c
    where  c.table_name = 'DEMO'
    and    c.column_name <> 'DEMO_ID'
    order by c.column_id
  )
  loop
    execute immediate 'alter table demo modify '||r.column_name||' invisible';
    execute immediate 'alter table demo modify '||r.column_name||' visible';
  end loop;
end;
/
SQL> desc demo
Name                             Null?    Type
-------------------------------- -------- ----------------------
DEMO_ID                          NOT NULL NUMBER(38)
COL1                                      NUMBER
COL2                                      NUMBER
COL3                                      NUMBER
One thing you can't do (as of Oracle 18.0) is alter an existing column to make it into an identity column, so you have to either go through a process like the one above but copying the existing values and finally dropping the old column, or else define a new table explicitly with the identity column in place and copy the data across in a separate step. Otherwise you'll get:
-- DEMO_ID column exists but is currently not an identity column:
alter table demo modify demo_id generated by default on null as identity start with limit value;
-- Fails with:
ORA-30673: column to be modified is not an identity column
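A minimal sketch of that first route, assuming a hypothetical table t whose existing populated key column old_id should become the identity column:
alter table t add new_id integer generated by default on null as identity;
update t set new_id = old_id;
alter table t modify new_id generated always as identity start with limit value;
alter table t drop column old_id;
alter table t rename column new_id to old_id;  -- optional: keep the original column name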
Add the column:
alter table table_name add (id INTEGER);
Create a sequence table_name_id_seq with a START WITH clause, using the number of rows in the table + 1 or another safe value (we don't want duplicate ids).
Lock the table (no inserts):
lock table table_name in exclusive mode;
Fill the column:
update table_name set id = rownum; -- or another logic
Add a trigger that automatically sets the id on insert using the sequence (you can find examples on the internet, for example this answer); a sketch of the whole procedure follows below.
When you run the CREATE TRIGGER statement the lock is released, because DDL statements commit implicitly.
Also, it is best to add a unique constraint on the id column.
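Here is a minimal sketch of those steps put together, using a hypothetical table my_table; the sequence start value is computed from the current row count:
alter table my_table add (id integer);

-- create the sequence so it starts just past the rows we are about to number
declare
  v_start pls_integer;
begin
  select count(*) + 1 into v_start from my_table;
  execute immediate 'create sequence my_table_id_seq start with ' || v_start || ' increment by 1';
end;
/

lock table my_table in exclusive mode;   -- block concurrent inserts
update my_table set id = rownum;         -- number the existing rows 1..n

-- DDL commits, so creating the trigger also releases the lock
create or replace trigger my_table_id_trg
  before insert on my_table
  for each row
begin
  :new.id := my_table_id_seq.nextval;    -- 11g+ syntax; use SELECT ... INTO on older versions
end;
/

alter table my_table add constraint my_table_id_uk unique (id);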
For Oracle:
CREATE TABLE new_table AS (SELECT ROWNUM AS id, ta.* FROM old_table ta);
Remember that this id column is not auto-incremented.

Delete and Copy Big Table with Autoincrement

I want to delete many rows (More than a million) from a big table.
My table is like this:
Create table MY_TABLE (
MY_ID NUMBER GENERATED BY DEFAULT AS IDENTITY (Start with 1) primary key,
PROCESS NUMBER,
INFORMATION VARCHAR2(100)
);
Instead of using "delete from MY_TABLE where PROCESS = 3"
I do:
CREATE TABLE BCK_MY_TABLE AS (SELECT * FROM MY_TABLE WHERE PROCESS <> 3);
DROP TABLE MY_TABLE;
RENAME BCK_MY_TABLE to MY_TABLE;
The problem is: when I create another table (BCK_MY_TABLE) I lose the auto-increment on the column MY_ID. What can I do?
There isn't a straightforward way to do this with 'create table as select' (CTAS), because my_id in the new table won't be an identity column, and you can't make existing columns into identity columns.
One way would be to create the table explicitly with an identity column, copy the data and reset the identity value:
create table bck_my_table
( my_id number generated by default as identity primary key
, process number
, information varchar2(100) );
insert into bck_my_table (my_id, process, information)
select my_id, process, information from my_table;
alter table bck_my_table
modify my_id generated always as identity start with limit value;
(We have to use generated by default so the column is updatable, then change it to generated always to prevent further changes.)
Another way would be to copy the table using CTAS then add a new identity column, update it from the old my_id, reset it using start with limit value, drop the old column and rename the new one.
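A minimal sketch of that second route, using the table from the question (my_id_new is a hypothetical interim column name):
create table bck_my_table as
select * from my_table where process <> 3;

alter table bck_my_table add my_id_new number generated by default on null as identity;
update bck_my_table set my_id_new = my_id;

alter table bck_my_table modify my_id_new generated always as identity start with limit value;

alter table bck_my_table drop column my_id;
alter table bck_my_table rename column my_id_new to my_id;
alter table bck_my_table add constraint bck_my_table_pk primary key (my_id);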

Storing date & time and updater/user of row data in Oracle Apex

Looking for an ideal way to store (1) the date & time of an update and (2) the updater/user per row of table data using Oracle APEX. I am thinking of adding two extra columns to store the info and am trying to come up with a good approach for how changes per row can be tracked.
If you want to create logs of inserts, updates and deletes on your table, adding two columns is not enough. Each new update would erase the previous value, and deletes could not be logged at all. So you need to store the log in a table separate from the data table, and fill it with triggers created on your data table. If you want a sample I can provide one.
Here is a simplified example. Of course, in real life the data will be more complex and the trigger will probably need to be smarter, but this is a simple starting point for creating your own. After executing the code below, try to insert, update and delete records in table TEST_DATA and see what happens in TEST_LOG.
Create data table
create table TEST_DATA (
UNID number,
COL_B varchar2(50)
);
-- Create/Recreate primary, unique and foreign key constraints
alter table TEST_DATA
add constraint PK_TEST_DATA_UNID primary key (UNID);
Create log table for it
create table TEST_LOG (
UNID number,
OPERATION varchar2(1),
COL_OLD varchar2(50),
COL_NEW varchar2(50),
CHNGUSER varchar2(50),
CHNGDATE date
);
and finally create trigger which tracks changes
create or replace trigger TR_LOG_TEST_DATA
after update or insert or delete on TEST_DATA
referencing new as new old as old
for each row
begin
  if inserting then
    insert into TEST_LOG
      (UNID, OPERATION, COL_OLD, COL_NEW, CHNGUSER, CHNGDATE)
    values
      (:new.unid, 'I', null, :new.col_b, user, sysdate);
  end if;
  if updating then
    insert into TEST_LOG
      (UNID, OPERATION, COL_OLD, COL_NEW, CHNGUSER, CHNGDATE)
    values
      (:new.unid, 'U', :old.col_b, :new.col_b, user, sysdate);
  end if;
  if deleting then
    insert into TEST_LOG
      (UNID, OPERATION, COL_OLD, COL_NEW, CHNGUSER, CHNGDATE)
    values
      (:old.unid, 'D', :old.col_b, null, user, sysdate);
  end if;
end;
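One APEX-specific detail worth checking (this is an assumption about your setup, not part of the example above): inside a trigger, USER returns the database session user, which for an APEX application is typically the pooled/parsing schema rather than the person using the app. If the changes come through APEX, you could populate CHNGUSER with something like:
-- hypothetical replacement for "user" in the inserts above (APEX 5+ sets the APEX$SESSION context)
nvl(sys_context('APEX$SESSION', 'APP_USER'), user)
-- or, on older APEX versions:
nvl(v('APP_USER'), user)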

How to use PostgreSql trigger and procedure to audit/restore parent-child tables

We want to keep the editing history of some tables and restore them if necessary. For example, we have the following tables. We want to audit the emp table when an insert/delete action is performed. Besides this, when an update on the version field happens, we also need to save a copy of the related emp_addr records to emp_addr_audit. When users want to roll back to a specific version of an emp, we need to restore the record from emp_audit and emp_addr_audit.
I am thinking of using a trigger to do the audit work and a procedure to do the restore work. I know the key part is how to maintain the integrity of the parent-child tables during the audit and restore work. I need some advice. Thanks.
create table emp (
emp_id integer primary key,
version varchar(50)
);
/* Address table */
create table emp_addr (
addr_id integer primary key,
emp_id integer, -- references table emp
line1 varchar(30)
);
/* Audit table for emp table */
create table emp_audit (
operation character(1),
updatetime timestamp,
emp_id integer,
version varchar(50)
);
/* Audit table for emp_addr table */
create table emp_addr_audit (
operation character(1),
addr_id integer,
emp_id integer,
line1 varchar(30)
);
I'd recommend adding an id field as a primary key to the audit tables, since you need to reference the emp audit record when rolling back to it, and you also need that record to reference the corresponding version of emp_addr.
So audit table DDLs should look like this:
/* Audit table for emp table */
create table emp_audit (
id bigserial,
operation character(1),
updatetime timestamp,
emp_id integer,
version varchar(50),
CONSTRAINT audit_emp_id PRIMARY KEY (id)
);
/* Audit table for emp_addr table */
create table emp_addr_audit (
id bigserial,
emp_audit_id bigint,
operation character(1),
addr_id integer,
emp_id integer,
line1 varchar(30),
CONSTRAINT fk_audit_emp_id FOREIGN KEY (emp_audit_id)
REFERENCES emp_audit (id) MATCH SIMPLE ON UPDATE CASCADE ON DELETE CASCADE
);
Next you will need to create triggers to store the changes. Note that you will have to monitor changes in both tables and create references to the corresponding stored records.
CREATE TRIGGER t_audit_emp_IUD
AFTER INSERT OR UPDATE OR DELETE -- probably you only want UPDATE; adjust as needed
ON emp
FOR EACH ROW
EXECUTE PROCEDURE emp_modified();
CREATE TRIGGER t_audit_emp_addr_IUD
AFTER INSERT OR UPDATE OR DELETE
ON emp_addr
FOR EACH ROW
EXECUTE PROCEDURE emp_addr_modified();
And finally, define the functions. Note that the functions must exist in the database before the triggers are created, since the triggers reference them.
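For example, a minimal sketch of what emp_modified() might look like (emp_addr_modified() would follow the same pattern; linking the two audit rows via emp_audit_id is left out here):
CREATE OR REPLACE FUNCTION emp_modified() RETURNS trigger AS
$$
BEGIN
  IF TG_OP = 'DELETE' THEN
    INSERT INTO emp_audit (operation, updatetime, emp_id, version)
    VALUES ('D', now(), OLD.emp_id, OLD.version);
    RETURN OLD;
  ELSIF TG_OP = 'UPDATE' THEN
    INSERT INTO emp_audit (operation, updatetime, emp_id, version)
    VALUES ('U', now(), NEW.emp_id, NEW.version);
    RETURN NEW;
  ELSE  -- INSERT
    INSERT INTO emp_audit (operation, updatetime, emp_id, version)
    VALUES ('I', now(), NEW.emp_id, NEW.version);
    RETURN NEW;
  END IF;
END;
$$ LANGUAGE plpgsql;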
The rollback function should take emp_audit.id as an input and restore the state according to the audit tables. It would be a good idea to save the current state before rolling back, to prevent possible data loss.
If this doesn't answer your question, please clarify which part you actually need help with.

Oracle SQL to change column type from number to varchar2 while it contains data

I have a table (that contains data) in Oracle 11g and I need to use Oracle SQLPlus to do the following:
Target: change the type of column TEST1 in table UDA1 from number to varchar2.
Proposed method:
backup table
set column to null
change data type
restore values
The following didn't work.
create table temp_uda1 AS (select * from UDA1);
update UDA1 set TEST1 = null;
commit;
alter table UDA1 modify TEST1 varchar2(3);
insert into UDA1(TEST1)
select cast(TEST1 as varchar2(3)) from temp_uda1;
commit;
Does this have something to do with indexes (to preserve the order)?
create table temp_uda1 (test1 integer);
insert into temp_uda1 values (1);
alter table temp_uda1 add (test1_new varchar2(3));
update temp_uda1
set test1_new = to_char(test1);
alter table temp_uda1 drop column test1 cascade constraints;
alter table temp_uda1 rename column test1_new to test1;
If there was an index on the column you need to re-create it.
Note that the update will fail if you have numbers in the old column that are greater than 999. If you do, you need to increase the size of the varchar2 column accordingly.
Add new column as varchar2, copy data to this column, delete old column, rename new column as actual column name:
ALTER TABLE UDA1
ADD (TEST1_temp VARCHAR2(16));
update UDA1 set TEST1_temp = TEST1;
ALTER TABLE UDA1 DROP COLUMN TEST1;
ALTER TABLE UDA1
RENAME COLUMN TEST1_temp TO TEST1;
Look at Oracle's package DBMS_REDEFINITION. With some luck you can do it online without downtime - if needed. Otherwise you can:
Add new VARCHAR2 column
Use update to copy NUMBER into VARCHAR2
Drop NUMBER column
Rename VARCHAR2 column
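A rough sketch of the DBMS_REDEFINITION route for the table in the question, assuming UDA1 has a primary key (the default CONS_USE_PK method); the usual CAN_REDEF_TABLE / COPY_TABLE_DEPENDENTS calls are left out for brevity, and the interim table and col_mapping must list every UDA1 column you want to keep:
CREATE TABLE uda1_interim (
  TEST1 VARCHAR2(3)
  -- ... plus the remaining UDA1 columns, unchanged
);

BEGIN
  DBMS_REDEFINITION.start_redef_table(
    uname       => user,
    orig_table  => 'UDA1',
    int_table   => 'UDA1_INTERIM',
    col_mapping => 'TO_CHAR(TEST1) TEST1');  -- convert while copying; list the other columns here too
  DBMS_REDEFINITION.finish_redef_table(
    uname      => user,
    orig_table => 'UDA1',
    int_table  => 'UDA1_INTERIM');
END;
/

DROP TABLE uda1_interim PURGE;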
Here you go; this solution does not impact the existing NOT NULL or primary key constraints. Here I am going to change the type of the primary key column from NUMBER to VARCHAR2. Here are the steps on an example table employee.
Take a backup of the table (and of its indexes and constraints):
create table employee_bkp as select * from employee;
Truncate the table to empty it:
truncate table employee;
Alter the table to change the column type:
ALTER TABLE employee MODIFY employee_id varchar2(30);
Copy the data back from the backup table:
insert into employee select * from employee_bkp;
commit;
Verify
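For example (names as in the steps above):
SELECT column_name, data_type, data_length
FROM   user_tab_columns
WHERE  table_name = 'EMPLOYEE'
AND    column_name = 'EMPLOYEE_ID';

SELECT (SELECT COUNT(*) FROM employee)     AS restored_rows,
       (SELECT COUNT(*) FROM employee_bkp) AS backup_rows
FROM   dual;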