How to compute a column value in Oracle 10g? - sql

create table ord_tbl
(
ord_id number(10) primary key,
ord_name varchar2(20),
quantity number(20),
cost_per_item number(30),
total_cost number(30), -- this column should be (quantity*cost_per_item)
ord_date date
)
So when I insert rows, the 'total_cost' should automatically be computed and stored in the table.

Oracle 10g doesn't have this feature. Instead, use a view:
create table ord_tbl
(
ord_id number(10) primary key,
ord_name varchar2(20),
quantity number(20),
cost_per_item number(30),
ord_date date
);
create view vw_ord_tbl as
select ord_id, ord_name, quantity, cost_per_item, (quantity*cost_per_item) as total_cost, ord_date
from ord_tbl;
The alternative would be to keep the column in the table and maintain its value with a trigger -- for both inserts and updates. I would suggest using the view, because triggers add a lot of maintenance overhead.
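Querying the view then looks just like querying a table that has the computed column; for example (the filter value is arbitrary):
select ord_id, ord_name, total_cost
from vw_ord_tbl
where total_cost > 100;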
EDIT (by Jason):
In 11g you can create a virtual column in the table definition.
create table ord_tbl (
ord_id number(10) primary key,
ord_name varchar2(20),
quantity number(20),
cost_per_item number(30),
total_cost as (quantity*cost_per_item),
ord_date date
)
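With the virtual column you insert only the real columns and Oracle computes total_cost when the row is read; a quick sketch with made-up values:
insert into ord_tbl (ord_id, ord_name, quantity, cost_per_item, ord_date)
values (1, 'sample order', 5, 10, sysdate);
select ord_id, total_cost from ord_tbl; -- total_cost is 50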

Like Gordon Linoff answered, you can create a view. Alternatively you can create a trigger and store the actual value:
create or replace trigger tiua_ord_tbl
before insert or update on ord_tbl
for each row
begin
:new.total_cost := :new.quantity * :new.cost_per_item;
end;
/
The advantage of storing the data is that you can access it faster and index it if you need to. The disadvantage is that you store redundant data (it can be calculated at runtime) and it takes more storage space.
I would advise using the view at first, and only start storing the value in the table if you need this 'cached' version for performance.
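If you later decide to store total_cost (via the trigger above), indexing it is straightforward; even with the view-only approach, a function-based index on the expression is possible. A small sketch (index names are arbitrary):
-- index on the stored column (only if total_cost exists in the table)
create index ord_tbl_total_cost_idx on ord_tbl (total_cost);
-- or, with the view approach, a function-based index on the expression
create index ord_tbl_total_cost_fbi on ord_tbl (quantity * cost_per_item);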

Related

Trigger after insert to check and compare records between table

In an Oracle database, I need to create a trigger or procedure to handle this case in the most performant way possible (it is an extremely large amount of data).
I have a table called ORDER_A that every day receives a full load (it is truncated, and all records are inserted again).
I have a table called ORDER_B which is a copy of ORDER_A, containing the same data and some additional control dates.
Each insertion on ORDER_A must trigger a process that looks for a record with the same identifier (primary key: order_id) in table B.
If a record exists with the same order_id, and any of the other columns have changed, an update must be performed on table B
If a record exists with the same order_id, and no values in the other columns have been modified, nothing should be performed; the record must remain the same in table B.
If there is no record with the same order_id, it must be inserted in table B.
My tables are like this
CREATE TABLE ORDER_A
(
ORDER_ID NUMBER NOT NULL,
ORDER_CODE VARCHAR2(50),
ORDER_STATUS VARCHAR2(20),
ORDER_USER_ID NUMBER,
ORDER_DATE TIMESTAMP(6),
PRIMARY KEY (ORDER_ID)
);
CREATE TABLE ORDER_B
(
ORDER_ID NUMBER NOT NULL,
ORDER_CODE VARCHAR2(50),
ORDER_STATUS VARCHAR2(20),
ORDER_USER_ID NUMBER,
ORDER_DATE TIMESTAMP(6),
INSERT_AT TIMESTAMP(6),
UPDATED_AT TIMESTAMP(6),
PRIMARY KEY (ORDER_ID)
);
I have no idea how to do this and what is the best way (with a trigger, procedure, using merge, etc.)
Can someone give me a direction, please?
Here is some pseudo-code to show you a potential trigger based solution that does not fall back into slow row-by-row processing.
create or replace trigger mytrg
for insert or update or delete on ordera
compound trigger
pklist sys.odcinumberlist;
before statement is
begin
pklist := sys.odcinumberlist();
end before statement ;
after each row is
begin
pklist.extend;
pklist(pklist.count) := :new.order_id;
end after each row;
after statement is
begin
merge into orderb b
using (
select a.*
from ordera a,
table(pklist) t
where a.order_id = t.column_value
) m
on (b.order_id = m.order_id)
when matched then
update
set b.order_code = m.order_code,
b.order_status = m.order_status,
...
where decode(b.order_code,m.order_code,0,1)=1
or decode(b.order_status,m.order_status,0,1)=1
....
when not matched then
insert (b.order_id,b.order_code,....)
values (m.order_id,m.order_code,....);
end after statement ;
end;
We collect the impacted primary keys, and then run a single MERGE at the end of the statement, with a WHERE clause embedded in the UPDATE to minimise unnecessary update activity.
If your application allows updates of primary keys, you'd need some additions, but this should get you started.
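Since ORDER_A is fully reloaded once a day, another option worth benchmarking is skipping the trigger entirely and running a single MERGE right after the load. A sketch under that assumption, using the columns from the question (how INSERT_AT/UPDATED_AT are maintained here is my assumption):
merge into order_b b
using order_a a
on (b.order_id = a.order_id)
when matched then
update
set b.order_code = a.order_code,
b.order_status = a.order_status,
b.order_user_id = a.order_user_id,
b.order_date = a.order_date,
b.updated_at = systimestamp
where decode(b.order_code, a.order_code, 0, 1) = 1
or decode(b.order_status, a.order_status, 0, 1) = 1
or decode(b.order_user_id, a.order_user_id, 0, 1) = 1
or decode(b.order_date, a.order_date, 0, 1) = 1
when not matched then
insert (b.order_id, b.order_code, b.order_status, b.order_user_id, b.order_date, b.insert_at)
values (a.order_id, a.order_code, a.order_status, a.order_user_id, a.order_date, systimestamp);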

Multiple autoincrement ids based on table column

I need help in database design.
I have following tables.
Pseudo code:
Table order_status {
id int[pk, increment]
name varchar
}
Table order_status_update {
id int[pk, increment]
order_id int[ref: > order.id]
order_status_id int[ref: > order_status.id]
updated_at datetime
}
Table order_category {
id int[pk, increment]
name varchar
}
Table file {
id int[pk, increment]
order_id int[ref: > order.id]
key varchar
name varchar
path varchar
}
Table order {
id int [pk] // primary key
order_status_id int [ref: > order_status.id]
order_category_id int [ref: > order_category.id]
notes varchar
attributes json // no of attributes is not fixed, hence needed a json column
}
Everything was okay, but now I need an auto-increment id for each type of order_category_id column.
For example, if I have 2 categories, electronics and toys, then I would need values like electronics-1, toy-1, toy-2, electronics-2, electronics-3, toy-3, toy-4, toy-5 associated with rows of the order table. But that's not possible, since auto-increment increments on each new row regardless of the category column.
In other words, for table order instead of
id order_category_id
---------------------
1 1
2 1
3 1
4 2
5 1
6 2
7 1
I need the following:
id order_category_id pretty_ids
----------------------------
1 1 toy-1
2 1 toy-2
3 1 toy-3
4 2 electronics-1
5 1 toy-4
6 2 electronics-2
7 1 toy-5
What I tried:
I created a separate table for each order category (not an ideal solution, but currently I have 6 order categories, so it works for now).
Now I have electronics_order and toys_order tables. The columns are repetitive, but it works. But now I have another problem: every relationship with other tables got ruined. Since both electronics_order and toys_order can have the same id, I cannot use the id column to reference the order_status_update, order_status and file tables.
I could create another order_category column in each of these tables, but would that be the right way? I am not experienced in database design, so I would like to know how others do it.
I also have a side question.
Do I need tables for order_category and order_status just to store names? These values will not change much, and I could store them in code and save them in columns of the order table.
I know separate tables are good for flexibility, but I have to query the database twice to fetch order_status and order_category by name before inserting a new row into the order table. And later it means multiple joins when querying the order table.
--
If it helps, I am using flask-sqlalchemy in backend and postgresql as database server.
To track the increment id per order_category, we can keep this value in another table; let us call it order_category_sequence. To show my solution, I created a simplified version of the order table together with order_category.
CREATE TABLE order_category (
id SERIAL PRIMARY KEY,
name VARCHAR(100) NULL
);
CREATE TABLE order_category_sequence (
id SERIAL PRIMARY KEY,
order_category_id int NOT NULL,
current_key int not null
);
Alter Table order_category_sequence Add Constraint "fk_order_category_id" FOREIGN KEY (order_category_id) REFERENCES order_category (id);
Alter Table order_category_sequence Add Constraint "uc_order_category_id" UNIQUE (order_category_id);
CREATE TABLE "order" (
id SERIAL PRIMARY KEY,
order_category_id int NOT NULL,
pretty_id VARCHAR(100) null
);
Alter Table "order" Add Constraint "fk_order_category_id" FOREIGN KEY (order_category_id) REFERENCES order_category (id);
The order_category_id column in the order_category_sequence table references order_category. The current_key column holds the last value used for that category's orders.
When a new order row is added, we can use a trigger to read the last value from order_category_sequence and update pretty_id. The following trigger definition can be used to achieve this.
--function called every time a new order is added
CREATE OR REPLACE FUNCTION on_order_created()
RETURNS trigger AS
$BODY$
DECLARE
current_pretty_id varchar(100);
BEGIN
-- increment last value of the corresponding order_category_id in the sequence table
Update order_category_sequence
set current_key = (current_key + 1)
where order_category_id = NEW.order_category_id;
--prepare the pretty_id
Select
oc.name || '-' || s.current_key AS current_pretty_id
FROM order_category_sequence AS s
JOIN order_category AS oc on s.order_category_id = oc.id
WHERE s.order_category_id = NEW.order_category_id
INTO current_pretty_id;
--update order table
Update "order"
set pretty_id = current_pretty_id
where id = NEW.id;
RETURN NEW;
END;
$BODY$ LANGUAGE plpgsql;
CREATE TRIGGER order_created
AFTER INSERT
ON "order"
FOR EACH ROW
EXECUTE PROCEDURE on_order_created();
If we want to synchronize the two table, order_category and order_category_sequence, we can use another trigger to have a row in the latter table every time a new order category is added.
--function called every time a new order_category is added
CREATE OR REPLACE FUNCTION on_order_category_created()
RETURNS trigger AS
$BODY$
BEGIN
--insert a new row for the newly inserted order_category
Insert into order_category_sequence(order_category_id, current_key)
values (NEW.id, 0);
RETURN NEW;
END;
$BODY$ LANGUAGE plpgsql;
CREATE TRIGGER order_category_created
AFTER INSERT
ON order_category
FOR EACH ROW
EXECUTE PROCEDURE on_order_category_created();
Testing query and result:
Insert into order_category(name)
values ('electronics'),('toys');
Insert into "order"(order_category_id)
values (1),(2),(2);
select * from "order";
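With the inserts above, order_category gets electronics = 1 and toys = 2, so the final select should return something like:
| id | order_category_id | pretty_id |
| --- | ----------------- | ------------- |
| 1 | 1 | electronics-1 |
| 2 | 2 | toys-1 |
| 3 | 2 | toys-2 |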
Regarding your side question, I prefer to store lookup values like order_status and order_category in separate tables. Doing this gives you the flexibility described above and makes changes easy.
To answer your side question: yes, you should keep tables with names in them, for a number of reasons. First of all, such tables are small and generally kept in memory by the database, so there is negligible performance benefit to not using them. Second, you want to be able to use external tools to query the database and generate reports, and you want these kinds of labels available to those tools. Third, you want to minimize the coupling of your software to the actual data so that they can evolve independently. Adding a new category should not require modifying your software.
Now, to the main question, there is no built-in facility for the kind of auto-increment you want. You have to build it yourself.
I suggest you keep the sequence number for each category as a column in the category table. Then you can update it and use the updated sequence number in the order table, like this (which is specific to PostgreSQL):
-- set up the tables
create table orders (
id SERIAL PRIMARY KEY,
order_category_id int,
pretty_id VARCHAR
);
create unique index order_category_pretty_id_idx
on orders (pretty_id);
create table order_category (
id SERIAL PRIMARY KEY,
name varchar NOT NULL,
seq int NOT NULL default 0
);
-- create the categories
insert into order_category
(name) VALUES
('toy'), ('electronics');
-- create orders, specifying the category ID and generating the pretty ID
WITH
new_category_id (id) AS (VALUES (1)), -- 1 here is the category ID for the new order
pretty AS (
UPDATE order_category
SET seq = seq + 1
WHERE id = (SELECT id FROM new_category_id)
RETURNING *
)
INSERT into orders (order_category_id, pretty_id)
SELECT new_category_id.id, concat(pretty.name, '-', pretty.seq)
FROM new_category_id, pretty;
You just plug in your category ID where I have 1 in the example and it will create the new pretty_id for that category. The first category will be toy-1, the next toy-2, etc.
| id | order_category_id | pretty_id |
| --- | ----------------- | ------------- |
| 1 | 1 | toy-1 |
| 2 | 1 | toy-2 |
| 3 | 2 | electronics-1 |
| 4 | 1 | toy-3 |
| 5 | 2 | electronics-2 |
To get toy-1, toy-2, toy-3 you should repeat the logic of order_status_update; there is no real difference between tracking a status by time or by count.
It is just simpler in order_status_update, because you only put now() into updated_at; for, let's say, an order_category_track table you would take the last value + 1, or have a separate sequence per category (which I would not recommend, because it binds database objects to the data in the database).
I would change the schema to separate these concerns.
This schema can still end up in an inconsistent state, but in my opinion your application has three different entities, "order", "order_status" and "order category track", which live their own lives.
And it is still almost impossible to achieve a consistent state for this task without locks, for example. The task is complicated by the condition that each new row depends on the previous one, which goes against the set-based nature of SQL.
I would suggest splitting category into a 2-level hierarchy: category (toy, electronic) and subcategory (toy-1, toy-2, electronic-1, etc.).
You can then have the column order_subcategory.full_name contain the compiled "toy-1" value, or create a view to build this field on the fly:
select oc.name || '-' || os.number
from order_category as oc
join order_subcategory as os on oc.id = os.category_id
https://dbdiagram.io/d/5dd6a132edf08a25543e34f8
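A minimal sketch of that split (column names are assumptions based on the select above):
CREATE TABLE order_subcategory (
id serial PRIMARY KEY,
category_id int NOT NULL REFERENCES order_category (id),
number int NOT NULL, -- per-category counter: 1, 2, 3, ...
full_name varchar, -- optional pre-compiled value such as 'toy-1'
UNIQUE (category_id, number)
);
-- or build the name on the fly instead of storing full_name
CREATE VIEW order_subcategory_named AS
SELECT os.id, oc.name || '-' || os.number AS full_name
FROM order_category AS oc
JOIN order_subcategory AS os ON oc.id = os.category_id;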
Regarding your question "Do I need tables for order_category and order_status just to store names?":
It is best practice to store this kind of data in separate dictionary tables. It gives you consistency and reliability, and querying those tables is very fast and easy for an RDBMS, so feel free to use them.
I'll focus on only 3 tables you showed: order, order_status and order_category.
Creating a new table for each new category is not the right way. From your explanation, I think you are trying to use the order and order_category tables as a many-to-many relationship. If so, what you need is a pivot (junction) table.
Here I have added the order_status column to the order table;
you can add this column to whichever of these tables fits your needs.
Side question:
For order_status, if the set of statuses is fixed (like only ACTIVE and INACTIVE, with no more values in the future), it would be better to use a column with an ENUM type.
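In PostgreSQL that could look like this (the type name and the column placement are just illustrative):
-- fixed set of statuses as an ENUM instead of a lookup table
CREATE TYPE order_status_type AS ENUM ('ACTIVE', 'INACTIVE');
-- hypothetical use on the order table
ALTER TABLE "order" ADD COLUMN status order_status_type NOT NULL DEFAULT 'ACTIVE';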
The easy answer would be to respond directly to your question, but I do not think that would be a good thing in this case, so I will do otherwise.
I think that maybe the whole design is wrong.
First things first: clarification of your business needs and assertions.
One order can have multiple categories
One category can concern multiple orders
One order can only have one status at a time but multiple through time
One status can be used by multiple orders
One order correspond to a file (probably a billing proof)
One file concerns only one order
Second: Advice
There is a small set of reserved keywords that you must not use in a production environment (https://www.postgresql.org/docs/current/sql-keywords-appendix.html). So, for example, I replace the word 'order' with 'command'.
A remaining question that must be answered before production: why the attributes column in your 'order' table? There is a risk of violating normal forms here (https://www.geeksforgeeks.org/normal-forms-in-dbms/).
Third: design solution
This is normally enough to give you a good start. But I want to have a little more fun :) So...
Fourth: questions about the required performance
Estimate the load per day/month on command (ten million rows per month?)
Fifth: physical solution proposal
Archiving in another tablespace (trigger when cancelled or terminated => archived)
Indexes in another tablespace (your DBA will thank you for that)
Possible partitioning of the command table (https://pgxn.org/dist/pg_partman/doc/pg_partman.html, https://www.postgresql.org/docs/current/ddl-partitioning.html)
Hardware and option choices (high availability? disaster recovery? if so, it needs a bit of further study)
Data transposition (is it really needed? if so, it needs a bit of further study)
The finaaaaaal code-down! (with the good music)
-- as a postgres user
CREATE DATABASE command_system;
CREATE SCHEMA in_prgoress_command;
CREATE SCHEMA archived_command;
--DROP SCHEMA public;
-- create tablespaces on other location than below
CREATE TABLESPACE command_indexes_tbs location 'c:/Data/indexes';
CREATE TABLESPACE archived_command_tbs location 'c:/Data/archive';
CREATE TABLESPACE in_progress_command_tbs location 'c:/Data/command';
CREATE TABLE in_prgoress_command.command
(
id bigint /*or bigserial if you use a INSERT RETURNING clause*/ primary key
, notes varchar(500)
, fileULink varchar (500)
)
TABLESPACE in_progress_command_tbs;
CREATE TABLE archived_command.command
(
id bigint /*or bigserial if you use a INSERT RETURNING clause*/ primary key
, notes varchar(500)
, fileULink varchar (500)
)
TABLESPACE archived_command_tbs;
CREATE TABLE in_prgoress_command.category
(
id int primary key
, designation varchar(45) NOT NULL
)
TABLESPACE in_progress_command_tbs;
INSERT INTO in_prgoress_command.category
VALUES (1,'Toy'), (2,'Electronic'), (3,'Leather'); --non-exhaustive list
CREATE TABLE in_prgoress_command.status
(
id int primary key
, designation varchar (45) NOT NULL
)
TABLESPACE in_progress_command_tbs;
INSERT INTO in_prgoress_command.status
VALUES (1,'Shipping'), (2,'Cancel'), (3,'Terminated'), (4,'Paid'), (5,'Initialised'); --non-exhaustive list
CREATE TABLE in_prgoress_command.command_category
(
id bigserial primary key
, idCategory int
, idCommand bigint
)
TABLESPACE in_progress_command_tbs;
ALTER TABLE in_prgoress_command.command_category
ADD CONSTRAINT fk_command_category_category FOREIGN KEY (idCategory) REFERENCES in_prgoress_command.category(id);
ALTER TABLE in_prgoress_command.command_category
ADD CONSTRAINT fk_command_category_command FOREIGN KEY (idCommand) REFERENCES in_prgoress_command.command(id);
CREATE INDEX idx_command_category_category ON in_prgoress_command.command_category USING BTREE (idCategory) TABLESPACE command_indexes_tbs;
CREATE INDEX idx_command_category_command ON in_prgoress_command.command_category USING BTREE (idCommand) TABLESPACE command_indexes_tbs;
CREATE TABLE archived_command.command_category
(
id bigserial primary key
, idCategory int
, idCommand bigint
)
TABLESPACE archived_command_tbs;
ALTER TABLE archived_command.command_category
ADD CONSTRAINT fk_command_category_category FOREIGN KEY (idCategory) REFERENCES in_prgoress_command.category(id);
ALTER TABLE archived_command.command_category
ADD CONSTRAINT fk_command_category_command FOREIGN KEY (idCommand) REFERENCES archived_command.command(id);
CREATE INDEX idx_command_category_category ON archived_command.command_category USING BTREE (idCategory) TABLESPACE command_indexes_tbs;
CREATE INDEX idx_command_category_command ON archived_command.command_category USING BTREE (idCommand) TABLESPACE command_indexes_tbs;
CREATE TABLE in_prgoress_command.command_status
(
id bigserial primary key
, idStatus int
, idCommand bigint
, change_timestamp timestamp --anticipate the time-zone issue if you can
)
TABLESPACE in_progress_command_tbs;
ALTER TABLE in_prgoress_command.command_status
ADD CONSTRAINT fk_command_status_status FOREIGN KEY (idStatus) REFERENCES in_prgoress_command.status(id);
ALTER TABLE in_prgoress_command.command_status
ADD CONSTRAINT fk_command_status_command FOREIGN KEY (idCommand) REFERENCES in_prgoress_command.command(id);
CREATE INDEX idx_command_status_status ON in_prgoress_command.command_status USING BTREE (idStatus) TABLESPACE command_indexes_tbs;
CREATE INDEX idx_command_status_command ON in_prgoress_command.command_status USING BTREE (idCommand) TABLESPACE command_indexes_tbs;
CREATE UNIQUE INDEX idxu_command_state ON in_prgoress_command.command_status USING BTREE (change_timestamp, idStatus, idCommand) TABLESPACE command_indexes_tbs;
CREATE OR REPLACE FUNCTION sp_trg_archiving_command ()
RETURNS TRIGGER
language plpgsql
as $function$
DECLARE
BEGIN
-- Copy the data
INSERT INTO archived_command.command
SELECT *
FROM in_prgoress_command.command
WHERE id = new.idCommand;
INSERT INTO archived_command.command_status (idStatus, idCommand, change_timestamp)
SELECT idStatus, idCommand, change_timestamp
FROM in_prgoress_command.command_status
WHERE idCommand = new.idCommand;
INSERT INTO archived_command.command_category (idCategory, idCommand)
SELECT idCategory, idCommand
FROM in_prgoress_command.command_category
WHERE idCommand = new.idCommand;
-- Delete the data
DELETE FROM in_prgoress_command.command_status
WHERE idCommand = new.idCommand;
DELETE FROM in_prgoress_command.command_category
WHERE idCommand = new.idCommand;
DELETE FROM in_prgoress_command.command
WHERE id = new.idCommand;
RETURN NEW;
END;
$function$;
DROP TRIGGER IF EXISTS t_trg_archiving_command ON in_prgoress_command.command_status;
CREATE TRIGGER t_trg_archiving_command
AFTER INSERT
ON in_prgoress_command.command_status
FOR EACH ROW
WHEN (new.idstatus = 2 or new.idStatus = 3)
EXECUTE PROCEDURE sp_trg_archiving_command();
CREATE TABLE archived_command.command_status
(
id bigserial primary key
, idStatus int
, idCommand bigint
, change_timestamp timestamp --anticipate the time-zone issue if you can
)
TABLESPACE archived_command_tbs;
ALTER TABLE archived_command.command_status
ADD CONSTRAINT fk_command_status_status FOREIGN KEY (idStatus) REFERENCES in_prgoress_command.status(id);
ALTER TABLE archived_command.command_status
ADD CONSTRAINT fk_command_status_command FOREIGN KEY (idCommand) REFERENCES archived_command.command(id);
CREATE INDEX idx_command_status_status ON archived_command.command_status USING BTREE (idStatus) TABLESPACE command_indexes_tbs;
CREATE INDEX idx_command_status_command ON archived_command.command_status USING BTREE (idCommand) TABLESPACE command_indexes_tbs;
CREATE UNIQUE INDEX idxu_command_state ON archived_command.command_status USING BTREE (change_timestamp, idStatus, idCommand) TABLESPACE command_indexes_tbs;
Conclusion:
In many cases, when you are worried about where your keys should go, it is because they are not in the right place. The same goes for car keys! :D
Do not take any solution as gospel: benchmark it.

Update trigger assistance

Evening, needing assistance regarding triggers.
Within my development environment I have two tables, one containing employee data (which contains various data errors that will be amended via ALTER TABLE) and the log table.
How do I go about designing a trigger that will update multiple rows in the log table, such as 'issue_status' and 'status_update_date', when ALTER TABLE SQL is used to amend the data contained in the first data table?
-- employee table
CREATE TABLE emp(
emp_id INTEGER NOT NULL,
emp_name VARCHAR(30),
emp_postcode VARCHAR(20),
emp_registered DATE,
CONSTRAINT pk_emp PRIMARY KEY (emp_id));
-- SQL for the log table
CREATE TABLE data_log
(issue_id NUMBER(2) NOT NULL,
table_name VARCHAR2(20),
row_name VARCHAR2(20),
data_error_code NUMBER(2),
issue_desc VARCHAR2(50),
issue_date DATE,
issue_status VARCHAR2(20),
status_update_date DATE);
-- example log insert
INSERT INTO data_log SELECT DI_SEQ.nextval, 'emp', emp.emp_id, '1', 'emp_name case size not UPPER', SYSDATE, 'un-fixed', NULL FROM emp WHERE emp_name != UPPER(emp_name);
This is an example of the issue inserted into the log table. All I want is: if I update the emp table to set 'emp_name' to upper case, the trigger should register this update and change the row's 'issue_status' to 'fixed' and 'status_update_date' to the SYSDATE of when the change was made.
I've done some browsing; however, I'm still struggling to understand. Any literature recommendations would be appreciated as well.
Thanks in advance for the assistance.
Note that we don't use ALTER TABLE to update the rows of a table as you have mentioned; we use an UPDATE statement. Your trigger should be a BEFORE UPDATE trigger like this:
CREATE OR REPLACE TRIGGER trig_emp_upd BEFORE
UPDATE ON emp
FOR EACH ROW
WHEN ( new.emp_name = upper(old.emp_name) )
BEGIN
UPDATE data_log
SET
issue_status = 'fixed',
status_update_date = SYSDATE
WHERE
row_name = :new.emp_id;
END;
/
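With that trigger in place, the data fix itself is an ordinary UPDATE (not ALTER TABLE), and the matching log rows are closed automatically, for example:
UPDATE emp
SET emp_name = UPPER(emp_name)
WHERE emp_name != UPPER(emp_name);
-- for each corrected row the trigger sets the data_log entry to 'fixed' with today's date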

sql error 04098 invalid trigger

I need some assistance with troubleshooting the trigger that I'm trying to create/use for logging updates and inserts on a table.
I'm using a customers_history table to track all the changes being made on the customers table.
CREATE TABLE customers (
custID INTEGER PRIMARY KEY,
custFName VARCHAR2(30),
custLName VARCHAR2(30),
custState CHAR(20),
custZip NUMBER(5)
);
-- log inserts and updates on customers table
CREATE TABLE customers_history (
histID INTEGER PRIMARY KEY,
cID INTEGER,
cFName VARCHAR2(30),
cLName VARCHAR2(30),
cState CHAR(20),
cZip NUMBER(5)
);
Also, for the histID I'm using a sequence to auto increment the histID on customers_history table.
CREATE SEQUENCE ch_seq
MINVALUE 1
START WITH 1
INCREMENT BY 1;
CREATE OR REPLACE TRIGGER audit_customers
BEFORE UPDATE
OR INSERT ON customers
FOR EACH ROW
BEGIN
INSERT INTO customers_history(histID,cID,cFName,cLName,cState,cZip)
VALUES(ch_seq.nextval,:NEW.custID,:NEW.custFName,:NEW.custLName,
:NEW.custState,:NEW.custZip);
END;
/
I had inserted two rows into customers prior to creating the trigger, and they worked fine. After I create the trigger, it will not allow me to insert any more rows into customers, and it throws ORA-04098: trigger 'SYSTEM.AUDIT_CUSTOMERS' is invalid and failed re-validation (04098. 00000 - "trigger '%s.%s' is invalid and failed re-validation").
I've tried to see if there are any code errors using select * from user_errors where type = 'TRIGGER' and name = 'audit_customers'; and it returned no rows. Not sure if that helps or not. Thanks.
You have created your trigger for multiple DML operations (INSERT and UPDATE), so you need to specify the DML operation using IF INSERTING THEN ... END IF; and IF UPDATING THEN ... END IF;. For example:
CREATE OR REPLACE TRIGGER audit_customers
BEFORE UPDATE
OR INSERT ON customers
FOR EACH ROW
BEGIN
IF INSERTING THEN
INSERT INTO customers_history(histID,cID,cFName,cLName,cState,cZip)
VALUES(ch_seq.nextval,:NEW.custID,:NEW.custFName,:NEW.custLName,
:NEW.custState,:NEW.custZip);
END IF;
END;
/
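If you also want the update case logged (as the question asks), you can add an UPDATING branch in the same trigger body; a sketch that records the new values just like the insert branch:
-- add inside the trigger body, after the INSERTING block and before END;
IF UPDATING THEN
INSERT INTO customers_history(histID,cID,cFName,cLName,cState,cZip)
VALUES(ch_seq.nextval,:NEW.custID,:NEW.custFName,:NEW.custLName,
:NEW.custState,:NEW.custZip);
END IF;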
I went through and changed a couple of things and it seems to work for both insert and update operations. I dropped the tables, sequence, and trigger, then tried the following code, and it works now. Thank you all for your time and input!
CREATE TABLE customers (
custID INTEGER,
custFName VARCHAR2(30),
custLName VARCHAR2(30),
custState CHAR(20),
custZip NUMBER(5)
);
CREATE TABLE customers_history (
histID INTEGER,
cID INTEGER,
cFName VARCHAR2(30),
cLName VARCHAR2(30),
cState CHAR(20),
cZip NUMBER(5)
);
CREATE OR REPLACE TRIGGER audit_customers
BEFORE UPDATE
OR INSERT ON customers
FOR EACH ROW
BEGIN
INSERT INTO customers_history(
histID,
cID,
cFName,
cLName,
cState,
cZip
)
VALUES(
ch_seq.nextval,
:new.custID,
:new.custFName,
:new.custLName,
:new.custState,
:new.custZip
);
END;
/
I used the same sequence code as before, added a couple of rows, and it worked. Then I altered the two tables by adding the primary keys:
ALTER TABLE customers ADD PRIMARY KEY(custID);
ALTER TABLE customers_history ADD PRIMARY KEY(histID);
added a couple of other columns, modified a few rows, and it still worked. I'm a happy camper, although I'm not certain what actually got fixed or how.

Confused with Oracle Procedure with sequence, linking errors and filling null fields

I am trying to make a procedure that makes potentially empty "received" fields use the current date. I made a sequence called Order_number_seq that populates the order number (Ono) column. I don't know how to link errors in the orders table to an entry in the Orders_errors table.
This is what I have so far:
CREATE PROCEDURE Add_Order
AS BEGIN
UPDATE Orders
CREATE Sequence Order_number_seq
Start with 1,
Increment by 1;
UPDATE Orders SET received = GETDATE WHERE received = null;
These are the tables I am working with:
Orders table
(
Ono Number Not Null,
Cno Number Not Null,
Eno Number Not Null,
Received Date Null,
Shipped_Date Date Null,
Creation_Date Date Not Null,
Created_By VARCHAR2(10) Not Null,
Last_Update_Date Date Not Null,
Last_Updated_By VARCHAR2(10) Not Null,
CONSTRAINT Ono_PK PRIMARY KEY (Ono),
CONSTRAINT Cno_FK FOREIGN KEY (Cno)
REFERENCES Customers_Proj2 (Cno)
);
and
Order_Errors table
(
Ono Number Not Null,
Transaction_Date Date Not Null,
Message VARCHAR(100) Not Null
);
Any help is appreciated, especially on linking the orders table errors to create a new entry in OrderErrors table.
Thanks in advance.
Contrary to Martin Drautzburg's answer, there is no foreign key for the order number on the Order_Errors table. There is an Ono column which appears to serve that purpose, but it is not a foreign key as far as Oracle is concerned. To make it a foreign key, you need to add a constraint much like the Cno_FK on Orders. An example:
CREATE TABLE Order_Errors
(
Ono Number Not Null,
Transaction_Date Date Not Null,
Message VARCHAR(100) Not Null,
CONSTRAINT Order_Errors_Orders_FK FOREIGN KEY (Ono) REFERENCES Orders (Ono)
);
Or, if your Order_Errors table already exists and you don't want to drop it, you can use an ALTER TABLE statement:
ALTER TABLE Order_Errors
ADD CONSTRAINT Order_Errors_Orders_FK FOREIGN KEY (Ono) REFERENCES Orders (Ono)
;
As for the procedure, I'm inclined to say what you're trying to do does not lend itself well to a PROCEDURE. If your intention is that you want the row to use default values when inserted, a trigger is better suited for this purpose. (There is some performance hit to using a trigger, so that's a consideration.)
-- Create sequence to be used
CREATE SEQUENCE Order_Number_Sequence
START WITH 1
INCREMENT BY 1
/
-- Create trigger for insert
CREATE TRIGGER Orders_Insert_Trigger
BEFORE INSERT ON Orders
FOR EACH ROW
DECLARE
BEGIN
IF :NEW.Ono IS NULL
THEN
SELECT Order_Number_Sequence.NEXTVAL INTO :NEW.Ono FROM DUAL;
END IF;
IF :NEW.Received IS NULL
THEN
SELECT CURRENT_DATE INTO :NEW.Received FROM DUAL;
END IF;
END;
/
This trigger will then be executed for every single row inserted into the Orders table. It checks if the Ono column is NULL and, if so, replaces it with an ID from the sequence. (Be careful that you don't ever provide an ID that will later be generated by the sequence; it will get a primary key conflict error.) It then checks if the received date is NULL and, if so, sets it to the current date using the CURRENT_DATE function (which I believe was one of the things you were trying to figure out).
(Side note: Other databases may not require a trigger to do this and instead could use a default value. I believe PostgreSQL, for instance, allows the use of function calls in its DEFAULT clauses, and that is how its SERIAL auto-increment type is implemented.)
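For illustration, a rough PostgreSQL version of that idea (not Oracle syntax; the table name here is made up):
-- PostgreSQL: defaults may call functions, so no trigger is needed
CREATE TABLE orders_pg (
Ono SERIAL PRIMARY KEY, -- auto-increment, comparable to the Oracle sequence
Received DATE NOT NULL DEFAULT CURRENT_DATE
);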
If you are merely trying to update existing data, I would think the UPDATE statements by themselves would suffice. Is there a reason this needs to be a PROCEDURE?
One other note: Order_Errors has no primary key. You probably want an auto-incrementing surrogate key column, or at least an index on its Ono column if you only ever intend to select by that column.
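That could look something like the following (constraint, sequence, and index names are just examples):
-- surrogate key for Order_Errors via a sequence
ALTER TABLE Order_Errors ADD (Error_Id NUMBER);
CREATE SEQUENCE Order_Errors_Seq START WITH 1 INCREMENT BY 1;
-- populate Error_Id (for example from a BEFORE INSERT trigger, as done for Orders above), then:
ALTER TABLE Order_Errors ADD CONSTRAINT Order_Errors_PK PRIMARY KEY (Error_Id);
-- and/or an index on the lookup column
CREATE INDEX Order_Errors_Ono_Idx ON Order_Errors (Ono);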
There are a number of confusing things in your question:
(1) You are creating a sequence inside a procedure. Does this even compile?
(2) Your procedure does not have any parameters. It just updates the RECEIVED column of all rows.
(3) You are not telling us what you want in the MESSAGE column.
My impression is that you should first go "back to the books" before you ask questions here.
As for your original question
how to link errors in the orders table to an entry in the Orders_errors
table.
This is already (correctly) done. The Orders_errors table contains an Ono foreign key which points to an order.