I have 6 tables:
Staff ( StaffID, Name )
Product ( ProductID, Name )
Faq ( FaqID, Question, Answer, ProductID* )
Customer (CustomerID, Name, Email)
Ticket ( TicketID, Problem, Status, Priority, LoggedTime, CustomerID* , ProductID* )
TicketUpdate ( TicketUpdateID, Message, UpdateTime, TicketID* , StaffID* )
Question to be answered:
Given a Product ID, remove the record for that Product. When a product is removed, all associated FAQs can stay in the database but should have a null reference in the ProductID field. The deletion of a product should, however, also remove any associated tickets and their updates. For completeness, deleted tickets and their updates should be copied to an audit table or a set of tables that maintain historical data on products, their tickets and updates. (Hint: you will need to define an additional table or set of tables to maintain this audit information and automatically copy any deleted tickets and ticket updates when a product is deleted.) Your audit table/s should record the user which requested the deletion and the timestamp for the deletion operation.
I have created an additional maintain_audit table:
CREATE TABLE maintain_audit(
TicketID INTEGER NOT NULL,
TicketUpdateID INTEGER NOT NULL,
Message VARCHAR(1000),
mdate TIMESTAMP NOT NULL,
muser VARCHAR(128),
PRIMARY KEY (TicketID, TicketUpdateID)
);
Additionally, I have created a function and a trigger:
CREATE OR REPLACE FUNCTION maintain_audit()
RETURNS TRIGGER AS $BODY$
BEGIN
INSERT INTO maintain_audit (TicketID,TicketUpdateID,Message,muser,mdate)
(SELECT Ticket.ID,TicketUpdate.ID,Message,user,now() FROM Ticket, TicketUpdate WHERE Ticket.ID=TicketUpdate.TicketID AND Ticket.ProductID = OLD.ID);
RETURN OLD;
END;
$BODY$
LANGUAGE plpgsql;
CREATE TRIGGER maintain_audit
BEFORE DELETE
ON Product
FOR EACH ROW
EXECUTE PROCEDURE maintain_audit();
DELETE FROM Product WHERE Product.ID=30;
When I run this, all I get is this:
ERROR: null value in column "productid" violates not-null constraint
CONTEXT: SQL statement "UPDATE ONLY "public"."faq" SET "productid" = NULL WHERE $1 OPERATOR(pg_catalog.=) "productid""
Guys, could you help me sort out this problem?
What you probably want is triggers. Not sure what RDBMS you are using, but that's where you should start. I started from zero and had triggers up and running in a somewhat similar situation within an hour.
In case you don't already know, triggers run code when a specific type of statement happens on a table, such as an insert, update or delete. Inside the trigger you can run any type of query.
Another tip I would give you is not to delete anything, since that could break data integrity. You could just add an "active" boolean field, set active to false, then filter those out in most of your system's queries. Alternatively, you could just move the associated records out to a Products_archive table that has the same structure. Easy to do with:
select * into destination from source where 1=0
Still, I would do the work you need done using triggers because they're so automatic.
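For reference, in PostgreSQL (which the question appears to use, given the plpgsql trigger) a hedged equivalent for creating that empty same-structure archive table would be:
CREATE TABLE Products_archive (LIKE Product INCLUDING ALL);  -- copies columns, defaults and indexes
-- or, closer in spirit to the quoted statement:
CREATE TABLE Products_archive AS SELECT * FROM Product WITH NO DATA;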
1) Create foreign keys for Ticket.ProductID and TicketUpdate.TicketID with ON DELETE CASCADE. This will automatically delete all tickets and ticket updates when you delete the product (a sketch of the DDL for these steps follows the list).
2) Create an audit table for product deletions with product_id, user and timestamp. The audit tables for Ticket and TicketUpdate should mirror those tables exactly.
3) Create a BEFORE DELETE trigger on table Ticket which copies tickets to the audit table.
4) Do the same for TicketUpdate.
5) Create an AFTER DELETE trigger on Product to capture who requested a product be deleted in the product audit table.
6) In table Faq, create ProductID as a foreign key with ON DELETE SET NULL (and drop the NOT NULL constraint on that column, which is what the reported error is complaining about).
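A minimal PostgreSQL sketch of those steps, using the column names from the schema above; the constraint, table and trigger names here are illustrative, and if the foreign keys already exist they would first have to be dropped and re-created with these ON DELETE clauses:
-- 1) and 6): cascade deletes from Product to Ticket/TicketUpdate, set Faq references to NULL
ALTER TABLE Ticket
  ADD CONSTRAINT fk_ticket_product FOREIGN KEY (ProductID)
  REFERENCES Product (ProductID) ON DELETE CASCADE;
ALTER TABLE TicketUpdate
  ADD CONSTRAINT fk_ticketupdate_ticket FOREIGN KEY (TicketID)
  REFERENCES Ticket (TicketID) ON DELETE CASCADE;
ALTER TABLE Faq
  ALTER COLUMN ProductID DROP NOT NULL,  -- this NOT NULL constraint is what the reported error complains about
  ADD CONSTRAINT fk_faq_product FOREIGN KEY (ProductID)
  REFERENCES Product (ProductID) ON DELETE SET NULL;
-- 2): audit tables mirroring the deleted rows plus who deleted them and when
CREATE TABLE ticket_audit (
  LIKE Ticket,
  deleted_by text NOT NULL,
  deleted_at timestamptz NOT NULL
);
CREATE TABLE product_delete_audit (
  ProductID integer NOT NULL,
  deleted_by text NOT NULL,
  deleted_at timestamptz NOT NULL
);
-- 3): BEFORE DELETE trigger on Ticket copying each deleted row into the audit table
CREATE OR REPLACE FUNCTION audit_ticket_delete() RETURNS trigger AS $$
BEGIN
  INSERT INTO ticket_audit
    (TicketID, Problem, Status, Priority, LoggedTime, CustomerID, ProductID, deleted_by, deleted_at)
  VALUES
    (OLD.TicketID, OLD.Problem, OLD.Status, OLD.Priority, OLD.LoggedTime,
     OLD.CustomerID, OLD.ProductID, current_user, now());
  RETURN OLD;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_audit_ticket_delete
  BEFORE DELETE ON Ticket
  FOR EACH ROW EXECUTE PROCEDURE audit_ticket_delete();
-- 4): an analogous ticketupdate_audit table and trigger pair covers TicketUpdate
-- 5): AFTER DELETE trigger on Product recording who requested the deletion and when
CREATE OR REPLACE FUNCTION audit_product_delete() RETURNS trigger AS $$
BEGIN
  INSERT INTO product_delete_audit (ProductID, deleted_by, deleted_at)
  VALUES (OLD.ProductID, current_user, now());
  RETURN OLD;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_audit_product_delete
  AFTER DELETE ON Product
  FOR EACH ROW EXECUTE PROCEDURE audit_product_delete();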
Is there a way in Informix (v12 or higher) to retrieve the name of the current SAVEPOINT?
In Oracle there is something similar: You can name the transaction using SET TRANSACTION NAME and then select the transaction name from v$transaction:
SELECT name
FROM v$transaction
WHERE xidusn
|| '.'
|| xidslot
|| '.'
|| xidsqn = DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;
That is not very straightforward, but it does the trick. Effectively we can use that to have a transaction-scoped variable (yes, that is ugly, but it has worked for years now).
We have a mechanism based on this and would like to port that to Informix. Is there a way to do that?
Of course, if there is a different mechanism providing transaction-scoped variables (so DEFINE GLOBAL is not what we are looking for), that would be helpful, too, but I doubt there is one.
Thank you all for your comments so far.
Let me show the solution I have come up with. It is just a work-in-progress idea, but I hope it will lead somewhere:
I will need an "audit_lock" table which always contains a record for the current transaction, carrying information about that transaction, especially a username and a unique transaction_id (a UUID or similar). That row will be inserted on starting the transaction and deleted before committing it.
Then I have a generic audit_trail table containing the audited information.
All audited tables fill the generic audit trail table using triggers, serializing each audited column into a separate record of the generic audit trail table.
The audit_lock and the audit_trail table need to use row locking. Also to avoid read locks on the audit_lock table we need to set the isolation level to COMMITTED READ LAST COMMITTED. If your use case does not support that, the suggested pattern does not work.
Here's the DDL:
CREATE TABLE audit_lock
(
transaction_id varchar(40) primary key,
username varchar(40)
);
alter table audit_lock
lock mode(ROW);
CREATE TABLE audit_trail
(
id serial primary key,
tablename varchar(255) NOT NULL,
record_id numeric(10) NOT NULL,
username varchar(40) NOT NULL,
transaction_id varchar(40) NOT NULL,
changed_column_name varchar(40),
old_value varchar(40),
new_value varchar(40),
operation varchar(40) NOT NULL,
operation_date datetime year to second NOT NULL
);
alter table audit_trail
lock mode(ROW);
Now we need to have the audited table:
CREATE TABLE audited_table
(
id serial,
somecolumn varchar(40)
);
And the table has an insert trigger writing into the audit_trail:
CREATE PROCEDURE proc_trigger_audit_audited_table ()
REFERENCING OLD AS o NEW AS n FOR audited_table;
INSERT INTO audit_trail
(
tablename,
record_id,
username,
transaction_id,
changed_column_name,
old_value,
new_value,
operation,
operation_date
)
VALUES
(
'audited_table',
n.id,
(SELECT username FROM audit_lock),
(SELECT transaction_id FROM audit_lock),
'somecolumn',
'',
n.somecolumn,
'INSERT',
sysdate
);
END PROCEDURE;
CREATE TRIGGER audit_insert_audited_table INSERT ON audited_table REFERENCING NEW AS post
FOR EACH ROW(EXECUTE PROCEDURE proc_trigger_audit_audited_table() WITH TRIGGER REFERENCES);
Now let's use that: First the caller of the transaction needs to generate a transaction_id for himself, maybe using a UUID generation mechanism. In the example below the transaction_id is simply '4711'.
BEGIN WORK;
SET ISOLATION TO COMMITTED READ LAST COMMITTED; --should be set globally
-- Issue the generation of the audit_lock entry at the beginning of each transaction
insert into audit_lock (transaction_id, username) values ('4711', 'userA');
-- Is it there?
select * from audit_lock;
-- do data manipulation stuff
insert into audited_table (somecolumn) values ('valueA');
-- Issue that at the end of each transaction
delete from audit_lock
where transaction_id = '4711';
commit;
In a quick test, all of this worked even in simultaneous transactions. Of course, that still needs a lot of work and testing, but I currently hope that path is feasible.
Let me also add a little bit more info on the other approach we are using in Oracle:
In Oracle we are (ab)using the transaction name, to store exactly the information that in the suggestion above is stored in the audit_lock table.
The rest is the same as above. The triggers work perfectly in that specific application, even though there are of course many scenarios in other applications where putting insert, delete and update triggers on each table, generating a record for each changed column, would be nuts. In our application it has worked perfectly for ten years now and has no noticeable performance impact on the way the application is used.
In the Java application server, all code blocks that change data start by setting the transaction name and then do loads of changes to various tables, which may fire all these triggers. All of these run in the same transaction, and since that transaction has a name which contains the application user, the triggers can write that information to the audit trail table.
I know there are other approaches to the problem, and you could even do this with Hibernate features only, but our approach allows us to enforce some consistency through the database (a NOT NULL constraint on the username in the audit trail table). Since everything is done via triggers, we can let them fail if the transaction name does not contain the user (by requiring it to be in a specific format). If any other portions of the application, other applications or ignorant administrators try to issue updates to the audited tables without setting the transaction name to that format, those updates will fail. This makes updates to the audited tables that do not generate the required audit entries harder (certainly not impossible; an ill-willed admin can do anything, of course).
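To make that enforcement concrete, here is a hedged Oracle sketch of the check that could sit inside each audit trigger; the 'AUDIT/<user>' naming format is purely illustrative and not taken from the original setup:
-- read the current transaction's name and reject the change if it does not identify a user
DECLARE
  v_txn_name VARCHAR2(256);
BEGIN
  SELECT name
    INTO v_txn_name
    FROM v$transaction
   WHERE xidusn || '.' || xidslot || '.' || xidsqn
         = DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;
  IF v_txn_name IS NULL OR v_txn_name NOT LIKE 'AUDIT/%' THEN
    RAISE_APPLICATION_ERROR(-20001, 'transaction name does not identify the application user');
  END IF;
END;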
So to all of you that are cringing now, let me quote Luís: Might seem like a terrible idea, but I have my use case ;)
The idea of @Luís of creating a specific table in each transaction to store the information causes a locking issue in systables. Let's call that table the "transaction info table". That idea had not crossed my mind, since DDL causes commits in Oracle. So I tried it in Informix, but if I try to create a table called "tblX" in two simultaneous transactions, the second transaction gets a locking exception:
Cannot update system catalog (systables). [SQL State=IX000, DB Errorcode=-312]
Next: ISAM error: key value locked [SQL State=IX000, DB Errorcode=-144]
But letting all transactions use the same table, as above, works as far as I have tested it so far.
I'm a new SQL developer. Following recommendations I altered my trigger (for this task I need to use a trigger, so I can't avoid it), and I have since re-altered it. I want it to prevent duplication of the BikeID foreign key within the Rentals table.
This is my code at the moment:
CREATE TRIGGER BikeNotAvailable
ON dbo.SA_Rental
AFTER INSERT
AS
IF EXISTS (SELECT *
FROM SA_Rental
INNER JOIN inserted i ON i.BikeID = dbo.SA_Rental.BikeID)
BEGIN
ROLLBACK
RAISERROR ('This bike is already being hired', 16, 1);
END
go
But when I insert a BikeID into the Rentals table, even though that BikeID is not present in any row yet, it still raises the error - why? (I have also tested this on an empty table and it still raises the error.)
Just some context on my data: the BikeID is the primary key of the 'Bike' table and is used as a foreign key in the Rentals table; I'm not sure if this has anything to do with the error.
Can someone please help me fix this trigger so it works?
Thanks.
Well, as it's an AFTER trigger, it runs after the new record has been added to the table (or at least after it is visible to your trigger).
Supposing that your table has an automatically generated ID column, you should exclude the inserted row from your check like this:
CREATE TRIGGER BikeNotAvailable ON dbo.SA_Rental
AFTER INSERT
AS
if exists ( select * from SA_Rental
inner join inserted i on i.BikeID=dbo.SA_Rental.BikeID
where SA_Rental.RentalID <> i.RentalID)
begin
rollback
RAISERROR ('This bike is already being hired', 16, 1);
end
go
A far simpler way to achieve what you are after is to create a unique index:
CREATE UNIQUE INDEX BikeRented ON SA_Rental (BikeID);
This, of course, assumes that you delete the row from your table when the bike is no longer rented (as this is the implied logic in your post). If this is not the case, then we need more detail: what on your table specifies that the rental has completed?
If we assume you have a return date, and the return date is NULL when the bike is yet to be returned, then you would use a filtered index like so:
CREATE UNIQUE INDEX BikeRented ON SA_Rental (BikeID)
WHERE ReturnedDate IS NULL;
This seems so simple, but I haven't been able to find an answer to this question.
What do I want? A master table with rows that delete themselves whenever they are not referenced (via foreign keys) anymore. The solution may or may not be specific to PostgreSql.
How? One of my approaches to solving this problem (actually, the only approach so far) involves the following: for every table that references this master table, on UPDATE or DELETE of a row, check how many other rows still refer to the referenced row in master. If that drops to zero, then I delete that row in master as well.
(If you have a better idea, I'd like to know!)
In detail:
I have one master table referenced by many others
CREATE TABLE master (
id serial primary key,
name text unique not null
);
All the other tables have the same format generally:
CREATE TABLE other (
...
master_id integer references master (id)
...
);
If one of these is not NULL, it refers to a row in master. If I then go to that row in master and try to delete it, I will get an error message, because it is still referenced:
ERROR: update or delete on table "master" violates foreign key constraint "other_master_id_fkey" on table "other"
DETAIL: Key (id)=(1) is still referenced from table "other".
Time: 42.972 ms
Note that it doesn't take too long to figure this out even if I have many tables referencing master. How do I find this information out without having to raise an error?
You can do one of the following:
1) Add a reference_count field to the master table. Using triggers on the detail tables, increase the reference count whenever a row with this master_id is added, and decrease it when such a row gets deleted. When reference_count reaches 0, delete the record (a sketch of this option follows the list).
2) Use the pg_constraint table (details here) to get the list of referencing tables and create a dynamic SQL query.
3) Create triggers on every detail table that try to delete the referenced master_id row in the master table, and silence the foreign-key error messages with BEGIN ... EXCEPTION ... END.
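A minimal PostgreSQL sketch of option 1, assuming the master/other tables from the question (the trigger would have to be repeated for every detail table, and existing rows would need their counts initialized):
ALTER TABLE master ADD COLUMN reference_count integer NOT NULL DEFAULT 0;
CREATE OR REPLACE FUNCTION track_master_references() RETURNS trigger AS $$
BEGIN
    -- count the reference held by the new row
    IF TG_OP = 'INSERT' OR TG_OP = 'UPDATE' THEN
        IF NEW.master_id IS NOT NULL THEN
            UPDATE master SET reference_count = reference_count + 1
             WHERE id = NEW.master_id;
        END IF;
    END IF;
    -- release the reference held by the old row, deleting the master row at zero
    IF TG_OP = 'UPDATE' OR TG_OP = 'DELETE' THEN
        IF OLD.master_id IS NOT NULL THEN
            UPDATE master SET reference_count = reference_count - 1
             WHERE id = OLD.master_id;
            DELETE FROM master
             WHERE id = OLD.master_id AND reference_count = 0;
        END IF;
    END IF;
    RETURN NULL;  -- AFTER row trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_track_master_references
    AFTER INSERT OR UPDATE OF master_id OR DELETE ON other
    FOR EACH ROW EXECUTE PROCEDURE track_master_references();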
In case someone wants a real count of rows in all other tables that reference a given master row, here is some PL/pgSQL. Note that this works in the plain case with single-column constraints; it gets more involved for multi-column constraints.
CREATE OR REPLACE FUNCTION count_references(master regclass, pkey_value integer,
OUT "table" regclass, OUT count integer)
RETURNS SETOF record
LANGUAGE 'plpgsql'
VOLATILE
AS $BODY$
declare
x record; -- constraint info for each table in question that references master
sql text; -- temporary buffer
begin
for x in
select conrelid, attname
from pg_constraint
join pg_attribute on conrelid=attrelid and attnum=conkey[1]
where contype='f' and confrelid=master
and confkey=( -- here we assume that FK references master's PK
select conkey
from pg_constraint
where conrelid=master and contype='p'
)
loop
"table" = x.conrelid;
sql = format('select count(*) from only %s where %I=$1', "table", x.attname);
execute sql into "count" using pkey_value;
return next;
end loop;
end
$BODY$;
Then use it like
select * from count_references('master', 1) where count>0
This will return a list of tables that have references to master table with id=1.
To find the rows in master that are still referenced from other, you can use EXISTS:
SELECT *
FROM master ma
WHERE EXISTS (
SELECT *
FROM other ot
WHERE ot.master_id = ma.id
);
Or, the other way round:
SELECT *
FROM other ot
WHERE EXISTS (
SELECT *
FROM master ma
WHERE ot.master_id = ma.id
);
So if you want to update (or delete) only the rows in master that are not referenced by other, you could:
UPDATE master ma
SET id = 1000+id
, name = 'blue'
WHERE name = 'green'
AND NOT EXISTS (
SELECT *
FROM other ot
WHERE ot.master_id = ma.id
);
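And for the original goal of the question, removing master rows once nothing references them any more, the same pattern works as a hedged DELETE sketch (one NOT EXISTS per referencing table would be needed if there are several):
DELETE FROM master ma
WHERE NOT EXISTS (
   SELECT *
   FROM other ot
   WHERE ot.master_id = ma.id
);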
I created a trigger for a table (Person) so that each time a new Person entry is created, its ID is inserted into another table (Person_ID). The Person_ID table has only 2 columns: ID (primary key, int, identity) and Person_ID (a GUID whose value is passed in by the trigger).
This schema cannot be changed due to other dependencies with our business logic.
Now I need to update a field of the Person table with Person_ID.ID (the identity automatically generated once that person has been created). To do that I created a trigger on Person_ID so that once a new entry is created, the generated ID updates the target field in the Person table:
UPDATE Person
SET target_Person = (SELECT JPerson_ID.ID FROM inserted)
WHERE Person.ID IN (SELECT JPerson_ID.PersonID FROM inserted)
When I create a new Person I get the following exception:
Violation of PRIMARY KEY constraint 'TBL_PERSON_A_PK'.
Cannot insert duplicate key in object 'dbo.TBL_PERSON_A'.
There is an update trigger associated with the Person table that comes from a legacy component, and I cannot edit or see it (it is encrypted). It seems to be the reason for the exception above.
Why do I get this exception even if I simply make an UPDATE and not an insert?
To solve this I disable that trigger before executing the update and then enable it again, and it works like a charm:
EXECUTE sp_executesql N'DISABLE TRIGGER dbo.TBL_PERSON_TRU1 ON PERSON'
...
EXECUTE sp_executesql N'ENABLE TRIGGER dbo.TBL_PERSON_TRU1 ON PERSON'
However, how can I be sure that this will not lead to logic errors?
Thanks.
I don't know if this causes your problem, but have you kept in mind that your (SELECT JPerson_ID.ID FROM inserted) could return more than one row?
So your insert trigger must be changed to something like:
CREATE TRIGGER [dbo].[trgJPerson_ID] ON [dbo].[TBL_PERSON_A]
FOR INSERT
AS
INSERT INTO dbo.JPerson_ID(ID )
SELECT ID FROM inserted
and accordingly the delete trigger (if you have one):
CREATE TRIGGER [dbo].[trgJPerson_ID] ON [dbo].[TBL_PERSON_A]
FOR DELETE
AS
DELETE FROM dbo.JPerson_ID
WHERE ID IN(SELECT ID FROM DELETED)
Unless I have misunderstood, you seem to have answered your own question; it appears that the trigger you can't see the definition of is doing something to prevent the update statement from completing.
Without knowing what that trigger does I can't see how we can help you.
As a side note, I don't understand why you are updating the Person table with the ID from the Person_ID table. It seems a bit pointless really, as you can just join onto Person_ID to retrieve the ID:
Select p.ID, -- The Person GUID
p.Name,
pi.ID -- The Person_ID ID
From dbo.Person p
Join dbo.Person_ID pi on p.ID = pi.PersonID
Let's say that I have a table of items, and for each item, there can be additional information stored for it, which goes into a second table. The additional information is referenced by a FK in the first table, which can be NULL (if the item doesn't have additional info).
TABLE item (
...
item_addtl_info_id INTEGER
)
CONSTRAINT fk_item_addtl_info FOREIGN KEY (item_addtl_info_id)
REFERENCES addtl_info (addtl_info_id)
TABLE addtl_info (
addtl_info_id INTEGER NOT NULL
GENERATED BY DEFAULT
AS IDENTITY (
INCREMENT BY 1
NO CACHE
),
addtl_info_text VARCHAR(100)
...
CONSTRAINT pk_addtl_info PRIMARY KEY (addtl_info_id)
)
What is the "best practice" to update an item's additional info (in IBM DB2 SQL, preferably)?
It should be an UPSERT operation, meaning that if additional info does not yet exist then a new record is created in the second table, but if it does, then it is only updated, and the FK in the first table does not change.
So imperatively, this is the logic:
UPSERT(item, item_info):
CASE WHEN item.item_addtl_info_id IS NULL THEN
INSERT INTO addtl_info (item_info)
UPDATE item.item_addtl_info_id (addtl_info.addtl_info_id)
^^^^^^^^^^^^^
ELSE
UPDATE addtl_info (item_info)
END
My main problem is how to get the newly inserted addtl_info row's id (underlined above). In a stored proc I can request the id from a sequence and store it in a variable, but maybe there is a more straightforward way. Isn't it something that comes up all the time when programming databases?
I mean, I'm really not interested in what the id of the addtl_info record is as long as it remains unique and is referenced properly. So using sequences seems a bit of an overkill to me in this case.
As a matter of fact, this UPSERT operation should be part of the SQL language as a standard operation (maybe it is, and I just don't know about it?)...
The syntax I was looking for is:
SELECT * FROM NEW TABLE ( INSERT INTO phone_book VALUES ( 'Peter Doe','555-2323' ) )
from Wikipedia (http://en.wikipedia.org/wiki/Insert_%28SQL%29)
This is how to refer to the record that was just inserted in the table.
My colleague called this construct an "in-place trigger", which is what it really is...
Here is the first version that I put together as a compound SQL statement:
begin atomic
declare addtl_id integer;
set addtl_id = (select item_addtl_info_id from item where item.item_id = XXX);
if addtl_id is null
then
set addtl_id = (select addtl_info_id from new table
(insert into addtl_info
(addtl_info_text)
values ('My brand new additional info')
)
);
update item set item.item_addtl_info_id = addtl_id
where item.item_id = XXX;
else
update addtl_info set addtl_info_text = 'My updated additional info'
where addtl_info.addtl_info_id = addtl_id;
end if;
end
XXX being equal to the item id to be updated - this code can now be easily inserted into a sproc, and XXX can be converted to an input parameter.
I also tried using MERGE INTO, but I couldn't figure out a syntax for updating a table different from what was specified as the target.
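For completeness on the MERGE point: the SQL standard does define MERGE as its upsert-style statement, and DB2 supports it, but it targets a single table, which is why it does not map cleanly onto updating both item and addtl_info here. A generic, hedged single-table sketch, reusing the phone_book example above and assuming it has name and phone columns, would look like:
MERGE INTO phone_book pb
USING (VALUES ('Peter Doe', '555-2323')) AS src (name, phone)
    ON pb.name = src.name
WHEN MATCHED THEN
    UPDATE SET phone = src.phone
WHEN NOT MATCHED THEN
    INSERT (name, phone) VALUES (src.name, src.phone);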