I'm preparing for an exam on Model-Driven Development. I came across a specific database trigger:
CREATE TRIGGER tManager_bi
FOR Manager BEFORE INSERT AS
  DECLARE VARIABLE v_company_name CHAR(30);
BEGIN
  SELECT M.company
  FROM Manager M
  WHERE M.nr = NEW.reports_to
  INTO :v_company_name;

  IF (NOT (NEW.company = v_company_name))
  THEN EXCEPTION eReportsNotOwnCompany;
END
This trigger is designed to prevent input in which a manager reports to an outside manager, i.e. one that is not from the same company. The corresponding OCL constraint is:
context Manager
inv: self.company = self.reports_to.company
The relevant table looks like (simplified):
CREATE TABLE Manager
(
nr INTEGER NOT NULL,
company VARCHAR(50) NOT NULL,
reports_to INTEGER,
PRIMARY KEY (nr),
FOREIGN KEY (reports_to) REFERENCES Manager (nr)
);
The textbook says that this trigger will also work correctly when the newly inserted manager doesn't report to anyone (i.e. NEW.reports_to is NULL), and indeed, upon testing, it does work correctly.
But I don't understand this. If NEW.reports_to is NULL, that would mean the variable v_company_name will be empty (uninitialized? NULL?), which would then mean the comparison NEW.company = v_company_name would return false, causing the exception to be thrown, right?
What am I missing here?
(The SQL shown is supposed to be SQL:2003 compliant. The MDD tool is Cathedron, which uses Firebird as an RDBMS.)
You're missing the fact that when you compare NULL to NULL (or to any other value), the result is NULL, not false. And the negation of NULL is still NULL, so the THEN branch never fires; if the IF statement had an ELSE part, that is what would run.
I suggest you read the Firebird Null Guide for a better understanding of it all.
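A quick way to see this in isolation (the literal 'ACME' is just an example value): both of the following return zero rows in Firebird, because the comparison yields NULL, which is neither true nor false, and NOT (NULL) is still NULL:

SELECT 1 FROM RDB$DATABASE WHERE CAST(NULL AS CHAR(30)) = 'ACME';
SELECT 1 FROM RDB$DATABASE WHERE NOT (CAST(NULL AS CHAR(30)) = 'ACME');

That is exactly what happens in your trigger: with NEW.reports_to set to NULL, the SELECT matches no row, v_company_name stays NULL, and the IF condition is unknown rather than true, so no exception is thrown.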
P.S. Making this an answer (rather than a comment) for the sake of code highlighting.
You might want to modify your trigger to respond to both updates and inserts.
CREATE TRIGGER tManager_bi
FOR Manager BEFORE INSERT OR UPDATE AS
...
You may also avoid hand-writing the trigger entirely, if you do not need that specific exception identifier.
You can just use an SQL CHECK constraint for that:
alter table Manager
  add constraint chk_ManagerReportsOwnCompany
  CHECK ( reports_to IS NULL OR EXISTS (
    SELECT * FROM Manager boss
    WHERE boss.nr = Manager.reports_to
      AND boss.company = Manager.company
  ) )
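To illustrate (the company names are made up): with this constraint in place, the first two inserts succeed and the third is rejected:

INSERT INTO Manager (nr, company, reports_to) VALUES (1, 'ACME', NULL);    -- ok: no boss
INSERT INTO Manager (nr, company, reports_to) VALUES (2, 'ACME', 1);       -- ok: same company
INSERT INTO Manager (nr, company, reports_to) VALUES (3, 'Globex', 1);     -- fails the check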
Specifying a custom exception for a CHECK constraint does not look possible at the moment... http://tracker.firebirdsql.org/browse/CORE-1852
Related
Is there a way in Informix (v12 or higher) to retrieve the name of the current SAVEPOINT?
In Oracle there is something similar: You can name the transaction using SET TRANSACTION NAME and then select the transaction name from v$transaction:
SELECT name
FROM v$transaction
WHERE xidusn
|| '.'
|| xidslot
|| '.'
|| xidsqn = DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;
That is not very straightforward, but it does the trick. Effectively we can use that as a transaction-scoped variable (yes, that is ugly, but it has worked for years now).
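For completeness, the naming side is a single statement (the user:id format is just our own convention, not anything Oracle prescribes):

SET TRANSACTION NAME 'userA:4711';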
We have a mechanism based on this and would like to port that to Informix. Is there a way to do that?
Of course, if there is a different mechanism providing transaction-scoped variables (DEFINE GLOBAL is not what we are looking for), that would be helpful too, but I doubt there is one.
Thank you all for your comments so far.
Let me show the solution I have come up with. It is just a work in progress idea, but I hope it will lead somewhere:
I will need a "audit_lock" table which always contains a record for the current transaction carrying information about the current transaction, especially a username and a unique transaction_id (UUID or similar). That row will be inserted on starting the transaction and deleted before committing it.
Then I have a generic audit_trail table containing the audited information.
All audited tables fill the generic audit trail table using triggers, serializing each audited column into a separate record of the generic audit trail table.
The audit_lock and the audit_trail tables need to use row locking. Also, to avoid read locks on the audit_lock table, we need to set the isolation level to COMMITTED READ LAST COMMITTED. If your use case does not allow that isolation level, the suggested pattern does not work.
Here's the DDL:
CREATE TABLE audit_lock
(
transaction_id varchar(40) primary key,
username varchar(40)
);
alter table audit_lock
lock mode(ROW);
CREATE TABLE audit_trail
(
id serial primary key,
tablename varchar(255) NOT NULL,
record_id numeric(10) NOT NULL,
username varchar(40) NOT NULL,
transaction_id varchar(40) NOT NULL,
changed_column_name varchar(40),
old_value varchar(40),
new_value varchar(40),
operation varchar(40) NOT NULL,
operation_date datetime year to second NOT NULL
);
alter table audit_trail
lock mode(ROW);
Now we need to have the audited table:
CREATE TABLE audited_table
(
id serial,
somecolumn varchar(40)
);
And the table has an insert trigger writing into the audit_trail:
CREATE PROCEDURE proc_trigger_audit_audited_table ()
REFERENCING OLD AS o NEW AS n FOR audited_table;
INSERT INTO audit_trail
(
tablename,
record_id,
username,
transaction_id,
changed_column_name,
old_value,
new_value,
operation,
operation_date
)
VALUES
(
'audited_table',
n.id,
(SELECT username FROM audit_lock),
(SELECT transaction_id FROM audit_lock),
'somecolumn',
'',
n.somecolumn,
'INSERT',
sysdate
);
END PROCEDURE;
CREATE TRIGGER audit_insert_audited_table INSERT ON audited_table REFERENCING NEW AS post
FOR EACH ROW(EXECUTE PROCEDURE proc_trigger_audit_audited_table() WITH TRIGGER REFERENCES);
Now let's use that: First, the caller of the transaction needs to generate a transaction_id, for example with a UUID generator. In the example below the transaction_id is simply '4711'.
BEGIN WORK;
SET ISOLATION TO COMMITTED READ LAST COMMITTED; --should be set globally
-- Insert the audit_lock entry at the beginning of each transaction
insert into audit_lock (transaction_id, username) values ('4711', 'userA');
-- Is it there?
select * from audit_lock;
-- do data manipulation stuff
insert into audited_table (somecolumn) values ('valueA');
-- Issue that at the end of each transaction
delete from audit_lock
where transaction_id = '4711';
commit;
In a quick test, all of this worked even in simultaneous transactions. Of course, this still needs a lot of work and testing, but I currently hope the path is feasible.
Let me also add a little bit more info on the other approach we are using in Oracle:
In Oracle we are (ab)using the transaction name to store exactly the information that the suggestion above keeps in the audit_lock table.
The rest is the same as above. The triggers work perfectly in that specific application, even though there are of course many scenarios in other applications where putting insert, delete, and update triggers on every table, generating a record for each changed column, would be nuts. In our application it has worked perfectly for ten years now, with no mentionable performance impact on the way the application is used.
In the Java application server, all code blocks that change data first set the transaction name and then perform their changes to the various tables, which may fire all these triggers. All of this runs in the same transaction, and since the transaction name contains the application user, the triggers can write that information to the audit trail table.
I know there are other approaches to the problem, and you could even do this with Hibernate features alone, but our approach lets us enforce some consistency through the database (a NOT NULL constraint on the username in the audit trail table). Since everything is done via triggers, we can let them fail if the transaction name does not contain the user (by requiring it to be in a specific format). If other parts of the application, other applications, or ignorant administrators try to update the audited tables without setting the transaction name in that format, those updates will fail. This makes updates to the audited tables that do not generate the required audit entries harder (certainly not impossible; an ill-willed admin can do anything, of course).
So to all of you who are cringing now, let me quote Luís: Might seem like a terrible idea, but I have my use case ;)
@Luís's idea of creating a specific table in each transaction to store the information causes a locking issue in systables. Let's call that the "transaction info table". The idea had not crossed my mind, since DDL causes commits in Oracle. So I tried it in Informix, but if I try to create a table called "tblX" in two simultaneous transactions, the second transaction gets a locking exception:
Cannot update system catalog (systables). [SQL State=IX000, DB Errorcode=-312]
Next: ISAM error: key value locked [SQL State=IX000, DB Errorcode=-144]
But letting all transactions use the same table as above works, as far as I tested it right now.
I have a table where each row represents a key-value pair containing application-specific settings (such as the number of days to retain alerts, etc.). Each of these key-value pairs has a different range of valid values, so no single check constraint will apply equally to all rows. Some rows might need no validation at all and others might have string values needing special consideration. Is there some way I can create a check constraint on a per-row basis and have that constraint enforced when that row is updated?
I have attempted several times to achieve this, but have run into hurdles each time. Each attempt relies on the existence of a [Check] column on the table, wherein the constraint is defined for that row, similar to a normal table-based constraint (such as "CAST(Value AS INTEGER) <= 60").
My first attempt was to create a normal check constraint that calls a user-defined function that reads the contents of the [Check] column (based on an identity value), performs a test of the constraint, and returns a true/false result, depending on whether the constraint is violated. The problem with this approach is that it requires dynamic SQL both to get the contents of the [Check] column and to execute the code it contains. But of course, dynamic SQL is not permitted in a function.
Next, I tried changing the function to a stored procedure, but it does not appear to be possible to call a stored procedure via a check constraint.
Finally, I tried creating a function AND a stored procedure, and calling the stored procedure from the function, but that is not permitted either.
The only way I know that will work is to write a huge, monolithic check constraint, containing checks for each row by identity value, all OR'ed together, like this:
(ID = 1 AND CAST(Value AS INTEGER) <= 100) OR (ID = 2 AND Value IN ('yes', 'no')) OR...
But that's an error-prone maintenance nightmare. Does anyone know of a way to accomplish what I want, without resorting to a monolithic check constraint?
As requested, consider the following table definition and some sample rows:
CREATE TABLE [dbo].[GenericSetting]
(
[ID] [INT] IDENTITY(1,1) NOT NULL,
[Name] [NVARCHAR](50) NOT NULL,
[Value] [NVARCHAR](MAX) NULL,
[Check] [NVARCHAR](MAX) NULL,
CONSTRAINT [PK_GenericSetting] PRIMARY KEY CLUSTERED ([ID])
)
INSERT INTO [dbo].[GenericSetting] ([Name],[Value],[Check]) VALUES ('AlertRetentionDays', 60, 'CAST(Value AS INTEGER) <= 60');
INSERT INTO [dbo].[GenericSetting] ([Name],[Value],[Check]) VALUES ('ExampleMode', 60, 'CAST(Value AS INTEGER) IN (1,2,5)');
You would need to create a trigger on this table to accomplish this task.
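For instance, here is a minimal sketch of such a trigger (the trigger name is made up and error handling is kept to a bare minimum). Unlike a function, a trigger may use dynamic SQL, so it can evaluate the per-row [Check] expression directly:

CREATE TRIGGER trg_GenericSetting_Check
ON dbo.GenericSetting
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @id INT, @check NVARCHAR(MAX), @sql NVARCHAR(MAX), @ok BIT;

    DECLARE c CURSOR LOCAL FAST_FORWARD FOR
        SELECT ID, [Check] FROM inserted WHERE [Check] IS NOT NULL;
    OPEN c;
    FETCH NEXT FROM c INTO @id, @check;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Evaluate the row's own check expression against that row.
        SET @sql = N'SELECT @ok = CASE WHEN ' + @check
                 + N' THEN 1 ELSE 0 END FROM dbo.GenericSetting WHERE ID = @id;';
        EXEC sp_executesql @sql, N'@id INT, @ok BIT OUTPUT', @id = @id, @ok = @ok OUTPUT;
        IF ISNULL(@ok, 0) = 0
        BEGIN
            RAISERROR('Check failed for GenericSetting ID %d.', 16, 1, @id);
            ROLLBACK TRANSACTION;
            RETURN;
        END;
        FETCH NEXT FROM c INTO @id, @check;
    END;
    CLOSE c;
    DEALLOCATE c;
END;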
You would write such check constraints using conditional logic. For safety, this is actually a case where I would use CASE expressions for the boolean logic:
alter table eav add constraint chk_eav_value
    check ( (case when attribute = 'amount'
                  then (case when try_convert(int, value) >= 0 then 'ok' else 'bad' end)
                  when attribute = 'us_zip'
                  then (case when value like '[0-9][0-9][0-9][0-9][0-9]' then 'ok' else 'bad' end)
                  when attribute = 'city'
                  then (case when value not like '%[^a-zA-Z '']%' then 'ok' else 'bad' end)
                  else 'ok'
             end) = 'ok' );
Check constraints aren't really designed to do that... the best you could do would be:
a validation trigger on the table, which sucks, or
implementing all your writes as stored procs, and disabling INSERT/UPDATE on the table otherwise. This also sucks.
At the risk of being an SO stereotype: you seem to be putting business logic in the DB layer. Check constraints are great for static checks, but they weren't really intended for much beyond that. I would be tempted to suggest looking upstream (the DA layer or a common layer of your codebase) for solutions as well.
Yes, I went off a little there. Sorry in advance.
In theory, you can implement this kind of check via a scalar UDF. However, be aware that scalar UDFs can be quite troublesome in such scenarios.
Considering that you have already chosen an EAV design for your system, adding a UDF-based check constraint might degrade the overall performance from bad to worse.
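If you do go the UDF route, the usual shape avoids dynamic SQL by keying the validation logic on [Name] instead of executing the [Check] column (the function and constraint names here are made up; TRY_CONVERT requires SQL Server 2012+):

CREATE FUNCTION dbo.fn_ValidateGenericSetting (@name NVARCHAR(50), @value NVARCHAR(MAX))
RETURNS BIT
AS
BEGIN
    -- NOTE: a NULL @value fails the CASE tests below and returns 0;
    -- add explicit NULL handling if NULL should be allowed.
    RETURN CASE
        WHEN @name = 'AlertRetentionDays'
            THEN CASE WHEN TRY_CONVERT(INT, @value) <= 60 THEN 1 ELSE 0 END
        WHEN @name = 'ExampleMode'
            THEN CASE WHEN TRY_CONVERT(INT, @value) IN (1, 2, 5) THEN 1 ELSE 0 END
        ELSE 1 -- settings without a rule always pass
    END;
END;
GO

ALTER TABLE dbo.GenericSetting
    ADD CONSTRAINT chk_GenericSetting_Value
    CHECK (dbo.fn_ValidateGenericSetting([Name], [Value]) = 1);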
I have a table, something like:
create table state (foo int not null, bar int not null, baz varchar(32));
create unique index state_foo_bar on state(foo, bar);
I'd like to lock for a unique record in this table. However, if there's no existing record I'd like to prevent anyone else from inserting a record, but without inserting myself.
I'd use "FOR UPDATE WITH RS USE AND KEEP EXCLUSIVE LOCKS" but that only seems to work if the record exists.
A) You can let DB2 generate every ID number. Let's say you have defined your Customers table:
CREATE TABLE Customers
( CustomerID Int NOT NULL
GENERATED ALWAYS AS IDENTITY
PRIMARY KEY
, Name Varchar(50)
, Billing_Type Char(1)
, Balance Dec(9,2) NOT NULL DEFAULT
);
Insert rows without specifying the CustomerID, since DB2 will always produce the value for you.
INSERT INTO Customers
(Name, Billing_Type)
VALUES
(:cname, :billtype);
If you need to know what the last value assigned in your session was, you can then use the IDENTITY_VAL_LOCAL() function.
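For example, directly after the INSERT:

SELECT IDENTITY_VAL_LOCAL() AS last_customer_id
FROM SYSIBM.SYSDUMMY1;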
B) In my environment, I generally specify GENERATED BY DEFAULT. This is in part due to the nature of our principal programming language, ILE RPG-IV, where developers have traditionally allowed the compiler to use the entire record definition. This means I can tell everyone to use a sequence to generate ID values for a given table or set of tables.
You can grant others only SELECT on the table, but users with SECADM or other elevated privileges could still insert.
You can also do it with a trigger: check the current session user, and only allow the insert when it is your user, for example:
if (SESSION_USER <> 'Alex') then
    rollback -- or generate an exception
end if;
It seems you also want to keep just one row; you can control that in a trigger too:
select count(0) into v_count from state;
if (v_count > 1) then
    rollback -- or generate an exception
end if;
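Put together, a sketch of both checks as one DB2 trigger might look like this (the trigger name, user name, and SQLSTATE are arbitrary; a BEFORE trigger cannot ROLLBACK, so it signals an error instead):

CREATE TRIGGER state_guard
NO CASCADE BEFORE INSERT ON state
FOR EACH ROW
BEGIN ATOMIC
    -- Reject inserts by anyone else, and keep at most one row in the table.
    IF SESSION_USER <> 'ALEX'
       OR (SELECT COUNT(*) FROM state) >= 1 THEN
        SIGNAL SQLSTATE '75000' SET MESSAGE_TEXT = 'insert not allowed';
    END IF;
END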
Can a Check Constraint (or some other technique) be used to prevent a value from being set that contradicts its prior value when its record is updated.
One example would be a NULL timestamp indicating something happened, like "file_exported". Once a file has been exported and has a non-NULL value, it should never be set to NULL again.
Another example would be a hit counter, where an integer is only permitted to increase, but can never decrease.
If it helps I'm using postgresql, but I'd like to see solutions that fit any SQL implementation
Use a trigger. This is a perfect job for a simple PL/PgSQL ON UPDATE ... FOR EACH ROW trigger, which can see both the NEW and OLD values.
See trigger procedures.
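For the two examples in the question, a minimal sketch might look like this (the table and column names stats, file_exported, and hit_count are assumptions):

CREATE OR REPLACE FUNCTION enforce_forward_progress() RETURNS trigger AS
$$
BEGIN
    -- Once exported, the timestamp may never revert to NULL.
    IF OLD.file_exported IS NOT NULL AND NEW.file_exported IS NULL THEN
        RAISE EXCEPTION 'file_exported cannot be reset to NULL';
    END IF;
    -- The hit counter may only increase.
    IF NEW.hit_count < OLD.hit_count THEN
        RAISE EXCEPTION 'hit_count may only increase';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER stats_forward_progress
BEFORE UPDATE ON stats
FOR EACH ROW EXECUTE PROCEDURE enforce_forward_progress();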
lfLoop has the best approach to the question. But to continue Craig Ringer's approach using triggers, here is an example. Essentially, you set the value of the column back to its original (old) value before the update is applied.
CREATE OR REPLACE FUNCTION example_trigger()
RETURNS trigger AS
$BODY$
BEGIN
new.valuenottochange := old.valuenottochange;
new.valuenottochange2 := old.valuenottochange2;
RETURN new;
END
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
DROP TRIGGER IF EXISTS trigger_name ON tablename;
CREATE TRIGGER trigger_name BEFORE UPDATE ON tablename
FOR EACH ROW EXECUTE PROCEDURE example_trigger();
One example would be a NULL timestamp indicating something happened,
like "file_exported". Once a file has been exported and has a non-NULL
value, it should never be set to NULL again.
Another example would be a hit counter, where an integer is only
permitted to increase, but can never decrease.
In both of these cases, I simply wouldn't record these changes as attributes on the annotated table; 'exported' and 'hit count' are distinct ideas, representing related but orthogonal real-world notions of the objects they relate to, so they would simply be different relations. Since we only want "file_exported" to occur once:
CREATE TABLE thing_file_exported(
    thing_id INTEGER PRIMARY KEY REFERENCES thing(id),
    file_name VARCHAR NOT NULL
);
The hit counter is similarly a different table:
CREATE TABLE thing_hits(
    thing_id INTEGER NOT NULL REFERENCES thing(id),
    hit_date TIMESTAMP NOT NULL,
    PRIMARY KEY (thing_id, hit_date)
);
And you might query with
SELECT thing.col1, thing.col2, tfe.file_name, count(th.thing_id)
FROM thing
LEFT OUTER JOIN thing_file_exported tfe
ON (thing.id = tfe.thing_id)
LEFT OUTER JOIN thing_hits th
ON (thing.id = th.thing_id)
GROUP BY thing.col1, thing.col2, tfe.file_name
Trigger functions in PostgreSQL have access to both the OLD and NEW values, and that code can access arbitrary tables and columns. It's not hard to build simple (crude?) finite state machines in trigger functions; you can even build table-driven state machines that way, as sketched below.
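A table-driven sketch (all names here are made up): the allowed transitions live in an ordinary table, and the trigger rejects anything not listed:

CREATE TABLE order_status_transition (
    from_status TEXT NOT NULL,
    to_status   TEXT NOT NULL,
    PRIMARY KEY (from_status, to_status)
);

CREATE OR REPLACE FUNCTION check_status_transition() RETURNS trigger AS
$$
BEGIN
    IF NEW.status <> OLD.status AND NOT EXISTS (
        SELECT 1 FROM order_status_transition t
        WHERE t.from_status = OLD.status AND t.to_status = NEW.status
    ) THEN
        RAISE EXCEPTION 'illegal status transition % -> %', OLD.status, NEW.status;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_status_fsm
BEFORE UPDATE ON orders
FOR EACH ROW EXECUTE PROCEDURE check_status_transition();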
I've got the following trigger on my SQL Server 2008 database:
CREATE TRIGGER tr_check_stoelen
ON Passenger
AFTER INSERT, UPDATE
AS
BEGIN
IF EXISTS(
SELECT 1
FROM Passenger p
INNER JOIN Inserted i ON i.flight = p.flight
WHERE p.flight = i.flight AND p.seat = i.seat
)
BEGIN
RAISERROR('Seat taken!',16,1)
ROLLBACK TRAN
END
END
The trigger throws an error when I try to run the query below. The query is supposed to insert two different passengers on two different flights. I'm sure neither seat is taken, but I can't figure out why the trigger raises the error. Does it have something to do with correlation?
INSERT INTO passagier VALUES
(13392,5315,3,'Janssen Z','2A','October 30, 2006 10:43','M'),
(13333,5316,2,'Janssen Q','2A','October 30, 2006 11:51','V')
UPDATE:
The table looks as below
CREATE TABLE Passagier
(
passengernumber int NOT NULL CONSTRAINT PK_passagier PRIMARY KEY,
flight int NOT NULL CONSTRAINT FK_passagier_vlucht REFERENCES vlucht(vluchtnummer)
ON UPDATE NO ACTION ON DELETE NO ACTION,
desk int NULL CONSTRAINT FK_passagier_balie REFERENCES balie(balienummer)
ON UPDATE NO ACTION ON DELETE NO ACTION,
name varchar(255) NOT NULL,
seat char(3) NULL,
checkInTime datetime NULL,
gender char(1) NULL
)
There are a few problems with this subquery:
SELECT 1
FROM Passenger p
INNER JOIN Inserted i on i.flight= p.flight
WHERE p.flight= i.flight AND p.seat= i.seat
First of all, the WHERE p.flight = i.flight is quite unnecessary, as it's already part of your join.
Second, the p.seat = i.seat should also be part of the JOIN.
Third, this trigger runs after the rows have been inserted, so this will always match, and your trigger will therefore always raise an error and roll back.
You can fix the trigger, but a much better method would be to not use a trigger at all. If I understand correctly what you're trying to do, all you need is a UNIQUE constraint on (flight, seat):
ALTER TABLE passagier
ADD CONSTRAINT IX_passagier_flightseat
UNIQUE (flight, seat)
If you run your trigger after inserting a record and then look for a record with the values you just inserted, you will always find it. You might try an INSTEAD OF trigger so you can check for existing records before actually doing the insert.
It might be throwing the error because it finds the inserted row itself in the table (the row matches against itself). You might want to add an additional filter to the WHERE clause, like "AND Passenger.passengernumber <> inserted.passengernumber".
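Putting both suggestions together, a sketch of the corrected trigger might look like this (shown as a re-creation of the original trigger; use ALTER TRIGGER on an existing one). The seat comparison moves into the JOIN, and the row is excluded from matching itself by primary key:

CREATE TRIGGER tr_check_stoelen
ON Passenger
AFTER INSERT, UPDATE
AS
BEGIN
    -- A seat clash exists only if some *other* row has the same flight and seat.
    IF EXISTS (
        SELECT 1
        FROM Passenger p
        INNER JOIN inserted i
            ON i.flight = p.flight
           AND i.seat = p.seat
           AND i.passengernumber <> p.passengernumber
    )
    BEGIN
        RAISERROR('Seat taken!', 16, 1)
        ROLLBACK TRAN
    END
END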