SQL Server Update Trigger only if it's an Update and only for specific values in the column - sql

There is a certain job that inserts into and updates a table called ContactInfo (with two columns: Id, EmailId) several times a day.
What's a good way to write a trigger on this table to revert the EmailId for specific Ids, whenever the EmailIds for those Ids get updated?
I don't mind hard-coding those Ids in the trigger since the list is only about 40.
But I'm specifically concerned that the trigger will fire for every update, since updates happen all the time, and I don't want the trigger to cause resource issues.
Additional info: the table has about 600k rows and is indexed on Id.
In summary: is it possible for the trigger to fire only when certain values in the column are updated, and not on every update to the column?

One alternative mechanism you might consider would be adding another table, called, say, LockedValues. I'm a bit unsure from your narrative what values you're trying to prevent changes to, but here's an example:
Table T, contains two columns, ID and Val:
create table T (
ID int not null,
Val int not null,
constraint PK_T PRIMARY KEY (ID),
constraint UK_T_Lockable UNIQUE (ID,Val)
)
And we have 3 rows:
insert into T(ID,Val)
select 1,10 union all
select 2,20 union all
select 3,30
And we want to prevent the row with ID 2 from having its Val changed:
create table Locked_T (
ID int not null,
Val int not null,
constraint UQ_Locked_T UNIQUE (ID,Val), --Only really need an index here
constraint FK_Locked_T_T FOREIGN KEY (ID,Val) references T (ID,Val)
)
insert into Locked_T (ID,Val) select 2,20
And so now, of course, any application that is only aware of T will be unable to edit the row with ID 2, but can freely alter rows 1 and 3.
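For example, a quick sketch of what that looks like with the sample rows above:
-- fails with a foreign key violation, because Locked_T still references (2, 20)
update T set Val = 99 where ID = 2;
-- succeeds, because rows 1 and 3 are not referenced by Locked_T
update T set Val = 11 where ID = 1;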
This has the benefit that the enforcement code is built into SQL Server already, so probably quite efficient. You don't need a unique key on Locked_T, but it should be indexed so that it's quite quick to detect that values aren't present.
This all assumes that you were going to write a trigger that rejected changes, rather than one that reverted changes. For that, you'd still have to write a trigger (though I'd still suggest having this separate table, and then writing your trigger to do an update inner joining inserted with Locked_T - which should be quite efficient still).
(Be warned, however: Triggers that revert changes are evil)
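If you do go the reverting route anyway, a rough sketch of such a trigger might look like this (the trigger name is made up; it assumes Locked_T holds the values to restore and that the foreign key above is omitted, so the update is allowed through and then undone):
create trigger TR_T_RevertLocked on T
after update
as
begin
    set nocount on;
    -- put the locked values back for any updated rows that have a locked entry
    update t
    set Val = l.Val
    from T t
    inner join inserted i on i.ID = t.ID
    inner join Locked_T l on l.ID = t.ID;
end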

Related

Updating sequence for primary key in postgres for a table with a lot of deletions

I have the following problem: I have a table with a lot of deletions and insertions of rows. I would also like to assign an id number to each of the current rows in the table. Currently I'm trying to do it with:
DROP SEQUENCE IF EXISTS market_orders_seq;
CREATE SEQUENCE market_orders_seq CACHE 1;
CREATE TABLE market_orders (
    id int NOT NULL DEFAULT nextval('market_orders_seq') PRIMARY KEY,
    typ varchar(5),
    tag varchar(30),
    owner_id int,
    owner_tag varchar(5),
    amount int,
    price int,
    market_id int
);
ALTER SEQUENCE market_orders_seq OWNED BY market_orders.id;
But if I understand correctly, sequences are monotonic and can't go down when I delete some rows, so I ran into a problem of the ids getting inflated quite fast. What is a solution to this problem? I would like to use the first unused id for my inserts, but I don't know how to do it.
While this is technically feasible, I would not actually recommend going this way.
First, an integer can store values up to about 2 billion, which you probably won't hit - and you can still switch to bigint, which goes up to about 9 × 10^18.
Also, identifying the gaps requires scanning the table for each and every insert, which is inefficient (the larger the table, the less efficient):
insert into market_orders(id, typ, ...)
select
min(mo.id) + 1,
'foo',
...
from market_orders mo
where not exists(select 1 from market_orders mo1 where mo1.id = mo.id + 1)
Side note: you should be using the [big]serial datatype, so you don’t need to handle the sequence by yourself.
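For example, a sketch of the same table using bigserial, which creates and owns the sequence automatically:
CREATE TABLE market_orders (
    id bigserial PRIMARY KEY,
    typ varchar(5),
    tag varchar(30),
    owner_id int,
    owner_tag varchar(5),
    amount int,
    price int,
    market_id int
);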

How to ignore duplicate Primary Key in SQL?

I have an Excel sheet with several values which I imported into SQL (book1$), and I want to transfer the values into ProcessList. Several rows have the same primary key, which is the ProcessId, because the rows contain original and modified values, both of which I want to keep. How do I make SQL ignore the duplicate primary keys?
I tried IGNORE_DUP_KEY = ON, but for rows with a duplicated primary key, only the latest row shows up.
CREATE TABLE dbo.ProcessList
(
Edited varchar(1),
ProcessId int NOT NULL PRIMARY KEY WITH (IGNORE_DUP_KEY = ON),
Name varchar(30) NOT NULL,
Amount smallmoney NOT NULL,
CreationDate datetime NOT NULL,
ModificationDate datetime
)
INSERT INTO ProcessList SELECT Edited, ProcessId, Name, Amount, CreationDate, ModificationDate FROM Book1$
SELECT * FROM ProcessList
Also, if I have a row and I update the values of that row, is there any way to keep the original values of the row and insert a clone of that row below, with the updated values and creation/modification date updated automatically?
How do I make SQL ignore the duplicate primary keys?
Under no circumstances can a transaction be committed that results in a table containing two distinct rows with the same primary key. That is fundamental to the nature of a primary key. SQL Server's IGNORE_DUP_KEY option does not change that -- it merely affects how SQL Server handles the problem. (With the option turned on it silently refuses to insert rows having the same primary key as any existing row; otherwise, such an insertion attempt causes an error.)
You can address the situation either by dropping the primary key constraint or by adding one or more columns to the primary key to yield a composite key whose collective value is not duplicated. I don't see any good candidate columns for an expanded PK among those you described, though. If you drop the PK then it might make sense to add a synthetic, autogenerated PK column.
Also, if I have a row and I update the values of that row, is there any way to keep the original values of the row and insert a clone of that row below, with the updated values and creation/modification date updated automatically?
If you want to ensure that this happens automatically, however a row happens to be updated, then look into triggers. If you want a way to automate it, but you're willing to make the user ask for the behavior, then consider a stored procedure.
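As a sketch of the synthetic-key idea (the RowId column name is made up), the table could look like this:
CREATE TABLE dbo.ProcessList
(
RowId int IDENTITY(1,1) NOT NULL PRIMARY KEY, -- synthetic, autogenerated key
Edited varchar(1),
ProcessId int NOT NULL, -- duplicates are now allowed here
Name varchar(30) NOT NULL,
Amount smallmoney NOT NULL,
CreationDate datetime NOT NULL,
ModificationDate datetime
)
The INSERT from Book1$ would then need to list the target columns explicitly and let RowId generate itself.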
try this
INSERT IGNORE INTO ProcessList SELECT Edited, ProcessId, Name, Amount, CreationDate, ModificationDate FROM Book1$
SELECT * FROM ProcessList
You drop the constraint. Something like this:
alter table dbo.ProcessList drop constraint PK_ProcessId;
You need to know the constraint name.
In other words, you can't ignore a primary key. It is defined as unique and not-null. If you want the table to have duplicates, then that is not the primary key.

Candidate for Check Constraint

I have a table that holds Tasks for a particular person.
TaskID INT PK
PersonID INT (FK to Person Table)
TaskStatusID INT (FK To list of Statuses)
Deleted DATETIME NULL
The business rule is that a person can not have more than one active task at a time. A task is 'Active' based on its TaskStatusID. The statuses are:
5=New, 6=In Progress, 7=Under Review, 10=Complete, 11=Cancelled
These are values in my Status table.
So, 5, 6, 7, 8 and 9 are Active tasks. The rest are finalised.
A person can only have one task which is in an active state.
So, to test if I can add a task for this person, I would do:
SELECT CASE WHEN EXISTS(SELECT * FROM Task WHERE PersonID = 123 AND TaskStatusID IN (5,6,7,8,9)) THEN 0 ELSE 1 END AS CanAdd
The table has a lot of rows. Around 200,000.
I was thinking of adding a Check Constraint on this table, so on update/insert, I make that query to see if the row being added/edited will break the data integrity with regards the business rules.
Is a check constraint suitable for this, or is there a more efficient way to keep the data integral.
Something like:
ADD CONSTRAINT chk_task CHECK (
EXISTS(SELECT * FROM Task WHERE PersonID = ?? AND TaskStatusID IN (5,6,7,8,9)))
You can't easily do it with a check constraint because they only (naturally) can make assertions about columns within the same row. There are some kludgy ways to get around that by using a UDF to query other rows but most implementations I've seen have odd edge cases where it's possible to work around the UDF and end up with invalid rows after all.
What you can do is to create an indexed view that maintains the constraint:
create table dbo.Tasks (
TaskID INT not null primary key,
PersonID INT not null,
TaskStatusID INT not null,
Deleted DATETIME NULL
)
go
create view dbo.DRI_Tasks_OneActivePerPerson
with schemabinding
as
select PersonID from dbo.Tasks
where TaskStatusID IN (5,6,7,8,9)
go
create unique clustered index UX_DRI_Tasks_OneActivePerPerson
on dbo.DRI_Tasks_OneActivePerPerson (PersonID)
And now this insert succeeds (because there's only one row with an active status for person 1):
insert into dbo.Tasks (TaskID,PersonID,TaskStatusID)
values (1,1,5),(2,1,1),(3,1,4)
But this insert fails:
insert into dbo.Tasks (TaskID,PersonID,TaskStatusID)
values (4,2,6),(5,2,8)
With the message:
Cannot insert duplicate key row in object 'dbo.DRI_Tasks_OneActivePerPerson'
with unique index 'UX_DRI_Tasks_OneActivePerPerson'.
The duplicate key value is (2).
If you are using SQL Server 2008 or later version, you could create a unique filtered index:
CREATE UNIQUE INDEX UQ_ActiveStatus
ON dbo.Task (PersonID)
WHERE TaskStatusID IN (5, 6, 7, 8, 9);
It would act as a unique constraint specifically for rows with the specified statuses. You would only be able to have one of the specified statuses per person.
You could use the check constraint approach above, but the method I would suggest is to write a DML trigger for insert/update that raises an error when the rule is violated.
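A rough sketch of that idea (names are illustrative; SQL Server offers AFTER and INSTEAD OF triggers rather than BEFORE triggers):
CREATE TRIGGER trg_Task_OneActive
ON dbo.Task
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- reject the change if any affected person now has more than one active task
    IF EXISTS (
        SELECT 1
        FROM dbo.Task t
        WHERE t.TaskStatusID IN (5, 6, 7, 8, 9)
          AND t.PersonID IN (SELECT PersonID FROM inserted)
        GROUP BY t.PersonID
        HAVING COUNT(*) > 1
    )
    BEGIN
        ROLLBACK TRANSACTION;
        RAISERROR('A person can only have one active task.', 16, 1);
    END
END
That said, the indexed view or filtered index above enforces the same rule declaratively and is usually the simpler choice.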

How to make trigger on two tables?

I have two tables which I insert into using JDBC. For example, they are parcelsTable and filesTable. And I have some cases:
1. INSERT new row in both tables.
2. INSERT new row only in parcelsTable.
TABLES:
DROP TABLE parcelsTable;
CREATE TABLE parcelsTable (
num serial PRIMARY KEY,
parcel_name text,
filestock_id integer
);
DROP TABLE filesTable;
CREATE TABLE filesTable (
num serial PRIMARY KEY,
file_name text,
files bytea
);
I want to set parcelsTable.filestock_id = filesTable.num when I have an INSERT into both tables, using a TRIGGER.
Is this possible? How do I know that I'm inserting into both tables?
You don't need to use a trigger to get the foreign key value in this case. Since you have it set as serial you can access the latest value using currval. Run something like this from your app:
insert into filesTable (file_name, files) select 'f1', 'asdf';
insert into parcelsTable (parcel_name, filestock_id) select 'p1', currval('filesTable_num_seq');
Note that this should only be used when inserting one record at a time to grab individual key values from currval. I'm calling the default sequence name of table_column_seq, which you should be able to use unless you've explicitly declared something different.
I would also recommend explicitly declaring nullability and the relationship:
CREATE TABLE parcelsTable (
...
filestock_id integer NULL REFERENCES filesTable (num)
);
Here is a working demo at SqlFiddle.
This might not be an answer, but it may be what you need. I am making this an answer instead of a comment because I need the space.
I don't know if you can have a trigger on two tables. Typically this is not needed. As in your case, typically either you are creating a parent record and a child record, or you are just creating a child record of an existing record.
So, typically, if you need a trigger when creating both, it is sufficient to put the trigger on the parent record.
I don't think you can do what you need. What you are trying to do is populate the foreign key with the parent record primary key in the same transaction. I don't think you can do that. I think you will have to provide the foreign key in the insert for parcelsTable.
You will end up leaving this NULL when you are creating a record in the parcelsTable at times when you are not creating a record in filesTable. So I think you will want to set the foreign key in the INSERT statement.
The only idea I have so far is that you can create a function that does the inserts into the tables indirectly. Then you can apply whatever conditions you need, and it works with parallel inserts too.
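As a sketch of that idea (the function and parameter names are made up; it assumes the table definitions above):
CREATE OR REPLACE FUNCTION insert_parcel_with_file(
    p_parcel_name text,
    p_file_name text,
    p_files bytea
) RETURNS integer AS $$
DECLARE
    v_file_id integer;
    v_parcel_id integer;
BEGIN
    -- insert the file first and capture its generated key
    INSERT INTO filesTable (file_name, files)
    VALUES (p_file_name, p_files)
    RETURNING num INTO v_file_id;

    -- insert the parcel, linking it to the new file
    INSERT INTO parcelsTable (parcel_name, filestock_id)
    VALUES (p_parcel_name, v_file_id)
    RETURNING num INTO v_parcel_id;

    RETURN v_parcel_id;
END;
$$ LANGUAGE plpgsql;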

Oracle Sql Check Constraint

What I want to do is simple and below are details. I have two tables.
Create Table Event(
IDEvent number (8) primary key,
StartDate date not null,
EndDate date not null
);
This is fine.
Here is second table.
Create Table Game(
IDGame number (8) primary key,
GameDate date not null,
constraint checkDate
check (GameDate >= to_date(StartDate references from Event(StartDate)))
);
The constraint checkDate is supposed to check that the GameDate is on or after the StartDate. When checking it I'm getting the error: Missing right parenthesis.
My question is: if this is possible to do, then why is it giving me an error?
A check constraint in a table can only verify conditions on the columns of that particular table. You can not refer to columns from other tables.
If you need to verify conditions that involve columns from a different table, you can do it from a before insert/update trigger on that table.
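A rough sketch of such a trigger (it assumes Game has an IDEvent column referencing Event, which the original DDL does not include):
CREATE OR REPLACE TRIGGER trg_check_game_date
BEFORE INSERT OR UPDATE ON Game
FOR EACH ROW
DECLARE
    v_start Event.StartDate%TYPE;
BEGIN
    -- look up the start date of the referenced event
    SELECT e.StartDate
    INTO v_start
    FROM Event e
    WHERE e.IDEvent = :NEW.IDEvent; -- IDEvent on Game is an assumed column

    IF :NEW.GameDate < v_start THEN
        RAISE_APPLICATION_ERROR(-20001, 'GameDate is before the event StartDate');
    END IF;
END;
/
As the next answer points out, a trigger like this still leaves the concurrent cases (such as the event dates changing at the same time) to be handled separately.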
What you want to do is far from simple.
The syntax you propose doesn't work on any RDBMS. It would be nice to have, but none of the RDBMS vendors have implemented it, because enforcing such a cross-table integrity rule would mean locking the referenced table while updating the game table. If you try to build it yourself, you'll have to do the locking yourself. You'll have to take into account all actions that could possibly violate your rule, such as:
inserting a game
updating the gamedate to a less recent date
updating the event startdate to a more recent date
deleting an event
And for each of these actions you'll have to think of writing code that is multi user proof, by locking the right records in the other table.
If you want to reduce this complexity, you might want to look at a product called RuleGen (www.rulegen.com)
Or you may want to build a specific API and include the checks in just the right places. You'll still have to manually lock yourself in this scenario.
Hope this helps.
Regards,
Rob.
There is one hack that you can make, but I doubt that performance of inserting games or events will be acceptable, once the tables grow to a certain size:
CREATE TABLE Event
(
IDEvent NUMBER(8) PRIMARY KEY,
StartDate DATE NOT NULL,
EndDate DATE NOT NULL
);
CREATE TABLE Game
(
IDGame NUMBER(8) PRIMARY KEY,
GameDate DATE NOT NULL,
eventid NUMBER(8), -- this is different to your table definition
CONSTRAINT fk_game_event FOREIGN KEY (eventid) REFERENCES event (idevent)
);
CREATE INDEX game_eventid ON game (eventid);
CREATE MATERIALIZED VIEW LOG ON event
WITH ROWID, SEQUENCE (idevent, startdate) INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG ON game
WITH ROWID, SEQUENCE (idgame, eventid, gamedate) INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW mv_event_game
REFRESH COMPLETE ON COMMIT WITH ROWID
AS
SELECT ev.idevent,
ev.startdate,
g.gamedate
FROM event ev, game g
WHERE g.eventid = ev.idevent;
ALTER TABLE mv_event_game
ADD CONSTRAINT check_game_start check (gamedate >= startdate);
Now any transaction that inserts a game that starts before the referenced event will throw an error when trying to commit the transaction:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning and OLAP options
SQL> INSERT INTO event
2 (idevent, startdate, enddate)
3 values
4 (1, date '2012-01-22', date '2012-01-24');
1 row created.
SQL>
SQL> INSERT INTO game
2 (idgame, eventid, gamedate)
3 VALUES
4 (1, 1, date '2012-01-01');
1 row created.
SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (FOOBAR.CHECK_GAME_START) violated
But again: This will make inserts in both tables slower as the query inside the mview needs to be run each time a commit is performed.
I wasn't able to change the refresh type to FAST which probably would improve commit performance.