Trigger creation/modification to ensure field equals insertion date - sql

I have a table named Customer and the column in question is dbupddate. This column should contain the datetime of the query that resulted in the record being inserted.
I have already made a default constraint to getdate():
CREATE TABLE [dbo].[customer]
(
[dbupddate] [DATETIME] NOT NULL
CONSTRAINT [DF_customer_dbupddate] DEFAULT (GETDATE()),...
but this does not prevent someone from accidentally entering an irrelevant value.
How can I ensure the column dbupddate has the insert datetime?
I guess the answer will involve a trigger. In that case, consider the following already existing trigger, which should not have its effects lost or modified in any way:
CREATE TRIGGER [dbo].[customer_ins_trig]
ON [dbo].[customer]
AFTER INSERT
AS
BEGIN
DELETE u
FROM transfer_customer_unprocessed u
WHERE EXISTS (SELECT 1 FROM inserted i WHERE i.code = u.code)
INSERT INTO transfer_customer_unprocessed (code, dbupddate)
SELECT code, dbupddate
FROM inserted
END
Maybe I could add some lines to that one to suit my needs? Or maybe create another one?

In the procedure which is inserting the data, just don't provide a value for that column. Granted, someone could open SSMS and update it if they have the rights, but you could restrict that with permissions too.
Additionally, you may want to look into rowversion if this is part of a larger initiative to track changes.
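For example, here is a minimal sketch (the table name customer_demo and column rv are made up for illustration) showing that an INSERT which simply omits dbupddate picks up the DEFAULT, and what an added rowversion column would look like:
-- Sketch only: a cut-down customer table with the existing DEFAULT plus a
-- rowversion column (rv is a made-up name) for broader change tracking.
CREATE TABLE dbo.customer_demo
(
    code      INT      NOT NULL,
    dbupddate DATETIME NOT NULL
        CONSTRAINT DF_customer_demo_dbupddate DEFAULT (GETDATE()),
    rv        ROWVERSION            -- bumped automatically on every insert/update
);

-- Omitting dbupddate lets the DEFAULT supply the insert time.
INSERT INTO dbo.customer_demo (code) VALUES (1);

SELECT code, dbupddate, rv FROM dbo.customer_demo;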

Here's a trigger that does what you want, I think. Note that the user cannot control content going into InsertDate.
This is a reasonable approach for keeping "last updated" info for your data. However, as @scsimon said, if you are doing this for other reasons, ROWVERSION is worth exploring; it does not require a trigger and will be much more performant.
DROP TABLE IF EXISTS Test;
GO
CREATE TABLE Test (
Id INT NOT NULL ,
Content NVARCHAR(MAX) NOT NULL ,
InsertDate DATETIME NULL
);
GO
CREATE TRIGGER TR_Test
ON Test
AFTER INSERT, UPDATE
AS BEGIN
UPDATE t SET t.InsertDate = GETDATE() FROM Test t INNER JOIN inserted i ON i.Id = t.Id;
END;
GO
INSERT Test VALUES (1, '1', NULL), (2, '2', NULL), (3, '3', NULL);
SELECT * FROM Test;
GO
UPDATE Test SET Id = 4, Content = 4 WHERE Id = 1;
UPDATE Test SET Id = 5, Content = 5, InsertDate = NULL WHERE Id = 2;
SELECT * FROM Test;
GO
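If you want to apply this pattern to the customer table from the question without touching [customer_ins_trig], a separate trigger can do it. A minimal sketch, assuming [code] identifies a customer row (the full column list of customer isn't shown in the question):
-- Sketch only: a second trigger that overwrites dbupddate with the server time.
-- The UPDATE below only re-fires this trigger if the RECURSIVE_TRIGGERS database
-- option is ON (it is OFF by default).
-- Note: [customer_ins_trig] reads dbupddate from its own inserted rows, so the copy
-- into transfer_customer_unprocessed may still carry a user-supplied value; if that
-- matters, an INSTEAD OF INSERT trigger that substitutes GETDATE() is an alternative.
CREATE TRIGGER dbo.customer_set_dbupddate
ON dbo.customer
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE c
    SET    c.dbupddate = GETDATE()
    FROM   dbo.customer c
    INNER JOIN inserted i ON i.code = c.code;
END;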


Insert trigger doesn't do what I want it to do

I made a trigger which should prevent inserting a record into the rental table ('Uitlening') if the person has an overdue payment (Boete). Unfortunately it doesn't work and I can't find the reason why. 'Boete' is an attribute of a table other than the rental one. Can someone help me?
CREATE TRIGGER [dbo].[Trigger_uitlening]
ON [dbo].[Uitlening]
FOR INSERT
AS
BEGIN
DECLARE @Boete decimal(10, 2);
SET @Boete = (SELECT Boete FROM Lid WHERE LidNr = (SELECT LidNr FROM inserted));
IF @Boete = 0
BEGIN
INSERT INTO Uitlening
SELECT *
FROM inserted;
END;
END;
It sounds like what you actually need is a cross-table constraint.
You can either do this by throwing an error in the trigger:
CREATE TRIGGER [dbo].[Trigger_uitlening]
ON [Rental]
AFTER INSERT
AS
SET NOCOUNT ON;
IF EXISTS (SELECT 1
FROM inserted i
INNER JOIN dbo.Person p ON i.[personID] = p.[personID]
WHERE p.[PaymentDue] <= 0
)
THROW 50001, 'PaymentDue is less than 0', 1;
A better solution is to utilize a trick with an indexed view. This is based on an article by spaghettidba.
We first create a dummy table of two rows
CREATE TABLE dbo.DummyTwoRows (dummy bit not null);
INSERT DummyTwoRows (dummy) VALUES(0),(1);
Then we create the following view:
CREATE VIEW dbo.vwPaymentLessThanZero
WITH SCHEMABINDING -- needs schema-binding
AS
SELECT 1 AS DummyOne
FROM dbo.Rental r
JOIN dbo.Person p ON p.personID = r.personID
CROSS JOIN dbo.DummyTwoRows dtr
WHERE p.PaymentDue <= 0;
This view should in theory always have no rows in it. To enforce that, we create an index on it:
CREATE UNIQUE CLUSTERED INDEX CX_vwPaymentLessThanZero
ON dbo.vwPaymentLessThanZero (DummyOne);
Now if you try to add a row that qualifies for the view, it will fail with a unique violation, because the cross-join is doubling up the rows.
Note that in practice the view index takes up no space because there are never any rows in it.
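To see it in action, an insert that qualifies for the view should now be rejected. A sketch, assuming the Rental column names used in the answer further down ([DATE], [personID], [productID]) and a Person row (say personID 42) whose PaymentDue meets the view's condition:
-- Sketch only: this insert produces two identical rows in the indexed view
-- (because of the cross join to DummyTwoRows), so it fails with a duplicate key
-- error on CX_vwPaymentLessThanZero and the Rental row is never committed.
INSERT INTO dbo.Rental ([DATE], [personID], [productID])
VALUES (GETDATE(), 42, 1);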
Assuming you just want to insert records into [Rental] for those users who have [PaymentDue] <= 0, as you mentioned in your last comment:
no record in rental can be inserted if the person has a PaymentDue that's greater than zero
And other records should be silently discarded, since you didn't give a clear answer to @Larnu's question:
should that row be silently discarded, or should an error be thrown?
If above assumptions are true, your trigger would look like:
CREATE TRIGGER [dbo].[Trigger_uitlening]
ON [Rental]
INSTEAD OF INSERT
AS
BEGIN
INSERT INTO [Rental] ( [DATE], [personID], [productID])
SELECT i.[DATE], i.[personID], i.[productID]
FROM INSERTED i
INNER JOIN Person p ON i.[personID] = p.[personID]
WHERE p.[PaymentDue] <= 0
END;
Attention! When you create a trigger with FOR INSERT or AFTER INSERT, don't write insert into table select * from inserted, because the database inserts the data automatically; all you can do is ROLLBACK that operation. But when you create a trigger with INSTEAD OF INSERT, you must write insert into table select * from inserted, otherwise nothing gets inserted.
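Applied to the original table names from the question, the AFTER INSERT variant therefore rolls back instead of re-inserting. A sketch (trigger name made up), assuming LidNr links Uitlening to Lid as in the question's trigger:
-- Sketch only: refuse the insert when the member has an outstanding Boete.
CREATE TRIGGER dbo.Trigger_uitlening_afterinsert
ON dbo.Uitlening
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1
               FROM inserted i
               INNER JOIN dbo.Lid l ON l.LidNr = i.LidNr
               WHERE l.Boete > 0)
    BEGIN
        ROLLBACK TRANSACTION;   -- undoes the insert that fired the trigger
        RAISERROR('Lid has an outstanding Boete; uitlening refused.', 16, 1);
    END;
END;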

T-SQL Trigger - Audit Column Change

Given a simple table with an ID, what is the correct way to audit a column being changed? I am asking after looking at various answers which seem not to be working.
Here is what I have:
Create Table Tbl_Audit
(
AuditId int identity(1,1) not null,
Tbl_Id int not null,
Tbl_Old_ColumnValue varchar(255),
Tbl_New_ColumnValue varchar(255)
)
GO
Create Trigger Tr_Tbl_ColumnChanged on Tbl
after insert, update
As
begin
if(update(ColumnName))
begin
insert into tbl_audit
(
Tbl_Id,
Tbl_Old_ColumnValue,
Tbl_New_ColumnValue
)
select
tbl.PKId,
tbl.ColumnName,
i.ColumnName
from
Tbl tbl join
inserted i
on tbl.PKId = i.PKId
end
end
What I see is thousands of rows where Tbl_Old_ColumnValue = Tbl_New_ColumnValue, which is not what I want.
I would expect to run:
select top 10 * from tbl_audit where Tbl_Old_ColumnValue !=Tbl_New_ColumnValue
But this returns no results.
In order to get results of columns that actually changed, I need to run a very expensive query:
select top 10
old.AuditId,
old.Tbl_Old_ColumnValue,
new.Tbl_Old_ColumnValue as [Tbl_New_ColumnValue]
from tbl_audit [old]
join Tbl_Audit [new]
on [old].Tbl_Id = [new].Tbl_Id and [old].AuditId != [new].AuditId
where [old].Tbl_Old_ColumnValue != [new].Tbl_Old_ColumnValue
Results:
AuditId Tbl_Id Tbl_Old_ColumnValue Tbl_New_ColumnValue
10051 1 old_value old_value
10052 1 new_value new_value
But that doesn't produce what I expect:
AuditId Tbl_Id Tbl_Old_ColumnValue Tbl_New_ColumnValue
10057 1 old_value Some New Value
Oddly, if I modify the column directly via SSMS using:
update Tbl set Tbl.ColumnValue = 'Some New Value'
I see what I expect from my trigger:
AuditId Tbl_Id Tbl_Old_ColumnValue Tbl_New_ColumnValue
10057 1 old_value Some New Value
What am I doing wrong?
Also, how do I eliminate auditing of rows where update(ColumnName) is effectively false? I.e., the column (even if it is being set) should not be audited when it is being set to its previous/old value.
update(ColumnName) doesn't mean that the value has changed, just that that column was involved in the insert/update - and it will always be involved in an insert. You need to compare the old and new values using inserted and deleted e.g.
insert into tbl_audit
(
Tbl_Id,
Tbl_Old_ColumnValue,
Tbl_New_ColumnValue
)
select
i.PKId,
d.ColumnName,
i.ColumnName
from
inserted i
left join deleted d on d.PKId = i.PKId
-- Insert: d.PKId is null, there are no records in deleted
where d.PKId is null
-- Change from value to null
or (i.ColumnName is null and d.ColumnName is not null)
-- Change from null to value
or (i.ColumnName is not null and d.ColumnName is null)
-- Change in value
or i.ColumnName <> d.ColumnName;
You can potentially simplify the null check using coalesce and a suitable value which will never actually occur in your data.
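For instance, inside the trigger body the three change checks above could collapse into one predicate. A sketch, using COALESCE and a sentinel string that must never appear in real data:
-- Sketch only: the same SELECT with the change checks collapsed into one predicate.
-- '~no-value~' must be a value that can never occur in ColumnName.
select i.PKId, d.ColumnName, i.ColumnName
from inserted i
left join deleted d on d.PKId = i.PKId
where d.PKId is null
   or coalesce(i.ColumnName, '~no-value~') <> coalesce(d.ColumnName, '~no-value~');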
The documentation is actually pretty good on all this.
And if the column is not always included in an update, then the update(ColumnName) test is still worth doing because it speeds up the trigger, and triggers should be as fast as possible. Personally I short circuit out early e.g. if not update(ColumnName) return;
Obviously you need to adapt that logic to handle all the columns you are auditing.
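Putting those pieces together, a complete version of the trigger might look like this sketch (single audited column, names as in the question, and the ...Value columns from Tbl_Audit):
-- Sketch only: early short-circuit plus the inserted/deleted comparison.
create trigger Tr_Tbl_ColumnChanged on Tbl
after insert, update
as
begin
    set nocount on;
    -- Skip the work entirely if the column wasn't referenced by the statement.
    if not update(ColumnName) return;

    insert into Tbl_Audit (Tbl_Id, Tbl_Old_ColumnValue, Tbl_New_ColumnValue)
    select i.PKId, d.ColumnName, i.ColumnName
    from inserted i
    left join deleted d on d.PKId = i.PKId
    where d.PKId is null                                        -- brand new row
       or (i.ColumnName is null and d.ColumnName is not null)   -- value changed to NULL
       or (i.ColumnName is not null and d.ColumnName is null)   -- NULL changed to a value
       or i.ColumnName <> d.ColumnName;                         -- value changed
end;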

SQL - Unique key across 2 columns of same table?

I use SQL Server 2016. I have a database table called "Member".
In that table, I have these 3 columns (for the purpose of my question):
idMember [INT - Identity - Primary Key]
memEmail
memEmailPartner
I want to prevent a row to use an email that already exists in the table.
Both email columns are not mandatory, so they can be left blank (NULL).
If I create a new Member:
If not blank, the values entered for "memEmail" and "memEmailPartner" (independently) should not be found in any other rows in columns memEmail nor memEmailPartner.
So if I want to create a row with email (dominic@email.com) I must not find any occurrences of that value in memEmail or memEmailPartner.
If I update an existing Member:
I must not find any occurrences of that value in memEmail or memEmailPartner, with the exception that I am updating the row (idMember) which already has the value in memEmail or memEmailPartner.
--
From what I read on Google, it should be possible to do something with a Function-Based Check Constraint but I can't make that work.
Does anyone have a solution to my problem?
Thank you.
I may have misunderstood exactly what you were asking but it looks like you want a simple upsert query with IF EXISTS conditions.
DECLARE @emailAddress VARCHAR(255)= 'dominic@email.com', --dummy value
@id INT= 2; --dummy value
IF NOT EXISTS
(
SELECT 1
FROM #Member
WHERE memEmail = @emailAddress
OR memEmailPartner = @emailAddress
)
BEGIN
SELECT 'insert';
END;
ELSE IF EXISTS
(
SELECT 1
FROM #Member
WHERE idMember = @id
)
BEGIN
SELECT 'update';
END;
A trigger is the traditional way of doing what you're asking for. Here's a simple demo:
--if object_id('member') is not null drop table member
go
create table member (
idMember INT Identity Primary Key,
memEmail varchar(100),
memEmailPartner varchar(100)
)
go
create trigger trg_member on member after insert, update as
begin
set nocount on
if exists (select 1 from member m join inserted i on i.memEmail = m.memEmail and i.idMember <> m.idMember) or
exists (select 1 from member m join inserted i on i.memEmail = m.memEmailPartner and i.idMember <> m.idMember) or
exists (select 1 from member m join inserted i on i.memEmailPartner = m.memEmail and i.idMember <> m.idMember) or
exists (select 1 from member m join inserted i on i.memEmailPartner = m.memEmailPartner and i.idMember <> m.idMember)
begin
raiserror('Email addresses must be unique.', 16, 1)
rollback
end
end
go
insert member(memEmail, memEmailPartner) values('a@a.com', null), ('b@b.com', null), (null, 'c@c.com'), (null, 'd@d.com')
go
select * from member
insert member(memEmail, memEmailPartner) values('a@a.com', null) -- should fail
go
insert member(memEmail, memEmailPartner) values(null, 'a@a.com') -- should fail
go
insert member(memEmail, memEmailPartner) values('c@c.com', null) -- should fail
go
insert member(memEmail, memEmailPartner) values(null, 'c@c.com') -- should fail
go
insert member(memEmail, memEmailPartner) values('e@e.com', null) -- should work
go
insert member(memEmail, memEmailPartner) values(null, 'f@f.com') -- should work
go
select * from member
-- Make sure updates still work!
update member set memEmail = memEmail, memEmailPartner = memEmailPartner
I've not tested this extensively but it should be enough to get you started if you want to try this approach.
StuartLC notes the potential for the UDF check constraint to fail in set-based updates and/or various other conditions; triggers don't have this problem.
Stuart also suggests reconsidering whether this should really be a database constraint or managed through business logic elsewhere. I'm inclined to agree - my gut feel here is that sooner or later you will come across a situation that requires email addresses to be reused, or in some other way not strictly unique.
TL;DR
The wisdom of applying this kind of business rule logic in the database needs to be reconsidered - this check is likely a better candidate for your application, or a stored procedure which acts as an insert gate keeper instead of direct new row inserts into the table.
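For illustration only, such a gate-keeper procedure could look roughly like this sketch (the procedure name is made up; error handling and concurrency control are left out):
-- Sketch only: inserts go through this procedure instead of straight into Member,
-- so the email rule lives in one place. Concurrent callers would still need
-- appropriate locking/isolation to be completely safe.
CREATE PROCEDURE dbo.usp_AddMember
    @memEmail        VARCHAR(100) = NULL,
    @memEmailPartner VARCHAR(100) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    IF EXISTS (SELECT 1 FROM dbo.Member
               WHERE memEmail        IN (@memEmail, @memEmailPartner)
                  OR memEmailPartner IN (@memEmail, @memEmailPartner))
    BEGIN
        RAISERROR('One of the email addresses is already in use.', 16, 1);
        RETURN;
    END;

    INSERT INTO dbo.Member (memEmail, memEmailPartner)
    VALUES (@memEmail, @memEmailPartner);
END;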
Ignoring the Warnings
That said, I do believe that what you want is possible in a constraint UDF, albeit with potentially atrocious performance consequences*1, and likely prone to race conditions in set-based updates.
Here's a user defined function which applies the unique email logic across both columns. Note that by the time the constraint is checked, that the row is IN the table already, hence the new row itself needs to be excluded from the duplicate checks.
My code is also dependent on ANSI NULL behaviour, i.e. that the predicates NULL = NULL and X IN (NULL) both evaluate to unknown (not true), and hence are excluded from the failure check (in order to meet your requirement that NULLs do not fail the rule).
We also need to check for the insert of BOTH new columns being non-null, but duplicated.
So here's a UDF doing the checking:
CREATE FUNCTION dbo.CheckUniqueEmails(@id int, @memEmail varchar(50),
@memEmailPartner varchar(50))
RETURNS bit
AS
BEGIN
DECLARE @retval bit;
IF @memEmail = @memEmailPartner
OR EXISTS (SELECT 1 FROM MyTable WHERE memEmail IS NOT NULL
AND memEmail IN(@memEmail, @memEmailPartner) AND idMember <> @id)
OR EXISTS (SELECT 1 FROM MyTable WHERE memEmailPartner IS NOT NULL
AND memEmailPartner IN(@memEmail, @memEmailPartner) AND idMember <> @id)
SET @retval = 0
ELSE
SET @retval = 1;
RETURN @retval;
END;
GO
Which is then enforced in a CHECK constraint:
ALTER TABLE MyTable ADD CHECK (dbo.CheckUniqueEmails(
idMember, memEmail, memEmailPartner) = 1);
I've put a SQLFiddle up here
Uncomment the 'failed' test cases to ensure that the above check constraint is working.
I haven't tested this with updates, and as per Martin's advice on the link, this will likely break on an insert with multiple rows.
*1 - we'll need indexes on BOTH email address columns.
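In concrete terms (a sketch, reusing the MyTable name from the UDF above), that means something like:
-- Sketch only: supporting indexes so the two EXISTS probes are seeks rather than scans.
CREATE INDEX IX_MyTable_memEmail        ON MyTable (memEmail);
CREATE INDEX IX_MyTable_memEmailPartner ON MyTable (memEmailPartner);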

Get IDENTITY value in the same T-SQL statement it is created in?

I was asked if you could have an insert statement, which had an ID field that was an "identity" column, and if the value that was assigned could also be inserted into another field in the same record, in the same insert statement.
Is this possible (SQL Server 2008r2)?
Thanks.
You cannot really do this - because the actual value that will be used for the IDENTITY column really only is fixed and set when the INSERT has completed.
You could however use e.g. a trigger
CREATE TRIGGER trg_YourTableInsertID ON dbo.YourTable
AFTER INSERT
AS
UPDATE dbo.YourTable
SET dbo.YourTable.OtherID = i.ID
FROM dbo.YourTable t2
INNER JOIN INSERTED i ON i.ID = t2.ID
This would fire right after any rows have been inserted, and would set the OtherID column to the values of the IDENTITY columns for the inserted rows. But it's strictly speaking not within the same statement - it's just after your original statement.
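A quick way to see it work, as a sketch (YourTable, ID, and OtherID are just the placeholder names from the trigger above):
-- Sketch only: a table matching the placeholder names, then two inserts.
CREATE TABLE dbo.YourTable
(
    ID      INT IDENTITY(1,1) PRIMARY KEY,
    OtherID INT NULL
);
GO
-- (create trg_YourTableInsertID exactly as shown above)
INSERT INTO dbo.YourTable (OtherID) VALUES (NULL), (NULL);
SELECT ID, OtherID FROM dbo.YourTable;  -- OtherID now mirrors ID for each row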
You can do this by having a computed column in your table:
DECLARE @QQ TABLE (ID INT IDENTITY(1,1), Computed AS ID PERSISTED, Letter VARCHAR (1))
INSERT INTO @QQ (Letter)
VALUES ('h'),
('e'),
('l'),
('l'),
('o')
SELECT *
FROM @QQ
1 1 h
2 2 e
3 3 l
4 4 l
5 5 o
About the checked answer:
You cannot really do this - because the actual value that will be used
for the IDENTITY column really only is fixed and set when the INSERT
has completed.
marc_s, I suppose you are not actually right. Yes, he can! ))
The way to a solution is IDENT_CURRENT():
CREATE TABLE TemporaryTable(
Id int PRIMARY KEY IDENTITY(1,1) NOT NULL,
FkId int NOT NULL
)
ALTER TABLE TemporaryTable
ADD CONSTRAINT [Fk_const] FOREIGN KEY (FkId) REFERENCES [TemporaryTable] ([Id])
INSERT INTO TemporaryTable (FkId) VALUES (IDENT_CURRENT('[TemporaryTable]'))
INSERT INTO TemporaryTable (FkId) VALUES (IDENT_CURRENT('[TemporaryTable]'))
INSERT INTO TemporaryTable (FkId) VALUES (IDENT_CURRENT('[TemporaryTable]'))
INSERT INTO TemporaryTable (FkId) VALUES (IDENT_CURRENT('[TemporaryTable]'))
UPDATE TemporaryTable
SET [FkId] = 3
WHERE Id = 2
SELECT * FROM TemporaryTable
DROP TABLE TemporaryTable
Moreover, you can even use IDENT_CURRENT() as a DEFAULT CONSTRAINT, and it works instead of SCOPE_IDENTITY(), for example. Try this:
CREATE TABLE TemporaryTable(
Id int PRIMARY KEY IDENTITY(1,1) NOT NULL,
FkId int NOT NULL DEFAULT IDENT_CURRENT('[TemporaryTable]')
)
ALTER TABLE TemporaryTable
ADD CONSTRAINT [Fk_const] FOREIGN KEY (FkId) REFERENCES [TemporaryTable] ([Id])
INSERT INTO TemporaryTable (FkId) VALUES (DEFAULT)
INSERT INTO TemporaryTable (FkId) VALUES (DEFAULT)
INSERT INTO TemporaryTable (FkId) VALUES (DEFAULT)
INSERT INTO TemporaryTable (FkId) VALUES (DEFAULT)
UPDATE TemporaryTable
SET [FkId] = 3
WHERE Id = 2
SELECT * FROM TemporaryTable
DROP TABLE TemporaryTable
You can do both.
To insert explicit values into an "identity" column, you need to SET IDENTITY_INSERT ON for that table.
Note that you still can't duplicate values!
You can see the command here.
Be aware to SET IDENTITY_INSERT OFF afterwards.
To create a table with the same record, you simply need to:
create a new column;
insert it with a null value or something else;
update that column after the insert with the value of the identity column.
If you need to insert the value at the same time, you can use the @@IDENTITY global variable. It gives you the last identity value inserted, so I think you need to do @@IDENTITY + 1. In this case it can give wrong values, because @@IDENTITY is session-wide and not limited to your table; it changes whenever an insert happens in any other table with an identity column.
Another solution is to get the max id and add one :) and you get the needed value!
use this simple code
SCOPE_IDENTITY() + 1
I know the original post was a long while ago. But, the top-most solution is using a trigger to update the field after the record has been inserted and I think there is a more efficient method.
Using a trigger for this has always bugged me. It always has seemed like there must be a better way. That trigger basically makes every insert perform 2 writes to the database, (1) the insert, and then (2) the update of the 2nd int. The trigger is also doing a join back into the table. This is a bit of overhead to have especially for a large database and large tables. And I suspect that as the table gets larger, the overhead of this approach does also. Maybe I'm wrong on that. But, it just doesn't seem like a good solution on a large table.
I wrote a function fn_GetIdent that can be used for this. It's funny how simple it is, but it really was some work to figure out. I stumbled onto this eventually. It turns out that calling IDENT_CURRENT(@variableTableName) from within a function that is called from the INSERT statement's SET value assignment clause acts differently than if you call IDENT_CURRENT(@variableTableName) from the INSERT statement directly. And it makes it so you can get the new identity value for the record that you are inserting.
There is one caveat. When the identity is NULL (ie - an empty table with no records) it acts a little differently since the sys.identity_columns.last_value is NULL. So, you have to handle the very first record entered a little differently. I put code in the function to address that, and now it works.
This works because each call to the function, even within the same INSERT statement, is in its own new "scope" within the function. (I believe that is the correct explanation.) So, you can even insert multiple rows with one INSERT statement using this function. If you call IDENT_CURRENT(@variableTableName) from the INSERT statement directly, it will assign the same value for the newID in all rows. This is because the identity gets updated after the entire INSERT statement finishes processing (within the same scope). But calling IDENT_CURRENT(@variableTableName) from within a function causes each insert to update the identity value with each row entered. And it's all done in a function call from the INSERT statement itself. So, it's easy to implement once you have the function created.
This approach is a call to a function (from the INSERT statement) which does one read from sys.identity_columns.last_value (to see if it is NULL and if a record exists) within the function, then calls IDENT_CURRENT(@variableTableName) and then returns out of the function to the INSERT statement to insert the row. So, it is one small read (for each row INSERTED) and then the one write of the insert, which is less overhead than the trigger approach, I think. The trigger approach could be rather inefficient if you use it for all tables in a large database with large tables. I haven't done any performance analysis on it compared to the trigger. But, I think this would be a lot more efficient, especially on large tables.
I've been testing it out and this seems to work in all cases. I would welcome feedback as to whether anyone finds where this doesn't work or if there is any problem with this approach. Can anyone shoot holes in this approach? If so, please let me know. If not, could you vote it up? I think it is a better approach.
So, maybe being holed up due to COVID-19 out there, turned out to be productive for something. Thank you Microsoft for keeping me occupied. Anyone hiring? :) No, seriously, anyone hiring? OK, so now what am I going to do with myself now that I am done with this? :) Wishing everyone safe times out there.
Here is the code below. Wondering if this approach has any holes in it. Feedback welcomed.
IF OBJECT_ID('dbo.fn_GetIdent') IS NOT NULL
DROP FUNCTION dbo.fn_GetIdent;
GO
CREATE FUNCTION dbo.fn_GetIdent(@inTableName AS VARCHAR(MAX))
RETURNS Int
WITH EXECUTE AS CALLER
AS
BEGIN
DECLARE @tableHasIdentity AS Int
DECLARE @tableIdentitySeedValue AS Int
/*Check if the table's identity column is null - a special case*/
SELECT
@tableHasIdentity = CASE WHEN identity_columns.last_value IS NULL THEN 0 ELSE 1 END,
@tableIdentitySeedValue = CONVERT(int, identity_columns.seed_value)
FROM sys.tables
INNER JOIN sys.identity_columns
ON tables.object_id = identity_columns.object_id
WHERE identity_columns.is_identity = 1
AND tables.type = 'U'
AND tables.name = @inTableName;
DECLARE @ReturnValue AS Int;
SET @ReturnValue = CASE @tableHasIdentity WHEN 0 THEN @tableIdentitySeedValue
ELSE IDENT_CURRENT(@inTableName)
END;
RETURN (@ReturnValue);
END
GO
/* The function above only has to be created the one time to be used in the example below */
DECLARE @TableHasRows AS Bit
DROP TABLE IF EXISTS TestTable
CREATE TABLE TestTable (ID INT IDENTITY(1,1),
New INT,
Letter VARCHAR (1))
INSERT INTO TestTable (New, Letter)
VALUES (dbo.fn_GetIdent('TestTable'), 'H')
INSERT INTO TestTable (New, Letter)
VALUES (dbo.fn_GetIdent('TestTable'), 'e')
INSERT INTO TestTable (New, Letter)
VALUES (dbo.fn_GetIdent('TestTable'), 'l'),
(dbo.fn_GetIdent('TestTable'), 'l'),
(dbo.fn_GetIdent('TestTable'), 'o')
INSERT INTO TestTable (New, Letter)
VALUES (dbo.fn_GetIdent('TestTable'), ' '),
(dbo.fn_GetIdent('TestTable'), 'W'),
(dbo.fn_GetIdent('TestTable'), 'o'),
(dbo.fn_GetIdent('TestTable'), 'r'),
(dbo.fn_GetIdent('TestTable'), 'l'),
(dbo.fn_GetIdent('TestTable'), 'd')
INSERT INTO TestTable (New, Letter)
VALUES (dbo.fn_GetIdent('TestTable'), '!')
SELECT * FROM TestTable
/*
Result
ID New Letter
1 1 H
2 2 e
3 3 l
4 4 l
5 5 o
6 6
7 7 W
8 8 o
9 9 r
10 10 l
11 11 d
12 12 !
*/

Insert into a temporary table and update another table in one SQL query (Oracle)

Here's what I'm trying to do:
1) Insert into a temp table some values from an original table
INSERT INTO temp_table SELECT id FROM original WHERE status='t'
2) Update the original table
UPDATE original SET valid='t' WHERE status='t'
3) Select based on a join between the two tables
SELECT * FROM original, temp_table WHERE temp_table.id = original.id
Is there a way to combine steps 1 and 2?
You can combine the steps by doing the update in PL/SQL and using the RETURNING clause to get the updated ids into a PL/SQL table.
EDIT:
If you still need to do the final query, you can still use this method to insert into the temp_table; although depending on what that last query is for, there may be other ways of achieving what you want. To illustrate:
DECLARE
TYPE id_table_t IS TABLE OF original.id%TYPE INDEX BY PLS_INTEGER;
id_table id_table_t;
BEGIN
UPDATE original SET valid='t' WHERE status='t'
RETURNING id BULK COLLECT INTO id_table;
FORALL i IN 1..id_table.COUNT
INSERT INTO temp_table
VALUES (id_table(i));
END;
/
SELECT * FROM original, temp_table WHERE temp_table.id = original.id;
No, DML statements can not be mixed.
There's a MERGE statement, but it's only for operations on a single table.
Maybe create a TRIGGER which fires after inserting into temp_table and updates the original.
Create a cursor holding the values from insert and then loop through the cursor updating the table. No need to create temp table in the first place.
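As a sketch of that cursor idea in PL/SQL (assuming the column names from the question):
-- Sketch only: loop over the rows of interest and update them as you go,
-- keeping each id available for whatever the final step needs.
BEGIN
    FOR rec IN (SELECT id FROM original WHERE status = 't') LOOP
        UPDATE original
        SET    valid = 't'
        WHERE  id = rec.id;
    END LOOP;
END;
/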
You can combine steps 1 and 2 using a MERGE statement and DML error logging. Select twice as many rows, update half of them, and force the other half to fail and then be inserted into an error log that you can use as your temporary table.
The solution below assumes that you have a primary key constraint on ID, but there are other ways you could force a failure.
Although I think this is pretty cool, I would recommend you not use it. It looks very weird, has some strange issues (the inserts into TEMP_TABLE are auto-committed), and is probably very slow.
--Create ORIGINAL table for testing.
--Primary key will be intentionally violated later.
create table original (id number, status varchar2(10), valid varchar2(10)
,primary key (id));
--Create TEMP_TABLE as error log. There will be some extra columns generated.
begin
dbms_errlog.create_error_log(dml_table_name => 'ORIGINAL'
,err_log_table_name => 'TEMP_TABLE');
end;
/
--Test data
insert into original values(1, 't', null);
insert into original values(2, 't', null);
insert into original values(3, 's', null);
commit;
--Update rows in ORIGINAL and also insert those updated rows to TEMP_TABLE.
merge into original original1
using
(
--Duplicate the rows. Only choose rows with the relevant status.
select id, status, valid, rownumber
from original
cross join
(select 1 rownumber from dual union all select 2 rownumber from dual)
where status = 't'
) original2
on (original1.id = original2.id and original2.rownumber = 1)
--Only match half the rows, those with rownumber = 1.
when matched then update set valid = 't'
--The other half will be inserted. Inserting ID causes a PK error and will
--insert the data into the error table, TEMP_TABLE.
when not matched then insert(original1.id, original1.status, original1.valid)
values(original2.id, original2.status, original2.valid)
log errors into temp_table reject limit 999999999;
--Expected: ORIGINAL rows 1 and 2 have VALID = 't'.
--TEMP_TABLE has the two original values for ID 1 and 2.
select * from original;
select * from temp_table;