I have been trying to create triggers to reduce the amount of client-side code that needs to be written. I have written the following two T-SQL triggers, and they both seem to produce the same results; I'm just wondering which one is the more proper way to do it. I'm using SQL Server 2012, if that makes any difference.
i.e.
which one uses fewer resources
executes faster
is more secure against attacks
etc...
CREATE TRIGGER tr_ProductGroup_INSERT_GroupMap
ON [qmgmt].[dbo].[ProductGroup]
After INSERT
AS
BEGIN
    if (
        select count([inserted].[groupID])
        from [inserted]
        where [inserted].[groupID] = 1
    ) = 0
    begin
        insert into [qmgmt].[dbo].[GroupMap]([parentGroupID], [childGroupID])
        select 1, [inserted].[groupID]
        from [inserted]
    end
END
GO
OR
CREATE TRIGGER tr_ProductGroup_INSERT_GroupMap
ON [qmgmt].[dbo].[ProductGroup]
After INSERT
AS
BEGIN
    insert into [qmgmt].[dbo].[GroupMap]([parentGroupID], [childGroupID])
    select 1, [inserted].[groupID]
    from [inserted]
    where [inserted].[groupID] in
    (
        select [inserted].[groupID]
        from [inserted]
        where [inserted].[groupID] <> 1
    )
END
GO
UPDATE:
Based on some of the comments, here are the inserts I am using. The GroupMap table has the same results no matter which trigger I use.
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('root', 'The root of all groups')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('orphans', 'This is where the members of deleted groups go')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('SMGMT', 'Support Management')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('ST1', 'Support Tier 1')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('ST2', ' Support Tier 2')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('ST3', 'Support Tier 3')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('SaaSMGMT', 'Express Management')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('SaaSSup', 'Support Express')
Since a comment is a bit too small to put this example in, I'm going to put it in an answer. The reason people say your triggers are functionally different is that, although your tests insert rows one by one, a trigger also fires when you insert multiple rows into the table in a single operation. Based on your examples, you could try the following:
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription])
SELECT 'root', 'The root of all groups'
UNION ALL
SELECT 'orphans', 'This is where the members of deleted groups go'
UNION ALL
SELECT 'SMGMT', 'Support Management'
When running this query, the inserted table will hold 3 rows, and (depending on the data) the two trigger examples might produce different results.
Don't worry, this is a common misconception. The rule of thumb with SQL is to always think in record-sets, never in 'a single record with fields'.
As for your question (yes, I'm going for a real answer =)
I would suggest a variation on the second one.
CREATE TRIGGER tr_ProductGroup_INSERT_GroupMap
ON [qmgmt].[dbo].[ProductGroup]
After INSERT
AS
BEGIN
    insert into [qmgmt].[dbo].[GroupMap]([parentGroupID], [childGroupID])
    select 1, [inserted].[groupID]
    from [inserted]
    where [inserted].[groupID] <> 1
END
This way the server only needs to run over inserted once, decide which records to 'keep', and then store them right away in the destination table.
The question now is whether this does what you want it to do...
I've been searching around for answers to this question, and there's some conflicting or ambiguous information out there; I'm finding it hard to find a definitive answer.
My context: I'm in node.js using the 'mssql' npm package. My SQL server is Microsoft SQL Server 2014.
I have a record that may or may not exist in a table already -- if it exists I want to update it, otherwise I want to insert it. I'm not sure what the optimal SQL is, or if there's some kind of 'transaction' I should be running in mssql. I've found some options that seem good, but I'm not sure about any of them:
Option 1:
how to update if exists or insert
The problem with this is I'm not even sure this is valid syntax in MSSQL. I do like it, though, and it seems to support doing multiple rows at once, which I like.
INSERT INTO table (id, user, date, points)
VALUES (1, 1, '2017-03-03', 25),
(2, 1, '2017-03-04', 25),
(3, 2, '2017-03-03', 100),
(4, 2, '2017-03-04', 150)
ON DUPLICATE KEY UPDATE points = VALUES(points)
Option 2:
I don't know if there's any problem with this one, just not sure if it's optimal. It doesn't seem to support multiple rows at once.
update test set name='john' where id=3012
IF @@ROWCOUNT = 0
insert into test(name) values('john');
Option 3: Merge, https://dba.stackexchange.com/questions/89696/how-to-insert-or-update-using-single-query
Some people say this is a bit buggy or something? This also apparently supports multiple rows at once, which I like.
MERGE dbo.Test WITH (SERIALIZABLE) AS T
USING (VALUES (3012, 'john')) AS U (id, name)
ON U.id = T.id
WHEN MATCHED THEN
UPDATE SET T.name = U.name
WHEN NOT MATCHED THEN
INSERT (id, name)
VALUES (U.id, U.name);
Each of them has a different purpose, with its own pros and cons.
Option 1 is good for multi-row inserts/updates; note, however, that ON DUPLICATE KEY UPDATE is MySQL syntax and is not supported by SQL Server, and it only checks primary key/unique constraints.
Option 2 is good for small sets of data: single-record insertion/update. It is more like a script.
Option 3 is best for big queries, say, reading from one table and inserting/updating another accordingly. You can define which conditions must be satisfied for insertion and/or update; you are not limited to a primary key/unique constraint.
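For example, here is a minimal MERGE sketch along those lines (the dbo.TestStaging source table is an assumption, not part of the question) that matches on the key but only updates when a non-key column has actually changed:
MERGE dbo.Test WITH (HOLDLOCK) AS T
USING dbo.TestStaging AS S
    ON T.id = S.id
WHEN MATCHED AND T.name <> S.name THEN  -- extra condition beyond the key match
    UPDATE SET T.name = S.name
WHEN NOT MATCHED THEN
    INSERT (id, name)
    VALUES (S.id, S.name);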
If your system is highly concurrent and performance is important, you can try the following pattern if updates are more common than inserts:
BEGIN TRANSACTION;
UPDATE dbo.t WITH (UPDLOCK, SERIALIZABLE) SET val = @val WHERE [key] = @key;
IF ##ROWCOUNT = 0
BEGIN
INSERT dbo.t([key], val) VALUES(@key, @val);
END
COMMIT TRANSACTION;
Reference: https://sqlperformance.com/2020/09/locking/upsert-anti-pattern
Also read: https://michaeljswart.com/2017/07/sql-server-upsert-patterns-and-antipatterns/
If inserts are more common:
BEGIN TRY
INSERT INTO dbo.AccountDetails (Email, Etc) VALUES (@Email, @Etc);
END TRY
BEGIN CATCH
-- ignore duplicate key errors, throw the rest.
IF ERROR_NUMBER() IN (2601, 2627)
UPDATE dbo.AccountDetails
SET Etc = @Etc
WHERE Email = @Email;
END CATCH
I wouldn't use MERGE; while most of the bugs are apparently fixed, we have had major issues with it before in production.
EDIT ---
Yes, the above answers were for single rows. For multiple rows, you'd do something like this (the idea behind the locking is the same, though):
BEGIN TRANSACTION;
UPDATE t WITH (UPDLOCK, SERIALIZABLE)
SET val = tvp.val
FROM dbo.t AS t
INNER JOIN #tvp AS tvp
ON t.[key] = tvp.[key];
INSERT dbo.t([key], val)
SELECT [key], val FROM #tvp AS tvp
WHERE NOT EXISTS (SELECT 1 FROM dbo.t WHERE [key] = tvp.[key]);
COMMIT TRANSACTION;
Extending my comment here: there are known problems with MERGE in SQL Server; however, for what you're doing here you will likely be OK. Aaron Bertrand has an article on the subject, which you can find here: Use Caution with SQL Server's MERGE Statement.
An alternative for what you could do here would be an "UPSERT": UPDATE the existing rows, and then INSERT the ones that don't exist. This involves 2 separate statements, but it was the method used prior to MERGE:
UPDATE T
SET T.Name = U.Name
FROM dbo.Test T
JOIN (VALUES (3012, 'john')) AS U (id, name) ON T.id = U.id;
INSERT INTO dbo.Test (Name) --I'm assuming ID is an `IDENTITY` here
SELECT U.name
FROM (VALUES (3012, 'john')) AS U (id, name)
WHERE NOT EXISTS (SELECT 1
FROM dbo.Test T
WHERE T.ID = U.ID);
Note I have not declared any locking or transactions in this example, but you should in any implemented solution.
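As a rough sketch, the same UPSERT wrapped in a transaction with the UPDLOCK/SERIALIZABLE hints from the pattern shown earlier might look like this (same sample table and values as above; adjust for your schema):
BEGIN TRANSACTION;

UPDATE T WITH (UPDLOCK, SERIALIZABLE)  -- take the locks up front so the two statements behave as one unit
SET T.Name = U.Name
FROM dbo.Test T
JOIN (VALUES (3012, 'john')) AS U (id, name) ON T.id = U.id;

INSERT INTO dbo.Test (Name)
SELECT U.name
FROM (VALUES (3012, 'john')) AS U (id, name)
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Test T
                  WHERE T.ID = U.ID);

COMMIT TRANSACTION;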
I'm trying to combine matching records from a table into a single record of another table. I know this can be done with GROUP BY and SUM(), MAX(), etc. My difficulty is that the columns that are not part of the GROUP BY are varchars that I need to concatenate.
I'm using Sybase ASE 15, so I do not have a function like MySQL's group_concat or similar.
I tried using MERGE without luck; the target table ended up with the same number of records as the source table.
create table #source_t(account varchar(10), event varchar(10))
Insert into #source_t(account, event) values ('account1','event 1')
Insert into #source_t(account, event) values ('account1','event 2')
Insert into #source_t(account, event) values ('account1','event 3')
create table #target(account varchar(10), event_list varchar(2048))
merge into #target as t
using #source_t as s
on t.account = s.account
when matched then update set event_list = t.event_list + ' | ' + s.event
when not matched then insert(account, event_list) values (s.account, s.event)
select * from #target
drop table #target
drop table #source_t
Considering the above tables, I wanted to have one record per account, with all the events of the account concatenated in the second column.
account, event_list
'account1', 'event 1 | event 2 | event 3'
However, all I've got is the same records as #source.
It seems to me that the match in merge is attempted against the "state" of the table at the beginning of statement execution, so the when matched never executes. Is there a way of telling the DBMS to match against the updated target table?
I managed to obtain the results I needed by using a cursor, so the merge statement is executed n times, n being the number of records in #source, thus the merge actually executes the when matched part.
The problem with it is the performance: removing duplicates this way takes about 5 minutes to combine 63K records into 42K.
Is there a faster way of achieving this?
There's a little-known (poorly documented?) aspect of the UPDATE statement when using it to update a @variable, which allows you to accumulate/concatenate values in the @variable as part of a set-based UPDATE operation.
This is easier to 'explain' with an example:
create table source
(account varchar(10)
,event varchar(10)
)
go
insert source values ('account1','event 1')
insert source values ('account1','event 2')
insert source values ('account1','event 3')
insert source values ('account2','event 1')
insert source values ('account3','event 1')
insert source values ('account3','event 2')
go
declare @account varchar(10),
        @event_list varchar(40) -- increase the size to your expected max length

select @account = 'account1'

-- allow our UPDATE statement to cycle through the events for 'account1',
-- appending each successive event to @event_list

update source
set @event_list = @event_list +
                  case when @event_list is not NULL then ' | ' end +
                  event
from source
where account = @account

-- we'll display as a single-row result set; we could also use a 'print' statement ...
-- just depends on what format the calling process is looking for

select @account as account,
       @event_list as event_list
go
account event_list
---------- ----------------------------------------
account1 event 1 | event 2 | event 3
PRO:
single UPDATE statement to process a single account value
CON:
still need a cursor to process a series of account values
if your desired final output is a single result set, then you'll need to store intermediate results (eg, @account and @event_list) in a (temp) table, then run a final SELECT against this (temp) table to produce the desired result set
while you're not actually updating the physical table, you may run into problems if you don't have access to 'update' the table
NOTE: You could put the cursor/UPDATE logic in a stored proc, call the proc through a proxy table, and this would allow the output from a series of 'select @account, @event_list' statements to be returned to the calling process as a single result set ... but that's a whole 'nother topic on a (somewhat) convoluted coding method.
For your process you'll need a cursor to loop through your unique set of account values, but you'll be able to eliminate the cursor overhead for looping through the list of events for a given account. Net result is that you should see some improvement in the time it takes to run your process.
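A rough sketch of combining the outer cursor with the set-based UPDATE trick could look like this (the #results temp table, the cursor name, and the variable sizes are assumptions, not tested against your schema):
create table #results(account varchar(10), event_list varchar(2048))

declare @account varchar(10), @event_list varchar(2048)

declare acct_cur cursor for
    select distinct account from source
    for read only

open acct_cur
fetch acct_cur into @account

while @@sqlstatus = 0
begin
    select @event_list = null

    -- set-based accumulation of all events for this one account
    update source
    set @event_list = @event_list +
                      case when @event_list is not NULL then ' | ' end +
                      event
    from source
    where account = @account

    insert #results(account, event_list) values (@account, @event_list)

    fetch acct_cur into @account
end

close acct_cur
deallocate cursor acct_cur

select * from #results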
After applying the given suggestions, and also speaking with our DBA, the winning idea was to ditch the merge and use conditional logic within the loop.
Adding begin/commit seemed to reduce execution time by 1.5 to 3 seconds.
Adding a primary key to the target table gave the biggest reduction, cutting execution time to about 13 seconds.
Converting the merge to conditional logic was the best option in this case, achieving the result in about 8 seconds.
When using conditionals, the primary key on the target table is slightly detrimental (around 1 sec), but having it drastically reduces time afterwards, since this table is only a preliminary step for a big join. (That is, the result of this record-merging is later used in a join with 11+ tables.) So I kept the P.K.
Since there seems to be no solution without a cursor loop, I used the conditionals to merge the values using variables and issuing only the inserts to the target table, thus eliminating the need to seek a record to update or to check its existence.
Here is a simplified example.
create table #source_t(account varchar(10), event varchar(10));
Insert into #source_t(account, event) values ('account1','event 1');
Insert into #source_t(account, event) values ('account1','event 2');
Insert into #source_t(account, event) values ('account1','event 3');
Insert into #source_t(account, event) values ('account2','came');
Insert into #source_t(account, event) values ('account2','saw');
Insert into #source_t(account, event) values ('account2','conquered');
create table #target(
account varchar(10), -- make primary key if the result is to be joined afterwards.
event_list varchar(2048)
);
declare ciclo cursor for
select account, event
from #source_t c
order by account --,...
for read only;
declare @account varchar(10), @event varchar(40), @last_account varchar(10), @event_list varchar(1000)

open ciclo

fetch ciclo into @account, @event

set @last_account = @account, @event_list = null

begin tran

while @@sqlstatus = 0 BEGIN

    if @last_account <> @account begin -- if current record's account is different from previous, insert into table the concatenated event string
        insert into #target(account, event_list) values (@last_account, @event_list)
        set @event_list = null -- Empty the string for the next account
    end

    set @last_account = @account -- Copy current account to the variable that holds the previous one

    set @event_list = case @event_list when null then @event else @event_list + ' | ' + @event end -- Concatenate events with separator

    fetch ciclo into @account, @event

END

-- after the last fetch, @@sqlstatus changes to <> 0, the values remain in the variables but the loop ends, leaving the last record unprocessed.
insert into #target(account, event_list) values (@last_account, @event_list)
commit tran
close ciclo
deallocate cursor ciclo;
select * from #target;
drop table #target;
drop table #source_t;
Result:
account |event_list |
--------|---------------------------|
account1|event 1 | event 2 | event 3|
account2|saw | came | conquered |
This code worked fast enough in my real use case. However, it could be further optimized by filtering the source table to hold only the values that would be necessary for the join afterward. For that matter, I saved the final joined result set (minus the join with #target) in another temp table, leaving some columns blank. Then #source_t was filled using only the accounts present in the result set, processed into #target, and finally #target was used to update the final result. With all that, execution time in the production environment dropped to around 8 seconds (including all steps).
UDF solutions have solved this kind of problem for me before, but in ASE 15 they have to be table-specific, and you need to write one function per column. Also, that is only possible in the development environment; it is not authorized for production because of read-only privileges.
In conclusion, a cursor loop combined with a MERGE statement is a simple solution for combining records by concatenating certain values. A primary key or an index including the columns used for the match is required to boost performance.
Conditional logic results in even better performance, but comes at the cost of more complex code (the more you code, the more error-prone it gets).
I have the 2 insert statements below, which I exported from SQL Developer in the dev environment. I deleted those records from dev afterwards. Now I want to run these insert statements again in dev, because they are my backup, but I am getting an error because the virtual column ORD_DAYID cannot be used inside an insert script. So I want to exclude this column and its respective values, using a replace function or some tool I don't know of. I didn't previously know that this table has a virtual column. I would like to know whether there is any tool or function where I can select ORD_DAYID together with its respective values, delete those, and then be able to run these insert statements again in the test environment.
P.S. I have mentioned only 2 sample insert statements, but there are 1000 insert statements, so it is very difficult to manually delete ORD_DAYID and its respective values from these insert statements.
Insert into test_ord (IS_GRP,ORD_DAYID,REF_CAMPA_CODE) values (1,20150813,null);
Insert into test_ord (IS_GRP,ORD_DAYID,REF_CAMPA_CODE) values (1,20150828,null);
You can edit your INSERT statements using regular expressions, in an editor such as Notepad++.
So to change this ...
Insert into test_ord (IS_GRP,ORD_DAYID,REF_CAMPA_CODE) values (1,20150813,null);
... into this ...
Insert into test_ord (IS_GRP,REF_CAMPA_CODE) values (1,null);
You need a search pattern of:
Insert into test_ord \(IS_GRP,ORD_DAYID,REF_CAMPA_CODE\) values \(([0-9]+),([0-9]+),null\);
and a replace pattern of:
Insert into test_ord \(IS_GRP,REF_CAMPA_CODE\) values \(\1,null\);
Obviously you will need to refine the search pattern to cater for all the different values of IS_GRP and REF_CAMPA_CODE in your 1000 statements.
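For example, a slightly more general pair of patterns (a sketch, assuming none of the values themselves contain commas) would capture whatever appears in the first and last value positions:
Search: Insert into test_ord \(IS_GRP,ORD_DAYID,REF_CAMPA_CODE\) values \(([^,]+),[^,]+,(.+)\);
Replace: Insert into test_ord \(IS_GRP,REF_CAMPA_CODE\) values \(\1,\2\);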
" is there any way where we can count the place of column and value and replace it with null"
No. The snag with virtual columns is that they cannot be referenced in INSERT or UPDATE statements. So you need to totally exclude it from the projection.
"i am not able to find those option in notepad++"
Really? Search and replace is not an exotic option:
From the menu: Search > Find > Replace [tab] (or [ctrl]+h)
As the search mode select the regular expression radio button
Create an auxiliary table without virtual columns.
Restore your data to this auxiliary table.
Transfer the data from the auxiliary table to the original table.
-- this is your table
create table mytab(A number, b number, s as (a+b));
--fill it with data
insert into mytab(a,b) values(1,1);
insert into mytab(a,b) values(1,2);
insert into mytab(a,b) values(2,1);
insert into mytab(a,b) values(2,2);
commit;
-- check its content
select * from mytab;
-- now delete the rows
delete from mytab;
commit;
-- restore your data
--------------------
-- create a table similar to the table you want to restore,
-- but with the virtual columns as regular columns.
create table ctas as
select * from mytab where 1!=0;
-- insert your backup data
insert into ctas(a,b,s) values(1,1,2);
insert into ctas(a,b,s) values(1,2,3);
insert into ctas(a,b,s) values(2,1,3);
insert into ctas(a,b,s) values(2,2,4);
commit;
-- transfer the data to the table you want to restore
insert into mytab(a,b) select a,b from ctas;
Trigger with Insert into (select * ...)
I'm trying this:
INSERT INTO T_USERS SELECT * FROM USERS WHERE ID = :new.ID;
It's not working...
This works:
INSERT INTO T_USERS(ID) VALUES(:new.ID);
Trigger
create or replace trigger "TRI_USER"
AFTER
insert on "USER"
for each row
begin
INSERT INTO T_USER SELECT * FROM USER WHERE ID = :new.ID;
end;
This works:
INSERT INTO T_USERS(ID) VALUES(:new.ID);
So if it fits your needs, then try this:
INSERT INTO T_USER(ID) SELECT ID FROM USER WHERE ID = :new.ID;
If you want to select one or more rows from another table, you have to use this syntax:
insert into <table>(<col1>,<col2>,...,<coln>)
select <col1>,<col2>,...,<coln>
from ...;
Perhaps you could post the actual error you are experiencing?
Also, I suggest that you rethink your approach. Triggers that contain DML introduce all sorts of issues. Keep in mind that Oracle Database may need to restart a trigger, and could therefore execute your DML multiple times for a particular row.
Instead, put all your related DML statements together in a PL/SQL procedure and invoke that.
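A minimal sketch of that idea (the procedure name and the users/t_users tables and id column are placeholders, not taken from your schema):
CREATE OR REPLACE PROCEDURE add_user (p_id IN users.id%TYPE) AS
BEGIN
    -- keep the two related inserts together instead of hiding one in a trigger
    INSERT INTO users (id) VALUES (p_id);
    INSERT INTO t_users (id) VALUES (p_id);
END add_user;
/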
It's not about your trigger but about the INSERT statement.
Here the INSERT statement works as below:
INSERT INTO <TABLE>(COL1,COL2,COL3) VALUES (VAL1,VAL2,VAL3); -- if inserting values one at a time
INSERT INTO <TABLE>(COL1,COL2,COL3) -- if inserting multiple values at a time
SELECT VAL1,VAL2,VAL3 FROM <TABLE2>;
The number of values should match the number of columns mentioned.
Hope this helps you understand.
I was asked whether you could have an INSERT statement with an ID field that is an "identity" column, and whether the assigned value could also be inserted into another field in the same record, in the same INSERT statement.
Is this possible (SQL Server 2008r2)?
Thanks.
You cannot really do this, because the actual value that will be used for the IDENTITY column is only fixed and set when the INSERT has completed.
You could, however, use e.g. a trigger:
CREATE TRIGGER trg_YourTableInsertID ON dbo.YourTable
AFTER INSERT
AS
UPDATE dbo.YourTable
SET dbo.YourTable.OtherID = i.ID
FROM dbo.YourTable t2
INNER JOIN INSERTED i ON i.ID = t2.ID
This would fire right after any rows have been inserted, and would set the OtherID column to the values of the IDENTITY columns for the inserted rows. But it's strictly speaking not within the same statement - it's just after your original statement.
You can do this by having a computed column in your table:
DECLARE @QQ TABLE (ID INT IDENTITY(1,1), Computed AS ID PERSISTED, Letter VARCHAR (1))
INSERT INTO @QQ (Letter)
VALUES ('h'),
('e'),
('l'),
('l'),
('o')
SELECT *
FROM @QQ
1 1 h
2 2 e
3 3 l
4 4 l
5 5 o
About the checked answer:
You cannot really do this - because the actual value that will be used
for the IDENTITY column really only is fixed and set when the INSERT
has completed.
marc_s, I suppose you are not actually right. Yes, he can! ))
The way to the solution is IDENT_CURRENT():
CREATE TABLE TemporaryTable(
Id int PRIMARY KEY IDENTITY(1,1) NOT NULL,
FkId int NOT NULL
)
ALTER TABLE TemporaryTable
ADD CONSTRAINT [Fk_const] FOREIGN KEY (FkId) REFERENCES [TemporaryTable] ([Id])
INSERT INTO TemporaryTable (FkId) VALUES (IDENT_CURRENT('[TemporaryTable]'))
INSERT INTO TemporaryTable (FkId) VALUES (IDENT_CURRENT('[TemporaryTable]'))
INSERT INTO TemporaryTable (FkId) VALUES (IDENT_CURRENT('[TemporaryTable]'))
INSERT INTO TemporaryTable (FkId) VALUES (IDENT_CURRENT('[TemporaryTable]'))
UPDATE TemporaryTable
SET [FkId] = 3
WHERE Id = 2
SELECT * FROM TemporaryTable
DROP TABLE TemporaryTable
Moreover, you can even use IDENT_CURRENT() as a DEFAULT constraint, and it works where SCOPE_IDENTITY(), for example, does not. Try this:
CREATE TABLE TemporaryTable(
Id int PRIMARY KEY IDENTITY(1,1) NOT NULL,
FkId int NOT NULL DEFAULT IDENT_CURRENT('[TemporaryTable]')
)
ALTER TABLE TemporaryTable
ADD CONSTRAINT [Fk_const] FOREIGN KEY (FkId) REFERENCES [TemporaryTable] ([Id])
INSERT INTO TemporaryTable (FkId) VALUES (DEFAULT)
INSERT INTO TemporaryTable (FkId) VALUES (DEFAULT)
INSERT INTO TemporaryTable (FkId) VALUES (DEFAULT)
INSERT INTO TemporaryTable (FkId) VALUES (DEFAULT)
UPDATE TemporaryTable
SET [FkId] = 3
WHERE Id = 2
SELECT * FROM TemporaryTable
DROP TABLE TemporaryTable
You can do both.
To insert explicit values into an "identity" column, you need to SET IDENTITY_INSERT ON for the table.
Note that you still can't duplicate values!
You can see the command here.
Be aware to set IDENTITY_INSERT OFF afterwards.
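A minimal sketch (table and column names are hypothetical):
SET IDENTITY_INSERT dbo.YourTable ON;

INSERT INTO dbo.YourTable (ID, OtherID)  -- the identity column must be listed explicitly
VALUES (42, 42);

SET IDENTITY_INSERT dbo.YourTable OFF;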
To get the identity value into another column of the same record, you simply need to:
create the new column;
insert it with a null value or something else;
update that column after the insert with the value of the identity column.
If you need to insert the value at the same time, you can use the @@IDENTITY global variable. It'll give you the last identity value inserted, so I think you need to do @@IDENTITY + 1. In this case it can give wrong values, because @@IDENTITY spans all tables, so it also counts inserts that occur in another table with an identity column.
Another solution is to get the max id and add one :) and you get the needed value!
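A rough sketch of the update-after-insert idea from the list above, using SCOPE_IDENTITY() instead of @@IDENTITY so inserts into other tables (e.g. from triggers) can't interfere (table and column names are hypothetical):
DECLARE @NewId INT;

INSERT INTO dbo.YourTable (OtherID) VALUES (NULL);  -- placeholder for the copy of the identity
SET @NewId = SCOPE_IDENTITY();                      -- identity value generated by the insert above

UPDATE dbo.YourTable
SET OtherID = @NewId
WHERE ID = @NewId;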
Use this simple code:
SCOPE_IDENTITY() + 1
I know the original post was a long while ago. But, the top-most solution is using a trigger to update the field after the record has been inserted and I think there is a more efficient method.
Using a trigger for this has always bugged me. It always has seemed like there must be a better way. That trigger basically makes every insert perform 2 writes to the database, (1) the insert, and then (2) the update of the 2nd int. The trigger is also doing a join back into the table. This is a bit of overhead to have especially for a large database and large tables. And I suspect that as the table gets larger, the overhead of this approach does also. Maybe I'm wrong on that. But, it just doesn't seem like a good solution on a large table.
I wrote a function fn_GetIdent that can be used for this. It's funny how simple it is, but it really was some work to figure out. I stumbled onto this eventually. It turns out that calling IDENT_CURRENT(@variableTableName) from within a function that is called from the INSERT statement's value-assignment clause acts differently than if you call IDENT_CURRENT(@variableTableName) from the INSERT statement directly. And it makes it so you can get the new identity value for the record that you are inserting.
There is one caveat. When the identity is NULL (ie - an empty table with no records) it acts a little differently since the sys.identity_columns.last_value is NULL. So, you have to handle the very first record entered a little differently. I put code in the function to address that, and now it works.
This works because each call to the function, even within the same INSERT statement, is in its own new "scope" within the function. (I believe that is the correct explanation.) So, you can even insert multiple rows with one INSERT statement using this function. If you call IDENT_CURRENT(@variableTableName) from the INSERT statement directly, it will assign the same value for the newID in all rows. This is because the identity gets updated after the entire INSERT statement finishes processing (within the same scope). But calling IDENT_CURRENT(@variableTableName) from within a function causes each insert to update the identity value with each row entered. And it's all done in a function call from the INSERT statement itself, so it's easy to implement once you have the function created.
This approach is a call to a function (from the INSERT statement) which does one read of sys.identity_columns.last_value (to see if it is NULL and if a record exists) within the function, then calls IDENT_CURRENT(@variableTableName), and then returns out of the function to the INSERT statement to insert the row. So, it is one small read (for each row inserted) and then the one write of the insert, which is less overhead than the trigger approach, I think. The trigger approach could be rather inefficient if you use it for all tables in a large database with large tables. I haven't done any performance analysis on it compared to the trigger, but I think this would be a lot more efficient, especially on large tables.
I've been testing it out and this seems to work in all cases. I would welcome feedback as to whether anyone finds a case where this doesn't work or any problem with this approach. Can anyone shoot holes in this approach? If so, please let me know. If not, could you vote it up? I think it is a better approach.
So, maybe being holed up due to COVID-19 out there, turned out to be productive for something. Thank you Microsoft for keeping me occupied. Anyone hiring? :) No, seriously, anyone hiring? OK, so now what am I going to do with myself now that I am done with this? :) Wishing everyone safe times out there.
Here is the code below. Wondering if this approach has any holes in it. Feedback welcomed.
IF OBJECT_ID('dbo.fn_GetIdent') IS NOT NULL
DROP FUNCTION dbo.fn_GetIdent;
GO
CREATE FUNCTION dbo.fn_GetIdent(@inTableName AS VARCHAR(MAX))
RETURNS Int
WITH EXECUTE AS CALLER
AS
BEGIN
DECLARE @tableHasIdentity AS Int
DECLARE @tableIdentitySeedValue AS Int
/*Check if the tables identity column is null - a special case*/
SELECT
@tableHasIdentity = CASE WHEN identity_columns.last_value IS NULL THEN 0 ELSE 1 END,
@tableIdentitySeedValue = CONVERT(int, identity_columns.seed_value)
FROM sys.tables
INNER JOIN sys.identity_columns
ON tables.object_id = identity_columns.object_id
WHERE identity_columns.is_identity = 1
AND tables.type = 'U'
AND tables.name = @inTableName;
DECLARE @ReturnValue AS Int;

SET @ReturnValue = CASE @tableHasIdentity WHEN 0 THEN @tableIdentitySeedValue
                        ELSE IDENT_CURRENT(@inTableName)
                   END;

RETURN (@ReturnValue);
END
GO
/* The function above only has to be created the one time to be used in the example below */
DECLARE @TableHasRows AS Bit
DROP TABLE IF EXISTS TestTable
CREATE TABLE TestTable (ID INT IDENTITY(1,1),
New INT,
Letter VARCHAR (1))
INSERT INTO TestTable (New, Letter)
VALUES (dbo.fn_GetIdent('TestTable'), 'H')
INSERT INTO TestTable (New, Letter)
VALUES (dbo.fn_GetIdent('TestTable'), 'e')
INSERT INTO TestTable (New, Letter)
VALUES (dbo.fn_GetIdent('TestTable'), 'l'),
(dbo.fn_GetIdent('TestTable'), 'l'),
(dbo.fn_GetIdent('TestTable'), 'o')
INSERT INTO TestTable (New, Letter)
VALUES (dbo.fn_GetIdent('TestTable'), ' '),
(dbo.fn_GetIdent('TestTable'), 'W'),
(dbo.fn_GetIdent('TestTable'), 'o'),
(dbo.fn_GetIdent('TestTable'), 'r'),
(dbo.fn_GetIdent('TestTable'), 'l'),
(dbo.fn_GetIdent('TestTable'), 'd')
INSERT INTO TestTable (New, Letter)
VALUES (dbo.fn_GetIdent('TestTable'), '!')
SELECT * FROM TestTable
/*
Result
ID New Letter
1 1 H
2 2 e
3 3 l
4 4 l
5 5 o
6 6
7 7 W
8 8 o
9 9 r
10 10 l
11 11 d
12 12 !
*/