SQL cursor performance/Alternative? - sql

I currently have two tables, Table1 and Table2, structured as below. Table1 contains multiple rows per FK value, and the FK column is a foreign key to Table2's ID column. Table2 holds one row per ID, carrying the most recent END_DTTM from Table1 (the row with the highest ID for that FK).
Table 1

ID  FK  END_DTTM
1   1   01/01/2000
2   1   01/01/2005
3   1   01/01/2012
4   1   01/01/2100
5   2   01/01/1999
6   2   01/01/2100
7   3   01/01/2100

Table 2

ID  END_DTTM
1   01/01/2100
2   01/01/2100
3   01/01/2100
The business requirement is to track every update in Table2 so that point-in-time data can be retrieved. To achieve this I am using SQL Server 2016 temporal tables, where every update to Table2 automatically creates a version in the history table.
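For readers unfamiliar with the feature, a system-versioned (temporal) table of the kind described is declared roughly like this; the column and history-table names here are illustrative, not from the original schema:

```sql
-- Sketch of a system-versioned table: every UPDATE is versioned automatically
CREATE TABLE dbo.Table2
(
    ID INT NOT NULL PRIMARY KEY,
    END_DTTM DATETIME2 NOT NULL,
    -- period columns required for system versioning
    SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEndTime   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Table2_History));
```

With this in place, any UPDATE against dbo.Table2 moves the old row version into dbo.Table2_History, which is what the question relies on.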
For the insert/update process I am currently using a cursor, which is terribly slow: it processes around 71,000 rows in 30 minutes, and the table has around 60 million rows! The cursor query is as follows:
BEGIN
BEGIN TRY
BEGIN TRANSACTION;
DECLARE @ID int;
DECLARE @FK int;
DECLARE @END_DTTM datetime2;
DECLARE @SelectCursor CURSOR;
SET @SelectCursor = CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY FOR
SELECT [ID], [FK], [END_DTTM] FROM TABLE1 ORDER BY FK, ID;
OPEN @SelectCursor;
FETCH NEXT FROM @SelectCursor INTO @ID, @FK, @END_DTTM;
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE TABLE2
SET END_DTTM = @END_DTTM
WHERE ID = @FK;
IF @@ROWCOUNT = 0
BEGIN
INSERT Table2
(
ID, END_DTTM
)
VALUES (
@FK, @END_DTTM
)
END
FETCH NEXT FROM @SelectCursor INTO @ID, @FK, @END_DTTM;
END
CLOSE @SelectCursor;
DEALLOCATE @SelectCursor;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
DECLARE @ErrorNumber INT = ERROR_NUMBER();
DECLARE @ErrorLine INT = ERROR_LINE();
DECLARE @ErrorMessage NVARCHAR(4000) = ERROR_MESSAGE();
DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
DECLARE @ErrorState INT = ERROR_STATE();
PRINT 'Actual error number: ' + CAST(@ErrorNumber AS VARCHAR(10));
PRINT 'Actual line number: ' + CAST(@ErrorLine AS VARCHAR(10));
PRINT 'Actual message: ' + @ErrorMessage;
PRINT 'Actual severity: ' + CAST(@ErrorSeverity AS VARCHAR(10));
PRINT 'Actual state: ' + CAST(@ErrorState AS VARCHAR(10));
INSERT INTO ERROR_LOG
(
SOURCE_PRIMARY_KEY
,ERROR_CODE
,ERROR_COLUMN
,ERROR_DESCRIPTION
)
VALUES
(
NULL,
@ErrorNumber,
@ErrorState,
@ErrorMessage
);
THROW;
-- RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH
END;
I tried using a CTE, but I didn't see any performance gain with it; in fact it was a tad slower than the cursor itself.
Is there a better way to achieve the above with a set-based operation that still processes every row from Table1 and updates Table2, so that the temporal table picks up the updates and tracks the changes?

I am not sure how you are running the update SQL, but I'll describe the process I use to track changes in Oracle.
I set up a trigger on the table I want to audit. I create another table with the same columns, one set prefixed OLD_ and the other prefixed NEW_. In an Oracle trigger you can reference both the new row and the old row, so I insert into the audit table the old and new values, the DML action type, and the timestamp. Additionally, I add the database user and, if possible, the application user that requested the change.
Neither on the current RAC cluster nor on our ancient 9i AIX server have I ever noticed any performance degradation.
Additionally, if the transaction is rolled back, the audit record is not inserted, because the trigger runs inside the transaction.
Don't let people tell you NOT to use SQL triggers. While you don't want to do "crazy" things in triggers (like running queries or calling web services), this is the perfect application for one. (I normally use them to add a last-updated date to a row; I don't trust the application layer for accurate information.)
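In SQL Server the same idea looks roughly like the sketch below; the audit table and its columns (Table2_Audit, CHANGED_BY, etc.) are made up for illustration, and the trigger reads the whole inserted/deleted pseudo-tables so it stays correct for multi-row updates:

```sql
-- Hypothetical audit trigger: one audit row per changed row, old and new values side by side
CREATE TRIGGER trg_Table2_Audit ON dbo.Table2
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Table2_Audit
        (ID, OLD_END_DTTM, NEW_END_DTTM, DML_ACTION, CHANGED_AT, CHANGED_BY)
    SELECT i.ID, d.END_DTTM, i.END_DTTM, 'U', SYSUTCDATETIME(), SUSER_SNAME()
    FROM inserted i
    JOIN deleted d ON d.ID = i.ID;  -- set-based join: handles multi-row UPDATEs
END;
```

Like the Oracle version, this fires inside the caller's transaction, so a rollback also discards the audit rows.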

Oh, is that all you want?
insert into table2(ID, END_DTTM)
select fk, max(END_DTTM)
from table1 t1
group by fk;
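If the intent is the full set-based upsert the question describes (update existing IDs, insert missing ones, using the latest Table1 row per FK), a MERGE against the tables above might look like this sketch; it replicates the cursor's "last row per FK wins" behaviour by ordering on ID:

```sql
-- Sketch: one set-based pass; latest END_DTTM per FK (highest ID wins, as in the cursor's ORDER BY)
;WITH latest AS (
    SELECT FK, END_DTTM,
           ROW_NUMBER() OVER (PARTITION BY FK ORDER BY ID DESC) AS rn
    FROM Table1
)
MERGE Table2 AS t
USING (SELECT FK, END_DTTM FROM latest WHERE rn = 1) AS s
    ON t.ID = s.FK
WHEN MATCHED THEN
    UPDATE SET END_DTTM = s.END_DTTM
WHEN NOT MATCHED THEN
    INSERT (ID, END_DTTM) VALUES (s.FK, s.END_DTTM);
```

Because the MERGE touches each Table2 row at most once, the temporal table records exactly one new version per changed row.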

Related

MS SQL trigger not inserting records for bulk insert

An external application is inserting more than 40K records into a SQL Azure table.
I have a trigger to process the required rows, matching on the unique column value for each distinct record.
Whenever 40K+ records are inserted, the trigger does not process all of the records; it only picks up one or two of them.
In the trigger, how can I get distinct column values and apply an ORDER BY?
Inserting into temp tables inserts fewer columns, and seemingly at random.
How can I do batch processing from the trigger for the bulk insert?
/****** Object: Trigger [dbo].[PriceStagingInsertTrigger] Script Date: 29/09/2020 13:46:24 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[PriceStagingInsertTrigger] ON [dbo].[SalesPriceStaging]
AFTER INSERT
AS DECLARE @ItemNumber NVARCHAR(20),
@applicablefromdate datetime,
@partycodetype int
SELECT @ItemNumber = ins.ITEMNUMBER FROM INSERTED ins;
SELECT @applicablefromdate = ins.PRICEAPPLICABLEFROMDATE FROM INSERTED ins;
SELECT @partycodetype = ins.PARTYCODETYPE FROM INSERTED ins;
SELECT * INTO #temptables FROM inserted
EXEC dbo.spSalesPriceStaging @ItemNumber, @applicablefromdate, @partycodetype
PRINT 'Stored procedure spSalesPriceStaging executed and completed'
The (bad) solution is:
ALTER TRIGGER [dbo].[PriceStagingInsertTrigger]
ON [dbo].[SalesPriceStaging]
AFTER INSERT
AS
SET NOCOUNT ON;
DECLARE @ItemNumber NVARCHAR(20),
@applicablefromdate datetime,
@partycodetype int;
DECLARE C CURSOR
FOR
SELECT ITEMNUMBER, PRICEAPPLICABLEFROMDATE, PARTYCODETYPE
FROM inserted;
OPEN C;
FETCH C INTO @ItemNumber, @applicablefromdate, @partycodetype;
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC dbo.spSalesPriceStaging @ItemNumber, @applicablefromdate, @partycodetype;
FETCH C INTO @ItemNumber, @applicablefromdate, @partycodetype;
END;
CLOSE C;
DEALLOCATE C;
GO
As they say, triggers in SQL Server have set-based logic and fire only once per statement, even if that statement inserts 3657435435143213213 rows.
The presence of a scalar variable in trigger code is generally a sign of bad design.
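A properly set-based version would pass every inserted row through at once. The sketch below assumes spSalesPriceStaging can be rewritten to accept a table-valued parameter; the TVP type and the bulk procedure name are hypothetical:

```sql
-- Sketch: process all inserted rows in one call instead of one scalar triple
ALTER TRIGGER [dbo].[PriceStagingInsertTrigger]
ON [dbo].[SalesPriceStaging]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @rows dbo.SalesPriceRowType;  -- hypothetical table type matching the three columns
    INSERT INTO @rows (ITEMNUMBER, PRICEAPPLICABLEFROMDATE, PARTYCODETYPE)
    SELECT ITEMNUMBER, PRICEAPPLICABLEFROMDATE, PARTYCODETYPE
    FROM inserted;                        -- the whole batch, however many rows
    EXEC dbo.spSalesPriceStaging_Bulk @rows;  -- hypothetical set-based rewrite of the proc
END;
```

The key point is that `inserted` already contains all 40K rows; the trigger only misbehaves when the code assumes it holds one.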

Insert Large data row by row in multiple reference tables

I went through a lot of posts on SO; however, they do not fit my situation.
We want to store a large dataset on SQL Server 2017 across multiple reference tables.
We have tried a cursor and it works fine; however, we are concerned about the performance of loading large data (1+ million rows).
Example:
T_Bulk is the input table, T_Bulk_orignal is the destination table, and T_Bulk_reference is a reference table for T_Bulk_orignal.
create table T_Bulk
(
Id uniqueidentifier,
ElementType nvarchar(max),
[Description] nvarchar(max)
)
create table T_Bulk_orignal
(
Id uniqueidentifier,
ElementType nvarchar(max),
[Description] nvarchar(max)
)
create table T_Bulk_reference
(
Id uniqueidentifier,
Description2 nvarchar(max)
)
create proc UseCursor
(
@udtT_Bulk as dbo.udt_T_Bulk READONLY
)
as
begin
DECLARE @Id uniqueidentifier, @ElementType varchar(500), @Description varchar(500), @Description2 varchar(500)
DECLARE MY_CURSOR CURSOR
LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
SELECT Id, ElementType, [Description]
FROM dbo.T_BULK
OPEN MY_CURSOR
FETCH NEXT FROM MY_CURSOR INTO @Id, @ElementType, @Description
WHILE @@FETCH_STATUS = 0
BEGIN
BEGIN TRANSACTION Trans1
BEGIN TRY
IF EXISTS (select Id from T_Bulk_orignal where ElementType = @ElementType and Description = @Description)
select @Id = Id from T_Bulk_orignal where ElementType = @ElementType and Description = @Description
ELSE
BEGIN
insert T_Bulk_orignal (Id, ElementType, Description) values (@Id, @ElementType, @Description)
END
INSERT T_Bulk_reference (Id, Description2)
SELECT Id, Description2
FROM (select @Id as Id, @Description2 as Description2) F
WHERE NOT EXISTS (SELECT * FROM T_Bulk_reference C WHERE C.Id = F.Id and C.Description2 = F.Description2);
COMMIT TRANSACTION Trans1
FETCH NEXT FROM MY_CURSOR INTO @Id, @ElementType, @Description
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION Trans1
SELECT @@ERROR
END CATCH
END
CLOSE MY_CURSOR
DEALLOCATE MY_CURSOR
end
We want this operation to execute in one go, like a bulk insert; however, we also need to cross-check for data discrepancies, and if one row cannot be inserted we need to roll back only that specific record.
The only catch with bulk insertion is the reference-table data that must also be maintained.
Please suggest the best approach for this.
This sounds like a job for SSIS (SQL Server Integration Services).
https://learn.microsoft.com/en-us/sql/integration-services/ssis-how-to-create-an-etl-package
In SSIS you can create a data-migration job that performs reference checks. You can set it up to fail, warn, or ignore errors at each stage. To find resources on this, google for ETL and SSIS.
I have done jobs like yours on 50+ million rows.
Sure, it takes a while, and it rolls back everything on an error (if set up that way), but it is the best tool for this kind of job.
I found a solution to upload a large file in one go, like a bulk insert.
SQL Server has a MERGE statement.
The MERGE statement is used to make changes in one table based on values matched from another. It can combine insert, update, and delete operations into one statement.
So we can pass the data in a DataTable to a stored procedure; the source will be your user-defined table type and the target will be your actual SQL table.
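As a sketch, assuming a table-valued parameter of the question's type dbo.udt_T_Bulk, the de-duplicating load into the destination table could be written as a single MERGE (the procedure name UseMerge is illustrative):

```sql
-- Sketch: set-based equivalent of the cursor's insert-if-missing logic
CREATE PROC UseMerge
(
    @udtT_Bulk dbo.udt_T_Bulk READONLY
)
AS
BEGIN
    MERGE T_Bulk_orignal AS target
    USING (SELECT Id, ElementType, [Description] FROM @udtT_Bulk) AS source
        ON  target.ElementType = source.ElementType
        AND target.[Description] = source.[Description]
    WHEN NOT MATCHED THEN
        INSERT (Id, ElementType, [Description])
        VALUES (source.Id, source.ElementType, source.[Description]);
END
```

The reference-table insert can follow the same pattern, joining @udtT_Bulk to T_Bulk_orignal to resolve the surviving Id for each row.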

Multiple rows are getting inserted into a table (which is not desired) as part of a stored procedure

Update: This still remains a mystery. We checked the calling code and did not find anything that would make the SP run in a loop.
For now we have split the SP into two, which seems to have arrested the issue, although we cannot explain why that helped.
Database: MS SQL Server.
I have an SP which performs a few operations, i.e. it inserts a row into three tables based on certain status when called.
It is called from our web application in response to a user action.
We have cases, a few times a day, where the same row gets inserted multiple times (sometimes more than 50), with identical values in every row except the insert datetime, which differs by a few milliseconds. So it is unlikely that the user is initiating the action repeatedly.
This SP does not run in a transaction or take any locks; however, it is probably being called concurrently, since many users of the web application invoke this action at once.
My question is: what is causing the same row to be inserted so many times? If concurrent execution were the issue and we were updating the same row, it would be understandable that one write might overwrite another; but here each user calls the SP with different parameters.
I have wrapped the operation in a transaction to monitor the behaviour, but I would like to understand what exactly causes these multiple inserts with the same values just a few milliseconds apart.
USE [ABC]
GO
/****** Object: StoredProcedure [dbo].[AddProcessAdmittedDocUploadScrutinyWithLog] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[AddProcessAdmittedDocUploadScrutinyWithLog]
(
--Insert using bulk
@stdfrm_id int,
@course_id int,
@stdfrm_scrt_apprvby int,
@stdfrm_scrt_apprvcomment varchar(max),
@sRemainingDocs varchar(max),
@DTProcessAdmittedDocUploadScrutiny AS dbo.MyDTProcessAdmittedDocUploadScrutiny READONLY
)
AS
BEGIN
DECLARE @result char
SET @result = 'N'
--New
declare @AuditCount int = 0;
select @AuditCount = count(scrtaudit_id) from tbl_ProcessAdmittedScrutinyAuditLog
where stdfrm_id = @stdfrm_id and stdfrm_scrt_apprvby = @stdfrm_scrt_apprvby
and stdfrm_scrt_apprvcomment = @stdfrm_scrt_apprvcomment and convert(date, stdfrm_scrt_apprvon, 103) = convert(date, getdate(), 103)
--Extra condition checked to avoid repetition
if (@AuditCount = 0)
BEGIN
--Call Insert
BEGIN TRY
/*Remaining Documents----------*/
DECLARE @sdtdoc_id Table (n int primary key identity(1,1), id int)
if (@sRemainingDocs is not null)
begin
--INSERT INTO @sdtdoc_id (id) SELECT Name from splitstring(@sRemainingDocs)
INSERT INTO @sdtdoc_id (id) SELECT [Value] from dbo.FN_ListToTable(@sRemainingDocs, ',')
end
Declare @isRemaining int = 0;
SELECT @isRemaining = Count(*) FROM @sdtdoc_id
/*Calculate stdfrm_scrt_apprvstatus*/
Declare @stdfrm_scrt_apprvstatus char(1) = 'A';--Approved
Declare @TotalDescripancies int;
select @TotalDescripancies = count(doc_id) from @DTProcessAdmittedDocUploadScrutiny where doc_id_scrtyn = 'Y'
if (@isRemaining > 0)
begin
set @stdfrm_scrt_apprvstatus = 'H';--Discrepancies Found
end
else if exists (select count(doc_id) from @DTProcessAdmittedDocUploadScrutiny where doc_id_scrtyn = 'Y')
begin
if (@TotalDescripancies > 0)
begin
set @stdfrm_scrt_apprvstatus = 'H';--Discrepancies Found
end
end
/* Check if Discrepancies Found first time then assign to Checker o.w assign to direct college like grievance*/
if (@stdfrm_scrt_apprvstatus = 'H')
begin
declare @countAuditLog int = 0;
select @countAuditLog = count(stdfrm_id) from tbl_ProcessAdmittedScrutinyAuditLog where stdfrm_id = @stdfrm_id
if (@countAuditLog = 0)
begin
set @stdfrm_scrt_apprvstatus = 'G'--'E';--Discrepancies Found set Edit request assign to Checker
end
--else if (@countAuditLog = 1)
-- begin
--set @stdfrm_scrt_apprvstatus = 'G';--Discrepancies Found set Grievance assign to college
-- end
end
/*----------------------*/
/*Update status in original table-----*/
Update tbl_ProcessAdmitted set stdfrm_scrt_apprvstatus = @stdfrm_scrt_apprvstatus
, stdfrm_scrt_apprvon = getdate(), stdfrm_scrt_apprvby = @stdfrm_scrt_apprvby
, stdfrm_scrt_apprvcomment = @stdfrm_scrt_apprvcomment
where stdfrm_id = @stdfrm_id
/*Add in Main Student Log-----------*/
/********* The row here gets inserted multiple times *******************/
INSERT into tbl_ProcessAdmittedScrutinyAuditLog
(stdfrm_id, stdfrm_scrt_apprvstatus, stdfrm_scrt_apprvon, stdfrm_scrt_apprvby, stdfrm_scrt_apprvcomment)
values
(@stdfrm_id, @stdfrm_scrt_apprvstatus, getdate(), @stdfrm_scrt_apprvby, @stdfrm_scrt_apprvcomment)
DECLARE @scrtaudit_id int = @@identity
/*Completed -------------------------*/
DELETE FROM tbl_ProcessAdmittedDocUploadScrutiny WHERE stdfrm_id = @stdfrm_id
SET NOCOUNT ON;
/********* The row here gets inserted multiple times *******************/
INSERT tbl_ProcessAdmittedDocUploadScrutiny
(stdfrm_id, course_id, doc_id, doc_id_scrtyn, doc_id_scrtrmrk, doc_id_comment)
SELECT @stdfrm_id, @course_id, doc_id, doc_id_scrtyn, doc_id_scrtrmrk, doc_id_comment
FROM @DTProcessAdmittedDocUploadScrutiny;
/*Scrutiny Document Log -------------------------*/
/********* The row here gets inserted multiple times *******************/
INSERT tbl_ProcessAdmittedDocUploadScrutinyAuditLog
(scrtaudit_id, stdfrm_id, course_id, doc_id, doc_id_scrtyn, doc_id_scrtrmrk, doc_id_comment)
SELECT @scrtaudit_id, @stdfrm_id, @course_id, doc_id, doc_id_scrtyn, doc_id_scrtrmrk, doc_id_comment
FROM @DTProcessAdmittedDocUploadScrutiny;
/*Remaining Documents Insert into table*/
DELETE FROM tbl_ProcessAdmittedDocUploadScrutinyRemiaing WHERE stdfrm_id = @stdfrm_id
DECLARE @Id int, @doc_id int
WHILE (SELECT Count(*) FROM @sdtdoc_id) > 0
BEGIN
Select Top 1 @Id = n, @doc_id = id From @sdtdoc_id
--Do some processing here
insert into tbl_ProcessAdmittedDocUploadScrutinyRemiaing (stdfrm_id, doc_id)
values (@stdfrm_id, @doc_id)
insert into tbl_ProcessAdmittedDocUploadScrutinyRemiaingAuditLog
(scrtaudit_id, stdfrm_id, doc_id)
values (@scrtaudit_id, @stdfrm_id, @doc_id)
DELETE FROM @sdtdoc_id WHERE n = @Id
END --End of While
/*End Remaining Documents-----------*/
SET @result = @stdfrm_scrt_apprvstatus
END TRY
BEGIN CATCH
SET @result = 'N'
insert into tbl_ErrorSql (ErrorMessage, stdfrm_id)
values (coalesce(Error_Message(), ERROR_LINE()), @stdfrm_id)
END CATCH;
--End of Call Insert
END
SELECT @result
END

SPROC that returns unique calculated INT for each call

I'm implementing an event-logging system in my application to record certain event types from my code, so I've created a table that stores the log type and an incremental ID:
|LogType|CurrentId|
|info | 1 |
|error | 5 |
And also a table to store the concrete log record:
|LogType|IdLog|Message |
|info |1 |Process started|
|error |5 |some error |
So every time I need to save a new record I call a SPROC to calculate the new ID for the log type, basically newId = currentId + 1. But I am facing an issue with that calculation: if multiple processes call the SPROC at the same time, the "generated" ID is the same, so I get log records with duplicate IDs, and every record must have a unique ID.
This is my SPROC written for SQL Server 2005:
ALTER PROCEDURE [dbo].[usp_GetLogId]
@LogType VARCHAR(MAX)
AS
BEGIN
SET NOCOUNT ON;
BEGIN TRANSACTION
BEGIN TRY
DECLARE @IdCreated VARCHAR(MAX)
IF EXISTS (SELECT * FROM TBL_ApplicationLogId WHERE LogType = @LogType)
BEGIN
DECLARE @CurrentId BIGINT
SET @CurrentId = (SELECT CurrentId FROM TBL_ApplicationLogId WHERE LogType = @LogType)
DECLARE @NewId BIGINT
SET @NewId = (@CurrentId + 1)
UPDATE TBL_ApplicationLogId
SET CurrentId = @NewId
WHERE LogType = @LogType
SET @IdCreated = CONVERT(VARCHAR, @NewId)
END
ELSE
BEGIN
INSERT INTO TBL_ApplicationLogId VALUES(@LogType, 0)
EXEC @IdCreated = usp_GetLogId @LogType
END
END TRY
BEGIN CATCH
DECLARE @ErrorMessage NVARCHAR(MAX)
SET @ErrorMessage = ERROR_MESSAGE()
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
RAISERROR (@ErrorMessage, 16, 1)
END CATCH
IF @@TRANCOUNT > 0
COMMIT TRANSACTION
SELECT @IdCreated
END
I would appreciate your help fixing the sproc to return a unique ID on every call.
It has to work on SQL Server 2005. Thanks.
Can you achieve what you want with an identity column?
Then you can just let SQL Server guarantee uniqueness.
Example:
create table my_test_table
(
ID int identity
,SOMEVALUE nvarchar(100)
);
insert into my_test_table(somevalue)values('value1');
insert into my_test_table(somevalue)values('value2');
select * from my_test_table
If you must issue the new ID values yourself for some reason, try using a sequence, as shown below (note that SEQUENCE was introduced in SQL Server 2012, so this option is not available on 2005):
if object_id('my_test_table') is not null
begin
drop table my_test_table;
end;
go
create table my_test_table
(
ID int
,SOMEVALUE nvarchar(100)
);
go
if object_id('my_test_sequence') is not null
begin
drop sequence my_test_sequence;
end;
go
CREATE SEQUENCE my_test_sequence
AS INT --other options are here: https://msdn.microsoft.com/en-us/library/ff878091.aspx
START WITH 1
INCREMENT BY 1
MINVALUE 0
NO MAXVALUE;
go
insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value1');
insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value2');
insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value3');
select * from my_test_table
One more edit: I think this is an improvement to the existing stored procedure, given the requirements. It includes the new-value calculation directly in the UPDATE, ultimately returns the value directly from the table (not from a variable, which could be out of date), and avoids recursion.
A full test script is below.
if object_id('STACKOVERFLOW_usp_getlogid') is not null
begin
drop procedure STACKOVERFLOW_usp_getlogid;
end
go
if object_id('STACKOVERFLOW_TBL_ApplicationLogId') is not null
begin
drop table STACKOVERFLOW_TBL_ApplicationLogId;
end
go
create table STACKOVERFLOW_TBL_ApplicationLogId(CurrentID int, LogType nvarchar(max));
go
create PROCEDURE [dbo].[STACKOVERFLOW_USP_GETLOGID](@LogType VARCHAR(MAX))
AS
BEGIN
SET NOCOUNT ON;
BEGIN TRANSACTION
BEGIN TRY
DECLARE @IdCreated VARCHAR(MAX)
IF EXISTS (SELECT * FROM STACKOVERFLOW_TBL_ApplicationLogId WHERE LogType = @LogType)
BEGIN
UPDATE STACKOVERFLOW_TBL_APPLICATIONLOGID
SET CurrentId = CurrentID + 1
WHERE LogType = @LogType
END
ELSE
BEGIN
--first time: insert 0.
INSERT INTO STACKOVERFLOW_TBL_ApplicationLogId(CurrentID, LogType) VALUES(0, @LogType);
END
END TRY
BEGIN CATCH
DECLARE @ErrorMessage NVARCHAR(MAX)
SET @ErrorMessage = ERROR_MESSAGE()
IF @@TRANCOUNT > 0
begin
ROLLBACK TRANSACTION;
end
RAISERROR(@ErrorMessage, 16, 1);
END CATCH
select CurrentID from STACKOVERFLOW_TBL_APPLICATIONLOGID where LogType = @LogType;
IF @@TRANCOUNT > 0
begin
COMMIT TRANSACTION
END
end
go
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2';
You want your increment and read to be atomic, with a guarantee that no other process can increment in between.
You also need to ensure that the log type exists, and again for it to be thread-safe.
Here's how I would go about that, but you would be advised to read up on how it all works in SQL Server 2005 as I have not had to deal with these things in nearly 8 years.
This should complete the two actions atomically, and also without transactions, in order to prevent threads blocking each other. (Not just performance, but also to avoid DEADLOCKs when interacting with other code.)
ALTER PROCEDURE [dbo].[usp_GetLogId]
@LogType VARCHAR(MAX)
AS
BEGIN
SET NOCOUNT ON;
-- Hold our newly created id in a table variable, so we can use OUTPUT
DECLARE @new_id TABLE (id BIGINT);
-- I think this is thread safe, doing all things in a single statement
----> Check that the log-type has no records
----> If so, then insert an initialising row
----> Output the newly created id into our table variable
INSERT INTO
TBL_ApplicationLogId (
LogType,
CurrentId
)
OUTPUT
INSERTED.CurrentID
INTO
@new_id
SELECT
@LogType, 1
FROM
TBL_ApplicationLogId
WHERE
LogType = @LogType
GROUP BY
LogType
HAVING
COUNT(*) = 0
;
-- I think this is thread safe, doing all things in a single statement
----> Ensure we don't already have a new id created
----> Increment the current id
----> Output it to our table variable
UPDATE
TBL_ApplicationLogId
SET
CurrentId = CurrentId + 1
OUTPUT
INSERTED.CurrentID
INTO
@new_id
WHERE
LogType = @LogType
AND NOT EXISTS (SELECT * FROM @new_id)
;
-- Select the result from our table variable
----> It must be populated either from the INSERT or the UPDATE
SELECT
MAX(id) -- MAX used to tell the system that it's returning a scalar
FROM
@new_id
;
END
Not much you can do here, but validate that:
table TBL_ApplicationLogId is indexed on column LogType;
the @LogType procedure parameter has the same data type as column LogType in table TBL_ApplicationLogId, so the index can actually be used if/when it exists.
If you have a concurrency issue, forcing the lock level on table TBL_ApplicationLogId during the select and update may help. Just add WITH (ROWLOCK) after the table name, e.g. TBL_ApplicationLogId WITH (ROWLOCK).

Trigger to compare a line from inserted table with a table before insertion

Hi guys, I want to create a trigger that compares each inserted row with the rows already in the table before insertion. I have one table called "cv_langues" with two columns, "id_cv" and "id_langues". Here is my trigger:
alter trigger insertion_cv_langue
on cv_langues
for insert
as
begin
declare @id_langue int, @id_cv int, @inserted_langue int, @inserted_cv int, @comp int
set @comp = 0
declare cv_langues_cur cursor
for select id_langue, id_cv from cv_langues
declare inserted_cv_langues cursor
for select id_langue, id_cv from inserted
open inserted_cv_langues
fetch inserted_cv_langues into @inserted_langue, @inserted_cv -- 1st line to compare
close inserted_cv_langues
open cv_langues_cur
fetch cv_langues_cur into @id_langue, @id_cv -- multiple lines from cv_langues
while @@fetch_status = 0
begin
if @id_langue = @inserted_langue and @id_cv = @inserted_cv
begin
set @comp = @comp + 1
end
fetch cv_langues_cur into @id_langue, @id_cv
end
if @comp = 2
rollback
close cv_langues_cur
deallocate inserted_cv_langues
deallocate cv_langues_cur
end
Instead of a trigger, try a unique constraint:
ALTER TABLE cv_langues ADD CONSTRAINT u_IdLangues UNIQUE (id_cv, id_langues)
This will prevent any duplicate entries from being inserted for the same user and will be a hell of a lot easier and more efficient than a trigger.