A lot of queries get suspended. What should I do? - sql

I'm handling an application used by around 2000 people. Every day, users insert around 300 rows each, all at roughly the same time (around 7 am to 11 am). I handle the inserts with stored procedures that contain only INSERT statements, but I use BEGIN TRAN to prevent duplicate primary keys.
Suspended transactions now happen frequently, so my stored procedure takes around 1-2 minutes to complete, and users wait a long time for every insert.
I already checked:
1. Disk speed is normal: around 600 MB/s read and 744 MB/s write.
2. Processor usage sits between 20 and 40% across 10 cores.
3. Memory usage is only 6 GB of the 12 GB allocated.
4. The output of sys.dm_exec_requests, sp_who2, and sys.dm_os_waiting_tasks.
From the last check I found that executions of my stored procedure suspend each other (same stored procedure, different executors).
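To dig further into that last check, a query along these lines (a sketch; adjust the columns to taste) joins sys.dm_exec_requests to sys.dm_exec_sql_text and shows which session is blocking which, and on what wait type:

```sql
-- Sketch: list blocked requests together with the session blocking them.
SELECT
    r.session_id,
    r.status,                 -- e.g. 'suspended'
    r.blocking_session_id,    -- 0 means not blocked
    r.wait_type,              -- e.g. LCK_M_X for an exclusive-lock wait
    r.wait_time,              -- milliseconds spent waiting so far
    t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0
ORDER BY r.wait_time DESC;
```

If the `wait_type` column shows lock waits (`LCK_M_*`), the sessions are queuing on locks rather than on disk, CPU, or memory, which matches the symptoms described above.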
Here is my stored procedure (sorry for the naming; the real names are confidential):
ALTER PROC [dbo].[SP_DESTIONATION_TABLE_INSERT]
{params}
WITH RECOMPILE
AS
BEGIN
    BEGIN TRAN InsertT
    DECLARE @ERRORNMBR INT
    SET @ERRORNMBR = 0
    IF @T = ''
    BEGIN
        -------------------------------------------------
        DECLARE @TCID VARCHAR(15)
        SELECT @TCID = ID
        FROM DESTIONATION_TABLE
        WHERE NIK = @NIK AND
              CUSTOMERID = @CUSTOMERID AND
              CUSTOMERTYPE = @CUSTOMERTYPE AND --edit NvA 20180111
              DATEDIFF(day, DATE, @DATE) = 0
        --IF THERE IS ALREADY A CALL IN SERVER
        IF @TCID IS NOT NULL
        BEGIN
            IF @INTERFACE <> 'WEB'
            BEGIN
                --GET EXISTING CALL ID
                SET @ID = @TCID
                BEGIN TRAN UBAH
                UPDATE DESTIONATION_TABLE
                SET columns = value
                WHERE ID = @ID
                  AND employeeid = @employeeid
                  AND CUSTOMERID = @CUSTOMERID
                SET @ERRORNMBR = @ERRORNMBR + @@ERROR
                IF @ERRORNMBR = 0
                BEGIN
                    COMMIT TRAN UBAH
                    SELECT columns
                    FROM DESTIONATION_TABLE
                    WHERE ID = @ID
                END
                ELSE
                BEGIN
                    ROLLBACK TRAN UBAH
                END
            END
            COMMIT TRAN InsertT
            RETURN
        END
        --------------------------------------------------
        -- CHECK @DEVICECONTROLID
        IF @DEVICECONTROLID IS NOT NULL
           AND @INTERFACE <> 'WEB'
           AND EXISTS (SELECT 1 FROM DESTIONATION_TABLE WHERE DEVICECONTROLID = @DEVICECONTROLID)
        BEGIN
            IF NOT EXISTS (SELECT 1 FROM DESTIONATION_TABLE_TEMP WHERE DEVICECONTROLID = @DEVICECONTROLID)
            BEGIN
                INSERT INTO DESTIONATION_TABLE_TEMP (COLUMNS)
                VALUES (VALUES)
            END
            SELECT * FROM DESTIONATION_TABLE WHERE _DEVICECONTROLID = @_DEVICECONTROLID
        END
        ELSE
        BEGIN
            -- some logic to build the primary key: {string+date+employeeid+increment}
        END
    END
    ELSE
    BEGIN
        BEGIN TRAN UBAH
        IF @PARAMS = 'WEB'
        BEGIN
            UPDATE DESTIONATION_TABLE
            SET COLUMNS = PARAMS
            WHERE ID = @ID
        END
        ELSE IF @PARAMS = 'MOBILE'
        BEGIN
            UPDATE DESTIONATION_TABLE
            SET COLUMNS = PARAMS
            WHERE ID = @ID
        END
        SET @ERRORNMBR = @ERRORNMBR + @@ERROR
        IF @ERRORNMBR = 0
        BEGIN
            COMMIT TRAN UBAH
            SELECT COLUMNS
            FROM DESTIONATION_TABLE
            WHERE ID = @ID
        END
        ELSE
        BEGIN
            ROLLBACK TRAN UBAH
        END
    END
    COMMIT TRAN InsertT
END
I need a suggestion: what should I check next to find out what's wrong with my server?
Is BEGIN TRAN the issue here?

I'm not an expert, but googling "BEGIN TRAN" suggests that it locks a table until the transaction is committed with COMMIT TRAN, meaning it is a blocking process. So if your table is locked while all these inserts are attempting to execute, some of them will obviously not succeed.
https://www.mssqltips.com/sqlservertutorial/3305/what-does-begin-tran-rollback-tran-and-commit-tran-mean/
I would suggest building a work queue on its own thread that listens for INSERTs/UPDATEs. Then the queue can empty into the DB at its leisure.
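One minimal way to sketch such a queue inside SQL Server itself (the table and column names here are hypothetical, not from the question) is a staging table that callers insert into, plus a dequeue statement using READPAST so workers skip each other's locked rows instead of blocking on them:

```sql
-- Hypothetical queue table: callers INSERT here instead of the destination table.
CREATE TABLE dbo.InsertQueue
(
    QueueId  INT IDENTITY PRIMARY KEY,
    Payload  NVARCHAR(MAX) NOT NULL,
    QueuedAt DATETIME NOT NULL DEFAULT GETDATE()
);

-- A background worker drains one row at a time; READPAST skips rows that
-- other workers have locked, so dequeuers do not block one another.
DELETE TOP (1) FROM dbo.InsertQueue WITH (ROWLOCK, READPAST)
OUTPUT deleted.QueueId, deleted.Payload;
```

The worker would then perform the real insert from the dequeued payload, keeping the user-facing call fast because it only ever touches the queue table.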

Related

SPROC that returns unique calculated INT for each call

I'm implementing in my application an event logging system to save some event types from my code, so I've created a table to store the log type and an Incremental ID:
|LogType|CurrentId|
|info | 1 |
|error | 5 |
And also a table to save the concrete log record
|LogType|IdLog|Message |
|info |1 |Process started|
|error |5 |some error |
So, every time I need to save a new record I call a SPROC to calculate the new id for the log type, basically newId = (currentId + 1). But I am facing an issue with that calculation: if multiple processes call the SPROC at the same time, the "generated id" is the same, so I'm getting log records with duplicate ids, and every record must have a unique id.
This is my SPROC written for SQL Server 2005:
ALTER PROCEDURE [dbo].[usp_GetLogId]
    @LogType VARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION
    BEGIN TRY
        DECLARE @IdCreated VARCHAR(MAX)
        IF EXISTS (SELECT * FROM TBL_ApplicationLogId WHERE LogType = @LogType)
        BEGIN
            DECLARE @CurrentId BIGINT
            SET @CurrentId = (SELECT CurrentId FROM TBL_ApplicationLogId WHERE LogType = @LogType)
            DECLARE @NewId BIGINT
            SET @NewId = (@CurrentId + 1)
            UPDATE TBL_ApplicationLogId
            SET CurrentId = @NewId
            WHERE LogType = @LogType
            SET @IdCreated = CONVERT(VARCHAR, @NewId)
        END
        ELSE
        BEGIN
            INSERT INTO TBL_ApplicationLogId VALUES (@LogType, 0)
            EXEC @IdCreated = usp_GetLogId @LogType
        END
    END TRY
    BEGIN CATCH
        DECLARE @ErrorMessage NVARCHAR(MAX)
        SET @ErrorMessage = ERROR_MESSAGE()
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        RAISERROR (@ErrorMessage, 16, 1)
    END CATCH
    IF @@TRANCOUNT > 0
        COMMIT TRANSACTION
    SELECT @IdCreated
END
I would appreciate your help fixing the sproc so it returns a unique id on every call.
It has to work on SQL Server 2005. Thanks.
Can you achieve what you want with an identity column?
Then you can just let SQL Server guarantee uniqueness.
Example:
create table my_test_table
(
ID int identity
,SOMEVALUE nvarchar(100)
);
insert into my_test_table(somevalue)values('value1');
insert into my_test_table(somevalue)values('value2');
select * from my_test_table
If you must issue the new ID values yourself for some reason, try using a sequence (note that CREATE SEQUENCE requires SQL Server 2012 or later, so this option will not work on 2005), as shown here:
if object_id('my_test_table') is not null
begin
drop table my_test_table;
end;
go
create table my_test_table
(
ID int
,SOMEVALUE nvarchar(100)
);
go
if object_id('my_test_sequence') is not null
begin
drop sequence my_test_sequence;
end;
go
CREATE SEQUENCE my_test_sequence
AS INT --other options are here: https://msdn.microsoft.com/en-us/library/ff878091.aspx
START WITH 1
INCREMENT BY 1
MINVALUE 0
NO MAXVALUE;
go
insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value1');
insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value2');
insert into my_test_table(id,somevalue)values(next value for my_test_sequence,'value3');
select * from my_test_table
One more edit: I think this is an improvement to the existing stored procedure, given the requirements. Include the new-value calculation directly in the UPDATE, ultimately return the value directly from the table (not from a variable, which could be out of date), and avoid recursion.
A full test script is below.
if object_id('STACKOVERFLOW_usp_getlogid') is not null
begin
drop procedure STACKOVERFLOW_usp_getlogid;
end
go
if object_id('STACKOVERFLOW_TBL_ApplicationLogId') is not null
begin
drop table STACKOVERFLOW_TBL_ApplicationLogId;
end
go
create table STACKOVERFLOW_TBL_ApplicationLogId(CurrentID int, LogType nvarchar(max));
go
create PROCEDURE [dbo].[STACKOVERFLOW_USP_GETLOGID](@LogType VARCHAR(MAX))
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION
    BEGIN TRY
        DECLARE @IdCreated VARCHAR(MAX)
        IF EXISTS (SELECT * FROM STACKOVERFLOW_TBL_ApplicationLogId WHERE LogType = @LogType)
        BEGIN
            UPDATE STACKOVERFLOW_TBL_APPLICATIONLOGID
            SET CurrentId = CurrentID + 1
            WHERE LogType = @LogType
        END
        ELSE
        BEGIN
            --first time: insert 0.
            INSERT INTO STACKOVERFLOW_TBL_ApplicationLogId(CurrentID, LogType) VALUES (0, @LogType);
        END
    END TRY
    BEGIN CATCH
        DECLARE @ErrorMessage NVARCHAR(MAX)
        SET @ErrorMessage = ERROR_MESSAGE()
        IF @@TRANCOUNT > 0
        BEGIN
            ROLLBACK TRANSACTION;
        END
        RAISERROR(@ErrorMessage, 16, 1);
    END CATCH
    SELECT CurrentID FROM STACKOVERFLOW_TBL_APPLICATIONLOGID WHERE LogType = @LogType;
    IF @@TRANCOUNT > 0
    BEGIN
        COMMIT TRANSACTION
    END
END
go
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType1';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2';
exec STACKOVERFLOW_USP_GETLOGID 'TestLogType2';
You want your increment and read to be atomic, with a guarantee that no other process can increment in between.
You also need to ensure that the log type exists, and again for it to be thread-safe.
Here's how I would go about that, but you would be advised to read up on how it all works in SQL Server 2005 as I have not had to deal with these things in nearly 8 years.
This should complete the two actions atomically, and also without transactions, in order to prevent threads blocking each other. (Not just performance, but also to avoid DEADLOCKs when interacting with other code.)
ALTER PROCEDURE [dbo].[usp_GetLogId]
    @LogType VARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;
    -- Hold our newly created id in a table variable, so we can use OUTPUT
    DECLARE @new_id TABLE (id BIGINT);
    -- I think this is thread safe, doing all things in a single statement
    ---->  Check that the log-type has no records
    ---->  If so, then insert an initialising row
    ---->  Output the newly created id into our table variable
    INSERT INTO
        TBL_ApplicationLogId (
            LogType,
            CurrentId
        )
    OUTPUT
        INSERTED.CurrentID
    INTO
        @new_id
    SELECT
        @LogType, 1
    WHERE
        NOT EXISTS (SELECT * FROM TBL_ApplicationLogId WHERE LogType = @LogType)
    ;
    -- I think this is thread safe, doing all things in a single statement
    ---->  Ensure we don't already have a new id created
    ---->  Increment the current id
    ---->  Output it to our table variable
    UPDATE
        TBL_ApplicationLogId
    SET
        CurrentId = CurrentId + 1
    OUTPUT
        INSERTED.CurrentID
    INTO
        @new_id
    WHERE
        LogType = @LogType
        AND NOT EXISTS (SELECT * FROM @new_id)
    ;
    -- Select the result from our table variable
    ---->  It must be populated either from the INSERT or the UPDATE
    SELECT
        MAX(id) -- MAX used to tell the system that it's returning a scalar
    FROM
        @new_id
    ;
END
Not much you can do here, but validate that:
table TBL_ApplicationLogId is indexed by column LogType.
the @LogType parameter is the same data type as column LogType in table TBL_ApplicationLogId, so the query can actually use the index if/when it exists.
If you have a concurrency issue, forcing the lock level on table TBL_ApplicationLogId during the select and update may help. Just add WITH (ROWLOCK) after the table name, e.g.: TBL_ApplicationLogId WITH (ROWLOCK)
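For example, on the SELECT that reads the current value, a hedged variant might look like the following. (UPDLOCK is my addition to the suggestion above: a shared row lock alone would still let two sessions read the same CurrentId; an update lock makes the read-then-update path serialize.)

```sql
-- Take an update lock on just this row while reading it, so a concurrent
-- caller on the same LogType waits here instead of reading a stale value.
SELECT CurrentId
FROM TBL_ApplicationLogId WITH (UPDLOCK, ROWLOCK)
WHERE LogType = @LogType;
```

The lock is held until the enclosing transaction commits, so this only helps when the SELECT and the subsequent UPDATE run inside the same transaction.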

Concurrent table creation

I have a situation where a table can be created from different places.
I have ~10 running application instances that can simultaneously try to create the same table.
Question: how can I synchronize them so I don't get any exceptions or errors?
All instances of the application try to create the new table when the day ends, so at around 00:00:00 they will all try to create it.
Sorry for the possibly stupid question; I have been googling for a while with no results.
Thank you.
You can use sp_getapplock to take arbitrary locks. You could make your app take such a lock before creating the table, like this (the resource name and table name here are placeholders):
EXEC sp_getapplock @Resource = 'create_daily_table', @LockMode = 'Exclusive', @LockOwner = 'Session';
IF OBJECT_ID('your_table', 'U') IS NULL
    CREATE TABLE ...
EXEC sp_releaseapplock @Resource = 'create_daily_table', @LockOwner = 'Session';
As alluded to in the comments, your first step is to perform an existence check. Then, on the off chance that there are two simultaneous creations you can use TRY...CATCH.
IF Object_ID('test', 'U') IS NULL
BEGIN
BEGIN TRY
CREATE TABLE test ( a int )
END TRY
BEGIN CATCH
SELECT Error_Message()
END CATCH
END
UPDATE
You do not want to create a table every day. Seriously. This is very poor database design.
Instead you want to add a datetime column to your table that indicates when each record was created.
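A minimal sketch of that design (table and column names are hypothetical): one permanent table with a creation timestamp, queried by date instead of by table name.

```sql
-- One permanent table replaces the table-per-day scheme.
CREATE TABLE dbo.DailyData
(
    Id        INT IDENTITY PRIMARY KEY,
    Payload   NVARCHAR(100) NOT NULL,
    CreatedAt DATETIME NOT NULL DEFAULT GETDATE()  -- when the record was created
);

-- "Today's table" becomes a simple filter on the date.
SELECT Id, Payload
FROM dbo.DailyData
WHERE CreatedAt >= CAST(GETDATE() AS DATE);
```

With this layout the midnight race disappears entirely: nothing needs to be created at day rollover, and an index on CreatedAt keeps the per-day queries cheap.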
Can you please go through the following code? Concurrent execution is handled using ISOLATION LEVEL SERIALIZABLE.
CREATE PROCEDURE [dbo].[GetNextID](
    @IDName nvarchar(255)
)
AS
BEGIN
    /*
    Description: Increments and returns the LastID value from tblIDs
        for a given IDName
    Author: Max Vernon
    Date: 2012-07-19
    */
    DECLARE @Retry int;
    DECLARE @EN int, @ES int, @ET int;
    SET @Retry = 5;
    DECLARE @NewID int;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SET NOCOUNT ON;
    WHILE @Retry > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            SET @NewID = COALESCE((SELECT LastID FROM tblIDs WHERE IDName = @IDName), 0) + 1;
            IF (SELECT COUNT(IDName) FROM tblIDs WHERE IDName = @IDName) = 0
                INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID)
            ELSE
                UPDATE tblIDs SET LastID = @NewID WHERE IDName = @IDName;
            COMMIT TRANSACTION;
            SET @Retry = -2; /* no need to retry since the operation completed */
        END TRY
        BEGIN CATCH
            IF (ERROR_NUMBER() = 1205) /* DEADLOCK */
                SET @Retry = @Retry - 1;
            ELSE
            BEGIN
                SET @Retry = -1;
                SET @EN = ERROR_NUMBER();
                SET @ES = ERROR_SEVERITY();
                SET @ET = ERROR_STATE();
                RAISERROR (@EN, @ES, @ET);
            END
            ROLLBACK TRANSACTION;
        END CATCH
    END
    IF @Retry = 0 /* must have deadlock'd 5 times. */
    BEGIN
        SET @EN = 1205;
        SET @ES = 13;
        SET @ET = 1;
        RAISERROR (@EN, @ES, @ET);
    END
    ELSE
        SELECT @NewID AS NewID;
END
GO

Set output parameter when commit transaction fails

I need to roll back the transaction whenever any one of the queries fails. If the whole transaction succeeds, it has to set the output parameter.
So far I have this:
create PROCEDURE [dbo].[sp_InsertAll]
    -- Add the parameters for the stored procedure here
    @WO_Type varchar(25),
    @WO_Operation varchar(25),
    @WO_Source varchar(25),
    @RETVAL BIT OUT
AS
BEGIN
    SET NOCOUNT ON;
    SET @RETVAL = 0
    SET XACT_ABORT ON
    BEGIN TRAN;
    INSERT INTO tblTabl1 (WO_Type, WO_Operation, WO_Source)
    VALUES (@WO_Type, @WO_Operation, @WO_Source)
    IF @UPDATESOURCE = 1
    BEGIN
        UPDATE tblT2
        SET SM_SaddleStatus = @SOURCESTATUS
        WHERE SM_SaddleID = @WO_SourceID
    END
    IF @UPDATEDESTINATION = 1
    BEGIN
        UPDATE tblT3
        SET SM_SaddleStatus = @DESTINATIONSTATUS
        WHERE SM_SaddleID = @WO_DestinationID
    END
    SET @RETVAL = 1
    COMMIT TRAN;
END
Is this the right way to return a value? Is there any problem with this method? So far it is working fine for me, but before moving to production I need to cross-check.
This is what I'd suggest: use a T-SQL TRY...CATCH block to roll back the transaction and signal failure via a different return value:
create PROCEDURE [dbo].[sp_InsertAll]
    -- Add the parameters for the stored procedure here
    @WO_Type varchar(25),
    @WO_Operation varchar(25),
    @WO_Source varchar(25)
    --@RETVAL BIT OUT  (no longer needed; the return code signals success/failure)
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON
    BEGIN TRAN;
    BEGIN TRY
        INSERT INTO tblTabl1 (WO_Type, WO_Operation, WO_Source)
        VALUES (@WO_Type, @WO_Operation, @WO_Source)
        IF @UPDATESOURCE = 1
        BEGIN
            UPDATE tblT2
            SET SM_SaddleStatus = @SOURCESTATUS
            WHERE SM_SaddleID = @WO_SourceID
        END
        IF @UPDATEDESTINATION = 1
        BEGIN
            UPDATE tblT3
            SET SM_SaddleStatus = @DESTINATIONSTATUS
            WHERE SM_SaddleID = @WO_DestinationID
        END
        COMMIT TRAN;
        RETURN 0
    END TRY
    BEGIN CATCH
        ROLLBACK TRAN;
        RETURN -1
    END CATCH
END

With(XLock,RowLock) does not lock row exclusively

I have a table that has a column named "Is_Locked".
I open 2 SSMS and in every one create a new Query with this script:
BEGIN TRAN Nima1
BEGIN TRY
    DECLARE @a INT
    SELECT @a = COUNT(*)
    FROM dbo.Siahe WITH (XLOCK, ROWLOCK)
    WHERE TedadDaryaii = 8
      AND Is_Locked = 1
    IF @a = 0
    BEGIN
        UPDATE Siahe
        SET Is_Locked = 1
        WHERE ShMarja = 9999
    END
    COMMIT TRAN Nima1
END TRY
BEGIN CATCH
    ROLLBACK TRAN Nima1
END CATCH
But if every Is_Locked field is false, then both queries execute and the SELECT statement does not lock the rows exclusively.
Why?
If #a = 0 then there were 0 matching rows from your first query. All 0 of those rows are exclusively locked. I'm a bit confused by your different where conditions in your select and update statements. If the same where conditions were used in both, I'd suggest something like:
UPDATE Siahe
SET Is_Locked = 1
WHERE
    Is_Locked = 0 and
    /* Other Conditions */

IF @@ROWCOUNT = 1
BEGIN
    PRINT 'We got the lock'
END
ELSE
BEGIN
    PRINT 'Someone else has the lock'
END

Timeout in SQL Procedure

I am using the SQL below to import some data from a file on the intranet. However, every once in a while there will be a timeout error and the proc will fail, which is why I am using a transaction. If the transaction fails, I want the ImportedTable to be cleared, but this does not seem to happen. Is there anything I am missing here?
ALTER PROCEDURE [dbo].[pr_ImportData]
    @StoreCode varchar(10),
    @UserId varchar(100)
AS
BEGIN TRANSACTION
-- 1) Clear the data
exec pr_INTRANET_ClearData @StoreCode, @UserId
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    GOTO EXIT1
END
-- 2) Add the new data to the history table
INSERT INTO data_History (...)
SELECT ... from ImportedTable WHERE StoreCode = @StoreCode and UserId = @UserId
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    GOTO EXIT1
END
-- 3) Add the data to the live table
INSERT INTO data_Live (...)
SELECT ... from ImportedTable WHERE StoreCode = @StoreCode and UserId = @UserId
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION
    GOTO EXIT1
END
EXIT1:
-- 4) Delete the rows from the temp table
DELETE FROM ImportedTable WHERE StoreCode = @StoreCode and UserId = @UserId
COMMIT TRANSACTION
Update 1: I am running this against SQL 2000 and SQL2005.
Update 2: To clarify: The ImportedTable never gets cleared at Exit1.
SET XACT_ABORT ON will make any error roll back the transaction, removing the need to explicitly roll back on error. You should also consider using BEGIN TRY/BEGIN CATCH, as it is significantly easier to program with than checking @@ERROR after every statement.
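A sketch of how pr_ImportData might look restructured that way (the column lists are elided in the question, so the `...` placeholders remain):

```sql
ALTER PROCEDURE [dbo].[pr_ImportData]
    @StoreCode varchar(10),
    @UserId varchar(100)
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;  -- any run-time error dooms and rolls back the transaction
    BEGIN TRY
        BEGIN TRANSACTION;
        exec pr_INTRANET_ClearData @StoreCode, @UserId;
        INSERT INTO data_History (...)   -- column lists elided, as in the question
        SELECT ... from ImportedTable WHERE StoreCode = @StoreCode and UserId = @UserId;
        INSERT INTO data_Live (...)
        SELECT ... from ImportedTable WHERE StoreCode = @StoreCode and UserId = @UserId;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
    END CATCH;
    -- Runs whether the import succeeded or failed, outside the transaction,
    -- so the staging rows are always cleared.
    DELETE FROM ImportedTable WHERE StoreCode = @StoreCode and UserId = @UserId;
END
```

Because the DELETE sits after the TRY/CATCH rather than at a GOTO label inside the transaction, it is never undone by the rollback, which addresses the "ImportedTable never gets cleared" symptom.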