SQL Server: update rows takes a long time

We use SQL Server 2017. We need to do a one-time update on a table that has about 1.5 million rows.
Before the update, we temporarily changed the database recovery model from "Full" to "Simple", and ran this query to commit a transaction every 50,000 rows:
declare @LastCount int
set ROWCOUNT 50000
set @LastCount = 1
while (@LastCount > 0)
begin
    begin tran
    update myTbl
    set SendPath = replace(SendPath,'\\Server1\myFolder\','\\Server2\myFolder') ,
        ReceivePath = replace(ReceivePath,'\\Server1\myFolder\','\\Server2\myFolder')
    set @LastCount = @@ROWCOUNT
    commit tran
end
set ROWCOUNT 0
The above statement takes a long time to execute (when I stopped it after 2 hours, it still wasn't finished).
Since the database is now using the Simple recovery model, do I still need to commit the transaction every x number of records?
Or can I simply execute this query without the begin tran and commit tran?
update myTbl
set SendPath = replace(SendPath,'\\Server1\myFolder\','\\Server2\myFolder') ,
ReceivePath = replace(ReceivePath,'\\Server1\myFolder\','\\Server2\myFolder')
Based on the recommendations, I updated the query to this, and now it runs fast:
declare @LastCount int
set ROWCOUNT 50000
set @LastCount = 1
while (@LastCount > 0)
begin
    begin tran
    update myTbl
    set SendPath = replace(SendPath,'\\Server1\myFolder\','\\Server2\myFolder') ,
        ReceivePath = replace(ReceivePath,'\\Server1\myFolder\','\\Server2\myFolder')
    where left(SendPath,31) = '\\Server1\myFolder\'
    set @LastCount = @@ROWCOUNT
    commit tran
end
set ROWCOUNT 0
Thank you
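For what it's worth, the first loop could never finish: without a WHERE clause, every iteration updates the first 50,000 rows the scan finds (most of them already converted), so @@ROWCOUNT never reaches 0. The WHERE clause is what fixes that, not the recovery model; even under Simple recovery, a single 1.5-million-row UPDATE is one implicit transaction, and the log cannot truncate past it until it commits. Note also that SET ROWCOUNT is deprecated for INSERT/UPDATE/DELETE statements; a TOP-based version of the same batching loop might look like this (a sketch, keeping the question's literals, including the replacement string's missing trailing backslash):

declare @LastCount int = 1
while (@LastCount > 0)
begin
    begin tran
    -- TOP replaces the deprecated SET ROWCOUNT for DML; the WHERE clause
    -- lets @@ROWCOUNT reach 0 so the loop can terminate
    update top (50000) myTbl
    set SendPath = replace(SendPath,'\\Server1\myFolder\','\\Server2\myFolder'),
        ReceivePath = replace(ReceivePath,'\\Server1\myFolder\','\\Server2\myFolder')
    -- like the question's own WHERE, this assumes SendPath and ReceivePath
    -- change together
    where SendPath like '\\Server1\myFolder\%'
    set @LastCount = @@ROWCOUNT
    commit tran
end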

Related

Transactions not being committed although I have a "Commit Transaction" statement

I'm using SQL Azure and trying to do a conditional delete in batches for a large table. Sample:
DECLARE
    @LargestKeyProcessed BIGINT = 1,
    @NextBatchMax BIGINT,
    @msg varchar(max) = '';
WHILE (@LargestKeyProcessed <= 1000000)
BEGIN
    Begin Transaction
    SET @NextBatchMax = @LargestKeyProcessed + 50000;
    DELETE From mytable
    WHERE Id > @LargestKeyProcessed AND Id <= @NextBatchMax And some logic
    SET @LargestKeyProcessed = @NextBatchMax;
    set @msg = '' + @LargestKeyProcessed;
    RAISERROR(@msg, 0, 1) WITH NOWAIT
    Commit Transaction
END
After the command executes successfully I close the tab, but SSMS says there are uncommitted transactions, even though the commit statement is in every iteration. Also, the database size seems to remain the same.
I would appreciate your help in explaining why this happens.
Thank you very much.
I think you can try adding SET IMPLICIT_TRANSACTIONS OFF to the SQL, as follows, to see if that solves your issue.
DECLARE
    @LargestKeyProcessed BIGINT = 1,
    @NextBatchMax BIGINT,
    @msg varchar(max) = '';
WHILE (@LargestKeyProcessed <= 1000000)
BEGIN
    SET IMPLICIT_TRANSACTIONS OFF
    Begin Transaction
    SET @NextBatchMax = @LargestKeyProcessed + 50000;
    DELETE From mytable
    WHERE Id > @LargestKeyProcessed AND Id <= @NextBatchMax And some logic
    SET @LargestKeyProcessed = @NextBatchMax;
    set @msg = '' + @LargestKeyProcessed;
    RAISERROR(@msg, 0, 1) WITH NOWAIT
    Commit Transaction
END
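If SSMS still warns about an open transaction, you can check the session directly; the warning simply means @@TRANCOUNT is greater than zero on that connection. With IMPLICIT_TRANSACTIONS ON, every statement that runs outside a transaction silently opens a new one, so statements after your last COMMIT can leave a transaction open. A quick check, as a sketch:

-- How many transactions are open on this session?
SELECT @@TRANCOUNT AS open_tran_count;

-- Oldest active transaction in the current database, if any:
DBCC OPENTRAN;

-- Close a leftover transaction explicitly:
IF @@TRANCOUNT > 0
    COMMIT TRANSACTION;

As for the database size: deleting rows frees space inside the data file but does not shrink the file itself, so the size on disk stays the same until the file is shrunk.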

A lot of queries get suspended. What should I do?

I maintain an application used by around 2,000 people. Every day, users insert around 300 rows each at the same time (around 7 am to 11 am). I handle it using stored procedures that contain only insert statements, but I use begin tran to prevent duplicate primary keys.
Currently, suspended transactions happen frequently, so my stored procedure takes around 1-2 minutes to finish, and this causes our users to wait a long time for every insert.
I already checked:
Disk speed is normal: around 600 MB/s read, 744 MB/s write.
Processor usage is between 20-40% with 10 cores.
Memory usage is only 6 GB of the 12 GB available.
Output from sys.dm_exec_requests, sp_who2, and sys.dm_os_waiting_tasks.
The result of the DMV checks is that executions of my stored procedure suspend each other (same stored procedure, different executors).
This is my stored procedure (sorry for the naming; it is confidential to my company):
ALTER PROC [dbo].[SP_DESTIONATION_TABLE_INSERT]
{params}
WITH RECOMPILE
AS
BEGIN
    BEGIN TRAN InsertT
    DECLARE @ERRORNMBR INT
    SET @ERRORNMBR = 0
    IF @T = ''
    BEGIN
        -------------------------------------------------
        DECLARE @TCID VARCHAR(15)
        SELECT @TCID = ID
        FROM DESTIONATION_TABLE
        WHERE NIK = @NIK AND
              CUSTOMERID = @CUSTOMERID AND
              CUSTOMERTYPE = @CUSTOMERTYPE AND --edit NvA 20180111
              DATEDIFF(day,DATE,@DATE) = 0
        --IF THERE IS ALREADY A CALL IN SERVER
        IF @TCID IS NOT NULL
        BEGIN
            IF @INTERFACE <> 'WEB' BEGIN
                --GET EXISTING CALL ID
                SET @ID = @TCID
                BEGIN TRAN UBAH
                UPDATE DESTIONATION_TABLE
                SET
                    columns=value
                WHERE ID = @ID
                  AND employeeid = @employeeid
                  AND CUSTOMERID = @CUSTOMERID
                SET @ERRORNMBR = @ERRORNMBR + @@ERROR
                IF @ERRORNMBR = 0
                BEGIN
                    COMMIT TRAN UBAH
                    SELECT
                        columns
                    FROM DESTIONATION_TABLE WHERE ID = @ID
                END
                ELSE
                BEGIN
                    ROLLBACK TRAN UBAH
                END
            END
            COMMIT TRAN InsertT
            RETURN
        END
        --------------------------------------------------
        -- CHECK @DEVICECONTROLID
        IF @DEVICECONTROLID IS NOT NULL
           AND @INTERFACE <> 'WEB'
           AND EXISTS(SELECT 1 FROM DESTIONATION_TABLE WHERE DEVICECONTROLID = @DEVICECONTROLID)
        BEGIN
            IF NOT EXISTS(SELECT 1 FROM DESTIONATION_TABLE_TEMP WHERE DEVICECONTROLID = @DEVICECONTROLID)
            BEGIN
                INSERT INTO DESTIONATION_TABLE_TEMP
                    (COLUMNS)
                VALUES
                    (VALUES)
            END
            SELECT * FROM DESTIONATION_TABLE WHERE _DEVICECONTROLID = @_DEVICECONTROLID
        END
        ELSE
        BEGIN
            some logic to make primary key formula{string+date+employeeid+increment}
        END
    END
    ELSE
    BEGIN
        BEGIN TRAN UBAH
        IF @PARAMS = 'WEB'
        BEGIN
            UPDATE DESTIONATION_TABLE
            SET
                COLUMNS = PARAMS
            WHERE ID = @ID
        END
        ELSE IF @PARAMS = 'MOBILE'
        BEGIN
            UPDATE DESTIONATION_TABLE
            SET
                COLUMNS = PARAMS
            WHERE ID = @ID
        END
        SET @ERRORNMBR = @ERRORNMBR + @@ERROR
        IF @ERRORNMBR = 0
        BEGIN
            COMMIT TRAN UBAH
            SELECT
                COLUMNS
            FROM DESTIONATION_TABLE WHERE ID = @ID
        END
        ELSE
        BEGIN
            ROLLBACK TRAN UBAH
        END
    END
    COMMIT TRAN InsertT
END
I need a suggestion on what to check next to find out what is wrong with my server.
Is begin tran the issue here?
I'm not an expert, but googling "BEGIN TRAN" shows that it locks a table until the transaction is committed with a "COMMIT TRAN", meaning it is a blocking process. So if your table is locked while all these inserts are attempting to execute, some of them would obviously not succeed.
https://www.mssqltips.com/sqlservertutorial/3305/what-does-begin-tran-rollback-tran-and-commit-tran-mean/
I would suggest building a work queue on its own thread that listens for INSERTs/UPDATEs. Then the queue can empty into the DB at its leisure.
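Before re-architecting, it may help to confirm exactly who is blocking whom. Since the poster already found suspended executions via sys.dm_os_waiting_tasks, a query like this (a sketch; it requires VIEW SERVER STATE permission) ties each blocked request to the blocking session and the statement being run:

-- Show blocked requests, the sessions blocking them,
-- what they are waiting on, and the statement text
SELECT
    r.session_id,
    r.blocking_session_id,
    r.wait_type,
    r.wait_time,       -- milliseconds
    r.wait_resource,
    t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

Also note that the nested BEGIN TRAN UBAH inside BEGIN TRAN InsertT does not create an independent transaction: it only increments @@TRANCOUNT, the inner COMMIT TRAN UBAH commits nothing, and ROLLBACK TRAN UBAH will actually fail, because UBAH names neither the outermost transaction nor a savepoint (that would require SAVE TRANSACTION UBAH).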

Speed up simple update statement in postgres for 1 million rows

I have a very simple SQL update statement in Postgres:
UPDATE p2sa.observation SET file_path = replace(file_path, 'path/sps', 'newpath/p2s')
The observation table has 1,513,128 rows. The query has been running for around 18 hours so far, with no end in sight.
The file_path column is not indexed, so I guess it is doing a top-to-bottom scan, but the time still seems excessive. The replace is probably also a slow operation.
Is there an alternative or better approach for this one-off kind of update that affects all rows? It essentially updates an old file path to a new location. It only needs to be updated once, or maybe again in the future.
Thanks.
In SQL Server you could use a WHILE loop to update in batches.
Try this to see how it performs.
Declare @counter int
Declare @RowsEffected int
Declare @RowsCnt int
Declare @CodeId int
Declare @Err int
DECLARE @MaxNumber int = (select COUNT(*) from p2sa.observation)
SELECT @COUNTER = 1
SELECT @RowsEffected = 0
WHILE ( @RowsEffected < @MaxNumber)
BEGIN
    SET ROWCOUNT 10000
    UPDATE p2sa.observation
    SET file_path = replace(file_path, 'path/sps', 'newpath/p2s')
    where file_path != 'newpath/p2s'
    SELECT @RowsCnt = @@ROWCOUNT, @Err = @@error
    IF @Err <> 0
    BEGIN
        Print 'Problem Updating the records'
        BREAK
    END
    ELSE
        SELECT @RowsEffected = @RowsEffected + @RowsCnt
    PRINT 'The total number of rows affected :' + convert(varchar, @RowsEffected)
    /* delay the loop for 10 secs, so that the update completes */
    WAITFOR DELAY '00:00:10'
END
SET ROWCOUNT 0
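One caution: the loop above is SQL Server T-SQL; SET ROWCOUNT, @@ROWCOUNT, PRINT and WAITFOR do not exist in Postgres, so it will not run against p2sa.observation as written. In Postgres, a simpler first step is to stop rewriting rows that don't need it: an UPDATE writes a new row version for every matched row even when replace() changes nothing, so filtering to the affected rows cuts the work. A Postgres-flavored sketch, assuming the old path fragment only occurs in rows that still need fixing:

-- Only touch rows that actually contain the old fragment,
-- so unchanged rows are not rewritten as new row versions (MVCC)
UPDATE p2sa.observation
SET file_path = replace(file_path, 'path/sps', 'newpath/p2s')
WHERE file_path LIKE '%path/sps%';

-- Reclaim the dead row versions the update leaves behind
VACUUM ANALYZE p2sa.observation;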

SQL locking for batch updates

I'm using SQL Server 2008 R2 and C#.
I'm inserting a batch of rows into SQL Server with a column Status set to the value P.
Afterwards, I check how many rows already have the status R, and if there are fewer than 20, I update the row to status R.
While inserting and updating, more rows are being added and updated all the time.
I've tried transactions and locking in multiple ways, but still: at the moment a new batch is activated, there are more than 20 rows with status R for a few milliseconds. After those few milliseconds it stabilizes back to 20.
Does anyone have an idea why the locking doesn't seem to work during these bursts?
Sample code, reasons, whatever you can share on this subject could be useful!
Thanks!
Following is my stored proc:
DECLARE @return BIT
SET @return = -1
DECLARE @previousValue INT
--insert the started orchestration
INSERT INTO torchestrationcontroller WITH (ROWLOCK)
    ([flowname],[orchestrationid],[status])
VALUES (@FlowName, @OrchestrationID, 'P')
--check settings
DECLARE @maxRunning INT
SELECT @maxRunning = maxinstances
FROM torchestrationflows WITH (NOLOCK)
WHERE [flowname] = @FlowName
--if maxRunning is 0, then you can pass; no limitation here
IF( @maxRunning = 0 )
BEGIN
    SET @return = 1
    UPDATE torchestrationcontroller WITH(ROWLOCK)
    SET [status] = 'R'
    WHERE [orchestrationid] = @OrchestrationID
END
ELSE
-- BEGIN
RETRY: -- Label RETRY
BEGIN TRY
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION T1
    --else: check how many orchestrations are now running
    --start lock table
    DECLARE @currentRunning INT
    SELECT @currentRunning = Count(*)
    FROM torchestrationcontroller WITH (TABLOCKX) --Use an exclusive lock that will be held until the end of the transaction on all data processed by the statement
    WHERE [flowname] = @FlowName
      AND [status] = 'R'
    --CASE
    IF( @currentRunning < @maxRunning )
    BEGIN
        -- fewer orchestrations are running than allowed
        SET @return = 1
        UPDATE torchestrationcontroller WITH(TABLOCKX)
        SET [status] = 'R'
        WHERE [orchestrationid] = @OrchestrationID
    END
    ELSE
        -- more or equal orchestrations are running than allowed
        SET @return = 0
    --end lock table
    SELECT @Return
    COMMIT TRANSACTION T1
END TRY
BEGIN CATCH
    --PRINT 'Rollback Transaction'
    ROLLBACK TRANSACTION
    IF ERROR_NUMBER() = 1205 -- Deadlock Error Number
    BEGIN
        WAITFOR DELAY '00:00:00.05' -- Wait for 50 ms
        GOTO RETRY -- Go to Label RETRY
    END
END CATCH
I was able to fix it by setting the isolation level to serializable plus a transaction on the three stored procedures (two of which I didn't mention because I didn't think they were relevant to this problem). Apparently it was the combination of multiple stored procs interfering with each other.
If you know a better way to fix it that would give me better performance, please let me know!
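An alternative that often performs better than TABLOCKX plus SERIALIZABLE is an application lock: it serializes only the check-and-set gate, and only per flow name, instead of locking the whole table. A sketch, reusing the poster's table and variable names:

BEGIN TRANSACTION;

-- Take an exclusive app lock keyed on the flow name, so different
-- flows do not block each other; released automatically at COMMIT
DECLARE @lockResult int;
EXEC @lockResult = sp_getapplock
     @Resource = @FlowName,
     @LockMode = 'Exclusive',
     @LockOwner = 'Transaction',
     @LockTimeout = 5000;   -- ms; a negative return value means not granted

IF @lockResult >= 0
BEGIN
    DECLARE @currentRunning INT;
    SELECT @currentRunning = COUNT(*)
    FROM torchestrationcontroller
    WHERE [flowname] = @FlowName AND [status] = 'R';

    IF @currentRunning < @maxRunning
        UPDATE torchestrationcontroller
        SET [status] = 'R'
        WHERE [orchestrationid] = @OrchestrationID;
END

COMMIT TRANSACTION;  -- releases the app lock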

With(XLock,RowLock) does not lock row exclusively

I have a table that has a column named "Is_Locked".
I open two SSMS windows and in each one create a new query with this script:
BEGIN TRAN Nima1
BEGIN TRY
    DECLARE @a INT
    SELECT @a = COUNT(*)
    FROM dbo.Siahe WITH(XLOCK, ROWLOCK)
    WHERE TedadDaryaii = 8
      AND Is_Locked = 1
    IF @a = 0
    BEGIN
        UPDATE Siahe
        SET Is_Locked = 1
        WHERE ShMarja = 9999
    END
    COMMIT TRAN Nima1
END TRY
BEGIN CATCH
    ROLLBACK TRAN Nima1
END CATCH
But if all the Is_Locked fields are false, then both queries execute and the SELECT statement does not lock the rows exclusively.
Why?
If @a = 0, then there were 0 matching rows from your first query, and all 0 of those rows are exclusively locked. I'm a bit confused by the different WHERE conditions in your SELECT and UPDATE statements. If the same WHERE conditions were used in both, I'd suggest something like:
UPDATE Siahe
SET Is_Locked = 1
WHERE
    Is_Locked = 0 and
    /* Other Conditions */
IF @@ROWCOUNT = 1
BEGIN
    PRINT 'We got the lock'
END
ELSE
BEGIN
    PRINT 'Someone else has the lock'
END
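The underlying reason XLOCK appears to do nothing here is that when zero rows match, there are no rows to take an exclusive lock on, so a second session sees the same empty result. If you need the check-then-update pattern anyway, locking the range (including its emptiness) requires serializable-style key-range locks. With the poster's table, that might look like this (a sketch; range locking works best with an index on the filtered columns, otherwise a much larger range is locked):

BEGIN TRAN;

DECLARE @a INT;
-- HOLDLOCK applies serializable semantics to this statement, so even an
-- empty range is range-locked; UPDLOCK makes two such readers conflict,
-- so the second session blocks until the first commits
SELECT @a = COUNT(*)
FROM dbo.Siahe WITH (UPDLOCK, HOLDLOCK)
WHERE TedadDaryaii = 8 AND Is_Locked = 1;

IF @a = 0
BEGIN
    UPDATE dbo.Siahe
    SET Is_Locked = 1
    WHERE ShMarja = 9999;
END

COMMIT TRAN;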