Lock table while inserting - SQL

I have a large table that gets populated from a view. This is done because the view takes a long time to run, and it is easier to have the data readily available in a table. A procedure is run every so often to update the table:
TRUNCATE TABLE LargeTable
INSERT INTO LargeTable
SELECT *
FROM viewLargeView
WITH (HOLDLOCK)
I would like to lock this table while inserting, so that if someone tries to select a record they will not get an empty result just after the truncate. The lock I am using seems to lock the view rather than the table.
Is there a better way to approach this problem?

You are correct: as written, the locking hint affects the source view, not the target table.
To make it so that nobody can read from the table while you're inserting:
insert into LargeTable with (tablockx)
...
You don't have to do anything extra to hide the new rows until the insert completes: an insert always runs in a transaction, and no other process can read its uncommitted rows unless they explicitly specify with (nolock) or set transaction isolation level read uncommitted. As far as I know, there is no way to protect against that.
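For illustration, a minimal two-session sketch of that behavior, using the question's object names (a sketch, not a drop-in script):
-- Session 1: exclusive table lock held until commit
BEGIN TRANSACTION
TRUNCATE TABLE LargeTable
INSERT INTO LargeTable WITH (TABLOCKX)
SELECT * FROM viewLargeView
-- ... lock is held up to here ...
COMMIT TRANSACTION
-- Session 2: blocked until session 1 commits
SELECT COUNT(*) FROM LargeTable
-- Session 2 with a dirty read: not blocked, may see a half-loaded table
SELECT COUNT(*) FROM LargeTable WITH (NOLOCK)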

BEGIN TRY
BEGIN TRANSACTION t_Transaction
TRUNCATE TABLE LargeTable
INSERT INTO LargeTable WITH (TABLOCKX)
SELECT *
FROM viewLargeView
COMMIT TRANSACTION t_Transaction
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION t_Transaction
END CATCH


How to skip rows from getting selected in SQL

How to skip rows from getting selected which are holding locks?
Begin tran
Select *
From MyTable with (holdlock)
Where id = 2
In the second session, when the query is executed, the row with id value 2 should be skipped in the result.
The (holdlock) hint holds the lock until the end of the transaction - but it is a shared lock, i.e. other readers are not blocked by that lock ...
If you really must do this, you need to establish (and hold on to) an exclusive lock
Begin tran
Select *
From MyTable with (updlock, holdlock)
Where id = 2
and use the WITH (READPAST) hint in the second session so that it is not blocked by that lock.
PS: updated to use updlock, based on @charlieface's recommendation - thanks!
I guess you can try the READPAST hint:
https://www.sqlshack.com/explore-the-sql-query-table-hint-readpast/
READPAST will skip rows locked by other sessions. But you also need another lock hint, otherwise the SELECT will not keep its own lock.
XLOCK, HOLDLOCK (aka SERIALIZABLE) is an option, but can be restrictive.
You can use UPDLOCK to keep a lock until the end of the transaction. UPDLOCK generates a U lock, which blocks other U or X (exclusive writer) locks, but still allows S reader locks. So the data can still be read by a read-only process, but another process executing this same code would still be blocked.
Begin tran
Select *
From MyTable WITH (UPDLOCK, READPAST)
Where id = 2
-- etc
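For clarity, a sketch of how two sessions interact under this pattern (MyTable and id are from the question's example):
-- Session 1: claims the row; the U lock is held until commit
Begin tran
Select * From MyTable WITH (UPDLOCK, READPAST) Where id = 2
-- Session 2: the same query skips the claimed row instead of blocking,
-- returning an empty result while session 1 holds its lock
Begin tran
Select * From MyTable WITH (UPDLOCK, READPAST) Where id = 2
-- ... later, each session releases its lock with: Commit tran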
I achieved this with the following code in the second session:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
Select * from MyTable WITH (READPAST)
Thank you all for your help.

How to lock the transaction until a single query completes its execution, to avoid deadlock errors

I have the following code, about which I have a doubt:
Update statement on Table 1
Update statement on Table 2
Select statement which includes both tables
The above code returns the result to the application; it is the "get all" function for the application.
I am getting deadlock errors in the application frequently.
I have hundreds of users fetching from the same tables at the same time.
So I have to make sure that the select statement will not fire until the update statements complete - or find a way to lock around the update statements.
One more doubt: suppose I am updating one row and another user tries to select from that table - will they get a deadlock?
(The user was trying to select another row, one not touched by the update statement.)
What will happen in this scenario?
Please help me.
Thanks in advance
You should use a transaction:
BEGIN TRANSACTION [Tran1]
BEGIN TRY
Update Statement on Table 1
Update Statement on Table 2
Select statement which includes both tables
COMMIT TRANSACTION [Tran1]
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION [Tran1]
END CATCH
GO
If you want nobody to update/delete the row, I would go with UPDLOCK on the SELECT statement. This is an indication that you will update the same row shortly, e.g.
select @Bar = Bar from oFoo WITH (UPDLOCK) where Foo = @Foo;
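A sketch of the full pattern that hint implies, with a hypothetical variable declaration and follow-up update (the column types are assumptions):
DECLARE @Foo int = 1 -- hypothetical key value
DECLARE @Bar int
BEGIN TRANSACTION
-- claim the row; other writers using the same pattern wait here
SELECT @Bar = Bar FROM oFoo WITH (UPDLOCK) WHERE Foo = @Foo
-- plain readers are still allowed in; a second writer is not
UPDATE oFoo SET Bar = @Bar + 1 WHERE Foo = @Foo
COMMIT TRANSACTION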

SQL identity insert interruption

This might be an obvious question.
I have a SQL stored procedure which runs a cursor.
The cursor will insert a certain number of records into a table.
The problem is that if the cursor runs, say, 1000 insert statements, those records' identity values must follow on from each other. If someone else runs an insert while the cursor is running, it will take up an identity value between two of the cursor's records.
Can anyone give me some tips to ensure that all the cursor's inserts follow each other?
Please note that my cursor might do 50,000 inserts, which means it will take a while to complete, so the table must not receive other inserts while this is running.
You can try this:
INSERT INTO YourTable WITH (TABLOCK)
...
BEGIN TRANSACTION t_Transaction
BEGIN TRY
INSERT INTO YourTable WITH (TABLOCKX)
SELECT *
FROM tablx
COMMIT TRANSACTION t_Transaction
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION t_Transaction
END CATCH
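Putting the two suggestions together, a minimal sketch (object names are placeholders) that keeps the generated identity values consecutive by holding an exclusive table lock for the whole batch:
BEGIN TRANSACTION
-- TABLOCKX blocks all other writers until COMMIT, so no foreign
-- insert can claim an identity value between these rows
INSERT INTO YourTable WITH (TABLOCKX) (SomeColumn)
SELECT SomeColumn
FROM SourceTable
COMMIT TRANSACTION
Note that a lock only prevents interleaving; a rolled-back insert still consumes identity values, so gaps from failed statements remain possible.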

SQL Server: a simple query takes forever to run due to transaction isolation level

I've come across a problem while learning transaction isolation levels in SQL server.
The problem is that after I run this code (and it finishes without errors):
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN T1;
SELECT (...)
WAITFOR DELAY '00:00:05'
SELECT (...)
WAITFOR DELAY '00:00:03'
COMMIT TRAN T1;
I want to run this query:
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION T2;
INSERT (...)
INSERT (...)
COMMIT TRANSACTION T2;
But it just says "Executing query", and does nothing.
I think it's because the lock on the tables somehow continues after the first transaction has finished. Can someone help?
Of course the selects and the inserts refer to the same tables.
Either the first tran is still open (close the window to make sure it is not), or some other tran is open (exec sp_who2). You can't suppress X-locks taken by DML because SQL Server needs those locks during rollback.
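For a quick look at who is blocking whom, a sketch against the standard DMV (nothing here is specific to the question's tables):
SELECT session_id, blocking_session_id, wait_type, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0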
@usr offers good possibilities.
A related specific possibility is that you selected only part of the first transaction to execute while tinkering - i.e. executed BEGIN TRAN T1 and never executed COMMIT TRAN T1. It happens - part of Murphy's Law I think. Try executing just COMMIT TRAN T1, then re-trying the second snippet.
The following worked just fine for me on repeated, complete executions in a single session:
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN T1;
SELECT * from tbl_A
WAITFOR DELAY '00:00:05'
SELECT * from tbl_B
WAITFOR DELAY '00:00:03'
COMMIT TRAN T1;
set implicit_transactions off;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION T2;
INSERT tbl_A (ModifiedDate) values (GETDATE())
INSERT tbl_B (ModifiedDate) values (GETDATE())
INSERT tbl_A (ModifiedDate) select top 1 ModifiedDate from tbl_A
INSERT tbl_B (ModifiedDate) select top 1 ModifiedDate from tbl_B
COMMIT TRANSACTION T2;
1 - SET IMPLICIT_TRANSACTIONS is usually OFF unless you SET ANSI_DEFAULTS to ON, in which case it will be ON. Thus, you can remove this extra statement if it is not needed.
2 - I agree with Aaron. Read uncommitted (nolock) should only be used with SELECT statements. However, it can lead to invalid results: it is prone to missing data, reading data twice, or scan errors.
Read Committed Snapshot Isolation (RCSI) is a better option, at the expense of tempdb (version store) space. It allows your reports (readers) not to be blocked by transactions (writers); see the sketch after this list.
3 - SET TRANSACTION ISOLATION LEVEL SERIALIZABLE uses the most locks and therefore increases the chances of blocking.
Why use this low concurrency isolation level with two INSERT statements?
I can understand using this level to UPDATE multiple tables - for instance, a bank transaction that debits one row and credits another across two tables, where no one has access to the records until the transaction is complete (sketched below).
In short, I would use READ COMMITTED isolation level for the insert statements. More than likely, the data being inserted is different.
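Two sketches to make points 2 and 3 concrete. RCSI is enabled per database (the database name is a placeholder; the option needs exclusive access, hence the termination clause):
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE
GO
And a hypothetical debit/credit transfer of the kind where a strict isolation level earns its keep (table and column names are invented):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
UPDATE CheckingAccounts SET Balance = Balance - 100 WHERE AccountId = 1
UPDATE SavingsAccounts SET Balance = Balance + 100 WHERE AccountId = 1
COMMIT TRANSACTION
GO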
However, the whole picture is not here.
There is some type of blocking that is occurring. You need to find the root cause.
Here is a code snippet to look at locks and objects that are locked.
--
-- Locked object details
--
-- Old school technique
EXEC sp_lock
GO
-- Lock details
SELECT
resource_type, resource_associated_entity_id,
request_status, request_mode,request_session_id,
resource_description
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('AdventureWorks2012')
GO
-- Page/Key details
SELECT object_name(object_id) as object_nm, *
FROM sys.partitions
WHERE hobt_id = 72057594047037440
GO
-- Object details
SELECT object_name(1266103551)
GO
If you still need help, please identify the two blocking transactions and the locks. Please post this information.

Which SQL Read TRANSACTION ISOLATION LEVEL do I want for long running insert?

I have a long running insert transaction that inserts data into several related tables.
When this insert is running, I cannot perform a select * from MainTable. The select just spins its wheels until the insert is done.
I will be performing several of these inserts at the same/overlapping time. To check that the information is not inserted twice, I query the MainTable first to see if an entry is there and that its processed bit is not set.
During the insert transaction, it flips the MainTable processed bit for that row.
So I need to be able to read the table and also be able to tell if the specific row is currently being updated.
Any ideas on how to set this up in Microsoft SQL 2005? I am looking through the SET TRANSACTION ISOLATION LEVEL documentation.
Thank you,
Keith
EDIT: I do not think that the same insert batch will happen at the same time. These are binary files that are being processed and their data inserted into the database. I check that the file has not been processed before I parse and insert the data. When I do the check, if the file has not been seen before I do a quick insert into the MainTable with the processed bit set false.
Is there a way to lock the row being updated instead of the entire table?
You may want to rethink your process before you use READ UNCOMMITTED. There are many good reasons for isolated transactions. If you use READ UNCOMMITTED you may still get duplicates, because there is a chance that both inserts will check for an existing row at the same time and, both finding none, create duplicates. Try breaking the work into smaller batches or issuing periodic COMMITs.
EDIT
You can wrap the MainTable update in a short transaction that frees up that table more quickly, but you may still get conflicts on the other tables, i.e.:
DECLARE @ProcessedBit bit

BEGIN TRANSACTION
SELECT @ProcessedBit = ProcessedBit FROM MainTable WHERE ID = XXX
IF @ProcessedBit = 0
UPDATE MainTable SET ProcessedBit = 1 WHERE ID = XXX
COMMIT TRANSACTION

IF @ProcessedBit = 0
BEGIN
BEGIN TRANSACTION
-- start long running process
...
COMMIT TRANSACTION
END
EDIT to enable error recovery
DECLARE @ProcessedStatus varchar(20)

BEGIN TRANSACTION
SELECT @ProcessedStatus = ProcessedStatus FROM MainTable WHERE ID = XXX
IF @ProcessedStatus = 'Not Processed'
UPDATE MainTable SET ProcessedStatus = 'Processing' WHERE ID = XXX
COMMIT TRANSACTION

IF @ProcessedStatus = 'Not Processed'
BEGIN
BEGIN TRANSACTION
-- start long running process
...
IF @@ERROR = 0 -- or use TRY/CATCH
BEGIN
UPDATE MainTable SET ProcessedStatus = 'Processed' WHERE ID = XXX
COMMIT TRANSACTION
END
ELSE
ROLLBACK TRANSACTION
END
The only isolation level that allows one transaction to read changes executed by another transaction in progress (before it commits) is:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
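As a minimal sketch against the question's MainTable (the ID value is hypothetical), a second session could peek at in-flight rows like this:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

-- sees the ProcessedBit flip made by the long-running insert
-- even before that transaction commits (a dirty read)
SELECT ID, ProcessedBit
FROM MainTable
WHERE ID = 42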