This might be an obvious question.
I have a SQL stored procedure which runs a cursor.
The cursor will insert a certain number of records into a table.
The problem is that if the cursor runs, say, 1000 insert statements, the identity values of those records must follow on from each other. If someone else runs an insert while the cursor is running, their record will take an identity value between two of the records the cursor inserts.
Can anyone give me some tips on how to ensure that all the cursor's inserts follow each other?
Please note that my cursor might do 50,000 inserts, which means it will take a while to complete, so the table must not be written to by anyone else while the inserts are running.
You can try this:
INSERT INTO YourTable WITH (TABLOCK)
...
...
...
BEGIN TRANSACTION t_Transaction

BEGIN TRY
    -- HOLDLOCK keeps the lock on the source until the transaction ends
    INSERT INTO YourTable
    SELECT *
    FROM tablx WITH (HOLDLOCK)

    COMMIT TRANSACTION t_Transaction
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION t_Transaction
END CATCH
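For the cursor scenario specifically, here is a minimal sketch (table and column names are placeholders) of holding an exclusive lock on the target table for one transaction, so no other session can take an identity value in between:

BEGIN TRANSACTION

-- TABLOCKX takes an exclusive lock on the target table and holds it
-- until the transaction ends, so the identity values assigned inside
-- this transaction stay contiguous
INSERT INTO YourTable WITH (TABLOCKX) (Col1)
VALUES ('first cursor row')

-- ... the remaining cursor inserts run here under the same lock ...

COMMIT TRANSACTION

Bear in mind that an exclusive lock held across 50,000 inserts blocks every other writer on that table for the whole duration.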
I would like to know whether a SELECT inside a transaction can block the table.
It's something like this:
CREATE PROCEDURE InsertClient (@name NVARCHAR(256))
AS
BEGIN
    DECLARE @id INT = 0;

    BEGIN TRY
        BEGIN TRAN InsertingClient

        SELECT @id = MAX(ID) + 1 FROM Clients;

        INSERT INTO Clients (Id, Name)
        VALUES (@id, @name);

        SELECT id, name
        FROM Clients;

        COMMIT TRAN InsertingClient
    END TRY
    BEGIN CATCH
        ROLLBACK TRAN InsertingClient
    END CATCH;
END
It's a dummy example, but if there are a lot of records in that table, and an API receiving a lot of requests is calling this stored procedure, could it be blocked by the initial and final SELECTs? Should I wrap only the INSERT in the BEGIN/COMMIT to avoid the blocking?
Thanks!
Based on the sample code you have provided, it is critical that the first SELECT is within the transaction, because you are manually creating an id based on the max id in the table, and without locking the table you could end up with duplicates. One assumes your actual code has locking hints (e.g. WITH (UPDLOCK, HOLDLOCK)) to ensure that.
However, your second SELECT should not be in the transaction, because all it does is make the locks acquired earlier in the transaction last for the additional duration of that SELECT, when (again based on the sample code) there is no need for it.
As an aside, there are much better ways to generate an id, such as using an identity column.
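For instance, a minimal sketch of the same table rebuilt around an IDENTITY column (the table definition here is assumed from the sample code):

CREATE TABLE Clients
(
    Id   INT IDENTITY(1, 1) PRIMARY KEY,
    Name NVARCHAR(256) NOT NULL
);

-- No SELECT MAX(ID) + 1 and no locking hints needed; the engine assigns Id
INSERT INTO Clients (Name)
VALUES (@name);

-- SCOPE_IDENTITY() returns the Id generated in the current scope
SELECT @id = SCOPE_IDENTITY();

With this, the first SELECT, and the locks it would need, disappears from the procedure entirely.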
I'm hoping someone can give me an idea of how to handle this situation. I have a stored procedure that updates various tables. Some queries require connecting to different linked servers, and sometimes those linked servers are down; I need the procedure to still run the next statements regardless. Below is an example:
--Stored Procedure
BEGIN
    INSERT INTO table1
    SELECT *
    FROM Z1;

    -- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
    INSERT INTO table1
    SELECT *
    FROM Z2;

    -- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
    INSERT INTO table1
    SELECT *
    FROM Z3;
END
You can probably do what you want with TRY/CATCH blocks:
BEGIN
    BEGIN TRY
        INSERT INTO table1 SELECT * FROM Z1;
    END TRY
    BEGIN CATCH
        -- you can do something here if you want
    END CATCH;

    -- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
    BEGIN TRY
        INSERT INTO table1 SELECT * FROM Z2;
    END TRY
    BEGIN CATCH
        -- you can do something here if you want
    END CATCH;

    -- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
    BEGIN TRY
        INSERT INTO table1 SELECT * FROM Z3;
    END TRY
    BEGIN CATCH
        -- you can do something here if you want
    END CATCH;
END;
This handles runtime errors. If you have compile time errors -- such as tables not existing or the columns not matching between the tables, then this doesn't help.
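If the failures are bind-time errors like that (for example a linked server being unreachable when the statement is compiled), one workaround, sketched here with the same placeholder names, is to push each statement into dynamic SQL so it is only compiled inside the TRY block:

BEGIN TRY
    -- The inner batch is compiled when EXEC runs, so binding errors
    -- (missing table, linked server down) surface as catchable errors
    EXEC ('INSERT INTO table1 SELECT * FROM Z1;');
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();  -- log it and fall through to the next query
END CATCH;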
If this were run from, say, SSMS as a bunch of ordinary queries, I would put batch separators (GO) between them to treat them separately. However, since this is a stored procedure, you can't do that. One way around it is to make one stored procedure per query and put all of them as steps inside a SQL Server Agent job, as sketched below. When you run the job, each step runs in order from top to bottom, even if some in the middle fail.
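A rough sketch of one such step (the job and procedure names are made up, and the job itself must already have been created with sp_add_job):

EXEC msdb.dbo.sp_add_jobstep
    @job_name          = N'Nightly load',            -- hypothetical job
    @step_name         = N'Insert from Z1',
    @subsystem         = N'TSQL',
    @command           = N'EXEC dbo.InsertFromZ1;',  -- hypothetical procedure
    @on_success_action = 3,   -- 3 = go to the next step
    @on_fail_action    = 3;   -- go to the next step even on failure

Repeating this for each insert gives one step per query, with the job carrying on past any step that fails.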
This will also work (@@ROWCOUNT is SQL Server's equivalent of Oracle's SQL%ROWCOUNT):
--Stored Procedure
BEGIN
    INSERT INTO table1
    SELECT *
    FROM Z1;

    IF @@ROWCOUNT <> 1
        INSERT INTO table1
        SELECT *
        FROM Z2;

    IF @@ROWCOUNT <> 1
        INSERT INTO table1
        SELECT *
        FROM Z3;
END
I have the following code about which I have a doubt:

Update Statement on Table 1
Update Statement on Table 2
Select Statement which includes both tables

The result of the above is returned to the application; in effect it is the get-all function for the application.
I am frequently getting deadlock errors in the application. I have hundreds of users fetching from the same table at a time.
So I have to make sure that the SELECT statement does not fire until the UPDATE statements have completed, or find out how to lock the rows during the UPDATE statements.
One more doubt: suppose I am updating one row and another user tries to select from that table; will he get a deadlock? (The user is trying to select another row, one not touched by the UPDATE statement.) What will happen in this scenario?
Please help me.
Thanks in advance
You should use a transaction:

BEGIN TRANSACTION [Tran1]

BEGIN TRY
    Update Statement on Table 1
    Update Statement on Table 2
    Select Statement which includes both tables

    COMMIT TRANSACTION [Tran1]
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION [Tran1]
END CATCH

GO
If you want nobody to update or delete the row in question, I would go with UPDLOCK on the SELECT statement. It is an indication that you will update the same row shortly, e.g.:

SELECT @Bar = Bar FROM oFoo WITH (UPDLOCK) WHERE Foo = @Foo;
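A slightly fuller sketch of how that is typically used (the UPDATE and the variable declarations here are made-up examples):

DECLARE @Foo INT = 1, @Bar INT;  -- placeholder values

BEGIN TRANSACTION;

-- UPDLOCK signals the intent to update: other plain readers can still
-- read, but any session that also requests UPDLOCK on this row waits
-- here, which avoids the classic read-then-update deadlock
SELECT @Bar = Bar FROM oFoo WITH (UPDLOCK) WHERE Foo = @Foo;

UPDATE oFoo SET Bar = @Bar + 1 WHERE Foo = @Foo;

COMMIT TRANSACTION;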
Do I need to use a transaction to provide an all-or-nothing proposition for the following insert?

INSERT INTO table1 (column1, column2)
SELECT col1, col2
FROM table2

The expected average row count from table2 is around 150, and the target database is MS SQL Server 2008 R2.
No, you don't need to. A single SQL statement already runs in a transaction by default, so there is no way you will partially insert results, or that the results will be modified by another transaction in the meantime. The fact that two tables are involved doesn't change the fact that a single SQL statement is used.
For your simple insert, a transaction is not needed.
By default SQL Server manages this for you and commits the statement once it completes.
If you explicitly want multiple insert/update statements, or a parent/child insert, treated as a single unit of work, then you use a transaction, as in:
BEGIN TRAN

DECLARE @parentId INT = 0;

insert statement ---parent
SET @parentId = @@IDENTITY  -- SCOPE_IDENTITY() is generally safer here

insert statement --child entry
VALUES (@parentId, ...)

IF @@ERROR > 0
    ROLLBACK
ELSE
    COMMIT
http://www.codeproject.com/Articles/4451/SQL-Server-Transactions-and-Error-Handling
Or you can use a TRY/CATCH block, as in C#, on the SQL Server side too:
http://msdn.microsoft.com/en-IN/library/ms175976.aspx
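A minimal TRY/CATCH version of the same parent/child pattern (the Parent and Child tables here are hypothetical):

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO Parent (Name) VALUES (N'parent row');

    -- SCOPE_IDENTITY() ignores identity values generated in other
    -- scopes (e.g. by triggers), unlike @@IDENTITY
    DECLARE @parentId INT = SCOPE_IDENTITY();

    INSERT INTO Child (ParentId, Name) VALUES (@parentId, N'child row');

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;
END CATCH;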
I have a large table that gets populated from a view. This is done because the view takes a long time to run, and it is easier to have the data readily available in a table. A procedure is run every so often to update the table:
TRUNCATE TABLE LargeTable
INSERT INTO LargeTable
SELECT *
FROM viewLargeView
WITH (HOLDLOCK)
I would like to lock this table when inserting, so that if someone tries to select a record during the load, they do not get back an empty result just because the truncate has already run. The lock I am using seems to lock the view and not the table.
Is there a better way to approach this problem?
It's true that the locking hint, where you have placed it, affects the source view rather than the target table.
To make it so that nobody can read from the table while you're inserting:
insert into LargeTable with (tablockx)
...
You don't have to do anything special to stop readers from seeing the partially loaded table before the insert completes: an insert always runs in a transaction, and no other process can read its uncommitted rows, unless they explicitly specify WITH (NOLOCK) or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. There is no way to protect against that, as far as I know.
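For example, a reader that opts out of locking can still see in-flight data:

-- NOLOCK takes no shared locks, so this can return uncommitted rows
-- from an insert that is still running
SELECT COUNT(*) FROM LargeTable WITH (NOLOCK);

Putting it together, the truncate and reload wrapped in a single transaction looks like this: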
BEGIN TRY
    BEGIN TRANSACTION t_Transaction

    TRUNCATE TABLE LargeTable

    INSERT INTO LargeTable
    SELECT *
    FROM viewLargeView WITH (HOLDLOCK)

    COMMIT TRANSACTION t_Transaction
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION t_Transaction
END CATCH