Deadlock in SQL Server 2008

I have two stored procedures, both insert rows into the same table.
One stored procedure is called at a regular time interval and the other is called by a user event. Sometimes both stored procedures run at the same time, and that is when the deadlock occurs.
How can I solve this problem?

Lock at the beginning of the SP and unlock at the end.
http://msdn.microsoft.com/en-us/library/ms187749.aspx
and
http://msdn.microsoft.com/en-us/library/ms190345.aspx

You can lock the table at the beginning of both of your sprocs. This way there will be no deadlocks, because data modification will have to wait until the other sproc finishes. See the following command:
select 1 from theTable with (tablock, holdlock) where 1=0;
It also needs to be done inside a transaction. The table becomes available for modification again when the transaction finishes.
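A minimal sketch of what both procedures could look like, using an exclusive table lock (tablockx) so the second caller genuinely waits; the procedure, table, and column names are placeholders, not from the original question:

create or alter procedure dbo.InsertFromTimer
as
begin
    set nocount on;
    begin transaction;
        -- take an exclusive table lock and hold it until the transaction ends
        select 1 from theTable with (tablockx, holdlock) where 1=0;
        -- the insert the procedure was already doing
        insert into theTable (SomeColumn) values ('scheduled run');
    commit transaction;   -- the lock is released here
end

The event-driven procedure would start with the same select, so whichever call arrives second simply waits instead of deadlocking.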

You might also consider detecting the deadlock and retrying: have each procedure back off for a short, random amount of time before trying again.
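A rough sketch of that retry pattern (the retry count, delay range, and table name are arbitrary placeholders):

create or alter procedure dbo.InsertWithRetry
as
begin
    set nocount on;
    declare @retries int = 0;
    while @retries < 3
    begin
        begin try
            begin transaction;
                insert into theTable (SomeColumn) values ('user event');
            commit transaction;
            return;   -- success, stop retrying
        end try
        begin catch
            if @@trancount > 0 rollback transaction;
            if error_number() <> 1205   -- 1205 = chosen as deadlock victim
            begin
                declare @msg nvarchar(2048) = error_message();
                raiserror(@msg, 16, 1);
                return;
            end
            set @retries += 1;
            -- back off for a short, random time (0-4 seconds here)
            declare @delay char(8) = '00:00:0' + cast(abs(checksum(newid())) % 5 as char(1));
            waitfor delay @delay;
        end catch
    end
end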

Related

Is it possible to release Transaction-log locks from within an active (massive data movement) stored procedure?

edited terminology for accuracy:
We have large, daily flows of data within our data-mart. Some of the largest, done with Stored procedures managed by SSIS, take several hours. These long-running stored procedures are preventing the transaction-log from clearing (which compounds the issue because we have numerous SP's running at once, which are then all writing to the T-log with no truncate). Eventually this breaks our database and we're forced to recover from the morning snapshot.
We have explored doing "sub"-commits within the SP, but as I understand it you can't fully release the transaction log within an active stored procedure, because it is itself a transaction.
Without refactoring our large SP's to run in batches, or something to that effect, is it possible to commit to the transaction log periodically within an active SP, so that we release the lock on the transaction log?
edit / extension:
Perhaps I was wrong above:
Will committing intermittently within the SP allow the transaction-log to truncate?
If the client starts a transaction, it's not recommended to COMMIT that transaction inside a stored procedure. It's not allowed to exit the stored procedure with a different @@TRANCOUNT than it was entered with.
The following pattern is technically allowed, although I have never seen it used in the real world:
use tempdb
if @@trancount > 0 rollback
go
drop table if exists T
create table T(id int identity)
go
create or alter procedure tranTest
as
begin
    insert into T default values
    -- commit the caller's transaction, then immediately start a new one
    -- so @@trancount is the same on exit as it was on entry
    commit transaction
    begin transaction
end
go
begin transaction
exec tranTest
select * from T
rollback
go 5
It would be deeply confusing for client code to rollback a transaction and not have the stored procedure's work rolled back.
If the client doesn't start a transaction, you can have multiple transactions inside a stored procedure, but the smallest granularity for a transaction is a single DML statement. So each INSERT, UPDATE, DELETE, or MERGE statement runs in its own transaction.
The practical solutions to this are, in descending order of goodness:
1) Increase the storage available to the log file to accommodate the transactions.
2) Refactor the ETL to use shorter transactions, possibly readying data in staging tables and then loading or switching it in with a single, final transaction
3) Refactor the ETL to run in smaller batches (a rough sketch follows below).
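As a rough illustration of option 3, each batch becomes its own short transaction, so the log can be truncated (SIMPLE recovery) or backed up (FULL recovery) while the job runs; the table, column, and batch size below are placeholders:

create or alter procedure dbo.PurgeStagingInBatches
as
begin
    set nocount on;
    while 1 = 1
    begin
        -- each DELETE is a separate, short transaction instead of one
        -- multi-hour transaction holding up log truncation
        delete top (50000)
        from dbo.StagingTable
        where LoadDate < dateadd(day, -1, getdate());

        if @@rowcount = 0 break;   -- nothing left to process
    end
end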

Oracle 10g - Lock table where two procedures might update this same table synchronously

I want to lock a table in Oracle 10g so that, for example, procedure A has to wait until procedure B has finished its update. I read about the LOCK TABLE command, but I am not sure whether the other procedure actually waits for the lock to be acquired.
It's also possible that another thread calls the same stored procedure B during the update process; since the stored procedure runs in a single thread, I guess this would also be a problem?
You wouldn't normally want to lock a whole table in Oracle, though of course locking in general is an important issue. By default, if two sessions try to update the same row, the second one is blocked and has to wait for the first to commit or roll back its change. You can use a SELECT with a FOR UPDATE clause to lock a row without updating it.
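For example (the table and key here are placeholders), a second session issuing the same statement blocks until the first session commits or rolls back:

select * from accounts where account_id = 42 for update;
-- add NOWAIT (or WAIT 5) after FOR UPDATE to raise an error
-- instead of waiting indefinitely for the lock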
Instead of using a single regular table shared by all sessions, you could use a Global Temporary Table: then each session has its own copy.
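A sketch of that approach (names are placeholders); each session automatically sees only its own rows:

create global temporary table session_work (
    id      number,
    payload varchar2(100)
) on commit delete rows;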

Transaction starting in stored procedure 1 and ending in SP3

I have several stored procedures in a job. In one of them I begin a transaction to delete some rows, and if the row count is greater than 10 I roll back. If it is not, though, I don't want to commit straight away, because two stored procedures later I do something similar, and if the count is greater than 10 in that instance I want everything rolled back all the way to where I started the transaction (two stored procedures ago).
Is it possible to start a transaction in a stored procedure and have multiple rollbacks and a commit right at the end somewhere, or do I have to put all the code into one stored procedure to do that?
This sounds incredibly prone to failure.
Regardless, you will need to start the transaction in your code and then, while using the same connection, execute the procs. The code then commits or rolls back once all the procs have executed.
Assuming this is C#, see the following question for answers: Call multiple SQL Server stored procedures in a transaction
You can write several stored procedures and then execute them as nested calls.
You can declare variables to capture each result and use an IF statement to either commit, or raise an error for the CATCH block to roll back the transaction.
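A sketch of that shape in T-SQL, with placeholder procedure names; each child proc raises an error (rather than rolling back itself) when its count check fails, and the single rollback or commit happens at this outer level:

begin try
    begin transaction;

    exec dbo.Step1_DeleteRows;       -- raises an error if its count check exceeds 10
    exec dbo.Step2_DoOtherWork;
    exec dbo.Step3_DeleteMoreRows;   -- same kind of check two procedures later

    commit transaction;              -- single commit right at the end
end try
begin catch
    if @@trancount > 0 rollback transaction;   -- undoes everything back to BEGIN TRANSACTION
    -- log or re-raise the error as needed
end catch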

C# SQL transaction

I am using a C# class that is calling a SQL stored procedure in a serializable transaction.
So if something goes wrong in the stored procedure, everything is rolled back.
I have one statement in the SQL stored procedure that should be always executed (even if the stored procedure fails at some point and a rollback occurs). The statement is an update of a record.
I cannot change the C# library, so I need to do this in my stored procedure.
Is there some way I can execute that one statement outside the transaction?
You could perhaps use SAVE TRANSACTION. It is not supported in distributed transactions, and your statement must be executed first, so it might not be what you are looking for.
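A sketch of what that would look like (the procedure, table, and savepoint names are placeholders); note the caveat above: the pre-savepoint statement still only persists if the outer, client-owned transaction eventually commits:

create or alter procedure dbo.DoWorkWithAudit
as
begin
    set nocount on;
    -- this runs inside the serializable transaction opened by the C# class

    update dbo.AuditRecord set LastAttempt = getdate() where Id = 1;   -- the must-run statement, executed first

    save transaction AfterAudit;   -- savepoint taken after the must-keep statement

    begin try
        -- ... the rest of the work, which may fail ...
        insert into dbo.SomeTable (Col) values ('payload');
    end try
    begin catch
        -- roll back only to the savepoint, keeping the update above;
        -- if xact_state() is -1 the transaction is doomed and only a
        -- full rollback is possible
        if xact_state() = 1 rollback transaction AfterAudit;
    end catch
end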
I have found the solution.
I didn't realize that SQL Server knew it was being called in a transactional manner by the C# class.
The update statement that should always be executed is the first step (and also last step) in the procedure. Let me clarify:
I have an IF statement. If the condition is true, the update should occur no matter what. If it is false, some transactional logic should be executed.
Now, the C# class expects a result set from the stored proc. If the proc doesn't return one (as happens in the update branch), it rolls back the transaction.
So by just adding the following lines right after the update statement, the update occurs :)
IF @@TRANCOUNT > 0
BEGIN
COMMIT TRANSACTION
END
Your solution does not sound like a good one. For example, if your stored procedure ends up being part of a bigger transaction, it will commit all changes made before it. Also, I believe no one would guess that your proc has such behaviour without first seeing the code.
The need to always execute some part of a proc sounds like a need for auditing, so maybe you should use a trace, Extended Events, or SQL Server Audit instead.
If you really need what you say you need, you can use the method described here: How to create an autonomous transaction in SQL Server 2008

Stored Procedure passing control back too quickily - VB6

I have a stored procedure that updates records in a very large table (over 100 million records).
The steps are as follows:
1) Store record IDs to be updated in a recordset (not all records will be updated - only about 20000)
2) Loop through the recordset and call the stored procedure for each record ID in the recordset
3) Each time the stored procedure has finished (for each record in the recordset mentioned in step 1), update a flag in a table to say that the update completed.
I am finding some strange behaviour. It appears that the stored procedure is passing control back to VB6 before it has completed its updates and is continuing processing the next record. The stored procedure is then timing out later on (on another record ID). Therefore there are flags that say updated (step 3), even though the stored procedure has not run (because it timed out). Is this normal behaviour i.e. for the stored procedure to pass control back to VB6 before it has finished the work?
I have Googled this and I have discovered that it could be because of the way the stored procedure is optimised by SQL Server. I would expect control only to be passed back to VB6 after the updates have completed. Is this not the case?
Please note that I realise there may be better ways of approaching this. My question specifically relates to SQL Server passing control back to VB6 before it has finished the work (update).
The following article proved to be the solution to this problem: http://weblogs.sqlteam.com/dang/archive/2007/10/20/Use-Caution-with-Explicit-Transactions-in-Stored-Procedures.aspx. It appears that the following behaviour was happening:
1) Record 1. Run stored procedure and create transaction. A timeout occurs on the SQL Command object.
2) Record 2. Run stored procedure successfully. Return control to VB6 to update the flag in the database.
3) Record 3. Run stored procedure successfully. Return control to VB6 to update the flag in the database.
4) Record 4. Run stored procedure successfully. Return control to VB6 to update the flag in the database.
5) Program ends. The open transaction rolls back (it now encompasses records 1-4). Therefore the work for records 1-4 is rolled back.
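The pattern recommended in that article, as I understand it, boils down to SET XACT_ABORT ON plus TRY/CATCH, so that a client timeout (attention event) or run-time error rolls back the procedure's transaction instead of leaving it open on the pooled connection; a sketch with placeholder names:

create or alter procedure dbo.UpdateBigTable
    @RecordId int
as
begin
    set nocount on;
    set xact_abort on;   -- a client timeout or error now rolls the
                         -- transaction back rather than leaving it open
    begin try
        begin transaction;
            update dbo.BigTable set Processed = 1 where Id = @RecordId;
        commit transaction;
    end try
    begin catch
        if @@trancount > 0 rollback transaction;
        declare @msg nvarchar(2048) = error_message();
        raiserror(@msg, 16, 1);   -- re-raise so the VB6 caller sees the failure
    end catch
end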
Can you run the code in SQL Server Management Studio, see what happens, and report back? If so, I will update this answer, as that will help us understand whether it's the code/connection or SQL Server itself.
Other things to investigate, given we don't know what cases you have tested for:
Use the same code path in your VB application and change only the SQL in the stored procedure to something very simple that has the same signature as far as what it's doing (i.e. basic reading if there is reading, basic deleting if there is deleting, and the same for updating and adding) to see what happens.
Also, some other thoughts...
If you are using MSSQL, it can be as simple as someone leaving a query window open and tying up the database. This is easily tested; I've had the same trouble before. I've run stored procedures with no timeout that would normally run immediately, but they sat overnight and didn't run, only for me to realize another person had left their query window open. Close their window and poof, it finally runs. Check this out; it could be a table lock, whether it's being done by the application or by another user querying the DB. Also check to make sure your application is closing its connections to the DB each time they're used.
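One quick way to check for that kind of blocking while the call appears hung (sp_who2 gives a similar picture):

-- sessions that are currently blocked, and who is blocking them
select r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text as running_sql
from sys.dm_exec_requests as r
cross apply sys.dm_exec_sql_text(r.sql_handle) as t
where r.blocking_session_id <> 0;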