Oracle 10g - Lock table where two procedures might update the same table concurrently - sql

I want to lock a table in Oracle 10g so that, for example, procedure A has to wait until procedure B has finished updating. I read about the LOCK TABLE command, but I am not sure whether the other procedure will actually wait for the lock to be acquired.
It's also possible that another thread calls the same stored procedure B during the update process. I guess that, since a stored procedure runs in a single thread, this would also be a problem?

You wouldn't normally want to lock a whole table in Oracle, though of course locking in general is an important issue. By default, if two sessions try to update the same row, the second will be "blocked" and will have to wait for the first to commit or roll back its change. You can use a SELECT with a FOR UPDATE clause to lock a row without updating it.
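For instance, a minimal sketch of the FOR UPDATE approach (the accounts table, its columns, and the values are made up purely for illustration):

-- Session A locks the row it intends to change; any other session that tries
-- to lock or update the same row will block until A commits or rolls back.
SELECT balance FROM accounts WHERE account_id = 42 FOR UPDATE;
-- or fail immediately instead of waiting:
-- SELECT balance FROM accounts WHERE account_id = 42 FOR UPDATE NOWAIT;

UPDATE accounts SET balance = balance - 100 WHERE account_id = 42;
COMMIT;   -- releases the lock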
Instead of using a single regular table shared by all sessions, you could use a Global Temporary Table: then each session has its own copy.

Related

Revert SQL Operation Over Multiple Stored Procedures

We are working on an application where, sometimes, we need to delete (some) entries from an SQL table and replace them with a new list of rows. In our app, the logic goes:
Delete relevant_entries using delete_stored_procedure
For Each new_entry in new_entries
Insert new_entry using insert_stored_procedure
This logic works, but there is a catch. What if the DELETE works, but one of the INSERTs fails? We need to be able to revert the database back to before the rows were deleted and any new entries were inserted. Is there a way to do this and ONLY revert these things, without affecting any other unrelated operations that may have happened simultaneously? (i.e., if a row gets created in another table, we don't want to revert that).
I understand that you can wrap multiple SQL statements in a transaction, but in this case, the logic is happening over multiple stored procedure calls. Any advice?
Create another stored procedure that:
creates a transaction
calls the stored procedures that do the work
commits or rolls back the transaction
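A rough sketch of such a wrapper in T-SQL (the wrapper's name and parameter are made up; delete_stored_procedure and insert_stored_procedure are the procedures from the question; THROW assumes SQL Server 2012+, use RAISERROR on older versions):

CREATE PROCEDURE replace_entries
    @list_id int
AS
BEGIN
    SET XACT_ABORT ON;   -- make sure any error dooms the whole transaction
    BEGIN TRANSACTION;
    BEGIN TRY
        EXEC delete_stored_procedure @list_id;
        -- loop over new_entries here, calling insert_stored_procedure for each one
        EXEC insert_stored_procedure @list_id;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;   -- re-raise so the caller knows the replacement failed
    END CATCH
END

Because everything runs in one transaction, the rollback undoes only this session's delete and inserts; unrelated work from other sessions is unaffected.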

How to remove deadlocks in SQL Server 2005?

First of all, I would like to know the actual root cause of deadlocks in SQL Server 2005. Is it that two processes access the same row in a table?
Anyways, consider two tables _Table_Now_ and _Table_History_, both having the same structure.
Suppose there is one column called NAME.
So when one process tries to UPDATE a record with NAME='BLUE' in _Table_Now_, it first needs to put the present row with NAME='BLUE' into _Table_History_, then update _Table_Now_, and also delete the previously present row from _Table_History_.
The deadlock occurs while deleting, and I do not understand why.
Please guide me!
A deadlock basically means that process A is dependent on process B and process B is dependent on process A, so A will only start/continue when B finishes and B will only start/continue when A finishes.
What you may be experiencing is a table (or row) lock: SQL Server locks the row before updating the table to make sure no other process tries to access that row while it is doing the update.
Can you be more specific about how you are doing the insert/update/delete? You shouldn't have deadlocks in this scenario.
FYI, don't use WITH (NOLOCK). It will indeed prevent locking, but it does so by telling SQL Server to read uncommitted data, and it can end up causing data inconsistencies.
Deadlock occurs when Process A is waiting for Process B to release resources and Process B is waiting for Process A to release resources.
If I understand the order of Updates correctly, it is this:
1. Read a row in Table_Now
2. Update a row in Table_History
3. Update a row in Table_Now
4. Delete a row in Table_History.
This could be a risky order if you are using transactions or locks incorrectly.
To avoid deadlocks, for each process you should execute:
1. Begin Transaction (Preferably table lock)
2. Perform all the DB operations
3. Commit the transaction (or roll it back if any problem occurs during the update)
This will ensure that each process locks both tables, performs all the operations, and then exits.
If you are already using transactions, what scope and level are you using? If not, introduce transactions. It should solve the problem.
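A hedged sketch of that pattern in T-SQL (the exclusive table-lock hints are one blunt way to follow the "preferably table lock" advice; Table_Now and Table_History are the tables from the question):

BEGIN TRANSACTION;

-- take exclusive locks on both tables up front, in the same order in every
-- process, so no two processes can each hold one table and wait for the other
SELECT TOP (0) 1 FROM Table_Now     WITH (TABLOCKX, HOLDLOCK);
SELECT TOP (0) 1 FROM Table_History WITH (TABLOCKX, HOLDLOCK);

-- 1. copy the current NAME='BLUE' row into Table_History
-- 2. update the row in Table_Now
-- 3. delete the previously archived row from Table_History

COMMIT TRANSACTION;   -- or ROLLBACK if any step fails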

Deadlock in SQL Server 2008

I have two stored procedures; both insert rows into the same table.
One stored procedure is called at a regular time interval and the other is called by a user event. Sometimes both stored procedures are called at the same time, and a deadlock occurs.
How can I solve this problem?
Lock at the beginning of the SP and unlock at the end.
http://msdn.microsoft.com/en-us/library/ms187749.aspx
and
http://msdn.microsoft.com/en-us/library/ms190345.aspx
You can lock the table at the beginning of both of your sprocs. This way there will be no deadlocks, because data modification will have to wait until the other sproc finishes. See the following command:
select 1 from theTable with (tablock, holdlock) where 1=0;
It also needs to be done inside a transaction. The table will be editable when the transaction finishes.
You might also consider detecting the condition and retrying. Have each procedure back off for a short random amount of time.
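A sketch of the retry idea in T-SQL, assuming the insert is already wrapped in a procedure (dbo.do_the_insert is hypothetical; error 1205 is the deadlock-victim error; the attempt count and delay are arbitrary):

DECLARE @attempt int = 0, @done bit = 0;
WHILE @attempt < 3 AND @done = 0
BEGIN
    BEGIN TRY
        EXEC dbo.do_the_insert;        -- hypothetical proc doing the real work
        SET @done = 1;                 -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205       -- this session was chosen as the deadlock victim
        BEGIN
            SET @attempt = @attempt + 1;
            WAITFOR DELAY '00:00:01';  -- short back-off; could be randomized
        END
        ELSE
            RETURN;                    -- some other error: handle or re-raise as usual
    END CATCH
END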

Local Temporary table in Oracle 10 (for the scope of Stored Procedure)

I am new to Oracle. I need to process a large amount of data in a stored proc. I am considering using temporary tables. I am using connection pooling and the application is multi-threaded.
Is there a way to create temporary tables so that a different table instance is created for every call to the stored procedure, and data from multiple stored procedure calls does not get mixed up?
You say you are new to Oracle. I'm guessing you are used to SQL Server, where it is quite common to use temporary tables. Oracle works differently so it is less common, because it is less necessary.
Bear in mind that using a temporary table imposes the following overheads:
read data to populate temporary table
write temporary table data to file
read data from temporary table as your process starts
Most of that activity is useless in terms of helping you get stuff done. A better idea is to see if you can do everything in a single action, preferably pure SQL.
Incidentally, your mention of connection pooling raises another issue. A process munging large amounts of data is not a good candidate for running in an OLTP mode. You really should consider initiating a background (i.e. asynchronous) process, probably a database job, to run your stored procedure. This is especially true if you want to run this job on a regular basis, because we can use DBMS_SCHEDULER to automate the management of such things.
If you're using transaction-level (rather than session-level) temporary tables, then this may already do what you want... so long as each call only contains a single transaction? (You don't quite provide enough detail to make it clear whether this is the case or not.)
So, to be clear, so long as each call only contains a single transaction, then it won't matter that you're using a connection pool since the data will be cleared out of the temporary table after each COMMIT or ROLLBACK anyway.
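For reference, the two flavours look like this (the table and columns are placeholders):

-- rows vanish at COMMIT/ROLLBACK: safe with a connection pool,
-- provided each procedure call is a single transaction
CREATE GLOBAL TEMPORARY TABLE my_gtt (
    call_id  NUMBER,
    payload  VARCHAR2(4000)
) ON COMMIT DELETE ROWS;

-- rows survive until the session ends: risky with a connection pool,
-- because the pooled session gets reused by later calls
-- ... ON COMMIT PRESERVE ROWS;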
(Another option would be to create a uniquely named temporary table in each call using EXECUTE IMMEDIATE. Not sure how performant this would be though.)
In Oracle, it's almost never necessary to create objects at runtime.
Global Temporary Tables are quite possibly the best solution for your problem; however, since you haven't said exactly why you need a temp table, I'd suggest you first check whether a temp table is necessary at all; half the time you can do with one SQL statement what you might have thought would require multiple queries.
That said, I have used global temp tables quite successfully in the past in applications that needed to maintain a separate "space" in the table for multiple contexts within the same session. This is done by adding an additional ID column (e.g. "CALL_ID") that is initially set to 1; subsequent calls to the procedure increment this ID. The ID has to be remembered somewhere, e.g. in a package global variable declared in the package body. For example:
PACKAGE BODY gtt_ex IS

  -- remembers the last call id handed out in this session
  last_call_id integer;

  PROCEDURE myproc IS
    l_call_id integer;
  BEGIN
    -- allocate a new call id for this invocation
    last_call_id := NVL(last_call_id, 0) + 1;
    l_call_id := last_call_id;

    INSERT INTO my_gtt VALUES (l_call_id, ...);
    ...
    -- only this call's rows are visible to this query
    SELECT ... FROM my_gtt WHERE call_id = l_call_id;
  END myproc;

END gtt_ex;
You'll find GTTs perform very well even with high concurrency, certainly better than using ordinary tables. Best practice is to design your application so that it never needs to delete the rows from the temp table - since the GTT is automatically cleared when the session ends.
I used a global temporary table recently and it behaved in a very unwanted manner.
I was using the temp table to format some complex data in a procedure call and, once the data was formatted, pass it to the front end (ASP.NET).
On the first call to the procedure I would get the proper data, but any subsequent call gave me the data from the previous procedure call in addition to the current one.
I investigated on the net and found the option to delete rows on commit.
I thought that would fix the problem... guess what? When I used the ON COMMIT DELETE ROWS option, I always got 0 rows back from the database, so I had to go back to the original approach of ON COMMIT PRESERVE ROWS, which preserves the rows even after committing the transaction. That option clears rows from the temp table only after the session is terminated.
Then I found this post and learned about the column to track the call_id of a session.
I implemented that solution and it still didn't fix the problem.
So I wrote the following statement in my procedure before starting any processing:
Delete From Temp_table;
The above statement did the trick. My front end was using connection pooling; after each procedure call it committed the transaction but still kept the connection in the pool, and subsequent requests used the same connection, so the database session was not terminated after every call.
Deleting the rows from the temp table before starting any processing made it work.
It drove me nuts till I found this solution.

SQL Server 2005 deadlock on key

I have a table with a clustered primary key index on a uniqueidentifier column. I have a procedure that runs the following pseudo code:
begin transaction
read from table 1
insert into table 2
update table 1 with pointer to table 2 record
commit transaction
This all works fine until the same procedure is executed concurrently from elsewhere. Once this happens, one of the executions gets deadlocked and terminated every single time on the primary key.
Any idea what I can do to prevent this, short of simply saying "don't run it concurrently"? The transactions are currently running in READ COMMITTED isolation level.
1. Increase the transaction isolation level, as eulerfx.myopenid.com is hinting.
2. Use SQL "mutexes" (application locks) to simply wait for a procedure to finish before allowing another to run. http://weblogs.sqlteam.com/mladenp/archive/2008/01/08/Application-Locks-or-Mutexes-in-SQL-Server-2005.aspx
3. Use the snapshot isolation level. Depending on what your app does, this can work; however, it brings other problems to the table. http://msdn.microsoft.com/en-us/library/ms189050.aspx
Number 2 requires more code change than number 1, though. But sometimes you can't just increase the isolation level.
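Option 2 boils down to something like this at the top of the procedure (the resource name is arbitrary; sp_getapplock returns a negative value if the lock could not be obtained, and a transaction-owned lock is released at COMMIT/ROLLBACK):

BEGIN TRANSACTION;

DECLARE @rc int;
EXEC @rc = sp_getapplock
        @Resource    = 'table1_table2_writer',  -- arbitrary mutex name
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',
        @LockTimeout = 10000;                   -- wait up to 10 seconds

IF @rc < 0
BEGIN
    ROLLBACK TRANSACTION;   -- could not get the mutex: give up or retry
    RETURN;
END

-- read from table 1, insert into table 2, update table 1 ...

COMMIT TRANSACTION;         -- also releases the application lock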
Look here:
http://msdn.microsoft.com/en-us/library/aa213026(SQL.80).aspx
and here:
http://msdn.microsoft.com/en-us/library/aa213039(SQL.80).aspx