I have two services talking to two different data stores (i.e. SQL Server). I am using TransactionScope, e.g.:
using (TransactionScope scope = new TransactionScope())
{
    service1.InsertUser(user);      // insert into SQL Service 1 table User
    service2.SavePayment(payment);  // save payment to SQL Service 2 table Payment
    scope.Complete();
}
Service1 is locking the table (User) until the transaction completes, making subsequent transactions against that table sequential. Is there a way to overcome the lock, so that I can have more than one concurrent call to the SQL Service1 table while the above code is executing?
I would appreciate any input.
I would guess that you may have triggers on your user or payment table that update the other one.
The most likely scenario is that your save call does some selects inside the same transaction where you are inserting, which causes too much locking. See whether you can move the select into a separate call to the db, using a nested TransactionScope with the Suppress option, so that the select is removed from the transaction. I know your select would use NOLOCK, but that appears to be ignored in SQL 2005 versus older versions. I've had this same problem.
Make all your calls as simple as possible.
Related
Using MS SQL Server, a trigger calls a stored procedure which internally makes a select; will the values returned be the new or the old ones?
I know that inside the trigger I can access them via FROM INSERTED i INNER JOIN DELETED, but in this case I want to reuse (I cannot change it) an existing stored procedure that internally makes a select on the triggered table and processes some logic with the results. I just want to know whether I can be sure the existing logic will work (by accessing the NEW values).
I could simply try to simulate it with one update... but maybe there are other cases (for example, involving transactions or something else) that I may not be aware of and would never test, which could behave differently.
I decided to ask someone else that might know better. Thank you.
AFTER triggers (the default) fire after the DML action. When the proc is called within the trigger, the tables will reflect changes made by the statement that fired the trigger, as well as changes made within the trigger before calling the proc.
Note that the changes are uncommitted until the trigger completes, or until the explicit transaction is later committed.
Since the procedure is running in the same transaction as the (presumably, "after") trigger, it will see the uncommitted data.
I hope you see the implications of that: the trigger executes as part of the transaction started by the DML statement that caused it to fire, so the stored procedure is part of the same transaction; a "complicated" stored procedure means that transaction stays open longer, holding locks longer, making responses back to users slower, and so on.
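A quick way to see this for yourself is a minimal sketch along these lines (all table, procedure, and trigger names here are made up for illustration):

CREATE TABLE dbo.Orders (Id int PRIMARY KEY, Amount money);
CREATE TABLE dbo.OrderTotals (LoggedAt datetime2 DEFAULT SYSDATETIME(), Total money);
GO
CREATE PROCEDURE dbo.LogOrderTotal AS
    -- this select runs inside the firing statement's transaction,
    -- so it sees the not-yet-committed row(s)
    INSERT INTO dbo.OrderTotals (Total)
    SELECT SUM(Amount) FROM dbo.Orders;
GO
CREATE TRIGGER trg_Orders_ai ON dbo.Orders AFTER INSERT AS
    EXEC dbo.LogOrderTotal;
GO
INSERT INTO dbo.Orders (Id, Amount) VALUES (1, 10.00);
-- dbo.OrderTotals now holds Total = 10.00: the proc saw the NEW value.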
Also, you said
internally makes a select on the triggered table and processes some logic with them.
If you just mean that the procedure is selecting the data in order to do some complex processing and then write it somewhere else inside the database, OK, that's not great (for the reasons given above), but it will "work".
But just in case you mean you are doing some work on the data in the procedure and then returning that back to the client application: don't do that.
The ability to return results from triggers will be removed in a future version of SQL Server. Triggers that return result sets may cause unexpected behavior in applications that aren't designed to work with them. Avoid returning result sets from triggers in new development work, and plan to modify applications that currently do. To prevent triggers from returning result sets, set the disallow results from triggers option to 1.
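If you want to enforce that, the option mentioned in the documentation can be turned on with sp_configure (it is an advanced option, so advanced options have to be shown first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'disallow results from triggers', 1;
RECONFIGURE;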
I know maybe I'm asking something stupid. In my application users can create a sort of agenda, but only a specific number of agendas is allowed per day. So, users perform this pseudo-code:
select count(*) as created
from Agendas
where agendaDay = 'dd/mm/yyyy'

if created < allowedAgendas {
    insert into Agendas ...
}
All this obviously MUST be executed in mutual exclusion. Only one user at a time can read the number of created agendas and, possibly, insert a new one if allowed.
I tried to open a transaction with the default READ COMMITTED isolation level, but this doesn't help, because during the transaction the other users can still get the number of created agendas with a select query and so try to insert a new one even if it wouldn't be allowed.
I don't think changing the isolation level could help.
How can I do this?
For testing I'm using SQL Server 2008, while our production server runs SQL Server 2012.
It sounds like you have an architecture problem there, but you may be able to achieve this requirement with:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
If you're reading and inserting within the same transaction, I don't see where the problem will be; but if you're expecting interactive input on the basis of the count, then you should probably ensure you do this within a single session or implement some kind of queuing functionality.
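For instance, a minimal sketch of the count-then-insert under SERIALIZABLE (table and column names are taken from the question; the limit, the example date, and the column list are made-up placeholders):

DECLARE @allowedAgendas int = 10;   -- hypothetical daily limit
DECLARE @day date = '2016-05-20';   -- hypothetical day being booked

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

DECLARE @created int;
SELECT @created = COUNT(*)
FROM Agendas
WHERE agendaDay = @day;

IF @created < @allowedAgendas
    INSERT INTO Agendas (agendaDay) VALUES (@day);

COMMIT TRANSACTION;

One thing to be aware of: two concurrent SERIALIZABLE transactions can both take shared range locks on the read and then deadlock on the insert; adding WITH (UPDLOCK, HOLDLOCK) to the count query is a common way to avoid that.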
I see so much information about avoiding blocks. My situation is that I WANT blocks.
We have this table with which two separate processes will be communicating with each other. The processes will run at random times and will use this control table to understand if the other process is busy. Both processes can't be busy at the same time, hence the control table.
Each job, when run, will check the control table... and based on that data will decide whether it's OK to run, and if OK, will update the control table record.
The issue is that if both processes run at the same moment, it's not clear that they won't do the following undesired actions (in this exact order):
Proc A reads the control table (table says good to go)
Proc B reads the control table (table says good to go)
Proc A updates control table (table now says "Proc A is busy")
Proc B updates control table (table now says "Proc B is busy")
<- In that scenario, both processes think they successfully updated the control table and will start their main flow (which is NOT what we want)
What I want here is for Proc B to be BLOCKED from SELECTING (not just updating) from the control table. This way, if/when Proc B's select query finally works, it will see the updated 'busy' value, not the value that existed before being changed by Proc A.
We're using SQL Server 2008 R2 I believe. I checked out SERIALIZABLE isolation but it doesn't appear to be strong enough.
For what it's worth, we're trying to accomplish this with JDBC, using conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE); which we understand to be the highest level of isolation, but I can still run selects all day from another window.
I'm 100% sure this is nowhere near a unique problem.... does anyone have any suggestion for making this work?
Your approach can work, but there are a few things to consider:
You need to open a transaction in the very beginning (before the first read) and you must only commit it after you have finished your work.
If both A and B try to read/modify the same record, this will work out of the box, even with the default transaction isolation level (READ COMMITTED). Otherwise, you need to tell SQL Server to lock the whole table (using the TABLOCK hint).
In fact, you don't need the reads at all!
This is how it will work:
P1                            P2
---------------------------------------------------
BEGIN TRANS
                              BEGIN TRANS
WRITE (success)
                              WRITE (blocked)
do work                            |
  .                                |
  .                                |
COMMIT -> block released,     WRITE finishes
                              do work
                                .
                                .
                              COMMIT
PS: Note, though, that SQL Server supports application locks. Thus, if you just want to synchronize two processes, you don't need to "abuse" a table:
Implementing application locks within SQL Server (Distributed Locking Pattern)
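A rough sketch of that approach with sp_getapplock (the resource name here is made up, and a timeout of 0 means "fail immediately if the other process already holds the lock"):

BEGIN TRANSACTION;

DECLARE @rc int;
EXEC @rc = sp_getapplock
    @Resource    = 'ETL_Master',      -- hypothetical lock name shared by both processes
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 0;

IF @rc >= 0
BEGIN
    -- ... do the work that must not run concurrently ...
    COMMIT TRANSACTION;               -- releases the application lock
END
ELSE
    ROLLBACK TRANSACTION;             -- other process is busy; bail out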
PPS: For completeness, let me also answer the question in the title ("How to force SELECT blocking on SQL server?"): For this, you can use a combination of the HOLDLOCK and the XLOCK table hint (or TABLOCKX, if you want to exclusively lock the whole table).
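A rough sketch of that pattern against the control table (table and column names are hypothetical): the idea is that the first session's select takes and holds an exclusive lock, so a competing session's select on the same row waits until the transaction ends (readers using NOLOCK or snapshot isolation will still not block).

BEGIN TRANSACTION;

SELECT Status
FROM dbo.ControlTable WITH (XLOCK, HOLDLOCK)
WHERE ProcessName = 'ProcA';

-- decide whether it is OK to run, then mark ourselves busy
UPDATE dbo.ControlTable
SET Status = 'busy'
WHERE ProcessName = 'ProcA';

COMMIT TRANSACTION;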
If you need the read (because you want to do some processing), I would do the following:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT ... FROM TableA;
UPDATE TableA ...;
COMMIT;
I have a SQL Server database on which I use stored procedures to insert new data from the application. I wonder what I have to do to ensure that multiple threads can call those stored procedures concurrently in a correct/safe way.
A possible concurrency issue is about correctness. I call multiple stored procedures from the data layer interface, and each stored procedure (and the other stored procedures that they call) performs its updates in multiple db tables in ways that can break table correctness if done concurrently in any "interleaving way" (e.g. from SP_1 I insert different elements into Table1 or Table2 depending on some conditions related to the existence of some element y in Table2).
It seems to me that (given the way the tables are defined), in order to make the program correct, every stored procedure's actions need to be done in isolation from any other operation that could be called concurrently (operations called via the data access layer interface).
Can one big transaction (that includes everything done as part of a DAL interface call) help me make the program correct from the point of view of the db in the presence of concurrent inserts? I cannot see (if the solution is even viable) how this would improve on, say, a mutual exclusion approach where only a single thread at a time would be able to make the necessary inserts into the db tables...
if done concurrently in any "interleaving way"
This is exactly what transactions are used to avoid.
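A hedged sketch of what that looks like for the SP_1 example from the question (the column names are invented, and SERIALIZABLE is just one way to make the check-then-insert safe):

CREATE PROCEDURE dbo.SP_1 @y int
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;

    -- the existence check and the insert form one atomic unit:
    -- no concurrent call can interleave between them
    IF EXISTS (SELECT 1 FROM dbo.Table2 WHERE ElementId = @y)
        INSERT INTO dbo.Table1 (ElementId) VALUES (@y);
    ELSE
        INSERT INTO dbo.Table2 (ElementId) VALUES (@y);

    COMMIT TRANSACTION;
END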
I am new to Oracle. I need to process a large amount of data in a stored proc. I am considering using temporary tables. I am using connection pooling and the application is multi-threaded.
Is there a way to create temporary tables such that a different table instance is created for every call to the stored procedure, so that data from multiple stored procedure calls does not mix?
You say you are new to Oracle. I'm guessing you are used to SQL Server, where it is quite common to use temporary tables. Oracle works differently so it is less common, because it is less necessary.
Bear in mind that using a temporary table imposes the following overheads:
read data to populate the temporary table
write temporary table data to file
read data from the temporary table as your process starts
Most of that activity is useless in terms of helping you get stuff done. A better idea is to see if you can do everything in a single action, preferably pure SQL.
Incidentally, your mention of connection pooling raises another issue. A process munging large amounts of data is not a good candidate for running in OLTP mode. You really should consider initiating a background (i.e. asynchronous) process, probably a database job, to run your stored procedure. This is especially true if you want to run this job on a regular basis, because we can use DBMS_SCHEDULER to automate the management of such things.
If you're using transaction-level (rather than session-level) temporary tables, then this may already do what you want... so long as each call only contains a single transaction. (You don't quite provide enough detail to make it clear whether this is the case or not.)
So, to be clear: as long as each call only contains a single transaction, it won't matter that you're using a connection pool, since the data will be cleared out of the temporary table after each COMMIT or ROLLBACK anyway.
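A minimal sketch of a transaction-scoped global temporary table (the table and column names are placeholders; the DDL is run once at install time, not per call):

CREATE GLOBAL TEMPORARY TABLE txn_scratch (
    id       NUMBER,
    payload  VARCHAR2(4000)
) ON COMMIT DELETE ROWS;   -- rows are private to the transaction and vanish at COMMIT/ROLLBACK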
(Another option would be to create a uniquely named temporary table in each call using EXECUTE IMMEDIATE. Not sure how performant this would be though.)
In Oracle, it's almost never necessary to create objects at runtime.
Global Temporary Tables are quite possibly the best solution for your problem, however since you haven't said exactly why you need a temp table, I'd suggest you first check whether a temp table is necessary; half the time you can do with one SQL what you might have thought would require multiple queries.
That said, I have used global temp tables in the past quite successfully in applications that needed to maintain a separate "space" in the table for multiple contexts within the same session. This is done by adding an additional ID column (e.g. "CALL_ID") that is initially set to 1; subsequent calls to the procedure increment this ID. The ID needs to be remembered somewhere, e.g. in a global variable declared in the package body:
PACKAGE BODY gtt_ex IS

  last_call_id integer;

  PROCEDURE myproc IS
    l_call_id integer;
  BEGIN
    last_call_id := NVL(last_call_id, 0) + 1;
    l_call_id := last_call_id;

    INSERT INTO my_gtt VALUES (l_call_id, ...);
    ...
    SELECT ... FROM my_gtt WHERE call_id = l_call_id;
  END myproc;

END gtt_ex;
You'll find GTTs perform very well even with high concurrency, certainly better than using ordinary tables. Best practice is to design your application so that it never needs to delete the rows from the temp table - since the GTT is automatically cleared when the session ends.
I used a global temporary table recently and it behaved in a very unwanted manner.
I was using the temp table to format some complex data in a procedure call and, once the data was formatted, pass it to the front end (ASP.NET).
On the first call to the procedure I would get the proper data, but any subsequent call would give me the data from the last procedure call in addition to the current call.
I investigated on the net and found the ON COMMIT DELETE ROWS option.
I thought that would fix the problem... guess what? When I used the ON COMMIT DELETE ROWS option, I always got 0 rows back from the database. So I had to go back to the original approach of ON COMMIT PRESERVE ROWS, which preserves the rows even after committing the transaction; that option clears rows from the temp table only after the session is terminated.
Then I found this post and learned about the column to track the call_id of a session.
I implemented that solution and it still didn't fix the problem.
Then I wrote the following statement in my procedure before starting any processing:
Delete From Temp_table;
The above statement did the trick. My front end was using connection pooling; after each procedure call it committed the transaction but kept the connection in the pool, and a subsequent request would use the same connection, so the database session was not terminated after each call.
Deleting the rows from the temp table before starting any processing made it work.
It drove me nuts till I found this solution.
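In other words, the fix amounts to something like this (procedure, table, and column names are invented for illustration):

CREATE OR REPLACE PROCEDURE format_report_data AS
BEGIN
  -- wipe anything left behind by a previous call on this pooled session
  DELETE FROM report_gtt;

  -- populate and process for the current call
  INSERT INTO report_gtt (id, formatted_value)
  SELECT id, TO_CHAR(amount, '999,999.00')
  FROM source_table;
END;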