There are about six procedures that are called internally to get data from a transactional table, run aggregations on the retrieved data, format the result as XML, and then send emails hourly.
During this process a lot of logging is done, and the logs are also sent as HTML in the same email. There is one procedure where a deadlock occurs, and whenever it does, one section of the email (the LOGS section) is missed. To handle this I am trying to use READ_COMMITTED_SNAPSHOT in that particular procedure. Can anyone say whether this has worked for them, or suggest the best way to handle this kind of deadlock?
Can I retry that particular procedure internally, by checking whether its output is NULL?
I can't let the other process fail, as it is a transaction, but I need the HTML to show all the information without missing anything in the body.
EDIT: This occurs very rarely, but the frequency is now increasing daily. I don't understand it: the procedure is only trying to read from the transactional table, do some calculations, and format the result as XML, while the other transaction is writing to the transactional table. So how does a WRITE affect a READ?
You need to fix the deadlock in order to resolve this.
A deadlock occurs when one process holds a resource that the other requires in order to proceed, and vice-versa. You'll get a deadlock when you have two processes that acquire the same set of resources in different orders. For instance, if process P1 acquires resources in the following order:
Resource A
Resource B
And a competing process, P2, requires the same resources in a different order:
Resource B
Resource A
P1 starts and acquires exclusive access to Resource A.
P2 starts and acquires exclusive access to Resource B.
In order for each to continue, P1 needs access to Resource B and P2 needs access to Resource A.
Neither of them can acquire the resource they need, thus causing the deadlock.
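To make this concrete, here is a minimal T-SQL sketch of the P1/P2 scenario. The tables ResourceA and ResourceB are hypothetical; run each block in a separate session:

-- Session 1 (P1)
BEGIN TRAN;
UPDATE dbo.ResourceA SET val = 1 WHERE id = 1;  -- P1 takes an exclusive lock on A
WAITFOR DELAY '00:00:05';                       -- give session 2 time to grab B
UPDATE dbo.ResourceB SET val = 1 WHERE id = 1;  -- P1 now waits for B
COMMIT;

-- Session 2 (P2)
BEGIN TRAN;
UPDATE dbo.ResourceB SET val = 2 WHERE id = 1;  -- P2 takes an exclusive lock on B
UPDATE dbo.ResourceA SET val = 2 WHERE id = 1;  -- P2 now waits for A: deadlock
COMMIT;

One of the two sessions will be chosen as the deadlock victim and rolled back with error 1205.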
This is different from blocking, where one process is simply waiting for another process to release the needed resource. Given sufficient time, blocking resolves itself. In a deadlock, the blocking cannot be resolved.
The SQL engine can (and does) detect the deadlock situation. It resolves it by selecting one process or the other as the deadlock victim and rolling back its transaction.
Fix the deadlock by identifying the problem and resolving it, not by simply retrying and hoping it goes through. SQL Trace may help you identify the problem. You may need a DBA to help you.
A simpler, though less safe, approach would be to change the six procedures in question so that they do dirty reads (i.e., WITH (NOLOCK)). Dirty reads take no shared locks, so they should work even during a deadlock, although you might get garbage data.
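For reference, a sketch of both options (the database and table names here are hypothetical):

-- Option 1: row versioning, as the question suggests; readers see the last
-- committed version instead of blocking (needs momentary exclusive DB access)
ALTER DATABASE MyReportsDb SET READ_COMMITTED_SNAPSHOT ON;

-- Option 2: per-query dirty reads, as suggested above
SELECT SUM(amount)
FROM dbo.Transactions WITH (NOLOCK)
WHERE created_at >= DATEADD(HOUR, -1, GETDATE());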
Related
Recently we have been seeing a huge number of errors pertaining to serializable isolation violations on our tables. We have some base tables that form our core data, and we extract values from these tables to run our business logic in Lambdas.
Scenario:
Lambda 1: runs every 15 minutes and loads the latest data from the source (an RDBMS) into Redshift, forming our base tables (it does a DELETE and an INSERT)
Lambda 2: triggered after a successful run of the above Lambda; this is where the business logic lives, and it is plain SELECT statements
Lambda 3: triggered every 15 minutes; it also runs against the base tables and has only SELECT statements
When Lambda 1 is triggered for its next run, we sometimes see it fail with the serializable isolation violation error.
Based on most of the posts, putting a LOCK on the table might solve the issue, but it would make the other queries wait longer than expected, and since a Lambda times out after 15 minutes, that is not ideal. I also saw posts stating that a LOCK didn't entirely solve the problem, so I am skeptical about using it.
Something that struck me: would creating a VIEW on top of the base tables and using that view in all the SELECT statements help here? If anyone has insight on this, it would be really helpful.
So the issue is with the locks each transaction creates, and Redshift being unable to determine the correct order in which the locks need to be resolved. See: https://aws.amazon.com/premiumsupport/knowledge-center/redshift-serializable-isolation/
Now, your description as stated doesn't have enough writes in flight to explain how you are getting this from one pass of the Lambda runs. So either the description isn't complete (multiple Lambdas updating tables), or the issue arises between runs of the Lambdas. One possibility is that transactions aren't being closed, and the Lambda invocations are holding locks that cannot be resolved. Do you have COMMITs closing your transactions? More information is needed to know which it is.
You can inspect pg_locks between Lambdas to see what locks are left around. The XID is the transaction that has the lock. I'd guess you have many more open transactions than you expect. Are your Lambda sessions in autocommit mode? Are you updating tables from multiple sessions? Are you COMMITting your changes? Are you reusing scratch tables?
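For example, a query along these lines (pg_locks is the PostgreSQL-compatible view; Redshift also exposes SVV_TRANSACTIONS, which shows lock state per transaction):

-- list lingering transactions (XIDs) and the locks they hold
SELECT txn_owner, txn_db, xid, pid, txn_start, lock_mode, relation, granted
FROM svv_transactions
ORDER BY txn_start;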
Adding an explicit LOCK to serialize things can work if you know why / what tables are causing the serialization issue. This is also not likely the best solution and a better approach will likely be apparent when the issue is understood.
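If you do go that route, the shape is simply this (the table names are hypothetical):

BEGIN;
LOCK base_orders, base_customers;  -- serialize all access to the base tables
DELETE FROM base_orders;
INSERT INTO base_orders SELECT * FROM staging_orders;
COMMIT;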
Adding a view to the mix won't resolve the issue (though it might move it if it changes the timing of events).
I want to implement a mutual exclusion system in PostgreSQL where multiple worker processes will temporarily lock resources (rows) from a table (queue) while they work on them. If the worker processes crash, I want the lock to be cleanly released and not have to rely on another process to clean up the leaked locks.
What I have come up with so far is to use a SELECT ... FOR UPDATE SKIP LOCKED query within a transaction, which locks the row it finds and skips any other locked row.
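Roughly like this (a sketch against a hypothetical jobs table; SKIP LOCKED requires PostgreSQL 9.5 or later):

BEGIN;
-- claim one available row; rows locked by other workers are skipped
SELECT id, payload
FROM jobs
WHERE status = 'pending'
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;
-- ... work on the claimed row while the transaction stays open ...
UPDATE jobs SET status = 'done' WHERE id = 42;  -- the id claimed above
COMMIT;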
It works well, but one issue is that the worker might take a while to do its task, so I need to keep the transaction open for the entire duration.
Another problem is that the workers work incrementally and persist their state to the database, so that if they're stopped or crash they can resume quickly where they were. The row being locked makes it impossible to persist their state in the same table (though I think I can get around that by using another table to persist the state).
I've searched the Web for how to implement a semaphore or a resource-borrowing system in SQL/PostgreSQL, but I haven't found anything that fits my needs. Is there a simple way of achieving this with PostgreSQL?
The problem: a .NET application is trying to save many records to SQL Server. BeginTrans is used, and right before the commit a warning message is shown asking the end user to confirm whether or not to proceed with saving the data. The user simply left the computer and walked away!
Now all other users are unable to access the locked records. Sometimes almost the entire system is affected. Almost all transactions update the same records; the confirmation message must be shown after the data is updated and before the commit, so that the user can still roll back. What could be the best solution?
If no solution is found, the last thing I might do is roll back, show the confirmation message, and, if the user accepts, save the data again without any confirmation message (which I don't think is the right way).
My question is: what is the best I can do? Any ideas?
This sounds like a WinForms app? It also sounds like you want to confirm the intent of the user's action. Are you in a position to start the transaction only once they confirm they intend to save the data?
Ideally, you should
Prompt the user via [OK | Cancel]
Perform the database transaction
If the result of the transaction is a deadlock (or any other failure), inform the user that the save operation failed
In other words, the update of records should be a synchronous call.
EDIT: after understanding the specifics mentioned in the comment below, I would recommend some form of server-side task queue that all these requests flow through. Your client would submit a request to the server, and the server application would then be the software responsible for updating records in the database. The clients would make their requests to this application, and the requests would be processed in the order they were received. I don't have much experience with inventory-tracking software, but I understand its need to be absolutely correct. So this is just a rough idea; I'm sure someone with more experience in inventory tracking will have a better pattern. The proposed pattern creates a large bottleneck on the server that is responsible for updating the records. For example, this pattern would be terrible for someone like Amazon.
I am new to SQL Server, but I have a fair knowledge of simple things like SELECT/UPDATE/DELETE and other transactions. I am facing a deadlock scenario in my application. I have understood the scenario: many threads are trying in parallel to run a set of update operations. It is not a single update but a set of update operations.
I have understood that this cannot be avoided in my application, as many people want to do updates simultaneously. So I want to have a manual lock system: first, thread 1 should check whether the manual lock is available and then start the transaction. Meanwhile, if a second thread requests the lock, it should find it busy, and hence the second thread should wait. Once the first is completed, the second should acquire the lock and start its transaction.
This is just logic I have thought of, but I have no idea how to do it in SQL Server. Are there any examples that can help me? Please let me know if you can give me some sample SQL scripts or links that would be helpful. Thank you for your time and help.
You probably mean "semaphore", that is, something to serialise execution of the DML so that only one process can run at a time.
This is native in SQL Server using sp_getapplock.
You can configure the second process to wait or fail when it calls sp_getapplock, and it can also be self-cancelling in "Transaction" mode.
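A sketch of the pattern (the lock name is arbitrary; in "Transaction" mode the lock is released automatically at COMMIT or ROLLBACK):

DECLARE @result int;
BEGIN TRAN;
EXEC @result = sp_getapplock
    @Resource = 'my_update_batch',   -- any name all threads agree on
    @LockMode = 'Exclusive',
    @LockOwner = 'Transaction',
    @LockTimeout = 10000;            -- wait up to 10 seconds, then give up
IF @result >= 0
BEGIN
    -- ... run the set of UPDATE operations here ...
    COMMIT TRAN;
END
ELSE
    ROLLBACK TRAN;                   -- could not acquire the lock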
You will still most likely end up in the same scenario: a deadlock based around your tailor-made locks. SQL Server internally implements a very robust locking mechanism; you should use it.
The problem you're having is that resources (tables, indexes, etc.) are accessed (or modified) in a conflicting order by different transactions/threads.
If you create your own locking mechanism, you may end up with a dead lock just the same. Example:
1. Thread 1 creates a lock on a Customer record
2. Thread 2 creates a lock on an Order record
3. Thread 1 attempts to create a lock on the Order record (but cannot proceed due to step 2)
4. Thread 2 attempts to create a lock on the Customer record (but cannot proceed due to step 1)
Voila ... deadlock
The solution is to refactor the way resources are accessed so records are always accessed in the same order, and the problem will go away; see the sketch after the sequence below:
1. Thread 1 creates a lock on the Customer record
2. Thread 2 attempts to create a lock on the Customer record (but cannot proceed due to step 1)
3. Thread 1 creates a lock on the Order record
4. Thread 1 completes its transaction and unlocks both the Order and Customer records
5. Thread 2 creates a lock on the Customer record
6. Thread 2 creates a lock on the Order record
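In SQL terms, the fix is simply that every transaction touches the tables in the same order, for example (hypothetical tables; note that Order is a reserved word in T-SQL):

-- every transaction in the app follows the same order: Customer first, then Order
BEGIN TRAN;
UPDATE Customer SET balance = balance - 10 WHERE customer_id = 1;
UPDATE [Order] SET status = 'paid' WHERE order_id = 7;
COMMIT TRAN;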
Also, have a look here to read how locking can happen on a single table.
Your manual lock system sounds interesting, but you need to be aware that it will sacrifice concurrency, which is quite important to many OLTP applications.
Advanced databases like Oracle and SQL Server are quite good at avoiding deadlocks, and they give you the tools to resolve the ones that do occur, letting you kill the session that caused the deadlock so the other query can finish its job first.
Microsoft has documentation, which can be found here:
http://support.microsoft.com/kb/832524
Besides that, there are many other reasons that can lead to deadlock. You can find some examples here: how to solve deadlock problem?
We've got a web-based application. There are time-bound database operations (INSERTs and UPDATEs) in the application that take a long time to complete, so this particular flow has been moved into a Java Thread so that it does not wait (block) until the entire database operation completes.
My problem is that if more than one user goes through this particular flow, PostgreSQL throws the following error:
org.postgresql.util.PSQLException: ERROR: deadlock detected
Detail: Process 13560 waits for ShareLock on transaction 3147316424; blocked by process 13566.
Process 13566 waits for ShareLock on transaction 3147316408; blocked by process 13560.
The above error is consistently thrown in INSERT statements.
Additional Information:
1) I have a PRIMARY KEY defined on this table.
2) There are FOREIGN KEY references in this table.
3) A separate database connection is passed to each Java Thread.
Technologies
Web Server: Tomcat v6.0.10
Java v1.6.0
Servlet
Database: PostgreSQL v8.2.3
Connection Management: pgpool II
One way to cope with deadlocks is to have a retry mechanism that waits for a random interval and tries to run the transaction again. The random interval is necessary so that the colliding transactions don't continuously keep bumping into each other, causing what is called a live lock - something even nastier to debug. Actually most complex applications will need such a retry mechanism sooner or later when they need to handle transaction serialization failures.
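Retry logic usually lives in the application, but the idea can also be sketched in PL/pgSQL (the table is hypothetical; the EXCEPTION block runs the statement in a subtransaction, so a deadlock error can be caught and the statement retried):

CREATE OR REPLACE FUNCTION insert_order_with_retry(p_customer integer)
RETURNS void AS $$
DECLARE
    attempts integer := 0;
BEGIN
    LOOP
        BEGIN
            INSERT INTO orders (customer_id) VALUES (p_customer);
            RETURN;                            -- success
        EXCEPTION WHEN deadlock_detected THEN
            attempts := attempts + 1;
            IF attempts >= 5 THEN
                RAISE;                         -- give up and re-raise
            END IF;
            PERFORM pg_sleep(random() * 0.5);  -- random back-off interval
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;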
Of course if you are able to determine the cause of the deadlock it's usually much better to eliminate it or it will come back to bite you. For almost all cases, even when the deadlock condition is rare, the little bit of throughput and coding overhead to get the locks in deterministic order or get more coarse-grained locks is worth it to avoid the occasional large latency hit and the sudden performance cliff when scaling concurrency.
When you are consistently getting two INSERT statements deadlocking, it's most likely a unique index insert-order issue. Try, for example, the following in two psql command windows:
Thread A              | Thread B
BEGIN;                | BEGIN;
                      | INSERT uniq=1;
INSERT uniq=2;        |
                      | INSERT uniq=2;
                      |   (blocks, waiting for thread A to commit or
                      |    roll back, to see if this is a unique key error)
INSERT uniq=1;        |
  (blocks, waiting    |
   for thread B:      |
   DEADLOCK)          |
Usually the best course of action to resolve this is to figure out the parent objects that guard all such transactions. Most applications have one or two primary entities, such as users or accounts, that are good candidates for this. Then all you need is for every transaction to get the locks on the primary entity it touches via SELECT ... FOR UPDATE. Or, if it touches several, get locks on all of them, but in the same order every time (ordering by primary key is a good choice).
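For instance (with hypothetical tables), every transaction that touches a given account would start by locking that account's row:

BEGIN;
-- serialize all work on this account before touching any child rows
SELECT 1 FROM accounts WHERE account_id = 42 FOR UPDATE;
INSERT INTO orders (account_id, item) VALUES (42, 'widget');
UPDATE balances SET amount = amount - 10 WHERE account_id = 42;
COMMIT;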
What PostgreSQL does here is covered in the documentation on Explicit Locking. The example in the "Deadlocks" section shows what you're probably doing. The part you may not have expected is that when you UPDATE something, that acquires a lock on that row that continues until the transaction involved ends. If you have multiple clients all doing updates of more than one thing at once, you'll inevitably end up with deadlocks unless you go out of your way to prevent them.
If you have multiple things that take out implicit locks like UPDATE, you should wrap the whole sequence in BEGIN/COMMIT transaction blocks, and make sure you're consistent about the order in which locks are acquired (even the implicit ones like those UPDATE grabs) everywhere. If you need to update something in table A and then table B, and one part of the app does A then B while another does B then A, you're going to deadlock one day. Two UPDATEs against the same table are similarly destined to fail unless you can enforce some ordering of the two that's repeatable among clients. Sorting by primary key once you have the set of records to update, and always grabbing the "lower" one first, is a common strategy.
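Since UPDATE itself has no ORDER BY, the usual trick is to lock the rows in key order first and then update them, along these lines (hypothetical table):

BEGIN;
-- take the row locks in primary-key order, so all clients agree on the order
SELECT order_id FROM orders WHERE customer_id = 42 ORDER BY order_id FOR UPDATE;
UPDATE orders SET status = 'shipped' WHERE customer_id = 42;
COMMIT;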
It's less likely that your INSERTs are to blame here; those are much harder to get into a deadlocked situation, unless you violate a primary key as Ants already described.
What you don't want to do is try and duplicate locking in your app, which is going to turn into a giant scalability and reliability mess (and will likely still result in database deadlocks). If you can't work around this within the confines of the standard database locking methods, consider using either the advisory lock facility or explicit LOCK TABLE to enforce what you need instead. That will save you a world of painful coding over trying to push all the locks onto the client side. If you have multiple updates against a table and can't enforce the order they happen in, you have no choice but to lock the whole table while you execute them; that's the only route that doesn't introduce a potential for deadlock.
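Sketches of both facilities (the key and table name are placeholders; advisory locks are available from PostgreSQL 8.2 on):

-- advisory lock: serializes any sessions that agree on the same key
SELECT pg_advisory_lock(12345);
-- ... do the conflicting updates ...
SELECT pg_advisory_unlock(12345);

-- or lock the whole table for the duration of the transaction
BEGIN;
LOCK TABLE orders IN EXCLUSIVE MODE;
UPDATE orders SET status = 'shipped' WHERE customer_id = 42;
COMMIT;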
Deadlock explained:
In a nutshell, what is happening is that a particular SQL statement (an INSERT or other) is waiting for another statement to release a lock on a particular part of the database before it can proceed. Until this lock is released, the first SQL statement, call it "statement A", cannot access the part of the database it needs to do its job (a regular lock situation). But statement A has also put a lock on another part of the database, to ensure that no other users of the database access it (for reading, or modifying/deleting, depending on the type of lock). Now the second SQL statement itself needs access to the data section marked by statement A's lock. That is a DEADLOCK: both statements will wait, ad infinitum, on one another.
The remedy...
This requires knowing the specific SQL statements these various threads are running, and looking at whether there is a way to either:
a) remove some of the locks, or change their types. For example, maybe the whole table is locked where only a given row, or a page thereof, would be necessary; or
b) prevent several of these queries from being submitted at the same time. This would be done by way of semaphores/locks (i.e., a mutex) at the level of the multi-threading logic.
Beware that the "b)" approach, if not correctly implemented, may just move the deadlock situation from within SQL to within the program/thread logic. The key would be to create only one mutex, to be obtained first by any thread that is about to run one of these deadlock-prone queries.
Your problem is probably that the INSERT command is trying to lock one or both indexes while the indexes are locked by the other thread.
One common mistake is locking resources in a different order in each thread. Check the order, and try to lock the resources in the same order in all threads.