I'm getting nothing but ActiveRecord::TransactionIsolationConflict errors when I try to update records for one of my models. Retrying does not help.
What should I do?
Rails 3.2.13
Ruby 1.9.3
Option A: Hunt down locks on the table
In my case, there were a number of orphaned processes which held locks on the table in question. Somehow, the application that initiated them dropped the connections, but the locks remained. The following instructions are drawn from Unlocking tables if thread is lost
Check locks:
mysql> show open tables where in_use > 0;
If you have no idea which session or process is locking the table(s), view the list of processes and identify likely candidates by their username or the database they're accessing:
mysql> show processlist;
Kill processes which you know or suspect to have locks on the table:
mysql> kill <process id>;
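If the process list alone doesn't make the culprit obvious, InnoDB's status tables show exactly who is blocking whom. A sketch, assuming MySQL 5.5-5.7 (these information_schema tables were reorganized in 8.0):
mysql> SELECT r.trx_mysql_thread_id AS waiting_thread,
              b.trx_mysql_thread_id AS blocking_thread,
              b.trx_query           AS blocking_query
       FROM information_schema.innodb_lock_waits w
       JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
       JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;
The blocking_thread value is the process id to pass to kill.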
Option B: Increase timeout
A popular suggestion is to increase the innodb_lock_wait_timeout. The following instructions are drawn from How to debug Lock wait timeout exceeded on MySQL?
Check your timeout:
mysql> show variables like 'innodb_lock_wait_timeout';
Change timeout dynamically (not persistent):
mysql> SET GLOBAL innodb_lock_wait_timeout = 120;
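Note that SET GLOBAL only affects connections opened after the change; a session that is already open keeps its old value. For the current session (assuming MySQL 5.5+, where the variable has session scope):
mysql> SET SESSION innodb_lock_wait_timeout = 120;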
Change timeout in config file (persistent):
[mysqld]
innodb_lock_wait_timeout=120
Related
In my VB application I have a query that updates one column in a table.
But because the lock mode for this database is set to
SET LOCK MODE TO NOT WAIT
sometimes when running the update query I get errors like this:
SQL ERR: EIX000: (-144) ISAM error: key value locked
EIX000: (-245) Could not position within a file via an index. (informix.table1)
My question is: is it safe to execute:
1st SET LOCK MODE TO WAIT;
2nd the update query;
3rd SET LOCK MODE TO NOT WAIT;
Or can you point me to another solution if this is not safe?
It is "safe" to do the three operations as suggested, but …
Your application may block for an indefinite time while the operation runs.
If you terminate the query somehow but don't reset the lock mode, other parts of your code may get hung on locks unexpectedly.
Consider whether a wait with timeout is appropriate.
Each thread, if there are threads, should have exclusive access to one connection for the duration of the three operations.
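A minimal sketch of the timed-wait variant (the column and values in the UPDATE are placeholders; Informix accepts an optional number of seconds on SET LOCK MODE TO WAIT):
SET LOCK MODE TO WAIT 10;   -- block for at most 10 seconds, then raise the lock error
UPDATE table1 SET col1 = 'x' WHERE id = 42;
SET LOCK MODE TO NOT WAIT;  -- restore the original behaviour
Make sure the final statement also runs on the error path, or a failed UPDATE will leave the session in WAIT mode.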
I have 4 different services in my application which SELECT and UPDATE on the same table in my database (DB2 v9.1) on AIX 6.1; it's not a big table, around 300,000 records. The 4 services execute in parallel, and each service executes sequentially (not in parallel).
The issue is that every day I face a horrible deadlock problem: the DB hangs for about 5 to 10 minutes, then it gets back to its normal performance.
My services are synchronized in a way that makes them never SELECT or UPDATE the same row, so I believe that even if a deadlock occurred, it should be at row level, not table level. RIGHT?
Also, in my SELECT queries I use "FOR FETCH ONLY WITH UR"; in DB2 v9.1 that means not to lock the row, as it's only for read purposes and there will be no update (UR = uncommitted read).
Any ideas about what's happening, and why?
Firstly, these are certainly not deadlocks: a deadlock would be resolved by DB2 within a few seconds by rolling back one of the conflicting transactions. What you are experiencing is most likely lock waits.
As to what's happening, you will need to monitor locks as they occur. You can use the db2pd utility, e.g.
db2pd -d mydb -locks showlocks
or
db2pd -d mydb -locks wait
You can also use the snapshot monitor:
db2 update monitor switches using statement on lock on
db2 get snapshot for locks on mydb
or the snapshot views:
select * from sysibmadm.locks_held
select * from sysibmadm.lockwaits
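If these turn out to be lock waits, a bounded lock timeout keeps a blocked service from hanging for minutes. A sketch, reusing the mydb name from above (the 30-second value is arbitrary):
db2 update db cfg for mydb using LOCKTIMEOUT 30
With a timeout set, the waiting application receives SQL0911N (reason code 68) instead of waiting indefinitely, which it can catch and retry.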
The SQL Server is SQL Azure; for normal purposes it's basically SQL Server 2008.
I have a table called TASK that constantly has new data coming in (new tasks) and data being removed (completed tasks).
For new data coming in, I use INSERT INTO .. SELECT ..., which most of the time takes very long, let's say dozens of minutes.
For old data going out, I first use SELECT (WITH NOLOCK) to get a task, then UPDATE to let other threads know this task has already started processing, then DELETE once it is finished.
Deadlocks sometimes happen on the SELECT, but most of the time on the UPDATE and DELETE.
This is not a time-critical task, so I can start processing the new data once all the INSERTs have finished. Is there any kind of LOCK that tells the SELECT not to pick up rows before the INSERT has finished? Or any other suggestion to avoid the conflict? I can redesign the table if needed.
Since SQL Server 2005, resolving locks is easier.
For conflicts:
1. You can use Service Broker.
2. Use the isolation level.
Run dbcc useroptions; in the last row you can see that the default isolation level is read_committed, and this is at the session level.
We can change the level to read_committed_snapshot to reduce conflicts. SQL Server does not have real row-level versioning like Oracle, but we can use this method to approximate it:
ALTER DATABASE DBName
SET READ_COMMITTED_SNAPSHOT ON;
To enable this feature, the database must be in single-user mode (no other active connections); one way to force that is sketched below.
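A sketch of forcing the switch when other connections are open (DBName is a placeholder as above; WITH ROLLBACK IMMEDIATE kicks other sessions off and rolls back their open transactions):
ALTER DATABASE DBName
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;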
Then you can test it. For sessions A and B:
A: UPDATE table1 WITH (XLOCK) SET name = 'new' WHERE id = 1
B: you can still update other rows and select all the data from the table.
My English is not very good, but locking I do know.
In SQL Server, functionally, there are three kinds of locks:
1. Optimistic locking, using a timestamp (rowversion) column for version control.
2. Pessimistic locking, forcing locks while the data is in use, with hints such as UPDLOCK and XLOCK.
3. Application ("virtual") locks, using the sp_getapplock procedure; see the sketch below.
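A minimal sketch of an application lock guarding a task-queue handoff (the resource name 'task-queue' is made up; sp_getapplock itself is a built-in procedure):
BEGIN TRAN;
DECLARE @rc int;
EXEC @rc = sp_getapplock
    @Resource    = 'task-queue',   -- arbitrary lock name
    @LockMode    = 'Exclusive',
    @LockOwner   = 'Transaction',
    @LockTimeout = 5000;           -- wait up to 5 seconds
IF @rc >= 0
BEGIN
    -- claim and mark one task here
    COMMIT;   -- the lock is released with the transaction
END
ELSE
    ROLLBACK; -- could not get the lock in time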
If you need a locking scheme for your system architecture, please email me: mjjjj2001#163.com
Consider using Service Broker if this is a processing queue.
There are a number of considerations that affect performance and locking. I surmise that the data is being updated and deleted in a separate session. Which transaction isolation level is in use for the insert session and the delete session?
Have the insert session and all its transactions committed and closed by the time the delete session runs? Are there multiple delete sessions running concurrently? It is very important to have an index on the columns you are using to identify a task for the SELECT/UPDATE/DELETE statements, especially if you move to a higher isolation level such as REPEATABLE READ or SERIALIZABLE; see the sketch below.
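For example, a narrow index covering the task-lookup columns keeps the UPDATE and DELETE from scanning, and therefore locking, unrelated rows. A sketch, assuming hypothetical Status and TaskId columns on TASK:
CREATE INDEX IX_TASK_Status ON TASK (Status) INCLUDE (TaskId);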
All of these issues could be solved by moving to Service Broker if it is appropriate.
Story
I have a SPROC using snapshot isolation to perform several inserts via MERGE. This SPROC is called under very high load, often in parallel, so it occasionally throws error 3960, which indicates the snapshot transaction was rolled back because of update conflicts. This is expected given the high concurrency.
Problem
I've implemented a "retry" queue to perform this work again later on, but I am having difficulty reproducing the error to verify my checks are accurate.
Question
How can I reproduce a snapshot failure (3960, specifically) to verify my retry logic is working?
Already Tried
RAISERROR doesn't work because it doesn't allow me to raise existing system errors, only user-defined ones
I've tried re-inserting the same record, but this doesn't throw the same failure, since it isn't two different transactions "racing" one another
Open two connections, start a snapshot transaction on both, update a record on connection 1, update the same record on connection 2 (in the background, because it will block), then commit on connection 1.
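A minimal sketch of that race (the Dummy table and its columns are made up; the database needs ALLOW_SNAPSHOT_ISOLATION ON):
-- Connection 1
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
UPDATE Dummy SET Val = Val + 1 WHERE Id = 1;
-- Connection 2 (blocks on the row lock)
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
UPDATE Dummy SET Val = Val + 1 WHERE Id = 1;
-- Connection 1 again
COMMIT;  -- connection 2 now fails with error 3960 (update conflict)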
Or treat a user error as a 3960 ...
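If a deterministic stand-in is acceptable for testing, a sketch (the message text is made up; an ad hoc RAISERROR surfaces as error 50000):
BEGIN TRY
    RAISERROR('Simulated snapshot conflict (stand-in for 3960)', 16, 1);
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() IN (3960, 50000)
        PRINT 'enqueue for retry';  -- your retry-queue call goes here
END CATCH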
Why not just do this:
RAISERROR(3960, {sev}, {state})
Replacing {sev} and {state} with the actual values that you see when the error occurs in production?
(Nope, as Martin pointed out, that doesn't work.)
If not that, then I would suggest trying to run your test query multiple times simultaneously. I have done this myself to simulate other concurrency errors. It should be doable as long as the test query is not too fast (a couple of seconds at least).
I'm going to make up some SQL here. What I want is something like the following:
select ... for update priority 2; // Session 2
So when I run in another session
select ... for update priority 1; // Session 1
It immediately returns, throws an error in session 2 (and hence causes a rollback), and locks the row in session 1.
Then, whilst session 1 holds the lock, running the following in session 2.
select ... for update priority 2; // Session 2
Will wait until session 1 releases the lock.
How could I implement such a scheme? The priority x is just something I've made up; I only need something that can do two priority levels.
Also, I'm happy to hide all my logic in PL/SQL procedures, I don't need this to work for generic SQL statements.
I'm using Oracle 10g if that makes any difference.
I'm not aware of a way to interrupt an atomic operation in Oracle like you're suggesting. I think the only thing you could do is programmatically break down your larger processes into smaller ones and poll some type of sentinel table. So instead of doing a single update of 1 million rows, you could write a proc that updates 1,000, checks a jobs table (or something similar) to see whether a higher-priority process is running, and if one is, pauses its own execution through a wait loop. This is the only thing I can think of that would keep your session alive during the process.
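A minimal PL/SQL sketch of that polling idea (the jobs and my_rows tables and every column name here are made up):
DECLARE
  high_pending NUMBER;
BEGIN
  LOOP
    -- yield if a higher-priority job has registered itself
    SELECT COUNT(*) INTO high_pending
      FROM jobs
     WHERE priority = 1 AND status = 'RUNNING';
    EXIT WHEN high_pending > 0;
    -- otherwise process the next small chunk
    UPDATE my_rows
       SET processed = 'Y'
     WHERE processed = 'N' AND ROWNUM <= 1000;
    EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left to do
    COMMIT;                      -- release row locks between chunks
  END LOOP;
END;
/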
If you truly want to abort the progress of your currently running lower-priority thread, and losing your session is acceptable, then I would again suggest a jobs table that registers the SQL being run and the session ID it runs on. A higher-priority statement would check the jobs table and then issue a kill command to the low-priority session (http://www.oracle-base.com/articles/misc/KillingOracleSessions.php), along with inserting a record into the jobs table to note that it was killed. When a higher-priority process finishes, it could check the jobs table to see whether it was responsible for killing anything and, if so, reissue it.
That's what Resource Manager was implemented for.