Setting lock mode in Informix to wait before executing a query - VB.NET

In my VB.NET application I have a query that updates one column in a table.
But because the lock mode for this database is
SET LOCK MODE TO NOT WAIT
I sometimes get errors like this when running the update:
SQL ERR: EIX000: (-144) ISAM error: key value locked
EIX000: (-245) Could not position within a file via an index. (informix.table1)
My question is: is it safe to execute:
1st SET LOCK MODE TO WAIT;
2nd the update query;
3rd SET LOCK MODE TO NOT WAIT;
Or can you point me to another solution if this is not safe?

It is "safe" to do the three operations as suggested, but …
Your application may block for an indefinite time while the operation runs.
If you terminate the query somehow but don't reset the lock mode, other parts of your code may get hung on locks unexpectedly.
Consider whether a wait with timeout is appropriate.
Each thread, if there are threads, should have exclusive access to one connection for the duration of the three operations.
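For illustration, a minimal sketch of that three-step sequence with a timeout, assuming a generic Python DB-API connection to the Informix database; the driver, the parameter style, and the table/column names are placeholders, not taken from your application:

def update_with_lock_wait(conn, new_value, row_id, timeout_seconds=10):
    cur = conn.cursor()
    try:
        # Wait up to timeout_seconds for conflicting locks instead of failing immediately.
        cur.execute("SET LOCK MODE TO WAIT %d" % timeout_seconds)
        # Placeholder table/column names; adjust to your schema and your driver's parameter style.
        cur.execute("UPDATE table1 SET col1 = ? WHERE id = ?", (new_value, row_id))
        conn.commit()
    finally:
        # Always restore the original behaviour, even if the UPDATE fails,
        # so other parts of the application are not left waiting on locks unexpectedly.
        cur.execute("SET LOCK MODE TO NOT WAIT")
        cur.close()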

Related

Redshift: Serializable isolation violation on table

I have a very large Redshift database that contains billions of rows of HTTP request data.
I have a table called requests which has a few important fields:
ip_address
city
state
country
I have a Python process running once per day, which grabs all distinct rows which have not yet been geocoded (do not have any city / state / country information), and then attempts to geocode each IP address via Google's Geocoding API.
This process (pseudocode) looks like this:
for ip_address in ips_to_geocode:
    country, state, city = geocode_ip_address(ip_address)
    execute_transaction('''
        UPDATE requests
        SET ip_country = %s, ip_state = %s, ip_city = %s
        WHERE ip_address = %s
    ''')
When running this code, I often receive errors like the following:
psycopg2.InternalError: 1023
DETAIL: Serializable isolation violation on table - 108263, transactions forming the cycle are: 647671, 647682 (pid:23880)
I'm assuming this is because I have other processes constantly logging HTTP requests into my table, so when I attempt to execute my UPDATE statement, it is unable to select all rows with the ip address I'd like to update.
My question is this: what can I do to update these records in a sane way that will stop failing regularly?
Your code is violating the serializable isolation level of Redshift. You need to make sure that your code is not trying to open multiple transactions on the same table before closing all open transactions.
You can achieve this by locking the table in each transaction so that no other transaction can access the table for updates until the open transaction is closed. I'm not sure how your code is architected (synchronous or asynchronous), but this will increase the run time, as each lock forces the others to wait until the transaction finishes.
Refer: http://docs.aws.amazon.com/redshift/latest/dg/r_LOCK.html
I just got the same issue in my code, and this is how I fixed it:
First things first, it is good to know that this error means you are trying to run concurrent operations in Redshift. Issuing a second query against a table before an earlier query on it has finished is one case where you would get this kind of error (that was my case).
The good news is that there is a simple way to serialize Redshift operations: the LOCK command (see the Amazon documentation for Redshift's LOCK command). It basically makes the next operation wait until the previous one is closed. Note that using this command will naturally make your script a little slower.
In the end, the practical solution for me was to insert the LOCK command before the query statements (in the same string, separated by a ';'). Something like this:
LOCK table_name; SELECT * from ...
And you should be good to go! I hope it helps you.
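For what it's worth, here is a minimal psycopg2 sketch of that pattern, assuming conn is an open psycopg2 connection to the cluster and using the table/column names from the question; the commit at the end is what releases the table lock:

def geocode_update(conn, ip_address, country, state, city):
    cur = conn.cursor()
    # LOCK takes an exclusive lock on the table until this transaction ends,
    # serializing writers and avoiding the isolation violation.
    cur.execute("LOCK requests")
    cur.execute(
        "UPDATE requests SET ip_country = %s, ip_state = %s, ip_city = %s "
        "WHERE ip_address = %s",
        (country, state, city, ip_address),
    )
    conn.commit()
    cur.close()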
Since you are doing a point update in your geo codes update process, while the other processes are writing to the table, you can intermittently get the Serializable isolation violation error depending on how and when the other process does its write to the same table.
Suggestions
One way is to use a table lock like Marcus Vinicius Melo has suggested in his answer.
Another approach is to catch the error and re-run the transaction (a sketch follows at the end of this answer).
Since all transactions in Redshift are strictly serializable, any code that initiates a transaction in Redshift should be prepared to retry it when it fails with this error.
Explanations
The typical cause of this error is that two transactions started and proceeded in their operations in such a way that at least one of them cannot be completed as if they executed one after the other. So the db system chooses to abort one of them by throwing this error. This essentially gives control back to the transaction initiating code to take an appropriate course of action. Retry being one of them.
One way to prevent such a conflicting sequence of operations is to use a lock. But then it restricts many of the cases from executing concurrently which would not have resulted in a conflicting sequence of operations. The lock will ensure that the error will not occur but will also be concurrency restricting. The retry approach lets concurrency have its chance and handles the case when a conflict does occur.
Recommendation
That said, I would still recommend that you don't update Redshift in this manner, like point updates. The geo codes update process should write to a staging table, and once all records are processed, perform one single bulk update, followed by a vacuum if required.
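To illustrate the retry approach suggested above, a hedged psycopg2 sketch that rolls back and re-runs the statement a bounded number of times; matching on the error text and the backoff values are assumptions, not Redshift-specific guarantees:

import time
import psycopg2

def run_with_retry(conn, sql, params, max_attempts=5):
    for attempt in range(1, max_attempts + 1):
        try:
            cur = conn.cursor()
            cur.execute(sql, params)
            conn.commit()
            cur.close()
            return
        except psycopg2.Error as exc:
            # Redshift aborts one of the conflicting transactions; roll back and retry.
            conn.rollback()
            if "Serializable isolation violation" not in str(exc) or attempt == max_attempts:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff before retrying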
Either start a new session when you do the second update on the same table, or 'commit' once your transaction is complete.
You can write set autocommit=on before you start updating.

Strange PostgreSQL deadlock issue with SELECT FOR UPDATE

I am building a locking system based on PostgreSQL, I have two methods, acquire and release.
For acquire, it works like this
BEGIN
while True:
    SELECT id FROM my_locks WHERE locked = false AND id = '<NAME>' FOR UPDATE
    if no rows return:
        continue
    UPDATE my_locks SET locked = true WHERE id = '<NAME>'
    COMMIT
    break
And for release
BEGIN
UPDATE my_locks SET locked = false WHERE id = '<NAME>'
COMMIT
This looks pretty straightforward, but it doesn't work. The strange part of it is, I thought
SELECT id FROM my_locks WHERE locked = false AND id = '<NAME>' FOR UPDATE
should acquire the lock on the target row only if that row's locked is false. But in reality, it's not like that. Somehow, even when no locked = false row exists, it acquires a lock anyway. As a result, I have a deadlock issue. It looks like this:
Release is waiting for SELECT FOR UPDATE, and SELECT FOR UPDATE is looping forever while holding a lock for no reason.
To reproduce the issue, I wrote a simple test here
https://gist.github.com/victorlin/d9119dd9dfdd5ac3836b
You can run it with psycopg2 and pytest, remember to change the database setting, and run
pip install pytest psycopg2
py.test -sv test_lock.py
The test case plays out like this:
Thread-1 runs the SELECT and acquires the record lock.
Thread-2 runs the SELECT and enters the lock's wait queue.
Thread-1 runs the UPDATE / COMMIT and releases the lock.
Thread-2 acquires the lock. Detecting that the record has changed since its SELECT, it rechecks the data against its WHERE condition. The check fails, and the row is filtered out of the result set, but the lock is still held.
This behaviour is mentioned in the FOR UPDATE documentation:
...rows that satisfied the query conditions as of the query snapshot will be locked, although they will not be returned if they were updated after the snapshot and no longer satisfy the query conditions.
This can have some unpleasant consequences, so a superfluous lock isn't that bad, all things considered.
Probably the simplest workaround is to limit the lock duration by committing after every iteration of acquire. There are various other ways to prevent it from holding this lock (e.g. SELECT ... NOWAIT, running in a REPEATABLE READ or SERIALIZABLE isolation level, SELECT ... SKIP LOCKED in Postgres 9.5).
I think the cleanest implementation using this retry-loop approach would be to skip the SELECT altogether, and just run an UPDATE ... WHERE locked = false, committing each time. You can tell if you acquired the lock by checking cur.rowcount after calling cur.execute(). If there is additional information you need to pull from the lock record, you can use an UPDATE ... RETURNING statement.
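A sketch of that UPDATE-based acquire/release in psycopg2, assuming conn is a psycopg2 connection and the my_locks table from the question; the polling interval is an arbitrary choice:

import time

def acquire(conn, name, poll_seconds=0.1):
    cur = conn.cursor()
    while True:
        # Atomically claim the lock row; rowcount tells us whether we got it.
        cur.execute(
            "UPDATE my_locks SET locked = true WHERE id = %s AND locked = false",
            (name,),
        )
        acquired = cur.rowcount == 1
        conn.commit()  # commit every attempt so no row lock is held while polling
        if acquired:
            cur.close()
            return
        time.sleep(poll_seconds)

def release(conn, name):
    cur = conn.cursor()
    cur.execute(
        "UPDATE my_locks SET locked = false WHERE id = %s AND locked = true",
        (name,),
    )
    conn.commit()
    cur.close()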
But I would have to agree with @Kevin, and say that you'd probably be better off leveraging Postgres' built-in locking support than trying to reinvent it. It would solve a lot of problems for you, e.g.:
Deadlocks are automatically detected
Waiting processes are put to sleep, rather than having to poll the server
Lock requests are queued, preventing starvation
Locks would (generally) not outlive a failed process
The easiest way might be to implement acquire as SELECT FROM my_locks FOR UPDATE, release simply as COMMIT, and let the processes contend for the row lock. If you need more flexibility (e.g. blocking/non-blocking calls, transaction/session/custom scope), advisory locks should prove useful.
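For comparison, a minimal advisory-lock sketch in psycopg2: pg_advisory_lock blocks until the session-level lock is granted, and pg_advisory_unlock releases it. The integer key is an arbitrary stand-in for your lock name:

def acquire(conn, lock_key):
    cur = conn.cursor()
    # Blocks until the session-level advisory lock identified by lock_key is granted.
    cur.execute("SELECT pg_advisory_lock(%s)", (lock_key,))
    cur.close()

def release(conn, lock_key):
    cur = conn.cursor()
    # Releases the advisory lock taken earlier in this session.
    cur.execute("SELECT pg_advisory_unlock(%s)", (lock_key,))
    cur.close()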
PostgreSQL normally aborts transactions which deadlock:
The use of explicit locking can increase the likelihood of deadlocks, wherein two (or more) transactions each hold locks that the other wants. For example, if transaction 1 acquires an exclusive lock on table A and then tries to acquire an exclusive lock on table B, while transaction 2 has already exclusive-locked table B and now wants an exclusive lock on table A, then neither one can proceed. PostgreSQL automatically detects deadlock situations and resolves them by aborting one of the transactions involved, allowing the other(s) to complete. (Exactly which transaction will be aborted is difficult to predict and should not be relied upon.)
Looking at your Python code, and at the screenshot you showed, it appears to me that:
Thread 3 is holding the locked=true lock, and is waiting to acquire a row lock.
Thread 1 is also waiting for a row lock, and also the locked=true lock.
The only logical conclusion is that Thread 2 is somehow holding the row lock, and waiting for the locked=true lock (note the short time on that query; it is looping, not blocking).
Since Postgres is not aware of the locked=true lock, it is unable to abort transactions to prevent deadlock in this case.
It's not immediately clear to me how T2 acquired the row lock, since all the information I've looked at says it can't do that:
FOR UPDATE causes the rows retrieved by the SELECT statement to be locked as though for update. This prevents them from being locked, modified or deleted by other transactions until the current transaction ends. That is, other transactions that attempt UPDATE, DELETE, SELECT FOR UPDATE, SELECT FOR NO KEY UPDATE, SELECT FOR SHARE or SELECT FOR KEY SHARE of these rows will be blocked until the current transaction ends; conversely, SELECT FOR UPDATE will wait for a concurrent transaction that has run any of those commands on the same row, and will then lock and return the updated row (or no row, if the row was deleted). Within a REPEATABLE READ or SERIALIZABLE transaction, however, an error will be thrown if a row to be locked has changed since the transaction started. For further discussion see Section 13.4.
I was not able to find any evidence of PostgreSQL "magically" upgrading row locks to table locks or anything similar.
But what you're doing is not obviously safe, either. You're acquiring lock A (the row lock), then acquiring lock B (the explicit locked=true lock), then releasing and re-acquiring A, before finally releasing B and A in that order. This does not properly observe a lock hierarchy since we try both to acquire A while holding B and vice-versa. But OTOH, acquiring B while holding A should not fail (I think), so I'm still not sure this is outright wrong.
Quite frankly, it's my opinion that you'd be better off just using the LOCK TABLE statement on an empty table. Postgres is aware of these locks and will detect deadlocks for you. It also saves you the trouble of the SELECT FOR UPDATE finagling.
Also, you should add locked = true in the release code:
BEGIN
UPDATE my_locks SET locked = false WHERE id = '<NAME>' AND locked = true
COMMIT
Otherwise, you are updating the record whatever its locked state is (in your case, even when locked = false), increasing the odds of causing a deadlock.

Delphi XE - getting around a stalled sql query due to a locked table exception

Normally, I encapsulate the query.active in a try/except/end with a messagedlg appearing to prompt the user to retry after a few moments ...
try
  query.active := true;
except
  if messagedlg('someone is locking the table. wait about 10 seconds before clicking retry', mtinformation, [mbretry], 0) = mrretry then begin
    query.active := true;
  end;
end;
but this isn't ideal, since the user can just immediately click the retry,
and the query will fail if the table is still locked.
Even a dirty read doesn't sidestep the table locking.
Since automation is now a factor, with no user to attend to the prompt,
has anyone discovered a way to let the query continue attempting execution
even if the table remains locked for a bit?
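One common pattern for the unattended case is to replace the modal dialog with a bounded retry loop and a delay. A rough, language-agnostic sketch of the idea (shown in Python only for brevity; run_query and is_lock_error are hypothetical stand-ins for the query activation and the lock-error check):

import time

def open_query_with_retry(run_query, is_lock_error, max_attempts=10, delay_seconds=10):
    # Keep retrying while the table is locked, instead of prompting a user.
    for attempt in range(1, max_attempts + 1):
        try:
            return run_query()
        except Exception as exc:
            if not is_lock_error(exc) or attempt == max_attempts:
                raise
            time.sleep(delay_seconds)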

How can I force a Snapshot Isolation failure of 3960

Story
I have a SPROC using Snapshot Isolation to perform several inserts via MERGE. This SPROC is called under very high load and often in parallel, so it occasionally throws Error 3960, which indicates the snapshot rolled back because of change conflicts. This is expected because of the high concurrency.
Problem
I've implemented a "retry" queue to perform this work again later on, but I am having difficulty reproducing the error to verify my checks are accurate.
Question
How can I reproduce a snapshot failure (3960, specifically) to verify my retry logic is working?
Already Tried
RAISERROR doesn't work because it doesn't allow me to raise existing errors, only user-defined ones
I've tried re-inserting the same record, but this doesn't throw the same failure since it's not two different transactions "racing" one another
Open two connections and start a snapshot transaction on both. On connection 1, update a record; on connection 2, update the same record (in the background, because it will block); then commit on connection 1.
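A rough sketch of that reproduction using two pyodbc connections to SQL Server; the connection string, table, and key are placeholders, and it assumes snapshot isolation is already enabled on the database:

import threading
import pyodbc

CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes"  # placeholder

def update_in_snapshot(conn, value):
    cur = conn.cursor()
    cur.execute("SET TRANSACTION ISOLATION LEVEL SNAPSHOT")
    cur.execute("BEGIN TRANSACTION")
    # Placeholder table and key; error 3960 surfaces on whichever update loses the race.
    cur.execute("UPDATE dbo.MyTable SET Col = ? WHERE Id = 1", value)
    return cur

conn1 = pyodbc.connect(CONN_STR, autocommit=True)
conn2 = pyodbc.connect(CONN_STR, autocommit=True)

cur1 = update_in_snapshot(conn1, "a")                     # takes the row lock
blocker = threading.Thread(target=update_in_snapshot, args=(conn2, "b"))
blocker.start()                                           # blocks behind connection 1's update
cur1.execute("COMMIT")                                    # connection 2 should now fail with error 3960
blocker.join()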
Or treat a user error as a 3960 ...
Why not just do this:
RAISERROR(3960, {sev}, {state})
Replacing {sev} and {state} with the actual values that you see when the error occurs in production?
(Nope, as Martin pointed out, that doesn't work.)
If not that then I would suggest trying to run your test query multiple times simultaneously. I have done this myself to simulate other concurrency errors. It should be doable as long as the test query is not too fast (a couple of seconds at least).

Voluntary transaction priority in Oracle

I'm going to make up some sql here. What I want is something like the following:
select ... for update priority 2; // Session 2
So when I run in another session
select ... for update priority 1; // Session 1
It immediately returns, and throws an error in session 2 (and hence does a rollback), and locks the row in session 1.
Then, whilst session 1 holds the lock, running the following in session 2.
select ... for update priority 2; // Session 2
Will wait until session 1 releases the lock.
How could I implement such a scheme? The priority x is just something I've made up; I only need something that can handle two priority levels.
Also, I'm happy to hide all my logic in PL/SQL procedures, I don't need this to work for generic SQL statements.
I'm using Oracle 10g if that makes any difference.
I'm not aware of a way to interrupt an atomic process in Oracle like you're suggesting. I think the only thing you could do would be to programmatically break down your larger processes into smaller ones and poll some type of sentinel table. So instead of doing a single update for 1 million rows, you could write a proc that updates 1k rows, checks a jobs table (or something similar) to see if a higher-priority process is running, and if one is, pauses its own execution through a wait loop. This is the only thing I can think of that would keep your session alive during this process.
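A rough sketch of that chunk-and-poll idea in Python (cx_Oracle-style bind variables); the jobs and big_table names and columns are entirely hypothetical:

import time

def chunked_update(conn, my_priority, chunk_size=1000):
    cur = conn.cursor()
    while True:
        # Pause while any higher-priority job is registered as running.
        while True:
            cur.execute(
                "SELECT COUNT(*) FROM jobs WHERE status = 'RUNNING' AND priority < :p",
                p=my_priority,
            )
            if cur.fetchone()[0] == 0:
                break
            time.sleep(5)
        # Process the next small batch, committing so row locks are released quickly.
        cur.execute(
            "UPDATE big_table SET processed = 1 WHERE processed = 0 AND ROWNUM <= :n",
            n=chunk_size,
        )
        updated = cur.rowcount
        conn.commit()
        if updated == 0:
            break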
If you truly want to abort the progress of your currently running, lower priority thread and losing your session is acceptable, then I would suggest a jobs table again that registered the SQL that was being run and the session ID that it is run on. If you run a higher priority statement it should again check the jobs table and then issue a kill command to the low priority session (http://www.oracle-base.com/articles/misc/KillingOracleSessions.php) along with inserting a record into the jobs table to note the fact that it was killed. When a higher-priority process finishes it could check the jobs table to see if it was responsible for killing anything and if so, reissue it.
That's what Oracle Resource Manager was implemented for.