Story
I have a SPROC using snapshot isolation to perform several inserts via MERGE. This SPROC is called under very high load, often in parallel, so it occasionally throws error 3960, which indicates the snapshot transaction rolled back because of an update conflict. This is expected given the high concurrency.
Problem
I've implemented a "retry" queue to perform this work again later on, but I am having difficulty reproducing the error to verify my checks are accurate.
Question
How can I reproduce a snapshot failure (3960, specifically) to verify my retry logic is working?
Already Tried
RAISERROR doesn't work because it doesn't allow me to raise existing system errors, only user-defined ones
I've tried re-inserting the same record, but this doesn't throw the same failure since it's not two different transactions "racing" one another
Open two connections and start a snapshot transaction on both. On connection 1, update a record; on connection 2, update the same record (in the background, because it will block); then commit on connection 1.
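If you want to script that race instead of juggling SSMS windows, here is a minimal sketch of the same two-connection approach using Python and pyodbc. The connection string, the dbo.TestRow table and its key value are placeholders, and the database is assumed to already have ALLOW_SNAPSHOT_ISOLATION ON:

import threading
import time
import pyodbc

# Placeholder connection string and test table; the database must already have
# ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON applied.
CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=TestDb;Trusted_Connection=yes"

def connection_two():
    # Connection 2: snapshot transaction updating the same row; it blocks behind
    # connection 1 and should fail with error 3960 once connection 1 commits.
    conn2 = pyodbc.connect(CONN_STR, autocommit=False)
    try:
        cur2 = conn2.cursor()
        cur2.execute("SET TRANSACTION ISOLATION LEVEL SNAPSHOT")
        cur2.execute("UPDATE dbo.TestRow SET Val = 'conn2' WHERE Id = 1")
        conn2.commit()
    except pyodbc.Error as e:
        print("Connection 2 failed as expected:", e)   # the message should mention error 3960
    finally:
        conn2.close()

# Connection 1: snapshot transaction that touches the row first.
conn1 = pyodbc.connect(CONN_STR, autocommit=False)
cur1 = conn1.cursor()
cur1.execute("SET TRANSACTION ISOLATION LEVEL SNAPSHOT")
cur1.execute("UPDATE dbo.TestRow SET Val = 'conn1' WHERE Id = 1")

worker = threading.Thread(target=connection_two)
worker.start()
time.sleep(2)      # give connection 2 time to take its snapshot and block
conn1.commit()     # releasing the lock turns connection 2's blocked update into a 3960 conflict
worker.join()
conn1.close()

Run it against a throwaway table; the exception caught on connection 2 is the one your retry queue should be handling.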
Or treat a user error as a 3960 ...
Why not just do this:
RAISERROR(3960, {sev}, {state})
Replacing {sev} and {state} with the actual values that you see when the error occurs in production?
(Nope, as Martin pointed out, that doesn't work.)
If not that then I would suggest trying to run your test query multiple times simultaneously. I have done this myself to simulate other concurrency errors. It should be doable as long as the test query is not too fast (a couple of seconds at least).
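If you'd rather script the simultaneous runs than open a handful of query windows by hand, a rough sketch with Python and pyodbc (the connection string and dbo.YourSproc are placeholders for your own):

from concurrent.futures import ThreadPoolExecutor
import pyodbc

CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=TestDb;Trusted_Connection=yes"  # placeholder

def call_sproc(i):
    # Each worker uses its own connection so the calls genuinely race each other.
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    try:
        conn.execute("EXEC dbo.YourSproc")   # placeholder procedure name
        return (i, "ok")
    except pyodbc.Error as e:
        return (i, str(e))                   # look for 3960 among the failures
    finally:
        conn.close()

with ThreadPoolExecutor(max_workers=10) as pool:
    for result in pool.map(call_sproc, range(10)):
        print(result)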
Related
I have a very large Redshift database that contains billions of rows of HTTP request data.
I have a table called requests which has a few important fields:
ip_address
city
state
country
I have a Python process running once per day, which grabs all distinct rows which have not yet been geocoded (do not have any city / state / country information), and then attempts to geocode each IP address via Google's Geocoding API.
This process (pseudocode) looks like this:
for ip_address in ips_to_geocode:
    country, state, city = geocode_ip_address(ip_address)
    execute_transaction('''
        UPDATE requests
        SET ip_country = %s, ip_state = %s, ip_city = %s
        WHERE ip_address = %s
    ''', (country, state, city, ip_address))
When running this code, I often receive errors like the following:
psycopg2.InternalError: 1023
DETAIL: Serializable isolation violation on table - 108263, transactions forming the cycle are: 647671, 647682 (pid:23880)
I'm assuming this is because I have other processes constantly logging HTTP requests into my table, so when I attempt to execute my UPDATE statement, it is unable to select all rows with the ip address I'd like to update.
My question is this: what can I do to update these records in a sane way that will stop failing regularly?
Your code is violating Redshift's serializable isolation level. You need to make sure your code is not opening multiple transactions on the same table before all open transactions are closed.
You can achieve this by locking the table in each transaction so that no other transaction can update it until the open transaction is closed. I'm not sure how your code is architected (synchronous or asynchronous), but this will increase the run time, since each lock forces the others to wait until the transaction finishes.
Refer: http://docs.aws.amazon.com/redshift/latest/dg/r_LOCK.html
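From Python, the locked version of the update would look roughly like this; a sketch assuming psycopg2 and the table/column names from the question, with placeholder connection details:

import psycopg2

conn = psycopg2.connect(dbname="mydb", host="my-cluster-host", port=5439,
                        user="user", password="password")  # placeholders

def update_geocode(ip_address, country, state, city):
    # LOCK and UPDATE run in the same transaction; the lock is released at COMMIT,
    # so concurrent writers queue up instead of hitting a serialization conflict.
    with conn.cursor() as cur:
        cur.execute("LOCK requests")
        cur.execute(
            "UPDATE requests SET ip_country = %s, ip_state = %s, ip_city = %s "
            "WHERE ip_address = %s",
            (country, state, city, ip_address),
        )
    conn.commit()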
Just got the same issue on my code, and this is how I fixed it:
First things first, it is good to know that this error code means you are trying to run concurrent operations in Redshift. Issuing a second query against a table before a query you ran moments ago has finished is one case where you would get this kind of error (that was my case).
The good news is: there is a simple way to serialize Redshift operations! You just need to use the LOCK command. Here is the Amazon documentation for the Redshift LOCK command. It basically works by making the next operation wait until the previous one is closed. Note that using this command will naturally make your script a little slower.
In the end, the practical solution for me was to insert the LOCK command before the query (in the same string, separated by a ';'). Something like this:
LOCK table_name; SELECT * from ...
And you should be good to go! I hope it helps you.
Since your geocode update process does point updates while the other processes are writing to the table, you can intermittently get the serializable isolation violation error depending on how and when another process writes to the same table.
Suggestions
One way is to use a table lock like Marcus Vinicius Melo has suggested in his answer.
Another approach is to catch the error and re-run the transaction (a sketch follows below).
Code that initiates a serializable transaction should be prepared to retry it when it hits this error. Since all transactions in Redshift are strictly serializable, all code initiating transactions in Redshift should be ready to retry them in the face of this error.
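A minimal retry sketch, assuming psycopg2; run_update() stands in for whatever function performs your UPDATE(s) on the connection, and the attempt count and backoff are illustrative only:

import time
import psycopg2

def with_retry(conn, run_update, attempts=5):
    for attempt in range(1, attempts + 1):
        try:
            run_update(conn)     # issue the UPDATE(s) on a cursor from conn
            conn.commit()
            return
        except psycopg2.Error as e:
            conn.rollback()      # abandon the failed transaction before trying again
            retryable = "Serializable isolation violation" in str(e)
            if not retryable or attempt == attempts:
                raise
            time.sleep(2 ** attempt)   # simple exponential backoff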
Explanations
The typical cause of this error is that two transactions started and proceeded in their operations in such a way that at least one of them cannot be completed as if they executed one after the other. So the db system chooses to abort one of them by throwing this error. This essentially gives control back to the transaction initiating code to take an appropriate course of action. Retry being one of them.
One way to prevent such a conflicting sequence of operations is to use a lock. But a lock also prevents many operations from running concurrently that would never have conflicted: it guarantees the error will not occur, at the cost of concurrency. The retry approach lets concurrency have its chance and handles the occasional conflict when it does occur.
Recommendation
That said, I would still recommend that you avoid point updates against Redshift like this. The geocode update process should write to a staging table and, once all records are processed, perform one single bulk update, followed by a VACUUM if required.
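A sketch of that staging approach with psycopg2; the staging table name, column sizes and connection details are assumptions, and for large volumes you would load the staging table with COPY from S3 rather than row-by-row inserts:

import psycopg2

conn = psycopg2.connect(dbname="mydb", host="my-cluster-host", port=5439,
                        user="user", password="password")  # placeholders

def bulk_update_geocodes(rows):
    # rows: iterable of (ip_address, country, state, city) tuples from the geocoder
    with conn.cursor() as cur:
        cur.execute("""
            CREATE TEMP TABLE geo_staging (
                ip_address VARCHAR(45),
                country    VARCHAR(64),
                state      VARCHAR(64),
                city       VARCHAR(64)
            )
        """)
        cur.executemany(
            "INSERT INTO geo_staging (ip_address, country, state, city) VALUES (%s, %s, %s, %s)",
            list(rows),
        )
        # One set-based update instead of thousands of point updates.
        cur.execute("""
            UPDATE requests
            SET ip_country = geo_staging.country,
                ip_state   = geo_staging.state,
                ip_city    = geo_staging.city
            FROM geo_staging
            WHERE requests.ip_address = geo_staging.ip_address
        """)
    conn.commit()

The VACUUM, if needed, has to run afterwards as its own autocommitted statement, since it cannot run inside a transaction block.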
Either start a new session when you do the second update on the same table, or commit once your transaction is complete.
You can run set autocommit=on before you start updating.
I'm pretty new to PL/SQL although I've got lots of DB experience with other RDBMSs. Here's my current issue.
procedure CreateWorkUnit
is
begin
    update workunit
    set workunitstatus = 2            -- workunit loaded
    where SYSDATE between START_DATE and END_DATE
      and workunitstatus = 1;         -- workunit created

    --commit here?

    loader;  -- loads records based on status, will have a commit of its own

    update workunit wu
    set workunititemcount = (select count(*) from workunititems wui where wui.wuid = wu.wuid)
    where workunitstatus = 2;
end CreateWorkUnit;
So the behaviour I'm seeing, with or without commit statements, is that I have to execute it twice: once to flip the statuses, and then the loader runs on the second execution. I'd like it all to run in one go.
I'd appreciate any words of Oracle wisdom.
Thanks!
When to commit transactions in a batch procedure? It is a good question, although it only seems vaguely related to the problems with the code you post. But let's answer it anyway.
We need to commit when the PL/SQL procedure has completed a unit of work. A unit of work is a business transaction. This would normally be at the end of the program, the last statement before the EXCEPTION section.
Sometimes not even then. The decision to commit or rollback properly lies with the top of the calling stack. If our PL/SQL is being called from a client (maybe a user clicking a button on a screen), then perhaps the client should issue the commit.
But it is not unreasonable for a batch process to manage its own commit (and rollback in the case of errors). The main point is that only the topmost procedure should issue COMMIT. If a procedure calls other procedures, those called programs should not issue commits or rollbacks. They should handle any errors (log, etc.) and re-raise them to the calling program; let it decide whether to rollback. Because all the called procedures run in the same session and hence the same transaction, a rollback in a called program would revert all the changes in the batch process. That's not right. The same reasoning applies to commits.
You will sometimes read advice on using intermittent commits to break up long running processes into smaller units e.g. every 1000 inserts. This is bad advice for several reasons, not all of them related to transactions. The pertinent ones are:
Issuing a commit frees locks and ends the current transaction mid-process; fetching across those intermittent commits is the classic cause of ORA-01555 "snapshot too old" errors.
It also breaks read consistency, which only applies at the statement and/or transaction level. This is the cause of ORA-01002 "fetch out of sequence" errors.
It affects re-startability. If the program fails having processed 30% of the records, can we be confident it will only process the remaining 70% when we re-run the batch?
Once we commit records other sessions can see those changes: does it make sense for other users to see a partially changed view of the data?
So, the words of "Oracle wisdom" are: always align the database transaction with the business transaction, with a single commit per unit of work.
Somebody mentioned autonomous transactions as a way of issuing commits in sub-processes. This is usually a bad idea. Changes made in an autonomous transaction are visible to other sessions but not to our own. That very rarely makes sense. It also creates the same problems with re-startability which I discussed earlier.
The only acceptable use for autonomous transactions is recording activity (error log, trace, audit records). We need that data to persist regardless of what happens in the wider transaction. Any other use of the pragma is almost certainly a workaround for a poor design, which actually just makes the problem worse.
You may not need to commit in the PL/SQL procedure. Procedures that you call from another procedure use the same session, so you don't need to commit in them. By the way, a procedure's work is rolled back completely if its session rolls back or an exception propagates.
I mis-classified my problem. I thought this was a transaction problem, but really it was one of my flags not being set as expected. A number field was NULL when I was expecting 0.
Sorry for that.
Josh Robinson
I have a SQL Server 2008 database and I have a problem with this database that I don't understand.
The steps that caused the problems are:
I ran a SQL query to update a table called authors from another table called authorAff
The authors table has 123,385,300 records and the authorAff table has 139,036,077
The query ran for about 7 days but didn't finish
I decided to cancel the query and do it another way
The connection on which I was running the query dropped suddenly, so the database went into recovery until the cancellation completed
The server was also shut down many times afterwards because of some electricity problems
The database took about two days to recover
Now when I run this query
SELECT TOP 1000 *
FROM AUTHORS WITH(READUNCOMMITTED)
It executes and returns results, but when I remove the WITH(READUNCOMMITTED) hint it gets blocked by a process running against the master database that appears only in Activity Monitor with the command [DB STARTUP], and no results show up.
So what is the DB STARTUP command, and if it's a problem, how can I solve it?
Thank you in advance.
I suspect that your user database is still trying to roll back the transaction that you cancelled. A general rule of thumb is that it will take about the same amount of time, or more, for an aborted transaction to roll back as it took to run.
The rollback can't be avoided, even with the SQL Server stops and starts you had.
The reason you can run a query WITH(READUNCOMMITTED) is that it ignores the locks associated with the transaction that is rolling back. Your query results are considered unreliable, but ironically, the results are probably what you want to see, since the blocking process is a rollback.
The best solution is to wait it out, if you can afford to do so. You may be able to find ways to kill the blocking process, but then you should be concerned with database integrity.
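If you want a rough idea of how far the rollback has progressed while you wait, sys.dm_exec_requests exposes percent_complete and estimated_completion_time for recovery/rollback work. A small sketch, assuming pyodbc and a placeholder connection string:

import pyodbc

CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=master;Trusted_Connection=yes"  # placeholder

conn = pyodbc.connect(CONN_STR, autocommit=True)
rows = conn.execute("""
    SELECT session_id, command, percent_complete, estimated_completion_time
    FROM sys.dm_exec_requests
    WHERE percent_complete > 0
""").fetchall()

for r in rows:
    # estimated_completion_time is reported in milliseconds
    print(r.session_id, r.command, round(r.percent_complete, 1), "% done,",
          r.estimated_completion_time, "ms remaining")
conn.close()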
Our application (which uses NHibernate and ASP.NET MVC) throws a lot of NHibernate transaction errors when put under stress tests. The major types are:
Transaction not connected, or was disconnected
Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
Transaction (Process ID 177) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Can someone help me in identifying the reason for Exception 1?
I know I have to handle the other exceptions in my code. Can someone point me to resources which can help me handle these errors in an efficient manner?
Q. How do we manage Sessions and Transactions?
A. We are using Autofac. For every server request, we create a new request container which has the session in the container lifetime scope. On activating the session we begin the transaction. When the request completes, we commit the transaction. In some cases, the transaction can be huge. To simplify, every server request is contained in a transaction.
Have a look at this thread:
http://n2cms.codeplex.com/Thread/View.aspx?ThreadId=85016
Basically what it says as a possible cause of this exception:
2010-02-17 21:01:41,204 1 WARN NHibernate.Util.ADOExceptionReporter - System.Data.SqlClient.SqlException: The transaction log for database 'databasename' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
As the transaction log's size is proportional to the amount of work done during the transaction, perhaps you ought to look into putting your transactional boundaries around the command handlers' handling of commands on the write side. You would then, with a session #X, load the state you wish to mutate, mutate it and commit it, all as one unit of work in #X.
With regards to the read side of things, you might then have another ISession #Y that reads data; this ISession could be used to batch reads within e.g. RepeatableRead or something similar with the Futures feature, and could simply be reading from a cache (albeit that being a crutch indeed). Doing it this way might help you recover from "errors" that aren't: livelocks, deadlocks and victim transactions.
The problem with using a transaction per request is that your ISession acquires a lot of book-keeping data while you are working, all of which is part of the transaction. The database then marks the data (rows, columns, tables, etc.) as partaking in the transaction, causing the wait-graph to span 'entities' (in the database sense, not the DDD sense) which are not actually part of the transactional boundary of the command your application took.
For the record (other people googling this), Fabio had a post dealing with exceptions from the data layer. Quoting some of his code:
public class MsSqlExceptionConverterExample : ISQLExceptionConverter
{
    public Exception Convert(AdoExceptionContextInfo exInfo)
    {
        var sqle = ADOExceptionHelper.ExtractDbException(exInfo.SqlException) as SqlException;
        if (sqle != null)
        {
            switch (sqle.Number)
            {
                case 547:
                    return new ConstraintViolationException(exInfo.Message,
                        sqle.InnerException, exInfo.Sql, null);
                case 208:
                    return new SQLGrammarException(exInfo.Message,
                        sqle.InnerException, exInfo.Sql);
                case 3960:
                    return new StaleObjectStateException(exInfo.EntityName, exInfo.EntityId);
            }
        }
        return SQLStateConverter.HandledNonSpecificException(exInfo.SqlException,
            exInfo.Message, exInfo.Sql);
    }
}
547 is the exception number for constraint conflict.
208 is the exception number for an invalid object name in the SQL.
3960 is the exception number for Snapshot isolation transaction aborted due to update conflict.
So if you are running into concurrency issues like what you describe, remember that they will invalidate your ISession and that you'll have to handle them like the above.
Part of what you might be looking for is CQRS, where you have separate read and write-sides. This might help: http://abdullin.com/cqrs/, http://cqrsinfo.com.
So to summarize: your problems might be related to the way you handle your transactions. Also, try running select log_reuse_wait_desc from sys.databases where name='MyDBName' and see what it gives you.
This thread has an explanation:
http://groups.google.com/group/nhusers/browse_thread/thread/7f5fb68a00829d13
In short, the database probably rolls back the transaction by itself due to some error, so that when you try to rollback the transaction later it is already rolled back and in a zombie state. This tends to hide the actual reason for the rollback since all you see is a TransactionException instead of the exception that actually triggered the rollback in the first place.
I don't think there is much you can do about it beyond logging it and trying to figure out what is causing the underlying error.
I know this post was a while back and assume you fixed it, but it seems like you have thread-sharing issues with the NHibernate ISession, which is not thread-safe. Basically, one thread starts a transaction and another attempts to close it, causing all sorts of chaos.
I have a vendor reporting product executing queries to pull report data: no inserts, no updates, just reads.
We have doubled our heap size three times and are now at 1024 4K pages. The app will run fine for a week, then we begin to see DB2 SQL error SQLCODE -954, SQLSTATE 57011, indicating the transaction log is not able to accommodate the request.
It's not the size of the reports, since they run fine after a recycle. I spoke with another DBA about this. He believes the problem is a difference between Oracle and DB2: the vendor code is poorly written and does not issue commits on the selects. This causes the references not to be cleaned up, and garbage slowly accumulates in the heap.
I wanted to know if this is accurate, as I thought only inserts and updates needed commits. Is there any IBM documentation on this?
We are currently recycling on a weekly basis to alleviate the problem but I would like to have a good handle on the issue before going back to the vendor asking them to alter their code.
Any transaction needs to be properly terminated -- why did you think that only applies to inserts and updates? Consider running transactionally a "select a from b where c > 12" and then "select a from b where c <= 12"; within a transaction the DB has to guarantee that every a gets returned exactly once either from the first or second select, not both (assuming c is never null;-). Without transactionality, some a's might fall between the cracks or be returned twice if their corresponding c was changed by a different transaction, and that's just not ACID!-)
So when you do not need separate SELECT queries to be transactional wrt each other, tell the DB! And the way you tell, is by terminating the transaction after each select (normally commit is what you use for the purpose, though I guess you could, indifferently, choose to use rollback here;-).
Per Alex's response, the first SQL activity after any CONNECT, COMMIT, or ROLLBACK initiates a transaction.
To get a handle on your resource issue (transaction logs full), you should investigate your application that issues the reports - ensure that transactions are being closed out explicitly in code. I've seen cases where application developers rely upon the Garbage Collector to clean up database objects - while those objects are waiting for cleanup, the database resources (transactions) are held open.
It's always good practice to explicitly COMMIT or ROLLBACK your transactions as soon as you are done with the data - regardless of the programming methodology you use.
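As a small illustration of "terminate read-only transactions too", in Python DB-API style; get_connection() is a hypothetical stand-in for however the reporting code obtains its DB2 connection:

def run_report(get_connection, queries):
    # Run a batch of read-only report queries, then explicitly end the transaction.
    conn = get_connection()          # hypothetical factory returning a DB-API connection
    try:
        cur = conn.cursor()
        results = []
        for sql in queries:
            cur.execute(sql)         # even SELECTs run inside the open transaction
            results.append(cur.fetchall())
        conn.commit()                # release the transaction and its resources
        return results
    except Exception:
        conn.rollback()              # rollback is just as good for read-only work
        raise
    finally:
        conn.close()                 # don't leave cleanup to the garbage collector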
I get this error when committing a transaction on a SELECT query, but despite the error it does return a result set that includes the queried data.
tran.Commit();
error [hy011] [ibm] cli0126e the operation is invalid sqlstate=hy011
I changed my code to tran.Rollback(); and the error disappeared.
Can anyone explain this behavior?