Constraint problems when dropping NOT NULL on IBM iSeries AS/400 - sql

Please help me with this. I execute
ALTER TABLE MY_TABLE DATA CAPTURE NONE;
ALTER TABLE MY_TABLE ALTER MY_COLUMN DROP NOT NULL;
but then I get the following error:
SQL State: 57007
Vendor Code: -910
Message: [SQL0910] Object MY_TABLE in MY_SCHEMA type *FILE has a pending change.
Cause . . . . . : The object MY_TABLE has a pending change made under commitment control that is preventing this operation. One of the following circumstances may apply:
- This application process has performed an operation on this object under commitment control. The transaction is not committed. The application process is now trying to change the same object using a commitment control level of *NONE.
- A different application process has performed an operation on this object under commitment control. The transaction is not committed.
- This application process has performed an operation on this object under commitment control using a different commitment definition. The transaction is not committed.
You cannot change the table until the changes are committed or rolled back.
Recovery . . . : Do one of the following and retry the request:
- If your application process issued the uncommitted operation, issue a COMMIT or ROLLBACK before attempting any other operation on this object, or issue the statement from a program using a commitment control level other than *NONE.
- If the application process that issued the uncommitted operation on this object is not your application, that application process must perform a COMMIT or ROLLBACK.
- If the application process issued the uncommitted operation using a different commitment definition, issue a COMMIT or ROLLBACK against that commitment definition.
Issue a COMMIT or ROLLBACK before attempting an ALTER TABLE statement on this object.
Please help me!!

The condition indicates that prior work under commitment control was pending: either awaiting a normal COMMIT or ROLLBACK, or left behind because the prior processing was interrupted and the job ended without being able to clean up the commit definition. Prior messages in the joblog would tell more about the condition than the sqlcode -910 and msg SQL0910 can reveal; the spooled joblog would most likely show a prior msg CPF325E naming the activation group and logical unit of work, preceded by a msg CPF70A6 with a reason code. Without reviewing the spooled joblog for such previously logged messages, the origin is unlikely to be learned. There is also the option to use Work with Commitment Definitions (WRKCMTDFN) [with the option to search all jobs], and to review the journal for transactions that were requested but never committed.
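If the pending work belongs to your own job (the first circumstance in the message text), the direct fix is simply to end that unit of work in the same connection and retry. A minimal, hypothetical sketch, assuming IBM's ibm_db_dbi DB-API driver and placeholder connection details:
import ibm_db_dbi  # IBM's Python DB-API driver; an assumption, not from the question

# Placeholder connection string; real values depend on your system.
conn = ibm_db_dbi.connect(
    "DATABASE=MYDB;HOSTNAME=myhost;PORT=446;UID=user;PWD=secret", "", "")
cur = conn.cursor()

# End the pending unit of work first; conn.rollback() would work equally well.
conn.commit()

# Now the ALTER statements from the question should no longer hit SQL0910.
cur.execute("ALTER TABLE MY_TABLE DATA CAPTURE NONE")
cur.execute("ALTER TABLE MY_TABLE ALTER MY_COLUMN DROP NOT NULL")
conn.commit()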
If the origin is an interrupted process that no longer exists, the IPL will run journal/commit/database recovery phases under the SCPF job [whose joblog is spooled only after the system portion of the IPL completes; i.e. looking at the active SCPF joblog is of no help], with messages logged to QSYSOPR and/or QHST [see DSPLOG QHST for both, in addition to the spooled QPJOBLOG from the SCPF job for the IPL after the issue].
If the problem were to persist after an IPL, run Reclaim Storage (RCLSTG) with SELECT(*ALL) [optionally omitting other components, though best not to omit *DBXREF: a lost database cross-reference change may surface later as an error when re-creating or otherwise performing work on the database file that had the prior errors/recovery activity. If it was omitted on the SELECT(*ALL) request, a second request could be just RCLSTG SELECT(*DBXREF)].
Note: There have been past defects whereby work not performed under commitment control incorrectly left a file in recovery with an indication that prior work was done under commitment control; the effect is little different, other than a different message identifier, in that in both cases the file with pending work is protected from other change activity. Files with interrupted work outside of commitment control are permitted to be deleted, for example with Delete File (DLTF) or DROP TABLE; errors logged during the request are ignored by the DB, but be careful to review the messages for conditions indicating a possible pending issue, such as the member not having been deleted, which means that a replacement for the file, whether by create, restore, or add member, will fail. Such difficulties are likely to be resolved only by contacting a service provider.

Related

Postgres could not serialize access due to concurrent update

I have an issue with "could not serialize access due to concurrent update". I checked logs and I can clearly see that two transactions were trying to update a row at the same time.
my sql query
UPDATE sessionstore SET valid_until = %s WHERE sid = %s;
How can I tell postgres to "try" update row without throwing any exception?
There is a caveat here, which has been mentioned in the comments: this error can only occur under the REPEATABLE READ isolation level or higher. Ask yourself why you are using it; it is not typically required unless you really have a specific reason.
Your problem will go away if you use the standard READ COMMITTED. But it is still better to use SKIP LOCKED, to avoid both lock waits and redundant updates with their wasted WAL traffic.
As of Postgres 9.5, there is a much better way to handle this:
UPDATE sessionstore
SET valid_until = %s
WHERE sid = (
    SELECT sid FROM sessionstore
    WHERE sid = %s
    FOR UPDATE SKIP LOCKED
);
The first transaction to acquire the lock in SELECT ... FOR UPDATE SKIP LOCKED causes any conflicting transaction to select nothing, making its UPDATE a no-op. As requested, no exception is thrown.
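For completeness, a minimal sketch of how that statement might be driven from client code, assuming psycopg2 and placeholder connection details and values (the %s placeholders above already match psycopg2's parameter style):
from datetime import datetime, timedelta
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # placeholder DSN
sid = "abc123"                                  # placeholder session id
new_valid_until = datetime.utcnow() + timedelta(hours=1)

with conn, conn.cursor() as cur:
    # If another transaction holds the row lock, the subquery returns no row,
    # the UPDATE matches nothing, and rowcount is 0 -- no exception raised.
    cur.execute("""
        UPDATE sessionstore
        SET valid_until = %s
        WHERE sid = (
            SELECT sid FROM sessionstore
            WHERE sid = %s
            FOR UPDATE SKIP LOCKED
        )
    """, (new_valid_until, sid))
    print("rows updated:", cur.rowcount)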
See SKIP LOCKED notes here:
https://www.postgresql.org/docs/current/static/sql-select.html
Also, the advice about a savepoint is not specific enough. What if the UPDATE fails for a reason other than a serialization error, like an actual deadlock? You don't want to silently ignore all errors. Savepoints are also in general a bad idea here: an exception handler or a savepoint for every row update is a lot of extra overhead, especially under high traffic. That is why you should use READ COMMITTED and SKIP LOCKED together: they simplify the matter, and any remaining error is then truly unexpected.
The canonical way to do that would be to set a savepoint before the UPDATE:
SAVEPOINT x;
If the update fails,
ROLLBACK TO SAVEPOINT x;
and the transaction can continue.
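A hedged sketch of that savepoint pattern from client code (psycopg2 again; names and values are placeholders), showing that only the failed statement is undone while the surrounding transaction continues:
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # placeholder DSN
cur = conn.cursor()

cur.execute("SAVEPOINT x")
try:
    cur.execute("UPDATE sessionstore SET valid_until = now() WHERE sid = %s",
                ("abc123",))
except psycopg2.Error:
    cur.execute("ROLLBACK TO SAVEPOINT x")  # undo only the failed UPDATE
else:
    cur.execute("RELEASE SAVEPOINT x")
# ... more work in the same transaction ...
conn.commit()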
The default isolation level is "read committed"; you don't need to change it unless you have a specific use case.
You must be using the "repeatable read" or "serializable" isolation level. Under those, the current transaction rolls back if an already-running transaction updates a value that the current transaction was also supposed to update.
This scenario is handled easily by the "read committed" isolation level, where the current transaction accepts the updated value from the other transaction and performs its own instructions after the previous transaction has committed:
ALTER DATABASE your_database SET DEFAULT_TRANSACTION_ISOLATION TO 'read committed';
Ref: https://www.postgresql.org/docs/9.5/transaction-iso.html

Redshift: Serializable isolation violation on table

I have a very large Redshift database that contains billions of rows of HTTP request data.
I have a table called requests which has a few important fields:
ip_address
city
state
country
I have a Python process running once per day, which grabs all distinct rows which have not yet been geocoded (do not have any city / state / country information), and then attempts to geocode each IP address via Google's Geocoding API.
This process (pseudocode) looks like this:
for ip_address in ips_to_geocode:
    country, state, city = geocode_ip_address(ip_address)
    execute_transaction('''
        UPDATE requests
        SET ip_country = %s, ip_state = %s, ip_city = %s
        WHERE ip_address = %s
    ''', (country, state, city, ip_address))
When running this code, I often receive errors like the following:
psycopg2.InternalError: 1023
DETAIL: Serializable isolation violation on table - 108263, transactions forming the cycle are: 647671, 647682 (pid:23880)
I'm assuming this is because I have other processes constantly logging HTTP requests into my table, so when I attempt to execute my UPDATE statement, it is unable to select all rows with the ip address I'd like to update.
My question is this: what can I do to update these records in a sane way that will stop failing regularly?
Your code is violating the serializable isolation level of Redshift. You need to make sure your code is not trying to open multiple transactions on the same table before closing all open transactions.
You can achieve this by locking the table in each transaction, so that no other transaction can access the table for updates until the open transaction is closed. Not sure how your code is architected (synchronous or asynchronous), but this will increase the run time, as each lock forces the others to wait until the transaction completes.
Refer: http://docs.aws.amazon.com/redshift/latest/dg/r_LOCK.html
Just got the same issue in my code, and this is how I fixed it:
First things first, it is good to know that this error code means you are trying to run concurrent operations in Redshift. Issuing a second query against a table before an earlier query against it has finished, for example, is one case where you would get this kind of error (that was my case).
The good news is: there is a simple way to serialize Redshift operations! You just need to use the LOCK command. Here is the Amazon documentation for the Redshift LOCK command. It basically makes the next operation wait until the previous one is closed. Note that using this command will naturally make your script a little bit slower.
In the end, the practical solution for me was: I inserted the LOCK command before my query (in the same string, separated by a ';'). Something like this:
LOCK table_name; SELECT * from ...
And you should be good to go! I hope it helps you.
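A hypothetical psycopg2 rendering of the same idea (psycopg2 also speaks to Redshift; the requests table comes from the question above, the connection details and values are placeholders):
import psycopg2

conn = psycopg2.connect(host="my-cluster.example.com", port=5439,
                        dbname="mydb", user="user", password="secret")
cur = conn.cursor()

# LOCK takes effect inside a transaction and is released at COMMIT/ROLLBACK,
# so any other session touching the table waits until we are done.
cur.execute("LOCK requests")
cur.execute("UPDATE requests SET ip_country = %s, ip_state = %s, ip_city = %s "
            "WHERE ip_address = %s",
            ("US", "CA", "San Francisco", "1.2.3.4"))
conn.commit()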
Since you are doing point updates in your geocode update process while the other processes are writing to the table, you can intermittently get the serializable isolation violation error, depending on how and when the other process writes to the same table.
Suggestions
One way is to use a table lock, as Marcus Vinicius Melo has suggested in his answer.
Another approach is to catch the error and re-run the transaction.
For any serializable transaction, the code initiating the transaction should be ready to retry it in the face of this error. Since all transactions in Redshift are strictly serializable, all code initiating transactions in Redshift should be ready to retry them.
Explanations
The typical cause of this error is that two transactions started and proceeded in their operations in such a way that at least one of them cannot be completed as if they had executed one after the other. So the db system chooses to abort one of them by throwing this error. This essentially gives control back to the transaction-initiating code to take an appropriate course of action, retry being one of them.
One way to prevent such a conflicting sequence of operations is to use a lock. But locks restrict many cases from executing concurrently that would not have resulted in a conflicting sequence of operations. A lock ensures the error will not occur, but it also restricts concurrency. The retry approach lets concurrency have its chance and handles the case when a conflict does occur, as the sketch below illustrates.
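A hedged sketch of that retry approach (psycopg2 surfaces Redshift's error 1023 as an exception; the attempt bound, backoff, and error matching here are placeholder choices, not a prescribed recipe):
import time
import psycopg2

def run_with_retry(conn, sql, params, attempts=5):
    """Run one UPDATE transaction, retrying on serialization conflicts."""
    for attempt in range(attempts):
        try:
            with conn.cursor() as cur:
                cur.execute(sql, params)
            conn.commit()
            return
        except psycopg2.Error as exc:
            conn.rollback()
            # Re-raise anything that isn't a serialization conflict,
            # and give up once the attempts are exhausted.
            if ("Serializable isolation violation" not in str(exc)
                    or attempt == attempts - 1):
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff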
Recommendation
That said, I would still recommend that you not update Redshift with point updates like this. The geocode update process should write to a staging table and, once all records are processed, perform one single bulk update, followed by a vacuum if required; see the sketch below.
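A minimal sketch of that staging pattern under stated assumptions (the requests columns come from the question; the staging table, its column types, and the load step are hypothetical):
import psycopg2

conn = psycopg2.connect(host="my-cluster.example.com", port=5439,
                        dbname="mydb", user="user", password="secret")
cur = conn.cursor()

# Stage the geocoded rows first (e.g. via COPY from S3 or batched INSERTs).
cur.execute("""
    CREATE TEMP TABLE geo_staging (
        ip_address varchar(45),
        ip_country varchar(64),
        ip_state   varchar(64),
        ip_city    varchar(64)
    )
""")
# ... load geo_staging here ...

# One bulk UPDATE instead of one point update per IP address.
cur.execute("""
    UPDATE requests
    SET ip_country = s.ip_country,
        ip_state   = s.ip_state,
        ip_city    = s.ip_city
    FROM geo_staging s
    WHERE requests.ip_address = s.ip_address
""")
conn.commit()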
Either start a new session when you do the second update on the same table, or COMMIT once your transaction is complete.
You can also run set autocommit=on before you start updating.

How can I force a Snapshot Isolation failure of 3960

Story
I have a SPROC using Snapshot Isolation to perform several inserts via MERGE. This SPROC is called under very high load, often in parallel, so it occasionally throws Error 3960, which indicates the snapshot rolled back because of change conflicts. This is expected given the high concurrency.
Problem
I've implemented a "retry" queue to perform this work again later on, but I am having difficulty reproducing the error to verify my checks are accurate.
Question
How can I reproduce a snapshot failure (3960, specifically) to verify my retry logic is working?
Already Tried
RAISERROR doesn't work, because it doesn't allow me to raise existing errors, only user-defined ones
I've tried re-inserting the same record, but this doesn't throw the same failure, since it's not two different transactions "racing" one another
Open two connections and start a snapshot transaction on both. On connection 1, update a record; on connection 2, update the same record (in the background, because it will block); then commit on connection 1.
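A hypothetical Python rendering of that recipe using pyodbc and a thread (two SSMS windows work just as well). It assumes a database with ALLOW_SNAPSHOT_ISOLATION ON and a table dbo.Victim(id int primary key, val int) holding a row with id = 1; the connection details are placeholders:
import threading
import time
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=TestDb;Trusted_Connection=yes")

def second_session():
    conn2 = pyodbc.connect(CONN_STR, autocommit=True)
    cur2 = conn2.cursor()
    cur2.execute("SET TRANSACTION ISOLATION LEVEL SNAPSHOT")
    cur2.execute("BEGIN TRAN")
    try:
        # Takes its snapshot here, then blocks on session 1's row lock; when
        # session 1 commits, this statement fails with update-conflict 3960.
        cur2.execute("UPDATE dbo.Victim SET val = 2 WHERE id = 1")
        cur2.execute("COMMIT")
    except pyodbc.Error as exc:
        # Error 3960 dooms and rolls back the snapshot transaction itself.
        print("got the expected update conflict:", exc)

conn1 = pyodbc.connect(CONN_STR, autocommit=True)
cur1 = conn1.cursor()
cur1.execute("SET TRANSACTION ISOLATION LEVEL SNAPSHOT")
cur1.execute("BEGIN TRAN")
cur1.execute("UPDATE dbo.Victim SET val = 1 WHERE id = 1")

t = threading.Thread(target=second_session)
t.start()
time.sleep(2)           # let session 2 start its snapshot and block
cur1.execute("COMMIT")  # session 2's UPDATE now hits the conflict
t.join()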
Or treat a user error as a 3960 ...
Why not just do this:
RAISERROR(3960, {sev}, {state})
Replacing {sev} and {state} with the actual values that you see when the error occurs in production?
(Nope, as Martin pointed out, that doesn't work.)
If not that, then I would suggest trying to run your test query multiple times simultaneously. I have done this myself to simulate other concurrency errors. It should be doable as long as the test query is not too fast (a couple of seconds at least).

Handle NHibernate Transaction Errors

Our application (which uses NHibernate and ASP.NET MVC), when put under stress tests, throws a lot of NHibernate transaction errors. The major types are:
Transaction not connected, or was disconnected
Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
Transaction (Process ID 177) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Can someone help me in identifying the reason for Exception 1?
I know I have to handle the other exceptions in my code. Can someone point me to resources which can help me handle these errors in an efficient manner?
Q. How do we manage Sessions and Transactions?
A. We are using Autofac. For every server request, we create a new request container which has the session in the container lifetime scope. On activating the session we begin the transaction. When the request completes, we commit the transaction. In some cases, the transaction can be huge. To simplify, every server request is contained in a transaction.
Have a look at this thread:
http://n2cms.codeplex.com/Thread/View.aspx?ThreadId=85016
Basically what it says as a possible cause of this exception:
2010-02-17 21:01:41,204 1 WARN NHibernate.Util.ADOExceptionReporter - System.Data.SqlClient.SqlException: The transaction log for database 'databasename' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
As the transaction log's size is proportional to the amount of work done during the transaction, perhaps you ought to move your transactional boundaries to the command handlers' handling of commands on the write side. You would then, with a session #X, load the state you wish to mutate, mutate it, and commit it, all as one unit of work in #X.
With regard to the read side, you might then have another ISession #Y that reads data; this ISession could be used to batch reads with the Futures feature, within e.g. RepeatableRead or something similar, and could simply be reading from a cache (albeit that being a crutch indeed). Doing it this way might help you recover from "errors" that aren't: livelocks, deadlocks, and victim transactions.
The problem with using a transaction per request is that your ISession acquires a lot of bookkeeping data while you are working, all of which is part of the transaction. The database hence marks the data (rows, columns, tables, etc.) as partaking in the transaction, causing the wait graph to span 'entities' (in the database sense, not the DDD sense) that are not actually part of the transactional boundary of the command your application took.
For the record (other people googling this), Fabio had a post dealing with exceptions from the data layer. Quoting some of his code:
public class MsSqlExceptionConverterExample : ISQLExceptionConverter
{
    public Exception Convert(AdoExceptionContextInfo exInfo)
    {
        var sqle = ADOExceptionHelper.ExtractDbException(exInfo.SqlException) as SqlException;
        if (sqle != null)
        {
            switch (sqle.Number)
            {
                case 547:
                    return new ConstraintViolationException(exInfo.Message,
                        sqle.InnerException, exInfo.Sql, null);
                case 208:
                    return new SQLGrammarException(exInfo.Message,
                        sqle.InnerException, exInfo.Sql);
                case 3960:
                    return new StaleObjectStateException(exInfo.EntityName, exInfo.EntityId);
            }
        }
        return SQLStateConverter.HandledNonSpecificException(exInfo.SqlException,
            exInfo.Message, exInfo.Sql);
    }
}
547 is the exception number for constraint conflict.
208 is the exception number for an invalid object name in the SQL.
3960 is the exception number for Snapshot isolation transaction aborted due to update conflict.
So if you are running into concurrency issues like those you describe, remember that they will invalidate your ISession and that you'd have to handle them like the above.
Part of what you might be looking for is CQRS, where you have separate read and write-sides. This might help: http://abdullin.com/cqrs/, http://cqrsinfo.com.
So to summarize: your problems might be related to the way you handle your transactions. Also, try running select log_reuse_wait_desc from sys.databases where name='MyDBName' and see what it gives you.
This thread has an explanation:
http://groups.google.com/group/nhusers/browse_thread/thread/7f5fb68a00829d13
In short, the database probably rolls back the transaction by itself due to some error, so that when you try to roll back the transaction later it is already rolled back and in a zombie state. This tends to hide the actual reason for the rollback, since all you see is a TransactionException instead of the exception that actually triggered the rollback in the first place.
I don't think there is much you can do about it beyond logging it and trying to figure out what is causing the underlying error.
I know this post was a while back and I assume you fixed it, but it seems like you have thread-sharing issues with the NHibernate ISession, which is not thread-safe. Basically, one thread is starting a transaction and another is attempting to close it, causing all sorts of chaos.

Is a commit needed on a select query in DB2?

I have a vendor reporting product executing queries to pull report data; no inserts, no updates, just reading data.
We have doubled our heap size 3 times and are now at 1024 4K pages. The app will run fine for a week, then we begin to see DB2 SQL error: SQLCODE: -954, SQLSTATE: 57011, indicating the transaction log is not able to accommodate the request.
It's not the size of the reports, since they run fine after a recycle. I spoke with another DBA about this. He believes the problem is a difference between Oracle and DB2: the vendor code is crappy and not issuing commits on the selects. This causes the references to never be cleaned up, slowly accumulating as garbage in the heap.
I wanted to know if this is accurate, as I thought only inserts and updates needed commits. Is there any IBM documentation on this?
We are currently recycling on a weekly basis to alleviate the problem, but I would like to have a good handle on the issue before going back to the vendor and asking them to alter their code.
Any transaction needs to be properly terminated -- why did you think that only applies to inserts and updates? Consider running, transactionally, a "select a from b where c > 12" and then a "select a from b where c <= 12"; within a transaction, the DB has to guarantee that every a gets returned exactly once, from either the first or the second select, not both (assuming c is never null;-). Without transactionality, some a's might fall between the cracks or be returned twice if their corresponding c was changed by a different transaction, and that's just not ACID!-)
So when you do not need separate SELECT queries to be transactional with respect to each other, tell the DB! And the way you tell it is by terminating the transaction after each select (normally COMMIT is what you use for the purpose, though I guess you could, indifferently, choose to use ROLLBACK here;-).
Per Alex's response, the first SQL activity after any CONNECT, COMMIT, or ROLLBACK initiates a transaction.
To get a handle on your resource issue (transaction logs full), you should investigate the application that issues the reports and ensure that transactions are being closed out explicitly in code. I've seen cases where application developers rely on the garbage collector to clean up database objects; while those objects are waiting for cleanup, the database resources (transactions) are held open.
It's always good practice to explicitly COMMIT or ROLLBACK your transactions as soon as you are done with the data, regardless of the programming methodology you use.
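As a hedged illustration of that practice against DB2 (a DB-API sketch assuming IBM's ibm_db_dbi driver; connection details and the query are placeholders borrowed from the earlier answer):
import ibm_db_dbi  # IBM's Python DB-API driver; an assumption for illustration

conn = ibm_db_dbi.connect(
    "DATABASE=MYDB;HOSTNAME=myhost;PORT=50000;UID=user;PWD=secret", "", "")
cur = conn.cursor()

cur.execute("SELECT a FROM b WHERE c > 12")
rows = cur.fetchall()

# The SELECT opened a unit of work; end it explicitly so the database can
# release locks and log resources instead of holding them until a recycle.
conn.commit()  # a rollback would do equally well for a read-only transaction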
I get this error when committing a transaction on a SELECT query, but despite the error it does return a result set that includes the queried data.
tran.Commit();
error [hy011] [ibm] cli0126e the operation is invalid sqlstate=hy011
I changed my code to tran.Rollback(); and the error disappeared.
Can anyone explain this behavior?