Solution for java.sql.SQLException: could not use local transaction commit in a global transaction

I'm running a J2EE MDB in a WebLogic 10.3 clustered environment.
Within one of the methods in the code I create a JDBC connection and turn off auto-commit. The connection to the database is made through an XA-enabled data source. The reason for turning auto-commit off is that I'm using a temp table.
When I'm done with the temp table and the other SQL queries, I commit the transaction and turn auto-commit back on. For the most part this has not been a problem, but I received the following exception when executing the method; it occurred on the line in the code where I issue the commit.
I'm not sure why I received it, since the method only makes calls to a single database. I wonder if it's because the data source is XA-enabled? Any insight as to why this exception occurs, and why it only occurs sometimes, would be greatly appreciated.
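For reference, the code in question follows roughly this pattern (a minimal sketch only; the JNDI name, SQL, and variable names are hypothetical placeholders, not the actual application code):

import java.sql.Connection;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Sketch of the pattern described above; names and SQL are placeholders.
DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myXADataSource");
Connection con = ds.getConnection();
try {
    con.setAutoCommit(false);      // local transaction so the temp table survives across statements
    Statement stmt = con.createStatement();
    try {
        stmt.executeUpdate("INSERT INTO my_temp_table SELECT ...");  // temp-table work (placeholder SQL)
        // ... other queries against the same database ...
    } finally {
        stmt.close();
    }
    con.commit();                  // <-- the line that throws the SQLException
    con.setAutoCommit(true);
} finally {
    con.close();
}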
java.sql.SQLException: could not use local transaction commit in a global transaction
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:112)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:173)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:229)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:403)
at oracle.jdbc.driver.PhysicalConnection.disallowGlobalTxnMode(PhysicalConnection.java:6139)
at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:3320)
at oracle.jdbc.driver.PhysicalConnection.commit(PhysicalConnection.java:3357)
at oracle.jdbc.OracleConnectionWrapper.commit(OracleConnectionWrapper.java:130)
at weblogic.jdbc.wrapper.XAConnection.commit(XAConnection.java:844)
at weblogic.jdbc.wrapper.JTAConnection.commit(JTAConnection.java:326)
at com.myPackage.myApp.myDBCaller.tempTableMethod(myDBCaller.java:390)
at com.myPackage.myApp.myDBCaller.start(myDBCaller.java:153)
at com.myPackage.myApp.myDBCaller.onMessage(myDBCaller.java:117)
at sun.reflect.GeneratedMethodAccessor1683.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:599)
at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:281)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:187)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:154)
at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:126)
at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:114)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:176)
at com.bea.core.repackaged.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:89)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:176)
at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:126)
at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:114)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:176)
at com.bea.core.repackaged.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:210)
at $Proxy228.onMessage(Unknown Source)
at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:518)
at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:423)
at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:325)
at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4547)
at weblogic.jms.client.JMSSession.execute(JMSSession.java:4233)
at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3709)
at weblogic.jms.client.JMSSession.access$000(JMSSession.java:114)
at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5058)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)

Try setting the following property on the XA data source:
<property name="AutomaticEnlistingEnabled" value="false"/>
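For completeness (my own illustration, not part of the answer above): the exception means a local commit() was called on a connection that is enlisted in a global (JTA/XA) transaction, which is what happens when the MDB's onMessage runs with a container-managed transaction and the connection comes from the XA-enabled data source. If you want to keep the local-commit logic, one defensive option is to check the JTA transaction status first, for example via the standard TransactionSynchronizationRegistry:

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.transaction.Status;
import javax.transaction.TransactionSynchronizationRegistry;

// Returns true when the current thread runs inside a container-managed (global/JTA) transaction.
static boolean inGlobalTransaction() throws NamingException {
    TransactionSynchronizationRegistry tsr = (TransactionSynchronizationRegistry)
            new InitialContext().lookup("java:comp/TransactionSynchronizationRegistry");
    return tsr.getTransactionStatus() != Status.STATUS_NO_TRANSACTION;
}

// At the point of the failing commit:
if (!inGlobalTransaction()) {
    con.commit();              // no global transaction, so a local commit is legal
    con.setAutoCommit(true);
} else {
    // The connection is enlisted in the container's XA transaction;
    // the work is committed when that transaction completes.
}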

Datastax: Block not found error from DSEFS

I have a Spark streaming job running in DSE that uses DSEFS for its checkpointing directory. I see this error in the debug log file. How do I resolve it?
ERROR [dsefs-netty-worker-5] 2017-12-01 05:23:02,679 DSE-FS RestServerHandler.scala:126 - [id: 0x9964e082, /<>:58874 :> 0.0.0.0/0.0.0.0:5598] Streaming data to remote end failed.
java.io.IOException: Block not found a3859f30-aa23-11e7-80b9-4b8bdaf197cd
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$33$1.apply(BlockService.scala:706) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$33$1.apply(BlockService.scala:703) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.10.6.jar:na]
at com.datastax.bdp.fs.exec.SameThreadExecutionContext$class.executeInSameThread(SameThreadExecutionContext.scala:24) ~[dsefs-common_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.exec.SameThreadExecutionContext$class.execute(SameThreadExecutionContext.scala:33) ~[dsefs-common_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.exec.SerialExecutionContextProvider$$anon$5$$anon$2.execute(SerialExecutionContextProvider.scala:24) ~[dsefs-common_2.10-5.0.19.jar:5.0.19]
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) [scala-library-2.10.6.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) ~[scala-library-2.10.6.jar:na]
at scala.concurrent.Promise$class.complete(Promise.scala:55) ~[scala-library-2.10.6.jar:na]
at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) ~[scala-library-2.10.6.jar:na]
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$1$1.apply(BlockService.scala:60) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at com.datastax.bdp.fs.server.blocks.BlockService$stateMachine$1$1.apply(BlockService.scala:60) ~[dsefs-server_2.10-5.0.19.jar:5.0.19]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.10.6.jar:na]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358) [netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357) [netty-all-4.0.34.Final.jar:4.0.34.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112) [netty-all-4.0.34.Final.jar:4.0.34.Final]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
This error means the DSEFS server failed to find the metadata of the data block in the dsefs.blocks Cassandra table. The ids of the file blocks are stored in the dsefs.block_offsets table and reference blocks stored in dsefs.blocks. If a row exists in dsefs.block_offsets and points to a block id that is absent from dsefs.blocks, you get this error when reading the file.
This error should not happen under normal circumstances; it means the filesystem metadata somehow got into an inconsistent state. This may be a bug in the DSEFS implementation, the result of data loss caused by setting up the dsefs keyspace with an insufficient replication factor, or the result of a write operation that did not finish successfully and was applied only partially.
Please make sure you set the dsefs keyspace RF to at least 3 and run nodetool repair to avoid accidental data loss or unavailability of some DSEFS metadata.
If this doesn't help, please contact me directly or through DataStax technical support and provide more details, including logs from the time before the error and more context on what the job was doing when the failure occurred.

System.Data.SqlClient.SqlException (0x80131904)

I have a process that runs every morning looping through thousands of records and establishing whether they can be deleted or not. I sometimes see this exception in the log file:
System.Data.SqlClient.SqlException (0x80131904): A transport-level error has occurred when receiving results from the server. (provider: Session Provider, error: 19 - Physical connection is not usable)
I have read a potential solution here: http://wishmesh.com/2013/10/solution-for-a-transport-level-error-has-occurred-when-receiving-results-from-the-server/
I know that the SqlDataReader class implements IDisposable as documented here: http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqldatareader(v=vs.110).aspx. Therefore you should either wrap it in a using statement or call .Close() once you have finished with it.
What happens if you do not call .Close()? I do not understand why it could cause this exception, as I thought a data reader was simply a collection of objects already returned from the database and stored in memory.
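For context (my illustration, not from the original post): a SqlDataReader does not buffer the whole result set in memory; it streams rows over the open connection as Read() advances, and the connection stays busy until the reader is closed or disposed. The usual pattern looks roughly like this (the connection string, query, and table name are placeholders):

using System.Data.SqlClient;

// Placeholder connection string and query.
string connectionString = "...";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT Id, Name FROM dbo.Records", connection))
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Rows are fetched from the server as Read() advances.
            int id = reader.GetInt32(0);
            string name = reader.GetString(1);
            // ... decide whether the record can be deleted ...
        }
    } // disposing the reader frees the connection for the next command
}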

Why is NHibernate's adonet.batch_size setting ignored - session.SetBatchSize() throws exception?

I am using NHibernate and switched from my local SQL Express database to Oracle 11g.
My code started to complain. The session object's SetBatchSize() method throws a System.NotSupportedException:
No batch size was defined for the session factory, batching is disabled. Set adonet.batch_size = 1 to enable batching.
It worked on the SQL Express database. OK, so I added this
<property name="adonet.batch_size">1</property>
to the config, but it still throws the same exception. The session's Batcher property is set to this:
Value: {NHibernate.AdoNet.NonBatchingBatcher}
Type: NHibernate.Engine.IBatcher {NHibernate.AdoNet.NonBatchingBatcher}
It does not make any difference if I try to set the batch size inside or outside the transaction.
NHibernate only has batchers for some RDBMSs. If it doesn't find one for the database in question, it defaults to NonBatchingBatcher, which is not able to batch at all. You could implement your own IBatcher.

"This SqlTransaction has completed; it is no longer usable."... configuration error?

I've been working on this for about a day and a half now, and have searched numerous blogs and help articles on the Web. I found several questions on SO related to this error, but I didn't think they quite applied to my situation (or in some cases, unfortunately, I couldn't understand them well enough to implement :P). I'm not sure I can describe this well enough for help... but here goes:
We have a .NET app to track our resources. There's an export function to copy a resource to the time tracking system and the billing system; this accesses a stored procedure that links to the time and billing databases.
I recently moved the billing system database to a new server (original server: Server 2003 SP2, SQL 2005; new server: Server 2008 R2, SQL 2008 R2). I have a Linked Server set up that points to the 2008 databases. I updated the stored procedure to point to the 2008 server, and then I got an error about MSDTC and RPC (http://www.safnet.com/writing/tech/archives/2007/06/server_myserver.html). I enabled 'rpc/rpc out' on the Linked Server and set MSDTC to allow Network Access (something like this: http://www.sqlwebpedia.com/content/msdtc-troubleshooting).
Now I'm getting the above, when I try to run the export function: "This SqlTransaction has completed; it is no longer usable." What seems odd to me is that when I just run the stored procedure (from SSMS), it says it completes successfully.
Has anyone seen this before? Have I missed something in the configuration? I keep going over the same pages, and the only thing I found was that I didn't reboot after making the MSDTC changes (mentioned in here: http://social.msdn.microsoft.com/forums/en-US/adodotnetdataproviders/thread/7172223f-acbe-4472-8cdf-feec80fd2e64/).
I can post part or all of the stored procedure, if it would help... please let me know.
I believe this error message is due to a "zombie transaction".
Look for possible areas where the transaction is being committed twice (or rolled back twice, or rolled back and committed, etc.). Does the .Net code commit the transaction after the SP has already committed it? Does the .Net code roll it back on encountering an error, then attempt to roll it back again in a catch (or finally) clause?
It's possible an error condition was never being hit on the old server, and thus the faulty "double rollback" code was never hit. Maybe now you have a situation where there is some configuration error on the new server, and now the faulty code is getting hit via exception handling.
Can you debug into the error code? Do you have a stack trace?
I had this recently after refactoring in a new connection manager. A new routine accepted a transaction so it could be run as part of a batch; the problem was with a using block:
public IEnumerable<T> Query<T>(IDbTransaction transaction, string command, dynamic param = null)
{
using (transaction.Connection)
{
using (transaction)
{
return transaction.Connection.Query<T>(command, new DynamicParameters(param), transaction, commandType: CommandType.StoredProcedure);
}
}
}
It looks as though the outer using was closing the underlying connection, so any attempt to commit or roll back the transaction threw up the message "This SqlTransaction has completed; it is no longer usable."
I removed the usings, added a covering test, and the problem went away.
public IEnumerable<T> Query<T>(IDbTransaction transaction, string command, dynamic param = null)
{
return transaction.Connection.Query<T>(command, new DynamicParameters(param), transaction, commandType: CommandType.StoredProcedure);
}
Check for anything that might be closing the connection while inside the context of a transaction.
Had the exact same problem and just could not find the right solution.
Hope this helps somebody.
I have a .NET Core 3.1 WebApi with EF Core. Upon receiving multiple calls at the same time, the application was trying to add and save changes to the database at the same time.
In my case the problem was that the table the data would be saved in did not have a primary key set.
Somehow EF Core missed, when the migration was run from the application, that the ID in the model was supposed to be a primary key.
I found the problem by opening the SQL Profiler and seeing that all transactions were successfully submitted to the database (from the application) but only one new row was created. The profiler also showed that some type of deadlock was happening, but I couldn't see much more in the trace logs of the profiler.
On further inspection I noticed that the primary key identifier was missing on the column "Id".
The exceptions I got from my application was:
This SqlTransaction has completed; it is no longer usable.
and/or
An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
I had the same problem. This error occurs because of connection pooling. When two or more users access the system, connection pooling reuses a connection, and the transaction along with it. If the first user executes a commit or rollback, the transaction is no longer usable.
I recently ran across a similar situation. To debug in any VS IDE version, open Exceptions from Debug (Ctrl + D, E), check all checkboxes in the "Thrown" column, and run the application in debug mode. I realized that one of the tables was not imported properly in the new database, so an internal SqlException was killing the connection, which results in this error.
The gist of the story is: if previously working code returns this error on a new database, it could be a missing database schema issue, which you can spot with the debugging tip above.
Hope It Helps,
HydTechie
Also check for any long-running processes executed from your .NET app against the DB. For example, you may be calling a stored procedure or query which does not have enough time to finish, which can show up in your logs as:
Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
This SqlTransaction has completed; it is no longer usable.
Check the command timeout settings.
Try running a trace (profiler) and see what is happening on the DB side...
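As an illustration of the timeout setting (my sketch; the connection string and procedure name are placeholders), the command timeout is configured per command in ADO.NET:

using System.Data;
using System.Data.SqlClient;

string connectionString = "...";   // placeholder

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("dbo.LongRunningProcedure", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.CommandTimeout = 300;  // seconds; the default is 30, and 0 means wait indefinitely
    connection.Open();
    command.ExecuteNonQuery();
}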
In my case the problem was that one of the queries included in the transaction was raising an exception, and even though the exception was "gracefully" handled, it still managed to roll back the entire transaction.
My pseudo-code was like:
var transaction = connection.BeginTransaction();
foreach (var line in linesInFile)   // pseudo-code: iterate over all the lines in a file
{
    try
    {
        InsertLineInTable(line);    // INSERT statement might fail and throw an exception
    }
    catch
    {
        // notify the user about the error on line x and continue
    }
}
// Commit and Rollback will fail if one of the queries
// in InsertLineInTable threw an exception
if (CheckTableForErrors())
{
    transaction.Commit();
}
else
{
    transaction.Rollback();
}
Here is a way to detect a zombie transaction:
SqlTransaction trans = connection.BeginTransaction();
//some db calls here
if (trans.Connection != null) //Detecting zombie transaction
{
trans.Commit();
}
Decompiling the SqlTransaction class, you will see the following
public SqlConnection Connection
{
get
{
if (this.IsZombied)
return (SqlConnection) null;
return this._connection;
}
}
I noticed that if the connection is closed, the transaction becomes a zombie and thus cannot Commit.
In my case, it was because I had the Commit() inside a finally block, while the connection was in the try block. This arrangement causes the connection to be disposed and garbage collected before the commit. The solution was to put Commit() inside the try block instead.
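A minimal sketch of the arrangement described above (my illustration; names are placeholders), where the connection's using block ends before the finally runs:

SqlTransaction transaction = null;
try
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        transaction = connection.BeginTransaction();
        // ... commands enlisted in the transaction ...
    }   // the connection (and with it the transaction) is disposed here
}
finally
{
    // By now the transaction is a zombie, so this throws
    // "This SqlTransaction has completed; it is no longer usable."
    transaction?.Commit();
}

Moving the Commit() up into the try block, before the using block closes the connection, avoids the zombie transaction.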
For what it's worth, I've run into this on what was previously working code. I had added SELECT statements in a trigger for debug testing and forgot to remove them. Entity Framework / MVC doesn't play nice when other stuff is output to the "grid". Make sure to check for any rogue queries and remove them.
In my case, I had some code that needed to execute after committing the transaction, in the same try-catch block. That code threw an error, so the try block handed the error over to its catch block, which contains the transaction rollback.
That produces a similar error. For example, look at the code structure below:
SqlTransaction trans = null;
try
{
    trans = Con.BeginTransaction();
    // your code
    trans.Commit();
    // your code having errors
}
catch (Exception ex)
{
    trans.Rollback(); // transaction rollback
    // error message
}
finally
{
    // connection close
}
Hope it will help someone :)

Timeout exception when timeout set to infinite time

In my C# .NET 3.5 application I am using Castle Project ActiveRecord over NHibernate. This is a desktop application using MS SQL Server 2008. I have set the ADO command timeout to 0 to prevent timeout exceptions during bulk operations:
<activerecord>
<config>
...
<add key="hibernate.command_timeout" value="0" />
</config>
</activerecord>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
<session-factory>
...
<property name="command_timeout">0</property>
</session-factory>
</hibernate-configuration>
However, I am still receiving a timeout exception! The NHibernate log shows something like this:
Somewhere at the beginning:
2010-10-02 06:29:47,746 INFO NHibernate.Driver.DriverBase - setting ADO.NET command timeout to 0 seconds
Somewhere at the end:
2010-10-02 07:36:03,020 DEBUG NHibernate.AdoNet.AbstractBatcher - Closed IDbCommand, open IDbCommands: 0
2010-10-02 07:36:03,382 ERROR NHibernate.Event.Default.AbstractFlushingEventListener - Could not synchronize database state with session
NHibernate.HibernateException: An exception occurred when executing batch queries ---> System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
How come? How to fix this?
It's correct that a value of 0 indicates no timeout (as defined in the MSDN docs). However, while NHibernate's driver passes the config value to the DB command when it's >= 0, the batcher's condition checks that the value is > 0.
Therefore, when batching is turned on with a timeout value of 0, the value isn't carried over to the DB command, so it remains at the default.
It's entirely possible that this is by design, and that the NHibernate developers intentionally ruled out disabling timeouts for batch scenarios. Disabling the timeout is a bad idea anyway; if you have trouble with timeout errors I would raise the value, but not disable it.
Please confirm this with NHibernate devs.
You might be looking to set the timeout for specific queries and not at the web.config level (otherwise you really need to tune up your application :) ).
I've recently found this answer that helped me:
How to set Nhibernate LINQ Command Timeout using Session.Query
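As a sketch of the per-query approach (my example; the entity, HQL, and timeout value are placeholders), NHibernate's IQuery exposes SetTimeout for a single command, leaving the global setting untouched:

using System;
using System.Collections.Generic;
using NHibernate;

// Raise the timeout for one expensive query only (placeholder entity and HQL).
IList<Order> orders = session
    .CreateQuery("from Order o where o.CreatedOn >= :since")
    .SetDateTime("since", DateTime.UtcNow.AddDays(-30))
    .SetTimeout(300)   // seconds, applied to this command only
    .List<Order>();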