.NET TransactionScopes and SQL 2005 Lightweight Transaction Manager - Multiple Connections same SPID? - sql-server-2005

Can someone shed light on what is happening behind the scenes with the SQL Lightweight transaction manager when multiple connections are opened to the same DB, using the Microsoft Data Access Application Block (DAAB)?
With the below code, we verified that MSDTC is indeed not required when opening 'multiple connections' to the same database.
This was the first scenario I tested, where Txn1 and Txn2 use EntLib 4.1 to open a connection to the same DB and call different sprocs:
using (var ts = new TransactionScope(TransactionScopeOption.Required))
{
    DAL1.Txn1();
    DAL2.Txn2();
    ts.Complete();
}
Tracing this in SQL Server Profiler revealed that the same SPID was used for Txn1 and Txn2: after Txn1() completed, the connection was released back into the pool, and Txn2() was able to re-use it.
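For context, here is a hypothetical sketch of what a DAL call like DAL1.Txn1() might do internally (the database key and sproc name are assumptions, not from the original post); the DAAB Database opens a pooled connection around each call and closes it again, which is why the next call can draw the same SPID from the pool:

public static class DAL1
{
    public static void Txn1()
    {
        // EntLib DAAB: opens a pooled connection, runs the sproc, closes it again
        Database db = DatabaseFactory.CreateDatabase("db1");
        db.ExecuteNonQuery("dbo.SprocOne"); // "dbo.SprocOne" is a placeholder name
    }
}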
However, when repeating this experiment and this time holding the connections open:
using (var ts = new TransactionScope(TransactionScopeOption.Required))
{
    Database db1 = DatabaseFactory.CreateDatabase("db1");
    DAL1.Txn1OnCon(db1);

    Database db2 = DatabaseFactory.CreateDatabase("db1");
    DAL2.Txn2OnCon(db2);

    ts.Complete();
}
Viewing this in Profiler indicated that the two transactions were STILL using the same SPID. I was expecting the TransactionScope to escalate to DTC, since a distributed transaction should be required to coordinate two concurrent connections. What have I missed?

Quoting from MSDN http://msdn.microsoft.com/en-us/library/8xx3tyca(VS.80).aspx
Connection pooling reduces the number of times that new connections need to be opened. The pooler maintains ownership of the physical connection. It manages connections by keeping alive a set of active connections for each given connection configuration. Whenever a user calls Open on a connection, the pooler looks to see if there is an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pooled set of active connections instead of actually closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call.
Just because a connection was used in a transaction doesn't mean it cannot be available for the next call. I found that if the connection string varied in the slightest way, such as the capitalization of a hostname, you'd get a new physical connection to the db.
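A minimal sketch of that behavior (the server and database names here are assumptions): the pool is keyed on the exact connection string, so two strings differing only in casing map to separate pools and therefore separate physical connections:

// These strings differ only in the hostname's casing, so ADO.NET
// keys them to two different pools:
var c1 = new SqlConnection("Data Source=MYSERVER;Initial Catalog=db1;Integrated Security=SSPI");
var c2 = new SqlConnection("Data Source=myserver;Initial Catalog=db1;Integrated Security=SSPI");

c1.Open();
c1.Close(); // returned to pool #1
c2.Open();  // draws a brand new physical connection from pool #2
c2.Close();

Inside a single TransactionScope, opening both would mean two physical connections and, on SQL 2005, escalation to MSDTC.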

SQL 2005 or SQL 2008?
If you use SQL 2008, a sequence of open+close connections is not escalated to a distributed transaction, but all the connections must use exactly the same connection string.
(sketched here as runnable C#; the connection string is elided as in the original)
string connString = "....";
using (var ts = new TransactionScope())
{
    using (var c1 = new SqlConnection(connString))
    {
        c1.Open();
        // ... use c1 ...
    } // c1 is closed and returned to the pool here

    using (var c2 = new SqlConnection(connString))
    {
        c2.Open();
        // ... use c2 ...
    }

    ts.Complete();
}
The same code with SQL 2005 escalates to a distributed transaction --> you need MSDTC.

OK, my misunderstanding was with DAAB.
The DAAB Database opens and closes connections as needed (or obtains / releases them from the pool), i.e. connections aren't held for the lifespan of the DAAB Database object.
It is possible to manually control the database connections in DAAB as per below: by holding the actual connections open, they cannot be reused, and MSDTC is required as soon as two physical connections are open, as I had expected in the original question.
using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required))
{
    using (DbConnection dbConn1 = DatabaseFactory.CreateDatabase("myDb").CreateConnection())
    using (DbConnection dbConn2 = DatabaseFactory.CreateDatabase("myDb").CreateConnection())
    {
        dbConn1.Open();
        DAL1.Txn1OnCon(dbConn1);

        dbConn2.Open();
        DAL2.Txn2OnCon(dbConn2);

        DAL1.Txn1OnCon(dbConn1);

        ts.Complete();
    }
}

Related

Control the timeout for locking Exclusive SQLite3 database

I have a SQLite database that I want to lock for synchronization purposes. I don't want a process running asynchronously on one box to process data that was added from another box until that box has finished its updates. DataAccess is a class that connects to sPackageFileName and re-uses the same connection as long as sPackageFileName stays the same, or until the .Close method is called. So basically DataAccess.ExecCommand executes a command.
Searching Google I found this:
DataAccess.ExecCommand("PRAGMA locking_mode = EXCLUSIVE", sPackageFileName)
DataAccess.ExecCommand("BEGIN EXCLUSIVE", sPackageFileName)
DataAccess.ExecCommand("COMMIT", sPackageFileName)
This works as advertised: if I run this on box A and then on box B, I get a "database locked" exception. The problem is how long that takes. I found PRAGMA busy_timeout, but that timeout controls access locks, not database locks. I am starting to think there is no PRAGMA for a database-lock timeout; right now it seems to be about 3-4 minutes. One other note: sPackageFileName does not live on either box; box A and box B both connect to it over a shared drive.
Also I am using the VB.NET wrapper for the SQLite dll.
CL got me on the right trail. It was the timeout of the .NET command. Here is the code setting it up, from my class:
Dim con As DbConnection = OpenDb(DatabaseName, StoreNumber, ShareExclusive, ExtType)
Dim cmd As DbCommand = con.CreateCommand()
If _QueryTimeOut > -1 Then cmd.CommandTimeout = _QueryTimeOut
Don't get hung up on the variables; the purpose of posting the code is to show the property I was talking about. The default _QueryTimeOut was set to 300 (seconds). I set cmd.CommandTimeout to 1 (second) and it returned as expected.
As CL finally got through to me, the timeout was happening somewhere else. Sometimes it takes a kick to get you out of the box. :-)

How to address MSDN's Caution: Associate BeginTransaction before Readers are Open

I noticed a caution in the BeginTransaction documentation located here:
http://msdn.microsoft.com/en-us/library/86773566.aspx
When your query returns a large amount of data and calls BeginTransaction, a SqlException is thrown because SQL Server does not allow parallel transactions when using MARS. To avoid this problem, always associate a transaction with the command, the connection, or both before any readers are open.
Is this suggesting that I change this:
sqlConn.Open();
System.Data.SqlClient.SqlTransaction trans = sqlConn.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
sqlCmd.Transaction = trans;
System.Data.SqlClient.SqlDataAdapter adapt = new System.Data.SqlClient.SqlDataAdapter(sqlCmd);
adapt.Fill(dt);
To this, which is not what I normally see in examples, with the BeginTransaction call BEFORE the Open call...
System.Data.SqlClient.SqlTransaction trans = sqlConn.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
sqlCmd.Transaction = trans;
sqlConn.Open();
System.Data.SqlClient.SqlDataAdapter adapt = new System.Data.SqlClient.SqlDataAdapter(sqlCmd);
adapt.Fill(dt);
Otherwise, can anyone give an example of what this caution is saying to avoid?
The latter code will not execute because you cannot start a transaction before having an open connection. The first example is correct. What MSDN is saying here is that once you have started executing a command (and not finished reading results), you cannot open a transaction. You have to open the transaction before your first command (or between commands).
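To make the caution concrete, here is a minimal sketch of the pattern it warns against (the connection string and table name are assumptions; MARS must be enabled for this scenario to arise):

using (var conn = new SqlConnection("...;MultipleActiveResultSets=True"))
{
    conn.Open();
    var cmd = new SqlCommand("SELECT * FROM dbo.BigTable", conn);
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // The reader is still open and streaming results, so starting a
        // transaction here throws a SqlException: SQL Server does not
        // allow parallel transactions under MARS.
        SqlTransaction trans = conn.BeginTransaction(IsolationLevel.ReadUncommitted);
    }
}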
I consider it best practice to always operate under an explicit transaction because there is a bug/feature in the ADO.NET connection pooling that leaks isolation levels across pooled connections.
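As a sketch of that practice (the connection string and query are assumptions), wrapping the work in an explicit transaction also means the isolation level is set deliberately on every use instead of being inherited from whatever the pooled connection last ran under:

using (var conn = new SqlConnection(connString))
{
    conn.Open();
    // Set the isolation level explicitly; a pooled connection can come
    // back still carrying the isolation level of its previous user.
    using (SqlTransaction tx = conn.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn, tx);
        cmd.ExecuteScalar();
        tx.Commit();
    }
}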

Data is not properly stored to hsqldb when using pooled data source by dbcp

I'm using hsqldb to create cached tables and indexed tables.
The data being stored arrives at a pretty high frequency, so I need to use a connection pool.
Also, because there is a lot of data, I do not call checkpoint on every commit, but rather expect the data to be flushed after every 50,000 inserted rows.
So the thing is that I can see the .data file growing, but when I connect with an hsqldb client I don't see the tables or the data.
So I ran 2 simple tests: one inserted a single row, and one inserted 60,000 rows into a new table. In both cases I couldn't see the result in any hsqldb client.
(Note that I use shutdown=true)
When I add a checkpoint after each commit, it solves the problem.
Also, specifying in the connection string to use the log solves the problem (I don't want the log in production, though). Not using a pooled connection also solved it, as did using the pooled data source and explicitly closing it before shutdown.
So I guess that some connections in the connection pool are not being closed, preventing the db from committing the changes and making them available to the client. But then why couldn't I see the result even with 60,000 rows?
I would also expect the pool to be closed automatically...
What am I doing wrong? What is happening behind the scenes?
The code to get the data source looks like this:
Class.forName("org.hsqldb.jdbcDriver");
String url = "jdbc:hsqldb:" + m_dbRoot + dbName + "/db" + ";hsqldb.log_data=false;shutdown=true;hsqldb.nio_data_file=false";
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(url, user, password);
GenericObjectPool connectionPool = new GenericObjectPool();
KeyedObjectPoolFactory stmtPool = new GenericKeyedObjectPoolFactory(null);
// DBCP 1.x idiom: construction wires the factory to the pool as a side effect
new PoolableConnectionFactory(connectionFactory, connectionPool, stmtPool, null, false, true);
DataSource ds = new PoolingDataSource(connectionPool);
And I'm using this pooled data source to create a table:
Connection c = m_dataSource.getConnection();
Statement st = c.createStatement();
String script = String.format("CREATE CACHED TABLE IF NOT EXISTS %s (id %s NOT NULL, entity %s NOT NULL, PRIMARY KEY (id));", m_tableName, m_idGenerator.getIdType(), TABLE_ENTITY_TYPE);
st.execute(script);
st.close(); // close the statement before the connection
c.close();
And to insert rows:
Connection c = m_dataSource.getConnection();
c.setAutoCommit(false);
PreparedStatement stmt = c.prepareStatement(m_sqlInsert); // prepareStatement returns a PreparedStatement
stmt.setObject(1, id);
stmt.setBinaryStream(2, Serializer.Helper.serialize(m_serializer, entity));
stmt.executeUpdate();
stmt.close();
c.commit();
c.close();
So the above seems to add data, but it cannot be seen.
When I explicitly called
connectionPool.close();
then, and only then, could I see the result.
I also tried to use JDBCDataSource and it worked as well.
So what is going on? And what is the right way to do this?
Your method of accessing the database from outside your application process is simply wrong.
Only one Java process is supposed to connect to a file: database.
In order to achieve your aim, launch an HSQLDB server within your application, using exactly the same JDBC URL, then connect to this server from the external client.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/listeners-chapt.html#lsc_app_start
Update: The OP commented that the external client was used after the application had stopped. Because you have turned the log off with hsqldb.log_data=false, nothing is persisted permanently. You need to perform an explicit CHECKPOINT or SHUTDOWN when your application completes its work. You cannot rely on shutdown=true at all, even without connection pooling.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/deployment-chapt.html#dec_bulk_operations

Releasing XASession XAResource - Manual enlisting

In our MDB we have an XA transaction between the DB and a Tibco foreign-server queue. We have enlisted the foreign-server XAResource as shown below.
The MDB is on WebLogic Server 10.3.6, JDK 1.6.
init():
XAConnection tempXAConn = xaConn;
TibjmsXAConnectionFactory xaConnFactory = (TibjmsXAConnectionFactory) ServiceLocator.getInstance().getJNDIReferencedObject(JMS_Q_CONNECTION_FACTORY_JNDI_XA);
xaConn = xaConnFactory.createXAConnection(JMS_USER, JMS_PSWD);

getsession():
XASession xaSession = xaConn.createXASession();
TransactionHelper txHelper = TransactionHelper.popTransactionHelper();
Transaction tx = txHelper.getTransaction();
tx.enlistResource(xaSession.getXAResource());
Transactions are working fine. We use one connection and create a new XASession for every message.
The problem is releasing resources: after a few thousand messages, the heap contains that same number of TibjmsXASession, TibjmsXAResource, and TibjmsLongKey objects, which leads to an OutOfMemory issue.
We cannot call session.close() in the middle of the transaction.
The transactions are container-managed; only the enlisting is done manually.
I used
tx.registerSynchronization(new SessionSynchronization());
SessionSynchronization implements Synchronization and has two methods, beforeCompletion and afterCompletion.
session.close() can be called inside afterCompletion; the session can be kept in a ThreadLocal.

Transaction timeout expired while using Linq2Sql DataContext.SubmitChanges()

Please help me resolve this problem:
There is an ambient MSMQ transaction. I'm trying to use a new transaction for logging, but I get the following error when attempting to submit changes: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." Here is the code:
public static void SaveTransaction(InfoToLog info)
{
    using (TransactionScope scope =
        new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        using (TransactionLogDataContext transactionDC =
            new TransactionLogDataContext())
        {
            transactionDC.MyInfo.InsertOnSubmit(info);
            transactionDC.SubmitChanges();
        }
        scope.Complete();
    }
}
Please help me.
Thx.
You could consider increasing the timeout or eliminating it altogether.
Something like:
using (TransactionLogDataContext transactionDC = new TransactionLogDataContext())
{
    transactionDC.CommandTimeout = 0; // No timeout.
}
Be careful with eliminating the timeout entirely, though.
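Note that the TransactionScope itself also has a timeout, defaulting to one minute; here is a sketch of raising it explicitly, re-using the scope option from the question:

using (var scope = new TransactionScope(
    TransactionScopeOption.RequiresNew,
    TimeSpan.FromMinutes(5))) // transaction-level timeout, distinct from the command timeout
{
    // ... logging work ...
    scope.Complete();
}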
You said:
Thank you, but this solution raises a new question: if the transaction scope was changed, why does the submit operation become so time-consuming? The database and application are on the same machine.
That is because you are creating a new DataContext right there:
TransactionLogDataContext transactionDC = new TransactionLogDataContext();
With a new data context, ADO.NET opens up a new connection (even if the connection strings are the same, unless you do some clever connection pooling).
Within a transaction context, when you try to work with more than one connection instance (which you just did), ADO.NET automatically promotes the transaction to a distributed transaction and tries to enlist it with MSDTC. Enlisting the very first transaction per connection into MSDTC takes time (for me, 30+ seconds); consecutive transactions are fast, however (in my case, 60 ms). Take a look at this: http://support.microsoft.com/Default.aspx?id=922430
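As a side note, a minimal way to check whether the ambient transaction has in fact been promoted (using System.Transactions; a non-empty DistributedIdentifier means MSDTC is involved):

var info = System.Transactions.Transaction.Current.TransactionInformation;
bool promoted = info.DistributedIdentifier != Guid.Empty; // Guid.Empty until promotion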
What you can do is reuse the transaction and connection (if possible) when you create the new DataContext:
TransactionLogDataContext tempDataContext =
    new TransactionLogDataContext(ExistingDataContext.Transaction.Connection);
tempDataContext.Transaction = ExistingDataContext.Transaction;
Here ExistingDataContext is the one that started the ambient transaction.
Or attempt to speed up your MSDTC.
Also, do use the SQL Profiler trace suggested by billb and look at the SessionId across the different commands (save and savelog in your case). If the SessionId changes, you are in fact using two different connections, and in that case you will have to reuse the transaction (if you don't want it promoted to MSDTC).