Implicit ref-cursors not being closed in ODP.NET 12c Release 2 - odp.net

I'm porting a .net application from SqlServer to Oracle 12c.
I'm using the unmanaged 64-bit ODAC 12c Release 2 (12.1.0.1.2) client to access the database.
Oracle 12c introduced the DBMS_SQL.RETURN_RESULT(cur) procedure, which allows me to reuse the .NET code as-is, without having to add specific output parameters to the ADO.NET commands.
This is a snippet of what my code looks like:
using (var command = CreateCommand("uspGetAllNumericUnits", CommandType.StoredProcedure))
{
    using (var connectionScope = command.Connection.CreateConnectionScope())
    {
        using (var reader = command.ExecuteReader())
        {
            if (reader.HasRows)
            {
                while (reader.Read())
                {
                    ....
                }
            }
        }
    }
}
The uspGetAllNumericUnits stored procedure is simply like:
PROCEDURE uspGetAllNumericUnits
AS
    cv_1 SYS_REFCURSOR;
BEGIN
    OPEN cv_1 FOR
        SELECT * FROM NumericUnit;
    DBMS_SQL.RETURN_RESULT(cv_1);
END;
I believe I properly dispose of the reader and the command, and the connection is closed and disposed as well by the connectionScope.
However, if I check the v$open_cursor view on the server, I see that a cursor for "SELECT * FROM NumericUnit;" remains open.
I use connection pooling; if I disable it (which I do not want to do), the cursor is correctly closed when the connectionScope is disposed.
My issue is that I have many ADO.NET calls like this one, so I quickly reach the maximum allowed cursors per session and an ORA-01000 error is raised.
If I use the old Oracle 11g approach, returning the result-set cursor as an output parameter instead of using DBMS_SQL.RETURN_RESULT(cv_1), the cursor is correctly closed when the connectionScope is disposed, regardless of whether connection pooling is enabled.
Are there extra objects I need to dispose of to close the implicit ref-cursors? Is this a known ODAC 12c R2 bug?
DBMS_SQL.RETURN_RESULT(cv_1) makes porting from SqlServer to Oracle far easier than adding output parameters almost everywhere, but I do not want to give up connection pooling.
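For comparison, here is a minimal sketch of the 11g-style workaround mentioned above, assuming the procedure is rewritten to take an OUT SYS_REFCURSOR parameter and using the unmanaged provider's Oracle.DataAccess.Client types (CreateCommand and CreateConnectionScope are the same helpers as in the snippet above):

using (var command = CreateCommand("uspGetAllNumericUnits", CommandType.StoredProcedure))
{
    // The explicit output parameter that DBMS_SQL.RETURN_RESULT makes unnecessary
    var cursor = command.Parameters.Add("cv_1", OracleDbType.RefCursor);
    cursor.Direction = ParameterDirection.Output;

    using (var connectionScope = command.Connection.CreateConnectionScope())
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... same row handling as before
        }
    }
}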

Related

EFUtilities with EFProfiler running

I was wondering if there is a way to get EFUtilities running at the same time as EFProfiler.
I appreciate that the profiler would not show the bulk insert, since it is done outside the confines of the DbContext. At the moment I cannot run batch jobs, because the profiler has the connection wrapped; it runs fine when the profiler is not enabled.
The exception I am getting is this:
A first chance exception of type 'System.InvalidOperationException'
occurred in EntityFramework.Utilities.dll
Additional information: No provider supporting the InsertAll operation
for this datasource was found
The inner exception is null.
This is because EFUtilities automatically finds the correct provider. But when the connection is wrapped this is no longer possible.
InsertAll looks like this.
public void InsertAll<TEntity>(IEnumerable<TEntity> items, DbConnection connection = null, int? batchSize = null)
To use the SqlProvider (which is currently the only provider out of the box), you can create a new SqlConnection and pass that to InsertAll.
So basically you would need to do this:
using (var db = new YourContext())
using (var con = new SqlConnection(YourConnectionString))
{
    EFBatchOperation.For(db, db.PartialTestClass1).InsertAll(partials, con);
}
Now, maybe you are doing more and want both parts to run under the same transaction. In that case you can wrap that code block in a TransactionScope.
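A sketch of that combined pattern, reusing YourContext, YourConnectionString, and partials from the snippet above (TransactionScope lives in System.Transactions; connections opened inside the scope enlist in the ambient transaction):

using (var scope = new TransactionScope())
{
    using (var db = new YourContext())
    using (var con = new SqlConnection(YourConnectionString))
    {
        // ... other work on db that should commit atomically with the bulk insert
        EFBatchOperation.For(db, db.PartialTestClass1).InsertAll(partials, con);
    }
    scope.Complete(); // without this, everything rolls back on dispose
}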

Data is not properly stored to hsqldb when using pooled data source by dbcp

I'm using hsqldb to create cached tables and indexed tables.
The data being stored arrives at a pretty high frequency, so I need to use a connection pool.
Also, because there is a lot of data, I do not call checkpoint on every commit, but rather expect the data to be flushed after 50,000 rows are inserted.
The thing is, I can see the .data file growing, but when I connect with an hsqldb client I don't see the tables or the data.
I ran two simple tests: one inserted a single row, and one inserted 60,000 rows into a new table. In both cases I couldn't see the result in any hsqldb client.
(Note that I use shutdown=true.)
Adding a checkpoint after each commit solves the problem.
Specifying in the connection string to use the log also solves it (I don't want the log in production, though), as does not using a pooled connection, or using the pooled data source and explicitly closing it before shutdown.
So I guess some connections in the connection pool are not being closed, preventing the db from committing the changes and making them available to the client. But then, why couldn't I see the result even with 60,000 rows?
I would also expect the pool to be closed automatically...
What am I doing wrong? What is happening behind the scene?
The code to get the data source looks like this:
Class.forName("org.hsqldb.jdbcDriver");
String url = "jdbc:hsqldb:" + m_dbRoot + dbName + "/db" + ";hsqldb.log_data=false;shutdown=true;hsqldb.nio_data_file=false";
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(url, user, password);
GenericObjectPool connectionPool = new GenericObjectPool();
KeyedObjectPoolFactory stmtPool = new GenericKeyedObjectPoolFactory(null);
// registers itself with connectionPool as a side effect of construction
new PoolableConnectionFactory(connectionFactory, connectionPool, stmtPool, null, false, true);
DataSource ds = new PoolingDataSource(connectionPool);
And I'm using this Pooled data source to create table:
Connection c = m_dataSource.getConnection();
Statement st = c.createStatement();
String script = String.format("CREATE CACHED TABLE IF NOT EXISTS %s (id %s NOT NULL, entity %s NOT NULL, PRIMARY KEY (id));", m_tableName, m_idGenerator.getIdType(), TABLE_ENTITY_TYPE);
st.execute(script);
st.close();
c.close();
And insert rows:
Connection c = m_dataSource.getConnection();
c.setAutoCommit(false);
PreparedStatement stmt = c.prepareStatement(m_sqlInsert);
stmt.setObject(1, id);
stmt.setBinaryStream(2, Serializer.Helper.serialize(m_serializer, entity));
stmt.executeUpdate();
stmt.close();
c.commit();
c.close();
So the above seems to add data, but the data cannot be seen.
Only when I explicitly called
connectionPool.close();
could I see the result.
I also tried to use JDBCDataSource and it worked as well.
So what is going on? And what is the right way to do this?
Your method of accessing the database from outside your application process is simply wrong.
Only one Java process is supposed to connect to a file: database.
In order to achieve your aim, launch an HSQLDB server within your application, using exactly the same JDBC URL. Then connect to this server from the external client.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/listeners-chapt.html#lsc_app_start
Update: The OP commented that the external client was used after the application had stopped. Because you have turned the log off with hsqldb.log_data=false, nothing is persisted permanently. You need to perform an explicit CHECKPOINT or SHUTDOWN when your application completes its work. You cannot rely on shutdown=true at all, even without connection pooling.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/deployment-chapt.html#dec_bulk_operations

jdbc statement connection

QUESTION: can you use multiple statements and result sets, operating simultaneously, over the same connection in a single-threaded program?
I found only this question that addresses it, but the answer there is not conclusive:
JDBC Statement/PreparedStatement per connection
The answer explains the relationship between result set and statement, which is known to me.
Given that, you cannot have multiple open result sets per statement.
The answer says that you can have multiple result sets per connection, but it cites no other sources.
I'm asking whether it is possible to loop over the first result set and, using the same connection (the one used to generate the first result set), open another result set and loop over it inside the iteration. And where is the documentation that defines this behavior?
The situation that interests me is like this, with the statements operating simultaneously:
Connection con = Factory.getDBConn(user, pss, endpoint, etc);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT TEXT FROM dba");
while (rs.next()) {
    rs.getInt(....
    rs.getInt(....
    rs.getInt(....
    rs.getInt(....

    Statement stmt2 = con.createStatement();
    ResultSet rs2 = stmt2.executeQuery("iSelect ......");
    while (rs2.next()) {
        ....
    }
    rs2.close();
    stmt2.close();

    Statement stmt3 = con.createStatement();
    stmt3.executeUpdate("Insert Into table xxx ......");
    ....
    stmt3.close();
}
To clarify a bit more: when the update in stmt3 executes, you could get an error like this:
java.sql.SQLException: There is an open result set on the current connection, which must be closed prior to executing a query.
So, with some drivers, you can't interleave other SQL on the same connection while a result set is open.
If I understand correctly, you need to work with two (or more) result sets simultaneously within a single method.
It is possible, and it works well. But you have to remember a few things:
Everything you do on each result set is handled by a single Connection, unless you declare new connections for each Statement (and ResultSet).
If you need a multithreaded process, I suggest you create a Connection for each thread (or use a connection pool). If you use a single connection in a multithreaded process, your program will hang or crash, since every SQL statement goes through the one connection, and every new statement has to wait until the previous one has finished.
Besides that, your question needs some clarification. What do you really need to do?
A ResultSet object is automatically closed when the Statement object that generated it is closed, re-executed, or used to retrieve the next result from a sequence of multiple results.
http://docs.oracle.com/javase/6/docs/api/java/sql/ResultSet.html
SQL Server is a database that does support multiple result sets, so you can execute several queries in a single stored procedure, for example:
SELECT * FROM employees
SELECT * FROM products
SELECT * FROM depts
You can then move between the result sets. At least I know you can do this in .NET, for example:
using (var conn = new SqlConnection("connstring"))
using (var command = new SqlCommand("SPName", conn))
{
    conn.Open();
    command.CommandType = CommandType.StoredProcedure;
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            //Process all records from first result set
        }
        reader.NextResult(); // advance to the next result set
        while (reader.Read())
        {
            //Process all records from 2nd result set
        }
        reader.NextResult();
        while (reader.Read())
        {
            //Process all records from 3rd result set
        }
    }
}
I am assuming Java supports a similar mechanism (it does: JDBC's Statement.getMoreResults() and getResultSet() let you step through multiple result sets).

.NET TransactionScopes and SQL 2005 Lightweight Transaction Manager - Multiple Connections same SPID?

Can someone shed light on what is happening behind the scenes with the SQL Lightweight transaction manager when multiple connections are opened to the same DB, using the Microsoft Data Access Application Block (DAAB)?
With the below code, we verified that MSDTC is indeed not required when opening 'multiple connections' to the same database.
This was the first scenario I tested: (where Txn1 and Txn2 use EntLib 4.1 to open a connection to the same DB and call different SPROCS)
using (var ts = new TransactionScope(TransactionScopeOption.Required))
{
    DAL1.Txn1();
    DAL2.Txn2();
    ts.Complete();
}
Tracing this in Profiler revealed that the same connection SPID was used for Txn1 and Txn2. After Txn1() completed, its connection would have been released back into the pool, and Txn2() was able to re-use it.
However, when repeating this experiment and this time holding the connections open:
using (var ts = new TransactionScope(TransactionScopeOption.Required))
{
    Database db1 = DatabaseFactory.CreateDatabase("db1");
    DAL1.Txn1OnCon(db1);
    Database db2 = DatabaseFactory.CreateDatabase("db1");
    DAL2.Txn2OnCon(db2);
    ts.Complete();
}
Viewing this in Profiler indicated that the two transactions were STILL using the same SPID. I was expecting the TransactionScope to escalate to DTC, since a distributed transaction should be required to control two concurrent connections. What have I missed?
Quoting from MSDN (http://msdn.microsoft.com/en-us/library/8xx3tyca(VS.80).aspx):
"Connection pooling reduces the number of times that new connections need to be opened. The pooler maintains ownership of the physical connection. It manages connections by keeping alive a set of active connections for each given connection configuration. Whenever a user calls Open on a connection, the pooler looks to see if there is an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pooled set of active connections instead of actually closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call."
Just because a connection was used in a transaction doesn't mean it cannot be available for the next call. I found that if the connection string varied in the slightest way, such as the capitalization of a hostname, you'd get a new physical connection to the db.
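To see the pooling behavior directly, here is a minimal sketch (the connection string is illustrative; it assumes a local default instance and uses SELECT @@SPID to identify the server session serving each call):

using System;
using System.Data.SqlClient;

class PoolingDemo
{
    static short GetSpid(string connString)
    {
        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand("SELECT @@SPID", conn))
        {
            conn.Open();
            return (short)cmd.ExecuteScalar(); // session id serving this connection
        } // Dispose/Close returns the physical connection to the pool
    }

    static void Main()
    {
        string cs = "server=(local); database=foo; integrated security=true";
        Console.WriteLine(GetSpid(cs)); // e.g. 55
        Console.WriteLine(GetSpid(cs)); // typically the same SPID: the pooled connection was reused
        // The pool key is the literal connection string, so varying only the
        // casing creates a separate pool and a new physical connection:
        Console.WriteLine(GetSpid(cs.ToUpperInvariant()));
    }
}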
SQL 2005 or SQL 2008?
If you use SQL 2008, a sequence of open+close connections inside a TransactionScope is not escalated to a distributed transaction, but all the connections must use exactly the same connection string.
For example:
string connstring = "....";
using (var ts = new TransactionScope())
{
    using (var c1 = new SqlConnection(connstring))
    {
        c1.Open();
        // ... use c1
    }
    using (var c2 = new SqlConnection(connstring))
    {
        c2.Open();
        // ... use c2
    }
    ts.Complete();
}
The same code with SQL 2005 escalates to a distributed transaction, so you need MSDTC.
OK, my misunderstanding was with DAAB.
The DAAB Database opens and closes connections as needed (or obtains / releases them from the pool), i.e. connections aren't held for the lifespan of the DAAB Database object.
It is possible to manually control the database connections in DAAB, as below: by holding the actual connections open, they cannot be reused. This then requires MSDTC to be running as soon as two physical connections are open, as I had expected in the original question.
using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required))
{
    using (DbConnection dbConn1 = DatabaseFactory.CreateDatabase("myDb").CreateConnection())
    using (DbConnection dbConn2 = DatabaseFactory.CreateDatabase("myDb").CreateConnection())
    {
        dbConn1.Open();
        DAL1.Txn1OnCon(dbConn1);
        dbConn2.Open();
        DAL2.Txn2OnCon(dbConn2);
        DAL1.Txn1OnCon(dbConn1);
        ts.Complete();
    }
}

why would a SQLCLR proc run slower than the same code client side

I am writing a stored procedure that, when completed, will be used to scan staging tables for bogus data, column by column.
Step one in the exercise was just to scan the table, which is what the code below does. The issue is that this code runs in 5:45 (minutes:seconds), whereas the same code run as a console app (changing the connection string, of course) runs in about 44 seconds.
using (SqlConnection sqlConnection = new SqlConnection("context connection=true"))
{
    sqlConnection.Open();
    string sqlText = string.Format("select * from {0}", source_table.Value);
    int count = 0;
    using (SqlCommand sqlCommand = new SqlCommand(sqlText, sqlConnection))
    using (SqlDataReader reader = sqlCommand.ExecuteReader())
    {
        while (reader.Read())
            count++;
        SqlDataRecord record = new SqlDataRecord(new SqlMetaData("rowcount", SqlDbType.Int));
        SqlContext.Pipe.SendResultsStart(record);
        record.SetInt32(0, count);
        SqlContext.Pipe.SendResultsRow(record);
        SqlContext.Pipe.SendResultsEnd();
    }
}
What am I missing on the SP side that would cause it to run so slowly? The console-app time is much closer to what I was expecting.
Please note: I fully understand that if I wanted a count of rows, I should use the count(*) aggregate; that's not the purpose of this exercise.
The type of code you are writing is highly susceptible to SQL injection. Also, rather than processing the reader the way you are, you could just use the RecordsAffected property to find the number of rows in the reader.
EDIT:
After doing some research, it turns out the difference you are seeing is a by-design difference between the context connection and a regular connection. Peter Debetta blogged about this, writing:
"The context connection is written such that it only fetches a row at a time, so for each of the 20 million some odd rows, the code was asking for each row individually. Using a non-context connection, however, it requests 8K worth of rows at a time."
http://sqlblog.com/blogs/peter_debetta/archive/2006/07/21/context-connection-is-slow.aspx
Well, it would seem the answer is in the connection string after all:
context connection=true
versus
server=(local); database=foo; integrated security=true
For some bizarre reason, using the "external" connection the SP runs almost as fast as the console app (still not as fast, mind you: 55 seconds).
Of course, now the assembly has to be deployed as External rather than Safe, and that introduces more frustration.
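For reference, the only code change implied here is the connection string in the procedure above; a sketch (the database name is illustrative, and the assembly needs the EXTERNAL_ACCESS permission set):

// Replaces new SqlConnection("context connection=true") in the proc above
using (SqlConnection sqlConnection = new SqlConnection("server=(local); database=foo; integrated security=true"))
{
    sqlConnection.Open();
    // ... identical reader loop; rows now arrive in batches instead of one at a time
}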