QUESTION: Can you use multiple statements and result sets, operating simultaneously, over the same connection in a single-threaded (non-multithreaded) program?
I found only one question that touches on this, but its answer is not conclusive:
JDBC Statement/PreparedStatement per connection
The answer explains the relationship between a ResultSet and its Statement, which is already known to me.
Given that, you cannot have multiple open ResultSets per Statement.
The answer also says that you can have multiple ResultSets per Connection, but it cites no other sources.
I'm asking whether it's possible to loop over a first ResultSet and, inside that loop, use the same Connection (the one that produced the first ResultSet) to open and iterate over a second ResultSet. And where is the documentation that defines this behavior?
The situation that interests me looks like this, with the statements performing their tasks simultaneously inside the loop:
Connection con = Factory.getDBConn(user, pss, endpoint, etc);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT TEXT FROM dba");
while (rs.next()) {
    rs.getInt(....
    rs.getInt(....
    rs.getInt(....
    rs.getInt(....

    Statement stmt2 = con.createStatement();
    ResultSet rs2 = stmt2.executeQuery("SELECT ......");
    while (rs2.next()) {
        ....
    }
    rs2.close();
    stmt2.close();

    Statement stmt3 = con.createStatement();
    stmt3.executeUpdate("INSERT INTO xxx ......");
    ....
    stmt3.close();
}
To clarify a bit more: when the update in stmt3 executes, you can get an error like this:
java.sql.SQLException: There is an open result set on the current connection, which must be closed prior to executing a query.
So it seems you can't mix SQL statements on the same connection while a result set is open.
If I understand correctly, you need to work with two (or more) result sets simultaneously within a single method.
It is possible, and it works well. But you have to remember a few things:
Everything you do on each ResultSet is handled by a single Connection, unless you open a new Connection for each Statement (and its ResultSet).
If you need a multithreaded process, I suggest you create a Connection for each thread (or use a connection pool). If you use a single connection in a multithreaded process, your program will hang or crash, since every SQL statement goes through the single connection, and each new statement has to wait until the previous one has finished.
Besides that, your question needs some clarification: what do you really need to do?
A ResultSet object is automatically closed when the Statement object that generated it is closed, re-executed, or used to retrieve the next result from a sequence of multiple results.
http://docs.oracle.com/javase/6/docs/api/java/sql/ResultSet.html
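To make that concrete, here is a minimal sketch of two ResultSets open at the same time on a single Connection, each owned by its own Statement. The JDBC URL, tables, and columns are made-up placeholders, and whether a second query can run while a result set is open can still depend on the driver and on settings such as auto-commit and result-set holdability:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TwoOpenResultSets {
    public static void main(String[] args) throws SQLException {
        // One Connection, two Statements: each Statement may own at most
        // one open ResultSet, but the Connection can host several.
        try (Connection con = DriverManager.getConnection(
                "jdbc:yourdb://host/demo", "user", "pass");
             Statement outer = con.createStatement();
             PreparedStatement inner = con.prepareStatement(
                 "SELECT name FROM b WHERE a_id = ?")) {
            try (ResultSet rs1 = outer.executeQuery("SELECT id FROM a")) {
                while (rs1.next()) {
                    inner.setInt(1, rs1.getInt(1));
                    // rs1 stays open while rs2 is created and iterated.
                    try (ResultSet rs2 = inner.executeQuery()) {
                        while (rs2.next()) {
                            System.out.println(rs2.getString(1));
                        }
                    }
                }
            }
        }
    }
}

The key point is one Statement per concurrently open ResultSet: re-executing a Statement closes its previous ResultSet, exactly as the javadoc quoted above says.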
SQL Server is a database that does support multiple result sets, so you can execute several queries in a single stored procedure, for example:
SELECT * FROM employees
SELECT * FROM products
SELECT * FROM depts
You can then move between the result sets. I know you can do this in .NET, for example:
using (var conn = new SqlConnection("connstring"))
using (var command = new SqlCommand("SPName", conn))
{
    conn.Open();
    command.CommandType = CommandType.StoredProcedure;
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Process all records from the first result set
        }
        reader.NextResult();
        while (reader.Read())
        {
            // Process all records from the 2nd result set
        }
        reader.NextResult();
        while (reader.Read())
        {
            // Process all records from the 3rd result set
        }
    }
}
Java supports a similar mechanism via Statement.execute() combined with getResultSet(), getMoreResults(), and getUpdateCount().
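Here is a hedged sketch of that idiom; the procedure name SPName and the connection details are placeholders carried over from the .NET snippet above:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MultiResultSetDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=demo", "user", "pass");
             CallableStatement stmt = con.prepareCall("{call SPName}")) {
            boolean isResultSet = stmt.execute();
            while (true) {
                if (isResultSet) {
                    try (ResultSet rs = stmt.getResultSet()) {
                        while (rs.next()) {
                            // process the current result set
                        }
                    }
                } else if (stmt.getUpdateCount() == -1) {
                    break; // no more results of any kind
                }
                // Advance to the next result produced by the procedure.
                isResultSet = stmt.getMoreResults();
            }
        }
    }
}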
Related
I have the following code to check if a table exists:
var selectQuery = $"SELECT count(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '{tableName}'";
using (var conn = new SqlConnection(_sqlServerConnectionString.SqlServerConnectionString))
{
conn.Open();
using (var cmd = new SqlCommand(selectQuery, conn))
{
var result = (int)cmd.ExecuteScalar();
return result > 0;
}
conn.Close();
}
This is called multiple times. On running it, I see:
The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
What am I missing? I have closed the connection, so I'm not sure what's wrong.
This probably points to a leak somewhere else. Also, if some kind of error occurs between opening and closing a connection that is not wrapped in a using block, connections will keep piling up, since execution never reaches the close call.
Please refer to this other Stack Overflow link:
How can I solve a connection pool problem between ASP.NET and SQL Server?
I was wondering if there is a way to get EFUtilities running at the same time as EFProfiler.
I appreciate that the profiler would not show the bulk insert, since it is done outside the confines of the DbContext. At the moment I cannot run batch jobs, because the profiler has the connection wrapped; it runs fine when the profiler is not enabled.
The exception I am getting is thus:
A first chance exception of type 'System.InvalidOperationException'
occurred in EntityFramework.Utilities.dll
Additional information: No provider supporting the InsertAll operation
for this datasource was found
The inner exception is null.
This is because EFUtilities automatically finds the correct provider. But when the connection is wrapped this is no longer possible.
InsertAll looks like this.
public void InsertAll<TEntity>(IEnumerable<TEntity> items, DbConnection connection = null, int? batchSize = null)
To use the SqlProvider (which is actually the only provider out of the box), you can create a new SqlConnection() and pass that to InsertAll.
So basically you would need to do this:
using (var db = new YourContext())
using (var con = new SqlConnection(YourConnectionString))
{
    EFBatchOperation.For(db, db.PartialTestClass1).InsertAll(partials, con);
}
Now, maybe you are doing more and want both parts to run under the same transaction. In that case you can wrap that code block in a TransactionScope.
I'm porting a .NET application from SQL Server to Oracle 12c.
I'm using the unmanaged 64-bit ODAC 12c Release 2 (12.1.0.1.2) client to access the database.
Oracle 12c introduced the DBMS_SQL.RETURN_RESULT(cur) procedure, which allows me to reuse the .NET code as it is, without having to add specific output parameters to the ADO.NET commands.
This is a snippet of what my code looks like:
using (var command = CreateCommand("uspGetAllNumericUnits", CommandType.StoredProcedure))
{
    using (var connectionScope = command.Connection.CreateConnectionScope())
    {
        using (var reader = command.ExecuteReader())
        {
            if (reader.HasRows)
            {
                while (reader.Read())
                {
                    ....
                }
            }
        }
    }
}
The uspGetAllNumericUnits stored procedure is simply something like:
PROCEDURE uspGetAllNumericUnits_RPT
AS
  cv_1 SYS_REFCURSOR;
BEGIN
  OPEN cv_1 FOR
    SELECT * FROM NumericUnit;
  DBMS_SQL.RETURN_RESULT(cv_1);
END;
I believe I properly dispose of the reader and the command, and the connection is closed and disposed as well when the connectionScope is disposed.
However, if I check the v$open_cursor view on the server, I see that a cursor for "SELECT * FROM NumericUnit;" remains open.
I use connection pooling; if I disable it (which I do not want to do), the cursor gets correctly closed as the connectionScope is disposed.
My issue is that I have many ADO.NET calls like this one, so I quickly reach the maximum allowed cursors per session and an ORA-01000 error is raised.
If I use the old Oracle 11g approach, returning the result-set cursor as an output parameter instead of using DBMS_SQL.RETURN_RESULT(cv_1), the cursor gets correctly closed as the connectionScope is disposed, with or without connection pooling.
Are there extra objects I need to dispose of to close the implicit ref cursors? Is this a known ODAC 12c R2 bug?
The introduction of DBMS_SQL.RETURN_RESULT(cv_1) makes porting from SQL Server to Oracle much easier than adding output parameters almost everywhere, but I do not want to give up connection pooling.
I'm using HSQLDB to create cached tables and indexed tables.
The data being stored arrives at a pretty high frequency, so I need to use a connection pool.
Also, because there is a lot of data, I do not call CHECKPOINT on every commit, but rather expect the data to be flushed after 50,000 rows are inserted.
The thing is, I can see the .data file growing, but when I connect with an HSQLDB client I don't see the tables or the data.
So I ran 2 simple tests: one inserted a single row, and one inserted 60,000 rows into a new table. In both cases I couldn't see the results in any HSQLDB client.
(Note that I use shutdown=true.)
When I add a CHECKPOINT after each commit, it solves the problem.
It is also solved if I specify in the connection string to use the log (though I don't want the log in production), if I don't use a pooled connection, or if I use a pooled data source and explicitly close it before shutdown.
So I guess that some connections in the connection pool are not being closed, preventing the database from committing the changes and making them available to the client. But then, why couldn't I see the results even with 60,000 rows?
I would also expect the pool to be closed automatically...
What am I doing wrong? What is happening behind the scenes?
The code to get the data source looks like this:
Class.forName("org.hsqldb.jdbcDriver");
String url = "jdbc:hsqldb:" + m_dbRoot + dbName + "/db" + ";hsqldb.log_data=false;shutdown=true;hsqldb.nio_data_file=false";
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(url, user, password);
GenericObjectPool connectionPool = new GenericObjectPool();
KeyedObjectPoolFactory stmtPool = new GenericKeyedObjectPoolFactory(null);
new PoolableConnectionFactory(connectionFactory, connectionPool, stmtPool, null, false, true);
DataSource ds = new PoolingDataSource(connectionPool);
And I'm using this pooled data source to create a table:
Connection c = m_dataSource.getConnection();
Statement st = c.createStatement();
String script = String.format("CREATE CACHED TABLE IF NOT EXISTS %s (id %s NOT NULL, entity %s NOT NULL, PRIMARY KEY (id));", m_tableName, m_idGenerator.getIdType(), TABLE_ENTITY_TYPE);
st.execute(script);
st.close();
c.close();
And to insert rows:
Connection c = m_dataSource.getConnection();
c.setAutoCommit(false);
PreparedStatement stmt = c.prepareStatement(m_sqlInsert);
stmt.setObject(1, id);
stmt.setBinaryStream(2, Serializer.Helper.serialize(m_serializer, entity));
stmt.executeUpdate();
stmt.close();
c.commit();
c.close();
So the above seems to add data, but the data cannot be seen.
When I explicitly called
connectionPool.close();
then, and only then, could I see the results.
I also tried using JDBCDataSource, and that worked as well.
So what is going on? And what is the right way to do this?
Your method of accessing the database from outside your application process is simply wrong.
Only one Java process at a time is supposed to connect to a file: database.
In order to achieve your aim, launch an HSQLDB server within your application, using exactly the same JDBC URL. Then connect to this server from the external client.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/listeners-chapt.html#lsc_app_start
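As a rough sketch of what starting an in-process server can look like (the database name and port below are illustrative placeholders; the path reuses the same file: location as the in-process JDBC URL from the question):

import org.hsqldb.server.Server;

Server server = new Server();
server.setDatabaseName(0, "mydb");
server.setDatabasePath(0, "file:" + m_dbRoot + dbName + "/db;hsqldb.log_data=false");
server.setPort(9001);    // default HSQLDB listener port
server.start();

// External clients can now connect with:
//   jdbc:hsqldb:hsql://localhost:9001/mydb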
Update: The OP commented that the external client was used after the application had stopped. Because the log is turned off with hsqldb.log_data=false, nothing is persisted permanently unless you perform an explicit CHECKPOINT or SHUTDOWN when your application completes its work. You cannot rely on shutdown=true at all, even without connection pooling.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/deployment-chapt.html#dec_bulk_operations
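As a minimal sketch of that explicit step, reusing the m_dataSource from the question (and assuming the connection pool has been closed first, so no other connections are still writing):

import java.sql.Connection;
import java.sql.Statement;

// Run once, just before the application exits.
try (Connection c = m_dataSource.getConnection();
     Statement st = c.createStatement()) {
    st.execute("SHUTDOWN");   // or "CHECKPOINT" to persist and keep running
}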
I am writing a stored procedure that, when completed, will be used to scan staging tables for bogus data on a column-by-column basis.
Step one in the exercise was just to scan the table, which is what the code below does. The issue is that this code runs in 5 minutes 45 seconds, whereas the same code run as a console app (changing the connection string, of course) runs in about 44 seconds, which is closer to what I was expecting on the client side.
using (SqlConnection sqlConnection = new SqlConnection("context connection=true"))
{
    sqlConnection.Open();
    string sqlText = string.Format("select * from {0}", source_table.Value);
    int count = 0;
    using (SqlCommand sqlCommand = new SqlCommand(sqlText, sqlConnection))
    using (SqlDataReader reader = sqlCommand.ExecuteReader())
    {
        while (reader.Read())
            count++;
        SqlDataRecord record = new SqlDataRecord(new SqlMetaData("rowcount", SqlDbType.Int));
        SqlContext.Pipe.SendResultsStart(record);
        record.SetInt32(0, count);
        SqlContext.Pipe.SendResultsRow(record);
        SqlContext.Pipe.SendResultsEnd();
    }
}
What am I missing on the stored-procedure side that would cause it to run so slowly?
Please note: I fully understand that if I wanted a count of rows, I should use the COUNT(*) aggregate; that's not the purpose of this exercise.
The type of code you are writing is highly susceptible to SQL Injection. Rather than processing the reader like you are, you could just use the RecordsAffected Property to find the number of rows in the reader.
EDIT:
After doing some research, the difference you are seeing is a by-design difference between the context connection and a regular connection. Peter DeBetta blogged about this and writes:
"The context connection is written such that it only fetches a row at a time, so for each of the 20 million some odd rows, the code was asking for each row individually. Using a non-context connection, however, it requests 8K worth of rows at a time."
http://sqlblog.com/blogs/peter_debetta/archive/2006/07/21/context-connection-is-slow.aspx
Well, it would seem the answer is in the connection string after all.
context connection=true
versus
server=(local); database=foo; integrated security=true
For some bizarre reason, using the "external" connection, the SP runs almost as fast as the console app (still not as fast, mind you: 55 seconds).
Of course, now the assembly has to be deployed as EXTERNAL_ACCESS rather than SAFE, and that introduces more frustration.