I was wondering if there was a way to get EFUtilities running at the same time EFProfiler is running.
I appreciate the profiler would not show the bulk insert, since it is done outside the confines of the DbContext. At the moment I cannot run batch jobs while the profiler has the connection wrapped; they run fine when the profiler is not enabled.
The exception I am getting is thus:
A first chance exception of type 'System.InvalidOperationException' occurred in EntityFramework.Utilities.dll
Additional information: No provider supporting the InsertAll operation for this datasource was found
The inner exception is null.
This is because EFUtilities automatically finds the correct provider, but when the connection is wrapped this is no longer possible.
InsertAll looks like this:
public void InsertAll<TEntity>(IEnumerable<TEntity> items, DbConnection connection = null, int? batchSize = null)
To use the SqlProvider (which is actually the only provider out of the box), you can create a new SqlConnection() and pass that to InsertAll.
So basically you would need to do this:
using (var db = new YourContext())
using (var con = new SqlConnection(YourConnectionString))
{
    EFBatchOperation.For(db, db.PartialTestClass1).InsertAll(partials, con);
}
Now, maybe you are doing more and want both parts to run under the same transaction. In that case, you can wrap that code block in a TransactionScope.
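A minimal sketch of that, assuming the bulk-insert connection is opened inside the scope so it enlists in the ambient transaction (same placeholder names as above):

using (var scope = new TransactionScope())
using (var db = new YourContext())
using (var con = new SqlConnection(YourConnectionString))
{
    // other EF work that should commit or roll back together with the bulk insert
    db.SaveChanges();
    EFBatchOperation.For(db, db.PartialTestClass1).InsertAll(partials, con);
    scope.Complete(); // disposing the scope without Complete() rolls everything back
}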
I have a SQL job (a 4-step job consisting of SSIS packages) which runs on a daily basis and extracts data from various sources (source1, source2, source3), then loads the data into a warehouse. Currently the job fails at step 1 due to a 'Communication Link failure' with source1.
Is there any way I can set up the SQL job to retry based on this specific error only?
For example, if I get a 'primary key violation' error or some other data-related issue, then we should be notified directly that the job failed; but if we get a 'Communication Link failure' error, then step 1 should retry.
Any suggestion would be appreciated.
Short answer: No, not with the SQL agent.
Longer answer: Maybe you can build up some logic where the package checks whether the previous error was the specific error you're looking for and, if so, executes again. Cumbersome, but possible.
You can create an Event Handler for the OnError event with a Script Task which will check for this specific error and execute msdb.dbo.sp_start_job if this error occurred. Since I'm not sure of the exact error code you're getting, this only checks the @[System::ErrorDescription] system variable for the specific text, using the StringComparison.CurrentCultureIgnoreCase option to make the match case-insensitive. However, I would strongly recommend finding the exact error code and verifying against the @[System::ErrorCode] variable instead. I'd also suggest only retrying the job a certain number of times, or within a given time frame, to avoid excessive failures if the issue persists.
string errorMsg = Dts.Variables["System::ErrorDescription"].Value.ToString();

if (errorMsg.IndexOf("Communication Link failure", 0, StringComparison.CurrentCultureIgnoreCase) >= 0)
{
    string connString = @"YourConnectionString;";
    string startJobCmd = @"EXEC msdb.dbo.sp_start_job N'NameOfJobToRetry';";

    using (SqlConnection conn = new SqlConnection(connString))
    using (SqlCommand sql = new SqlCommand(startJobCmd, conn))
    {
        conn.Open();
        sql.ExecuteNonQuery();
    }
}
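If you do track down the numeric code, the check could be based on @[System::ErrorCode] instead. A sketch of that idea (the placeholder value must be replaced with the code you verify; note that a package-scoped counter resets on every run, so capping retries across runs would need a persistent store such as a table):

// Hypothetical sketch: verify against the numeric error code rather than message text.
int errorCode = (int)Dts.Variables["System::ErrorCode"].Value;

// Placeholder: replace with the verified code for 'Communication Link failure'.
const int linkFailureCode = 0;

if (errorCode == linkFailureCode)
{
    // ... execute msdb.dbo.sp_start_job exactly as in the block above ...
}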
I'm using hsqldb to create cached tables and indexed tables.
The data being stored arrives at a pretty high frequency, so I need to use a connection pool.
Also, because there is a lot of data, I do not call CHECKPOINT on every commit, but rather expect the data to be flushed after every 50,000 rows inserted.
The thing is, I can see the .data file growing, but when I connect with an HSQLDB client I don't see the tables or the data.
So I ran two simple tests: one inserted a single row, and one inserted 60,000 rows into a new table. In both cases I couldn't see the result in any HSQLDB client.
(Note that I use shutdown=true)
When I add a checkpoint after each commit, it solves the problem.
Specifying in the connection string to use the log also solves the problem (though I don't want the log in production). Not using a pooled connection solved the problem too, and lastly, so did using a pooled data source and explicitly closing it before shutdown.
So I guess that some connections in the connection pool are not being closed, preventing the db from committing the changes and making them available to the client. But then, why couldn't I see the result even with 60,000 rows?
I would also expect the pool to be closed automatically...
What am I doing wrong? What is happening behind the scene?
The code to get the data source looks like this:
Class.forName("org.hsqldb.jdbcDriver");
String url = "jdbc:hsqldb:" + m_dbRoot + dbName + "/db" + ";hsqldb.log_data=false;shutdown=true;hsqldb.nio_data_file=false";
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(url, user, password);
GenericObjectPool connectionPool = new GenericObjectPool();
KeyedObjectPoolFactory stmtPool = new GenericKeyedObjectPoolFactory(null);
// The factory registers itself with the pool as a side effect of construction.
new PoolableConnectionFactory(connectionFactory, connectionPool, stmtPool, null, false, true);
DataSource ds = new PoolingDataSource(connectionPool);
And I'm using this pooled data source to create a table:
Connection c = m_dataSource.getConnection();
Statement st = c.createStatement();
String script = String.format("CREATE CACHED TABLE IF NOT EXISTS %s (id %s NOT NULL, entity %s NOT NULL, PRIMARY KEY (id));", m_tableName, m_idGenerator.getIdType(), TABLE_ENTITY_TYPE);
st.execute(script);
st.close();
c.close();
And to insert rows:
Connection c = m_dataSource.getConnection();
c.setAutoCommit(false);
PreparedStatement stmt = c.prepareStatement(m_sqlInsert);
stmt.setObject(1, id);
stmt.setBinaryStream(2, Serializer.Helper.serialize(m_serializer, entity));
stmt.executeUpdate();
stmt.close();
c.commit();
c.close();
So the above seems to add data, but it cannot be seen.
When I explicitly called
connectionPool.close();
then, and only then, could I see the result.
I also tried to use JDBCDataSource and it worked as well.
So what is going on? And what is the right way to do this?
Your method of accessing the database from outside your application process is simply wrong.
Only one Java process at a time is supposed to connect to a file: database.
In order to achieve your aim, launch an HSQLDB server within your application, using exactly the same JDBC URL. Then connect to this server from the external client.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/listeners-chapt.html#lsc_app_start
Update: The OP commented that the external client was used after the application had stopped. Because you have turned the log off with hsqldb.log_data=false, nothing is persisted permanently. You need to perform an explicit CHECKPOINT or SHUTDOWN when your application completes its work. You cannot rely on shutdown=true at all, even without connection pooling.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/deployment-chapt.html#dec_bulk_operations
I've been working on this for about a day and a half now, and have searched numerous blogs and help articles on the Web. I found several questions on SO related to this error, but I didn't think they quite applied to my situation (or, in some cases, unfortunately, I couldn't understand them well enough to implement them :P). I'm not sure I can describe this well enough for help... but here goes:
We have a .NET app to track our resources. There's an export function to copy a resource to the time tracking system and the billing system; this accesses a stored procedure that links to the time and billing databases.
I recently moved the billing system database to a new server (original server: Server 2003 SP2, SQL 2005; new server: Server 2008 R2, SQL 2008 R2). I have a Linked Server set up that points to the 2008 databases. I updated the stored procedure to point to the 2008 server, and then I got an error about MSDTC and RPC (http://www.safnet.com/writing/tech/archives/2007/06/server_myserver.html). I enabled 'rpc/rpc out' on the Linked Server and set MSDTC to allow Network Access (something like this: http://www.sqlwebpedia.com/content/msdtc-troubleshooting).
Now, when I try to run the export function, I'm getting the above: "This SqlTransaction has completed; it is no longer usable." What seems odd to me is that when I just run the stored procedure (from SSMS), it completes successfully.
Has anyone seen this before? Have I missed something in the configuration? I keep going over the same pages, and the only thing I found was that I didn't reboot after making the MSDTC changes (mentioned here: http://social.msdn.microsoft.com/forums/en-US/adodotnetdataproviders/thread/7172223f-acbe-4472-8cdf-feec80fd2e64/).
I can post part or all of the stored procedure, if it would help... please let me know.
I believe this error message is due to a "zombie transaction".
Look for possible areas where the transaction is being committed twice (or rolled back twice, or rolled back and committed, etc.). Does the .NET code commit the transaction after the SP has already committed it? Does the .NET code roll it back on encountering an error, then attempt to roll it back again in a catch (or finally) clause?
It's possible an error condition was never hit on the old server, and thus the faulty "double rollback" code was never exercised. Maybe now there is some configuration error on the new server, and the faulty code is getting hit via exception handling.
Can you debug into the error code? Do you have a stack trace?
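For illustration, the faulty shape to look for is roughly this (a hypothetical sketch, not the OP's code; connString is a placeholder):

using (var conn = new SqlConnection(connString))
{
    conn.Open();
    SqlTransaction trans = conn.BeginTransaction();
    try
    {
        // ... work that throws ...
        trans.Commit();
    }
    catch
    {
        trans.Rollback(); // first rollback completes the transaction
        throw;
    }
    finally
    {
        trans.Rollback(); // second rollback hits a completed transaction:
                          // "This SqlTransaction has completed; it is no longer usable."
    }
}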
I had this recently after refactoring in a new connection manager. A new routine accepted a transaction so it could be run as part of a batch; the problem was with a using block:
public IEnumerable<T> Query<T>(IDbTransaction transaction, string command, dynamic param = null)
{
    using (transaction.Connection)
    {
        using (transaction)
        {
            return transaction.Connection.Query<T>(command, new DynamicParameters(param), transaction, commandType: CommandType.StoredProcedure);
        }
    }
}
It looks as though the outer using was closing the underlying connection, so any attempt to commit or roll back the transaction threw up the message "This SqlTransaction has completed; it is no longer usable."
I removed the usings, added a covering test, and the problem went away.
public IEnumerable<T> Query<T>(IDbTransaction transaction, string command, dynamic param = null)
{
    return transaction.Connection.Query<T>(command, new DynamicParameters(param), transaction, commandType: CommandType.StoredProcedure);
}
Check for anything that might be closing the connection while inside the context of a transaction.
Had the exact same problem and just could not find the right solution.
Hope this helps somebody.
I had a .NET Core 3.1 WebApi with EF Core. Upon receiving multiple calls at the same time, the application was trying to add and save changes to the database concurrently.
In my case the problem was that the table the data would be saved in did not have a primary key set.
Somehow EF Core missed, when the migration was run from the application, that the ID in the model was supposed to be a primary key.
I found the problem by opening the SQL Profiler and seeing that all transactions were successfully submitted to the database (from the application), but only one new row was created. The profiler also showed that some kind of deadlock was happening, but I couldn't see much more in its trace logs.
On further inspection I noticed that the primary key identifier was missing on the column "Id".
The exceptions I got from my application was:
This SqlTransaction has completed; it is no longer usable.
and/or
An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call.
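For reference, the retry option the second message suggests is configured roughly like this in a typical Startup.ConfigureServices (a sketch; MyDbContext and the connection string name are placeholders — though in my case the real fix was adding the missing primary key, not retries):

services.AddDbContext<MyDbContext>(options =>
    options.UseSqlServer(
        Configuration.GetConnectionString("Default"),
        sqlOptions => sqlOptions.EnableRetryOnFailure()));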
I had the same problem. This error can occur because of connection pooling: when two or more users access the system, the connection pool reuses a connection, and the transaction along with it. If the first user executes a commit or rollback, the transaction is no longer usable.
I recently ran across a similar situation. To debug in any VS IDE version, open Exceptions from the Debug menu (Ctrl + D, E), check all the checkboxes in the "Thrown" column, and run the application in debug mode. I realized that one of the tables was not imported properly into the new database, so an internal SqlException was killing the connection, which resulted in this error.
The gist of the story: if previously working code returns this error on a new database, it could be a missing database schema issue, which the debugging tip above can reveal.
Hope It Helps,
HydTechie
Also check for any long-running processes executed from your .NET app against the DB. For example, you may be calling a stored procedure or query which does not have enough time to finish, which can show up in your logs as:
Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
This SqlTransaction has completed; it is no longer usable.
Check the command timeout settings.
Try to run a trace (profiler) and see what is happening on the DB side...
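For the command timeout suggestion above, a minimal ADO.NET sketch (names are hypothetical; the default timeout is 30 seconds):

using (SqlConnection conn = new SqlConnection(connString))
using (SqlCommand cmd = new SqlCommand("dbo.LongRunningProc", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandTimeout = 120; // seconds; 0 means wait indefinitely
    conn.Open();
    cmd.ExecuteNonQuery();
}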
In my case the problem was that one of the queries included in the transaction was raising an exception, and even though the exception was "gracefully" handled, it still managed to roll back the entire transaction.
My pseudo-code looked like this:
var transaction = connection.BeginTransaction();

for (all the lines in a file)
{
    try
    {
        InsertLineInTable(); // INSERT statement might fail and throw an exception
    }
    catch
    {
        // notify the user about the error on line x and continue
    }
}

// Commit and Rollback will fail if one of the queries
// in InsertLineInTable threw an exception
if (CheckTableForErrors())
{
    transaction.Commit();
}
else
{
    transaction.Rollback();
}
Here is a way to detect a zombie transaction:
SqlTransaction trans = connection.BeginTransaction();

// some db calls here

if (trans.Connection != null) // detecting a zombie transaction
{
    trans.Commit();
}
Decompiling the SqlTransaction class, you will see the following:
public SqlConnection Connection
{
    get
    {
        if (this.IsZombied)
            return (SqlConnection) null;
        return this._connection;
    }
}
I noticed that if the connection is closed, the transaction becomes a zombie and thus cannot commit.
In my case, it was because I had the Commit() inside a finally block while the connection was in the try block. That arrangement caused the connection to be disposed and garbage collected, so the solution was to put the Commit() inside the try block instead.
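A simplified sketch of that arrangement (names hypothetical):

SqlTransaction trans = null;
try
{
    using (SqlConnection conn = new SqlConnection(connString))
    {
        conn.Open();
        trans = conn.BeginTransaction();
        // ... work ...
        // Correct: trans.Commit() belongs here, while the connection is still open.
    }
}
finally
{
    trans.Commit(); // Wrong: the connection is already disposed, so the transaction is zombied.
}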
For what it's worth, I've run into this on previously working code. I had added SELECT statements in a trigger for debug testing and forgot to remove them. Entity Framework / MVC doesn't play nice when other stuff is output to the "grid". Make sure to check for any rogue queries and remove them.
In my case, I had some code that needed to execute after committing the transaction, inside the same try-catch block. One of those statements threw an error, and the try block handed the error over to its catch block, which contains the transaction rollback.
Rolling back the already-committed transaction shows a similar error. For example, look at the code structure below:
SqlTransaction trans = null;
try
{
    trans = Con.BeginTransaction();
    // your codes
    trans.Commit();
    // your codes having errors
}
catch (Exception ex)
{
    trans.Rollback(); // transaction rollback
    // error message
}
finally
{
    // connection close
}
Hope it will help someone :)
please help me resolve this problem:
There is an ambient MSMQ transaction. I'm trying to use a new transaction for logging, but I get the following error when attempting to submit changes: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." Here is the code:
public static void SaveTransaction(InfoToLog info)
{
    using (TransactionScope scope =
        new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        using (TransactionLogDataContext transactionDC =
            new TransactionLogDataContext())
        {
            transactionDC.MyInfo.InsertOnSubmit(info);
            transactionDC.SubmitChanges();
        }
        scope.Complete();
    }
}
Please help me.
Thx.
You could consider increasing the timeout or eliminating it altogether.
Something like:
using (TransactionLogDataContext transactionDC = new TransactionLogDataContext())
{
    transactionDC.CommandTimeout = 0; // no timeout
}
Be careful
You said:
Thank you, but this solution raises a new question: if the transaction scope was changed, why does the submit operation become so time-consuming? The database and application are on the same machine.
That is because you are creating a new DataContext right there:
TransactionLogDataContext transactionDC = new TransactionLogDataContext())
With a new data context, ADO.NET opens up a new connection (even if the connection strings are the same, unless you do some clever connection pooling).
Within a transaction context, when you try to work with more than one connection instance (which you just did), ADO.NET automatically promotes the transaction to a distributed transaction and tries to enlist it in MSDTC. Enlisting the very first transaction per connection in MSDTC takes time (for me it takes 30+ seconds); consecutive transactions are fast, however (in my case, 60 ms). Take a look at this: http://support.microsoft.com/Default.aspx?id=922430
What you can do is reuse the transaction and connection string (if possible) when you create the new DataContext:
TransactionLogDataContext tempDataContext =
    new TransactionLogDataContext(ExistingDataContext.Transaction.Connection);
tempDataContext.Transaction = ExistingDataContext.Transaction;
Where ExistingDataContext is the one which started the ambient transaction.
Or attempt to speed up your MS DTC.
Also, do use the SQL Profiler as billb suggested, and look at the SessionId across the different commands (save and savelog in your case). If the SessionId changes, you are in fact using two different connections, and in that case you will have to reuse the transaction (if you don't want it to be promoted to MS DTC).
I am writing a stored procedure that when completed will be used to scan staging tables for bogus data on a column by column basis.
Step one in the exercise was just to scan the table, which is what the code below does. The issue is that this code runs in 5 minutes 45 seconds as a stored procedure; the same code run as a console app (changing the connection string, of course) runs in about 44 seconds, which is closer to what I was expecting on the client side.
using (SqlConnection sqlConnection = new SqlConnection("context connection=true"))
{
    sqlConnection.Open();
    string sqlText = string.Format("select * from {0}", source_table.Value);
    int count = 0;
    using (SqlCommand sqlCommand = new SqlCommand(sqlText, sqlConnection))
    using (SqlDataReader reader = sqlCommand.ExecuteReader())
    {
        while (reader.Read())
            count++;
        SqlDataRecord record = new SqlDataRecord(new SqlMetaData("rowcount", SqlDbType.Int));
        SqlContext.Pipe.SendResultsStart(record);
        record.SetInt32(0, count);
        SqlContext.Pipe.SendResultsRow(record);
        SqlContext.Pipe.SendResultsEnd();
    }
}
What am I missing on the SP side that would cause it to run so slowly?
Please note: I fully understand that if I wanted a count of rows, I should use the count(*) aggregate; that's not the purpose of this exercise.
The type of code you are writing is highly susceptible to SQL injection. Rather than processing the reader like you are, you could just use the RecordsAffected property to find the number of rows in the reader.
EDIT:
After doing some research, I found that the difference you are seeing is a by-design difference between the context connection and a regular connection. Peter Debetta blogged about this and writes:
"The context connection is written such that it only fetches a row at a time, so for each of the 20 million some odd rows, the code was asking for each row individually. Using a non-context connection, however, it requests 8K worth of rows at a time."
http://sqlblog.com/blogs/peter_debetta/archive/2006/07/21/context-connection-is-slow.aspx
Well it would seem the answer is in the connection string after all.
context connection=true
versus
server=(local); database=foo; integrated security=true
For some bizarre reason, using the "external" connection, the SP runs almost as fast as the console app (still not as fast, mind you! -- 55 seconds).
Of course, now the assembly has to be deployed as External rather than Safe, and that introduces more frustration.