SQL Azure Database retry logic

I have implemented the following code to handle INSERT/UPDATE retry logic with exponential backoff when writing to an Azure SQL Database:
static SqlConnection TryOpen(this SqlConnection connection)
{
    int attempts = 0;
    while (attempts < 5)
    {
        try
        {
            // Exponential backoff: wait 3^attempts seconds before each retry
            if (attempts > 0)
                System.Threading.Thread.Sleep(((int)Math.Pow(3, attempts)) * 1000);

            connection.Open();
            return connection;
        }
        catch { } // swallow the failure and retry
        attempts++;
    }
    throw new Exception("Unable to obtain a connection to SQL Server or SQL Azure.");
}
However, should I apply retry logic to my database reads as well, or would setting the SqlCommand.CommandTimeout property suffice? Most of my reads are issued using the following code:
Dim myDataAdapter As New SqlDataAdapter(mySqlCommand)
Dim ds As New DataSet
myDataAdapter.Fill(ds, "dtName")
It's hard to know what sort of transient errors will occur in a production environment with Azure so I am trying to do as much mitigation as possible now.

I think retries are going to be part of your Windows Azure SQL Database operations in general.
Rather than implementing a custom solution, have you looked at the transient fault handling application block published by Microsoft Patterns and Practices, specifically for SQL Database?
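For example, here is a minimal sketch using that block (the retry count and back-off values are illustrative, and connectionString is assumed to be defined elsewhere):

// Retry up to 5 times, backing off exponentially between attempts
var strategy = new ExponentialBackoff(5,
    TimeSpan.FromSeconds(1),   // minimum back-off
    TimeSpan.FromSeconds(30),  // maximum back-off
    TimeSpan.FromSeconds(2));  // delta back-off

// Retries only errors the block classifies as transient for SQL Database
var policy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(strategy);

policy.ExecuteAction(() =>
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        // issue the INSERT/UPDATE here
    }
});

The same policy can also wrap a read, e.g. a SqlDataAdapter.Fill call, which addresses the read side of your question.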

Connection failures in SQL Azure are common. Your application builds up a connection pool, but while your side considers those connections open, Azure may terminate them at its end without your application ever knowing about it.
They do this for valid reasons, for example when a particular instance has become overloaded and connections are being transferred to another one. With in-house SQL Server instances you generally never see this problem, because your servers are always available and dedicated to your use.
As an example, I get about 5 connection failures with SQL Azure on about 100,000 database queries in a day.
It's going to happen with SQL Azure. If you are using ADO.NET then David's suggestion of transient fault handling is the way to go.
If you are going to use Entity Framework, there is good news and bad news: Transient Fault Handling with SQL Azure using Entity Framework
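(For what it's worth, EF6 later added built-in connection resiliency, which covers much of this automatically. A sketch of that registration, where the configuration class name is an assumption:)

using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6+: register an execution strategy that automatically retries
// operations that fail with transient SQL Database errors
public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration()
    {
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy());
    }
}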

I have implemented SqlConnection and SqlCommand extension methods providing retry logic. It is available on NuGet.

Related

Raw SQL with EF Core and in-memory DB provider

One of my API routes uses a raw SQL MERGE INTO command in order to do an atomic upsert operation, and in my automated tests I've got a TestServer instance that uses the in-memory DB provider. It gives me an error, probably because the in-memory provider doesn't support running raw SQL commands - is that true? If not, how do I get it to work?
Here's the Startup class for the tests:
// In memory DB for testing
services.AddDbContext<MyContext>(optionsBuilder => optionsBuilder.UseInMemoryDatabase("stuff"));
services.AddDbContext<MyStatusContext>(optionsBuilder => optionsBuilder.UseInMemoryDatabase("status"));
services.AddDbContext<MyUserRolesContext>(optionsBuilder => optionsBuilder.UseInMemoryDatabase("userroles"));
And the API code is as you'd expect:
var count = await context.Database.ExecuteSqlCommandAsync(@"merge into ...", default(CancellationToken), ...);
return count;
This code works fine in production against a real database, I just can't get it working with the in-memory provider in my tests. Is there any hope for me? What's the usual test strategy for custom sql scripts?
There is no hope for you, as the InMemory provider is a NoSQL, non-relational provider. You should use SQL Server (for example, LocalDB) for integration testing.
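A minimal sketch of pointing the test Startup at LocalDB instead (the database name and connection string are assumptions):

// Swap the in-memory registration for a throwaway LocalDB database
// so relational features such as MERGE work in integration tests
services.AddDbContext<MyContext>(optionsBuilder =>
    optionsBuilder.UseSqlServer(
        @"Server=(localdb)\mssqllocaldb;Database=StuffTests;Trusted_Connection=True"));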
As you've discovered, the in-memory provider can't do relational operations (a reasonable limitation).
I had similar problems and ended up putting together a library to extend the in-memory provider to support relational operations - EntityFrameworkCore.Testing. It'll do the ExecuteSqlCommand/ExecuteSqlCommandAsync mocking.

SQL connection pooling in Azure Functions

In traditional webservers you would have a SQL connection pool and persistent connection to the database.
But I am thinking of creating my entire application as Azure Functions.
Will the functions create a new connection to the SQL server every time they are called?
Azure Functions doesn't currently have SQL as an option for an input or output binding, so you'd need to use the SqlClient classes directly to make your connections and issue your queries.
As long as you follow best practices of disposing your SQL connections (see this for example: C# SQLConnection pooling), you should get pooling by default.
Here's a full example of inserting records into SQL from a function: https://www.codeproject.com/articles/1110663/azure-functions-tutorial-sql-database
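A minimal sketch of that pattern (the trigger, connection string name, and table are assumptions):

// Dispose the SqlConnection on every invocation; disposing returns the
// underlying physical connection to ADO.NET's pool, so warm invocations reuse it
public static async Task Run(TimerInfo myTimer, TraceWriter log)
{
    var connStr = ConfigurationManager.ConnectionStrings["SqlDb"].ConnectionString;

    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Items", conn))
    {
        await conn.OpenAsync();
        var count = (int)await cmd.ExecuteScalarAsync();
        log.Info($"Items: {count}");
    }
}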
Although this is already answered, I believe this answer can provide more information.
If you are not using a connection pool, you are probably creating a new connection every time the function is invoked. Creating a connection has an associated cost, so for warmed-up instances it is recommended to rely on the pool. The maximum number of connections should also be chosen cautiously, since several function apps can be running in parallel (depending on your plan).
Here is an example of configuring the connection pool.
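Pooling is on by default for SqlClient; the knobs live in the connection string. A sketch with illustrative values (server, database, and credentials are placeholders):

var builder = new SqlConnectionStringBuilder
{
    DataSource = "myserver.database.windows.net",
    InitialCatalog = "mydb",
    Pooling = true,     // the default
    MinPoolSize = 1,    // keep one connection warm
    MaxPoolSize = 50    // stay below the database's connection limit, remembering
                        // several function instances may run in parallel
};

using (var conn = new SqlConnection(builder.ConnectionString))
{
    conn.Open(); // served from the pool when a pooled connection is free
}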

EF and TransactionScope for both SQL Server and Oracle without escalating/spanning to DTC?

Can anyone update me on this topic?
I want to support both SQL Server and Oracle in my application.
Is it possible to have the following code (in BL) working for both SQL Server and Oracle without escalating/spanning to distributed transactions (DTC) ?
// dbcontext is created before; the same dbcontext will be used by both repositories
using (var ts = new TransactionScope())
{
    // create order - makes use of dbcontext, possibly calling SaveChanges here
    orderRepository.CreateOrder(order);

    // update inventory - makes use of the same dbcontext, possibly calling SaveChanges here
    inventoryRepository.UpdateInventory(inventory);

    ts.Complete();
}
As of today, the end of August 2013, I understand that this works for SQL Server 2008+ ... but what about Oracle? I found this thread... it looks like Oracle promotes to distributed transactions, but it is still not clear to me.
Does anyone have experience with writing apps to support both SQL Server and Oracle with Entity Framework to enlighten me?
Thanks!
Update: I finally noticed that EF6 comes with Improved Transaction Support. This, in addition to Remus's recommendations, could be the solution for me.
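For example, EF6's Database.BeginTransaction gives a single local transaction over one connection, something along these lines (context name assumed, repositories as in the snippet above):

using (var context = new MyDbContext())
using (var tx = context.Database.BeginTransaction())
{
    orderRepository.CreateOrder(order);              // repositories work against
    inventoryRepository.UpdateInventory(inventory);  // this same context
    tx.Commit(); // one connection, one local transaction, no DTC
}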
First: never use var ts = new TransactionScope(). It is the one-liner that kills your app. Always use the explicit constructor that lets you specify the isolation level. See using new TransactionScope() Considered Harmful.
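A minimal sketch of the explicit form:

// Spell out the isolation level instead of accepting the Serializable default
var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TransactionManager.DefaultTimeout
};

using (var ts = new TransactionScope(TransactionScopeOption.Required, options))
{
    // work...
    ts.Complete();
}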
Now, about your question: the logic for not promoting two connections in the same scope to DTC relies heavily on the driver/provider cooperating to inform System.Transactions that the two distinct connections can manage the transaction just fine on their own, because the resource manager involved is the same. SqlClient for SQL Server 2008 and later is a driver that is capable of this logic. The Oracle driver you use is not (and I'm not aware of any version that is, btw).
Ultimately it is really basic: if you do not want a DTC, do not create one! Make sure you use exactly one connection in the scope. It is clearly arguable that you do not need two connections. In other words, get rid of the two separate repositories in your data model. Use a single repository for Orders, Inventory, and whatever else. You are shooting yourself in the foot with them and asking for pixie-dust solutions.
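A minimal sketch of that single-connection shape, assuming both repositories can share one EF context (names carried over from the question):

using (var ts = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted }))
using (var context = new MyDbContext())
{
    context.Database.Connection.Open(); // one physical connection, enlisted once

    orderRepository.CreateOrder(order);              // both repositories use
    inventoryRepository.UpdateInventory(inventory);  // this same context

    ts.Complete();
}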
Update: Oracle driver 12c r1:
"Transaction and connection association: ODP.NET connections, by default, detach from transactions only when connection objects are closed or transaction objects are disposed"
Nope, DTC is needed for distributed transactions - and something spanning two different database technologies like this is a distributed transaction. Sorry!

SQLite, open one permanent connection or not?

I have been under the understanding that database connections are best opened, used, and closed. However, with SQLite I'm not sure that this applies. I do all my queries inside a Using block for the connection, so I open a connection and then close it each time. When it comes to SQLite and optimal usage, is it better to open one permanent connection for the duration of the program, or should I continue with the method I currently use?
I am using the database in a VB.NET Windows program with a fairly large DB of about 2 GB.
An example of my current method of connecting:
Using oMainQueryR As New SQLite.SQLiteCommand
    oMainQueryR.CommandText = "SELECT * FROM CRD"
    Using connection As New SQLite.SQLiteConnection(connectionString)
        Using oDataSQL As New SQLite.SQLiteDataAdapter
            oMainQueryR.Connection = connection
            oDataSQL.SelectCommand = oMainQueryR
            connection.Open()
            oDataSQL.FillSchema(crd, SchemaType.Source)
            oDataSQL.Fill(crd)
            connection.Close() ' redundant: End Using disposes (and closes) the connection
        End Using
    End Using
End Using
As with all things database, it depends. In this specific case of sqlite, there are two "depends" you need to look at:
Are you the only user of the database?
When are implicit transactions committed?
For the first item, you probably want to open and close different connections frequently if there are other users of the database, or if it's at all possible that more than one process will be hitting your SQLite database file at the same time.
For the second item, I'm not sure how SQLite specifically behaves. Some database engines don't commit implicit transactions until the connection is closed. If this is the case for SQLite, you probably want to close your connection a little more often.
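If you want to take that variable out of the picture, wrap the work in an explicit transaction and commit it yourself, so nothing rides on when the connection closes. A C# sketch (the column names are hypothetical):

using (var connection = new SQLiteConnection(connectionString))
{
    connection.Open();
    using (var tx = connection.BeginTransaction())
    using (var cmd = new SQLiteCommand(
        "UPDATE CRD SET Qty = Qty + 1 WHERE Id = 1", connection, tx))
    {
        cmd.ExecuteNonQuery();
        tx.Commit(); // committed here, not at connection close
    }
}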
The idea that connections should be short-lived in .NET applies mainly to Microsoft SQL Server, because the .NET provider for SQL Server is able to take advantage of connection pooling. Outside of SQL Server this advice is not entirely without merit, but it is not as much of a given.
If it is a local application being used by only one user, I think it is fine to keep one connection open for the life of the application.
I think with most databases the "best used and closed" idea comes from the perspective of saving memory by ensuring you only have the minimum number of connections open.
In reality, opening a connection can involve a large amount of overhead and should be done only when needed. This is why managed server infrastructure (WebLogic, etc.) promotes the use of connection pooling. In this way you have N connections that are usable at any given time. You never "waste" resources, but you also aren't left with the responsibility of managing them at a global level.

NHibernate + Sql Compact + IoC - Connection Management

When working with NHibernate and SQL Compact in a Windows Forms application, I am wondering what the best practice is for managing connections. With SQL CE I have read that you should keep your connection open, rather than closing it as you typically would with standard SQL Server. If that is the case, and you're using an IoC container, would you give your repositories a singleton lifetime so they exist forever, or dispose of them after each unit of work?
Also, is there a way to determine the number of connections open to SQL CE?
In my DAL or DataService, which lives for the entire lifetime of the app, I'd create and hold open a connection to the database and then let the ORM do whatever it wants for its own connection management. I would only do this in a Compact Framework app, though, where the speed of building up and tearing down a connection for each query might make a difference.
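A sketch of that shape with NHibernate and SQL CE (sessionFactory and connectionString are assumed to be built at startup; older NHibernate versions expose OpenSession(IDbConnection), newer ones use sessionFactory.WithOptions().Connection(conn).OpenSession()):

// Hold one SqlCeConnection open for the life of the app and hand it to
// each session, so the connection is never torn down between units of work
var connection = new SqlCeConnection(connectionString);
connection.Open();

using (var session = sessionFactory.OpenSession(connection))
using (var tx = session.BeginTransaction())
{
    // ... unit of work against the repositories ...
    tx.Commit();
}
// the SqlCeConnection itself stays open until application shutdown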