Async=true and Entity Framework - wcf

Background: WCF stack, data access implemented in Entity Framework, simple ASP.NET front end.
This is a two-part question.
Recently we ran into an issue with periodic crashes with an exception that read:
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The specified network name is no longer available.)
We had been running our application without issues for over a week, and then all of a sudden we were hit with this random crash. If I had to guess, I would say it was network-related, but we were unable to determine the exact source. Has anyone periodically gotten this message? If so, what was the root cause?
The second question: someone suggested setting "async=true" in our Entity Framework connection string. I was under the impression this just enables the async API. Does it do anything when you are using EF? Does switching this flag change the queries that EF generates?
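For reference, the flag they suggested lives in the provider portion of the EF connection string ("async" is the classic System.Data.SqlClient shorthand for "Asynchronous Processing"); a rough example with placeholder server and database names:

Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=True;Asynchronous Processing=True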

To be that guy, I will answer this one on my own.
First, I posted the question about the effect of "async=true" on Entity Framework to Microsoft and no one answered... as usual (if they answer, I will update this post).
Our issue:
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The specified network name is no longer available.)
was environment-related. Something was causing the DB to run a little slower, but it was hinting at a larger issue. Apparently EF has horrible issues when you share a context between threads (not an easy problem to solve), so we were seeing a race condition when opening connections.
We basically had a "read-only context" that only did gets. Our issue was two threads attempting to open the connection at the same time; one wins, the other loses, resulting in some variation of the exception below:
The connection was not closed. The connection's current state is connecting.
Our solution was to convert our singleton context to be thread-specific. Not exactly what we wanted, but it worked, and when we pushed this fix our other issue magically went away.
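A rough sketch of what "thread-specific" can look like, assuming a small provider wrapper (the class name and MyDbContext are stand-ins for the real types):

using System.Threading;

// Each thread gets its own context instead of all threads sharing one singleton instance.
public static class ReadOnlyContextProvider
{
    private static readonly ThreadLocal<MyDbContext> _context =
        new ThreadLocal<MyDbContext>(() => new MyDbContext());

    public static MyDbContext Current
    {
        get { return _context.Value; }
    }
}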
The second half of this question was what async=true actually does. When it comes to EF, it made our system crash. We had a block of code that did a join, and with async=true and MARS=false we got:
There is already an open DataReader associated with this Command which must be closed first
Once we cut back on MARS and disabled async, things were good again.

Related

The wait operation timed out. .aspx

I created an internal website for our company. It ran smoothly for several months, and then I added more items to the website. When I ran it live, it ran normally. Then suddenly one of my users from another server started sending me "The wait operation timed out." errors. When I access that particular link, it runs normally for me and for some others who I asked to check whether they could access that page. I already increased the connection timeout, but still no luck. Does the error come from the other server? Can someone explain the possible causes?
This is how another plant experienced it: every time they first open the website, the error screen shows up, but when they refresh it, they can use the website. I don't know why this happens. I need your help.
Below is the error detail:
Exception Details: System.ComponentModel.Win32Exception: The wait operation timed out
Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
Thanks in advance
The fact that this happens for one user but not for the testers implies it may occur when the system is under load; database timeouts are pretty common for queries running under stress if the database has been set up "out of the box" without tuning.
I would suggest referring to
The wait operation timed out. ASP
I don't have enough information to troubleshoot this properly, since I don't know which DBMS you are working with. But as a rule this seems to happen because a call to the database is timing out. In SQL Server, increasing the CommandTimeout (NOT the connection timeout) is one of the quick-and-dirty ways to solve the problem.
In SQL Server, CommandTimeout is the time allowed for an operation to complete before exiting with a timeout error. ConnectionTimeout, by contrast, is the time the system waits when trying to open an initial connection to the database. Changing ConnectionTimeout won't help with an operation that times out, but CommandTimeout will.
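For example, in ADO.NET the command timeout is set on the command rather than in the connection string; a minimal sketch (the query, table name, and 120-second value are placeholders):

using System.Data.SqlClient;

// Plain ADO.NET: CommandTimeout is a property of the command, not of the connection string.
static int CountOrders(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn))
    {
        cmd.CommandTimeout = 120;   // seconds allowed for the query itself
        conn.Open();
        return (int)cmd.ExecuteScalar();
    }
}

// Entity Framework 6 exposes the same knob on the context:
// ctx.Database.CommandTimeout = 120;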
Other DBMS systems will have other mechanisms for resolving timeout issues.
That's one quick-and-dirty solution. The longer solution is to add more logging to your system to identify which calls are timing out, then do some DBA work to optimize the query and database performance. My understanding is that Entity Framework also has tuning options for its automatically generated queries, but exactly what those are depends on which version you're using!
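If you happen to be on Entity Framework 6, one low-effort way to see which calls are slow is its built-in SQL logging hook; a rough sketch (MyDbContext stands in for your actual context type, and where you send the output is up to you):

using (var ctx = new MyDbContext())
{
    // EF6: logs every generated SQL statement, with timing information on completion.
    ctx.Database.Log = message => System.Diagnostics.Trace.WriteLine(message);
    // ... run the suspect code paths here and review the trace output ...
}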

Unhandled Exception Error - Login Failed for User

We have a strange error here. In our ASP.NET 4.6 app, using Entity Framework 6.2, we are getting "Login failed for user" when accessing the SQL Azure database. I'm pretty sure the cause of the error is switching tiers in Azure. What I don't get is why the error isn't caught. Every SQL operation we have is inside a try...catch block. The errors fall out of the block and get caught by Global.asax just before the app crashes.
We have
SetExecutionStrategy("System.Data.SqlClient", Function() New SqlServer.SqlAzureExecutionStrategy(10, TimeSpan.FromSeconds(7)))
which, as I understand it, will retry any SQL execution 10 times for at least 70 seconds from the first error. According to Microsoft tech support, this isn't engaged because it hasn't made the connection to SQL Azure yet. The ConnectRetryCount and interval in the connection string do not apply either, since it is already talking to the server. The server is just saying, "I know you are there, but I'm not going to let you in!"
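For completeness, that strategy is registered through EF6's code-based configuration; a minimal C# sketch of the equivalent setup (our actual code is VB, and the class name here is illustrative):

using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6 picks this up automatically when it lives in the same assembly as the context.
public class AppDbConfiguration : DbConfiguration
{
    public AppDbConfiguration()
    {
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(10, TimeSpan.FromSeconds(7)));
    }
}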
According to MS tech support, the only way around this is to have a try...catch block around all of our SQL commands... which we do! It just falls through and crashes the app!
I can't do a retry in Global.asax because at that point, it has already crashed.
According to MS, there is no way to trap the error in the context and retry from there. So, what's the solution? There must be some answer other than, "just let the app crash and have them refresh the page!"
When the page is refreshed seconds later, all is fine. No errors, no problems.
Example of one of the lines of code throwing the error:
MapTo = ctx.BrowserMaps.FirstOrDefault(Function(x) code.Contains(x.NameOrUserAgent))
It's really very straightforward. This one just happens to come up a lot because this code block is called frequently. The actual SQL request is irrelevant, because no matter what line is used, the connection within EF fails.
Server logins will be disconnected while scaling up/down to a new tier, and transactions are rolled back. However, contained database logins stay connected during the scaling process, and for that reason they are recommended over server logins.
Having a try...catch may not solve the issue because you may be capturing error #0, and a lot of errors in Azure SQL Database fall into that error 0 category.
Just a comment: performance may be poor right after scaling and improves after a few minutes. Query plans may also change.
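As a stopgap while the root cause is sorted out, one option is an application-level retry around the failing call; a rough C# sketch (the helper name, retry count, delays, and the choice to catch SqlException are all illustrative):

using System;
using System.Data.SqlClient;
using System.Threading;

static T ExecuteWithRetry<T>(Func<T> operation, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try { return operation(); }
        catch (SqlException) when (attempt < maxAttempts)
        {
            // Back off briefly before retrying; since a refresh "seconds later" already
            // succeeds, a short delay is usually enough here.
            Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));
        }
    }
}

// Usage, mirroring the line above (shown in C# for illustration):
// MapTo = ExecuteWithRetry(() => ctx.BrowserMaps.FirstOrDefault(x => code.Contains(x.NameOrUserAgent)));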

Error 40 and SqlAzureExecutionStrategy

I have a Service Fabric service (guest executable), using Entity Framework Core, talking to SQL Azure.
From time to time I see the following error:
A network-related or instance-specific error occurred while establishing a connection
to SQL Server. The server was not found or was not accessible. Verify that the
instance name is correct and that SQL Server is configured to allow remote connections.
(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
It seems transient as there are numerous database transactions that occur without errors. This seems to occur more when a node is busy.
I've added code in startup to call EnableRetryOnFailure, which sets up the SqlServerRetryingExecutionStrategy:
services.AddEntityFrameworkSqlServer()
    .AddDbContext<MyDbContext>(options =>
        options.UseSqlServer(_configuration.GetConnectionString("MyDbConnection"),
            o => o.EnableRetryOnFailure()));
One major caveat: at the moment I'm losing context, so I don't know what data was being updated/inserted, and therefore whether the operation eventually succeeded or not.
A couple of questions:
From the Transient Detection Code it doesn't look like error 40 is caught, but my understanding is that error 40 may actually be another error (unclear). Is that correct?
Is this really a transient issue, or does it mean I have another problem?
Without additional logging (working on it), do we know if the retry strategy logs the error but still retries, and in fact may have been successful?
If this is a transient error, but it's not caught in the default execution strategy, why not? And what would be the unintended consequences of subclassing the SqlAzureExecutionStrategy to include this error? (A rough sketch of an alternative I'm considering is below.)
I've seen this question: Sql Connection Error 40 Under Load, and it feels familiar, but he seems to have resolved it by tuning his database. I will look at doing that, but I'm also trying to make my code more resilient in the face of database issues.
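One direction I'm considering, instead of subclassing the strategy, is the EnableRetryOnFailure overload that takes extra error numbers to treat as transient; a rough sketch (the retry count, delay, and the decision to add error 40 are just placeholders):

services.AddEntityFrameworkSqlServer()
    .AddDbContext<MyDbContext>(options =>
        options.UseSqlServer(_configuration.GetConnectionString("MyDbConnection"),
            o => o.EnableRetryOnFailure(
                maxRetryCount: 6,
                maxRetryDelay: TimeSpan.FromSeconds(30),
                errorNumbersToAdd: new[] { 40 })));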
There is a certain version of EF Core that caches the query or request if the time span between two database transactions is very small, so update your packages to make sure you are using the most recent release.
Query: Threading issues cause NullReferenceException in SimpleNullableDependentKeyValueFactory #5456
Check these other links:
https://github.com/aspnet/EntityFramework/issues/5456
https://github.com/aspnet/Security/issues/739
https://github.com/aspnet/EntityFramework/issues/6347
https://github.com/aspnet/EntityFramework/issues/4120

SQL Compact lock timeout on __SysObjects

I'm using SQL Compact 3.5 SP2. My application is multi-threaded, but it does not share connections across threads. Instead, I use a custom object pool to ensure that each thread gets its own connection. That said, it's possible that a connection might be re-used on different threads at different times... in other words, I'm assuming that the connections don't have thread affinity. Also, not sure if it matters, but I'm using Entity Framework in .NET 3.5 SP1.
Anyway, when I've got high load situations (8+ threads), I'm getting lock timeout exceptions (regardless of the length of the timeout setting), and the exception always says the lock was on the __SysObjects table.
I'm not doing any DDL, so I don't understand why I would get locking timeouts on that table. Ideas?
I somewhat resolved this issue by making sure that my connections were closed after each use (as opposed to pooling the open connections), but if I let the code run for a long period of time I started getting OutOfMemoryException and AccessViolationException errors.
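"Closed after each use" ended up looking roughly like the standard open-use-dispose pattern; a minimal sketch (the file name and query are placeholders):

using System.Data.SqlServerCe;

// Open, use, and dispose the connection within a single scope per operation,
// instead of keeping opened connections in a pool.
using (var conn = new SqlCeConnection("Data Source=MyData.sdf"))
using (var cmd = new SqlCeCommand("SELECT COUNT(*) FROM Orders", conn))
{
    conn.Open();
    var count = (int)cmd.ExecuteScalar();
}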
This smells like the SqlCeConnection class has some kind of thread affinity dependency. Either that, or it has a memory leak of some kind.
At any rate, I've given up on trying to pool these objects.
EDIT: This actually appears to be an issue addressed by Cumulative Update 2. Since updating my references to the new libs, I haven't seen this problem. See: http://support.microsoft.com/kb/983516

How to find unclosed connection? Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding

I've had this problem before and found that basically I've got a connection that I'm not closing quickly enough (leaving connections open and waiting for garbage collection isn't really a best practice).
Now I'm getting it again, but I can't seem to find where I'm leaving my connections open. By the time I see the error, the database has cleared out the old connections, so I can't see each locked-up connection's last command (which was very helpful the last time I had this issue).
Any idea how I could instrument my code or database to track what's going on so I can find my offending piece of code?
The error you are providing doesn't really point to a connection that is left open; it is more likely that there is a query taking longer than the application expects.
You can increase the time it waits for a response, and you could use SQL Server's tools to find which queries are the most taxing.
Hopefully you have one data access layer class, instead of a whole bunch of classes, each one creating its own connection, right? What language are you using? If you're using C#, the biggest cause of this problem is DataReaders and returning these objects to the upper layers. Most likely some client class is not closing the DataReader it received from your DAL class, leaving the connection open/locked for who knows how long. Track down the DataReaders you're returning and make sure your client classes are closing/disposing of them properly.
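A rough sketch of the pattern that avoids this, assuming the DAL hands a reader back to the caller (method, table, and column names are placeholders): open the reader with CommandBehavior.CloseConnection so disposing the reader also closes the underlying connection.

using System.Data;
using System.Data.SqlClient;

// In the DAL: tie the connection's lifetime to the reader's.
public static SqlDataReader GetCustomers(string connectionString)
{
    var conn = new SqlConnection(connectionString);
    var cmd = new SqlCommand("SELECT Id, Name FROM dbo.Customers", conn);
    conn.Open();
    return cmd.ExecuteReader(CommandBehavior.CloseConnection);
}

// In the caller: disposing the reader now also closes the connection.
// using (var reader = GetCustomers(connectionString))
// {
//     while (reader.Read()) { /* ... */ }
// }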
I'd also start thinking about redesigning your data access layer by implementing the IDisposable pattern and possibly returning POCOs instead of Data objects (...Tables, ...Sets, ...Readers).