Making WCF requests in a loop results in a Timeout Exception - wcf

I have two databases: one offline and one online.
On the system where the offline DB resides, I've created a scheduled application that pulls records from the offline DB and pushes them to the online DB using WCF and Entity Framework.
The application pulls a batch of 10 records at a time and then pushes them.
Most of the time there is a large number of records in the offline DB that need to be moved to the online DB.
So a loop executes that:
1. Pulls a batch of records from the offline DB.
2. Pushes them to the WCF service.
3. The WCF service calls the DAL layer, and those records are inserted into the online DB.
4. After the request completes, marks that batch of records as uploaded in the offline DB.
It runs fine a couple of times, then it gives this error:
{"Connection Timeout Expired. The timeout period elapsed during the post-login phase. The connection could have timed out while waiting for server to complete the login process and respond; Or it could have timed out while attempting to create multiple active connections. The duration spent while attempting to connect to this server was - [Pre-Login] initialization=3977; handshake=7725; [Login] initialization=0; authentication=0; [Post-Login] complete=3019; "}
Why does this happen and how do I resolve this?

In point 3 you mentioned:
The WCF service calls the DAL layer, and those records are inserted into the online DB.
How are you inserting records into your database?
You must dispose of your context so that its connection is released and available for the next request you are about to make.
I'd prefer a "using" statement, something like this:
using (var context = new YourContext())
{
    context.methodThatIsUsedToInsertRecords();
}
Try this
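For example, on the service/DAL side the per-batch insert could look roughly like this (a sketch only; OnlineEntities, UploadedRecords and Record are placeholder names, not your actual types):

using System.Collections.Generic;

public class OnlineRecordWriter
{
    // One short-lived context (and DB connection) per batch.
    public void InsertBatch(IEnumerable<Record> records)
    {
        using (var context = new OnlineEntities())       // placeholder context type
        {
            foreach (var record in records)
            {
                context.UploadedRecords.Add(record);     // placeholder DbSet
            }
            context.SaveChanges();                       // one commit per batch
        }   // disposed here: the connection goes back to the pool before the next batch
    }
}

That way each batched request gets a fresh context and releases its connection as soon as the insert finishes, instead of holding connections open across iterations of the loop.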

Related

EF6 Entity framework sometimes slow query on SQL Server

We have started a new project in .NET Core. We are just starting out and we are hitting a Web API endpoint to get some reference data of 8 records.
We noticed in our Angular screen that periodically (about every 10 requests) the EF query takes about 6 to 15 seconds to run, rather than the 30 ms it normally takes.
On debugging we know that we get right up to the .ToListAsync(), and then in SQL Profiler we can see the query start, followed by the delay.
So first impressions say it's a SQL issue, but if we run the same query manually against SQL Server it never delays.
Any ideas?
This might have to do with the pooling setup of EF Core. It should not pay the full setup cost on each request to the DB; enable DbContext pooling by adding this to your dependency injection configuration:
builder.Services.AddPooledDbContextFactory<DbContext>(
    o => o.UseSqlServer(builder.Configuration.GetConnectionString("AppContext")));
Reference: https://learn.microsoft.com/en-us/ef/core/performance/advanced-performance-topics?tabs=with-di%2Cwith-constant
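With the pooled factory registered, the consuming code resolves IDbContextFactory<TContext> and creates a short-lived context per operation. A minimal sketch, assuming you register your own derived context type (AppDbContext here) with an Items DbSet, both names illustrative:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class ReferenceDataService
{
    private readonly IDbContextFactory<AppDbContext> _contextFactory;

    public ReferenceDataService(IDbContextFactory<AppDbContext> contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public async Task<List<Item>> GetReferenceDataAsync()
    {
        // CreateDbContext hands back a pooled instance; disposing returns it to the pool.
        await using var context = _contextFactory.CreateDbContext();
        return await context.Items.ToListAsync();
    }
}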

What is considered a normal number of concurrent logins for Azure SQL Database?

I'm new to Azure SQL Database, as this is my first project migrating from an on-premises setup to everything on Azure. The first thing that concerns me is that there is a limit on concurrent logins to the Azure SQL Database, and if that number is exceeded, it starts dropping subsequent requests. For the current service tier (S0), it caps at 60 concurrent logins, which I have already hit multiple times, judging by a few SQL failures in my application log.
So my question is:
Is it normal to exceed that number of concurrent logins? I'd like to get an idea of whether my application has some issue or my current service tier is too low.
I've looked into our database code to make sure we are not leaving database connections open. We use Enterprise Library; every use of DbCommand and IDataReader is wrapped in a using block, so they get disposed once they go out of scope.
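The pattern is roughly the following (a sketch with illustrative names; depending on the Enterprise Library version the Database instance may be created differently):

using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

public class CustomerRepository
{
    public void ReadCustomers()
    {
        Database db = DatabaseFactory.CreateDatabase();   // default connection string

        using (DbCommand command = db.GetStoredProcCommand("dbo.GetCustomers"))
        using (IDataReader reader = db.ExecuteReader(command))
        {
            while (reader.Read())
            {
                // map rows here
            }
        }   // reader and command disposed; the connection is returned to the pool
    }
}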
Note: my web application consists of a front-end app with multiple web services supporting the underlying features, and each service connects to the same database for a specific collection of data. That makes me think hitting 60 concurrent logins might be normal, since a page or an action might involve multiple calls behind the scenes (and thus multiple connections to the database from a few APIs), and if there is more than one user on the application, then 60 is really easy to reach.
Again, in the past with the on-premises setup, I never noticed this kind of limitation.
Thanks.
To answer your question, the limit is 60 on an S0.
http://gavinstevens.com/2016/11/30/sql-server-vs-azure-sql/

Prevent an ASP.NET page from having to poll a database for a change

I have an ASP.NET web service that leverages a long-lived connection from the client.
The client connects in and waits for 15 minutes for a response.
Just prior to 15 minutes, the ASP.NET web service responds with an OK.
The client repeats this connection establishment.
During the 15 minutes, the web service checks for a change in a field value in a record in a SQL table. If that value changes, it immediately sends a response to the client with ReadMessage. This checking/polling of the database is done every 30 seconds, which has several drawbacks:
It does not scale well. It works fine with 1 or 2 clients, but when you end up with 10,000 client connections that is a lot of polling on the database.
It leads to latency in processing, as it may take up to 30 seconds for the client to be notified.
What I would like is a way of notifying the web service, for the active HTTP client, that the record has been updated.
It should also be noted that each client connection to the web service has its own specific record in the table.
I think SqlDependency is what you are looking for. Query Notifications allow applications to receive a notice when the results of a query have changed.
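A minimal sketch of what that could look like, assuming a Messages table with a Status column keyed by ClientId (all names are illustrative):

using System.Data.SqlClient;

public class MessageWatcher
{
    private readonly string _connectionString;

    public MessageWatcher(string connectionString)
    {
        _connectionString = connectionString;
        SqlDependency.Start(_connectionString);   // once per app domain; Service Broker must be enabled
    }

    public void Watch(int clientId)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Status FROM dbo.Messages WHERE ClientId = @clientId", connection))
        {
            command.Parameters.AddWithValue("@clientId", clientId);

            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once when the row changes: release the waiting HTTP request
                // with ReadMessage, then call Watch again to re-register.
            };

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                // the command must actually execute for the notification to register
            }
        }
    }
}

Note that each notification fires only once per registration, and SqlDependency.Stop should be called when the application shuts down.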
Have you considered setting up some triggers in the db? If you are using SQL Server you can use SQL Server CLR integration.
http://msdn.microsoft.com/en-us/library/ms254963%28v=VS.80%29.aspx
You could put a trigger on the table. Disclaimer: I try to stay away from triggers because it's very easy to write one poorly, and when it errors it's hard to debug. However, I haven't ever written a CLR trigger, and I imagine there's a little more safety in that since you have more control over error handling.
But even better would be to have whatever process updates the table in the first place notify your web service of the change, if that's an option.

BizTalk WCF SQL adapter receiving timeout trying to get a connection from the pool

I have an extremely simple BizTalk orchestration that takes a HIPAA 837 file in, breaks it into its individual claims, and saves the complete xml message to the database. I have a WCF SQL send port that calls a stored procedure to do this... the proc just does an insert with no return value. The problem is that I keep (randomly) getting the timeout error:
Details:"Microsoft.ServiceModel.Channels.Common.InvalidUriException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I just tried this with a small file - only 5 individual claims in it (so I should only need 5 connections from the pool, right?). The BizTalk server has been doing nothing else for the past 10 hours (no messages processed), yet I still received this error... My MaxConnectionPoolSize is set to 100, so does that mean 100 connections have been held open and idle for at least 10 hours? What's going on here?
Thanks.
I would take a look here or here. To be honest, the WCF SQL adapter is very picky and quirky about what SQL it works well with and what it doesn't. I typically look for a custom solution for inserting into SQL, to have more control over the inserts or updates without having to write my SQL specifically for the SQL adapter. I find that if I'm inserting or updating more than one table, or returning complex records, I avoid the WCF SQL adapter.
If that's not an option, look at re-writing your SQL.
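If a custom solution is on the table, the insert itself can be a plain ADO.NET call to the same proc, roughly like this (proc and parameter names are examples only):

using System.Data;
using System.Data.SqlClient;

public static class ClaimWriter
{
    public static void SaveClaimXml(string connectionString, string claimXml)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.InsertClaim", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@ClaimXml", claimXml);

            connection.Open();
            command.ExecuteNonQuery();   // the proc returns nothing
        }   // connection goes back to the pool immediately
    }
}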

WCF: Efficiently consuming large numbers of singleton requests via SQL job?

I'm planning to build a console app to run as part of a SQL 2005 job which will gather records from a database table, create a request object for a WCF service, pass this object to the service for processing, receive a response object, and update a log table with its data. This will be for processing at least several thousand records each time the job step executes.
The WCF service currently exposes a single method which I'd be hitting once for each record in the table, so I imagine I'd want to open a channel to the service, keep it open during processing, then close and dispose of it when complete.
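A rough sketch of that channel-reuse idea, with a placeholder contract and helper methods (IProcessingService, ProcessRecord, LoadPendingRecords, LogResponse, and the endpoint name are illustrations, not the real contract):

using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract]
public interface IProcessingService
{
    [OperationContract]
    string ProcessRecord(string request);   // placeholder request/response shapes
}

class BatchUploader
{
    static void Run()
    {
        // Create the factory once; channels created from it are cheap.
        var factory = new ChannelFactory<IProcessingService>("ProcessingEndpoint");
        IProcessingService channel = factory.CreateChannel();

        try
        {
            foreach (var record in LoadPendingRecords())    // placeholder data access
            {
                var response = channel.ProcessRecord(record);
                LogResponse(response);                      // placeholder logging
            }
            ((IClientChannel)channel).Close();
            factory.Close();
        }
        catch (CommunicationException)
        {
            ((IClientChannel)channel).Abort();              // don't Close a faulted channel
            factory.Abort();
            throw;
        }
    }

    static IEnumerable<string> LoadPendingRecords() { yield break; }   // stub
    static void LogResponse(string response) { }                       // stub
}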
Beyond maintaining the connection, how else can I keep this console app from becoming a performance bottleneck? Should I skip the console app and instead use SQLCLR or some other means to perform this processing?
You've probably considered Service Broker...