We have the following code:
var db = new CoreEntityDB();
var abc = new abcDB();
var connection = new DataStore(db.ConnectionStrings.First(p => p.Name == "Abc").Value, DataStore.Server.SqlServer);
var projects = new List<abc_Employees>();
projects.AddRange(abc.Database.SqlQuery<abc_Employees>("usp_ABC_EmployeeSys"));
The project is failing on the following line:
projects.AddRange(abc.Database.SqlQuery<abc_Employees>("usp_ABC_EmployeeSys"));
And the error says: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding"
Everything was working fine a few days ago, and now nothing works. Nothing has changed either, as far as the code or the SQL stored proc goes.
Has anyone else experienced this before?
Did you try running the SP independently to see if that's the bottleneck?
Is it the command that is timing out?
You can increase the command timeout using:
((IObjectContextAdapter)abc).ObjectContext.CommandTimeout = 180;
You should take a look at your stored procedure. The default timeout is 30 seconds, so it looks like the stored procedure is taking longer than that to return results. Increasing the timeout is just treating the symptoms.
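If you do want a stopgap while you dig into the procedure, here is a minimal sketch of where the timeout change goes relative to the original code (assuming abcDB is an EF6 DbContext, as the question's usage of Database.SqlQuery suggests):
using System.Collections.Generic;
using System.Data.Entity.Infrastructure;

var abc = new abcDB();

// Raise the command timeout (in seconds) before the stored procedure is executed.
((IObjectContextAdapter)abc).ObjectContext.CommandTimeout = 180;

var projects = new List<abc_Employees>();
projects.AddRange(abc.Database.SqlQuery<abc_Employees>("usp_ABC_EmployeeSys"));
Even with the longer timeout, the advice above still applies: whatever changed in the stored procedure's behaviour is what actually needs investigating.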
We have a .NET Core API accessing Azure SQL (Gen5, 4 vCores).
For quite some time now, the API keeps throwing the exception below for a specific READ operation:
Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The READ operation has code to read rows of data and convert an XML column into a specific output format.
Most of the read operations extract hardly 4-5 rows at a time.
The tables involved in the query have ~500,000 rows.
We are clueless about the root cause of this issue.
Any hints on where to start looking for the root cause?
Any pointers would be highly appreciated.
NOTE: The connection string has the following settings, apart from others:
MultipleActiveResultSets=True;Connection Timeout=60
Overall, the code looks something like this.
HINT: The above timeout exception comes at ConvertHistory, when the 2nd table is being read.
[HttpGet]
public async Task<IEnumerable<SalesOrder>> GetNewSalesOrders()
{
var salesOrders = await _db.SalesOrders.Where(o => o.IsImported == false).OrderBy(o => o.ID).ToListAsync();
var orders = new List<SalesOrder>();
foreach (var so in salesOrders)
{
var order = ConvertSalesOrder(so);
orders.Add(order);
}
return orders;
}
private SalesOrder ConvertSalesOrder(SalesOrder o)
{
var newOrder = new SalesOrder();
var oXml = o.XMLContent.LoadFromXMLString<SalesOrder>();
...
newOrder.BusinessUnit = oXml.BusinessUnit;
var history = ConvertHistory(o.ID);
newOrder.history = history;
return newOrder;
}
private SalesOrderHistory[] ConvertHistory(string id)
{
var history = _db.OrderHistory.Where(o => o.ID == id);
...
}
Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
From the Microsoft documentation:
You will get this error under two conditions: a connection timeout, or a query/command timeout. First, identify which one it is from the call stack of the error messages.
If you find it is a connection issue, you can increase the connection timeout parameter. If you are still getting the same error after that, it is likely caused by a network issue.
From the information that you provided, it is a query or command timeout error. To work around this error you can set the CommandTimeout for the query or command:
command.CommandTimeout = 120;
The default timeout value is 30 seconds; if the timeout value is set to 0 (no time limit), the query will continue to run until it is finished.
For more information, refer to the Troubleshoot query time-out errors article provided by Microsoft.
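If you go the command-timeout route here while investigating, EF Core exposes the setting on the DbContext's Database facade. A minimal sketch (assuming _db is the injected DbContext from the question; 120 seconds is just an illustrative value):
// Inside GetNewSalesOrders, before the query runs:
// raise the command timeout (in seconds) for queries issued through this context.
_db.Database.SetCommandTimeout(120);

var salesOrders = await _db.SalesOrders.Where(o => o.IsImported == false).OrderBy(o => o.ID).ToListAsync();
This only buys time; the troubleshooting article mentioned above is still the place to chase down the slow query itself.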
I have a script in LINQPad that looks like this:
var serverMode = EnvironmentType.EWPROD;
var jobToSchedule = JobType.ABC;
var hangfireCs = GetConnectionString(serverMode);
JobStorage.Current = new SqlServerStorage(hangfireCs);
Action<string, string, XElement> createOrReplaceJob =
(jobName, cronExpression, inputPackage) =>
{
RecurringJob.RemoveIfExists(jobName);
RecurringJob.AddOrUpdate(
jobName,
() => new BTR.Evolution.Hangfire.Schedulers.JobInvoker().Invoke(
jobName,
inputPackage,
null,
JobCancellationToken.Null),
cronExpression, TimeZoneInfo.Local);
};
// pseudo code to prepare inputPackage for client ABC...
createOrReplaceJob("ABC.CustomReport.SurveyResults", "0 2 * * *", inputPackage);
JobStorage.Current.GetConnection().GetRecurringJobs().Where( j => j.Id.StartsWith( jobToSchedule.ToString() ) ).Dump( "Scheduled Jobs" );
I have to schedule in both QA and PROD. To do that, I toggle the serverMode variable and run it once for EWPROD and once for EWQA. This all worked fine until recently; unfortunately, I don't know exactly when it changed, because I don't always have to run in both environments.
I did purchase/install LINQPad 7 two days ago to look at some C# 10 features and I'm not sure if that affected it.
But here is the problem/flow:
Run it for EWQA and everything works.
Run it for EWPROD and the script (Hangfire components) seem to run in a mix of QA and PROD.
When I'm running it the 'second time' in EWPROD I've confirmed:
The hangfireCs (connection string) is right (pointing to PROD) and it is assigned to JobStorage.Current
The query at the end of the script, JobStorage.Current.GetConnection().GetRecurringJobs() uses the right connection.
The RecurringJob.* methods inside the createOrReplaceJob Action use the connection from the previous run (i.e. EWQA). If I monitor my QA Hangfire db, I see the job removed and added.
Temporary workaround:
Run it for EWQA and everything works.
Restart LINQPad, or use the 'Cancel and Reset All Queries' option
Run it for EWPROD and now everything works.
So I'm at a loss as to where the issue might lie. I feel like my upgrade/install of LINQPad 7 might be causing problems, but I'm not sure if there is a different way to make the RecurringJob.* static methods use the 'updated' connection string.
Any ideas on why the restart or reset is now needed?
LINQPad - 5.44.02
Hangfire.Core - 1.7.17
Hangfire.SqlServer - 1.7.17
This is caused by your script (or a library that you call) caching something statically, and not cleaning up between executions.
Either clear/dispose the cached objects when you're done (e.g., JobStorage.Current?), or tell LINQPad not to re-use the process between executions by adding Util.NewProcess = true; to your script.
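In the LINQPad case, the second option is a one-line change at the top of the script; Util.NewProcess is LINQPad's built-in switch for discarding the process after each run. A minimal sketch against the script from the question:
// First line of the LINQPad script: run each execution in a fresh process so
// Hangfire's static JobStorage.Current cannot carry over the previous connection.
Util.NewProcess = true;

var serverMode = EnvironmentType.EWPROD;
var hangfireCs = GetConnectionString(serverMode);
JobStorage.Current = new SqlServerStorage(hangfireCs);
The trade-off is a slightly slower start per execution, since nothing is reused between runs.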
I have a SQL job (a 4-step job consisting of SSIS packages) which runs on a daily basis, extracts data from various sources (source1, source2, source3), and then loads the data into the warehouse. Now the job fails at step 1 due to a 'Communication Link failure' with source1.
Is there any way I can set up a retry attempt for the SQL job based on this specific error only?
For example, if I get a 'primary key violation' or some other data-related error, then we should directly get a notification that the job failed; but if the error is 'Communication Link failure', then step 1 should retry.
Any suggestion would be appreciated.
Short answer: No, not with the SQL Agent.
Longer answer: Maybe you can build up some logic where the package checks whether the previous error was the specific error you're looking for and, if so, executes again. Cumbersome, but possible.
You can create an Event Handler for the OnError event with a Script Task which checks for this specific error and executes msdb.dbo.sp_start_job if it occurred. Since I'm not sure of the exact error code you're getting, this only checks the @[System::ErrorDescription] system variable for the specific text, using the StringComparison.CurrentCultureIgnoreCase option to make the match case-insensitive. However, I would strongly recommend finding the exact error code and verifying it via the @[System::ErrorCode] variable instead. I'd also suggest only retrying the job a certain number of times, or within a given time frame, to avoid excessive failures if this issue persists.
string errorMsg = Dts.Variables["System::ErrorDescription"].Value.ToString();
if (errorMsg.IndexOf("Communication Link failure", 0, StringComparison.CurrentCultureIgnoreCase) >= 0)
{
string connString = @"YourConnectionString;";
string startJobCmd = @"EXEC MSDB.DBO.SP_START_JOB N'NameOfJobToRetry';";
using (SqlConnection conn = new SqlConnection(connString))
{
SqlCommand sql = new SqlCommand(startJobCmd, conn);
conn.Open();
sql.ExecuteNonQuery();
}
}
I have an Entity Framework select statement which returns around 50K rows. I am getting this exception for this simple select command:
var db = new DBEntity();
db.CommandTimeout = 350;
var actCosts = (from i in db.Products
where i.productID== productID
select i).ToList();
The database is in Azure. I connected through SSMS to see how long it actually takes to retrieve the rows: it takes 4:30 minutes to bring back all the data. So I set CommandTimeout to 350 seconds, but it didn't work.
Is there any performance difference between the above and this one?
var actCosts = db.Products.Where(t => t.productID== productID).ToList();
First, try running a .FirstOrDefault() to see if it returns data in time:
var actCosts = (from i in db.Products
where i.productID== productID
select i).FirstOrDefault();
If it works, I suggest setting an even larger timeout, like 1000 seconds, and seeing if your results are returned then.
I believe SSMS uses other ways of retrieving the data and is probably better optimized for this than a simple .ToList() method.
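One thing worth checking: if DBEntity is an EF6 DbContext rather than an ObjectContext, the CommandTimeout property assigned in the question may not be the one Entity Framework actually honours; on a DbContext it lives on the Database facade. A minimal sketch, keeping the names from the question:
using (var db = new DBEntity())
{
// On an EF6 DbContext the command timeout (in seconds) is set on the Database facade.
db.Database.CommandTimeout = 350;

var actCosts = db.Products.Where(t => t.productID == productID).ToList();
}
If DBEntity really is an ObjectContext, the original db.CommandTimeout assignment is already the right property, and the timeout value itself is not the problem.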
I am converting my database from SQL Server 2005 to MySQL using ASP.NET MVC.
I have bulk data in SQL Server (400k records), but I am facing a command timeout / waiting-for-CommandTimeout error. From what I found on Google, CommandTimeout can be given 65535 as its highest value, or 0 if it should wait for an unlimited time.
Neither of these is working for me. I have also set the ConnectTimeout to 180, so should I change that too? Anybody who has faced this problem or has any confirmed knowledge, please share.
For me increasing the CommandTimeout fixed the issue.
Code sample:
//time in seconds
int timeOut = 300;
//create command
MySqlCommand myCommand = new MySqlCommand(stringSQL);
//set timeout
myCommand.CommandTimeout = timeOut;
Try sending commands in batches of 100/500; then there will be no need for a command timeout. Hope it works for you.
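A rough sketch of that batching idea with MySQL Connector/NET (MySql.Data.MySqlClient); the table, columns, connectionString, and the rows collection are placeholders you would replace with your own schema and data, and real code should parameterize or escape the values:
using System.Collections.Generic;
using System.Linq;
using MySql.Data.MySqlClient;

// connectionString and rows are placeholders: rows holds pre-built "(val1, val2)"
// value tuples generated from the SQL Server data.
const int batchSize = 500;
using (var conn = new MySqlConnection(connectionString))
{
conn.Open();
// Group the rows into chunks of batchSize and insert one chunk per command.
foreach (var batch in rows.Select((r, i) => new { r, i }).GroupBy(x => x.i / batchSize, x => x.r))
{
string sql = "INSERT INTO target_table (col1, col2) VALUES " + string.Join(",", batch);
using (var cmd = new MySqlCommand(sql, conn))
{
cmd.ExecuteNonQuery();
}
}
}
Each command now touches at most a few hundred rows, so it should finish well inside the default 30-second command timeout.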