RavenDB BulkInsert Could not flush in the specified timeout

I use BulkInsert to update ~600k items in RavenDB 3.5, calling it from a Parallel.ForEach loop, but after a few minutes I get this error:
System.TimeoutException: Could not flush in the specified timeout, server probably not responding or responding too slowly.
Are you writing very big documents?
at Raven.Client.Document.RemoteBulkInsertOperation.Write(String id, RavenJObject metadata, RavenJObject data, Nullable`1 dataSize)
at Raven.Client.Document.BulkInsertOperation.Store(Object entity, String id)
I use these BulkInsertOptions:
new BulkInsertOptions
{
    BatchSize = 128,
    WriteTimeoutMilliseconds = 60000,
    OverwriteExisting = true,
    ChunkedBulkInsertOptions = null
}
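For context, here is a minimal sketch of the kind of setup described above, assuming store is an initialized DocumentStore and items holds the ~600k documents to update. The MyItem type, its Id property, and the database name "MyDatabase" are illustrative rather than taken from the question, and how the bulk insert and the parallel loop are actually combined is not shown in the question; one possible arrangement (using directives omitted, as in the other snippets) is:

class MyItem
{
    public string Id { get; set; }
    // ... other properties being updated
}

void UpdateItems(DocumentStore store, IEnumerable<MyItem> items, BulkInsertOptions options)
{
    // A single bulk insert operation, written to from the parallel loop as described above.
    using (var bulkInsert = store.BulkInsert("MyDatabase", options))
    {
        Parallel.ForEach(items, item =>
        {
            bulkInsert.Store(item, item.Id);
        });
    }
}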

Related

Database timeout in Azure SQL

We have a .NET Core API accessing Azure SQL (Gen5, 4 vCores).
For quite some time now, the API has been throwing the exception below for a specific READ operation:
Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The READ operation has code to read rows of data and convert an XML column into a specific output format.
Most of the read operations extract hardly 4-5 rows at a time.
The tables involved in the query have ~500,000 rows.
We are clueless about the root cause of this issue.
Any hints on where to start looking for the root cause?
Any pointer would be highly appreciated.
NOTE: The connection string has the following settings, among others:
MultipleActiveResultSets=True;Connection Timeout=60
Overall, the code looks something like this.
HINT: The above timeout exception comes in ConvertHistory, when the 2nd table is being read.
[HttpGet]
public async Task<IEnumerable<SalesOrder>> GetNewSalesOrders()
{
    var SalesOrders = await _db.SalesOrders.Where(o => o.IsImported == false).OrderBy(o => o.ID).ToListAsync();
    var orders = new List<SalesOrder>();
    foreach (var so in SalesOrders)
    {
        var order = ConvertSalesOrder(so);
        orders.Add(order);
    }
    return orders;
}
private SalesOrder ConvertSalesOrder(SalesOrder o)
{
    var newOrder = new SalesOrder();
    var oXml = o.XMLContent.LoadFromXMLString<SalesOrder>();
    ...
    newOrder.BusinessUnit = oXml.BusinessUnit;
    var history = ConvertHistory(o.ID);
    newOrder.history = history;
    return newOrder;
}
private SalesOrderHistory[] ConvertHistory(string id)
{
    var history = _db.OrderHistory.Where(o => o.ID == id);
    ...
}
Microsoft.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
From the Microsoft documentation:
You will get this error for either a connection timeout or a query/command timeout; first identify which one it is from the call stack of the error messages.
If you find it is a connection issue, you can increase the connection timeout parameter; if you still get the same error, it is caused by a network issue.
From the information you provided, it is a query or command timeout error. To work around this error you can set CommandTimeout for the query or command:
command.CommandTimeout = 10;
The default timeout value is 30 seconds; if the time-out value is set to 0 (no time limit), the query will continue to run until it is finished.
For more information, refer to "Troubleshoot query time-out errors" provided by Microsoft.
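The command.CommandTimeout line above is the plain ADO.NET approach. If the API in the question uses Entity Framework Core (which the _db context suggests, though this is an assumption), the equivalent knob is SetCommandTimeout on the context's Database facade; a minimal sketch, with an illustrative 120-second value:

using Microsoft.EntityFrameworkCore;

public static class CommandTimeoutExample
{
    // Raises the command timeout for every command issued through the given context instance.
    public static void RaiseTimeout(DbContext db)
    {
        db.Database.SetCommandTimeout(120); // seconds
    }
}

In the question's controller this would amount to calling _db.Database.SetCommandTimeout(120) before running the queries.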

Is there a way to control the number of bytes read in Reactor Netty's TcpClient?

I am using TcpClient to connect to a simple TCP echo server. Messages consist of the message size in 4 bytes followed by the message itself. For instance, to send the message "hello", the server will expect "0005hello", and respond with "0005hello".
When testing under load (approximately 300+ concurrent users), adjacent requests sometimes result in responses "piling up", e.g. sending "0004good" followed by "0003day" might result in the client receiving "0004good0003" followed by "day".
In a conventional, non-WebFlux-based TCP client, one would normally read the first 4 bytes from the socket into a buffer, determine the length of the message N, then read the following N bytes from the socket into a buffer, before returning the response. Is it possible to achieve such fine-grained control, perhaps by using TcpClient's underlying Channel?
I have also considered the approach of accumulating responses in some data structure (Queue, StringBuffer, etc.) and having a daemon parse the result, but this has not had the desired performance in practice.
I solved this by adding a handler of type LengthFieldBasedFrameDecoder to the Connection:
TcpClient.create()
    .host(ADDRESS)
    .port(PORT)
    .doOnConnected((connection) -> {
        connection.addHandler("parseLengthFromFirstFourBytes", new LengthFieldBasedFrameDecoder(9999, 0, 4) {
            @Override
            protected long getUnadjustedFrameLength(ByteBuf buf, int offset, int length, ByteOrder order) {
                ByteBuf lengthBuffer = buf.copy(0, 4);
                byte[] messageLengthBytes = new byte[4];
                lengthBuffer.readBytes(messageLengthBytes);
                String messageLengthString = new String(messageLengthBytes);
                return Long.parseLong(messageLengthString);
            }
        });
    })
    .connect()
    .subscribe();
This solves the issue with the caveat that responses still "pile up" (as described in the question) when the application is subjected to sufficient load.

SSAS tabular model timeout raised during processing

When doing a Full Process on a tabular model deployed to Azure Analysis Services, I get the following error about 10 minutes into the processing:
Failed to save modifications to the server. Error returned: 'Microsoft SQL: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.. The exception was raised by the IDbCommand interface.
Technical Details:
RootActivityId: cd0cfc78-416a-4039-a79f-ed7fe9836906
Date (UTC): 2/27/2018 1:25:58 PM
The command has been canceled.. The exception was raised by the IDbCommand interface.
The command has been canceled.. The exception was raised by the IDbCommand interface.
The command has been canceled.. The exception was raised by the IDbCommand interface.
The command has been canceled.. The exception was raised by the IDbCommand interface.
The data source for the model is Azure Data Warehouse and SSAS authenticates to it via SQL authentication. When the Timeout occurs some partitions have retrieved all their rows but the others are still processing. The model contains 11 tables each with a single partition.
I get the error both when processing with Visual Studio 2015 and SSMS 2017. I can't see any SSAS server properties with a 10 minute (600 second) timeout. Individual table processing can be done without the timeout issue since individually they all complete in under 10 minutes.
I've tried setting the timeout property in the dataSources.connectionDetails object in my Tabular Model Scripting Language json file (i.e. Model.bim). But editing it drops the authentication credentials, and then resetting the credentials drops the timeout property. So I don't know if that property is even relevant to the timeout error issue.
An example of a partition query expression I'm using:
let
    Source = #"SQL/resourcename database windows net;DatabaseName",
    MyQuery =
        Value.NativeQuery(
            Source,
            "SELECT * FROM [dbo].[MyTable]"
        )
in
    MyQuery
So thanks to GregGalloway's prompting, I've figured out that the timeout can be set on a per-partition basis using the Power Query M language.
So the data access parts of my TMSL object now look like this.
The model.dataSources section is as follows:
"dataSources": [
{
"type": "structured",
"name": "MySource",
"connectionDetails": {
"protocol": "tds",
"address": {
"server": "serverName.database.windows.net",
"database": "databaseName"
},
"authentication": null,
"query": null
},
"options": {},
"credential": {
"AuthenticationKind": "UsernamePassword",
"Username": "dbUsername",
"EncryptConnection": true
}
}
]
And the individual partition queries are as follows (note the CommandTimeout parameter):
let
    Source = Sql.Database("serverName.database.windows.net", "databaseName", [CommandTimeout=#duration(0, 2, 0, 0)]),
    MyQuery =
        Value.NativeQuery(
            Source,
            "SELECT * FROM [dbo].[MyTable]"
        )
in
    MyQuery
So now I'm explicitly setting a timeout of 2 hours for the partition query (#duration takes days, hours, minutes, seconds).
Data Source -> Options: increasing the Command timeout (default 600 seconds) will also do the trick.

Exception in RavenDB when doing bulk inserts

I'm doing some bulk inserts with RavenDB, and at some random document I get an InvalidOperationException with an HTTP 403 (Forbidden) and the message "This single use token has expired".
The code is pretty straightforward:
using (var bulkInsert = store.BulkInsert("MyDatabase", new BulkInsertOptions { CheckForUpdates = true, BatchSize = 512 }))
{
    // Mapping stuff
    bulkInsert.Store(doc, productId);
}
I have tried experimenting with the batch size without any luck.
How do I fix it?
I'm using RavenDB 2.5 build 2700 and hosting RavenDB as a Windows service.
When running RavenDB in a console, I can see that it logs this message:
Error when using events transport. An operation was attempted on a
nonexistent network connection.

CLR webservice call times out at 100 seconds

I have not found this answered anywhere. I have a CLR function that executes a web method of my .NET application (.asmx). The web service executes successfully when called directly, but when called via the CLR it times out after 100 seconds with the following error:
Msg 6522, Level 16, State 1, Line 1
A .NET Framework error occurred during execution of user-defined routine or aggregate "fn_ExecuteReport":
System.Net.WebException: The operation has timed out
System.Net.WebException:
at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
at DD.WebServices.WebExec.ExecuteReport(String ddBotID, String serverKey, Int32 ddUserID, String reportReportTypeList, String deliverToUserList)
at ExecuteReport.GetResult(Int32 userID, SqlString reportList, SqlString deliverToUserList)
I have increased the web service proxy timeout in fn_ExecuteReport without effect:
WebExec svc = new WebExec();
svc.Timeout = 3600000; // set timeout to 1 hour
result = svc.ExecuteReport(userID, reportTypeList.ToString(), deliverToUserList.ToString());
I want to capture the returned result so executing the webservice asynchronously is not a solution. Where else might I override timeout settings for the SQL CLR call? Thanks for any help you can provide.
Here's the code for the function. I'm able to execute the web service; the timeout only occurs when executing via the CLR.
ALTER FUNCTION [dbo].[fn_ExecuteReport]
    (@UserID int, @ReportTypeList nvarchar(max), @DeliverToUserList nvarchar(max))
RETURNS [nvarchar](255)
WITH EXECUTE AS CALLER AS EXTERNAL NAME [MyCLRLib].[ExecuteReport].[GetResult]
I've tried both synchronous and asynchronous calls to the web service in the CLR function, and both end up with the 100-second timeout. Here are both calls that I've tried:
Synchronous:
WebExec svc = new WebExec();
svc.Timeout = 3600000; // set timeout to 1 hour
result = svc.ExecuteReport(userID, reportTypeList.ToString(), deliverToUserList.ToString());
Asynchronous:
WebExec svc = new WebExec();
IAsyncResult result = svc.BeginExecuteReport(userID, reportTypeList.ToString(), deliverToUserList.ToString(), null, null);
result.AsyncWaitHandle.WaitOne();
retStr = svc.EndExecuteReport(result);
Your error message suggests that the timeout is originating at SQL Server.
Have you tried updating your SQL Server statistics (or rebuilding indexes)?
Can you post the code of your CLR function?