Exception in RavenDB when doing bulk inserts

I'm doing some bulk inserts with RavenDB, and at some random document I get an InvalidOperationException with an HTTP 403 (Forbidden) and the message "This single use token has expired".
The code is pretty straightforward:
using (var bulkInsert = store.BulkInsert("MyDatabase", new BulkInsertOptions { CheckForUpdates = true, BatchSize = 512 }))
{
    // Mapping stuff
    bulkInsert.Store(doc, productId);
}
I have tried experimenting with the batch size without any luck.
How do I fix it?
I'm using RavenDB 2.5 build 2700 and hosting RavenDB as a Windows service.
When running RavenDB in a console, I can see that it logs this message:
Error when using events transport. An operation was attempted on a nonexistent network connection.

Related

VichUploader breaks entity - Unexpected EOF

I am installing the package VichUploader (https://github.com/dustin10/VichUploaderBundle).
To make the file upload work when no other inputs on the entity have changed, I migrated my entity to add an updated_at field.
After this migration my entity does not update anymore.
I can create a new entity without any problems, but I get weird (non-500) errors:
Fatal error: Maximum execution time of 30+2 seconds exceeded (terminated) in /Users/alphabetus/Documents/repos/fluid-cms/src/Controller/BlockController.php on line 181
ERROR| SERVER issue with server callback error="unable to fetch the response from the backend: unexpected EOF"
ERROR| SERVER POST (502) /admin/blocks/edit/706ae964-e2c1-11ea-b09a-69c7fbc1be88 host="127.0.0.1:8004" ip="::1" scheme="https"
My line #181 contains the following:
/**
 * @return File|null
 */
public function getImageFile()
{
    return $this->image_file; // line 181
}

public function setImageFile(File $image_file = null): void
{
    $this->image_file = $image_file;
    if ($image_file) {
        $this->updated_at = new \DateTime('now');
    }
}
I am new to Symfony. What am I doing wrong?
Thanks
Judging from your error message, the time it takes to process the file is too long (more than 30 seconds), so your PHP server kills the process.
You can use set_time_limit in your index.php file or change the max_execution_time in php.ini.
Also, the error points to line 181 of BlockController.php, not your entity.
If this doesn't fix your issue, please supply more details/code. I've used VichUploaderBundle quite a lot (including the updatedAt trigger) without any issues.

Azure SQL Serverless Database return code when paused

Description:
I have an application that connects to an Azure Serverless Database. The database can be in a paused state or an online state. The database auto-pauses when there has been no activity for one hour. This means that when my application tries to open a connection to the database while it is paused, the connection times out and throws a timeout exception.
Azure states in their documentation that:
If a serverless database is paused, then the first login will resume the database and return an error stating that the database is unavailable with error code 40613. Once the database is resumed, the login must be retried to establish connectivity. Database clients with connection retry logic should not need to be modified. source
I am able to get this error code 40613 returned when I try to connect to the database via SQL Server Management Studio. But when I try to open a connection to the database from my application, I only get a timeout exception, so I can't tell whether the database is unavailable or in fact resuming.
Code example:
public IDbConnection GetConnection()
{
    var connection = new SqlConnection(_connectionString);
    try
    {
        connection.Open();
        return connection; // the caller disposes the connection when done
    }
    catch (SqlException e)
    {
        connection.Dispose();
        if (e.Number == 40613)
        {
            // Database is resuming
        }
        throw;
    }
}
Exception example:
When I run my application and the database is in paused state I get this exception:
[Screenshot: snippet of the exception in Visual Studio]
Does anyone know why I don't get the error code 40613 that Azure states in their documentation?
Indeed you may get timeout errors when the Azure database is unavailable. In fact you may get the following errors:
HTTP error GatewayTimeout: The gateway did not receive a response from 'Microsoft.Sql' within the specified time period
HTTP error ServiceUnavailable: The request timed out
SQLException : Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
You may also get error 40613, but you can capture some transient errors like the ones below too:
•Database on server is not currently available. Please retry the connection later. If the problem persists, contact customer support, and provide them the session tracing ID of . (Microsoft SQL Server, Error: 40613)
•An existing connection was forcibly closed by the remote host.
•System.Data.Entity.Core.EntityCommandExecutionException: An error occurred while executing the command definition. See the inner exception for details. ---> System.Data.SqlClient.SqlException: A transport-level error has occurred when receiving results from the server. (provider: Session Provider, error: 19 - Physical connection is not usable)
•A connection attempt to a secondary database failed because the database is in the process of reconfiguration and it is busy applying new pages while in the middle of an active transaction on the primary database.
Because of those errors and more explained here, it is necessary to build retry logic into applications that connect to Azure SQL Database.
public void HandleTransients()
{
    var connStr = "some database";
    var policy = new RetryPolicy<SqlAzureTransientErrorDetectionStrategy>(
        retryCount: 3,
        retryInterval: TimeSpan.FromSeconds(5));

    using (var conn = new ReliableSqlConnection(connStr, policy))
    {
        conn.Open();
        // Do SQL stuff here.
    }
}
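Note that ReliableSqlConnection, RetryPolicy, and SqlAzureTransientErrorDetectionStrategy in this sample come from the Enterprise Library Transient Fault Handling Application Block, so the code assumes that package is referenced by your project.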
More about how to create retry logic here.

GoogleApiException: Google.Apis.Requests.RequestError Backend Error [500] when streaming to BigQuery

I've been streaming data to BigQuery for the past year or so from a service in Azure written in C#, and recently started to get an increasing number of the following errors (most of the requests succeed):
Message: [GoogleApiException: Google.Apis.Requests.RequestError
An internal error occurred and the request could not be completed. [500]
Errors [
    Message[An internal error occurred and the request could not be completed.] Location[ - ] Reason[internalError] Domain[global]
] ]
This is the code I'm using in my service:
public async Task<TableDataInsertAllResponse> Update(List<TableDataInsertAllRequest.RowsData> rows, string tableSuffix)
{
    var request = new TableDataInsertAllRequest { Rows = rows, TemplateSuffix = tableSuffix };
    var insertRequest = mBigqueryService.Tabledata.InsertAll(request, ProjectId, mDatasetId, mTableId);
    return await insertRequest.ExecuteAsync();
}
Just like any other cloud service, BigQuery doesn't offer a 100% uptime SLA (it's actually 99.9%), so it's not uncommon to encounter transient errors like these. We also receive them frequently in our applications.
You need to build exponential backoff-and-retry logic into your application(s) to handle such errors. A good way of doing this is to use a queue to stream your data to BigQuery. This is what we do and it works very well for us.
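For illustration, here's a minimal backoff-and-retry sketch wrapped around the Update method from the question. The attempt limit, the initial delay, and the choice to retry only 5xx responses are assumptions for the example, not values prescribed by Google:
public async Task<TableDataInsertAllResponse> UpdateWithRetry(List<TableDataInsertAllRequest.RowsData> rows, string tableSuffix)
{
    const int maxAttempts = 5;                  // illustrative limit
    var delay = TimeSpan.FromMilliseconds(500); // illustrative initial delay

    for (var attempt = 1; ; attempt++)
    {
        try
        {
            return await Update(rows, tableSuffix);
        }
        catch (Google.GoogleApiException e) when (attempt < maxAttempts && (int)e.HttpStatusCode >= 500)
        {
            // Transient server-side error: wait, then double the delay.
            await Task.Delay(delay);
            delay = TimeSpan.FromTicks(delay.Ticks * 2);
        }
    }
}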
Some more info:
https://cloud.google.com/bigquery/troubleshooting-errors
https://cloud.google.com/bigquery/loading-data-post-request#exp-backoff
https://cloud.google.com/bigquery/streaming-data-into-bigquery
https://cloud.google.com/bigquery/sla

What is causing EventStore to throw ConcurrencyException so easily?

Using JOliver EventStore 3.0, and just getting started with simple samples.
I have a simple pub/sub CQRS implementation using NServiceBus. A client sends commands on the bus; a domain server receives and processes the commands and stores events to the EventStore, which are then published on the bus by the EventStore's dispatcher. A read-model server then subscribes to those events to update the read model. Nothing fancy, pretty much by-the-book.
It is working, but even in simple tests I am getting lots of concurrency exceptions (intermittently) on the domain server when the event is stored to the EventStore. It properly retries, but sometimes it hits the 5-retry limit and the command ends up on the error queue.
Where could I start investigating to see what is causing the concurrency exception? I removed the dispatcher and just focused on storing events, and it has the same issue.
I'm using RavenDB for persistence of my EventStore. I'm not doing anything fancy, just this:
using (var stream = eventStore.OpenStream(entityId, 0, int.MaxValue))
{
    stream.Add(new EventMessage { Body = myEvent });
    stream.CommitChanges(Guid.NewGuid());
}
The stack trace for the exception looks like this:
2012-03-17 18:34:01,166 [Worker.14] WARN NServiceBus.Unicast.UnicastBus [(null)] <(null)> - EmployeeCommandHandler failed handling message.
EventStore.ConcurrencyException: Exception of type 'EventStore.ConcurrencyException' was thrown.
   at EventStore.OptimisticPipelineHook.PreCommit(Commit attempt) in c:\Code\public\EventStore\src\proj\EventStore.Core\OptimisticPipelineHook.cs:line 55
   at EventStore.OptimisticEventStore.Commit(Commit attempt) in c:\Code\public\EventStore\src\proj\EventStore.Core\OptimisticEventStore.cs:line 90
   at EventStore.OptimisticEventStream.PersistChanges(Guid commitId) in c:\Code\public\EventStore\src\proj\EventStore.Core\OptimisticEventStream.cs:line 168
   at EventStore.OptimisticEventStream.CommitChanges(Guid commitId) in c:\Code\public\EventStore\src\proj\EventStore.Core\OptimisticEventStream.cs:line 149
   at CQRSTest3.Domain.Extensions.StoreEvent(IStoreEvents eventStore, Guid entityId, Object evt) in C:\dev\test\CQRSTest3\CQRSTest3.Domain\Extensions.cs:line 13
   at CQRSTest3.Domain.ComandHandlers.EmployeeCommandHandler.Handle(ChangeEmployeeSalary message) in C:\dev\test\CQRSTest3\CQRSTest3.Domain\ComandHandlers\EmployeeCommandHandler.cs:line 55
I figured it out. I had to dig through the source code to find it, though. I wish this was better documented! Here's my new EventStore wireup:
EventStore = Wireup.Init()
    .UsingRavenPersistence("RavenDB")
    .ConsistentQueries()
    .InitializeStorageEngine()
    .Build();
I had to add .ConsistentQueries() in order for the Raven persistence provider to internally use WaitForNonStaleResults on the queries EventStore was making to Raven.
Basically, when I added a new event and then tried to add another before Raven had caught up with indexing, the stream revision was not up to date, and the second event would step on the first one.

Detect TimeoutException on server side WCF

I have a WCF service that has some operations that may take a long time...
The client receives a TimeoutException, but the server continues executing after the long operation.
Server:
public void doSomeWork(TransmissionObject o) {
    doDBOperation1(o);
    doDBOperation2(o); // may result in a TimeoutException on the client
    doDBOperation3(o); // the server keeps doing DB operations; the client is unaware!
}
Client:
ServiceReference.IServiceClient cli = new ServiceReference.IServiceClient("WSHttpBinding_IService", "http://localhost:3237/Test/service.svc");
int size = 1000;
bool done = false;
TransmissionObject o = null;

while (!done) {
    o = createTransmissionObject(size);
    try {
        cli.doSomeWork(o);
        done = true;
    } catch (TimeoutException ex) {
        // We want to reduce the size of the object and try again.
        size--;
        // The DB operations on the server succeed, but the client doesn't know;
        // this causes errors.
    } catch (Exception ex) { ... }
}
Since the server is performing some DB operations, I need to detect the timeout on the server side to be able to rollback the DB operations.
I tried to use transactions with [TransactionFlow], TransactionScope, etc. on the client side, but the DB operations on the server use stored procedures that are NESTED, therefore I cannot use distributed transactions (I receive an SqlException saying: Cannot use SAVE TRANSACTION within a distributed transaction). If I use simple SPs (that are not nested), then the solution with transactions works fine.
My Question:
How can I detect the TimeoutException, but on the server side? I guess it's something related to the proxy status... or probably some events that can be captured by the server.
I'm not sure if handling the transaction on the server side is the correct solution..
Is there a pattern to solve this problem?
Thanks!
Instead of waiting for an operation to time out, you may consider using asynchronous operations, as in this blog post: http://www.danrigsby.com/blog/index.php/2008/03/18/async-operations-in-wcf-event-based-model/
The idea is that you expect up front that an operation will take some time. The server signals the client when the job is finished.
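As a rough sketch under that model, assuming the proxy is regenerated with the event-based async pattern enabled (the doSomeWorkAsync method and doSomeWorkCompleted event below are hypothetical generated member names, not taken from the original code):
var cli = new ServiceReference.IServiceClient("WSHttpBinding_IService", "http://localhost:3237/Test/service.svc");

// Hypothetical generated event: raised when the server reports the job is done.
cli.doSomeWorkCompleted += (sender, args) =>
{
    if (args.Error != null)
    {
        // The server reported a failure explicitly; handle or retry here
        // instead of inferring the outcome from a client-side timeout.
        return;
    }
    // The job really finished on the server; safe to continue.
};

// Hypothetical generated method: returns immediately, so there is no
// client-side timeout racing against long-running DB operations.
cli.doSomeWorkAsync(createTransmissionObject(1000));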