A resource manager with the same identifier is already registered with the specified transaction coordinator - ravendb

I am consistently getting a ResourceManager error. I use the code below:
public static IDocumentStore Initialize()
{
    const string id = "2a5434cf-56f6-4b33-9661-5b6cc53bd9a5";
    _instance = new DocumentStore
    {
        Url = "http://localhost:8085",
        DefaultDatabase = "Testing",
        ResourceManagerId = new Guid(id)
    };
    _instance.Initialize();
    return _instance;
}
Here's the call stack:
2015-06-04 15:39:08.366 INFO NServiceBus.Unicast.Transport.TransportReceiver Failed to process message
System.Runtime.InteropServices.COMException (0x8004D102): A resource manager with the same identifier is already registered with the specified transaction coordinator. (Exception from HRESULT: 0x8004D102)
at System.Transactions.Oletx.IDtcProxyShimFactory.CreateResourceManager(Guid resourceManagerIdentifier, IntPtr managedIdentifier, IResourceManagerShim& resourceManagerShim)
at System.Transactions.Oletx.OletxResourceManager.get_ResourceManagerShim()
at System.Transactions.Oletx.OletxResourceManager.EnlistDurable(OletxTransaction oletxTransaction, Boolean canDoSinglePhase, IEnlistmentNotificationInternal enlistmentNotification, EnlistmentOptions enlistmentOptions)
at System.Transactions.Oletx.OletxTransaction.EnlistDurable(Guid resourceManagerIdentifier, ISinglePhaseNotificationInternal singlePhaseNotification, Boolean canDoSinglePhase, EnlistmentOptions enlistmentOptions)
at System.Transactions.TransactionStatePromotedBase.EnlistDurable(InternalTransaction tx, Guid resourceManagerIdentifier, IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions, Transaction atomicTransaction)
at System.Transactions.Transaction.EnlistDurable(Guid resourceManagerIdentifier, IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions)
at Raven.Client.Document.InMemoryDocumentSessionOperations.TryEnlistInAmbientTransaction() in c:\Builds\RavenDB-Stable-3.0\Raven.Client.Lightweight\Document\InMemoryDocumentSessionOperations.cs:line 1082
at Raven.Client.Document.InMemoryDocumentSessionOperations.PrepareForSaveChanges() in c:\Builds\RavenDB-Stable-3.0\Raven.Client.Lightweight\Document\InMemoryDocumentSessionOperations.cs:line 931
at Raven.Client.Document.DocumentSession.SaveChanges() in c:\Builds\RavenDB-Stable-3.0\Raven.Client.Lightweight\Document\DocumentSession.cs:line 707
at NServiceBus.RavenDB.SessionManagement.OpenSessionBehavior.Invoke(IncomingContext context, Action next) in c:\BuildAgent\work\c4d62ce02b983878\src\NServiceBus.RavenDB\SessionManagement\OpenSessionBehavior.cs:line 22
[…]
Has anyone else come across this issue?

Here is the solution I found:
I had created 6 endpoints, and for each endpoint I was using the same ResourceManagerId. Once I gave each endpoint its own ResourceManagerId (a distinct constant GUID), the issue was resolved.
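For illustration, one way to keep a distinct constant GUID per endpoint (a sketch; the endpoint names and GUID values below are made up, not taken from the original setup):

using System;

public static class ResourceManagerIds
{
    // One constant, distinct GUID per endpoint; never share these across endpoints.
    public static readonly Guid SalesEndpoint   = new Guid("5c41cf4f-9b36-4a2f-8e1c-1f0a2d3b4c5d");
    public static readonly Guid BillingEndpoint = new Guid("7d8e9f0a-1b2c-4d3e-9f4a-5b6c7d8e9f0a");
}

// Then, in each endpoint's store initializer:
// _instance = new DocumentStore { ..., ResourceManagerId = ResourceManagerIds.SalesEndpoint };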

Related

Redis Reactive Streams Subscriber Thread Hangs

I am trying to use the Spring Boot Redis reactive stream support to subscribe to a stream as a listener. When data is inserted into the stream, the listener passes it to the client over a gRPC stream. I keep a pointer (just a Redis value) to track the last record that was delivered to the client. The thread gets blocked randomly when I set the pointer in Redis, and the call times out.
I mainly use this pointer so that, if the client reconnects after some time, I can deliver everything from the last record sent up to the current data. The thread gets blocked at this call: template.opsForValue().set(pointerKey, msg.getId().toString()).block(Duration.ofSeconds(5));
Please let me know if anything is wrong in the code below. If I post 10 records to the stream, I get the error after receiving 5 records.
Code
public void subscribe() {
    String channelId = this.streamRequest.getTopic();
    String identifier = this.streamRequest.getIdentifier();
    boolean isNew = this.streamRequest.getNew();
    String pointerKey = channelId + "_" + identifier + "_pointer";
    StreamOffset<String> stringStreamOffset = StreamOffset.fromStart(channelId);
    if (isNew) {
        // If the client wants to read data from the start,
        // remove the pointer.
        template.opsForValue().delete(pointerKey).block();
    } else {
        String id = template.opsForValue().get(pointerKey).block();
        stringStreamOffset = id != null ? StreamOffset.create(channelId, ReadOffset.from(id)) : StreamOffset.fromStart(channelId);
    }
    logger.info("[SC] subscribed {}", this.streamRequest);
    Flux<ObjectRecord<String, String>> receiver = this.streamReceive.receive(stringStreamOffset);
    disposable = receiver.subscribe(msg -> {
        logger.info("Processing message {}", msg.getValue());
        String value = msg.getValue();
        StreamResponse streamResponse = StreamResponse.newBuilder().setData(value).build();
        try {
            logger.info("[SC] posting data to the grpc client topic {}", this.streamRequest);
            this.responseObserver.onNext(streamResponse);
            logger.info("[SC] Successfully posted data to the grpc client {}", this.streamRequest);
            logger.info("[SC] Updating pointer {}", pointerKey);
            template.opsForValue().set(pointerKey, msg.getId().toString())
                    .block(Duration.ofSeconds(5));
            logger.info("[SC] pointer update completed {}", pointerKey);
        } catch (Exception ex) {
            logger.error("Error:{}", ex.getMessage());
            this.responseObserver.onError(ex.getCause());
            close();
        }
    });
}
Error:
Name: lettuce-nioEventLoop-4-1
State: TIMED_WAITING on java.util.concurrent.CountDownLatch$Sync@3c9ebf7a
Total blocked: 2 Total waited: 60
Stack trace:
java.base@17.0.2/jdk.internal.misc.Unsafe.park(Native Method)
java.base@17.0.2/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)
java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:717)
java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1074)
java.base@17.0.2/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:276)
app//reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:121)
app//reactor.core.publisher.Mono.block(Mono.java:1731)
app//ai.jiffy.message.publisher.ws.StreamConnection.setPointer(StreamConnection.java:68)
app//ai.jiffy.message.publisher.ws.StreamConnection.lambda$new$0(StreamConnection.java:54)
app//ai.jiffy.message.publisher.ws.StreamConnection$$Lambda$1318/0x0000000801553610.accept(Unknown Source)
app//reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160)
app//reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:793)
app//reactor.core.publisher.FluxCreate$BufferAsyncSink.next(FluxCreate.java:718)
app//reactor.core.publisher.FluxCreate$SerializedFluxSink.next(FluxCreate.java:154)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription.onStreamMessage(DefaultStreamReceiver.java:398)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription.access$300(DefaultStreamReceiver.java:210)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription$1.onNext(DefaultStreamReceiver.java:360)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription$1.onNext(DefaultStreamReceiver.java:351)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.FluxUsingWhen$UsingWhenSubscriber.onNext(FluxUsingWhen.java:345)
app//reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
app//reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
app//io.lettuce.core.RedisPublisher$ImmediateSubscriber.onNext(RedisPublisher.java:886)
app//io.lettuce.core.RedisPublisher$RedisSubscription.onNext(RedisPublisher.java:291)
app//io.lettuce.core.output.StreamingOutput$Subscriber.onNext(StreamingOutput.java:64)
app//io.lettuce.core.output.StreamReadOutput.complete(StreamReadOutput.java:110)
app//io.lettuce.core.protocol.RedisStateMachine.doDecode(RedisStateMachine.java:343)
app//io.lettuce.core.protocol.RedisStateMachine.decode(RedisStateMachine.java:295)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:841)
app//io.lettuce.core.protocol.CommandHandler.decode0(CommandHandler.java:792)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:766)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:658)
app//io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:598)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base@17.0.2/java.lang.Thread.run(Thread.java:833)
Thanks in advance.

Akka.net: Should I specify "split brain resolver" configuration for Lighthouse/Seed nodes

I have an application using the Akka.NET cluster feature. The people who wrote the code have left the company.
I am trying to understand the code, and we are planning a deployment.
The cluster has 2 types of nodes:
QueueServicer: supports sharding, and only these nodes should participate in sharding.
LightHouse: just seed nodes, nothing else.
Lighthouse: 2 nodes
QueueServicer: 3 nodes
I see one of the QueueServicer nodes unable to join the cluster. Both lighthouse nodes refuse the connection; it constantly tries to join and never succeeds. This has been happening for the last 5 days or so, and the node never dies either. Its CPU and memory usage is high, it has no queue processor actors running (based on a filtered search through the log), garbage collection takes hours, and so on. In the log for this node, I see the following:
{"timestamp":"2021-09-08T22:26:59.025Z", "logger":"Akka.Event.DummyClassForStringSources", "message":Tried to associate with unreachable remote address [akka.tcp://myapp#lighthouse-1:7892]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://myapp#lighthouse-1:7892] Caused by: [System.AggregateException: One or more errors occurred. (Connection refused akka.tcp://myapp#lighthouse-1:7892) ---> Akka.Remote.Transport.InvalidAssociationException: Connection refused akka.tcp://myapp#lighthouse-1:7892 at Akka.Remote.Transport.DotNetty.TcpTransport.AssociateInternal(Address remoteAddress) at Akka.Remote.Transport.DotNetty.DotNettyTransport.Associate(Address remoteAddress) --- End of inner exception stack trace --- at System.Threading.Tasks.Task1.GetResultCore(Boolean waitCompletionNotification) at Akka.Remote.Transport.ProtocolStateActor.<>c.<InitializeFSM>b__12_18(Task1 result) at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
{"timestamp":"2021-09-08T22:26:59.025Z", "logger":"Akka.Event.DummyClassForStringSources", "message":Tried to associate with unreachable remote address [akka.tcp://myapp#lighthouse-0:7892]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://myapp#lighthouse-0:7892] Caused by: [System.AggregateException: One or more errors occurred. (Connection refused akka.tcp://myapp#lighthouse-0:7892) ---> Akka.Remote.Transport.InvalidAssociationException: Connection refused akka.tcp://myapp#lighthouse-0:7892 at Akka.Remote.Transport.DotNetty.TcpTransport.AssociateInternal(Address remoteAddress) at Akka.Remote.Transport.DotNetty.DotNettyTransport.Associate(Address remoteAddress) --- End of inner exception stack trace --- at System.Threading.Tasks.Task1.GetResultCore(Boolean waitCompletionNotification) at Akka.Remote.Transport.ProtocolStateActor.<>c.<InitializeFSM>b__12_18(Task1 result) at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
There are other "Now supervising", "Stopping" "Started" logs which I am omitting here.
Can you please verify if the HCON config is correct for split brain resolver and Sharding?
I think LightHouse/SeeNodes should not have the sharding configuration specified. I think it is a mistake.
I also think, split brain resolver configuration might be wrong in LightHouse/SeedNodes and should not be specified for seed nodes.
I appreciate your help.
Here is the HOCON for QueueServicer (trimmed):
akka {
    loggers = ["Akka.Logger.log4net.Log4NetLogger, Akka.Logger.log4net"]
    log-config-on-start = on
    loglevel = "DEBUG"
    actor {
        provider = cluster
        serializers {
            hyperion = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
        }
        serialization-bindings {
            "System.Object" = hyperion
        }
    }
    remote {
        dot-netty.tcp {
            ….
        }
    }
    cluster {
        seed-nodes = ["akka.tcp://myapp@lighthouse-0:7892", "akka.tcp://myapp@lighthouse-1:7892"]
        roles = ["QueueProcessor"]
        sharding {
            role = "QueueProcessor"
            state-store-mode = ddata
            remember-entities = true
            passivate-idle-entity-after = off
        }
        downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
        split-brain-resolver {
            active-strategy = keep-majority
            stable-after = 20s
            keep-majority {
                role = "QueueProcessor"
            }
        }
        down-removal-margin = 20s
    }
    extensions = ["Akka.Cluster.Tools.PublishSubscribe.DistributedPubSubExtensionProvider,Akka.Cluster.Tools"]
}
Here is the HOCON for Lighthouse:
akka {
    loggers = ["Akka.Logger.log4net.Log4NetLogger, Akka.Logger.log4net"]
    log-config-on-start = on
    loglevel = "DEBUG"
    actor {
        provider = cluster
        serializers {
            hyperion = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
        }
        serialization-bindings {
            "System.Object" = hyperion
        }
    }
    remote {
        dot-netty.tcp {
            …
        }
    }
    cluster {
        seed-nodes = ["akka.tcp://myapp@lighthouse-0:7892", "akka.tcp://myapp@lighthouse-1:7892"]
        roles = ["lighthouse"]
        sharding {
            role = "lighthouse"
            state-store-mode = ddata
            remember-entities = true
            passivate-idle-entity-after = off
        }
        downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
        split-brain-resolver {
            active-strategy = keep-oldest
            stable-after = 30s
            keep-oldest {
                down-if-alone = on
                role = "lighthouse"
            }
        }
    }
}
I meant to reply to this sooner.
Here is your problem: you're using two different split brain resolver configurations - one for the QueueServicer and one for Lighthouse. Therefore, how your cluster resolves itself is going to be quite different depending upon who is the leader of each half of the cluster.
I would stick with a simple keep-majority strategy and use it uniformly on all nodes throughout the cluster - we're very likely going to enable this by default in Akka.NET v1.5.
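For illustration, a uniform configuration could look like this (a sketch built from the settings already shown above; the same block would go on every node, Lighthouse included):

akka.cluster {
    downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
    split-brain-resolver {
        active-strategy = keep-majority
        stable-after = 20s
        # No role filter here: the majority is then computed over all cluster members.
    }
}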
If you have any questions, please feel free to reach out to us: https://petabridge.com/

RavenDB C# client throws error as Invalid node tag character: n

I am trying to remove a document in RavenDB, which has 3 nodes running on a single machine (development setup).
Below is the code for removing the document.
public bool Remove<T>(string id) where T : new()
{
    bool bResult = false;
    using (var session = _session.OpenSession())
    {
        session.Delete(id);
        session.SaveChanges();
        bResult = true;
    }
    return bResult;
}
but it throws an error on the line session.SaveChanges():
Invalid node tag character: n ...
Stack trace:
System.ArgumentException: Invalid node tag character: n
at Raven.Server.Documents.Replication.ChangeVectorParser.ThrowInvalidNodeTag(Char ch) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Documents\Replication\ChangeVectorParser.cs:line 71
at Raven.Server.Documents.Replication.ChangeVectorParser.ParseNodeTag(String changeVector, Int32 start, Int32 end) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Documents\Replication\ChangeVectorParser.cs:line 52
at Raven.Server.Documents.Replication.ChangeVectorParser.MergeChangeVector(String changeVector, List`1 entries) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Documents\Replication\ChangeVectorParser.cs:line 186
at Raven.Server.Utils.ChangeVectorUtils.MergeVectors(String vectorAstring, String vectorBstring) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Utils\ChangeVectorUtils.cs:line 213
at Raven.Server.Documents.DocumentsStorage.CreateTombstone(DocumentsOperationContext context, Slice lowerId, Int64 documentEtag, CollectionName collectionName, String docChangeVector, Int64 lastModifiedTicks, String changeVector, DocumentFlags flags, NonPersistentDocumentFlags nonPersistentFlags) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Documents\DocumentsStorage.cs:line 1378
at Raven.Server.Documents.DocumentsStorage.Delete(DocumentsOperationContext context, Slice lowerId, String id, LazyStringValue expectedChangeVector, Nullable`1 lastModifiedTicks, String changeVector, CollectionName collectionName, NonPersistentDocumentFlags nonPersistentFlags, DocumentFlags documentFlags) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Documents\DocumentsStorage.cs:line 1195
at Raven.Server.Documents.DocumentsStorage.Delete(DocumentsOperationContext context, String id, String expectedChangeVector) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Documents\DocumentsStorage.cs:line 1091
at Raven.Server.Documents.Handlers.BatchHandler.MergedBatchCommand.ExecuteCmd(DocumentsOperationContext context) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Documents\Handlers\BatchHandler.cs:line 706
at Raven.Server.Documents.TransactionOperationsMerger.ExecutePendingOperationsInTransaction(List`1 pendingOps, DocumentsOperationContext context, Task previousOperation, DurationMeasurement& meter) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Documents\TransactionOperationsMerger.cs:line 825
at Raven.Server.Documents.TransactionOperationsMerger.MergeTransactionsOnce() in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Documents\TransactionOperationsMerger.cs:line 500
--- End of stack trace from previous location where exception was thrown ---
at Raven.Server.Documents.TransactionOperationsMerger.Enqueue(MergedTransactionCommand cmd) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Documents\TransactionOperationsMerger.cs:line 124
at Raven.Server.Documents.Handlers.BatchHandler.BulkDocs() in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Documents\Handlers\BatchHandler.cs:line 96
at Raven.Server.Routing.RequestRouter.HandlePath(RequestHandlerContext reqCtx) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\Routing\RequestRouter.cs:line 124
at Raven.Server.RavenServerStartup.RequestHandler(HttpContext context) in C:\Builds\RavenDB-Stable-4.1\src\Raven.Server\RavenServerStartup.cs:line 172
I ran into the same error yesterday. I restored a database on a different machine where I had installed a brand-new RavenDB and (being lazy) named the new instance node "A". It appears that RavenDB currently cannot remove documents when the change vector and the instance tag name don't match.
[screenshot: node tag mismatch]
It looks like an honest mistake in the code rather than intentional behavior, but that is only my guess, because I didn't find anything about this behavior in the 4.1 documentation.
Solution (if you confirm there is a mismatch):
You could try to add a new node to your cluster with a name matching the change vector of the locked documents. In my case I was not able to configure standalone RavenDB to have a node tag of more than 4 characters (that was no problem on Docker)... It might be harder to get the database into a consistent state then.
Alternatively, try exporting and importing the data. This fixes the problem because the change vectors are updated and match the new node tag.
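If you go the export/import route, the client's smuggler API can do it programmatically (a rough sketch, assuming a RavenDB 4.x client; the store parameters and dump path are placeholders):

using System.Threading.Tasks;
using Raven.Client.Documents;
using Raven.Client.Documents.Smuggler;

public static async Task ExportAndImportAsync(DocumentStore source, DocumentStore target)
{
    // Export everything from the source database to a local dump file...
    var export = await source.Smuggler.ExportAsync(
        new DatabaseSmugglerExportOptions(), @"C:\temp\dump.ravendbdump");
    await export.WaitForCompletionAsync();

    // ...then import it into the target database; change vectors are
    // rewritten on import, so they match the new node tag.
    var import = await target.Smuggler.ImportAsync(
        new DatabaseSmugglerImportOptions(), @"C:\temp\dump.ravendbdump");
    await import.WaitForCompletionAsync();
}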

BigQuery Streaming with C# Client Library

I have an Oracle table that needs to be migrated to BigQuery. I wrote a simple C# console application and started streaming inserts, but sometimes the application throws the error below. My code, which is pretty simple, follows it. Does anyone have an idea what may cause this error? Thanks in advance.
Unhandled Exception: System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.WebException: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult)
at System.Net.Sockets.NetworkStream.EndRead(IAsyncResult asyncResult)
--- End of inner exception stack trace ---
at System.Net.Security._SslStream.EndRead(IAsyncResult asyncResult)
at System.Net.TlsStream.EndRead(IAsyncResult asyncResult)
at System.Net.PooledStream.EndRead(IAsyncResult asyncResult)
at System.Net.Connection.ReadCallback(IAsyncResult asyncResult)
--- End of inner exception stack trace ---
at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at System.Net.Http.HttpClientHandler.GetResponseCallback(IAsyncResult ar)
--- End of inner exception stack trace ---
at Google.Apis.Http.ConfigurableMessageHandler.<SendAsync>d__58.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Google.Apis.Requests.ClientServiceRequest`1.<ExecuteUnparsedAsync>d__33.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at Google.Apis.Requests.ClientServiceRequest`1.Execute()
at Google.Cloud.BigQuery.V2.BigQueryClientImpl.InsertRows(TableReference tableReference, IEnumerable`1 rows, InsertOptions options)
at Google.Cloud.BigQuery.V2.BigQueryClient.InsertRows(String datasetId, String tableId, BigQueryInsertRow[] rows)
at BigQueryStreamer.Program.UploadJsonStreamingSync(String datasetId, String tableId, BigQueryClient client, BigQueryInsertRow[] _rows) in C:\Projects\BigQueryStreamer\BigQueryStreamer\Program.cs:line 330
at BigQueryStreamer.Program.Main(String[] args) in C:\Projects\BigQueryStreamer\BigQueryStreamer\Program.cs:line 185
my code block is:
List<BigQueryInsertRow> _list = new List<BigQueryInsertRow>();
while (oracleReader.Read())
{
    BigQueryInsertRow bigQueryInsertRow = new BigQueryInsertRow();
    Dictionary<string, object> dictionary = new Dictionary<string, object>();
    for (int ordinal = 0; ordinal < oracleReader.FieldCount; ++ordinal)
    {
        string str = oracleReader.GetValue(ordinal).GetType().ToString();
        // Numeric types are converted to double; DBNull becomes null.
        // (Note: the runtime type name for float is "System.Single", not "System.Float".)
        object obj = (str == "System.Decimal" || str == "System.Double" || str == "System.Single") ?
            (object)double.Parse(oracleReader.GetValue(ordinal).ToString()) :
            (str == "System.DBNull" ? (object)null : oracleReader.GetValue(ordinal));
        dictionary.Add(oracleReader.GetName(ordinal), obj);
    }
    bigQueryInsertRow.Add(dictionary);
    _list.Add(bigQueryInsertRow);
}
List<BigQueryInsertRow> _SendList = new List<BigQueryInsertRow>();
// To stream in 1000 rows at a time, I set _batchSize to 1000 in the application configuration.
for (int i = 0; i < _list.Count; i++)
{
    _SendList.Add(_list[i]);
    if (_SendList.Count == _batchSize)
    {
        System.Threading.Thread.Sleep(150);
        UploadJsonStreamingSync(_datasetid, _target, _client, _SendList.ToArray());
        Console.WriteLine("Offset: " + ((ubound + 1) * _batchSize).ToString());
        ubound++;
        _SendList.Clear();
    }
}
if (_SendList.Count > 0)
{
    System.Threading.Thread.Sleep(150);
    UploadJsonStreamingSync(_datasetid, _target, _client, _SendList.ToArray());
    Console.WriteLine("Offset: " + (_SendList.Count).ToString());
    ubound++;
    _SendList.Clear();
}
_list.Clear();
_list = null;

// Streaming insert function
public static void UploadJsonStreamingSync(string datasetId, string tableId, BigQueryClient client, BigQueryInsertRow[] _rows)
{
    client.InsertRows(datasetId, tableId, _rows);
}
When facing infrequent socket exceptions/network issues, as Graham pointed out, you need to write logic to handle them.
There are libraries which you can use.
I used: https://github.com/App-vNext/Polly
Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner.
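For example, a minimal retry wrapper around the streaming insert might look like this (a sketch; the exception types handled and the back-off values are illustrative, not prescriptive):

using System;
using System.Net.Http;
using Google.Cloud.BigQuery.V2;
using Polly;

public static class StreamingRetry
{
    public static void UploadJsonStreamingWithRetry(
        string datasetId, string tableId, BigQueryClient client, BigQueryInsertRow[] rows)
    {
        // Retry transient network failures up to 3 times with exponential back-off.
        var policy = Policy
            .Handle<HttpRequestException>()
            .Or<System.Net.WebException>()
            .WaitAndRetry(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        policy.Execute(() => client.InsertRows(datasetId, tableId, rows));
    }
}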

Git In-Memory repository update target

I'm still fairly new to the whole libgit2 and libgit2sharp codebases, but I've been trying to tackle the issue of In-Memory repositories and Refdb storage of references.
I've gotten almost everything working in my new branch:
Create the in-memory repository and attach Refdb and Odb instances to it.
Create the initial commit and set the reference for refs/heads/master.
Create the second commit...
The problem I run into now is updating refs/heads/master to the second commit. The UpdateTarget call to refs/heads/master runs into an error in git_reference_set_target:
LibGit2Sharp.NameConflictException : config value 'user.name' was not found
at LibGit2Sharp.Core.Ensure.HandleError(Int32 result) in \libgit2sharp\LibGit2Sharp\Core\Ensure.cs:line 154
at LibGit2Sharp.Core.Ensure.ZeroResult(Int32 result) in \libgit2sharp\LibGit2Sharp\Core\Ensure.cs:line 172
at LibGit2Sharp.Core.Proxy.git_reference_set_target(ReferenceHandle reference, ObjectId id, String logMessage) in \libgit2sharp\LibGit2Sharp\Core\Proxy.cs:line 2042
at LibGit2Sharp.ReferenceCollection.UpdateDirectReferenceTarget(Reference directRef, ObjectId targetId, String logMessage) in \libgit2sharp\LibGit2Sharp\ReferenceCollection.cs:line 476
at LibGit2Sharp.ReferenceCollection.UpdateTarget(Reference directRef, ObjectId targetId, String logMessage) in \libgit2sharp\LibGit2Sharp\ReferenceCollection.cs:line 470
at LibGit2Sharp.ReferenceCollection.UpdateTarget(Reference directRef, String objectish, String logMessage) in \libgit2sharp\LibGit2Sharp\ReferenceCollection.cs:line 498
at LibGit2Sharp.ReferenceCollection.UpdateTarget(String name, String canonicalRefNameOrObjectish, String logMessage) in \libgit2sharp\LibGit2Sharp\ReferenceCollection.cs:line 534
at LibGit2Sharp.ReferenceCollection.UpdateTarget(String name, String canonicalRefNameOrObjectish) in \libgit2sharp\LibGit2Sharp\ReferenceCollection.cs:line 565
at LibGit2Sharp.Tests.RepositoryFixture.CanCreateInMemoryRepositoryWithBackends() in \libgit2sharp\LibGit2Sharp.Tests\RepositoryFixture.cs:line 791
I have not been able to debug down into the libgit2 level, but from what I can tell, the issue is in git_reference_create_matching's call to git_reference__log_signature.
Is there a call I am missing that can update a reference in a bare repo without requiring a signature? If any libgit2 folks know how to do this in libgit2, I can implement it on the C# side.
As a test, I created two unit tests that perform the same actions in memory and on disk; the in-memory one fails when calling UpdateTarget after creating the second commit. This follows the code on the wiki:
private Commit CreateCommit(Repository repository, string fileName, string content, string message = null)
{
    if (message == null)
    {
        message = "i'm a commit message :)";
    }
    Blob newBlob = repository.ObjectDatabase.CreateBlobFromContent(content);
    // Put the blob in a tree
    TreeDefinition td = new TreeDefinition();
    td.Add(fileName, newBlob, Mode.NonExecutableFile);
    Tree tree = repository.ObjectDatabase.CreateTree(td);
    // Committer and author
    Signature committer = new Signature("Auser", "auser@example.com", DateTime.Now);
    Signature author = committer;
    // Create the commit in the object database
    return repository.ObjectDatabase.CreateCommit(
        author,
        committer,
        message,
        tree,
        repository.Commits,
        true);
}
[Fact]
public void CanCreateRepositoryWithoutBackends()
{
    SelfCleaningDirectory scd = BuildSelfCleaningDirectory();
    Repository.Init(scd.RootedDirectoryPath, true);
    ObjectId commit1Id;
    using (var repository = new Repository(scd.RootedDirectoryPath))
    {
        Commit commit1 = CreateCommit(repository, "filePath.txt", "Hello commit 1!");
        commit1Id = commit1.Id;
        repository.Refs.Add("refs/heads/master", commit1.Id);
        Assert.Equal(1, repository.Commits.Count());
        Assert.NotNull(repository.Refs.Head);
        Assert.Equal(1, repository.Refs.Count());
    }
    using (var repository = new Repository(scd.RootedDirectoryPath))
    {
        Commit commit2 = CreateCommit(repository, "filePath.txt", "Hello commit 2!");
        Assert.Equal(commit1Id, commit2.Parents.First().Id);
        repository.Refs.UpdateTarget("refs/heads/master", commit2.Sha);
        Assert.Equal(2, repository.Commits.Count());
        Assert.Equal(1, repository.Refs.Count());
        Assert.NotNull(repository.Refs.Head);
        Assert.Equal(commit2.Sha, repository.Refs.Head.ResolveToDirectReference().TargetIdentifier);
    }
}
[Fact]
public void CanCreateInMemoryRepositoryWithBackends()
{
    OdbBackendFixture.MockOdbBackend odbBackend = new OdbBackendFixture.MockOdbBackend();
    RefdbBackendFixture.MockRefdbBackend refdbBackend = new RefdbBackendFixture.MockRefdbBackend();
    ObjectId commit1Id;
    using (var repository = new Repository())
    {
        repository.Refs.SetBackend(refdbBackend);
        repository.ObjectDatabase.AddBackend(odbBackend, 5);
        Commit commit1 = CreateCommit(repository, "filePath.txt", "Hello commit 1!");
        commit1Id = commit1.Id;
        repository.Refs.Add("refs/heads/master", commit1.Id);
        Assert.Equal(1, repository.Commits.Count());
        Assert.NotNull(repository.Refs.Head);
        Assert.Equal(commit1.Sha, repository.Refs.Head.ResolveToDirectReference().TargetIdentifier);
        // Emulating Git, the repository.Refs enumerable does not include HEAD.
        // Thus, repository.Refs.Count will be 1 and refdbBackend.References.Count will be 2.
        Assert.Equal(1, repository.Refs.Count());
        Assert.Equal(2, refdbBackend.References.Count);
    }
    using (var repository = new Repository())
    {
        repository.Refs.SetBackend(refdbBackend);
        repository.ObjectDatabase.AddBackend(odbBackend, 5);
        Commit commit2 = CreateCommit(repository, "filePath.txt", "Hello commit 2!");
        Assert.Equal(commit1Id, commit2.Parents.First().Id);
        //repository.Refs.UpdateTarget(repository.Refs["refs/heads/master"], commit2.Id);
        //var master = repository.Refs["refs/heads/master"];
        //Assert.Equal(commit1Id.Sha, master.TargetIdentifier);
        repository.Refs.UpdateTarget("refs/heads/master", commit2.Sha); // fails at LibGit2Sharp.Core.Proxy.git_reference_set_target(ReferenceHandle reference, ObjectId id, String logMessage)
        //repository.Refs.Add("refs/heads/master", commit2.Id); // fails at LibGit2Sharp.Core.Proxy.git_reference_create(RepositoryHandle repo, String name, ObjectId targetId, Boolean allowOverwrite, String logMessage)
        Assert.Equal(2, repository.Commits.Count());
        Assert.Equal(1, repository.Refs.Count());
        Assert.NotNull(repository.Refs.Head);
        Assert.Equal(commit2.Sha, repository.Refs.Head.ResolveToDirectReference().TargetIdentifier);
    }
}
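One thing worth trying (an untested sketch, not a confirmed fix): the reflog signature that git_reference__log_signature builds is derived from the user.name / user.email configuration values, so setting them on the repository before the update may sidestep the lookup failure, assuming the in-memory repository still resolves a configuration at all:

// Inside CanCreateInMemoryRepositoryWithBackends, before the failing call.
// Hypothetical workaround: give libgit2 an identity to build the reflog
// signature from (whether an in-memory repository resolves local
// configuration is exactly what is in question here).
repository.Config.Set("user.name", "Auser");
repository.Config.Set("user.email", "auser@example.com");
repository.Refs.UpdateTarget("refs/heads/master", commit2.Sha);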