BigQuery Streaming with C# Client Library - google-bigquery

I have an Oracle table that I need to migrate to BigQuery. I wrote a simple console application in C# and started streaming inserts, but sometimes the application throws the error below. My code, which is also below, is pretty simple. Does anyone have an idea what may cause this error? Thanks in advance.
Unhandled Exception: System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.WebException: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
   at System.Net.Sockets.Socket.EndReceive(IAsyncResult asyncResult)
   at System.Net.Sockets.NetworkStream.EndRead(IAsyncResult asyncResult)
   --- End of inner exception stack trace ---
   at System.Net.Security._SslStream.EndRead(IAsyncResult asyncResult)
   at System.Net.TlsStream.EndRead(IAsyncResult asyncResult)
   at System.Net.PooledStream.EndRead(IAsyncResult asyncResult)
   at System.Net.Connection.ReadCallback(IAsyncResult asyncResult)
   --- End of inner exception stack trace ---
   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
   at System.Net.Http.HttpClientHandler.GetResponseCallback(IAsyncResult ar)
   --- End of inner exception stack trace ---
   at Google.Apis.Http.ConfigurableMessageHandler.<SendAsync>d__58.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Google.Apis.Requests.ClientServiceRequest`1.<ExecuteUnparsedAsync>d__33.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Google.Apis.Requests.ClientServiceRequest`1.Execute()
   at Google.Cloud.BigQuery.V2.BigQueryClientImpl.InsertRows(TableReference tableReference, IEnumerable`1 rows, InsertOptions options)
   at Google.Cloud.BigQuery.V2.BigQueryClient.InsertRows(String datasetId, String tableId, BigQueryInsertRow[] rows)
   at BigQueryStreamer.Program.UploadJsonStreamingSync(String datasetId, String tableId, BigQueryClient client, BigQueryInsertRow[] _rows) in C:\Projects\BigQueryStreamer\BigQueryStreamer\Program.cs:line 330
   at BigQueryStreamer.Program.Main(String[] args) in C:\Projects\BigQueryStreamer\BigQueryStreamer\Program.cs:line 185
My code block is:
List<BigQueryInsertRow> _list = new List<BigQueryInsertRow>();
while (oracleReader.Read())
{
    BigQueryInsertRow bigQueryInsertRow = new BigQueryInsertRow();
    Dictionary<string, object> dictionary = new Dictionary<string, object>();
    for (int ordinal = 0; ordinal < oracleReader.FieldCount; ++ordinal)
    {
        object value = oracleReader.GetValue(ordinal);
        // Map Oracle numeric types to double and DBNull to null. Type checks
        // are safer than comparing type-name strings here: a float's runtime
        // type is System.Single, so a comparison against "System.Float" never matches.
        object obj = (value is decimal || value is double || value is float)
            ? (object)Convert.ToDouble(value)
            : (value is DBNull ? null : value);
        dictionary.Add(oracleReader.GetName(ordinal), obj);
    }
    bigQueryInsertRow.Add(dictionary);
    _list.Add(bigQueryInsertRow);
}
List<BigQueryInsertRow> _SendList = new List<BigQueryInsertRow>();
// To stream 1000 rows at a time, I set _batchSize to 1000 in the application configuration.
for (int i = 0; i < _list.Count; i++)
{
    _SendList.Add(_list[i]);
    if (_SendList.Count == _batchSize)
    {
        System.Threading.Thread.Sleep(150);
        UploadJsonStreamingSync(_datasetid, _target, _client, _SendList.ToArray());
        Console.WriteLine("Offset: " + ((ubound + 1) * _batchSize).ToString());
        ubound++;
        _SendList.Clear();
    }
}
if (_SendList.Count > 0)
{
    System.Threading.Thread.Sleep(150);
    UploadJsonStreamingSync(_datasetid, _target, _client, _SendList.ToArray());
    Console.WriteLine("Offset: " + (_SendList.Count).ToString());
    ubound++;
    _SendList.Clear();
}
_list.Clear();
_list = null;
// Streaming insert function
public static void UploadJsonStreamingSync(string datasetId, string tableId, BigQueryClient client, BigQueryInsertRow[] _rows)
{
    client.InsertRows(datasetId, tableId, _rows);
}

When facing infrequent socket exceptions or other transient network issues, as Graham pointed out, you need to write logic to handle them.
There are libraries you can use for this.
I used Polly: https://github.com/App-vNext/Polly
Polly is a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner.
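For illustration, here is a minimal sketch of wrapping the streaming insert from the question in a Polly retry policy. The exception types are taken from the stack trace above; the retry count and backoff values are illustrative, not tuned recommendations:

using System;
using System.Net;
using System.Net.Http;
using Google.Cloud.BigQuery.V2;
using Polly;

public static class ResilientStreamer
{
    // Retry transient network failures with exponential backoff
    // (2, 4, 8, 16, 32 seconds). HttpRequestException and WebException
    // are what the Google client surfaced in the stack trace above.
    private static readonly Policy RetryPolicy = Policy
        .Handle<HttpRequestException>()
        .Or<WebException>()
        .WaitAndRetry(5, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public static void UploadJsonStreamingSync(string datasetId, string tableId,
        BigQueryClient client, BigQueryInsertRow[] rows)
    {
        RetryPolicy.Execute(() => client.InsertRows(datasetId, tableId, rows));
    }
}

One caveat: retrying a streaming insert can duplicate rows if the first attempt actually reached BigQuery before the connection dropped. BigQueryInsertRow accepts a per-row insert ID, which BigQuery uses for best-effort deduplication, so it is worth setting one when adding retries.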

Related

Redis Reactive Streams Subscriber Thread Hangs

I'm trying to use Spring Boot's Redis reactive stream support to subscribe to a stream as a listener. When data is inserted into the stream, the listener passes it to the client over a gRPC stream. I keep a pointer, which is just a Redis value, to track the last record delivered to the client. The thread randomly gets blocked, and I get a timeout, when I set the pointer in Redis.
I mainly use this pointer so that if the client reconnects after some time, I can deliver everything from the last record sent up to the current data. The thread gets blocked at this call: template.opsForValue().set(pointerKey, msg.getId().toString()).block(Duration.ofSeconds(5));
Please let me know if anything is wrong in the code below. If I post 10 records to the stream, I get the error after receiving 5 records.
Code
public void subscribe(){
    String channelId = this.streamRequest.getTopic();
    String identifier = this.streamRequest.getIdentifier();
    boolean isNew = this.streamRequest.getNew();
    String pointerKey = channelId + "_" + identifier + "_pointer";
    StreamOffset<String> stringStreamOffset = StreamOffset.fromStart(channelId);
    if(isNew){
        // The client wants to read data from the start, so remove the pointer.
        template.opsForValue().delete(pointerKey).block();
    } else {
        String id = template.opsForValue().get(pointerKey).block();
        stringStreamOffset = id != null ? StreamOffset.create(channelId, ReadOffset.from(id)) : StreamOffset.fromStart(channelId);
    }
    logger.info("[SC] subscribed {}", this.streamRequest);
    Flux<ObjectRecord<String, String>> receiver = this.streamReceive.receive(stringStreamOffset);
    disposable = receiver.subscribe(msg -> {
        logger.info("Processing message {}", msg.getValue());
        String value = msg.getValue();
        StreamResponse streamResponse = StreamResponse.newBuilder().setData(value).build();
        try{
            logger.info("[SC] posting data to the grpc client topic {}", this.streamRequest);
            this.responseObserver.onNext(streamResponse);
            logger.info("[SC] Successfully posted data to the grpc client {}", this.streamRequest);
            logger.info("[SC] Updating pointer {}", pointerKey);
            // This block() runs inside the subscriber callback, i.e. on the
            // lettuce event loop thread shown in the dump below.
            template.opsForValue().set(pointerKey, msg.getId().toString())
                    .block(Duration.ofSeconds(5));
            logger.info("[SC] pointer update completed {}", pointerKey);
        }catch (Exception ex){
            logger.error("Error:{}", ex.getMessage());
            this.responseObserver.onError(ex.getCause());
            close();
        }
    });
}
Error:
Name: lettuce-nioEventLoop-4-1
State: TIMED_WAITING on java.util.concurrent.CountDownLatch$Sync@3c9ebf7a
Total blocked: 2 Total waited: 60
Stack trace:
java.base@17.0.2/jdk.internal.misc.Unsafe.park(Native Method)
java.base@17.0.2/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)
java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:717)
java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1074)
java.base@17.0.2/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:276)
app//reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:121)
app//reactor.core.publisher.Mono.block(Mono.java:1731)
app//ai.jiffy.message.publisher.ws.StreamConnection.setPointer(StreamConnection.java:68)
app//ai.jiffy.message.publisher.ws.StreamConnection.lambda$new$0(StreamConnection.java:54)
app//ai.jiffy.message.publisher.ws.StreamConnection$$Lambda$1318/0x0000000801553610.accept(Unknown Source)
app//reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160)
app//reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:793)
app//reactor.core.publisher.FluxCreate$BufferAsyncSink.next(FluxCreate.java:718)
app//reactor.core.publisher.FluxCreate$SerializedFluxSink.next(FluxCreate.java:154)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription.onStreamMessage(DefaultStreamReceiver.java:398)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription.access$300(DefaultStreamReceiver.java:210)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription$1.onNext(DefaultStreamReceiver.java:360)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription$1.onNext(DefaultStreamReceiver.java:351)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.FluxUsingWhen$UsingWhenSubscriber.onNext(FluxUsingWhen.java:345)
app//reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
app//reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
app//io.lettuce.core.RedisPublisher$ImmediateSubscriber.onNext(RedisPublisher.java:886)
app//io.lettuce.core.RedisPublisher$RedisSubscription.onNext(RedisPublisher.java:291)
app//io.lettuce.core.output.StreamingOutput$Subscriber.onNext(StreamingOutput.java:64)
app//io.lettuce.core.output.StreamReadOutput.complete(StreamReadOutput.java:110)
app//io.lettuce.core.protocol.RedisStateMachine.doDecode(RedisStateMachine.java:343)
app//io.lettuce.core.protocol.RedisStateMachine.decode(RedisStateMachine.java:295)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:841)
app//io.lettuce.core.protocol.CommandHandler.decode0(CommandHandler.java:792)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:766)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:658)
app//io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:598)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base@17.0.2/java.lang.Thread.run(Thread.java:833)
Thanks in advance.

Akka.net: Should I specify "split brain resolver" configuration for Lighthouse/Seed nodes

I have this application that uses the Akka.net cluster feature. The people who wrote the code have left the company.
I am trying to understand the code, and we are planning a deployment.
The cluster has 2 types of nodes
QueueServicer: supports sharding and only these nodes should participate in sharding.
LightHouse: They are just seed nodes, nothing else.
Lighthouse : 2 nodes
QueueServicer : 3 Nodes
I see one of the QueueServicer nodes unable to join the cluster; both Lighthouse nodes refuse the connection. It constantly tries to join and never succeeds. This has been happening for the last 5 days or so, and the node never dies either. Its CPU and memory usage are high, it has no queue processor actors running when I filter the log, and garbage collection takes hours. In the log for this node I see the following.
{"timestamp":"2021-09-08T22:26:59.025Z", "logger":"Akka.Event.DummyClassForStringSources", "message":Tried to associate with unreachable remote address [akka.tcp://myapp#lighthouse-1:7892]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://myapp#lighthouse-1:7892] Caused by: [System.AggregateException: One or more errors occurred. (Connection refused akka.tcp://myapp#lighthouse-1:7892) ---> Akka.Remote.Transport.InvalidAssociationException: Connection refused akka.tcp://myapp#lighthouse-1:7892 at Akka.Remote.Transport.DotNetty.TcpTransport.AssociateInternal(Address remoteAddress) at Akka.Remote.Transport.DotNetty.DotNettyTransport.Associate(Address remoteAddress) --- End of inner exception stack trace --- at System.Threading.Tasks.Task1.GetResultCore(Boolean waitCompletionNotification) at Akka.Remote.Transport.ProtocolStateActor.<>c.<InitializeFSM>b__12_18(Task1 result) at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
{"timestamp":"2021-09-08T22:26:59.025Z", "logger":"Akka.Event.DummyClassForStringSources", "message":Tried to associate with unreachable remote address [akka.tcp://myapp#lighthouse-0:7892]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://myapp#lighthouse-0:7892] Caused by: [System.AggregateException: One or more errors occurred. (Connection refused akka.tcp://myapp#lighthouse-0:7892) ---> Akka.Remote.Transport.InvalidAssociationException: Connection refused akka.tcp://myapp#lighthouse-0:7892 at Akka.Remote.Transport.DotNetty.TcpTransport.AssociateInternal(Address remoteAddress) at Akka.Remote.Transport.DotNetty.DotNettyTransport.Associate(Address remoteAddress) --- End of inner exception stack trace --- at System.Threading.Tasks.Task1.GetResultCore(Boolean waitCompletionNotification) at Akka.Remote.Transport.ProtocolStateActor.<>c.<InitializeFSM>b__12_18(Task1 result) at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke() at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
There are other "Now supervising", "Stopping", and "Started" logs, which I am omitting here.
Can you please verify whether the HOCON config is correct for the split brain resolver and sharding?
I think the Lighthouse/seed nodes should not have the sharding configuration specified; I think it is a mistake.
I also think the split brain resolver configuration might be wrong in the Lighthouse/seed nodes and should not be specified for seed nodes.
I appreciate your help.
Here is the HOCON for QueueServicer (trimmed):
akka {
    loggers = ["Akka.Logger.log4net.Log4NetLogger, Akka.Logger.log4net"]
    log-config-on-start = on
    loglevel = "DEBUG"
    actor {
        provider = cluster
        serializers {
            hyperion = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
        }
        serialization-bindings {
            "System.Object" = hyperion
        }
    }
    remote {
        dot-netty.tcp {
            ….
        }
    }
    cluster {
        seed-nodes = ["akka.tcp://myapp@lighthouse-0:7892", "akka.tcp://myapp@lighthouse-1:7892"]
        roles = ["QueueProcessor"]
        sharding {
            role = "QueueProcessor"
            state-store-mode = ddata
            remember-entities = true
            passivate-idle-entity-after = off
        }
        downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
        split-brain-resolver {
            active-strategy = keep-majority
            stable-after = 20s
            keep-majority {
                role = "QueueProcessor"
            }
        }
        down-removal-margin = 20s
    }
    extensions = ["Akka.Cluster.Tools.PublishSubscribe.DistributedPubSubExtensionProvider,Akka.Cluster.Tools"]
}
Here is the HOCON for Lighthouse:
akka {
    loggers = ["Akka.Logger.log4net.Log4NetLogger, Akka.Logger.log4net"]
    log-config-on-start = on
    loglevel = "DEBUG"
    actor {
        provider = cluster
        serializers {
            hyperion = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
        }
        serialization-bindings {
            "System.Object" = hyperion
        }
    }
    remote {
        dot-netty.tcp {
            …
        }
    }
    cluster {
        seed-nodes = ["akka.tcp://myapp@lighthouse-0:7892", "akka.tcp://myapp@lighthouse-1:7892"]
        roles = ["lighthouse"]
        sharding {
            role = "lighthouse"
            state-store-mode = ddata
            remember-entities = true
            passivate-idle-entity-after = off
        }
        downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
        split-brain-resolver {
            active-strategy = keep-oldest
            stable-after = 30s
            keep-oldest {
                down-if-alone = on
                role = "lighthouse"
            }
        }
    }
}
I meant to reply to this sooner.
Here is your problem: you're using two different split brain resolver configurations - one for the QueueServicer and one for Lighthouse. Therefore, how your cluster resolves itself is going to be quite different depending upon who is the leader of each half of the cluster.
I would stick with a simple keep-majority strategy and use it uniformly on all nodes throughout the cluster - we're very likely going to enable this by default in Akka.NET v1.5.
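As a minimal sketch, a uniform configuration on every node (Lighthouse included) could look like the following; the stable-after value is carried over from the QueueServicer config above for illustration, not a tuned recommendation:

cluster {
    downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
    split-brain-resolver {
        active-strategy = keep-majority
        stable-after = 20s
        # no role restriction, so every node counts toward the majority
    }
}

With identical settings everywhere, both halves of a partitioned cluster apply the same rule, so they cannot reach contradictory survival decisions.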
If you have any questions, please feel free to reach out to us: https://petabridge.com/

The data reader is incompatible with the specified 'IAPD_DBModel.Table_Price'

I'm developing a Web API using ASP.NET Web API 2 and Entity Framework to access the database. I call my SQL Server stored procedure, which is very simple and should return one column, as follows:
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[getnationalities]
AS
BEGIN
    SET NOCOUNT ON;

    SELECT DISTINCT
        Table_Price.Price_Nationality AS 'name'
    FROM
        Table_Price;
END
and here is my VB code:
Namespace Controllers
    Public Class NationalityController
        Inherits ApiController

        Public Function getcountries() As IHttpActionResult
            Using entities As IAPD_DBEntities = New IAPD_DBEntities()
                Return Ok(entities.getnationalities.ToList)
            End Using
        End Function
    End Class
End Namespace
and here is the error I'm getting from Postman:
{
"message": "An error has occurred.",
"exceptionMessage": "The data reader is incompatible with the specified 'IAPD_DBModel.Table_Price'. A member of the type, 'Price_PK_ID', does not have a corresponding column in the data reader with the same name.",
"exceptionType": "System.Data.Entity.Core.EntityCommandExecutionException",
"stackTrace": " at System.Data.Entity.Core.Query.InternalTrees.ColumnMapFactory.GetMemberOrdinalFromReader(DbDataReader storeDataReader, EdmMember member, EdmType currentType, Dictionary`2 renameList)\r\n at System.Data.Entity.Core.Query.InternalTrees.ColumnMapFactory.GetColumnMapsForType(DbDataReader storeDataReader, EdmType edmType, Dictionary`2 renameList)\r\n at System.Data.Entity.Core.Query.InternalTrees.ColumnMapFactory.CreateColumnMapFromReaderAndType(DbDataReader storeDataReader, EdmType edmType, EntitySet entitySet, Dictionary`2 renameList)\r\n at System.Data.Entity.Core.Query.InternalTrees.ColumnMapFactory.CreateFunctionImportStructuralTypeColumnMap(DbDataReader storeDataReader, FunctionImportMappingNonComposable mapping, Int32 resultSetIndex, EntitySet entitySet, StructuralType baseStructuralType)\r\n at System.Data.Entity.Core.EntityClient.Internal.EntityCommandDefinition.FunctionColumnMapGenerator.System.Data.Entity.Core.EntityClient.Internal.EntityCommandDefinition.IColumnMapGenerator.CreateColumnMap(DbDataReader reader)\r\n at System.Data.Entity.Core.Objects.ObjectContext.MaterializedDataRecord[TElement](EntityCommand entityCommand, DbDataReader storeReader, Int32 resultSetIndex, ReadOnlyCollection`1 entitySets, EdmType[] edmTypes, ShaperFactory`1 shaperFactory, MergeOption mergeOption, Boolean streaming)\r\n at System.Data.Entity.Core.Objects.ObjectContext.CreateFunctionObjectResult[TElement](EntityCommand entityCommand, ReadOnlyCollection`1 entitySets, EdmType[] edmTypes, ExecutionOptions executionOptions)\r\n at System.Data.Entity.Core.Objects.ObjectContext.<>c__DisplayClass47`1.<ExecuteFunction>b__46()\r\n at System.Data.Entity.Core.Objects.ObjectContext.ExecuteInTransaction[T](Func`1 func, IDbExecutionStrategy executionStrategy, Boolean startLocalTransaction, Boolean releaseConnectionOnSuccess)\r\n at System.Data.Entity.Core.Objects.ObjectContext.<>c__DisplayClass47`1.<ExecuteFunction>b__45()\r\n at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.Execute[TResult](Func`1 operation)\r\n at System.Data.Entity.Core.Objects.ObjectContext.ExecuteFunction[TElement](String functionName, ExecutionOptions executionOptions, ObjectParameter[] parameters)\r\n at ProjectDataAccess.IAPD_DBEntities.getnationalities() in C:\\Users\\Junaida\\documents\\visual studio 2015\\Projects\\WebApplication7\\ProjectDataAccess\\DataAccessModel.Context.vb:line 5552\r\n at WebApplication7.Controllers.NationalityController.getcountries() in C:\\Users\\Junaida\\documents\\visual studio 2015\\Projects\\WebApplication7\\WebApplication7\\Controllers\\NationalityController.vb:line 13\r\n at lambda_method(Closure , Object , Object[] )\r\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ActionExecutor.<>c__DisplayClass10.<GetExecutor>b__9(Object instance, Object[] methodParameters)\r\n at System.Web.Http.Controllers.ReflectedHttpActionDescriptor.ExecuteAsync(HttpControllerContext controllerContext, IDictionary`2 arguments, CancellationToken cancellationToken)\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__0.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at 
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__2.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Controllers.AuthenticationFilterResult.<ExecuteAsync>d__0.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n at System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__1.MoveNext()"
}
What I'm really after is a dataset or array of nationalities.
Sorry, I don't know Entity Framework, but I can get a list of data from a stored procedure:
Dim lstNationalities As New List(Of String)

Using cn As New SqlClient.SqlConnection("Your connection string")
    Using cmd As New SqlClient.SqlCommand With {
            .Connection = cn,
            .CommandType = CommandType.StoredProcedure,
            .CommandText = "getnationalities"}
        cn.Open()
        Using dr As SqlClient.SqlDataReader = cmd.ExecuteReader
            Do While dr.Read
                lstNationalities.Add(dr.GetString(0))
            Loop
        End Using
    End Using
End Using

A resource manager with the same identifier is already registered with the specified transaction coordinator

I am consistently getting a ResourceManager error. I use the code below:
public static IDocumentStore Initialize()
{
    const string id = "2a5434cf-56f6-4b33-9661-5b6cc53bd9a5";
    _instance = new DocumentStore
    {
        Url = "http://localhost:8085",
        DefaultDatabase = "Testing",
        ResourceManagerId = new Guid(id)
    };
    _instance.Initialize();
    return _instance;
}
Here's the call stack:
2015-06-04 15:39:08.366 INFO NServiceBus.Unicast.Transport.TransportReceiver Failed to process message
System.Runtime.InteropServices.COMException (0x8004D102): A resource manager with the same identifier is already registered with the specified transaction coordinator. (Exception from HRESULT: 0x8004D102)
at System.Transactions.Oletx.IDtcProxyShimFactory.CreateResourceManager(Guid resourceManagerIdentifier, IntPtr managedIdentifier, IResourceManagerShim& resourceManagerShim)
at System.Transactions.Oletx.OletxResourceManager.get_ResourceManagerShim()
at System.Transactions.Oletx.OletxResourceManager.EnlistDurable(OletxTransaction oletxTransaction, Boolean canDoSinglePhase, IEnlistmentNotificationInternal enlistmentNotification, EnlistmentOptions enlistmentOptions)
at System.Transactions.Oletx.OletxTransaction.EnlistDurable(Guid resourceManagerIdentifier, ISinglePhaseNotificationInternal singlePhaseNotification, Boolean canDoSinglePhase, EnlistmentOptions enlistmentOptions)
at System.Transactions.TransactionStatePromotedBase.EnlistDurable(InternalTransaction tx, Guid resourceManagerIdentifier, IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions, Transaction atomicTransaction)
at System.Transactions.Transaction.EnlistDurable(Guid resourceManagerIdentifier, IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions)
at Raven.Client.Document.InMemoryDocumentSessionOperations.TryEnlistInAmbientTransaction() in c:\Builds\RavenDB-Stable-3.0\Raven.Client.Lightweight\Document\InMemoryDocumentSessionOperations.cs:line 1082
at Raven.Client.Document.InMemoryDocumentSessionOperations.PrepareForSaveChanges() in c:\Builds\RavenDB-Stable-3.0\Raven.Client.Lightweight\Document\InMemoryDocumentSessionOperations.cs:line 931
at Raven.Client.Document.DocumentSession.SaveChanges() in c:\Builds\RavenDB-Stable-3.0\Raven.Client.Lightweight\Document\DocumentSession.cs:line 707
at NServiceBus.RavenDB.SessionManagement.OpenSessionBehavior.Invoke(IncomingContext context, Action next) in c:\BuildAgent\work\c4d62ce02b983878\src\NServiceBus.RavenDB\SessionManagement\OpenSessionBehavior.cs:line 22
[…]
Has anyone else come across this issue?
Here is the solution I found:
I had created 6 endpoints, and for each endpoint I was using the same ResourceManagerId. Once I gave each endpoint a different ResourceManagerId (a constant GUID per endpoint), the issue was resolved.
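As a minimal sketch of that fix (the factory name and GUID below are made up for illustration), each endpoint passes its own constant identifier instead of sharing one:

using System;
using Raven.Client;
using Raven.Client.Document;

public static class DocumentStoreFactory
{
    // Each endpoint supplies its own constant GUID, so MSDTC sees a
    // distinct resource manager per endpoint instead of several
    // registrations of the same identifier.
    public static IDocumentStore Initialize(string url, string database, Guid resourceManagerId)
    {
        var store = new DocumentStore
        {
            Url = url,
            DefaultDatabase = database,
            ResourceManagerId = resourceManagerId
        };
        store.Initialize();
        return store;
    }
}

// e.g. in one endpoint (GUID is illustrative):
// var store = DocumentStoreFactory.Initialize(
//     "http://localhost:8085", "Testing",
//     new Guid("6f4dd3dc-2f0f-4a29-8cf8-3b1a51c3d90b"));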

Exception when connecting to pop.gmail.com using EasyMail .Net 3.0 from QuikSoft

I am using this piece of code (the port used is 995):
Private Shared Function ConnectToPop3Server(ByVal pop3Obj As POP3, ByVal sslObj As SSL) As Boolean
    Utils.LogInformation("Starting.")
    Try
        If My.Settings.UseSSL Then
            ' Use SSL only if the configuration says so
            sslObj.CredentialVerification = CredentialVerification.None
            pop3Obj.Connect(My.Settings.EmailServer, My.Settings.IncomingPortSsl, sslObj.GetInterface())
        Else
            pop3Obj.Connect(My.Settings.EmailServer, My.Settings.IncomingPort)
        End If
    ' Catch EasyMail POP3 exception errors
    Catch POP3LicenseExcep As Quiksoft.EasyMail.POP3.LicenseException
        Utils.LogError("POP3 License Exception: " + POP3LicenseExcep.ToString()) : Return False
    Catch AuthExcep As Quiksoft.EasyMail.POP3.POP3AuthenticationException
        Utils.LogError("Authentication Exception: " + AuthExcep.ToString()) : Return False
    Catch ConnectExcep As Quiksoft.EasyMail.POP3.POP3ConnectionException
        Utils.LogError("Connection Exception: " + ConnectExcep.ToString()) : Return False
    Catch ProtocolExcep As Quiksoft.EasyMail.POP3.POP3ProtocolException
        Utils.LogError("Protocol Exception: " + ProtocolExcep.ToString()) : Return False
    ' Catch parse exception errors
    Catch ParseLicenseExcep As Quiksoft.EasyMail.Parse.LicenseException
        Utils.LogError("Parse License Exception: " + ParseLicenseExcep.ToString()) : Return False
    Catch InputStreamExcep As Quiksoft.EasyMail.Parse.InputStreamException
        Utils.LogError("Input Stream Exception: " + InputStreamExcep.ToString()) : Return False
    Catch OutputStreamExcep As Quiksoft.EasyMail.Parse.OutputStreamException
        Utils.LogError("Output Stream Exception: " + OutputStreamExcep.ToString()) : Return False
    Catch ParseExcep As Quiksoft.EasyMail.Parse.ParsingException
        Utils.LogError("Parsing Exception: " + ParseExcep.ToString()) : Return False
    ' Catch any other exceptions
    Catch ex As Exception
        Utils.LogError("Exception: " + ex.Message + " - " + ex.StackTrace.ToString()) : Return False
    End Try
    Utils.LogInformation("Finished.")
    Return True
End Function
This basically follows the samples provided with the library.
I am getting this when pop3Obj.Connect is called:
Quiksoft.EasyMail.SSL.SSLConnectionException: Error receiving data from socket. ---> Quiksoft.EasyMail.POP3.POP3ConnectionException: Error reading from stream. ---> Quiksoft.EasyMail.SSL.ᜀ: Unable to decrypt message.-2146893055
at Quiksoft.EasyMail.SSL.Internal.ᜒ.᜕(Byte[] A_0, Int32 A_1, Int32 A_2, SocketFlags A_3)
at Quiksoft.EasyMail.SSL.Internal.ᜒ.ᜣ(Byte[] A_0, Int32 A_1, SocketFlags A_2)
at Quiksoft.EasyMail.Internal.᝭.ᜂ()
--- End of inner exception stack trace ---
at Quiksoft.EasyMail.Internal.᝭.ᜂ()
at Quiksoft.EasyMail.Internal.᝭.ᜄ()
at Quiksoft.EasyMail.SSL.Internal.ᜒ.ᜅ(String& A_0, Int32 A_1)
--- End of inner exception stack trace ---
at Quiksoft.EasyMail.SSL.Internal.ᜒ.ᜅ(String& A_0, Int32 A_1)
at Quiksoft.EasyMail.POP3.POP3.Connect(String POP3Server, Int32 Port)
at Quiksoft.EasyMail.POP3.POP3.Connect(String POP3Server, Int32 Port, Object SSLInterface)
at Calico.InboxMonitorService.Service.ConnectToPop3Server(POP3 pop3Obj, SSL sslObj) in C:\DefaultCollection\AlertCustomizations\Calico\WorkOrdersMonitor\Service.vb:line 205
Does anybody have experience with this package who can pitch in with some advice?
After finally getting in contact with Quiksoft, it turned out that I had to update to the version 3.0.1.23 assemblies. All the SSL connection errors disappeared.