Disassociated exception in Akka.Remoting - akka.net

Using Akka.NET I am trying to implement a simple scenario.
I have created two servers and one client, where the servers receive the messages sent by the client and process them.
The setup works fine sometimes, but sometimes it fails with the following error, and I am not able to figure out the cause:
No response from remote. Handshake timed out or transport failure detector triggered.
Cause: Unknown
Association with remote system akka.tcp://RemoteFSharp@172.27.**.94:8777 has failed; address is now gated for 5000 ms. Reason is: [Akka.Remote.EndpointDisassociatedException: Disassociated
   at Akka.Remote.EndpointWriter.PublishAndThrow(Exception reason, LogLevel level)
   at Akka.Remote.EndpointWriter.Unhandled(Object message)
   at Akka.Actor.ActorCell.<>c__DisplayClass109_0.<Akka.Actor.IUntypedActorContext.Become>b__0(Object m)
   at Akka.Actor.ActorBase.AroundReceive(Receive receive, Object message)
   at Akka.Actor.ActorCell.ReceiveMessage(Object message)
   at Akka.Actor.ActorCell.AutoReceiveMessage(Envelope envelope)
   at Akka.Actor.ActorCell.Invoke(Envelope envelope)
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Akka.Actor.ActorCell.HandleFailed(Failed f)
   at Akka.Actor.ActorCell.SystemInvoke(Envelope envelope)]
Client Config:
akka {
  log-dead-letters-during-shutdown = off
  actor {
    handshake-timeout = 600 s
    serializers {
      wire = "Akka.Serialization.WireSerializer, Akka.Serialization.Wire"
    }
    serialization-bindings {
      "System.Object" = wire
    }
    provider = "Akka.Remote.RemoteActorRefProvider, Akka.Remote"
  }
  remote {
    helios.tcp {
      maximum-frame-size = 20000000b
      tcp-keepalive = on
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      transport-protocol = tcp
      port = 8760
      hostname = 172.27.**.94
    }
  }
  log-remote-lifecycle-events = INFO
}
Server Config:
akka {
  log-dead-letters-during-shutdown = off
  actor {
    handshake-timeout = 600 s
    serializers {
      wire = "Akka.Serialization.WireSerializer, Akka.Serialization.Wire"
    }
    serialization-bindings {
      "System.Object" = wire
    }
    provider = "Akka.Remote.RemoteActorRefProvider, Akka.Remote"
  }
  remote {
    helios.tcp {
      maximum-frame-size = 20000000b
      tcp-keepalive = on
      transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
      transport-protocol = tcp
      port = 8777
      hostname = 172.27.**.94
    }
  }
  log-remote-lifecycle-events = INFO
}
Also I am using Newtonsoft.Json for serialization as follows:
let CreateEmployeeActor (system: ActorSystem) actorName =
    (spawn system actorName
        (fun mailbox ->
            let rec loop (count: int) =
                actor {
                    let! message = mailbox.Receive()
                    let sender = mailbox.Sender()
                    let deserializedEmailData = JsonConvert.DeserializeObject<EmployeeActorMsgs>(message)
                    match deserializedEmailData with
                    | InItEmployee ->
                        // Some logic
                        ()
                    return! loop count   // recurse so the actor keeps processing messages
                }
            loop 0))
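For reference, the two causes named in the error ("handshake timed out" and "transport failure detector triggered") are governed by settings under akka.remote. The sketch below is not from the original post; the keys are the ones I believe exist in Akka.Remote's reference configuration, and the values are purely illustrative, so verify them against your Akka.NET version before relying on them:

using Akka.Actor;
using Akka.Configuration;

class FailureDetectorTuningSketch
{
    static void Main()
    {
        // Illustrative values only: relax the transport failure detector and the
        // gate interval so a slow handshake or a missed heartbeat is tolerated
        // instead of immediately disassociating the endpoint.
        var tuning = ConfigurationFactory.ParseString(@"
            akka.remote {
                transport-failure-detector {
                    heartbeat-interval = 4 s
                    acceptable-heartbeat-pause = 30 s
                }
                retry-gate-closed-for = 5 s
            }");

        // WithFallback keeps the client/server HOCON shown above (loaded from the
        // app config here) and only overrides the keys present in 'tuning'.
        var config = tuning.WithFallback(ConfigurationFactory.Load());
        var system = ActorSystem.Create("RemoteFSharp", config);
        System.Console.ReadLine(); // keep the process alive while remoting runs
    }
}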

Related

akka.net cluster : how to send message from seed node to non-seed node

I am new to Akka.NET. I have two .NET Core 2 console apps in a cluster and am trying to send a message from an actor in one console app (the seed node) to a remote actor in the other console app (a non-seed node).
After starting both console apps, the cluster is established, both nodes are Up, and the seed node knows about the non-seed node and vice versa, but no message is received by the remote actor on the non-seed node. I am creating a round-robin router on the seed node; what am I missing?
Please guide.
Below is the sample code of both apps, i.e. the seed node and the non-seed node.
// .NET Core 2 console app with the seed node
class Program
{
    public static ActorSystem ClusterSystem;
    private static IActorRef StartActor;

    private static void Main(string[] args)
    {
        var config = ConfigurationFactory.ParseString(@"
            akka
            {
                actor {
                    provider = cluster
                    deployment {
                        /tasker {
                            router = round-robin-pool # routing strategy
                            nr-of-instances = 5 # max number of total routees
                            cluster {
                                enabled = on
                                allow-local-routees = off
                                use-role = tasker
                                max-nr-of-instances-per-node = 1
                            }
                        }
                    }
                }
                remote
                {
                    dot-netty.tcp {
                        port = 8081
                        hostname = ""localhost""
                    }
                }
                cluster {
                    seed-nodes = [""akka.tcp://ClusterSystem@localhost:8081""]
                    roles = [""main""]
                }
            }");

        ClusterSystem = ActorSystem.Create("ClusterSystem", config);
        var taskActor = ClusterSystem.ActorOf(Props.Empty.WithRouter(FromConfig.Instance), "tasker");
        StartActor = ClusterSystem.ActorOf(Props.Create(() => new StartActor(taskActor)), "startactor");
        StartActor.Tell(new Initiate()); // call local actor
        Console.Read();                  // keep the process alive
    }
}

// Actor on the seed node (local actor)
class StartActor : ReceiveActor, ILogReceive
{
    private IActorRef taskActor;

    public StartActor(IActorRef router)
    {
        this.taskActor = router;
        Receive<Initiate>(i => Start(i));
    }

    private void Start(Initiate initiate)
    {
        taskActor.Tell(new Initiate()); // calling the remote actor
    }
}
// .NET Core 2 console app with the non-seed node
class Program
{
    public static ActorSystem ClusterSystem;
    public static IActorRef TaskActor;

    private static void Main(string[] args)
    {
        Console.Title = "BackEnd";
        var config = ConfigurationFactory.ParseString(@"
            akka
            {
                actor {
                    provider = cluster
                }
                remote
                {
                    dot-netty.tcp {
                        port = 0
                        hostname = ""localhost""
                    }
                }
                cluster {
                    seed-nodes = [""akka.tcp://ClusterSystem@localhost:8081""]
                    roles = [""tasker""]
                }
            }
        ");
        ClusterSystem = ActorSystem.Create("ClusterSystem", config);
        TaskActor = ClusterSystem.ActorOf(Props.Create<TaskActor>(), "tasker");
        Console.Read();
    }
}

// Actor on the non-seed node (remote actor)
class TaskActor : ReceiveActor, ILogReceive
{
    private readonly IActorRef manager;

    public TaskActor()
    {
        this.Receive<Initiate>(i => this.Init(i));
    }

    private void Init(Initiate initiate)
    {
        Console.WriteLine("Message Received");
    }
}
I am answering my own question. The first thing is that, since the remote actor is created by another console application, the deployment configuration needs to be changed to use the "round-robin-group" routing strategy:
/tasker {
    router = round-robin-group # routing strategy
    routees.paths = [""/user/starter""]
    nr-of-instances = 5 # max number of total routees
    cluster {
        enabled = on
        allow-local-routees = off
        use-role = tasker
    }
}
And the StartActor on the seed node needs to be changed as below:
class StartActor : ReceiveActor, ILogReceive
{
    private IActorRef router, self;

    public StartActor(IActorRef router)
    {
        self = Self;
        this.router = router;
        Receive<Initiate>(i =>
        {
            var routee = router.Ask<Routees>(new GetRoutees()).ContinueWith(tr =>
            {
                if (tr.Result.Members.Count() > 0)
                {
                    Start(i);
                }
                else
                {
                    self.Tell(i);
                }
            });
        });
    }

    private void Start(Initiate initiate)
    {
        router.Tell(initiate);
    }
}
The code in StartActor above asks the router for its routees and only sends the message once routees are actually available; otherwise it re-sends the message to itself and tries again. Without that check, the message is fired off before the remote routee has registered and is never received by the remote actor.
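As a variation on the same idea (a sketch, not part of the original answer; the 500 ms delay and the class name are assumptions): instead of immediately re-sending the message to Self, which produces a tight retry loop, the retry can be scheduled with a small delay so the cluster has time to register the routees.

using System;
using System.Linq;
using System.Threading.Tasks;
using Akka.Actor;
using Akka.Routing;

class StartActorWithDelayedRetry : ReceiveActor, ILogReceive
{
    private readonly IActorRef router;

    public StartActorWithDelayedRetry(IActorRef router)
    {
        this.router = router;

        Receive<Initiate>(i =>
        {
            // Capture context-bound references before the continuation runs,
            // since ContinueWith may execute outside the actor's context.
            var self = Self;
            var scheduler = Context.System.Scheduler;

            router.Ask<Routees>(new GetRoutees()).ContinueWith(tr =>
            {
                if (tr.Result.Members.Any())
                    router.Tell(i, self);                 // routees exist: deliver now
                else
                    scheduler.ScheduleTellOnce(           // otherwise retry after a short delay
                        TimeSpan.FromMilliseconds(500), self, i, self);
            }, TaskContinuationOptions.ExecuteSynchronously);
        });
    }
}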

RabbitMQ .NET Client and connection timeouts

I'm trying to test the AutomaticRecoveryEnabled property of the RabbitMQ ConnectionFactory. I'm connecting to a RabbitMQ instance on a local VM, and on the client I'm publishing messages in a loop. The problem is that if I intentionally break the connection, the client just waits forever and doesn't time out. How do I set the timeout value? RequestedConnectionTimeout doesn't appear to have any effect.
I'm using RabbitMQ client 3.5.4.
Rudimentary publish loop:
// Client is a wrapper around the RabbitMQ client
for (var i = 0; i < 1000; ++i)
{
    // Publish sequentially numbered messages
    client.Publish("routingkey", GetContent(i));
    Thread.Sleep(100);
}
The Publish method inside the wrapper:
public bool Publish(string routingKey, byte[] body)
{
    try
    {
        using (var channel = _connection.CreateModel())
        {
            var basicProps = new BasicProperties
            {
                Persistent = true,
            };
            channel.ExchangeDeclare(_exchange, _exchangeType);
            channel.BasicPublish(_exchange, routingKey, basicProps, body);
            return true;
        }
    }
    catch (Exception e)
    {
        _logger.Log(e);
    }
    return false;
}
The connection and connection factory:
_connectionFactory = new ConnectionFactory
{
    UserName = _userName,
    Password = _password,
    HostName = _hostName,
    Port = _port,
    Protocol = Protocols.DefaultProtocol,
    VirtualHost = _virtualHost,
    // Doesn't seem to have any effect on broken connections
    RequestedConnectionTimeout = 2000,
    // The behaviour appears to be the same with or without these included
    // AutomaticRecoveryEnabled = true,
    // NetworkRecoveryInterval = TimeSpan.FromSeconds(10),
};
_connection = _connectionFactory.CreateConnection();
It appears this is a bug in version 3.5.4. Version 3.6.3 does not wait indefinitely.
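If upgrading is not immediately possible, a possible workaround (a sketch based on the wrapper shown in the question, not part of the original answer; PublishTimeout is an assumed value) is to bound the publish call with an explicit timeout so a hung connection cannot block the publishing loop. Note that the underlying task still occupies a thread-pool thread until the broker call returns; this only protects the caller:

using System;
using System.Threading.Tasks;

// Added to the same wrapper class that contains Publish().
private static readonly TimeSpan PublishTimeout = TimeSpan.FromSeconds(5);

public bool PublishWithTimeout(string routingKey, byte[] body)
{
    var task = Task.Run(() => Publish(routingKey, body));
    if (task.Wait(PublishTimeout))
        return task.Result;   // Publish completed: true on success, false on a caught error

    _logger.Log(new TimeoutException(
        $"Publish to '{routingKey}' did not finish within {PublishTimeout}."));
    return false;             // timed out: treat the connection as broken
}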

Replying an Ask in a clustered routee

I am creating a project that, at this moment, has an actor (User) that calls another actor (Concert) through a consistent-hashing-group router. Everything works fine, but my problem is that from the Concert actor I cannot answer the Ask message. Somehow the response is lost and nothing happens in the client. I have tried everything with no luck:
Sender.Tell <-- creates a temporary? sender
Passing the User IActorRef in the message and using it.
Here is the full code: https://github.com/pablocastilla/AkkaConcert
The main details are the following:
User actor:
protected IActorRef concertRouter;

public User(IActorRef concertRouter, int eventId)
{
    this.concertRouter = concertRouter;
    this.eventId = eventId;

    JobStarter = Context.System.Scheduler.ScheduleTellRepeatedlyCancelable(TimeSpan.FromMilliseconds(20),
        TimeSpan.FromMilliseconds(1000), Self, new AttemptToStartJob(), Self);

    Receive<AttemptToStartJob>(start =>
    {
        var self = Self;
        concertRouter.Ask<Routees>(new GetRoutees()).ContinueWith(tr =>
        {
            if (tr.Result.Members.Count() > 0)
            {
                var m = new GetAvailableSeats() { User = self, ConcertId = eventId };
                self.Tell(m);
                // JobStarter.Cancel();
            }
        }, TaskContinuationOptions.ExecuteSynchronously);
    });

    Receive<GetAvailableSeats>(rs =>
    {
        rs.User = Self;
        // get free seats
        concertRouter.Ask(rs).ContinueWith(t =>
        {
            Console.WriteLine("response received!!");
        });
    });
}
Client HOCON configuration:
<akka>
  <hocon>
    <![CDATA[
      akka {
        actor {
          provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
          deployment {
            /eventpool {
              router = consistent-hashing-group
              routees.paths = ["/user/HugeEvent"]
              virtual-nodes-factor = 8
              cluster {
                enabled = on
                max-nr-of-instances-per-node = 2
                allow-local-routees = off
                use-role = cluster
              }
            }
          }
        }
        remote {
          log-remote-lifecycle-events = DEBUG
          helios.tcp {
            transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
            applied-adapters = []
            transport-protocol = tcp
            #will be populated with a dynamic host-name at runtime if left uncommented
            #public-hostname = "POPULATE STATIC IP HERE"
            hostname = "127.0.0.1"
            port = 0
          }
        }
        cluster {
          #will inject this node as a self-seed node at run-time
          seed-nodes = ["akka.tcp://akkaconcert@127.0.0.1:8080"] #manually populate other seed nodes here, i.e. "akka.tcp://lighthouse@127.0.0.1:4053", "akka.tcp://lighthouse@127.0.0.1:4044"
          roles = [client]
          auto-down-unreachable-after = 60s
        }
      }
    ]]>
  </hocon>
</akka>
On the backend side, the actor is created:
private ActorSystem actorSystem;
private IActorRef event1;
public bool Start(HostControl hostControl)
{
actorSystem = ActorSystem.Create("akkaconcert");
SqlServerPersistence.Init(actorSystem);
event1 = actorSystem.ActorOf(
Props.Create(() => new Concert(1,100000)), "HugeEvent");
return true;
}
Concert actor message processing
private void ReadyCommands()
{
    Command<GetAvailableSeats>(message => GetFreeSeatsHandler(message));
    Command<ReserveSeats>(message => ReserveSeatsHandler(message));
    Command<BuySeats>(message => Persist(message, BuySeatsHandler));
}

private bool GetFreeSeatsHandler(GetAvailableSeats message)
{
    var freeSeats = seats.Where(s => s.Value.State == Actors.Seat.SeatState.Free).Select(s2 => s2.Value).ToList();
    // 1. Trying passing the user actor
    //message.User.Tell(new GetFreeSeatsResponse() { FreeSeats = freeSeats }, Context.Self);
    // 2. Trying with the sender
    Context.Sender.Tell(new GetAvailableSeatsResponse() { FreeSeats = freeSeats }, Context.Self);
    printMessagesPerSecond(messagesReceived++);
    printfreeSeats(freeSeats);
    return true;
}
HOCON config at backend side:
<akka>
  <hocon>
    <![CDATA[
      akka {
        actor {
          provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
        }
        remote {
          log-remote-lifecycle-events = DEBUG
          helios.tcp {
            transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
            applied-adapters = []
            transport-protocol = tcp
            #will be populated with a dynamic host-name at runtime if left uncommented
            #public-hostname = "POPULATE STATIC IP HERE"
            hostname = "127.0.0.1"
            port = 8080
          }
        }
        cluster {
          #will inject this node as a self-seed node at run-time
          seed-nodes = ["akka.tcp://akkaconcert@127.0.0.1:8080"] #manually populate other seed nodes here, i.e. "akka.tcp://lighthouse@127.0.0.1:4053", "akka.tcp://lighthouse@127.0.0.1:4044"
          roles = [cluster]
          auto-down-unreachable-after = 10s
        }
      }
    ]]>
  </hocon>
</akka>
Thanks!
The problem was the message size: the message was too big and was dropped.
Configuration for receiving bigger messages:
akka {
  remote {
    helios.tcp {
      # Maximum frame size: 4 MB
      maximum-frame-size = 4000000b
    }
  }
}
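Note that the frame-size limit is enforced by each node's transport, so in practice the same setting needs to be applied on both the client and the backend. A minimal C# sketch of layering it over an existing configuration (assuming the rest of the HOCON is loaded from App.config; the 4 MB value is the one from the answer above):

using Akka.Actor;
using Akka.Configuration;

// Apply the larger frame size on top of whatever HOCON the node already uses.
var frameConfig = ConfigurationFactory.ParseString(@"
    akka.remote.helios.tcp.maximum-frame-size = 4000000b");

// WithFallback keeps the existing settings and only overrides maximum-frame-size.
var config = frameConfig.WithFallback(ConfigurationFactory.Load());
var system = ActorSystem.Create("akkaconcert", config);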

Netty (UDP) SSLException: Received close_notify during handshake, close channel

I want to write a UDP Netty server/client with SSL. I did it in a way similar to my TCP Netty server/client, which worked well, but I get the following exception:
javax.net.ssl.SSLException: Received close_notify during handshake, close channel...
I am on Netty 3.x, and the SSL configuration itself works. Here is some code of my UDP server and client. Server:
serverBootstrap = new ConnectionlessBootstrap(
new NioDatagramChannelFactory(Executors.newCachedThreadPool(),maxThreads));
serverBootstrap.setOption("receiveBufferSizePredictorFactory",
new FixedReceiveBufferSizePredictorFactory(8192));
ChannelPipelineFactory fac = null;
try {
ServiceDecoder serviceProcessor = (ServiceDecoder)Class.forName(serviceDecoderName).newInstance();
Class<? extends ChannelPipelineFactory> clazz = (Class<? extends ChannelPipelineFactory>) Class
.forName(msgFactoryName);
Constructor ctor = clazz.getConstructor(ChannelProcessor.class,
ChannelGroup.class, CounterGroup.class, CounterGroupExt.class, String.class,ServiceDecoder.class,
String.class, Integer.class, String.class, String.class, Boolean.class,Integer.class,Boolean.class,Boolean.class,Context.class);
logger.info("Using channel processor:{}", getChannelProcessor().getClass().getName());
fac = (ChannelPipelineFactory) ctor.newInstance(
getChannelProcessor(), allChannels, counterGroup, counterGroupExt, "udp", serviceProcessor,
messageHandlerName, maxMsgLength, topic, attr, filterEmptyMsg, maxConnections, isCompressed,enableSsl,context);
} catch (Exception e) {
logger.error(
"Simple Udp Source start error, fail to construct ChannelPipelineFactory with name {}, ex {}",
msgFactoryName, e);
stop();
throw new FlumeException(e.getMessage());
}
serverBootstrap.setPipelineFactory(fac);
try {
if (host == null) {
nettyChannel = serverBootstrap
.bind(new InetSocketAddress(port));
} else {
nettyChannel = serverBootstrap.bind(new InetSocketAddress(host,
port));
}
Pipeline in Server:
if(enableSsl) {
cp.addLast("ssl", sslInit());
}
if (processor != null) {
try {
Class<? extends SimpleChannelHandler> clazz = (Class<? extends SimpleChannelHandler>) Class
.forName(messageHandlerName);
Constructor<?> ctor = clazz.getConstructor(
ChannelProcessor.class, ServiceDecoder.class, ChannelGroup.class,
CounterGroup.class, CounterGroupExt.class, String.class, String.class,
Boolean.class, Integer.class, Integer.class, Boolean.class);
SimpleChannelHandler messageHandler = (SimpleChannelHandler) ctor
.newInstance(processor, serviceProcessor, allChannels,
counterGroup, counterGroupExt, topic, attr,
filterEmptyMsg, maxMsgLength, maxConnections, isCompressed);
cp.addLast("messageHandler", messageHandler);
} catch (Exception e) {
e.printStackTrace();
}
}
if (this.protocolType.equalsIgnoreCase(ConfigConstants.UDP_PROTOCOL)) {
cp.addLast("execution", executionHandler);
}
client:
private ConnectionlessBootstrap clientBootstrap;
clientBootstrap = new ConnectionlessBootstrap(
new NioDatagramChannelFactory(Executors.newCachedThreadPool()));
clientBootstrap.setPipelineFactory(new ChannelPipelineFactory() {
@Override
public ChannelPipeline getPipeline() throws Exception {
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("sslHandler", sslInit());
pipeline.addLast("orderHandler",new ExecutionHandler(
new OrderedMemoryAwareThreadPoolExecutor(cores * 2,
1024 * 1024, 1024 * 1024)));
return pipeline;
}
});
Two functions that send messages in the client:
public void sendMessage(byte[] data) {
ChannelBuffer buffer = ChannelBuffers.wrappedBuffer(data);
sendMessage(buffer);
}
public void sendMessage(ChannelBuffer buffer) {
Random random = new Random();
Channel channel = channelList.get(random.nextInt(channelList.size()));
if(!channel.isConnected()){
channel.close();
ChannelFuture cf = clientBootstrap
.connect(new InetSocketAddress(ip, port));
if(cf.awaitUninterruptibly(3000, TimeUnit.MILLISECONDS)){
channel = cf.getChannel();
}else {
channelList.remove(channel);
return;
}
}
ChannelFuture future = channel.write(buffer);
if(!future.awaitUninterruptibly(3, TimeUnit.SECONDS)){
logger.warn("send failed!{}",future.getCause());
}else {
sendCnt.incrementAndGet();
}
}
I am not sure whether a Netty UDP server/client supports SSL at all. Any tips are appreciated.
Unlike TCP, UDP does not guarantee packet ordering, since there is no session. Thus, during the SSL negotiation there can be problems, depending on the order in which the UDP packets arrive.
According to what I have read, you might have a look at DTLS, which is supposed to add a form of ordering on top of UDP packets, but many SSL libraries do not support it.
Since Netty only implements TLS, it may not work with UDP.

Outgoing connection stream closed

I have an actor with the behaviour:
def receive: Receive = {
  case Info(message) =>
    val res = send("INFO:" + message)
    installAckHook(res)
  case Warning(message) =>
    val res = send("WARNING:" + message)
    installAckHook(res)
  case Error(message) =>
    val res = send("ERROR:" + message)
    installAckHook(res)
}

private def installAckHook[T](fut: Future[T]): Unit = {
  val answerTo = sender()
  fut.onComplete {
    case Success(_) => answerTo ! "OK"
    case Failure(ex) => answerTo ! ex
  }
}

private def send(message: String): Future[HttpResponse] = {
  import context.system
  val payload: Payload = Payload(text = message,
    username = slackConfig.username, icon_url = slackConfig.iconUrl,
    icon_emoji = slackConfig.iconEmoji, channel = slackConfig.channel)
    .validate
  Http().singleRequest(RequestBuilding.Post(slackConfig.hookAddress, payload))
}
And a test
val actorRef = system.actorOf(SlackHookActor.props(SlackEndpointConfig(WebHookUrl,iconEmoji = Some(":ghost:"))))
actorRef ! Error("Some error message")
actorRef ! Warning("Some warning message")
actorRef ! Info("Some info message")
receiveN(3)
and in the afterAll() method I do a shutdown on the actor system using TestKit.
It works, the request makes it to the server, but there are errors from the akka streams part:
[ERROR] [06/26/2015 11:34:55.118] [SlackHookTestingSystem-akka.actor.default-dispatcher-10] [ActorSystem(SlackHookTestingSystem)] Outgoing request stream error (akka.stream.AbruptTerminationException)
[ERROR] [06/26/2015 11:34:55.120] [SlackHookTestingSystem-akka.actor.default-dispatcher-13] [ActorSystem(SlackHookTestingSystem)] Outgoing request stream error (akka.stream.AbruptTerminationException)
[ERROR] [06/26/2015 11:34:55.121] [SlackHookTestingSystem-akka.actor.default-dispatcher-8] [ActorSystem(SlackHookTestingSystem)] Outgoing request stream error (akka.stream.AbruptTerminationException)
It seems that, since the Future has completed, the outgoing connection should already be closed, so is this a bug or am I missing something?
You need to also shut down the HTTP connection pools, something like:
Http().shutdownAllConnectionPools().onComplete { _ =>
  system.shutdown()
}
Maybe the Akka HTTP TestKit provides some helpers for this.