In Akka.NET, does a child actor run in the same process as its parent actor?

In Akka.NET, does a child actor always run in the same process as its parent actor?
Is there documentation?

By default, all actors are created within the same process (and AppDomain) as their parent. There are a few situations where this is not the case:
Remote deployment
Cluster pool routers
In both of those cases, all parameters sent to an actor's constructor will be serialized and passed to another machine, which will act as the physical host for the actor instance (see the sketch below).
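
For illustration, here is a minimal sketch of remote deployment in Akka.NET; EchoActor, the system names, and the backend address are hypothetical, and both processes need Akka.Remote configured with compatible settings:

    using Akka.Actor;
    using Akka.Remote;

    // EchoActor is a stand-in for any actor you want to deploy remotely.
    public sealed class EchoActor : ReceiveActor
    {
        private readonly string _greeting;

        public EchoActor(string greeting)   // constructor arguments are serialized for remote deployment
        {
            _greeting = greeting;
            Receive<string>(msg => Sender.Tell($"{_greeting}: {msg}"));
        }
    }

    public static class RemoteDeploymentExample
    {
        public static IActorRef DeployRemotely(ActorSystem system)
        {
            // Hypothetical address of the remote process that will physically host the actor.
            var remoteAddress = Address.Parse("akka.tcp://backend@10.0.0.5:8081");

            var props = Props.Create(() => new EchoActor("hello"))
                .WithDeploy(Deploy.None.WithScope(new RemoteScope(remoteAddress)));

            // The instance is constructed in the "backend" process; the IActorRef
            // returned here lives locally and forwards messages over the network.
            return system.ActorOf(props, "remote-echo");
        }
    }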

Related

How does akka.net restart a remote node if that process crashes?

Suppose I have an actor with a child actor that runs remotely, and suppose the child actor's process crashes with an unrecoverable exception, such as a StackOverflowException, because of a bug.
In this case, how does the parent node in Akka.NET restart the crashed remote child?
It won't. The child actor lives under a specific address, and if it cannot be resurrected there, it will stay dead. All messages directed to it will land in dead letters.
If this behavior does not suit you, you can always use Akka.Cluster.Sharding, a higher-level abstraction that manages actor lifecycle automatically.
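
If you want to observe those dead letters, a small sketch is shown below; the DeadLetterListener name is a placeholder, but the event-stream subscription uses the standard Akka.NET API:

    using System;
    using Akka.Actor;
    using Akka.Event;

    // Hypothetical listener that logs anything routed to dead letters,
    // e.g. messages addressed to a remote child that can no longer be restarted.
    public sealed class DeadLetterListener : ReceiveActor
    {
        public DeadLetterListener()
        {
            Receive<DeadLetter>(d =>
                Console.WriteLine($"Dead letter for {d.Recipient}: {d.Message}"));
        }
    }

    public static class DeadLetterMonitoring
    {
        public static void Attach(ActorSystem system)
        {
            var listener = system.ActorOf<DeadLetterListener>("dead-letter-listener");
            system.EventStream.Subscribe(listener, typeof(DeadLetter));
        }
    }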

Queue declaration on all nodes in RabbitMQ

I have a RabbitMQ cluster set up (without HA). My consumers are Spring applications, which provide an out-of-the-box failover mechanism that connects to the next available node.
Since the queues are not mirrored, is it okay if I declare the queues up front, so that when the first node goes down the connection is established to the second node? Does this make sense?
Another question: let's say I have a load balancer on top of the RabbitMQ cluster and my applications connect through it. Will the queues be declared on all nodes, or only on the node chosen by the load balancer's routing strategy?
For the first scenario, yes, the queues will be declared on the failover broker instance when the connection is established.
If you want to pre-declare on all nodes you will need a connection factory for each node, and a RabbitAdmin for each connection factory.
You will also need something to cause a connection to be opened on each (the RabbitAdmins register themselves as connection listeners).
You can do that by adding a bean that implements SmartLifecycle and calls createConnection() on each connection factory.
You can also selectively declare elements. See Conditional Declaration.
By default, all queues, exchanges, and bindings are declared by all RabbitAdmin instances (that have auto-startup="true") in the application context.
Starting with the 1.2 release, it is possible to conditionally declare these elements. This is particularly useful when an application connects to multiple brokers and needs to specify with which broker(s) a particular element should be declared.

Akka.net cluster sharding: Unable to register coordinator

I am trying to set up Akka.NET cluster sharding by creating a simple project.
Project layout:
Actors - a class library that defines one actor and one message; it is referenced by the other projects.
Inbound - starts the shard region and is the only node participating in cluster sharding; it should also be the one hosting the coordinator.
MessageProducer - hosts only the shard region proxy used to send messages to the ProcessorActor.
Lighthouse - seed node
The uploaded images show that the coordinator singleton is not initialized and that messages sent through the shard region proxy are not delivered.
Based on the Petabridge blog post, petabridge.com/blog/cluster-sharding-technical-overview-akkadotnet/, I have excluded Lighthouse from participating in cluster sharding (by setting akka.cluster.sharding.role) so that the coordinator is not created on it.
Not sure what I am missing to get this to work.
This was already answered on gitter, but here's the tl;dr:
The shard region proxy needs to share the same role as the corresponding shard region. Otherwise the proxy may not be able to find the shard coordinator, and therefore cannot resolve the initial location of the shard it wants to send a message to.
The IMessageExtractor.GetMessage method is used to extract the actual message that is going to be sent to the sharded actor. In the example, the message extractor was used to extract a string property from the enveloping message, yet the receiving actor had its Receive handler set up for the envelope, not for a string.
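
To illustrate both points, here is a hedged sketch; ShardEnvelope, ProcessorMessageExtractor, and the "inbound" role are placeholders, and the extractor members shown (EntityId / EntityMessage / ShardId) follow the current Akka.NET IMessageExtractor shape, which may differ slightly from the release used in the question:

    using System;
    using Akka.Actor;
    using Akka.Cluster.Sharding;

    // Hypothetical envelope carrying the entity id plus the payload for the sharded actor.
    public sealed class ShardEnvelope
    {
        public string EntityId { get; }
        public string Payload { get; }

        public ShardEnvelope(string entityId, string payload)
        {
            EntityId = entityId;
            Payload = payload;
        }
    }

    public sealed class ProcessorMessageExtractor : IMessageExtractor
    {
        public string EntityId(object message) => (message as ShardEnvelope)?.EntityId;

        // Whatever is returned here is what the sharded actor actually receives,
        // so its Receive handler must match this type (here: the string payload).
        public object EntityMessage(object message) => ((ShardEnvelope)message).Payload;

        public string ShardId(object message) =>
            (Math.Abs(((ShardEnvelope)message).EntityId.GetHashCode()) % 10).ToString();
    }

    // The sharded actor: because the extractor unwraps the envelope,
    // it must handle the payload type, not ShardEnvelope.
    public sealed class ProcessorActor : ReceiveActor
    {
        public ProcessorActor()
        {
            Receive<string>(payload => { /* process the payload */ });
        }
    }

    public static class ShardingSetup
    {
        // On the Inbound node (the node that actually hosts the entities):
        public static IActorRef StartRegion(ActorSystem system) =>
            ClusterSharding.Get(system).Start(
                typeName: "processor",
                entityProps: Props.Create<ProcessorActor>(),
                settings: ClusterShardingSettings.Create(system).WithRole("inbound"),
                messageExtractor: new ProcessorMessageExtractor());

        // On the MessageProducer node: the proxy must name the SAME role as the region above.
        public static IActorRef StartProxy(ActorSystem system) =>
            ClusterSharding.Get(system).StartProxy(
                typeName: "processor",
                role: "inbound",
                messageExtractor: new ProcessorMessageExtractor());
    }

Sending through the proxy would then look like proxy.Tell(new ShardEnvelope("entity-1", "hello")), and ProcessorActor receives the unwrapped string.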

sharing a BlockingQueue in a storm spout

I am programming a big data application in which two threads run concurrently. Thread A receives data from the network and puts it, as JSONObject instances, into a BlockingQueue. Thread B, a Storm spout, then reads from the BlockingQueue and processes the data.
I pass the BlockingQueue object to the spout class in its constructor. The problem I found is that the BlockingQueue in the spout is empty. Could you please let me know how I can solve this problem?
You start a storm application by running some class that builds and configures the topology as a set of objects and then submits that collection of objects (along with the jar file) to the Nimbus server. Some of those objects are instances of the spouts and bolts which are serialized as part of the topology submission. Each instance of the bolt and spout on the cluster is one of these deserialized objects. So all bolts and spouts are constructed when you first start the topology (usually on an edge node) and not on the cluster.
What this means to you is that any objects referenced by the spout during class initialization and object construction are serialized along with the spout instance. This would include the BlockingQueue. Your BlockingQueue is being serialized and distributed to the cluster and it sounds like it's not surviving the trip.
What you want to do is leave the blocking queue variable null in the constructor and instead set the variable in the open() method. When you create the actual queue object you might store it in a public static variable somewhere so that it's available to the spout's open() method.

How to successfully set up a simple cluster singleton in Akka.NET

I was running into a problem attempting to set up a cluster singleton within an Akka.NET cluster where more than one instance of the singleton was starting up and running in my cluster. The cluster consists of Lighthouse (the seed node) and some number of instances of the main cluster node, which hosts cluster shards as well as this singleton.
In order to reproduce the problem I set up an example solution on GitHub, but unfortunately I am hitting a different problem there: I always get 'Singleton not available' messages and my singleton never receives a message. This is sort of the opposite of the problem I was getting originally, but nonetheless I would like to sort out a working example of a cluster singleton.
[DEBUG][8/22/2016 3:06:18 PM][Thread 0015][[akka://singletontest/user/my-singleton-proxy#1237572454]] Singleton not available, buffering message type [System.String]
In the Lighthouse process I see the following messages.
Akka.Remote.EndpointWriter: Dropping message [Akka.Actor.ActorSelectionMessage] for non-local recipient [[akka.tcp://sync@127.0.0.1:4053/]] arriving at [akka.tcp://sync@127.0.0.1:4053] inbound addresses [akka.tcp://singletontest@127.0.0.1:4053]
Potentially related:
https://github.com/akkadotnet/akka.net/issues/1960
It appears that the only bit that was missing was that the actor system name specified in the actor path for my seed node did not match the actor system name used in both the Lighthouse and cluster node processes. After ensuring that it matched in all three places, the cluster is now behaving as expected.
https://github.com/jpierson/x-akka-cluster-singleton/commit/77ae63209042841c144f69d4cd70e9925b68a79a
Special thanks to Chris G. Stevens for his assistance.
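
For reference, the fix boils down to using one actor system name everywhere. Here is a minimal sketch of a cluster node's bootstrap; the HOCON is abridged, the transport key and provider string vary by Akka.NET version, and the host/port values are illustrative:

    using System;
    using Akka.Actor;
    using Akka.Configuration;

    public static class ClusterNodeBootstrap
    {
        public static void Main()
        {
            // The name embedded in the seed-node address, the name passed to
            // ActorSystem.Create, and the name Lighthouse is started with must all match.
            var config = ConfigurationFactory.ParseString(@"
                akka {
                    actor.provider = ""Akka.Cluster.ClusterActorRefProvider, Akka.Cluster""
                    remote.helios.tcp {            # newer Akka.NET releases use remote.dot-netty.tcp
                        hostname = 127.0.0.1
                        port = 0
                    }
                    cluster.seed-nodes = [""akka.tcp://singletontest@127.0.0.1:4053""]
                }");

            using (var system = ActorSystem.Create("singletontest", config))  // same name as in the seed-node URI
            {
                // create the ClusterSingletonManager / ClusterSingletonProxy here
                Console.ReadLine();
            }
        }
    }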