I know a parent process can have multiple children. But can a child have multiple parents? Why or why not?
Can a child have multiple parents?
No. By definition, the parent process is the process that spawned the child. Exactly one process does that, and that process is the parent.
Why or why not?
Because that is how the parent-child relationship is defined.
I have a Kogito setup (data-index, jobs-service, Infinispan, Kafka, etc.) that works with my Quarkus application.
My application has a parent .bpmn2 process, and in that process I create a subprocess when a signal is received (see the process image).
The child process has the same logic, and this nesting goes about three levels deep, so I have a hierarchy in the data-index resources. When I fetch the parent process via the data-index GraphQL endpoint, I can see childProcessInstances, etc. I also persist the data in Infinispan, so my data survives application restarts.
Right now this is only a small application, and I do not know how it behaves in a large project with many nested parent and child processes.
How would my Quarkus application handle a heavy workload of these processes? Is it possible to run an unbounded number of nested processes? Thanks in advance.
We have a particular scenario in our application: all the child actors deal with a huge volume of data (around 50-200 MB).
Because of this, we decided to create the child actors on the same machine (worker process) on which the parent actor was created.
Currently this is achieved through the use of roles. We also use the .NET memory cache to transfer the data (several MBs) between child actors.
Question: Is it OK to turn off clustering for the child actors to achieve the result we are expecting?
Edit: To be more specific, I have described our application setup in detail below.
The whole process happens inside an Akka.NET cluster of around 5 machines.
Worker processes (which contain both parent and child actors) are deployed on each of those machines.
Both parent and child actors are cluster-enabled in this setup.
When we discovered the network overhead caused by distributing the child actors across machines, we decided to restrict child actor creation to the machine that received the primary request, and to distribute only the parent actors across machines.
When we approached an Akka.NET expert with this problem, we were advised to use "roles" to restrict child actor creation to a single machine in the cluster (e.g., Worker1Child and Worker2Child roles instead of a single "Child" role).
Question (contd.): I just want to know whether simply disabling the cluster option for the child actors will achieve the same result, and whether doing so is a best practice.
Please advise.
Sounds to me like you've been using a clustered pool router to remotely deploy worker actors across the cluster - you didn't explicitly mention this in your description, but that's what it sounds like.
It also sounds like what you're really trying to do here is take advantage of local affinity: having the child worker actors for the same entity all work together inside the same process.
Here's what I would recommend:
Create all worker actors as children of their parents, locally, inside the same process, using either something like the child-per-entity pattern or a LOCAL pool router.
Distribute work between the worker nodes using a clustered group router, with roles, etc.
The work in that high-volume workload should then flow directly from parent to children, without needing to round-trip back and forth across the rest of the cluster.
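A rough sketch of what that could look like in HOCON and C#; the actor names, role name, and pool size here are assumptions for illustration, not part of your setup:

```csharp
using Akka.Actor;
using Akka.Configuration;

public static class ClusterSetupSketch
{
    public static ActorSystem Start()
    {
        // Children are deployed through a LOCAL pool, so they always live in the
        // same process as their parent; work is spread across nodes carrying the
        // 'worker' role via a clustered GROUP router that targets /user/parent.
        var config = ConfigurationFactory.ParseString(@"
            akka {
              actor.provider = cluster
              actor.deployment {
                /parent/workers {                  # local pool of child workers
                  router = round-robin-pool
                  nr-of-instances = 8
                }
                /work-distributor {                # clustered group router
                  router = round-robin-group
                  routees.paths = [""/user/parent""]
                  cluster {
                    enabled = on
                    use-role = worker
                    allow-local-routees = on
                  }
                }
              }
            }");

        return ActorSystem.Create("worker-cluster", config);
    }
}

// Inside the parent actor, the children then come from the local pool definition:
//   var workers = Context.ActorOf(
//       Props.Create<ChildWorker>().WithRouter(FromConfig.Instance), "workers");
```

With a layout like this, only the comparatively small request messages cross the network through the group router; the 50-200 MB payloads stay inside one process, flowing between the parent and its local children.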
Given the information that you've provided here, this is as close to a "general" answer as I can provide - hope you find it helpful!
I am trying to load data from Apache Kafka into a SQL Server database. Kafka has separate topics for the parent and child entities. I have managed to load data from Kafka to SQL Server in parallel for all entities by creating spouts and bolts for each entity, but that results in null values for child entities, because some child records get loaded before their parent entities.
Why does this happen, and how can I solve it?
PS: I am using Apache Storm 0.10 and Apache Kafka 0.80
You could query for the parent entity before inserting each child. If the parent has not been inserted yet, delay inserting the child until the parent is available.
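A rough sketch of that check-then-defer idea (shown in C# only as an illustration; in the Storm 0.10 topology itself this logic would live inside the child-entity bolt in Java, and the table and column names Parent, Child, and ParentId are made up):

```csharp
using System.Data.SqlClient;

public static class ChildInsertGuard
{
    // Returns false when the parent row is missing, so the caller can defer the
    // child record (e.g. fail the tuple so Storm replays it later) instead of
    // writing a child row with a dangling parent reference.
    public static bool TryInsertChild(SqlConnection conn, long childId, long parentId)
    {
        using (var exists = new SqlCommand(
            "SELECT COUNT(1) FROM Parent WHERE Id = @parentId", conn))
        {
            exists.Parameters.AddWithValue("@parentId", parentId);
            if ((int)exists.ExecuteScalar() == 0)
                return false; // parent not loaded yet -> try again later
        }

        using (var insert = new SqlCommand(
            "INSERT INTO Child (Id, ParentId) VALUES (@id, @parentId)", conn))
        {
            insert.Parameters.AddWithValue("@id", childId);
            insert.Parameters.AddWithValue("@parentId", parentId);
            insert.ExecuteNonQuery();
        }
        return true;
    }
}
```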
I am parsing a JSON string to create new managed objects on a separate thread and in a separate managed object context. Later I want to merge the changes into the main thread by listening for NSManagedObjectContextObjectsDidChangeNotification.
The problem is that I want to establish relationships between the newly parsed objects and objects in the main MOC; however, I know it is illegal to create relationships between objects in different contexts.
What's the best practice to accomplish this task?
If the objects on the main thread are saved, they will be available to the new context on the secondary thread, because both contexts share access to the same persistent store.
If you are creating new objects simultaneously on both threads, you will need to merge the contexts with each other before each becomes aware of the objects created in the other.
Merging essentially makes the contexts copies of each other at the time of the merge.
In the comments on Ayende's blog post about auditing in NHibernate, there is a mention of the need to use a child session: session.GetSession(EntityMode.Poco).
As far as I understand it, this has something to do with the order of the SQL operations that session.Flush will emit. (For example: if I wanted to perform a delete in the pre-insert event but the session was already done with its delete operations, I would need some way to inject it.)
However, I have not found any documentation about this feature and its behavior.
Questions:
Is my understanding of child sessions correct?
How and in which scenarios should I use them?
Are they documented somewhere?
Could they be used for session "scoping"?
(For example: I open a master session that holds some data and then create two child sessions from it. I would expect the two child scopes to be separate, but to share objects from the master session's cache. Is this the case?)
Are they first-class citizens in NHibernate, or are they just a hack to support some edge-case scenarios?
Thanks in advance for any info.
Stefando,
NHibernate has no notion of child sessions; you can only reuse an existing session or open a new one.
For instance, you will get an exception if you try to attach the same entity instance to two different sessions.
The reason it is mentioned on the blog is that in pre-update and pre-insert events you cannot load more objects into the session; you can change an already loaded instance, but you may not, for instance, navigate to a relationship property.
So in the blog a new session is needed simply because we want to add a new audit-log entity. In the end it is the transaction (unit of work) that manages the data.
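For reference, a rough sketch of that pattern (NHibernate 2.x/3.x-era API; the AuditLog entity and its properties are made up for illustration):

```csharp
using System;
using NHibernate;
using NHibernate.Event;

// Hypothetical audit entity, only to make the sketch self-contained.
public class AuditLog
{
    public virtual Guid Id { get; set; }
    public virtual string EntityName { get; set; }
    public virtual string Action { get; set; }
    public virtual DateTime Timestamp { get; set; }
}

public class AuditPreInsertListener : IPreInsertEventListener
{
    public bool OnPreInsert(PreInsertEvent @event)
    {
        // The session that raised the event cannot persist additional entities at
        // this point, so a "child" session is opened on top of it; it shares the
        // same connection and transaction, and flushes the audit row itself.
        using (ISession childSession =
                   ((ISession)@event.Session).GetSession(EntityMode.Poco))
        {
            childSession.Save(new AuditLog
            {
                EntityName = @event.Entity.GetType().FullName,
                Action = "Insert",
                Timestamp = DateTime.UtcNow
            });
            childSession.Flush();
        }

        return false; // do not veto the original insert
    }
}
```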