I have a question about akka-persistence and event migration. I have read the "Schema Evolution for Event Sourced Actors" chapter, but it does not answer my question.
Say I have a persistent actor ChildActor that produces a Created event. Later we discover that ChildActor should be a child of ParentActor, and that ParentActor has to update its state based on the creation of ChildActor (to maintain a collection of children).
We can add a new command CreateChild for ParentActor that will create the ChildActor. However, the parent will never receive the Created event emitted by its child, so it will not be able to update its state. Of course, ParentActor can persist a ChildCreated event for itself.
But what about the Created events already persisted by ChildActor?
How can we "send" (and, ideally, adapt) them to the ParentActor?
So, my question is:
Can we "route" persisted events from one actor to another?
Thanks
It is possible to watch the events persisted by a given persistence ID with the events by persistence ID query. Since this query is very much like what Akka Persistence must do in replaying events to rebuild a persistent actor's state, it's available in all the commonly used plugins: you'll need to check the documentation for your plugin for how to summon a ReadJournal. Once summoned, assuming that the ReadJournal is further an instance of EventsByPersistenceIdQuery, you would use (Scala):
readJournal.eventsByPersistenceId(childActorPersistenceId, fromSequenceNr, Long.MaxValue)
which would give you an Akka Streams Source of events, in order, starting at fromSequenceNr. Your subscribing actor will probably want to save the last-seen sequence number as part of its state, so that if it resumes it doesn't reprocess events it has already handled (ideally the update of the sequence number is persisted atomically with the state change the event causes).
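To make that concrete, here is a minimal sketch (assuming Akka 2.6+; the plugin id "jdbc-read-journal", the persistence id and the other names are placeholders, so check your own plugin's documentation):

import akka.NotUsed
import akka.actor.ActorSystem
import akka.persistence.query.{EventEnvelope, PersistenceQuery}
import akka.persistence.query.scaladsl.EventsByPersistenceIdQuery
import akka.stream.scaladsl.Source

object ParentCatchUp extends App {
  implicit val system: ActorSystem = ActorSystem("example")

  // Summon the read journal; the plugin id here is a placeholder.
  val readJournal: EventsByPersistenceIdQuery =
    PersistenceQuery(system).readJournalFor[EventsByPersistenceIdQuery]("jdbc-read-journal")

  // Highest sequence number ParentActor has already folded into its state,
  // e.g. recovered from its own persisted state.
  val lastSeenSeqNr: Long = 0L

  val childEvents: Source[EventEnvelope, NotUsed] =
    readJournal.eventsByPersistenceId("child-actor-id", lastSeenSeqNr + 1, Long.MaxValue)

  // Each envelope carries the event and its sequence number; persist the new
  // sequence number atomically with the ParentActor state update it causes.
  childEvents.runForeach { env =>
    println(s"adapting ${env.event} at seqNr ${env.sequenceNr} for ParentActor")
  }
}

In a real application you would feed this Source into the ParentActor (for example via Sink.actorRefWithBackpressure) rather than printing the events.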
Note that there will be an observable delay between persisting the event and ParentActor seeing it, though many recent plugin versions (e.g. Cassandra or R2DBC) can propagate the event, or at least a notification that there is a new event for the persistence ID, directly to the query.
I'm new to Mass Transit and I would like to understand whether it can help with my scenario.
I'm building a sample application implemented with a CQRS event sourcing architecture and I need a service bus in order to dispatch the events created by the command stack to the query stack denormalizers.
Suppose we have a single aggregate in our domain, let's call it Photo, and two different domain events: PhotoUploaded and PhotoArchived.
Given this scenario, we have two different message types, and the default Mass Transit behaviour is to create two different RabbitMQ exchanges: one for the PhotoUploaded message type and one for the PhotoArchived message type.
Now suppose we have a single denormalizer called PhotoDenormalizer: this service will consume both message types, because the photo read model must be updated whenever a photo is uploaded or archived.
Given the default Mass Transit topology, there will be two different exchanges, so message processing order cannot be guaranteed between events of different types: the only guarantee is that all events of the same type will be processed in order. Notice that, given the semantics of the events in my example, processing order matters.
How can I handle such a scenario? Is Mass Transit suitable for my needs? Am I completely missing the point of domain event dispatching?
Disclaimer: this is not an answer to your question, but rather a preventive note on why you should not do what you are planning to do.
Whilst message brokers like RMQ and messaging middleware libraries like MassTransit are perfect for integration, I strongly advise against using message brokers for event-sourcing. I can refer to my old answer Event-sourcing: when (and not) should I use Message Queue? that explains the reasons behind it.
One of the reasons you have found yourself: event order will never be guaranteed.
Another obvious reason is that building read models from events published via a message broker effectively removes the possibility of replay: a new read model that needs to start processing events from the beginning of time will only get the events being published now.
Aggregates form transactional boundaries, so every command needs to guarantee that it completes within one transaction. Whilst MT supports transaction middleware, that only gives you a transaction for dependencies that support it, not for context.Publish(@event) in the consumer body, since RMQ doesn't support transactions. There is a good chance of committing your changes but not getting the events on the read side. So the rule of thumb for event stores is that you should be able to subscribe to the stream of changes from the store, and not publish events from your code, unless those are integration events rather than domain events.
For event-sourcing, it is crucial that each read model keeps its own checkpoint in the stream of events it is projecting. Message brokers don't give you that kind of power, since the "checkpoint" is effectively your queue, and as soon as a message is gone from the queue, it is gone forever; there's no coming back.
Concerning the actual question:
You can use the message topology configuration to set the same entity name for different messages, and then they'll be published to the same exchange, but that falls into the "abuse" category, as Chris wrote on that page. I haven't tried it, but you can certainly experiment. The message CLR type is part of the metadata, so there shouldn't be deserialization issues.
But again, putting messages in the same exchange won't give you any ordering guarantees, except the fact that all messages will land in one queue for the consuming service.
You will at least have to set up a partitioning filter based on your aggregate id, to prevent multiple messages for the same aggregate from being processed in parallel. That, by the way, is also useful for integration. This is how we do it:
void AddHandler<T>(Func<ConsumeContext<T>, string> partition) where T : class
    => ep.Handler<T>(
        c => appService.Handle(c, aggregateStore),
        hc => hc.UsePartitioner(8, partition));

AddHandler<InternalCommands.V1.Whatever>(c => c.Message.StreamGuid);
I am studying Lagom and trying to understand how persistent entities work.
I've read the following description:
Every PersistentEntity has a fixed identifier (primary key) that can be used to fetch the current state and at any time only one instance (as a "singleton") is kept in memory.
Makes sense.
Then there is the following example to create a customer:
@Override
public ServiceCall<CreateCustomerMessage, Done> createCustomer() {
    return request -> {
        log.info("===> Create or update customer {}", request.toString());
        PersistentEntityRef<CustomerCommand> ref =
            persistentEntityRegistry.refFor(CustomerEntity.class, request.userEmail);
        return ref.ask(new CustomerCommand.AddCustomer(request.firstName, request.lastName, request.birthDate, request.comment));
    };
}
This confuses me:
Does that mean that the persistentEntityRegistry contains multiple singleton persistentEntities? How exactly does the persistentEntityRegistry get filled, and what is in it? Say 10k users are created: does the registry then contain 10k persistentEntities, or just 1?
In this case we want to create a new customer. So when we request a persistent entity using persistentEntityRegistry.refFor(CustomerEntity.class, request.userEmail), this shouldn't return anything from the registry, since the customer doesn't exist yet (?).
Can you shine a light on how this works?
The documentation is good, but there are a few holes in my understanding that I haven't been able to fill.
Great questions. I'm not sure how far along you are with concepts relating to persistent entities that aren't mentioned here, so I'll start from the beginning.
When doing event sourcing, generally, for a given entity (eg, a single customer), you need a single writer. This is because reading and then writing to the event log is generally not done in a single transaction: you read some events to load your state, validate an incoming command, and then emit one or more new events to be persisted. If two operations came in for the same entity at the same time, they would both be validated against the same state, neither taking into account the state change the other might commit first. Hence event sourcing requires the single-writer principle: only one operation can be handled at a time, so there's only one writer.
In Lagom, this is implemented using actors. Each entity (ie each instance of a customer) is loaded and managed by an actor. An actor has a mailbox (ie, a queue), where commands are placed, and it processes them one at a time, in order. For each entity, there is a singleton actor managing it (so, one actor per customer, many actors for many customers). Due to the single writer principle, it's very important that this is true.
But, how does a system like this scale? What happens if you have multiple nodes, do you then have multiple instances of each entity? No. Lagom uses Akka clustering with Akka cluster sharding to shard your entities across many nodes, ensuring that across all of your deployed nodes, you only have one of each entity. So when a command comes in to a node, the entity may live on the same node, in which case it just gets sent straight to the local actor for it to be processed, or it may live on a different node, in which case it gets serialised, sent to the node it lives on, and processed there, with the response being serialised and sent back.
This is one of the reasons why it's a PersistentEntityRef, due to the location transparency (you don't know where the entity lives), you can't hold onto the entity directly, you can only have a reference to it. The same terminology is used for actors, you have the actual Actor that does the behaviour, and an ActorRef is used to communicate with it.
Now, logically, when you get a reference for a customer that according to the domain model of your system doesn't exist yet (eg, they haven't registered), they don't exist. But the persistent entity for them can, and must, exist. There is actually no concept in Lagom of a persistent entity not existing: you can always instantiate a persistent entity with any id, and it will load. It's just that there may be no events yet for that entity, in which case, when it loads, it will just have its initialState, with no events applied. For a customer, the state of the customer might be Optional<Customer>. So, when the entity is first created, before any events are emitted for the customer, the state will be Optional.empty(). The first event emitted for the customer will be a CustomerRegistered event, and this will cause the state to change to Optional.of(someCustomer).
To understand why logically this must be so, you don't want the same customer to be able to register themselves twice, right? You want to ensure that there is only one CustomerRegistered event for each customer. To do that, you need to have a state for the customer in their unregistered state, to validate that they are not already registered before they do register.
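As a rough, framework-free illustration of that state evolution (plain Scala rather than the Lagom Java API in the question; all names here are invented):

// A plain-Scala sketch of the idea, not the Lagom API; names are invented.
final case class Customer(email: String, firstName: String, lastName: String)

sealed trait CustomerEvent
final case class CustomerRegistered(customer: Customer) extends CustomerEvent

object CustomerState {
  // The entity always "exists": before any events, its state is simply None.
  val initialState: Option[Customer] = None

  // Replaying events folds them over the state; the first event fills it in.
  def applyEvent(state: Option[Customer], event: CustomerEvent): Option[Customer] =
    event match {
      case CustomerRegistered(c) => Some(c)
    }

  // Command validation uses the current state to enforce "register only once".
  def register(state: Option[Customer], candidate: Customer): Either[String, CustomerEvent] =
    state match {
      case None    => Right(CustomerRegistered(candidate))
      case Some(_) => Left("Customer is already registered")
    }
}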
So, to make the answer to your first question clear: if you have 10K users, then there will be 10K persistent entity instances, one for each user. Though, that is only logically; physically, there will be events for 10K different users in the database. In memory, the actually loaded entities depend on which entities are active: when an entity goes, by default, 2 minutes without receiving a command, Lagom will passivate it, meaning it is dropped from memory, and the next time a command comes in for it, it has to be loaded by replaying its events from the database. This helps ensure that you don't run out of memory by holding all your data in memory if you don't want to.
I'm struggling to understand how to implement eventual consistency with the BacklogItem and Task example from Vaughn Vernon. The statement I've understood so far is (considering the case where he splits BacklogItem and Task into separate aggregate roots):
A BacklogItem can contain one or more tasks. When all remaining hours of the tasks of a BacklogItem are 0, the status of the BacklogItem should change to "DONE".
I'm aware about the rule that says that you should not update two aggregate roots in the same transaction, and that you should accomplish that with eventual consistency.
Once a Domain Service updates the remaining hours of a Task, a TaskRemainingHoursUpdated event should be published to a DomainEventPublisher, which lives in the same thread as the executing code. And here is where I'm at a loss, with the following questions:
I suppose that there should be a subscriber (also living in the same thread, I guess) that reacts to TaskRemainingHoursUpdated events. At which point in your Desktop/Web application do you perform this subscription to the Bus? At the very initialization of your app? In the application code? Is there any reasoning for placing domain subscribers in a specific place?
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But that would violate the rule of not updating two aggregates in the same transaction, since this would happen synchronously, right?)
If you want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a Message Broker like RabbitMQ even though both BacklogItem and Task live inside the same Bounded Context?
If I use this message broker, should I have a background thread or something that just consumes events from a RabbitMQ queue and then dispatches the event to update the product?
I'd appreciate it if someone could shed some light on this, since it is quite complex to picture in its completeness.
So to start with, you need to recognize that, if the BacklogItem is the authority for whether or not it is "Done", then it needs to have all of the information to compute that for itself.
So somewhere within the BacklogItem is data that is tracking which Tasks it knows about, and the known state of those tasks. In other words, the BacklogItem has a stale copy of information about the task.
That's the "eventually consistent" bit; we're trying to arrange the system so that the cached copy of the data in the BacklogItem boundary includes the new changes to the task state.
That in turn means we need to send a command to the BacklogItem advising it of the changes to the task.
From the point of view of the backlog item, we don't really care where the command comes from. We could, for example, make it a manual process "After you complete the task, click this button here to inform the backlog item".
But for the sanity of our users, we're more likely to arrange an event handler to be running: when you see the output from the task, forward it to the corresponding backlog item.
At which point in your Desktop/Web application do you perform this subscription to the Bus? At the very initialization of your app?
That seems pretty reasonable.
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But that would violate the rule of not updating two aggregates in the same transaction, since this would happen synchronously, right?)
Same thread and same transaction are not necessarily coincident. It can all be coordinated in the same thread; but it probably makes more sense to let the consequences happen in the background. At their core, events and commands are just messages - write the message, put it into an inbox, and let the next thread worry about processing.
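As a sketch of that idea (plain Scala, all names invented; the "inbox" here is an in-process queue, but a database table or a broker queue would serve the same purpose):

import java.util.concurrent.LinkedBlockingQueue

// All names are invented for illustration.
final case class TaskRemainingHoursUpdated(backlogItemId: String, taskId: String, remainingHours: Int)
final case class InformTaskProgress(backlogItemId: String, taskId: String, remainingHours: Int)

// In-process "inbox": the event subscriber only records a follow-up command;
// it does not touch the BacklogItem aggregate itself.
object FollowUpInbox {
  private val inbox = new LinkedBlockingQueue[InformTaskProgress]()

  // Registered with the in-thread DomainEventPublisher.
  def onEvent(event: TaskRemainingHoursUpdated): Unit =
    inbox.put(InformTaskProgress(event.backlogItemId, event.taskId, event.remainingHours))

  // Runs on a background thread; each command is handled in its own unit of
  // work that loads the BacklogItem, updates it, and saves it.
  def drain(handleInOwnTransaction: InformTaskProgress => Unit): Unit =
    while (true) handleInOwnTransaction(inbox.take())
}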
If you want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a Message Broker like RabbitMQ even though both BacklogItem and Task live inside the same Bounded Context?
No; the mechanics of the plumbing matter not at all.
This question is regarding rxpy.
I am trying to build a reactive system that handles messages from a source observable. In addition to that, I am trying to integrate it with a leader election system based on zookeeper.
This combination will allow only one leader in a farm of processes to handle the message stream. Below is the gist of the code I am trying to construct.
# event_source is an observable of messages
# manager.leaders is an observable of leader election events
# manager.followers is an observable of leader relinquish events
event_source \
    .skip_until(manager.leaders) \
    .take_until(manager.followers) \
    .subscribe(observer)
It works fine and all, but between skip_until and take_until I need to inject a piece that handles backfill. This is designed to handle a potential gap between a leader process failing and another process assuming leadership. Every processed message leaves a record so that a new leader can catch up on missing messages, if any, before proceeding with the stream.
I tried the start_with operator without success. Am I trying to use it in a way it is not meant to be used?
Ultimately, the solution I am looking for is to inject a specific number of items in the stream triggered by an event from another stream.
What about this:
manager.leaders \
    .flat_map(lambda e: event_source
        .start_with(...)
        .take_until(manager.followers))
Every time manager.leaders emits a message, event_source will be subscribed to, starting with the injected items, until manager.followers emits.
We've got a CQRS project and are thinking about a way to implement a "catch-up", e.g. a new event handler is started and tells the event store to replay all events for it.
We're not sure if we should do the replay over NServiceBus, as there is a real 1:1 connection and no publish/subscribe situation. Also, we think that our new consumer might not be able to keep up with the publish speed, and its input queue would get backed up.
What's the best practice here?
I've heard of people doing the following:
Have a system for replaying/rebroadcasting the events. Event handlers whose projections have already seen these events simply ignore them.
Allow events to be queried directly by the event handler when resetting it or when starting a new projection from scratch. In some systems this can be done by reading directly from the event store; in other, actor-based systems, an actor abstraction around the source of events may be queried.
From my understanding, option 2 allows for better performance as events can be queried in batches as opposed to being replayed to all listeners individually. These are just my observations without any practical experience to draw on yet.
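For what it's worth, option 2 often boils down to a loop like the following sketch (the EventStore interface and all names are invented, not NServiceBus or any particular store's API):

// A store-agnostic sketch of option 2: the new handler pulls history from the
// store in batches instead of having every event rebroadcast to all listeners.
final case class StoredEvent(sequenceNr: Long, payload: Any)

trait EventStore {
  // Returns up to `batchSize` events with sequence number > `afterSequenceNr`.
  def loadBatch(afterSequenceNr: Long, batchSize: Int): Seq[StoredEvent]
}

object CatchUp {
  // Applies each event to the new projection and remembers a checkpoint so the
  // handler can later switch over to a live subscription from that position.
  def catchUp(store: EventStore, project: StoredEvent => Unit, batchSize: Int = 500): Long = {
    var position = 0L
    var batch = store.loadBatch(position, batchSize)
    while (batch.nonEmpty) {
      batch.foreach(project)
      position = batch.last.sequenceNr
      batch = store.loadBatch(position, batchSize)
    }
    position
  }
}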