I want to develop an Emailer microservice which provides a SendEmail command. Inside the microservice I have an aggregate in mind which represents the whole email process with the following events:
Aggregate Email:
(EmailCreated)
EmailDeliveryStarted
EmailDeliveryFailed
EmailRecipientDelivered when one of the recipients received the email
EmailRecipientDeliveryFailed when one of the recipients could not receive the email
etc.
In the background the email delivery service SendGrid is used; my microservice works like a facade for that with my own events. The incoming webhooks from SendGrid are translated to proper domain events.
The process would look like this:
Command SendEmail ==> EmailCreated
EmailCreatedHandler ==> Email.Send (to SendGrid)
Incoming webhook ==> EmailDeliveryStarted
Further webhooks ==> EmailRecipientDelivered, EmailRecipientDeliveryFailed, etc.
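Roughly sketched in C# (just to illustrate the idea - the event-type strings and record shapes below are assumptions, not SendGrid's exact webhook schema or my final design):

using System;

// Rough illustration only: a translator that turns incoming SendGrid webhook entries
// into the aggregate's domain events. Event-type strings and record shapes are assumed.
public record EmailDeliveryStarted(Guid EmailId);
public record EmailRecipientDelivered(Guid EmailId, string Recipient);
public record EmailRecipientDeliveryFailed(Guid EmailId, string Recipient);

public class SendGridWebhookTranslator
{
    public object Translate(string webhookEventType, Guid emailId, string recipient) =>
        webhookEventType switch
        {
            "processed" => (object)new EmailDeliveryStarted(emailId),
            "delivered" => new EmailRecipientDelivered(emailId, recipient),
            "bounce"    => new EmailRecipientDeliveryFailed(emailId, recipient),
            _           => null
        };
}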
Of course, if I wanted to replace the external web service with one that uses different messaging strategies, I would adapt to that but keep my domain model with its events. I don't want the client to worry about the concrete email delivery strategy.
Now the crucial problem I face: I want to accept SendEmail commands even if SendGrid is not available at that very moment, which entails storing the whole email data (with attachments) and then starting the sending process with an event handler. On the other hand, I don't want to bloat my initial EmailCreated event with this BLOB data, and I want to be able to clean that data up after SendGrid has accepted my send request.
I could also try both sending the email to SendGrid and storing an initial EmailDeliveryStarted event inside the SendEmail command. But this feels like a two-phase commit: if SendGrid accepted my call but my repository was somehow unable to store the EmailDeliveryStarted event, the client would be told that something went wrong and would try again, which would be a disaster.
So I don't know how to design my aggregate and, more importantly, my EmailCreated event, since it should not contain BLOB data like attachments.
I found this question interesting and it took a little bit to reflect on that.
First things first - I do not see an obligation to store the email attachments in the event. You can simply store the fully qualified names of the attached files. That would keep the event log smaller and perhaps rule out the need for "deleting" the event (and you know that, in an event-sourced model, you should not do that).
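As a sketch of what I mean (property names are made up, nothing prescriptive):

using System;
using System.Collections.Generic;

// Sketch only: the event references the stored attachments instead of embedding them.
public record EmailCreated(
    Guid EmailId,
    string Subject,
    IReadOnlyList<string> Recipients,
    IReadOnlyList<string> AttachmentPaths); // fully qualified file names or blob-store keys

// The handler that actually calls SendGrid loads the attachments from these paths,
// so the blobs can be cleaned up once SendGrid has accepted the request,
// without ever touching the event log.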
Secondly, assuming that the project is not building an e-mail client, I don't see a need to model an e-mail as an aggregate root. I see AggregateRoots as representing business-relevant concepts, not utility tasks like sending an e-mail. You could model this far more easily using a database table / document that keeps track of what has been sent and what hasn't yet. I see sending e-mails through SendGrid as a reaction to a business event, certainly to be tracked, but not an AggregateRoot in its own right.
Lastly, if you want to accept SendEmail commands even when SendGrid is offline, have the aggregate emit an EmailQueued event. The EmailQueuedHandler then adds a row to the read model of the process in charge of taking all the e-mails in the queued state and batching them for sending (a sketch follows below). If the communication with SendGrid fails, you can either:
Do nothing; the sender process will pick up the email on the next attempt
Emit an EmailSendFailed event, intercepted by a handler that increases the retry count (if you want to stop after a number of retries).
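Here is a minimal sketch of such a sender process, assuming a hypothetical IEmailQueueStore read model and a trySend delegate that wraps the SendGrid call - all names are invented for illustration:

using System;
using System.Collections.Generic;

public record QueuedEmail(Guid EmailId, int RetryCount);
public record EmailSendFailed(Guid EmailId, int RetryCount);

// Hypothetical read-model access: lists the emails still in the "queued" state.
public interface IEmailQueueStore
{
    IReadOnlyList<QueuedEmail> GetQueued(int batchSize);
}

public class QueuedEmailSender
{
    private const int MaxRetries = 5;
    private readonly IEmailQueueStore store;
    private readonly Func<QueuedEmail, bool> trySend; // wraps the call to SendGrid
    private readonly Action<object> emit;             // appends/publishes a domain event

    public QueuedEmailSender(IEmailQueueStore store, Func<QueuedEmail, bool> trySend, Action<object> emit)
    {
        this.store = store;
        this.trySend = trySend;
        this.emit = emit;
    }

    public void ProcessBatch()
    {
        foreach (var email in store.GetQueued(batchSize: 50))
        {
            if (trySend(email))
                continue; // success is reported later via the SendGrid webhooks

            // Option 1: do nothing here and let the next run pick the email up again.
            // Option 2: track retries explicitly and give up after MaxRetries.
            if (email.RetryCount + 1 >= MaxRetries)
                emit(new EmailSendFailed(email.EmailId, email.RetryCount + 1));
        }
    }
}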
Hope that is sufficiently clear and best of luck with your project.
Related
I'm interested in practical scenarios of authentication/login in a web application when the CQRS pattern is used to build the system.
Say we are using HTTP services for commands/queries, and authentication with JWT (or any other authentication token).
We send command LogInUser with credentials (HTTP request).
Server command handler checks credentials, writes events in the store (if using Event Sourcing).
What then? What should we return as the result of the command? Just an OK result with the authToken? Then the client should query the state in the read service? In this case we just make the whole process longer. And this concern actually applies not only to authentication scenarios but also to other scenarios where we send a command and expect to get the result of its execution as soon as possible.
I would just like to hear from people who have implemented such things. I want to understand possible practical data/request flows for authentication using CQRS.
Since you are using CQRS, you have decided to separate writing to the application from reading from the application.
To write to the application, you use commands.
To read from the application, you either wait for events, or you query the read model.
This diagram shows the relation between the different options:
(The diagram is taken from the documentation of wolkenkit, a CQRS and event-sourcing framework for JavaScript and Node.js.)
So, when you send your LogInUser command, the command itself does not return anything (of course, when using HTTP there must be a response, but it should just be a 200 OK, so that you can verify that the server received the command and will care about it sooner or later).
Now the server processes the login, verifies the credentials that were sent, and so on, and generates an appropriate UserLoggedIn event. This event gets stored in the event store and is then sent to the read model.
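As a rough sketch of that server-side step (ICredentialChecker, IJwtIssuer and IEventStore are hypothetical abstractions, not the API of wolkenkit or any particular framework):

public record LogInUser(string UserName, string Password);
public record UserLoggedIn(string UserName, string Jwt);

public class LogInUserHandler
{
    private readonly ICredentialChecker credentials;
    private readonly IJwtIssuer jwt;
    private readonly IEventStore events;

    public LogInUserHandler(ICredentialChecker credentials, IJwtIssuer jwt, IEventStore events)
    {
        this.credentials = credentials;
        this.jwt = jwt;
        this.events = events;
    }

    public void Handle(LogInUser command)
    {
        if (!credentials.AreValid(command.UserName, command.Password))
            return; // or append a UserLoginFailed event instead

        var token = jwt.IssueFor(command.UserName);
        events.Append(new UserLoggedIn(command.UserName, token)); // then forwarded to the read model
    }
}

public interface ICredentialChecker { bool AreValid(string userName, string password); }
public interface IJwtIssuer { string IssueFor(string userName); }
public interface IEventStore { void Append(object @event); }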
The read model does two things with this event:
It simply forwards it to the client.
It updates any denormalized tables you may have that are interested in this event, so you can query them later.
So your client has two options:
It can wait for the event after having sent the command. Once the event is received, the client has the JWT.
It can query the read model until a given record has been updated.
As you need to make sure that only the sender of the command is able to receive the JWT, option 1 is actually the only viable way. You can make sure that an event gets only delivered to the client that sent the appropriate command, but you can't have a table that contains all JWTs where people can only read their JWTs before being authenticated. With the read model, you have a chicken-and-egg problem here.
So, to cut a long story short: The client should wait for the appropriate event, and the event contains the JWT. That's it.
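On the client side, the flow could look roughly like this (reusing the LogInUser / UserLoggedIn records from the sketch above; ICommandSender and IEventChannel are invented stand-ins for "send the command over HTTP" and "receive the events pushed to this client"):

using System;
using System.Threading.Tasks;

public interface ICommandSender { Task SendAsync(object command); }                       // HTTP POST, answered with 200 OK
public interface IEventChannel  { Task<TEvent> WaitForAsync<TEvent>(TimeSpan timeout); }  // client-scoped event push

public class LoginClient
{
    private readonly ICommandSender commands;
    private readonly IEventChannel events;

    public LoginClient(ICommandSender commands, IEventChannel events)
    {
        this.commands = commands;
        this.events = events;
    }

    public async Task<string> LogInAsync(string userName, string password)
    {
        await commands.SendAsync(new LogInUser(userName, password)); // the command itself returns nothing useful
        var loggedIn = await events.WaitForAsync<UserLoggedIn>(TimeSpan.FromSeconds(10));
        return loggedIn.Jwt;                                         // the JWT arrives in the event
    }
}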
So I've been reading about EventStore and NServiceBus and I like the idea of having a transactional log of my data that can help me build views based on that data.
What I don't understand right now is how to distinguish between an event that will write to your read storage and the same event which might trigger an email to get sent.
ex. Creating a customer
CreateUserCommand -> CreateUserCommandHandler -> CreatedUserEvent
Should I be using the CreatedUserEvent to trigger both my write to my data storage and sending an email to a user?
In the last few years, Eric Evans has recognized an addition to his DDD patterns: Domain Events (a.k.a. the external events concept).
Internal events in Event Sourcing patterns are what we've been focusing on, such as the UserCreatedEvent in your example. Keep these explicit with an IEvent marker interface.
While IEvents are still published on the bus, IDomainEvents are more notably for larger, external-to-the-domain notifications that don't affect the state of an aggregate per se.
So...
CreateUser (ICommand)
^- CreateUserCommandHandler
UserCreated (IEvent)
^- UserCreatedEventHandler
SendNewUserEmail (ICommand)
^- SendNewUserEmailCommandHandler
NewUserEmailSent (IDomainEvent)
^- UserRegistrationService or some other AC
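A rough sketch of that chain - ICommand, IEvent and IDomainEvent are plain marker interfaces defined here just for illustration, not types from any specific library:

using System;

public interface ICommand { }
public interface IEvent { }        // internal, event-sourced: changes aggregate state
public interface IDomainEvent { }  // external notification: does not change aggregate state

public record CreateUser(Guid UserId, string Email) : ICommand;
public record UserCreated(Guid UserId, string Email) : IEvent;
public record SendNewUserEmail(Guid UserId, string Email) : ICommand;
public record NewUserEmailSent(Guid UserId) : IDomainEvent;

// The event handler reacts to the IEvent by issuing the next ICommand.
public class UserCreatedEventHandler
{
    private readonly Action<ICommand> send; // e.g. a wrapper around Bus.Send

    public UserCreatedEventHandler(Action<ICommand> send) => this.send = send;

    public void Handle(UserCreated e) => send(new SendNewUserEmail(e.UserId, e.Email));
}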
I am still pretty new to event sourcing myself, but I would guess that you can have the UserRegistrationService register on the bus to listen for the SendNewUserEmail ICommand.
Either way you go, I would concentrate on creating additional commands/events for sending an email and for the email having been sent. Then, later on, you can inspect the transaction log to see when an email was queued to send, how long it took to send, whether there were any retries, how many were sent at the same time and whether that caused delays (datetime diffs) revealing bottlenecks, whether to install a queue for sending emails and break it out into a smaller independent service, etc.
We've created a web application that is an e-book reader. So one thing to keep in mind is that the domain is not exactly that of reading a physical book. We are now trying to gather users' reading behavior by storing information about the e-book pages accessed by our users. Since this information goes to a data warehouse, we thought raising an event from the book controller was the right way to do it.
bus.Publish()
But we are not sure if it should be a publish or a send, since there is really only one consumer of this event and that is our business intelligence team. We've also read that it is not advisable to publish from the web app (http://www.make-awesome.com/2010/10/why-not-publish-nservicebus-messages-from-a-web-application/). So now the alternative is to use bus.Send(RecordPageAccessedCommand)
But the above command does not change our application state in any way. So is it truly a command? I have a feeling that the mistake we are making is taking NServiceBus's features (Publish, Send) and trying to equate them with what a command or event is.
Please let me know what the solution to this is.
Based on the information you provided, I would recommend "sending" to your endpoint.
Sending a command implies that the endpoint handling the message should do something. In your case, recording that the page was accessed is the thing the endpoint should do.
Publishing an event implies that you are notifying 0..n subscribers that something occurred. You could publish an event from your command handler if some other service in your system was interested in the fact that a page was accessed. The key point here is that it's not a "fact" until you've recorded it.
I've found that consumers tend to grow once data is available. Having the ability to publish an event from your command handler will make it trivial to notify new consumers without changing/risking your existing code base.
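As a sketch of that idea, following the shape of the older NServiceBus handler API (IHandleMessages<T> with a void Handle method and an injected IBus) - treat it as pseudocode rather than a version-specific sample; the message types and store interface are invented, and message marker interfaces / conventions are omitted for brevity:

using System;
using NServiceBus; // assumes an older NServiceBus API exposing IBus and IHandleMessages<T>

public class RecordPageAccessedCommand { public Guid UserId; public string BookId; public int PageNumber; }
public class PageAccessedEvent         { public Guid UserId; public string BookId; public int PageNumber; }
public interface IPageAccessStore      { void Save(Guid userId, string bookId, int pageNumber); }

public class RecordPageAccessedHandler : IHandleMessages<RecordPageAccessedCommand>
{
    private readonly IPageAccessStore store;
    private readonly IBus bus;

    public RecordPageAccessedHandler(IPageAccessStore store, IBus bus)
    {
        this.store = store;
        this.bus = bus;
    }

    public void Handle(RecordPageAccessedCommand message)
    {
        // It only becomes a "fact" once it has been recorded...
        store.Save(message.UserId, message.BookId, message.PageNumber);

        // ...and only then is the event published for any current or future subscribers.
        bus.Publish(new PageAccessedEvent { UserId = message.UserId, BookId = message.BookId, PageNumber = message.PageNumber });
    }
}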
The RecordPageAccessedCommand is a command as it is commanding the system to do something, in this case, record that a page has been accessed.
If I've understood your scenario correctly, a message should be sent from your controller to the "Business Intelligence Team Service", telling the system to record that a page has been accessed. This service would store this information and would be the owner/technical authority of this information.
No other services should store or require this information in its pure form; they can, however, subscribe to events from this service. In a highly contrived scenario, for example, when a user reads 1000 pages, the "Business Intelligence Team Service" can publish an event that 1000 pages have been read, i.e. Bus.Publish(), which may be handled by a billing service that gives the user a discount on their next purchase.
The data warehouse can have access to this information stored in your "Business intelligence Team Service" as it would fall under IT/OPS.
In our scenario I'm thinking of using the pub sub technique. However I don't know which is the better option.
Option 1:
When it is called externally, a web service of ours will publish a message that something has happened: ExternalPersonCreatedMessage.
This message will contain a field listing the destinations that should process the message (multiple allowed).
Various subscribers will subscribe. These subscribers will filter the message to see if any action is required by checking the destination field.
Option 2:
A web service of ours will parse the incoming call and publish specific types of messages depending on the destinations supplied in the field. i.e. many Destination[n]PersonCreatedMessage messages would be created.
Subscribers will subscribe only to the specific message they care about, i.e. they never have to filter messages.
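Sketched as plain message types (names made up), the two options would look something like this:

using System;

// Option 1: one generic event; every subscriber receives it and filters on the destinations field.
public class ExternalPersonCreatedMessage
{
    public Guid PersonId;
    public string[] Destinations; // e.g. { "Crm", "Billing" } - each subscriber checks for itself
}

// Option 2: the publishing web service translates the incoming call into specific message types,
// so each subscriber only subscribes to the type it cares about and never filters.
public class CrmPersonCreatedMessage     { public Guid PersonId; }
public class BillingPersonCreatedMessage { public Guid PersonId; }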
QUESTIONS
Which of the above is the better option, and why? And how do I stop myself from making RequestMessages? From what I've read/seen, I should be structuring this in the form of PersonCreated, PersonDeleted, i.e. SOMETHING HAS HAPPENED, and NOT in the REQUEST SOMETHING TO HAPPEN form, such as CreatePerson or DeletePerson.
Are my thoughts correct? I've been looking for guidance on how to structure messages and make sure I don't go down a wrong path, but have found no guidance out there on do's and don'ts. Can anyone help and guide? I want to try and get this correct from the off :)
Based on the integration scenario in the referenced article, it appears to me that you may need a Saga to complete the workflow of accept message -> operate on message -> send confirmation. In the case that the confirmation is sent immediately after the operation, you could use NSB's message handler pipeline feature, which allows you to chain handlers in a specified sequence such as...
First<FilterHandler>.Then<DoWorkHandler>().AndThen<SendConfirmationHandler>();
In terms of the content filtering, you can do this, although you incur some transport overhead, meaning the queue will have to accept the message and the process will always call the first handler on every message (you can short-circuit the above pipeline at any point). It may be the case that what you really want is a Distributor/Worker setup where all Workers are the same and you can handle some load.
If you truly have different endpoints with completely different logic, then I would have the Publisher process (which only accepts and publishes messages) do the work of translating the inbound message to something else a Subscriber can then be interested in. If you then find that a given published message only ever has one Subscriber, then you don't need to Publish at all; you just need to Bus.Send() to the correct endpoint.
The way NServiceBus handles pub-sub is more like your option two.
A publisher service has an input queue and a subscription store.
A subscriber service has an input queue.
The subscriber, on start-up, will send a subscription message to the input queue of the publisher.
The subscription message contains the type of message the subscriber is interested in and the subscriber's queue address.
The publisher records the subscription in the subscription store.
The publisher receives a message.
The publisher evaluates the message type against the list of subscriptions.
For each match found, the publisher sends the message to the subscriber's queue address.
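To make the subscriber side concrete, a minimal handler in the older NServiceBus handler shape might look like this (the subscription message from the steps above is sent by the framework at start-up; you only declare the handler, and the message type name here is invented):

using System;
using NServiceBus; // assumes an older NServiceBus API with IEvent and IHandleMessages<T>

public class PersonCreatedMessage : IEvent
{
    public Guid PersonId { get; set; }
}

public class PersonCreatedHandler : IHandleMessages<PersonCreatedMessage>
{
    public void Handle(PersonCreatedMessage message)
    {
        // Runs on the subscriber endpoint; the publisher located this queue via its subscription store.
        Console.WriteLine($"Person {message.PersonId} was created");
    }
}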
In my opinion, you should stop thinking about destinations. Messages are messages. They should not have any inherent destination information in them. The subscription mechanism defines the addressing/routing requirements for the solution.
Disclaimer: This is a follow-on question from my other question about NServiceBus which was answered very thoroughly.
My current question is this: if a website is built to be 'dumb', as the article referred to above suggests, then how does the following scenario work?
A user registers on a website by filling out a form with relevant details. When the user clicks the 'submit' button on the form the web application takes the form data and creates a message which it sends to the application tier using NServiceBus and Bus.Send(). The application tier goes about the business of creating the new user and publishing the event that the user has been created (Bus.Publish()) so that other processes can do their thing (email the new user, add the user to a search index, etc, etc).
Now, since the web application in this scenario relies entirely on the application tier for the creation of the new user instance, how does it get to know about the user's id? If I didn't use NServiceBus in this scenario but, rather, let the website issue an in-process call to a DAL I'd use NHibernate's GuidComb() strategy to create the identifier for the new user before persisting the new row in the database. If the message handler application which receives the command to create a new user (in the current scenario) uses the same strategy, how is the userId communicated back to the web application?
Do I have to apply a different strategy for managing identifiers in a scenario such as this?
You're free to come up with an ID to use as a correlation identifier by putting it in your message in the web application, allowing it to be carried through whatever processes the message initiates.
That way, you can correlate the request with other events around your system, as long as they remember to supply the correlation ID.
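For example (plain C#, names invented), the web application can simply generate the identifier itself before sending the command:

using System;

public class RegisterUserCommand
{
    public Guid UserId;   // generated in the web application, e.g. Guid.NewGuid() or a comb GUID
    public string Email;
    public string Name;
}

// In the controller (sketch):
// var command = new RegisterUserCommand { UserId = Guid.NewGuid(), Email = email, Name = name };
// Bus.Send(command);                                               // fire-and-forget to the app tier
// return RedirectToAction("Pending", new { id = command.UserId }); // the web app already knows the id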
But it sounds like you want your user ID to be fed back to you in the same web request - that cannot easily be done with an asynchronous backend, which is what messaging gives you.
Wouldn't it be acceptable to send an email to the user when the user has been created, containing a (secret) link to some kind of gateway, that resumes the user's session?
Wouldn't the UI be able to listen to the bus for the "user created" event? Then you could correlate either by having the event include some kind of event ID linking back to the "user creation requested" event, or against some other well-known data in the event (like the user name). Though you would probably also have to listen to multiple events, such as a "user creation failed" event.
This is not unlike normal AJAX processing in a web browser. Technically, you don't block on the out-of-band call to the web server; you invoke the call and asynchronously wait for a callback.