Disclaimer: This is a follow-on question from my other question about NServiceBus which was answered very thoroughly.
My current question is this: if a website is built to be 'dumb', as the article referred to above suggests, then how does the following scenario work?
A user registers on a website by filling out a form with relevant details. When the user clicks the 'submit' button on the form the web application takes the form data and creates a message which it sends to the application tier using NServiceBus and Bus.Send(). The application tier goes about the business of creating the new user and publishing the event that the user has been created (Bus.Publish()) so that other processes can do their thing (email the new user, add the user to a search index, etc, etc).
Now, since the web application in this scenario relies entirely on the application tier for the creation of the new user instance, how does it get to know about the user's id? If I didn't use NServiceBus in this scenario but, rather, let the website issue an in-process call to a DAL I'd use NHibernate's GuidComb() strategy to create the identifier for the new user before persisting the new row in the database. If the message handler application which receives the command to create a new user (in the current scenario) uses the same strategy, how is the userId communicated back to the web application?
Do I have to apply a different strategy for managing identifiers in a scenario such as this?
You're free to come up with an ID to use as a correlation identifier by putting it in your message in the web application, allowing it to be carried around by whatever processes are initiated by the message.
That way, you can correlate the request with other events around your system, as long as they remember to supply the correlation ID.
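To make that concrete, here is a minimal sketch of the web application supplying the identifier itself and carrying it in the command. The command, controller and property names are made up for illustration; Bus.Send follows the older NServiceBus style from your question:

```csharp
using System;
using NServiceBus;

// Hypothetical command; the web application supplies the identifier itself.
public class CreateNewUserCommand : ICommand
{
    public Guid UserId { get; set; }   // doubles as the correlation identifier
    public string Name { get; set; }
    public string Email { get; set; }
}

public class RegistrationController
{
    private readonly IBus _bus;   // injected NServiceBus bus (older API, matching Bus.Send above)

    public RegistrationController(IBus bus) => _bus = bus;

    public Guid Register(string name, string email)
    {
        var userId = Guid.NewGuid();   // or a comb GUID if index locality matters
        _bus.Send(new CreateNewUserCommand { UserId = userId, Name = name, Email = email });
        return userId;                 // the web app knows the ID without waiting for the back end
    }
}
```

The back end then uses that ID when it creates the user, and every event it publishes can carry it along.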
But it sounds like you want your user ID to be fed back to you in the same web request - that cannot easily be done with an asynchronous backend, which is what messaging gives you.
Wouldn't it be acceptable to send an email to the user when the user has been created, containing a (secret) link to some kind of gateway, that resumes the user's session?
Wouldn't the UI be able to listen to the bus for the "user created" event? You could then correlate either by having the event include some kind of event ID linking back to the "user creation requested" event, or against some other well-known data in the event (like the user name). Though you would probably also have to listen for multiple events, such as a "user creation failure" event.
This is not unlike normal AJAX processing in a web browser. Technically, you don't block on the out-of-band call back to the web server; you invoke the call and asynchronously wait for a callback.
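A rough sketch of that idea, using the older NServiceBus handler signature that matches Bus.Send/Bus.Publish; the event and the dictionary used to track completed registrations are invented for illustration:

```csharp
using System;
using System.Collections.Concurrent;
using NServiceBus;

// Hypothetical event published by the application tier.
public class UserCreatedEvent : IEvent
{
    public Guid CorrelationId { get; set; }   // the ID the web app attached to its command
    public Guid UserId { get; set; }
}

// Handler hosted with the web application.
public class UserCreatedHandler : IHandleMessages<UserCreatedEvent>
{
    // Stand-in for however the web tier tracks completed registrations;
    // an AJAX poll or a SignalR push would read from this.
    public static readonly ConcurrentDictionary<Guid, Guid> CompletedRegistrations =
        new ConcurrentDictionary<Guid, Guid>();

    public void Handle(UserCreatedEvent message)
    {
        CompletedRegistrations[message.CorrelationId] = message.UserId;
    }
}
```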
Related
I have a working monolith application (deployed in a container), to which I want to add a notifications feature as a separate microservice.
I'm planning for the monolith to emit events to a message bus (RabbitMQ), where they will be received by the new service, which will send the notification to the user. In order to compose a notification, it will need other information about the user from the monolith, so it will call the monolith's REST API to obtain it.
The problem is, that access to the monolith's API requires authentication in form of a token. I was thinking of:
using the secret from the monolith to issue a never-expiring token - I don't think this is a great idea from a security perspective, and I also know that keys sometimes rotate, in which case the token would become invalid eventually anyway
using the message bus to retrieve the information - this does not seem like a good idea either, as the asynchrony would make it very complicated
providing all the info the notification service needs in the event - this would make them more coupled together, and moreover, I plan to also send notifications based on the state of the monolith that are not triggered by an event
removing the authentication from the monolith and implementing it differently (not sure how yet)
My question is, what are some of the good ways this kind of problem can be solved, and also, having just started learning about microservices, is what I am trying to do right in the first place?
When dealing with internal security you should always consider the deployment and how the APIs are exposed to the outside world; an API gateway might be used simply to make internal APIs impossible to reach from outside. In that case, a fixed token might be good enough to ensure that the client is authorized.
In general, though, I would suggest looking into OAuth2 or a JWT-based solution as it helps to validate the identities of the calling system as well as their access grants.
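For instance, the notification service could use the OAuth2 client credentials flow to obtain a short-lived token before calling the monolith. A sketch follows; the token endpoint URL, client ID, scope and monolith URL are placeholders for whatever your identity provider and API actually use:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

public class MonolithApiClient
{
    private readonly HttpClient _http = new HttpClient();

    // Exchange client credentials for a short-lived access token.
    public async Task<string> GetAccessTokenAsync(string clientSecret)
    {
        var response = await _http.PostAsync("https://identity.example.com/connect/token",
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "client_credentials",
                ["client_id"] = "notification-service",
                ["client_secret"] = clientSecret,     // from configuration, never hard-coded
                ["scope"] = "monolith.api"
            }));

        var payload = await response.Content.ReadAsStringAsync();
        return JsonDocument.Parse(payload).RootElement.GetProperty("access_token").GetString()!;
    }

    // Call the monolith's user endpoint with the token; cache and refresh the token as it expires.
    public async Task<string> GetUserAsync(string token, Guid userId)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, $"https://monolith.internal/api/users/{userId}");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
        var response = await _http.SendAsync(request);
        return await response.Content.ReadAsStringAsync();
    }
}
```

Because the token is short-lived, key rotation on the monolith stops being a problem; the service simply requests a new token.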
As for your architecture doubts, you need to consider the following scenarios when building out the solution:
The remote call can fail, at any time for unknown reasons, as such you shouldn't acknowledge the notification event until you're certain that the notification has been processed successfully.
As you've mentioned RabbitMQ, you should aim to keep the notification queue as small as possible, to that effect, a cache that contains the user details might help speed things along (and help you reduce the chance of failure due to the external system not being available).
If your application sends a lot of notifications to potentially millions of different users, you could consider having a read-only database replica of the users which is accessible to the notification service, and reading directly from the database cluster in batches. This reduces the load on the monolith and shifts it to the database layer.
I've been trying for a few days, and struggling with a best practice for this - any ideas?
Contrived private message example:
Multiple users logged into Blazor Server
Server subscribes to an event bus/message queue to receive NewMessageEvent
Only the user that is the intended recipient should be updated.
I can create a singleton to subscribe to the message queue.
I can then use a singleton that I inject into the required Blazor component to add the message to a list and call StateHasChanged.
That would update all connected clients (not ideal; the service injected into the components should be scoped).
Options so far:
I could verify the recipient for the message inside the Blazor component, but it sort of feels like the wrong place
Subscribe to the queue once per circuit (The queue still holds all messages though)
What I was hoping to do, was possibly create a service locator based on the circuit Id and connected userId using a circuit handler, and call a function like: NewMessageReceivedFor(userId), if that finds a matched circuit, then call the scoped service function.
This means I would need to call a scoped service from a singleton (which isn't allowed by DI through the constructor), using some form of GetRequiredService - but can I get that scoped service by specifying a circuit Id?
I currently feel I'm either 90% there, or in the wrong forest, let alone up the wrong tree.
You could have a Singleton service for dealing with all messages, and then a Scoped service that subscribes to an event on the Singleton and then only triggers its own event if the message is for the current user (you'd need a service registered as Scoped to get the current user ID).
That way each user will only get a notification when the message is meant for them.
Don't forget to implement IDisposable on the Scoped service, so you can unsubscribe from the Singleton service.
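A minimal sketch of that shape; the type names, the ChatMessage record and the way the current user ID reaches the scoped service are assumptions, not anything from your code:

```csharp
using System;

// Singleton: receives every message from the bus/queue and raises a .NET event.
public class MessageBusListener
{
    public event Action<ChatMessage> MessageReceived;

    // Called by whatever subscribes to RabbitMQ / the event bus.
    public void OnNewMessage(ChatMessage message) => MessageReceived?.Invoke(message);
}

public record ChatMessage(string RecipientUserId, string Text);

// Placeholder: however you resolve the logged-in user for the current circuit.
public interface ICurrentUserAccessor
{
    string UserId { get; }
}

// Scoped: one instance per circuit, forwards only messages meant for the connected user.
public class UserMessageService : IDisposable
{
    private readonly MessageBusListener _listener;
    private readonly string _currentUserId;

    public event Action<ChatMessage> MessageForCurrentUser;

    public UserMessageService(MessageBusListener listener, ICurrentUserAccessor user)
    {
        _listener = listener;
        _currentUserId = user.UserId;
        _listener.MessageReceived += Filter;   // subscribe to the singleton
    }

    private void Filter(ChatMessage message)
    {
        if (message.RecipientUserId == _currentUserId)
            MessageForCurrentUser?.Invoke(message);
    }

    public void Dispose() => _listener.MessageReceived -= Filter;   // unsubscribe, as noted above
}
```

Register the listener with AddSingleton and the user service with AddScoped; the component injects the scoped service, subscribes to MessageForCurrentUser and calls InvokeAsync(StateHasChanged) when it fires.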
I'm interested in practical scenarios of authentication/login in web application when CQRS pattern is used to build the system.
Say we are using HTTP services for commands/queries, and authentication with JWT (or any other authentication token).
We send command LogInUser with credentials (HTTP request).
Server command handler checks credentials, writes events in the store (if using Event Sourcing).
What then? What should we return as the result of the command? Just an OK result with the authToken? Should the client then query the state from the read service? In that case we just make the whole process longer. And this concern actually applies not only to authentication scenarios but also to other scenarios where we send a command and expect to get the result of its execution as soon as possible.
I would just like to hear from people who have implemented such things. I want to understand possible practical data/request flows for authentication using CQRS.
Since you are using CQRS, you have decided to separate writing to the application from reading from the application.
To write to the application, you use commands.
To read from the application, you either wait for events, or you query the read model.
This diagram shows the relation between the different options:
(The diagram is taken from the documentation of wolkenkit, a CQRS and event-sourcing framework for JavaScript and Node.js.)
So, when you send your LogInUser command, the command itself does not return anything (of course, when using HTTP there must be a response, but it should just be a 200 OK, so that you can verify that the server received the command and will care about it sooner or later).
Now the server processes the login, verifies the credentials that were sent, and so on, and generates an appropriate UserLoggedIn event. This event gets stored in the event store, and is then sent to the read model.
The read model does two things with this event:
It simply forwards it to the client.
It updates any denormalized tables you may have that are interested in this event, so you can query them later.
So your client has two options:
It can wait for the event after having sent the command. Once the event is received, the client has the JWT.
It can query the read model until a given record has been updated.
As you need to make sure that only the sender of the command is able to receive the JWT, option 1 is actually the only viable way. You can make sure that an event gets only delivered to the client that sent the appropriate command, but you can't have a table that contains all JWTs where people can only read their JWTs before being authenticated. With the read model, you have a chicken-and-egg problem here.
So, to cut a long story short: The client should wait for the appropriate event, and the event contains the JWT. That's it.
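A compressed sketch of the server side of that flow, in plain C# with no particular framework; the credential check and JWT creation are passed in as placeholders, not a specific library's API:

```csharp
using System;

public record LogInUser(string Username, string Password, Guid CommandId);
public record UserLoggedIn(Guid CommandId, string Token);
public record UserLoginFailed(Guid CommandId);

public class LogInUserHandler
{
    private readonly Func<string, string, bool> _credentialsValid;  // e.g. a password-hash check
    private readonly Func<string, string> _issueJwt;                // e.g. your JWT library

    public LogInUserHandler(Func<string, string, bool> credentialsValid, Func<string, string> issueJwt)
    {
        _credentialsValid = credentialsValid;
        _issueJwt = issueJwt;
    }

    // Returns the event to append to the event store and forward to the read model.
    // The HTTP endpoint that accepted the command has already replied 200 OK by this point.
    public object Handle(LogInUser command) =>
        _credentialsValid(command.Username, command.Password)
            ? new UserLoggedIn(command.CommandId, _issueJwt(command.Username))
            : (object)new UserLoginFailed(command.CommandId);
}
```

The client keeps the CommandId it sent and waits, on the same connection it used to send the command (for example a WebSocket or SSE channel), for a UserLoggedIn or UserLoginFailed event carrying that ID; only that connection ever sees the token.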
I want to develop an Emailer microservice which provides a SendEmail command. Inside the microservice I have an aggregate in mind which represents the whole email process with the following events:
Aggregate Email:
(EmailCreated)
EmailDeliveryStarted
EmailDeliveryFailed
EmailRecipientDelivered when one of the recipients received the email
EmailRecipientDeliveryFailed when one of the recipients could not receive the email
etc.
In the background the email delivery service SendGrid is used; my microservice works like a facade for it, with my own events. The incoming webhooks from SendGrid are translated to proper domain events.
The process would look like this:
Command SendEmail ==> EmailCreated
EmailCreatedHandler ==> Email.Send (to SendGrid)
Incoming webhook ==> EmailDeliveryStarted
Further webhooks ==> EmailRecipientDelivered, EmailRecipientDeliveryFailed, etc.
Of course, if I wanted to replace the external web service and it applied other messaging strategies, I would adapt to that but keep my domain model with its events. I want the client not to have to worry about the concrete email delivery strategy.
Now the crucial problem I face: I want to accept the SendEmail commands even if SendGrid is not available at that very moment, which entails storing the whole email data (with attachments) and then, with an event handler, start the sending process. On the other hand I don't want to bloat my initial EmailCreated event with this BLOB data. And I want to be able to clean up this data after SendGrid has accepted my send email request.
I could also try both sending the email to SendGrid and storing an initial EmailDeliveryStarted event in the SendEmail command handler. But this feels like a two-phase commit: if SendGrid accepted my call but somehow my repository was unable to store the EmailDeliveryStarted event, the client would be informed that something went wrong and would try again, which would be a disaster.
So I don't know how to design my aggregate and, more importantly, my EmailCreated event, since it should not contain BLOB data like attachments.
I found this question interesting and it took a little bit to reflect on that.
First things first: I do not see an obligation to store the email attachments in the event. You can simply store the fully qualified names of the attached files. That would keep the event log smaller and perhaps rule out the need for "deleting" the event (and you know that, in an event-sourced model, you should not do that).
Secondly, assuming that the project is not building an e-mail client, I don't see a need to model an e-mail as an aggregate root. I see AggregateRoots as representing business-relevant concepts, not utility tasks like sending an e-mail. You could model this far more easily using a database table / document that keeps track of what has been sent and what hasn't yet. I see sending e-mails through SendGrid as a reaction to a business event, certainly to be tracked, but not an AggregateRoot in its own right.
Lastly, if you want to accept SendEmail commands even when SendGrid is offline, the aggregate can emit an EmailQueued event. The EmailQueuedHandler then produces a line in the read model of the process in charge, which takes all the emails in the queued state and batches them for sending. If the communication with SendGrid fails, you can either:
Do nothing; the sender process will pick up the email on the next attempt
Emit an EmailSendFailed event, intercepted by a handler that increases the retry count (if you want to stop after a number of retries).
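To make the queueing idea concrete, a rough sketch follows; every name in it is invented for illustration, and attachments are referenced by location rather than embedded:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Event stored when a SendEmail command is accepted; no BLOBs, only references.
public record EmailQueued(
    Guid EmailId,
    string From,
    IReadOnlyList<string> Recipients,
    string Subject,
    string BodyLocation,                     // e.g. a blob-store key for the body
    IReadOnlyList<string> AttachmentPaths);  // fully qualified file names / storage keys

// Placeholder abstractions for this sketch.
public interface IEmailQueueReadModel
{
    Task<IReadOnlyList<EmailQueued>> GetQueuedAsync(int batchSize);
    Task MarkSentAsync(Guid emailId);
    Task IncrementRetryCountAsync(Guid emailId);
}

public interface ISendGridGateway
{
    Task SendAsync(EmailQueued email);   // thin facade over the real SendGrid client
}

// Background sender: takes queued emails from the read model and pushes them to SendGrid.
public class QueuedEmailSender
{
    private readonly IEmailQueueReadModel _queue;
    private readonly ISendGridGateway _sendGrid;

    public QueuedEmailSender(IEmailQueueReadModel queue, ISendGridGateway sendGrid)
    {
        _queue = queue;
        _sendGrid = sendGrid;
    }

    public async Task ProcessBatchAsync()
    {
        foreach (var email in await _queue.GetQueuedAsync(batchSize: 50))
        {
            try
            {
                await _sendGrid.SendAsync(email);
                await _queue.MarkSentAsync(email.EmailId);            // attachments can be cleaned up now
            }
            catch
            {
                await _queue.IncrementRetryCountAsync(email.EmailId); // or simply leave it queued for the next run
            }
        }
    }
}
```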
Hope that is sufficiently clear and best of luck with your project.
We've created a web application that is an e-book reader. One thing to keep in mind is that the domain is not exactly that of reading a physical book. We are now trying to gather users' reading behavior by storing information about e-book pages accessed by our users. Since this information goes to a data warehouse, we thought raising an event from the book controller was the right way to do it.
bus.Publish()
But we are not sure if it should be a publish or a send since there is really only one consumer to this event and that is our business intelligence team. We've also read that it is not advisable to publish from the web app (http://www.make-awesome.com/2010/10/why-not-publish-nservicebus-messages-from-a-web-application/). So now the alternative is to use bus.Send(RecordPageAccessedCommand)
But the above command does not change our application state in any way. So is it truly a command? I have a feeling that the mistake we are making is taking NServiceBus's features (Publish, Send) and trying to equate them with what a command or event is.
Please let me know what the solution to this is.
Based on the information you provided, I would recommend "sending" to your endpoint.
Sending a command implies that the endpoint handling the message should do something. In your case, recording that the page was accessed is the thing the endpoint should do.
Publishing an event implies that you are notifying 0..n subscribers that something occurred. You could publish an event from your command handler if some other service in your system was interested in the fact that a page was accessed. The key point here is that it's not a "fact" until you've recorded it.
I've found that consumers tend to grow once data is available. Having the ability to publish an event from your command handler will make it trivial to notify new consumers without changing/risking your existing code base.
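As a sketch of that shape (message, endpoint and event names are made up; the handler uses the older NServiceBus style that matches Bus.Send/Bus.Publish from your question):

```csharp
using System;
using NServiceBus;

// Command sent from the web application's book controller.
public class RecordPageAccessedCommand : ICommand
{
    public Guid UserId { get; set; }
    public string BookId { get; set; }
    public int PageNumber { get; set; }
    public DateTime AccessedAtUtc { get; set; }
}

// Event the owning endpoint may publish once the access has been recorded.
public class PageAccessedEvent : IEvent
{
    public Guid UserId { get; set; }
    public string BookId { get; set; }
}

// Handler in the endpoint that owns the reading-behavior data.
public class RecordPageAccessedHandler : IHandleMessages<RecordPageAccessedCommand>
{
    public IBus Bus { get; set; }   // property injection, as in older NServiceBus versions

    public void Handle(RecordPageAccessedCommand message)
    {
        // 1. Store the access record; only now is it a "fact".
        // 2. Optionally publish an event for any other interested subscribers.
        Bus.Publish(new PageAccessedEvent { UserId = message.UserId, BookId = message.BookId });
    }
}
```

The web controller then just does Bus.Send(new RecordPageAccessedCommand { ... }), with routing configured so the command reaches this endpoint.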
The RecordPageAccessedCommand is a command as it is commanding the system to do something, in this case, record that a page has been accessed.
If I've understood your scenario correctly, a message should be sent from your controller to the "Business intelligence Team Service", telling the system to record that a page has been accessed. This service would store the information and would be its owner/technical authority.
No other services should store or require this information in its pure form; they can, however, subscribe to events from this service. In a highly contrived scenario, for example, when a user reads 1000 pages the "Business intelligence Team Service" can publish an event that 1000 pages have been read, i.e. Bus.Publish(), which may be handled by a billing service that gives the user a discount on their next purchase.
The data warehouse can have access to this information stored in your "Business intelligence Team Service" as it would fall under IT/OPS.