How should I set up EventStore's RavenPersistence in a multi-tenant application?
I have an Azure worker role that processes commands received through service bus.
Each message may belong to a different tenant. The actual tenant is sent in the message header, which means that I know which database to use only after I receive each message.
I'm using CommonDomain so my command handlers have IRepository injected.
Right now I build a new store while processing each message (setting DefaultDatabase), but I have a feeling this may not be the most efficient approach.
Is there a way to create a single event store and then just switch databases?
If not, can I cache the stores for each tenant?
Do you know about any multi-tenant sample that uses EventStore with RavenDB?
We do exactly the same: spawn a new instance of EventStore for every request. JOliver's EventStore was designed without multi-tenancy support in mind, so this is the only way.
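If building a store per message ever becomes a bottleneck, caching one store per tenant is a possible middle ground. A minimal sketch (the store construction is left as a delegate, so you keep whatever wireup and DefaultDatabase call you already use; the namespace comment is an assumption about your EventStore version):

```csharp
using System;
using System.Collections.Concurrent;
using EventStore; // the JOliver EventStore namespace (NEventStore in later versions)

// Minimal per-tenant cache sketch. The factory delegate wraps whatever wireup you already
// use today (including the per-tenant DefaultDatabase call), so no specific wireup API is
// assumed here; each store is built once and reused for subsequent messages of that tenant.
public class TenantStoreCache : IDisposable
{
    private readonly ConcurrentDictionary<string, Lazy<IStoreEvents>> _stores =
        new ConcurrentDictionary<string, Lazy<IStoreEvents>>();
    private readonly Func<string, IStoreEvents> _buildStoreForTenant;

    public TenantStoreCache(Func<string, IStoreEvents> buildStoreForTenant)
    {
        _buildStoreForTenant = buildStoreForTenant;
    }

    public IStoreEvents GetStore(string tenantId)
    {
        // Lazy<T> ensures the store for a tenant is wired up only once,
        // even when messages for the same tenant are processed concurrently.
        return _stores.GetOrAdd(tenantId,
            id => new Lazy<IStoreEvents>(() => _buildStoreForTenant(id))).Value;
    }

    public void Dispose()
    {
        foreach (var store in _stores.Values)
        {
            if (store.IsValueCreated)
            {
                store.Value.Dispose();
            }
        }
    }
}
```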
Using ASP.NET Core microservices, both API and worker roles, running in Azure Service Fabric.
We use Service Bus to do inter-microservice communication.
Consider the following situation;
Each microservice holds a local, in-mem copy of cached objects of type X.
One worker role is responsible for processing a message that would result in a rebuild of this cache for all instances.
We have multiple nodes, and thus multiple instances of each microservice in Service Fabric.
What would be the best approach to trigger this update?
I thought of the following approaches:
Calling SF for all service replicas and firing an HTTP POST on each replica to trigger the update
This, however, does not seem to work, as worker roles don't expose any APIs
Creating a dedicated 'broadcast' topic that each instance registers a subscription on, thus using a pub/sub mechanism
I fail to see how I can make sure each instance has its own subscription, and also how to avoid ending up with ghost subscriptions when something like a crash happens
You can use the OSS library Service Fabric Pub Sub for this.
Every service partition can create its own subscription for messages of a given type.
It uses the partition identifier for subscriptions, so crashes and moves won't result in ghost subscriptions.
It uses regular SF remoting, so you won't need to expose APIs for messaging.
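If you'd rather stay with your broadcast-topic idea instead, one way to avoid the ghost subscriptions you mention is to give each instance its own subscription with AutoDeleteOnIdle set, so the broker removes subscriptions of crashed instances automatically. A rough sketch (topic name, subscription naming, and the timeout are placeholders):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Management;

public static class BroadcastSubscriber
{
    // Each instance calls this on startup with its own instanceId (e.g. partition/replica id).
    public static async Task<SubscriptionClient> SubscribeAsync(
        string connectionString, string topicPath, string instanceId)
    {
        var management = new ManagementClient(connectionString);
        var subscriptionName = $"cache-rebuild-{instanceId}";   // unique per instance

        if (!await management.SubscriptionExistsAsync(topicPath, subscriptionName))
        {
            await management.CreateSubscriptionAsync(
                new SubscriptionDescription(topicPath, subscriptionName)
                {
                    // The broker deletes the subscription if this instance stops receiving
                    // (e.g. after a crash), so no ghost subscriptions are left behind.
                    AutoDeleteOnIdle = TimeSpan.FromMinutes(10)
                });
        }

        var client = new SubscriptionClient(connectionString, topicPath, subscriptionName);
        client.RegisterMessageHandler(
            (message, cancellationToken) =>
            {
                RebuildLocalCache();   // placeholder for the in-memory cache refresh
                return Task.CompletedTask;
            },
            new MessageHandlerOptions(args => Task.CompletedTask) { AutoComplete = true });

        return client;
    }

    private static void RebuildLocalCache()
    {
        // reload the cached objects of type X from their source of truth
    }
}
```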
I have a working monolith application (deployed in a container), for which I want to add notifications feature as a separate microservice.
I'm planning for the monolith to emit events to a message bus (RabbitMQ), where they will be received by the new service, which will send the notification to the user. In order to compose a notification, it will need additional information about the user from the monolith, so it will call the monolith's REST API to obtain it.
The problem is, that access to the monolith's API requires authentication in form of a token. I was thinking of:
using the secret from the monolith to issue a never-expiring token - I don't think this is a great idea from the security perspective, and also I know that sometimes the keys rotate, in which case the token would become invalid eventually anyway
using the message bus to retrieve the information - this does not seem like a good idea either, as the asynchrony would make it very complicated
providing all the info the notification service needs in the event - this would couple them more tightly, and moreover, I plan to also send notifications based on the state of the monolith that are not triggered by an event
removing the authentication from the monolith and implementing it differently (not sure how yet)
My question is: what are some good ways to solve this kind of problem, and also, having just started learning about microservices, is what I am trying to do the right approach in the first place?
When dealing with internal security, you should always consider the deployment and how the APIs are exposed to the outside world; an API gateway might be used to simply make internal APIs unreachable from outside. In that case, a fixed token might be good enough to ensure that the client is authorized.
In general, though, I would suggest looking into OAuth2 or a JWT-based solution as it helps to validate the identities of the calling system as well as their access grants.
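As a concrete example of that, the notification service could request short-lived access tokens via the OAuth2 client-credentials grant instead of holding a never-expiring token. A minimal sketch (the token endpoint, client id, secret, and scope are placeholders for whatever identity provider fronts the monolith):

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class MonolithTokenClient
{
    private readonly HttpClient _http = new HttpClient();

    public async Task<string> GetAccessTokenAsync()
    {
        var response = await _http.PostAsync(
            "https://identity.example.com/connect/token",       // placeholder token endpoint
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "client_credentials",
                ["client_id"] = "notification-service",         // placeholder credentials
                ["client_secret"] = "<secret from configuration>",
                ["scope"] = "monolith.api"
            }));

        response.EnsureSuccessStatusCode();

        using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        // Short-lived token; cache it and refresh before expiry rather than minting one per call.
        return json.RootElement.GetProperty("access_token").GetString();
    }
}
```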
As for your architecture doubts, you need to consider the following scenarios when building out the solution:
The remote call can fail at any time for unknown reasons; as such, you shouldn't acknowledge the notification event until you're certain that the notification has been processed successfully.
As you've mentioned RabbitMQ, you should aim to keep the notification queue as small as possible; to that effect, a cache that contains the user details might help speed things along (and help you reduce the chance of failure due to the external system not being available).
If your application sends a lot of notifications to potentially millions of different users, you could consider having a read-only database replica of the users that is accessible to the notification service, and read directly from the database cluster in batches. This reduces the load on the monolith and shifts it to the database layer.
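To illustrate the first two points (acknowledge only after the notification has actually been sent, and serve user details from a local cache), here's a rough consumer sketch; the cache and sender interfaces, the queue name, and the RabbitMQ.Client 6.x API are all assumptions for illustration:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// Hypothetical stand-ins for a local cache of user details (filled from the monolith's API)
// and for the actual e-mail/push sender.
public interface IUserCache { Task<UserDetails> GetOrFetchAsync(string userId); }
public interface INotificationSender { Task SendAsync(UserDetails user, string text); }
public class UserDetails { public string Id { get; set; } public string Email { get; set; } }

public static class NotificationConsumer
{
    public static void Start(IModel channel, IUserCache userCache, INotificationSender sender)
    {
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += async (_, ea) =>
        {
            try
            {
                var userId = Encoding.UTF8.GetString(ea.Body.ToArray());
                var user = await userCache.GetOrFetchAsync(userId);   // cached lookup, falls back to the REST API
                await sender.SendAsync(user, "You have a new notification");
                channel.BasicAck(ea.DeliveryTag, multiple: false);    // acknowledge only after success
            }
            catch
            {
                // Requeue (or dead-letter) so the event isn't lost while the monolith is unavailable.
                channel.BasicNack(ea.DeliveryTag, multiple: false, requeue: true);
            }
        };

        channel.BasicConsume(queue: "user-notifications", autoAck: false, consumer: consumer);
    }
}
```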
I'm currently developing a REST API. The API performs basic CRUD operations. Data is synced to a legacy system using RabbitMQ. The API uses SQL Server as its database.
I'm wondering how to make sure data is saved in the DB and a message is put on the bus.
The lack of distributed transactions looks like a very general issue to me, so I'm wondering if there are any best practices for solving it with NServiceBus?
RabbitMQ doesn't support distributed transactions on its own, so there isn't much NServiceBus can do in this scenario. One option, though, is the following:
The endpoint is configured to use the Outbox feature
when the HTTP request is received by the REST endpoint a message is sent locally to self. No DB operations are performed at this stage
when the sent-to-self message is received you're now in the context of an incoming message and you can:
execute CRUD operations
send outgoing messages
The outbox will guarantee consistency even if there are no distributed transactions
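A rough sketch of that wiring (NServiceBus 7-style API; the transport details, persistence, message names, and handler below are illustrative assumptions, not taken from your project):

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Illustrative message types.
public class CreateOrder : ICommand { public string OrderId { get; set; } }
public class OrderCreated : IEvent { public string OrderId { get; set; } }

public static class EndpointSetup
{
    public static EndpointConfiguration Configure()
    {
        var configuration = new EndpointConfiguration("Orders.Api");

        var transport = configuration.UseTransport<RabbitMQTransport>();
        transport.ConnectionString("host=localhost");
        // (configure the routing topology as required by your version of the RabbitMQ transport)

        configuration.UsePersistence<SqlPersistence>(); // Outbox records live next to the business data
        configuration.EnableOutbox();
        return configuration;
    }
}

// In the REST controller: no DB work, just send-to-self.
//   await messageSession.SendLocal(new CreateOrder { OrderId = id });

// In the handler you're inside an incoming-message context, so the Outbox makes the
// CRUD operation and the outgoing messages consistent without distributed transactions.
public class CreateOrderHandler : IHandleMessages<CreateOrder>
{
    public async Task Handle(CreateOrder message, IMessageHandlerContext context)
    {
        // 1. Execute the CRUD operation against SQL Server here
        //    (use the synchronized storage session so it shares the Outbox transaction).
        // 2. Send/publish any outgoing messages.
        await context.Publish(new OrderCreated { OrderId = message.OrderId });
    }
}
```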
As I was reading through the documentation on NServiceBus, I wasn't able to find what is actually persisted under the Persistence section.
If NServiceBus is a loosely coupled distributed library sending self-contained messages, what is there to persist? I don't understand.
With a web app, when a user has a Session, we may choose to persist the Session in SQL Server, in memory, or somewhere else, but with NServiceBus there is no session to persist.
So, what actually is the Persistence in NServiceBus?
What sort of data could be persisted, and for what reason?
Transports like RabbitMQ and Azure Service Bus natively support publish/subscribe. If an endpoint wants to receive published events, a 'subscription' to those events is stored inside those queuing technologies. Other queuing technologies don't support publish/subscribe natively, like MSMQ and Azure Storage Queues. NServiceBus mimics the behavior, but needs to store those subscriptions somewhere else.
Other things we can store are timeouts, deferred messages, and saga state. A saga is a kind of state machine (a workflow), and this state needs to be stored somewhere. Another feature NServiceBus supports is the outbox, which removes the need for distributed transactions by putting the message transaction and the business transaction in the same database.
If you only use certain features, some transports support them natively, which removes the need for a persister. Sagas and the Outbox always need persistence.
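For illustration, selecting where that data lives is a single wire-up call at endpoint startup; the SQL persistence, endpoint name, and connection string below are just example choices:

```csharp
using System.Data.SqlClient;
using NServiceBus;

// "Billing" and the connection string are placeholders. The chosen persistence is where
// NServiceBus keeps subscriptions (for transports without native pub/sub), saga state,
// timeouts/deferred messages, and Outbox records.
var endpointConfiguration = new EndpointConfiguration("Billing");

var persistence = endpointConfiguration.UsePersistence<SqlPersistence>();
persistence.SqlDialect<SqlDialect.MsSqlServer>();
persistence.ConnectionBuilder(
    () => new SqlConnection("Data Source=.;Initial Catalog=nservicebus;Integrated Security=True"));

var endpointInstance = await Endpoint.Start(endpointConfiguration);
```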
The scenario is as follows:
I have multiple clients that can register themselves on a workflow server, using WCF requests, to receive some kind of notifications. The notification information will be received from an external system using another receive activity. The workflow should then get the notification information and call back all registered clients using a send activity and callback correlations (the clients expose callback interfaces implemented on their side, and the endpoint addresses are passed initially with the registration requests). The "long-running workflow service" approach is used with persistent storage.
Now, I'm looking for some way to correlate the incoming notification information received from the external system with the persisted workflow instances created previously by the registration requests, so that all clients will be notified using the endpoints that were passed with those requests. Is WF 4.0 capable of resuming and executing multiple workflow instances when the notification information is received, without me storing the endpoints manually somewhere and going through them? If yes, how can I do that?
Also, if my approach is not correct, then please advise me on the best practice for building such a system using WCF services.
Your help is highly appreciated.
When you use request correlation with workflow services, the correlation key must always match a single workflow instance; you can't have multiple workflow instances react to a single message. So you either need to multicast the message using all the different correlation keys, or resume your workflow instances in some other way. That other way could be to store the request somewhere, like a SQL table, and have the workflows periodically check that location to see if they need to notify the client.
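To make the first option concrete, here's a minimal sketch of the multicast idea; the registration store and the send delegate are hypothetical stand-ins for your own storage and for the client proxy generated from the workflow service contract:

```csharp
using System;
using System.Collections.Generic;

public class NotificationData
{
    public string Message { get; set; }
}

// Hypothetical store of the registration/correlation keys handed out earlier
// (e.g. backed by a SQL table written during the registration receive activity).
public interface IRegistrationStore
{
    IEnumerable<string> GetAllRegistrationKeys();
}

public class NotificationMulticaster
{
    private readonly IRegistrationStore _registrations;
    private readonly Action<string, NotificationData> _sendCorrelatedMessage;

    public NotificationMulticaster(
        IRegistrationStore registrations,
        Action<string, NotificationData> sendCorrelatedMessage)
    {
        _registrations = registrations;
        _sendCorrelatedMessage = sendCorrelatedMessage;
    }

    public void Multicast(NotificationData data)
    {
        // One correlated WCF call per registration key: content correlation then routes each
        // call to the single persisted workflow instance created for that client, which resumes,
        // reads its stored callback endpoint, and notifies its client.
        foreach (var key in _registrations.GetAllRegistrationKeys())
        {
            _sendCorrelatedMessage(key, data);
        }
    }
}
```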