We are building a web API and using NServiceBus for messaging under the hood for all asynchronous and long-running processes.
The question is: when we spin off a new version of the API, should we use a new set of queues?
For example, for API version 1:
blobstore.v1.inbound
blobstore.v1.outbound
blobstore.v1.timeout
blobstore.v1.audit
and for API version 2:
blobstore.v2.inbound
blobstore.v2.outbound
blobstore.v2.timeout
blobstore.v2.audit
Or should we strive to use the same set of queues with multiple message formats and handlers (assuming requirements change and message formats evolve)?
I am trying to understand the long-run pros and cons from an architecture standpoint. Having a separate set of queues gives the flexibility of building, deploying and managing different API versions in isolation, without worrying about compatibility or how the versions coexist.
Personally I am leaning towards the latter, but I don't yet clearly understand the challenges around compatibility and upgrades.
If you have dealt with a similar scenario in the past, please share your experiences, thoughts, suggestions and recommendations.
Your time is much appreciated!
The more frequent your releases, the less appropriate a queue-per-version strategy becomes, and the more important backwards-compatibility becomes (both in structure and in behavior).
The decision between going with a different set of queues or a single queue to support different versions of messages depends on how different the messages are. In the versioning sample the V2 message is a pure extension of the V1 message, which can be represented by interface inheritance. Subscribers of V1 messages can receive V2 messages, which are proper supersets of V1 messages. In this case it makes sense to keep the same queue and only update subscribers as needed.

If the messages are drastically different, it may be easier to deploy a second set of queues. This has the benefits you described, namely isolation: you don't have to worry about breaking dependent components. However, it will have a bigger impact on your system, because you have to consider everything that may depend on the queues. It may be that you have to deploy multiple endpoints and services at once to make the V2 roll-out complete.
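For illustration, here is a minimal sketch of the pure-extension case using interface inheritance, in the style of the NServiceBus versioning sample. The IBlobStored/IBlobStoredV2 names are hypothetical, and the handler signature assumes NServiceBus 6 or later:

    using System;
    using System.Threading.Tasks;
    using NServiceBus;

    // The V1 event contract.
    public interface IBlobStored : IEvent
    {
        Guid BlobId { get; set; }
    }

    // V2 is a pure superset of V1, expressed as interface inheritance.
    public interface IBlobStoredV2 : IBlobStored
    {
        string ContentType { get; set; }
    }

    // An existing V1 subscriber keeps working when V2 messages arrive,
    // because every IBlobStoredV2 message is also an IBlobStored message.
    public class BlobStoredHandler : IHandleMessages<IBlobStored>
    {
        public Task Handle(IBlobStored message, IMessageHandlerContext context)
        {
            Console.WriteLine($"Blob stored: {message.BlobId}");
            return Task.CompletedTask;
        }
    }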
We are trying to implement a microservice architecture, creating our new applications in our current environment using ASP.NET Core. The first generation of our microservices will use the request/reply communication pattern, and there is no need for a message broker yet. However, we are going to introduce a message broker after two years.
Will it take much effort, in terms of development, to adapt our microservices to use the message broker and move to a publish/subscribe communication pattern after two years?
What is the right approach? Should we use something like Akka.NET already now, without having a message broker? Or should we implement Akka.NET later to make the microservices use a pub/sub communication pattern?
Thanks, and I appreciate all kinds of advice.
Get it right from the start. The major purpose of microservices is loosely coupled services; you may not realize it initially, but at some point you will need that. Technically, request/reply is just a refactored monolith. Using an event-driven architecture with a message broker is a little more complex, but the benefits are far-reaching. Imagine more and more microservices joining the club: with pub/sub that stays very easy.
Coming back to your second point: it could be a significant effort to refactor and introduce a message broker later. For example, if you decide to go for CQRS and event sourcing, which are very common patterns for distributed applications, you would need to substantially re-architect your system. For simple applications these patterns may not be required; depending on your business needs, you have to decide how resilient, available and decoupled your services should be, and whether it is worth the effort when the requirements could be met more simply.
If you want a truly microservices-style architecture, then it starts with asynchronous communication, which a message broker makes possible.
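One way to keep that later move cheap is to hide event publishing behind your own small interface now and swap in a broker-backed implementation when it arrives. A minimal sketch, assuming plain ASP.NET Core dependency injection (all names here are made up):

    using System.Threading.Tasks;

    // The seam your services depend on from day one.
    public interface IEventPublisher
    {
        Task PublishAsync<TEvent>(TEvent @event);
    }

    // First generation: no broker yet, so "publishing" is a no-op
    // (or an HTTP call, or a write to an outbox table).
    public class NoOpEventPublisher : IEventPublisher
    {
        public Task PublishAsync<TEvent>(TEvent @event) => Task.CompletedTask;
    }

    // Registered once at startup, e.g.:
    //   services.AddSingleton<IEventPublisher, NoOpEventPublisher>();
    // In two years only this registration changes to a broker-backed
    // implementation (RabbitMQ client, MassTransit, Akka.NET, ...);
    // the services calling IEventPublisher stay untouched.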
Hope that helps.
I'm working with RabbitMQ 3.7, and I'm finding that my microservice architecture is starting to feel tangled and coupled.
I'm finding that I'm publishing messages from within my consumer's received event to other queues. This feels wrong. But I'm not sure what the alternative is, since I benefit from the efficiency in passing the data from the consumer directly to the next queue/task.
Note that the above is just an example; the services I'm running are similar and fairly workflow-dependent (although they can be run independently!).
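To make the pattern concrete, here is roughly what I mean, as a bare-bones sketch with the RabbitMQ .NET client (6.x API); the queue names and payload handling are made up:

    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    var factory = new ConnectionFactory { HostName = "localhost" };
    using var connection = factory.CreateConnection();
    using var channel = connection.CreateModel();

    channel.QueueDeclare("scrape.results", durable: true, exclusive: false, autoDelete: false);
    channel.QueueDeclare("persist.requests", durable: true, exclusive: false, autoDelete: false);

    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (sender, ea) =>
    {
        var payload = Encoding.UTF8.GetString(ea.Body.ToArray());

        // ... process the message ...

        // This is the part that feels wrong: publishing the next step's
        // message directly from inside the previous step's Received handler.
        channel.BasicPublish(exchange: "",
                             routingKey: "persist.requests",
                             basicProperties: null,
                             body: Encoding.UTF8.GetBytes(payload));

        channel.BasicAck(ea.DeliveryTag, multiple: false);
    };

    channel.BasicConsume(queue: "scrape.results", autoAck: false, consumer: consumer);
    Console.ReadLine(); // keep the consumer alive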
Questions:
How is data normally passed from process to process (or consumer to publisher) in situations where the microservices are fairly dependent on each other? Not that they can't be run individually, but they work best in a workflow scenario.
If the solution involves not publishing new messages from within the received event of a consumer, then what is the proper way to get the data to that microservice/process?
I find that chaining workflows across queues can create more complexity than desired; on the other hand, creating simpler consumer applications can make for more maintainable code.
Do you gain or lose any scalability or simplicity in your code by splitting the first two steps? Without more detailed info to consider, I probably would not split up the first two parts of the functionality. I don't see anything wrong with directly storing the scraping results.
I like your isolated consumer for sending email, though you might consider making a generic email-sending consumer that any of your applications could use: have the message format contain the proper mail parts and have the consumer construct the mail and deliver it.
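For example, a rough sketch of what I mean by a generic consumer, using the RabbitMQ .NET client and System.Net.Mail; the message shape, queue name and SMTP host are all assumptions:

    using System;
    using System.Net.Mail;
    using System.Text;
    using System.Text.Json;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    // Hypothetical contract: any application publishes this shape to the
    // "email.outbound" queue and this one consumer does the delivery.
    public record EmailRequest(string To, string From, string Subject, string Body);

    public static class EmailConsumer
    {
        public static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using var connection = factory.CreateConnection();
            using var channel = connection.CreateModel();

            channel.QueueDeclare("email.outbound", durable: true, exclusive: false, autoDelete: false);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var request = JsonSerializer.Deserialize<EmailRequest>(
                    Encoding.UTF8.GetString(ea.Body.ToArray()));

                // Construct the mail from the message parts and deliver it.
                using var mail = new MailMessage(request.From, request.To, request.Subject, request.Body);
                using var smtp = new SmtpClient("localhost"); // assumed SMTP relay
                smtp.Send(mail);

                channel.BasicAck(ea.DeliveryTag, multiple: false);
            };

            channel.BasicConsume(queue: "email.outbound", autoAck: false, consumer: consumer);
            Console.ReadLine(); // keep the consumer alive
        }
    }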
I don't think there's a "right" answer to your architecture here other than to think about finding the right balance of simplicity/complexity, scalability, and maintainability.
Could you please advise: if I need maximum inter-system/language compatibility, and the main server will be implemented in .NET, could I use NServiceBus or MassTransit, or is it better to use pure RabbitMQ instead?
NServiceBus or MassTransit would give a pretty nice abstraction level, but ease of communication between different solutions and environments is critical. I'm leaning towards pure RabbitMQ at the moment, but I would be really grateful if you could point out some pros and cons in case I'm looking at this the wrong way.
If you have multiple applications with different languages internally that must send and receive messages I would not recommend NServiceBus or MassTransit. They require certain message headers that they themselves add. You would never be able to leverage all the functionality offered by these messaging frameworks. However, if internally you are all .NET and you have multiple applications that will use the messaging infrastructure then NServiceBus and MassTransit will add a lot of value.
Regarding interoperability with third parties: a strength and a weakness of both NServiceBus and MassTransit is that you must send and receive messages as strongly typed classes or interfaces. These get serialised to JSON/XML/BSON etc. and deserialised back into types again.
Because of this they require message headers that indicate the type of the message for deserialisation purposes. Without the message type headers they won't work.
Working with types is much easier, but it can cause interoperability issues. Integrating with third parties that send XML or JSON messages with no type information requires you to create a translation tier between your services and the third party.
You could translate to your own type that maps onto the XML/JSON or translate to a simple type that contains a string property for the XML/JSON. Either way, your translation tier would publish the messages internally using MassTransit/NServiceBus and so the messages would contain all the necessary headers to fully exploit all the functionality they offer.
For sending messages to the third party, the translation tier would translate the messages to the XML/JSON expected by the third party.
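As a rough sketch of the inbound half of such a translation tier: receive the raw JSON however it arrives (HTTP endpoint, raw queue consumer, file drop), map it to an internal typed contract, and publish that via the framework. This assumes MassTransit's IBus; every type and property name here is made up:

    using System.Text.Json;
    using System.Threading.Tasks;
    using MassTransit;

    // Internal, strongly typed contract published on the bus (hypothetical).
    public record ThirdPartyOrderReceived(string OrderNumber, decimal Amount);

    // Shape of the untyped JSON the third party sends us (hypothetical).
    public record ThirdPartyOrderDto(string OrderNo, decimal Total);

    public class OrderTranslationTier
    {
        private readonly IBus _bus;

        public OrderTranslationTier(IBus bus) => _bus = bus;

        // Called with the raw third-party JSON, however you receive it.
        public async Task HandleInboundAsync(string rawJson)
        {
            var dto = JsonSerializer.Deserialize<ThirdPartyOrderDto>(rawJson)!;

            // Publishing through MassTransit means the internal message carries
            // the envelope/type information the framework's consumers rely on.
            await _bus.Publish(new ThirdPartyOrderReceived(dto.OrderNo, dto.Total));
        }
    }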
How much of your messaging system is involved with 3rd party/inter-system integrations? If the answer is very little then NServiceBus and MassTransit would be great options as they offer lots of great functionality.
If the answer is a lot, then they might still be good options. Having a translation tier will shield your internal services from being exposed to the requirements and changing schemas of your third parties. It will give you greater control and flexibility at a cost of extra moving parts.
Ultimately, the translation tier is not as complex as implementing the patterns offered out-of-the-box by NServiceBus and MassTransit. So I would seriously consider them as a viable option.
Some links regarding interoperability:
https://docs.particular.net/nservicebus/messaging/third-party-integration
http://masstransit-project.com/MassTransit/advanced/interoperability.html
My team is currently in the initial stages of designing implementations using NServiceBus (v4, possibly v5) in a number of different contexts to facilitate integration between a number of our custom applications. However, we would also like to utilize NServiceBus to raise business events triggered from some of our off-the-shelf third-party systems. These systems do not provide inherent messaging or eventing APIs, so our current thinking is to hook into their underlying databases using triggers and potentially SQL Service Broker as a bridge to NServiceBus.
I've looked at ServiceBroker.net, but that seems to use NServiceBus v2 or v3 APIs, interfaces, etc., by creating a totally new ITransport. We're planning on using more recent versions of NServiceBus, so this doesn't seem to be a solid option. Other somewhat similar questions here on SO (all from a few years ago) seem to be answered with guidance to simply use the SQL Transport. That uses table-based pseudo-queues instead of MSMQ, but what's not clear is whether it is then advisable to have SQL triggers hand-craft NServiceBus message records and manually INSERT them into the pseudo-queue tables directly, or whether there would still be some usage of SQL Service Broker in the middle that somehow more natively pops the NServiceBus messages onto the bus. And if the SQL Transport is the answer, what would be best practice for bridging the messages over to the main MSMQ-transport-based bus?
It seemed like there was some concerted movement on bridging SQL Service Broker over to NServiceBus several years ago, but that was deprecated once the native NServiceBus SQL Transport was introduced. I feel like maybe I'm missing something in terms of the modern NServiceBus approach to generating data-driven events in a design that is more real-time than a looped polling design.
You may want to take a look at the Gateway feature. You should be able to run two different transports and use the Gateway feature to bridge them via HTTP.
We have a similar system, although it's slightly easier in that we control the underlying databases and applications (i.e. not 3rd party) and the current proof of concept uses the ServiceBroker / SQLDependency / ServiceBus as part of its architecture.
If you go this route, I would also advise using triggers to populate a common table, then monitoring that.
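If it helps, this is roughly how I would sketch the monitoring side of that approach with SqlDependency (which uses Service Broker under the covers). The table and column names are made up, and the connection string is only a placeholder:

    using System;
    using System.Data.SqlClient;

    public static class BusinessEventWatcher
    {
        private const string ConnectionString =
            "Server=.;Database=ThirdPartyDb;Integrated Security=true"; // placeholder

        public static void Main()
        {
            SqlDependency.Start(ConnectionString);
            RegisterForChanges();
            Console.ReadLine();
            SqlDependency.Stop(ConnectionString);
        }

        private static void RegisterForChanges()
        {
            using var connection = new SqlConnection(ConnectionString);
            using var command = new SqlCommand(
                "SELECT EventId, EventType, Payload FROM dbo.BusinessEvents", connection);

            // A notification fires only once, so re-register after each change.
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // Read the unprocessed rows here and hand them to the bus
                // (e.g. Send/Publish on your NServiceBus endpoint).
                RegisterForChanges();
            };

            connection.Open();
            using var reader = command.ExecuteReader(); // executing the command registers the notification
            while (reader.Read()) { /* process rows already present */ }
        }
    }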
I didn't know about ServiceBroker.Net until today, so I can't comment on it. I also haven't looked at CLR stored procs/triggers, so I can't say whether there are possibilities there.
Somebody else asked a question about NServiceBus and Service Broker which I answered here; it may be useful for anyone looking to implement this.
In the distributed messaging world, I am trying to understand the different parts that are involved in developing distributed systems. From what I understand, you can design a messaging system using an enterprise service bus backed by a message queuing system. Why is it a good idea to use both? Can the same be achieved by programming against just the message queuing system? What are the advantages of using both together?
You certainly can code directly against the messaging infrastructure and you will find that there are pros and cons w.r.t. each transport. There are many decisions that you will need to make along the way, though, and this is where a service bus may assist.
Developing directly against the queuing system will inevitably lead to various abstractions that you will require to prevent duplication.
A service bus will provide opinions/implementations for:
Message delivery:
    exactly-once (distributed transactions, which are not supported by all queuing systems)
    at-least-once (non-transactional)
    at-most-once (will probably require some transactional processing, but you can get away without distributed transactions)
Retrying failed messages
Request / Response
Message distribution:
    Publish/Subscribe (probably quite easy with RabbitMQ directly, not so much with MSMQ directly)
Message Idempotence
Dependency Injection
Some service bus implementations provide a framework for implementing process managers (called sagas by most). My current opinion is that a process manager needs to be as much a first-class citizen as any other entity, but that may change :)
Anyhow, if you are still evaluating options you could also take a look at my FOSS project: http://shuttle.github.io/shuttle-esb/
So a service bus may buy you quite a bit out-of-the-box whereas coding against the queues directly may be a bit of work to get going.
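To give a feel for that, here is the sort of plumbing you end up hand-rolling for just one item on the list above (retries with an error-queue hand-off) when working against a queue directly. Everything here is made up for illustration; a service bus typically gives you the equivalent as a configurable policy:

    using System;
    using System.Threading.Tasks;

    public static class RetryingDispatcher
    {
        public static async Task DispatchAsync(
            byte[] message,
            Func<byte[], Task> handler,          // your message handler
            Func<byte[], Task> sendToErrorQueue, // transport-specific "move to error queue"
            int maxAttempts = 3)
        {
            for (var attempt = 1; attempt <= maxAttempts; attempt++)
            {
                try
                {
                    await handler(message);
                    return; // processed successfully
                }
                catch (Exception)
                {
                    if (attempt == maxAttempts)
                    {
                        // Give up: park the message on the error queue for inspection.
                        await sendToErrorQueue(message);
                        return;
                    }
                    // Otherwise fall through and retry; a real implementation would
                    // also log the failure and back off between attempts.
                }
            }
        }
    }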
I can't comment directly on MassTransit, having only tinkered with it.
I use NServiceBus and am a fan of it. I think there are valid reasons for directly using queuing technology, but I think rolling your own ESB using MSMQ/RabbitMQ would cost a lot more than simply using a commercial product (or open source product e.g. MassTransit).
So do you need it? No. Will it make your life much easier if the features match your requirements? Absolutely.