Migrating from IBM MQ to RabbitMQ

Can we migrate from IBM MQ to RabbitMQ? Is this possible?
Are there any dependencies? What are the factors that we need to look at?

That's like asking whether you can migrate from Java to C++. Both do the same job but are structured differently (at the very least).
You can do it, of course, but I wouldn't call it migrating so much as setting up anew. They are both message brokers, and you could keep some concepts (architecturally speaking), but the whole "physical" infrastructure would need to be set up from scratch.

You need to revisit the current messaging design pattern, re-design it for RabbitMQ, and plan the roll-out strategy. Secondly, all the producers and consumers need to be rewritten. Take into consideration how much effort the producer and consumer applications will need to migrate to the new messaging system; the message format is also slightly different in RabbitMQ. So there are various aspects to weigh before migrating.
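To make the message-format difference concrete, here is a rough sketch (plain Python, no broker required) of how a few common IBM MQ MQMD header fields map onto their closest AMQP message-property equivalents, as exposed by clients such as pika. The mapping and the helper function are illustrative, not an official or exhaustive table:

```python
# Sketch: mapping common IBM MQ MQMD header fields to their nearest
# AMQP message-property equivalents. Producers/consumers being rewritten
# for RabbitMQ have to perform this kind of translation themselves.
MQMD_TO_AMQP = {
    "CorrelId":    "correlation_id",  # request/reply correlation
    "MsgId":       "message_id",      # unique message identifier
    "ReplyToQ":    "reply_to",        # where the consumer should respond
    "Expiry":      "expiration",      # MQ: tenths of a second; AMQP: milliseconds, as a string
    "Persistence": "delivery_mode",   # MQ persistence flag -> AMQP delivery_mode 2
    "Format":      "content_type",    # payload format hint
}

def translate_headers(mqmd: dict) -> dict:
    """Translate an MQMD-style header dict into AMQP-style properties."""
    props = {}
    for mq_field, amqp_field in MQMD_TO_AMQP.items():
        if mq_field not in mqmd:
            continue
        value = mqmd[mq_field]
        if mq_field == "Persistence":
            value = 2 if value else 1      # 2 = persistent, 1 = transient
        elif mq_field == "Expiry":
            value = str(value * 100)       # tenths of a second -> ms string
        props[amqp_field] = value
    return props
```

For example, `translate_headers({"CorrelId": "abc", "Persistence": True, "Expiry": 300})` yields `correlation_id`, `delivery_mode=2`, and a 30-second `expiration`. The point is not this exact table but that every header your applications rely on needs an explicit, reviewed equivalent.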

Related

Microservice development with or without Akka.NET

We are trying to implement a microservice architecture for our new applications in our current environment using ASP.NET Core. The first generation of our microservices will use the request/reply communication pattern, and there is no need for any message broker. However, we are going to introduce a message broker after two years.
Will it take much effort, in terms of development, to adapt our microservices to use the message broker and move to a publish/subscribe communication pattern after two years?
What is a good approach? Should we use something like Akka.NET already now, without having a message broker? Or should we implement Akka.NET later to make the microservices use the pub/sub communication pattern?
Thanks, and I appreciate all kinds of advice.
Take it right from the start. The major purpose of microservices is loosely coupled services. You may not realize it initially, but at some point you will need that; technically, request/reply is a refactored monolith. Using an event-driven architecture with a message broker is a little more complex, but the benefits are far-reaching. Imagine more and more microservices joining the club: with pub/sub that becomes very easy.
Coming back to your second point: it could be a significant effort to refactor and introduce a message broker later. For example, if you decide to go for CQRS and event sourcing, which are very common patterns for distributed applications, you would need a major re-architecture of your system. For simple applications, though, these patterns may not be required; depending on your business needs, you have to decide how resilient, available, and decoupled your services should be, and whether it will be worth the effort when the requirements could be met more simply.
If you want a truly microservices architecture, it starts with asynchronous communication, and that is what a message broker makes possible.
Hope that helps.
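The coupling argument above can be sketched in a few lines. This is a toy in-memory stand-in for a broker (all names are illustrative, nothing here is a real bus): the point is that with pub/sub, a new service joins later without the publisher changing at all.

```python
from collections import defaultdict

class InMemoryBus:
    """Toy stand-in for a message broker, purely to illustrate pub/sub coupling."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # The publisher knows nothing about who (or how many) will receive this.
        for handler in self._subscribers[topic]:
            handler(message)

bus = InMemoryBus()
received = []

# Existing service:
bus.subscribe("order.placed", lambda msg: received.append(("billing", msg)))
# A new service joins the club later; the publishing code is untouched:
bus.subscribe("order.placed", lambda msg: received.append(("shipping", msg)))

bus.publish("order.placed", {"order_id": 42})
```

With request/reply, adding the shipping service would instead mean changing the caller to invoke a second endpoint, which is exactly the coupling the answer warns about.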

Where to create queues and exchanges?

I'm using RabbitMQ as a message broker for the first time, and I have a question about when to declare queues and exchanges using Rabbit's own management tool and when to do it in the software's code. My opinion is that it's much better to create queues and exchanges using the management tool, because it's a centralized place to add new queues or remove useless ones without having to modify the software itself. I'm asking for advice and opinions.
Thank you.
The short answer is: whatever works best for you.
I've worked with message brokers that required external tools for defining the topology (exchanges, queues, bindings, etc.) and with RabbitMQ, which allows me to define them at runtime, as needed.
I don't think either scenario is "the right way". Rather, it depends entirely on your situation.
Personally, I see a lot of value in letting my software define the topology at runtime with RabbitMQ. But there are still times when it gets frustrating because I often end up duplicating my definitions between producers and consumers.
But then, moving from development to production is easier when the software itself defines the topology. No need to pre-configure things before moving code to production.
It's all tradeoffs.
Try it however you're comfortable. Then try it the other way. See what happens, and learn which you prefer and when. Just remember that you don't have to do one or the other. You can do both if you want.
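One way to soften the duplication problem mentioned above is to keep the topology in a single shared definition that both producer and consumer apply at startup. The sketch below assumes a pika-style channel API; the exchange, queue, and binding names are made up. RabbitMQ declarations are idempotent, so re-declaring an existing object is harmless as long as the arguments match:

```python
# A single shared description of the topology. Producer and consumer both
# import this module and call declare_topology() at startup, so the
# definitions are written exactly once.
TOPOLOGY = {
    "exchanges": [("orders", "topic")],
    "queues":    ["billing", "shipping"],
    "bindings":  [("billing", "orders", "order.placed"),
                  ("shipping", "orders", "order.#")],
}

def declare_topology(channel, topology=TOPOLOGY):
    """Apply the topology through a pika-style channel object."""
    for name, ex_type in topology["exchanges"]:
        channel.exchange_declare(exchange=name, exchange_type=ex_type, durable=True)
    for queue in topology["queues"]:
        channel.queue_declare(queue=queue, durable=True)
    for queue, exchange, key in topology["bindings"]:
        channel.queue_bind(queue=queue, exchange=exchange, routing_key=key)
```

This keeps the "software defines the topology" benefit (nothing to pre-configure in production) while avoiding drift between producer-side and consumer-side declarations.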

What is the recommended approach for raising database-triggered events with NServiceBus? Is direct SQL Service Broker integration no longer viable?

My team is currently in the initial stages of designing implementations using NServiceBus (v4, possibly v5) in a number of different contexts to facilitate integration between a number of our custom applications. However we would also like to utilize NServiceBus to raise business events triggered from some of our off-the-shelf third-party systems. These systems do not provide inherent messaging or eventing apis, so our current thinking is to hook into their underlying databases using triggers and potentially SQL Service Broker as a bridge to NServiceBus.
I've looked at ServiceBroker.net but that seems to use NServiceBus v2 or v3 api's, interfaces, etc., by creating a totally new ITransport. We're planning on using more recent versions of NServiceBus though, so this doesn't seem to be a solid option. Other somewhat similar questions here on SO (all from a few years ago) seem to be answered with guidance to simply use the SQL Transport. That uses table-based pseudo-queues instead of MSMQ, but what's not clear is if it is then advisable to have SQL triggers hand-craft NServiceBus message records and manually INSERT them into the pseudo-queue tables directly, or whether there would still be some usage of SQL Service Broker in the middle that somehow more natively pops the NServiceBus messages onto the bus. And if somehow using the SQLTransport is the answer, what would be best practice to bridge the messages over to the main MSMQTransport-based bus?
It seemed like there was some concerted movement on SQL Service Broker bridging over to NServiceBus several years ago, but was deprecated once the native NServiceBus SQLTransport was introduced. I feel like maybe I'm missing something in terms of the modern NServiceBus approach to generating data-driven events in a design that is more real-time than a looped polling design.
You may want to take a look at the Gateway feature. You should be able to run 2 different transports and use the Gateway feature to bridge the two via HTTP.
We have a similar system, although it's slightly easier in that we control the underlying databases and applications (i.e. not 3rd party) and the current proof of concept uses the ServiceBroker / SQLDependency / ServiceBus as part of its architecture.
If you go this route, I would also advise using triggers to populate a common table, then monitoring that.
I didn't know about ServiceBroker.Net until today, so I can't comment. I also haven't looked at CLR stored procs / triggers, or whether there are any possibilities there.
Somebody else asked a question about NServiceBus and ServiceBroker, which I answered here; it may be useful for anyone looking to implement this.
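The "triggers populate a common table, then monitor that" suggestion can be sketched end to end. This uses SQLite purely as a self-contained stand-in for SQL Server, and all table, column, and event names are made up: a trigger on the third-party application's table fills a common outbox table, and a bridge process polls that table and forwards rows onto the bus.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);

    -- Common 'outbox' table populated by triggers on the business tables.
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         event_type TEXT, payload TEXT);

    CREATE TRIGGER orders_changed AFTER UPDATE ON orders
    BEGIN
        INSERT INTO outbox (event_type, payload)
        VALUES ('OrderStatusChanged', NEW.id || '|' || NEW.status);
    END;
""")

def drain_outbox(conn, publish):
    """Poll the outbox, forward each row to the bus, then delete it."""
    rows = conn.execute(
        "SELECT id, event_type, payload FROM outbox ORDER BY id").fetchall()
    for row_id, event_type, payload in rows:
        publish(event_type, payload)
        conn.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
    conn.commit()

# Simulate the third-party application touching its own database:
conn.execute("INSERT INTO orders (id, status) VALUES (1, 'new')")
conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")

events = []
drain_outbox(conn, lambda event_type, payload: events.append((event_type, payload)))
```

In a real deployment, `publish` would hand the row to the bus endpoint (or write it into the SQL Transport's queue table), and SQL Service Broker or `SqlDependency` could replace the polling loop to make it less of a "looped polling design".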

Why do we need service bus frameworks like NService Bus/MassTransit on top of message queuing systems like MSMQ/RabbitMQ etc?

In the distributed messaging world, I am trying to understand the different parts involved in developing distributed systems. From what I understand, you can design a messaging system using an enterprise service bus backed by a message queuing system. Why is it a good idea to use both? Can the same be achieved by programming against just the message queuing system? What are the advantages of using both together?
You certainly can code directly against the messaging infrastructure and you will find that there are pros and cons w.r.t. each transport. There are many decisions that you will need to make along the way, though, and this is where a service bus may assist.
Developing directly against the queuing system will inevitably lead to various abstractions that you will require to prevent duplication.
A service bus will provide opinions/implementations for:
Message delivery
exactly-once (distributed transactions - distributed transactions are not supported by all queuing systems)
at-least-once (non-transactional)
at-most-once (will probably require some transactional processing but you can get away with no distributed transactions)
Retrying failed messages
Request / Response
Message distribution
Publish/Subscribe (probably quite easy with RabbitMQ directly, not so much with MSMQ directly)
Message Idempotence
Dependency Injection
Some service bus implementations provide a framework for implementing process managers (called sagas by most). My current opinion is that a process manager needs to be a first-class citizen like any other entity, but that may change :)
Anyhow, if you are still evaluating options you could also take a look at my FOSS project: http://shuttle.github.io/shuttle-esb/
So a service bus may buy you quite a bit out-of-the-box whereas coding against the queues directly may be a bit of work to get going.
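Two items on the list above fit together: at-least-once delivery means a handler can see the same message twice, which is exactly why a service bus also concerns itself with idempotence. A minimal in-memory sketch (all names illustrative; in a real bus the seen-set would be durable storage, not memory):

```python
class IdempotentHandler:
    """Wraps a handler so redelivered messages (at-least-once) are applied once."""
    def __init__(self, handler):
        self._handler = handler
        self._seen = set()   # in production: a durable store checked transactionally

    def handle(self, message_id, body):
        if message_id in self._seen:
            return False     # duplicate delivery from a retry: skip it
        self._handler(body)
        self._seen.add(message_id)
        return True

processed = []
handler = IdempotentHandler(processed.append)

handler.handle("msg-1", "charge customer")
handler.handle("msg-1", "charge customer")  # broker redelivers after a timeout
```

This is the kind of plumbing you would end up writing (and hardening) yourself when coding directly against the queues; a service bus ships an opinionated version of it out of the box.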
I can't comment directly on MassTransit, having only tinkered with it.
I use NServiceBus and am a fan of it. I think there are valid reasons for directly using queuing technology, but I think rolling your own ESB using MSMQ/RabbitMQ would cost a lot more than simply using a commercial product (or open source product e.g. MassTransit).
So do you need it? No. Will it make your life much easier if the features match your requirements? Absolutely.

API Versioning and long running processes with nServiceBus and REST API

We are building a web API and using nServiceBus for messaging under the hood for all asynchronous and long running processes.
The question is: when we spin off a new version of the API, should we use a new set of queues?
Like, for the API version 1,
blobstore.v1.inbound
blobstore.v1.outbound
blobstore.v1.timeout
blobstore.v1.audit
and for the API version 2,
blobstore.v2.inbound
blobstore.v2.outbound
blobstore.v2.timeout
blobstore.v2.audit
Or should we strive to use the same set of queues with multiple message formats and handlers (assuming change of requirements and evolving message formats)?
I am trying to understand pros and cons in the long run from the architecture standpoint. Having a separate set of queues gives the flexibility of building, deploying and managing different API versions in isolation without worrying about compatibility and sociability.
Personally I am leaning towards the latter, but the challenges around compatibility and upgrades are not clearly understood.
If you have dealt with a similar scenario in the past, please share your experiences, thoughts, suggestions and recommendations.
Your time is much appreciated!
The more frequent your releases, the less appropriate a queue-per-version strategy becomes, and the more important backwards-compatibility becomes (both in structure and in behavior).
The decision between going with a different set of queues or a single queue to support different versions of messages depends on the extent of the difference between the messages. In the versioning sample, the V2 message is a pure extension of the V1 message, which can be represented by interface inheritance. Subscribers of V1 messages can receive V2 messages, which are proper supersets of V1 messages. In this case, it makes sense to keep the same queue and only update subscribers as needed. If the messages are drastically different, it may be easier to deploy a second set of queues. This has the benefits you described, namely isolation: you don't have to worry about messing up dependent components. However, it will have a bigger impact on your system because you have to consider everything that may depend on the queues. It may be that you have to deploy multiple endpoints and services at once to make the V2 roll-out complete.
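The "pure extension" case can be sketched with plain subclassing (the class names here are made up; NServiceBus expresses the same idea with interface inheritance): a handler written against the V1 message also accepts V2 messages, so the same queue keeps working while subscribers are upgraded at their own pace.

```python
class OrderPlacedV1:
    def __init__(self, order_id):
        self.order_id = order_id

class OrderPlacedV2(OrderPlacedV1):
    """Pure extension of V1: adds a field, removes and changes nothing."""
    def __init__(self, order_id, coupon_code):
        super().__init__(order_id)
        self.coupon_code = coupon_code

def handle_order_placed(msg: OrderPlacedV1):
    # A V1 subscriber relies only on V1 fields, so V2 messages arriving
    # on the same queue are handled without redeploying this subscriber.
    return f"order {msg.order_id} placed"

# Both versions flow through the same handler / queue:
results = [handle_order_placed(m)
           for m in (OrderPlacedV1(1), OrderPlacedV2(2, "SPRING"))]
```

If V2 instead renamed or removed V1 fields, this substitution would break, and that is the point at which a second set of queues (the `blobstore.v2.*` scheme above) starts paying for its extra operational weight.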