RabbitMQ RPC: EventingBasicConsumer or QueueingBasicConsumer

The tutorials on RabbitMQ's site are pretty straightforward, but I noticed that in the RPC example the developers chose to use the thread-blocking call consumer.Queue.Dequeue() instead of the EventingBasicConsumer and the event-handling model used elsewhere.
Looking through the current documentation, it is stated that:
As of version 3.5.0 application callback handlers can invoke blocking operations (such as IModel.QueueDeclare or IModel.BasicCancel). IBasicConsumer callbacks are invoked concurrently.
Whereas the old documentation (v. 1.5.0) states that it is not supported:
Application callback handlers must not invoke blocking AMQP operations (such as
IModel.QueueDeclare or IModel.BasicCancel). If they do, the channel will deadlock. [...] For this reason, QueueingBasicConsumer is the safest way of subscribing to a queue.
Could it be that the RPC example hasn't been updated? Or am I missing something? I would very much appreciate to be pointed to some documentation about this.

You're right, there is no need to use QueueingBasicConsumer.
There is an issue in the RabbitMQ tutorials repo about this.
I've sent a pull request and it has been merged; hopefully the documentation will be updated soon.
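For reference, a minimal sketch of what the RPC server side can look like with EventingBasicConsumer instead of QueueingBasicConsumer (the queue name and the Respond() helper are assumptions, not taken from the tutorial):

    using System;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class RpcServer
    {
        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using var connection = factory.CreateConnection();
            using var channel = connection.CreateModel();

            channel.QueueDeclare(queue: "rpc_queue", durable: false, exclusive: false, autoDelete: false);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var request = Encoding.UTF8.GetString(ea.Body.ToArray());
                var responseBytes = Encoding.UTF8.GetBytes(Respond(request));

                // Since 3.5.0 it is safe to invoke channel operations from the callback.
                var replyProps = channel.CreateBasicProperties();
                replyProps.CorrelationId = ea.BasicProperties.CorrelationId;
                channel.BasicPublish(exchange: "", routingKey: ea.BasicProperties.ReplyTo,
                                     basicProperties: replyProps, body: responseBytes);
                channel.BasicAck(ea.DeliveryTag, multiple: false);
            };
            channel.BasicConsume(queue: "rpc_queue", autoAck: false, consumer: consumer);

            Console.ReadLine(); // keep the process alive; replies are sent from the consumer callback
        }

        // Placeholder for the actual RPC handler.
        static string Respond(string request) => request.ToUpperInvariant();
    }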

Related

Socket connection ActiveMQ with .Net

I have a task where I need to establish a connection between an ActiveMQ queue and a .Net application. I am using the AMQP.Net Lite library for this. However, I need the receiver in the .Net application to be called the moment a message arrives on the queue.
Is there any solution where the .Net application does not have to poll the MQ queue periodically to check whether there is a new message?
Perhaps a direct connection using a socket? How should I proceed in this case?
Adan-
Sounds like you want to set up a standard consumer (or it may be called a receiver). Your use case is exactly the purpose of the consumer side of the AMQP API; see below.
Note: messaging systems often use a 'callback' or 'listener' model for asynchronous processing when receiving messages. That will still feel "instantaneous" from a data-processing perspective. It is a different programming paradigm that is simpler to code, as it does not require logic to break out of an infinite loop in the receiver/consumer pattern.
AMQP.NET lite receiver sample
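For illustration, a minimal sketch of such a receiver with AMQP.Net Lite - the callback fires as soon as a message is delivered, with no polling loop (the address, credentials, and queue name are assumptions):

    using System;
    using Amqp;

    class Receiver
    {
        static void Main()
        {
            // Connect to the broker over AMQP 1.0 (ActiveMQ must have the amqp transport enabled).
            var address = new Address("amqp://admin:admin@localhost:5672");
            var connection = new Connection(address);
            var session = new Session(connection);
            var receiver = new ReceiverLink(session, "example-receiver", "my-queue");

            // Register a callback; the library invokes it the moment a message arrives.
            receiver.Start(10, (link, message) =>
            {
                Console.WriteLine("Received: " + message.Body);
                link.Accept(message); // acknowledge so the broker can remove it from the queue
            });

            Console.ReadLine(); // keep the process alive while messages are handled asynchronously

            receiver.Close();
            session.Close();
            connection.Close();
        }
    }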

Akka.Net custom Mailbox, custom IMessageQueue, or something else

We are using Akka.Net, and in some cases we need actors to communicate reliably while preserving order over a message queue (e.g. Oracle Advanced Queues or WebSphere MQ, but any message queuing system, such as RabbitMQ, would work).
We have various requirements why we are using the message queue, so the question isn't if we should be using this with Akka, the question is how.
How would we go about connecting the queue up to Akka so that it is as seamless as possible?
Is a custom Mailbox the route to go down? Do we need to write a custom IMessageQueue implementation? Or maybe we need a custom router? Are there any specific tests we can run to be sure our Mailbox/IMessageQueue works well with Akka.Net?
EDIT:
Should we maybe be looking to implement a custom Transport?
Can any pointers be offered on where to start?
In general, implementing a custom mailbox on top of some reliable queue is not a feasible solution - it has already been tried on the Akka JVM side, and it failed to live up to expectations.
One of the basic reasons is usually a misunderstanding of the underlying idea - when people talk about reliable delivery (which MQ systems offer), what they really mean is reliable processing. What if your messages were delivered with a 100% success ratio, but the receiving actor/node crashed while processing them? From the mailbox's point of view everything went smoothly...
For this reason, the usual way to go is a dedicated actor - or a hierarchy of them - working as a gateway to the external messaging system. This way you can not only send messages through it but also mark them as received only after an explicit acknowledgement from a successfully completed process. One example is akka-rabbitmq (written in Scala).
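A rough sketch (names, message types, and queue are assumptions, not taken from akka-rabbitmq) of what such a gateway actor could look like in Akka.Net with the RabbitMQ .Net client - the delivery is acknowledged only after the processing actor reports success:

    using System.Text;
    using Akka.Actor;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    public class QueueGatewayActor : ReceiveActor
    {
        public record Delivery(ulong DeliveryTag, string Payload);
        public record Processed(ulong DeliveryTag);

        private readonly IModel _channel;

        public QueueGatewayActor(IModel channel, IActorRef processor)
        {
            _channel = channel;

            // Forward each delivery into the actor system instead of handling it on the consumer thread.
            var self = Self;
            var consumer = new EventingBasicConsumer(_channel);
            consumer.Received += (sender, args) =>
                self.Tell(new Delivery(args.DeliveryTag, Encoding.UTF8.GetString(args.Body.ToArray())));
            _channel.BasicConsume(queue: "work", autoAck: false, consumer: consumer);

            // Hand the payload to the processing actor; it replies with Processed when it has finished.
            Receive<Delivery>(d => processor.Tell(d, Self));

            // Only now tell the broker the message is really done.
            Receive<Processed>(p => _channel.BasicAck(p.DeliveryTag, multiple: false));
        }
    }

(In real code the channel access would also need to be serialized, since IModel is not thread-safe; the sketch only shows the acknowledge-after-processing idea.)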

Persisting Data in a Twisted App

I'm trying to understand how to persist data in a Twisted application. Let's say I've decided to write a Twisted server that:
Accepts inbound SMTP requests
Sends the message to a 3rd party system for modification
Relays the modified message to its destination
A typical Twisted tutorial would have you build this app using Deferreds and callbacks, roughly:
A Factory handles inbound requests
Each time a full email is received, a call is made to the remote message processor, returning a Deferred
Add an errback that substitutes the original message if anything goes wrong in the modify call.
Add a callback to send the message on to the recipient, which again returns a Deferred.
A real server would add/include additional call/errbacks to retry or notify the sender or whatnot. Again for simplicity, assume we consider this an acceptable amount of effort and just log errors.
Of course, this persists NO data in the event of a crash/restart/something else. I get that a solution involves a 3rd party persistent datastore (RabbitMQ is often mentioned) and could probably come up with a dozen random ways to achieve the outcome.
However, I imagine there are a few approaches that work best in a Twisted app. What do they look like? How do they store (and restore in the event of a crash) the in-process messages?
If you found this question, you probably already know that Twisted is event-based. It sounds simple, but the "hardest" part of the answer is getting the persistence platform to generate the events we need when we need them. Naturally, you can persist the data in a DB or a message queue, but some platforms don't naturally generate events. For example:
ZeroMQ has (or at least had) no callback for new data. It's also relatively poor at persistence.
In other cases, events are easy but reliability is a problem:
PostgreSQL can be configured to generate events using triggers, but they're one-time things, so you can't resume incomplete events
The light at the end of the tunnel seems to be something like RabbitMQ.
RabbitMQ can persist messages to disk so that they survive a crash
We can use acknowledgements on both legs (incoming SMTP to RabbitMQ, and RabbitMQ to outgoing SMTP) to ensure the application is reliable; importantly, RabbitMQ supports those acknowledgements.
Finally, several of the RabbitMQ clients provide full asynchronous support (see for example pika, txAMQP, and puka)
It's enough for our purposes that the RabbitMQ client provides us an event-based interface.
At a more theoretical level, however, this need not be the case. In fact, despite the "notification" issue above, ZeroMQ has an event-based client. Even if our software is elegantly event-based, we will run into systems that aren't. In these cases, we have no choice but to fall back on polling. In principle, if not in practice, we just query the message provider for messages. When we exhaust the current queue (or immediately if there are no messages), we use a reactor.callLater call to check again in the future. It may feel like an anti-pattern, but it's (to the best of my knowledge, anyway) the right way to handle this particular case.
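As a language-neutral illustration of that poll-and-reschedule fallback (shown in C# only because the rest of this thread is .Net-centric; the stand-in provider, interval, and handler are assumptions):

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class PollingFallback
    {
        // Stand-in for a backend that cannot push events to us.
        static readonly ConcurrentQueue<string> Provider = new();

        static async Task Main()
        {
            while (true)
            {
                // Drain everything currently available...
                while (Provider.TryDequeue(out var message))
                {
                    Console.WriteLine("Handling: " + message);
                }
                // ...then schedule the next check instead of blocking (the moral equivalent of callLater).
                await Task.Delay(TimeSpan.FromMilliseconds(500));
            }
        }
    }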

Is it appropriate to use message queues for synchronous rpc calls via ajax

I have a web application that uses the jQuery autocomplete plugin, which essentially sends, via ajax, a request containing text that has been typed into a textbox to our web server; once the web server receives this request, it is handed off to RabbitMQ.
I know that we do get benefits from using messaging, but it seems like using it for blocking RPC calls is a misuse and that something like WCF is far more appropriate in this instance. Is this the case, or is it considered acceptable architecture?
It's possible to perform synchronous RPC requests with RabbitMQ. Here it's explained very well, with its drawbacks included! So it is considered an acceptable architecture - discouraged, but acceptable whenever a synchronous response is mandatory.
A possible counter-effect is that by adding RabbitMQ in the middle, you will add some latency to the solution.
However, you also have the possibility to gain in terms of reliability, flexibility, scalability, ...
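To make the trade-off concrete, a minimal sketch of the blocking client side over RabbitMQ (queue name, payload, and timeout are assumptions):

    using System;
    using System.Text;
    using System.Threading;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    class RpcClient
    {
        static void Main()
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using var connection = factory.CreateConnection();
            using var channel = connection.CreateModel();

            // Exclusive, auto-delete reply queue with a server-generated name.
            var replyQueue = channel.QueueDeclare().QueueName;
            var correlationId = Guid.NewGuid().ToString();
            var responseReady = new ManualResetEventSlim();
            string response = null;

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                if (ea.BasicProperties.CorrelationId == correlationId)
                {
                    response = Encoding.UTF8.GetString(ea.Body.ToArray());
                    responseReady.Set();
                }
            };
            channel.BasicConsume(queue: replyQueue, autoAck: true, consumer: consumer);

            var props = channel.CreateBasicProperties();
            props.CorrelationId = correlationId;
            props.ReplyTo = replyQueue;
            channel.BasicPublish(exchange: "", routingKey: "autocomplete_rpc",
                                 basicProperties: props, body: Encoding.UTF8.GetBytes("que"));

            // This wait is where the extra latency (and the blocked caller) shows up.
            responseReady.Wait(TimeSpan.FromSeconds(2));
            Console.WriteLine(response ?? "timed out");
        }
    }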
What benefit would you get from it? And in fairness, if you put the message in a queue, how is it synchronous? Unless the same process that placed the message in the queue is the one removing it, but that is pretty much useless, no?
Now, if all you want to do is place the message in the queue and process it later on, that is grand.
Also, the fact that you add WCF to the mixture is IMHO a symptom that something is perhaps not clear enough. You could use WCF as an API gateway and use it to write the message to the queue, so this is not really about WCF or queues, but more about sync vs. async.
The way you are putting your ideas together does not look right to me.

How to configure NServiceBus using the gateway when MSDTC is not available at either end

I am new to NServiceBus, trying to introduce messaging into a WCF/RPC solution.
Because of architectural constraints and overhead (memory and CPU usage are already high), IT Operations will not allow MSDTC. (I'm also keen to avoid 2PC, FWIW.) I also require messages over HTTP, so the NSB bridge looks like a great solution.
Based on these posts (how-i-avoid-two-phase-commit and extending-nservicebus-avoiding-two-phase-commits) it looks to me as though it's possible to use NSB with the DTC disabled.
It sounds like EventStore does manage to avoid 2PC in the same way that I want to setup NSB but at the moment I just want to get NSB to work rather than adding event sourcing into the mix.
Questions:
Are there any examples of configuring NSB to work this way? I'm quite happy to add the extra complexity (a custom message handler with local message state storage) - without 2PC there isn't really another option. I already know of this example (IdempotentConsumer), but the test projects for this repo contain no code. It would be even more helpful if there were an example using NoSQL storage.
Will I need to alter the NSB bridge to deal with no DTC? I'm guessing no - bridge transactions are only against the local queue, but the process that consumes the local queue will obviously need to be coded to avoid 2PC. Correct?
Are there any other useful resources/posts around using NSB without MSDTC? The solution (how-i-avoid-two-phase-commit) seems not too complex but given I'm just starting out with NSB it would be great to find a quickstart for this...
I would have thought this would be a common scenario - but there doesn't seem to be much written about avoiding MSDTC while still using NSB. Surely there are others who are using a message bus but aren't allowed to use MSDTC... Is there another obvious way that I've missed?
thanks
2) Yes, you should be fine. Since you're doing the deduplication yourself, you don't need the gateway to do it for you. Just configure it to use the InMemory persistence and you should be fine.
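A rough sketch of what that configuration might look like (based on NServiceBus 6-era APIs with the MSMQ transport; the endpoint name is an assumption - check the documentation for your version):

    using System.Threading.Tasks;
    using NServiceBus;

    class Program
    {
        static async Task Main()
        {
            var endpointConfiguration = new EndpointConfiguration("MyEndpoint");

            // Receive-only transactions: no DTC enlistment, so MSDTC is never required.
            var transport = endpointConfiguration.UseTransport<MsmqTransport>();
            transport.Transactions(TransportTransactionMode.ReceiveOnly);

            // In-memory persistence, e.g. for the gateway's deduplication store.
            endpointConfiguration.UsePersistence<InMemoryPersistence>();

            var endpointInstance = await Endpoint.Start(endpointConfiguration);
            // ... run until shutdown ...
            await endpointInstance.Stop();
        }
    }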