When using RabbitMQ and its channel model, how often am I supposed to close channels?
For example, is it best practice to:
Close the channel at the end of the method in which it was opened?
Reuse the channel globally across different methods?
On a side note: I am using RabbitMQ in Clojure through the Langohr library and thus prefer not to have any global state, which prompts me to open channels at the start of the relevant methods and close them again at the end. I am just not sure if this is how they are intended to be used.
If it's easiest to open a channel, execute a method, then close it, by all means do so. If your performance requirements are such that this causes too much of a slowdown, then start investigating channel re-use.
Opening and closing channels is not nearly as resource-intensive as opening and closing connections.
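For what it's worth, Langohr is a thin layer over the RabbitMQ Java client, so the "open, use, close" pattern described above looks roughly like the following minimal Kotlin sketch (queue name and payload are placeholders, and error handling is omitted):

```kotlin
import com.rabbitmq.client.ConnectionFactory

fun main() {
    // Open the connection once and keep it for the lifetime of the application.
    val connection = ConnectionFactory().newConnection()

    // Open a channel for this unit of work and close it when done.
    connection.createChannel().use { channel ->          // Channel is AutoCloseable
        channel.queueDeclare("task-queue", true, false, false, null)
        channel.basicPublish("", "task-queue", null, "hello".toByteArray())
    }                                                     // channel closed here

    connection.close()                                    // close only on shutdown
}
```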
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
I am using amqplib in Node.js, and I am not clear about the best practices in my code.
Basically, my current code calls amqp.connect() when the Node server starts up, and then uses a different channel for each producer and each consumer, never actually closing any of them. I'd like to know if that makes sense, or whether I should create a channel, publish, and close it every time I want to publish a message. And what about the connection? Is it good practice to connect once and keep the connection open for the lifetime of my server?
On the consumer side: can I use a single connection and a single channel to listen on multiple queues?
Thank you for any clarifications
In general, it's not good practice to open and close connections or channels per message. Connections are long-lived, and it takes resources to keep opening and closing them. Channels are multiplexed over the connection's single TCP connection, so they are more lightweight, but they still consume memory and definitely should not be left open once you are done using them.
It is recommended to have a channel per thread, and a channel per consumer. For publishing it is totally fine to reuse the same channel. But keep in mind that, depending on the operations, the protocol may close the channel in certain situations (e.g. a queue-existence check for a queue that does not exist), so be prepared for that. There are also soft (configurable) and hard (usually 65535) limits on the maximum number of channels in many of the client implementations.
So to sum up: depending on your use case, use one or a few connections, open channels when you need them, share them where it makes sense, but remember to close them when you are done.
The RabbitMQ documentation explains the nature of connections and channels (towards the end of the document), and the accepted answer on this question has good information on the subject.
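The question is about amqplib in Node.js, but the shape of this recommendation is the same in any client. Here is a rough Kotlin sketch of the same structure using the RabbitMQ Java client (queue names are placeholders, and error handling is omitted):

```kotlin
import com.rabbitmq.client.CancelCallback
import com.rabbitmq.client.ConnectionFactory
import com.rabbitmq.client.DeliverCallback

fun main() {
    // One connection for the lifetime of the process.
    val connection = ConnectionFactory().newConnection()

    // A channel kept around and reused for all publishing.
    val publishChannel = connection.createChannel()
    publishChannel.basicPublish("", "jobs", null, "job-1".toByteArray())

    // A channel per consumer; a single channel can also subscribe to several queues.
    val consumeChannel = connection.createChannel()
    val onDeliver = DeliverCallback { _, delivery -> println(String(delivery.body)) }
    consumeChannel.basicConsume("jobs", true, onDeliver, CancelCallback { })
    consumeChannel.basicConsume("events", true, onDeliver, CancelCallback { })
}
```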
What does closing a kotlinx.coroutines channel with channel.close() do, and what is the negative effect of not manually closing channels? If I don't manually close a channel, will there be some unnecessary processing? Will there be a reference to the channel somewhere that prevents it from being garbage collected? Or does the close function just exist as a way of informing potential users of the channel that it can no longer be used?
(Question reposted from Kotlin forum https://discuss.kotlinlang.org/t/closing-coroutine-channels/2549)
Closing a channel conceptually works by sending a special "close token" over this channel. You close a channel when you have a finite sequence of elements to be processed by consumers and you must signal to the consumers that this sequence is over. You don't have to close a channel otherwise.
Channels are not tied to any native resource, and they don't have to be closed to release their memory. Simply dropping all references to a channel is fine; the GC will clean it up.
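A minimal sketch of the "close token" idea: close() only signals that the sequence of elements is over, so the consumer's loop can finish normally.

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val channel = Channel<Int>()

    launch {
        for (x in 1..5) channel.send(x)
        channel.close()               // signal the consumer that the sequence is over
    }

    // Completes normally once the close token is reached. If the producer never
    // called close(), this loop would simply suspend waiting for more elements;
    // close() exists to end the sequence, not to free any native resource.
    for (x in channel) println(x)
}
```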
We are using Akka.NET, and in some cases we need actors to communicate reliably while preserving order over a message queue (e.g. Oracle Advanced Queues or WebSphere MQ, but any message queuing system such as RabbitMQ would work).
We have various requirements that are the reason we are using the message queue, so the question isn't whether we should be using it with Akka; the question is how.
How would we go about connecting the queue up to Akka so that it is as seamless as possible?
Is a custom Mailbox the route to go down? Do we need to write a custom IMessageQueue implementation? Or maybe we need a custom router? Are there any specific tests we can run to be sure our Mailbox/IMessageQueue works well with Akka.NET?
EDIT:
Should we maybe be looking at implementing a custom Transport?
Can any pointers be offered on where to start?
In general, implementing a custom mailbox based on some reliable queue is not a feasible solution - it has already been tried on the Akka JVM side, and it did not live up to expectations.
One of the basic reasons is usually a misunderstanding of the underlying idea - when people talk about reliable delivery (which MQ systems offer), what they really mean is reliable processing. What if your messages were sent with a 100% delivery ratio, but the receiving actor/node crashed while processing them? From the mailbox's point of view, everything went smoothly...
For this reason, the usual way to go is a dedicated actor - or a hierarchy of them - working as a gateway to the external messaging system. This way you can not only send messages but also mark them as received only after an explicit acknowledgement that processing completed successfully (see the sketch below). One example is akka-rabbitmq (written in Scala).
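To make the acknowledgement point concrete: with RabbitMQ, for example, the gateway would consume with automatic acknowledgement turned off and ack only after the work succeeds. This is a rough Kotlin sketch using the RabbitMQ Java client, not an Akka.NET mailbox or transport; the queue name and the process function are placeholders:

```kotlin
import com.rabbitmq.client.CancelCallback
import com.rabbitmq.client.ConnectionFactory
import com.rabbitmq.client.DeliverCallback

fun main() {
    val channel = ConnectionFactory().newConnection().createChannel()

    val onDeliver = DeliverCallback { _, delivery ->
        try {
            process(delivery.body)                                         // hand off / do the work
            channel.basicAck(delivery.envelope.deliveryTag, false)         // ack only after success
        } catch (e: Exception) {
            channel.basicNack(delivery.envelope.deliveryTag, false, true)  // requeue on failure
        }
    }
    channel.basicConsume("work-queue", false, onDeliver, CancelCallback { })  // autoAck = false
}

fun process(body: ByteArray) { println(String(body)) }
```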
My app has multiple threads that publish messages to a single RabbitMQ cluster.
Reading the RabbitMQ docs, I came across the following:
For applications that use multiple threads/processes for processing, it is very common to open a new channel per thread/process and not share channels between them.
And I understand that instead of opening multiple connections (expensive), it is better to open multiple channels.
But why not use a single channel to all threads?
What are the benefits of using multiple channels over a single channel?
AMQP has the concept of a Channel to provide more flexibility on top of a reliable TCP connection. Opening a TCP connection per message would be extremely expensive, so the designers came up with the idea of logical Channels within a connection.
It is not a good idea to use a single Channel for all threads, because if anything fails in one thread and the Channel dies, the rest of the threads will get an AlreadyClosedException. A channel can die for multiple reasons: for example, declaring something that is already declared with different parameters, cancelling a consumer that doesn't exist, publishing to an exchange that doesn't exist, etc.
My best advice would be to have an object that holds a Channel in a local variable and also implements the ShutdownListener interface, so that every time the channel fails, it is able to recover and create a new one from the connection. So I would say the main benefits are failure tolerance and scalability, since if a Channel dies it won't affect the rest.
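A rough Kotlin sketch of that advice, using the RabbitMQ Java client (the class name and structure here are just for illustration, not a standard API):

```kotlin
import com.rabbitmq.client.*

// Holds one Channel and recreates it from the shared Connection whenever
// the broker closes it with a channel-level error.
class RecoveringPublisher(private val connection: Connection) : ShutdownListener {

    @Volatile
    private var channel: Channel = newChannel()

    private fun newChannel(): Channel =
        connection.createChannel().also { it.addShutdownListener(this) }

    override fun shutdownCompleted(cause: ShutdownSignalException) {
        // Only recover from channel-level errors; a closed connection
        // cannot be fixed from here.
        if (!cause.isHardError) {
            channel = newChannel()
        }
    }

    fun publish(exchange: String, routingKey: String, body: ByteArray) {
        channel.basicPublish(exchange, routingKey, null, body)
    }
}
```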
I'm working with GameKit.framework and I'm trying to create a reliable communication between two iPhones.
I'm sending packets using the GKMatchSendDataReliable mode.
The documentation says:
GKMatchSendDataReliable
The data is sent continuously until it is successfully received by the intended recipients or the connection times out.
Reliable transmissions are delivered in the order they were sent. Use this when you need to guarantee delivery.
Available in iOS 4.1 and later. Declared in GKMatch.h.
I have experienced some problems on a bad WiFi connection. GameKit does not report the connection as lost, but some packets never arrive.
Can I count on a 100% reliable communication when using GKMatchSendDataReliable or is Apple just using fancy names for something they didn't implement?
My users also complain that some data may be accidentally lost during the game. I wrote a test app and found that GKMatchSendDataReliable is not really reliable. On a weak internet connection (e.g. EDGE) some packets are regularly lost without any error from the Game Center API.
So the only option is to add an extra transport layer for truly reliable delivery.
I wrote a simple lib for this purpose: RoUTP. It saves all sent messages until an acknowledgement for each is received, resends lost messages, and buffers received messages in case of a broken sequence (see the sketch below).
In my tests the combination "RoUTP + GKMatchSendDataUnreliable" works even better than "RoUTP + GKMatchSendDataReliable" (and of course better than pure GKMatchSendDataReliable, which is not really reliable).
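For anyone curious how such a layer works in principle, the idea is small enough to sketch. This is a conceptual illustration only, written in Kotlin for brevity and not RoUTP's actual API: number outgoing messages, keep them until they are acknowledged, resend periodically, and buffer out-of-order arrivals.

```kotlin
// Conceptual sketch of a reliable layer over an unreliable send function.
class ReliableLayer(
    private val sendUnreliable: (ByteArray) -> Unit,  // e.g. would wrap an unreliable send
    private val deliver: (ByteArray) -> Unit          // hands in-order payloads to the game
) {
    private var nextOutSeq = 0L
    private var nextExpectedSeq = 0L
    private val unacked = mutableMapOf<Long, ByteArray>()    // sent but not yet acknowledged
    private val outOfOrder = mutableMapOf<Long, ByteArray>() // received ahead of sequence

    fun send(payload: ByteArray) {
        val seq = nextOutSeq++
        unacked[seq] = payload                        // keep until acknowledged
        sendUnreliable(encode(seq, payload))
    }

    // Call periodically: anything still unacknowledged gets resent.
    fun resendPending() = unacked.forEach { (seq, payload) -> sendUnreliable(encode(seq, payload)) }

    fun onAck(seq: Long) { unacked.remove(seq) }

    fun onReceive(seq: Long, payload: ByteArray) {
        if (seq < nextExpectedSeq) return             // duplicate, already delivered
        outOfOrder[seq] = payload                     // buffer until the gap is filled
        while (true) {
            val next = outOfOrder.remove(nextExpectedSeq) ?: break
            deliver(next)
            nextExpectedSeq++
        }
        // A real implementation would also send an acknowledgement back here.
    }

    private fun encode(seq: Long, payload: ByteArray): ByteArray =
        java.nio.ByteBuffer.allocate(8 + payload.size).putLong(seq).put(payload).array()
}
```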
It is nearly 100% reliable, but maybe not what you need sometimes… For example, if you drop out of the network, everything you sent via GKMatchSendDataReliable will still be delivered, in the order you sent it.
This is brilliant for turn-based games, for example, but if fast reactions are necessary, a network dropout does not simply skip the missed packets: the player receives all the now-late packets until they catch up to real time again.
The only case in which GKMatchSendDataReliable doesn't deliver the data is a connection timeout.
I think this would also be the case when you close the app.