Should the node-amqplib RabbitMQ channel be closed in the following case? - rabbitmq

I have defined an enqueueMessage function to enqueue a message to a RabbitMQ queue. In the first line of the function I create a channel, which means a channel is created every time I call enqueueMessage. Is this the right approach, or should I create a top-level channel variable and reuse it?
If I create a top-level channel, should I add code to handle the case where the channel gets closed unexpectedly for some reason (e.g. the network broke)?
export async function enqueueMessage(queue: string, message: object) {
    // a brand new channel is created on every call
    const channel = await connection.createChannel();
    await channel.assertQueue(queue);
    channel.sendToQueue(queue, Buffer.from(JSON.stringify(message)));
}

Creating a channel is cheap, but not free; how much it matters depends on how often this method is called. Generally, though, you should create a top-level channel and reuse it, but be aware that channels are not thread-safe, so you have to take care when sharing one.
And regarding your next question: yes, you have to handle it. There are automatic recovery options available in some client libraries. For example, in Java you can use this code:
factory.setAutomaticRecoveryEnabled(true);
In my experience there are also cases where the connection gets closed on the server side, so I had to handle AlreadyClosedException too.
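Back in node-amqplib terms, a minimal sketch of the top-level-channel idea might look like this (the getChannel helper and the close/error handling are just illustrative; the channel is recreated lazily after it drops):

import { connect, Channel } from 'amqplib';

const connectionPromise = connect('amqp://localhost'); // adjust URL/credentials to your setup
let channel: Channel | null = null;

async function getChannel(): Promise<Channel> {
    if (channel) return channel;                         // reuse the existing channel
    const connection = await connectionPromise;
    channel = await connection.createChannel();
    channel.on('error', (err: Error) => console.error('channel error', err));
    channel.on('close', () => { channel = null; });      // force re-creation on the next call
    return channel;
}

export async function enqueueMessage(queue: string, message: object) {
    const ch = await getChannel();
    await ch.assertQueue(queue);
    ch.sendToQueue(queue, Buffer.from(JSON.stringify(message)));
}

Note that this sketch doesn't guard against two concurrent calls racing to create the channel, and you would also want connection-level error handling in production.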

Related

What is the correct way to perform a single blocking, synchronous receive with Pika?

I would like to use Pika / RabbitMQ in a pattern similar to a standard socket: that is, set up the connection, then make blocking synchronous calls to receive a single message each time I'm ready to do more work.
Option A: basic_get
The basic_get method on a BlockingConnection channel offers the ability to receive a message, but it returns immediately if there is no message available. This is like a socket recv call with blocking disabled. I could use this approach with a timeout to poll continuously, but that's not efficient.
Option B: basic_consume
The basic_consume method of BlockingConnection could do the job, but it has the strange requirement that I call start_consuming() somewhere else, in a thread by itself. Since the callers of my receive method already expect to block while waiting for a message, this seems like a waste of a thread.
Is it possible with Pika to do the equivalent of socket.recv(blocking=True)?
Run Pika on its own thread and use basic_consume with a prefetch value of 1 (if you really want a single message at a time). Insert messages into some sort of synchronized data structure that your callers can block on.
Be sure to acknowledge your messages correctly from other threads (example)
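A rough sketch of that pattern with pika 1.x (the queue name and helper names here are illustrative, and acking as soon as the message is handed off is a simplification):

import queue
import threading

import pika

messages = queue.Queue()  # thread-safe structure the callers block on

def consume(queue_name="your_queue_name"):
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.basic_qos(prefetch_count=1)  # at most one unacknowledged message at a time

    def on_message(ch, method, properties, body):
        messages.put(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)  # ack on the Pika thread

    channel.basic_consume(queue=queue_name, on_message_callback=on_message)
    channel.start_consuming()

threading.Thread(target=consume, daemon=True).start()

body = messages.get()  # blocks like socket.recv(blocking=True)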
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
Use the channel's basic_get method like in this example:
credentials = pika.PlainCredentials('username', 'password')
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', credentials=credentials))
channel = connection.channel()
inmessage = channel.basic_get("your_queue_name", auto_ack=True)
inmessage is a tuple of 3 elements; the element at index 2 is your message's body.
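Continuing the snippet above: basic_get returns (None, None, None) when the queue is empty, so you can unpack and check, for example:

method_frame, properties, body = inmessage
if method_frame is None:
    print("queue was empty")
else:
    print(body)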

Azure service bus multiple instances for the same subscriber

I have a situation where an ASP.NET Core application registers a subscription client to a topic on startup (via IHostedService). This subscription client essentially holds a dictionary of callbacks that need to be fired whenever it detects a new message on the topic with a given id (the id is stored in the message properties). The dictionary is in memory and lives throughout the lifetime of the application.
Everything works fine on a single instance of the ASP.NET Core app service on Azure, but as soon as I scale out to two instances I notice that the callbacks in the subscription sometimes do not fire. This makes sense, as there are now two instances, each with its own dictionary store of callbacks.
So I updated the code to check whether the id of the subscription exists in the dictionary: if not, abandon the message; if yes, get the callback and invoke it.
public async Task HandleMessage(Microsoft.Azure.ServiceBus.Message message, CancellationToken cancellationToken)
{
    var queueItem = this.converter.DeserializeItem(message);
    var sessionId = // get the session id from the message
    if (string.IsNullOrEmpty(sessionId))
    {
        await this.subscriptionClient.AbandonAsync(message.SystemProperties.LockToken);
        return;
    }
    if (!this.subscriptions.TryGetValue(sessionId, out var subscription))
    {
        await this.subscriptionClient.AbandonAsync(message.SystemProperties.LockToken);
        return;
    }
    await subscription.Call(queueItem);
    // subscription was found and executed. Complete message
    await this.subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
}
However, the problem still occurs. My only guess is that when calling AbandonAsync, the same instance is picking up the message again?
I guess what I am really trying to ask is, if I have multiple instances of a topic subscription client all pointing to the same subscriber for the topic, is it possible for all the instances to get a copy of the message? Or is that not guaranteed.
if I have multiple instances of a topic subscription client all pointing to the same subscriber for the topic, is it possible for all the instances to get a copy of the message? Or is that not guaranteed.
No. If it's the same subscription all clients are pointing to, only one will be receiving that message.
You're running into an issue of scaling out with competing consumers. When you scale out, you never know which instance will pick up the message, and since your state is local (in the memory of each instance), this will fail from time to time. An additional downside is cost: by fetching messages on the "wrong" instance and abandoning them, you pay a higher price on the messaging side.
To address this you either need a shared/centralized store or need to change your architecture around this constraint.
I managed to solve the issue by making use of service bus sessions. What I was trying to do with the dictionary of callbacks is basically a session manager anyway!
Service bus sessions allow me to have multiple instances of a session client all pointing to the same subscription. However, each instance will only know or care about the sessions it is currently dealing with.
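For reference, a rough sketch of that wiring with Microsoft.Azure.ServiceBus sessions (this assumes the subscription was created with sessions enabled and that senders set Message.SessionId; the option values are illustrative):

this.subscriptionClient.RegisterSessionHandler(
    async (session, message, cancellationToken) =>
    {
        // Only the instance holding the session lock receives this message,
        // so the in-memory dictionary lookup is safe here.
        if (this.subscriptions.TryGetValue(message.SessionId, out var subscription))
        {
            var queueItem = this.converter.DeserializeItem(message);
            await subscription.Call(queueItem);
            await session.CompleteAsync(message.SystemProperties.LockToken);
        }
    },
    new SessionHandlerOptions(args => Task.CompletedTask)
    {
        MaxConcurrentSessions = 5, // illustrative value
        AutoComplete = false
    });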

MessageBus: wait when processing is done and send ACK to requestor

We work with external TCP/IP interfaces, and one of the requirements is to keep the connection open, wait until processing is done, and send an ACK with the results back.
What would be the best approach to achieve that, assuming we want to use a message bus (MassTransit/NServiceBus) for communication with the processing module and for tracking message states: received, processing, succeeded, failed?
Specifically, when a message arrives at the handler/consumer, how will it know about the TCP/IP connection? Should I store the connection in some custom container and inject it into the consumer?
Any guidance is appreciated. Thanks.
The consumer will know how to initiate and manage the TCP connection lifecycle.
When a message is received, the handler can invoke the code which performs some action based on the message data. Whether this involves displaying a green elephant on a screen somewhere or opening a port, making a call, and then processing the ACK, does not change how you handle the message.
The actual code which is responsible for performing the action could be packaged into something like a nuget package and exposed over some kind of generic interface if that makes you happier, but there is no contradiction with a component having a dual role as a message consumer and processor of that message.
A new instance of the consumer will be created for each message received. Also, in my case, the consumer can't initiate the TCP/IP connection; it has already been opened earlier (and is stored somewhere else), and the consumer just needs access to use it.
Sorry, I should have read your original question more closely.
There is a solution to shared access to a resource from NServiceBus, as documented here.
public class SomeEventHandler : IHandleMessages<SomeEvent>
{
    private IMakeTcpCalls _caller;

    public SomeEventHandler(IMakeTcpCalls caller)
    {
        _caller = caller;
    }

    public Task Handle(SomeEvent message, IMessageHandlerContext context)
    {
        // Use the caller
        var ack = _caller.Call(message.SomeData);
        // Do something with ack
        ...
        return Task.CompletedTask;
    }
}
You would ideally have a DI container which would manage the lifecycle of the IMakeTcpCall instance as a singleton (though this might get weird in high volume scenarios), so that you can re-use the open TCP channel.
Eg, in Castle Windsor:
Component.For<IMakeTcpCalls>().ImplementedBy<MyThreadsafeTcpCaller>().LifestyleSingleton();
Castle Windsor integrates with NServiceBus.
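A hedged sketch of that wiring for NServiceBus 6/7 with the NServiceBus.CastleWindsor adapter (the exact calls vary by version, so treat this as an assumption to verify against the documentation):

// using Castle.MicroKernel.Registration;
// using Castle.Windsor;
var container = new WindsorContainer();
container.Register(
    Component.For<IMakeTcpCalls>()
             .ImplementedBy<MyThreadsafeTcpCaller>()
             .LifestyleSingleton());

var endpointConfiguration = new EndpointConfiguration("MyEndpoint");
endpointConfiguration.UseContainer<WindsorBuilder>(
    customizations => customizations.ExistingContainer(container));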

How to handle asynchronous errors in Go?

I am working on my first real Go project, a messaging API. I use channels to pass messages and other data between user goroutines and library goroutines that use a thread-unsafe, event-based C protocol library. For details https://github.com/apache/qpid-proton/blob/master/proton-c/bindings/go/README.md
My question is in 2 related parts:
1. What are common idioms for handling errors across channels?
If the goroutine at one end blows up, how do I ensure the other end unblocks, gets an error value, and doesn't get blocked again later?
For readers:
I can close the channel, but that carries no error info.
I could pass a struct { data, error },
or use a second channel.
Pros & cons? Other ideas?
For writers: I can't close without risking a panic, so I guess I need a second channel. Is this idiomatic?
select {
case sendChan <- data:
    sentOk()
case err := <-errChan:
    oops(err)
}
I also can't write after close, so I need to store the error somewhere and check it before trying to write. Any other approaches?
2. Exposing channels in APIs.
I need channels to pass error info: should I make those channels public fields or hide them in methods?
There is a tradeoff, and I don't have the experience to evaluate it:
Exposing channels lets users select directly, but it requires them to correctly implement the error-handling patterns (check for errors before write, select on error as well as write). This seems complex and error-prone, but maybe that's because I'm not seasoned in Go.
Hiding channels in a method simplifies and enforces correct use of the library. But now an async user must create their own goroutine and channel(s). They may just duplicate what the library does already, which is silly. Also there is an extra goroutine and channel on the path. Maybe that's not a big deal, but the data channel is the critical path for my library and I think it has to be hidden along with the error channel.
I could do both: expose the channels for power users and provide a simple method wrapper for people with simple needs. That's more to support but worth it if neither alone can fit all cases.
The standard net.Conn uses blocking methods, not channels, and I wrote goroutines to pump data to my C event-loop channel, so I know it can be done, but I did not find it trivial. net.Conn wraps system calls, not channels, underneath, so "exposing the channels" is not an option. Do any of the standard libraries export channels with error handling? (time.After doesn't count, there are no errors.)
Thanks a lot!
Alan
Your question is a bit on the broad side but I'll try to give some guidance based on my experience writing highly concurrent code...
Personally I think making the channel a property of the object that gets initialized in a nice, helpful NewMyObject() *MyObject constructor is a good design pattern. It means code using the object doesn't have to do boilerplate setup every time it wants to call some asynchronous method the type offers.
For readers: I can close the channel, but no error info. I could pass a struct { data, error } or use a second channel. Pros & cons? Other ideas?
Let the reader signal that it is done by closing the abort channel. The reader should simply use the v, ok := <-FromChannel form and move on with execution once the data or error channel has been closed. This should prevent 'send on closed channel' panics from the workers, since they close their own channel and return. When a non-nil error arrives, the reader will know to move on.
For writers: I can't close without a panic so I guess I need a second channel. Is this idiomatic?
Yes. Sadly, I was quite annoyed by the uni-directional behavior of channels and thought it should be abstracted. Regardless, it's not. In my code I would not define this on the object that does the asynchronous work. The paradigm I prefer is the closing signal (since a send on a channel is not one-to-many; only one goroutine will read that value). Instead, I allocate the abort channel in the calling code, and if things need to shut down you close the abort channel and all the goroutines doing asynchronous work that are listening on that channel do their cleanup and return. You should also use a WaitGroup so you can wait for the goroutines to return before moving on.
So, my basic summary:
1) Let the caller of asynchronous methods signal that it's time to stop, not the other way around; a WaitGroup is better used to coordinate the returns.
2) Use a sync.WaitGroup in the calling code so you know when your goroutines are finished and you can move on.
3) Allocate your error channel in the calling code and take advantage of the one-to-many signal produced by closing a channel; if you send on a channel allocated in the caller, only a single goroutine will read that value, whereas if you put a channel on each instance you have to iterate over a collection of instances to send to each one.
4) If you have a type that provides async methods that do work in the background, set up the channels to read from in its initializer, document the async methods saying where to listen for data, and provide an example of a non-blocking select that passes an abort channel into the async method and listens on the method's data and error channels. If you need to kill a single goroutine, you can do so by closing one of the channels it owns rather than killing them all by closing the caller's abort channel.
Hopefully that all makes sense.
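Putting those points together, here is a minimal sketch of the caller-owned abort and error channels plus a WaitGroup (all names and values are illustrative):

package main

import (
    "errors"
    "fmt"
    "sync"
)

func worker(data <-chan int, errc chan<- error, abort <-chan struct{}, wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        select {
        case d, ok := <-data:
            if !ok {
                return // data channel closed: normal shutdown
            }
            if d < 0 {
                errc <- errors.New("bad value") // report; the caller decides what to do
            }
        case <-abort:
            return // caller closed abort: every listening worker returns
        }
    }
}

func main() {
    data := make(chan int)
    errc := make(chan error, 1)  // allocated by the caller
    abort := make(chan struct{}) // closed by the caller to stop all workers

    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go worker(data, errc, abort, &wg)
    }

    data <- 1  // handled by some worker
    data <- -1 // provokes an error

    if err := <-errc; err != nil {
        close(abort) // one close unblocks every worker
    }
    wg.Wait()
    fmt.Println("all workers stopped")
}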

WCF service clear buffer

I am currently working on a WCF service and have a small issue. The service is a Polling Duplex service. I initiate data transfer through a message sent to the server. Then the server sends large packets of data back through the callback channel to the client fairly quickly.
To stop the transfer, I send a message to the server telling it to stop. Then it sends a message over the callback channel acknowledging this, to let the client know.
The problem is that a bunch of packets of data get buffered up to be sent through the callback channel to the client. This causes a long wait for the acknowledgement to make it back because it has to wait for all the data to go through first.
Is there any way that I can clear the buffer for the callback channel on the server side? I don't need to worry about losing the data; I just need to throw it away and immediately send the acknowledgement message.
I'm not sure if this will lead you in the right direction or not... I have a similar service where, when I look in my Subscribe() method, I can access this:
var context = OperationContext.Current;
var sessionId = context.SessionId;
var currentClient = context.GetCallbackChannel<IClient>();
context.OutgoingMessageHeaders.Clear();
context.OutgoingMessageProperties.Clear();
Now, if you had a way of using your IClient object and of accessing the context you got that IClient instance from (resolving its context), could running the following two statements do what you want?
context.OutgoingMessageHeaders.Clear();
context.OutgoingMessageProperties.Clear();
Just a quick ramble from my thoughts; I would love to know whether this fixes it, for personal information if nothing else. Could you cache the OperationContext as part of a SubscriptionObject containing two properties, the first for the OperationContext and the second for your IClient object?
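To make that last idea concrete, a small sketch of such a SubscriptionObject (the names are illustrative):

// using System.ServiceModel;
public class SubscriptionObject
{
    public OperationContext Context { get; set; }
    public IClient Client { get; set; }
}

// In Subscribe(), capture both so the context is reachable later
// when you want to clear its outgoing headers/properties:
var subscription = new SubscriptionObject
{
    Context = OperationContext.Current,
    Client = OperationContext.Current.GetCallbackChannel<IClient>()
};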