I am using the request/reply model of NServiceBus for one of my projects. There is a self-hosted service bus listening for a request message, which replies with a response message.
So in the WCF service code I have written the following:
// Send the message to the bus.
var synchronousMessageSent =
    this._bus.Send(destinationQueueName, requestMessage)
        .Register(
            (AsyncCallback)delegate(IAsyncResult ar)
            {
                // Process the response from the message.
                NServiceBus.CompletionResult completionResult = ar.AsyncState as NServiceBus.CompletionResult;
                if (completionResult != null)
                {
                    // Set the response messages.
                    response = completionResult.Messages;
                }
            },
            null);

// Block the current thread.
synchronousMessageSent.AsyncWaitHandle.WaitOne(1000);
return response;
The destination queue sends the reply. I get the result once or twice, but after that the reply stops coming back to the client. Am I doing anything wrong?
Why are you trying to turn an asynchronous framework into a synchronous one? There is a fundamental flaw in what you are trying to do.
You should take a long, hard look at your design and appreciate the benefits of asynchronous calls. The fact that you are doing
// block the current thread.
synchronousMessageSent.AsyncWaitHandle.WaitOne(1000);
is highly concerning. What are you trying to achieve with this? Design your system around asynchronous messaging and you will have a MUCH better system. Otherwise you might as well just use some kind of blocking TCP/IP sockets.
Related
We work with external TCP/IP interfaces, and one of the requirements is to keep the connection open, wait until processing is done, and send an ACK with the results back.
What would be the best approach to achieve that, assuming we want to use a message bus (MassTransit/NServiceBus) for communication with the processing module and for tracing message states: received, processing, succeeded, failed?
Specifically, when a message arrives at the handler/consumer, how will it know about the TCP/IP connection? Should I store it in some custom container and inject it into the consumer?
Any guidance is appreciated. Thanks.
The consumer will know how to initiate and manage the TCP connection lifecycle.
When a message is received, the handler can invoke the code which performs some action based on the message data. Whether this involves displaying a green elephant on a screen somewhere, or opening a port, making a call, and then processing the ACK, does not change how you handle the message.
The actual code responsible for performing the action could be packaged into something like a NuGet package and exposed over some kind of generic interface if that makes you happier, but there is no contradiction in a component having a dual role as a message consumer and processor of that message.
A new instance of the consumer will be created for each message received. Also, in my case, the consumer can't initiate the TCP/IP connection; it has already been opened earlier (and is stored somewhere else), and the consumer just needs access to use it.
Sorry, I should have read your original question more closely.
There is a solution for shared access to a resource from NServiceBus, as documented here.
public class SomeEventHandler : IHandleMessages<SomeEvent>
{
    private readonly IMakeTcpCalls _caller;

    public SomeEventHandler(IMakeTcpCalls caller)
    {
        _caller = caller;
    }

    public Task Handle(SomeEvent message, IMessageHandlerContext context)
    {
        // Use the caller
        var ack = _caller.Call(message.SomeData);

        // Do something with ack
        ...

        return Task.CompletedTask;
    }
}
You would ideally have a DI container manage the lifecycle of the IMakeTcpCalls instance as a singleton (though this might get weird in high-volume scenarios), so that you can re-use the open TCP channel.
E.g., in Castle Windsor:
Component.For<IMakeTcpCalls>().ImplementedBy<MyThreadsafeTcpCaller>().LifestyleSingleton();
Castle Windsor integrates with NServiceBus.
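For completeness, here is a hedged sketch of how that singleton registration might be wired into an NServiceBus endpoint, assuming the NServiceBus.CastleWindsor integration package is used (the endpoint name is made up):

var container = new WindsorContainer();
container.Register(
    Component.For<IMakeTcpCalls>()
             .ImplementedBy<MyThreadsafeTcpCaller>()
             .LifestyleSingleton());

// Hand the pre-built container to NServiceBus so handlers such as
// SomeEventHandler receive the same singleton IMakeTcpCalls instance.
var endpointConfiguration = new EndpointConfiguration("MyEndpoint");
endpointConfiguration.UseContainer<WindsorBuilder>(
    customizations => customizations.ExistingContainer(container));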
Let's just accept for a moment that it is not a horrible idea to implement RPC over message queues (like RabbitMQ) -- sometimes it might be necessary when interfacing with legacy systems.
In the case of RPC over RabbitMQ, a client sends a message to the broker, the broker routes the message to a worker, and the worker returns the result through the broker to the client. However, if a worker implements more than one remote method, then the different calls somehow need to be routed to different listeners.
What is the general practice in this case? All the RPC-over-MQ examples show only one remote method. It would be nice and easy to just use the method name as the routing rule/queue name, but I don't know whether this is the right way to do it.
Let's just accept for a moment that it is not a horrible idea to implement RPC over message queues (like RabbitMQ)
It's not horrible at all! It's common, and recommended in many situations - not just legacy integration.
... OK, on to your actual question now :)
From a very high-level perspective, here is what you need to do.
Your request and response need to have two key pieces of information:
a correlation-id
a reply-to queue
These bits of information will allow you to correlate the original request and the response.
Before Sending the Request
Have your requesting code create an exclusive queue for itself. This queue will be used to receive the replies.
Create a new correlation id - typically a GUID or UUID to guarantee uniqueness.
When Sending the Request
Attach the correlation id that you generated to the message properties. There is a correlationId property that you should use for this.
Store the correlation id with the associated callback function (reply handler) for the request, somewhere inside the code that is making the request. You will need this when the reply comes in.
Attach the name of the exclusive queue that you created to the replyTo property of the message, as well.
With all this done, you can send the message across RabbitMQ.
When Replying
The reply code needs to use both the correlationId and the replyTo fields from the original message, so be sure to grab those.
The reply should be sent directly to the replyTo queue, not published through an exchange. Use the "send to queue" feature of whatever library you're using to send the response directly to the replyTo queue.
Be sure to include the correlationId in the response as well - this is the critical part that answers your question.
When Handling the Reply
The code that made the original request will receive the message from the replyTo queue. It will then pull the correlationId out of the message properties.
Use the correlation id to look up the callback method for the request - the code that handles the response. Pass the message to this callback method, and you're pretty much done.
The Implementation Details
This works from a high-level perspective. When you get down into the code, the implementation details will vary depending on the language and driver/library you are using.
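To make the steps above concrete, here is a minimal C# sketch using the official RabbitMQ.Client library. The queue name rpc_requests and the in-memory handler registry are assumptions for illustration, not part of the original answer:

using System;
using System.Collections.Concurrent;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// Maps correlation ids to reply handlers (the "store the correlation id" step).
var pendingReplies = new ConcurrentDictionary<string, Action<string>>();

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Before sending: declare an exclusive, server-named queue for replies.
var replyQueue = channel.QueueDeclare().QueueName;

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    // When handling the reply: look up the callback by correlation id.
    if (pendingReplies.TryRemove(ea.BasicProperties.CorrelationId, out var onReply))
        onReply(Encoding.UTF8.GetString(ea.Body.ToArray()));
};
channel.BasicConsume(queue: replyQueue, autoAck: true, consumer: consumer);

// When sending: attach the correlation id and the replyTo queue name.
var correlationId = Guid.NewGuid().ToString();
pendingReplies[correlationId] = reply => Console.WriteLine($"Got reply: {reply}");

var props = channel.CreateBasicProperties();
props.CorrelationId = correlationId;
props.ReplyTo = replyQueue;
channel.BasicPublish(exchange: "",
                     routingKey: "rpc_requests",
                     basicProperties: props,
                     body: Encoding.UTF8.GetBytes("do-something"));

On the server side, the reply is published with the default exchange "" and routingKey set to the request's ReplyTo value, which is exactly the "send to queue" behavior described above.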
Most of the good RabbitMQ libraries for any given language have Request/Response built in. If yours doesn't, you might want to look for a different library. Unless you are writing a patterns-based library on top of the AMQP protocol, you should look for a library that has the common patterns implemented for you.
If you need more information on the Request/Reply pattern, including all of the details that I've provided here (and more), check out these resources:
My own RabbitMQ Patterns email course / ebook
RabbitMQ Tutorials
Enterprise Integration Patterns - be sure to buy the book for the complete description / implementation pattern. It's worth having this book.
If you're working in Node.js, I recommend using the wascally library, which includes the Request/Reply feature you need. For Ruby, check out bunny. For Java or .NET, look at some of the many service bus implementations around. In .NET, I recommend NServiceBus or MassTransit.
I've found that using a new reply-to queue per request can get really inefficient, especially when running RabbitMQ on a cluster.
As suggested in the comments, direct reply-to seems to be the way to go. I've documented here all the options I tried before settling on that one.
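To illustrate what direct reply-to changes at the protocol level, here is a hedged C# sketch with RabbitMQ.Client, reusing the connection/channel setup from the sketch earlier (rpc_requests is again an assumed queue name). Instead of declaring a queue per request, the client consumes from the pseudo-queue amq.rabbitmq.reply-to, which RabbitMQ requires to be consumed in no-ack mode before publishing:

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    // Replies still carry the CorrelationId, so correlation works as before.
    Console.WriteLine($"{ea.BasicProperties.CorrelationId}: " +
                      Encoding.UTF8.GetString(ea.Body.ToArray()));
};
// Consume from the pseudo-queue before publishing; autoAck: true is mandatory here.
channel.BasicConsume(queue: "amq.rabbitmq.reply-to", autoAck: true, consumer: consumer);

var props = channel.CreateBasicProperties();
props.CorrelationId = Guid.NewGuid().ToString();
props.ReplyTo = "amq.rabbitmq.reply-to";
channel.BasicPublish(exchange: "",
                     routingKey: "rpc_requests",
                     basicProperties: props,
                     body: Encoding.UTF8.GetBytes("do-something"));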
I wrote an npm package, amq.rabbitmq.reply-to.js, that:
Uses direct reply-to - a feature that allows RPC (request/reply) clients, with a design similar to that demonstrated in tutorial 6, to avoid declaring a response queue per request (https://www.rabbitmq.com/direct-reply-to.html)
Creates an event emitter where RPC responses are published by correlationId
as suggested by https://github.com/squaremo/amqp.node/issues/259#issuecomment-230165144
Usage:
const rabbitmqreplyto = require('amq.rabbitmq.reply-to.js');
const serverCallbackTimesTen = (message, rpcServer) => {
const n = parseInt(message);
return Promise.resolve(`${n * 10}`);
};
let rpcServer;
let rpcClient;
Promise.resolve().then(() => {
const serverOptions = new rabbitmqreplyto.RpcServerOptions(
/* url */ undefined,
/* serverId */ undefined,
/* callback */ serverCallbackTimesTen);
return rabbitmqreplyto.RpcServer.Create(serverOptions);
}).then((rpcServerP) => {
rpcServer = rpcServerP;
return rabbitmqreplyto.RpcClient.Create();
}).then((rpcClientP) => {
rpcClient = rpcClientP;
const promises = [];
for (let i = 1; i <= 20; i++) {
promises.push(rpcClient.sendRPCMessage(`${i}`));
}
return Promise.all(promises);
}).then((replies) => {
console.log(replies);
return Promise.all([rpcServer.Close(), rpcClient.Close()]);
});
//['10',
// '20',
// '30',
// '40',
// '50',
// '60',
// '70',
// '80',
// '90',
// '100',
// '110',
// '120',
// '130',
// '140',
// '150',
// '160',
// '170',
// '180',
// '190',
// '200']
I am currently working on a WCF service and have a small issue. The service is a Polling Duplex service. I initiate data transfer through a message sent to the server. Then the server sends large packets of data back through the callback channel to the client fairly quickly.
To stop it, I send a message to the server telling it to stop. The server then sends a message over the callback channel acknowledging this, to let the client know.
The problem is that a bunch of packets of data get buffered up to be sent through the callback channel to the client. This causes a long wait for the acknowledgement to make it back because it has to wait for all the data to go through first.
Is there any way that I can clear the buffer for the callback channel on the server side? I don't need to worry about losing the data; I just need to throw it away and immediately send the acknowledgement message.
I'm not sure if this will lead you in the correct direction or not... I have a similar service where, when I look in my Subscribe() method, I can access this:
var context = OperationContext.Current;
var sessionId = context.SessionId;
var currentClient = context.GetCallbackChannel<IClient>();
context.OutgoingMessageHeaders.Clear();
context.OutgoingMessageProperties.Clear();
Now, if you had a way of using your IClient object, and could access the context where you got the instance of IClient from (resolve its context), could running the following two statements do what you want?
context.OutgoingMessageHeaders.Clear();
context.OutgoingMessageProperties.Clear();
Just a quick ramble from my thoughts; I'd love to know whether this fixes it, for personal information if nothing else. You could cache the OperationContext as part of a SubscriptionObject containing two properties: the first for the OperationContext and the second for your IClient object, as sketched below.
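To make that idea concrete, here is a minimal sketch of such a SubscriptionObject (the names are my own; IClient is the callback contract from the snippet above):

public class SubscriptionObject
{
    public OperationContext Context { get; set; }
    public IClient Client { get; set; }
}

// In Subscribe(), capture both so the context can be resolved again later:
var subscription = new SubscriptionObject
{
    Context = OperationContext.Current,
    Client = OperationContext.Current.GetCallbackChannel<IClient>()
};
// Store 'subscription' keyed by SessionId; later you can call
// subscription.Context.OutgoingMessageHeaders.Clear() etc. for that client.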
The below text is an effort to expand and add color to this question:
How do I prevent a misbehaving client from taking down the entire service?
I have essentially this scenario: a WCF service is up and running with a client callback using straightforward, simple one-way communication, not very different from this one:
public interface IMyClientContract
{
    [OperationContract(IsOneWay = true)]
    void SomethingChanged(simpleObject myObj);
}
I'm calling this method potentially thousands of times a second from the service to what will eventually be about 50 concurrently connected clients, with as low latency as possible (<15 ms would be nice). This works fine until I set a breakpoint in one of the client apps connected to the server. After maybe 2-5 seconds the service hangs, and none of the other clients receive any data for about 30 seconds or so, until the service registers a connection fault event and disconnects the offending client. After this, all the other clients continue on their merry way, receiving messages.
I've done research on serviceThrottling, concurrency tweaking, setting ThreadPool minimum threads, WCF secret sauces, and the whole nine yards, but at the end of the day the MSDN article WCF Essentials - One-Way Calls, Callbacks and Events describes exactly the issue I'm having without really making a recommendation.
The third solution that allows the service to safely call back to the client is to have the callback contract operations configured as one-way operations. Doing so enables the service to call back even when concurrency is set to single-threaded, because there will not be any reply message to contend for the lock.
but earlier the article describes the issue I'm seeing, only from the client's perspective:
When one-way calls reach the service, they may not be dispatched all at once and may be queued up on the service side to be dispatched one at a time, all according to the service configured concurrency mode behavior and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call
I can only assume that the reverse is true: the number of messages queued to the client has exceeded the queue's capacity, and the thread pool is now filled with threads attempting to call this client, all of them blocked.
What is the right way to handle this? Should I research a way to check how many messages are queued at the service communication layer per client and abort their connections after a certain limit is reached?
It almost seems that if the WCF service itself blocks on a queue filling up, then all the async / one-way / fire-and-forget strategies I could ever implement inside the service will still get blocked whenever one client's queue fills up.
I don't know much about client callbacks, but this sounds similar to generic WCF blocking issues. I often solve these problems by spawning a BackgroundWorker and performing the client call in that thread. Meanwhile, the main thread counts how long the child thread is taking. If the child has not finished within a few milliseconds, the main thread just moves on and abandons the thread (it eventually dies by itself, so there is no memory leak). This is basically what Mr. Graves suggests with the phrase "fire-and-forget".
Update:
I implemented a fire-and-forget setup to call the clients' callback channels, and the server no longer blocks once a client's buffer fills up.
MyEvent is an event whose delegate matches one of the methods defined in the WCF client contract. When clients connect, I'm essentially adding their callback to the event:
MyEvent += OperationContext.Current.GetCallbackChannel<IFancyClientContract>().SomethingChanged;
etc... and then, to send this data to all clients, I'm doing the following:
// Serialize using protobuf-net.
using (var ms = new MemoryStream())
{
    ProtoBuf.Serializer.Serialize(ms, new SpecialDataTransferObject(inputData));
    byte[] data = ms.ToArray(); // ToArray(), not GetBuffer(): GetBuffer() returns the whole internal buffer, including unused trailing bytes.
    Parallel.ForEach(MyEvent.GetInvocationList(), p => ThreadUtil.FireAndForget(p, data));
}
In the ThreadUtil class, I made essentially the following change to the code defined in the fire-and-forget article:
static void InvokeWrappedDelegate(Delegate d, object[] args)
{
    try
    {
        d.DynamicInvoke(args);
    }
    catch (Exception ex)
    {
        // THIS will eventually throw once the client's WCF callback channel has
        // filled up and timed out, and it will throw once for every single payload
        // you ever tried to send them, so do some smarter logging here!!
        Console.WriteLine("Error calling client, attempting to disconnect.");
        try
        {
            // d.Target is an IContextChannel object, kept in a dictionary of active
            // connections cross-referenced by hash code just for this exact occasion.
            MyService.SingletonServiceController.TerminateClientChannelByHashcode(d.Target.GetHashCode());
        }
        catch (Exception ex2)
        {
            Console.WriteLine("Attempt to disconnect client failed: " + ex2.ToString());
        }
    }
}
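The FireAndForget helper itself comes from the article mentioned above and isn't reproduced here; as a rough sketch of what such a helper typically looks like (assuming it lives in the same ThreadUtil class as InvokeWrappedDelegate, and using System.Threading):

static class ThreadUtil
{
    public static void FireAndForget(Delegate d, params object[] args)
    {
        // Queue the invocation on the thread pool and return immediately;
        // InvokeWrappedDelegate (above) catches and logs any exception.
        ThreadPool.QueueUserWorkItem(_ => InvokeWrappedDelegate(d, args));
    }
}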
I don't have any good ideas for how to kill all the pending packets the server is still trying to deliver. Once I get the first exception, I should in theory be able to go and terminate all the other requests in some queue somewhere, but this setup is functional and meets the objectives.
I am investigating NServiceBus and I am unsure how (or even if) I could use it to handle this scenario:
I have multiple clients sending work requests, which the distributor farms out to workers. The work will take a long time to complete and I would like the workers to report progress back to the client that sent the original request.
I have looked at the full duplex sample and also at how to add the distributor to that sample. I've got these working, but when I modify them to reply with a series of progress messages (with a delay between the messages, as per the code shown below), the client receives all the progress messages at the same time.
public class RequestDataMessageHandler : IHandleMessages<RequestDataMessage>
{
    public IBus Bus { get; set; }

    public void Handle(RequestDataMessage message)
    {
        for (var i = 0; i < 10; i++)
        {
            var count = i;
            var response = this.Bus.CreateInstance<DataResponseMessage>(m =>
            {
                m.DataId = message.DataId;
                m.Progress = count * 10;
            });
            this.Bus.Reply(response);
            Thread.Sleep(1000);
        }
    }
}
I suspect I've not understood something basic about how NServiceBus works. Could someone explain where I've gone wrong, or point me at some examples and/or documentation?
What you have constructed will always send the messages as part of the same transaction. Since there is one transaction per handler, you won't be able to communicate progress this way. You would have to have a separate endpoint for each chunk of processing that would communicate progress. We've implemented progress reporting by updating something external that is not involved in the transaction. That could be done by sending a non-transactional message to another endpoint to update the progress, or by something like an RPC call. From there you could have something poll that progress data store.
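To make the non-transactional option concrete, here is a hedged sketch (assuming an MSMQ transport with an ambient handler transaction; the ProgressMessage type and the progress endpoint name are made up). Suppressing the ambient transaction lets each progress message dispatch immediately instead of waiting for the handler to commit:

// Requires a reference to System.Transactions.
public void Handle(RequestDataMessage message)
{
    for (var i = 0; i < 10; i++)
    {
        // Escape the handler's ambient transaction so this send is
        // dispatched right away rather than held until the handler commits.
        using (new TransactionScope(TransactionScopeOption.Suppress))
        {
            this.Bus.Send("ProgressEndpoint", new ProgressMessage
            {
                DataId = message.DataId,
                Progress = i * 10
            });
        }
        Thread.Sleep(1000);
    }
}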
Have your workers use bus.Reply() to send messages back to your clients. Reply() will automatically send the message to the endpoint that sent the original message.