WCF: Message Framing and Custom Channels

I am trying to understand how I would implement message framing with WCF. The goal is to create a server in WCF that can handle proprietary formats over TCP. I can't use the net.tcp binding because that is only for SOAP.
I need to write a custom channel that would receive messages in the following format: a decimal length prefix, then a space, then the payload. An example message would be "5 abcde". In particular I am not sure how to do framing in my custom channel.
Here is some sample code:
class CustomChannel : IDuplexSessionChannel
{
    private class PendingRead
    {
        public NetworkStream Stream = null;
        public byte[] Buffer = null;
        public bool IsReading = false;
    }

    private CommunicationState state = CommunicationState.Closed;
    private TcpClient tcpClient = null;
    private MessageEncoder encoder = null;
    private BufferManager bufferManager = null;
    private TransportBindingElement bindingElement = null;
    private Uri uri = null;
    private PendingRead pendingRead;

    public CustomChannel(Uri uri, TransportBindingElement bindingElement, MessageEncoderFactory encoderFactory, BufferManager bufferManager, TcpClient tcpClient)
    {
        this.uri = uri;
        this.bindingElement = bindingElement;
        this.tcpClient = tcpClient;
        this.bufferManager = bufferManager;
        state = CommunicationState.Created;
    }

    public IAsyncResult BeginTryReceive(TimeSpan timeout, AsyncCallback callback, object state)
    {
        if (this.state != CommunicationState.Opened) return null;
        byte[] buffer = bufferManager.TakeBuffer(tcpClient.Available);
        NetworkStream stream = tcpClient.GetStream();
        pendingRead = new PendingRead { Stream = stream, Buffer = buffer, IsReading = true };
        IAsyncResult result = stream.BeginRead(buffer, 0, buffer.Length, callback, state);
        return result;
    }

    public bool EndTryReceive(IAsyncResult result, out Message message)
    {
        // complete the read started in BeginTryReceive
        int byteCount = pendingRead.Stream.EndRead(result);
        string content = Encoding.ASCII.GetString(pendingRead.Buffer, 0, byteCount);
        // framing logic here
        Message.CreateMessage( ... )
    }
}
So basically the first time around, EndTryReceive could get just a piece of the message from the pending read buffer ("5 ab"), and the second time around it could get the rest. The problem is that when EndTryReceive gets called the first time, I am forced to create a Message object, which means a partial Message goes up the channel stack.
What I really want to do is to make sure that I have my full message "5 abcde" in the buffer, so that when I construct the message in EndTryReceive it is a full message.
Does anyone have any examples of how they are doing custom framing with WCF?
Thanks,
Vadim

Framing at the wire level is not something that the WCF channel model really cares about; it's pretty much up to you to handle it.
What I mean by this is that it is your responsibility to ensure that your transport channel returns "entire" messages on a receive (streaming changes that a bit, but only up to a point).
In your case, it seems you're translating receive operations on your channel directly into receive operations on the underlying socket, and that just won't do, because it doesn't give you a chance to enforce your own framing rules.
So really, a single receive operation on your channel might very well translate into more than one receive operation on the underlying socket, and that's fine (you can still do all of that asynchronously, so it doesn't need to affect that part).
So basically the question becomes: what does your protocol framing model look like? Wild guess here, but it looks like messages are length prefixed, with the length encoded as a decimal string? (Looks annoying.)
I think your best bet in that case would be to have your transport buffer incoming data (say, up to 64KB or whatever), and then on each receive operation check the buffer to see if it contains enough bytes to extract the length of the incoming message. If so, either read as many bytes as necessary from the buffer, or flush the buffer and read the remaining bytes from the socket. You'll have to be careful because, depending on how your protocol works, you might end up reading parts of the next message before you actually need them.
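To make that concrete, here is a minimal framing sketch (not a full WCF channel; the List<byte> buffer and method name are just illustrative). It assumes the frame format from the question, a decimal length prefix followed by a space and then the payload, and only surfaces a payload once the whole frame has arrived. Your EndTryReceive would keep appending socket reads to the buffer and only call Message.CreateMessage once this returns true.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

static class Framing
{
    // Returns true and removes one complete frame from the buffer when
    // "<decimal length> <payload>" is fully available; otherwise returns false
    // so the caller knows to issue another socket read and try again.
    public static bool TryExtractFrame(List<byte> buffer, out byte[] payload)
    {
        payload = null;
        int space = buffer.IndexOf((byte)' ');
        if (space < 0) return false;                          // length prefix incomplete
        int length = int.Parse(Encoding.ASCII.GetString(buffer.Take(space).ToArray()));
        if (buffer.Count < space + 1 + length) return false;  // payload incomplete
        payload = buffer.Skip(space + 1).Take(length).ToArray();
        buffer.RemoveRange(0, space + 1 + length);            // keep any bytes of the next frame
        return true;
    }
}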

I agree with thomasr. You can find some basic inspiration in the Microsoft WCF sample "ChunkingChannel".

Related

@JmsListener throwing MessageConversionException

I'm trying to receive a message asynchronously from IBM MQ:
@JmsListener(destination = "queue", containerFactory = "Factory", id = "start")
public Mono<Void> requestProcess(Message message) {
    return Mono.just("").then();
}
Catch:
Caused by: org.springframework.jms.support.converter.MessageConversionException: Cannot convert object of type [reactor.core.publisher.MonoLift] to JMS message. Supported message payloads are: String, byte array, Map<String,?>, Serializable object.
If I switch method type to simple void it works as supposed to. How can I set listener to receive messages in non-blocking reactive way?
The conversion that is failing is the input to requestProcess.
@JmsListener has found a publisher of type reactor.core.publisher.MonoLift, but requestProcess is expecting a Message, and it doesn't have a conversion.
The way round this would be to change the input signature for requestProcess to
@JmsListener(destination = "queue", containerFactory = "Factory", id = "start")
public Mono<Void> requestProcess(MonoLift<?> publisher) {
    ...
}
and modify the method body accordingly.

order reactive extension events

I am receiving messages over UDP on multiple threads. After each reception I raise MessageReceived.OnNext(message).
Because I am using multiple threads, the messages are raised out of order, which is a problem.
How can I order the raising of the messages by the message counter?
(Let's say there is a message.counter property.)
Keep in mind that a message can get lost in transit (let's say that if a counter hole is still not filled after X messages, I raise the next message anyway).
Messages must be raised ASAP (as soon as the next counter has been received).
In stating the requirement for detecting lost messages, you haven't considered the possibility of the last message not arriving; I've added a timeoutDuration which flushes the buffered messages if nothing arrives in the given time - you may want to consider this an error instead, see the comments for how to do this.
I will solve this by defining an extension method with the following signature:
public static IObservable<TSource> Sort<TSource>(
    this IObservable<TSource> source,
    Func<TSource, int> keySelector,
    int gapTolerance = 0,
    TimeSpan timeoutDuration = new TimeSpan(),
    IScheduler scheduler = null)
source is the stream of unsorted messages
keySelector is a function that extracts an int key from a message. I assume the first key sought is 0; amend if necessary.
timeoutDuration is discussed above, if omitted, there is no timeout
gapTolerance is the maximum number of messages held back while waiting for an out of order message. Pass 0 to hold back any number of messages
scheduler is the scheduler to use for the timeout and is supplied for test purposes, a default is used if not given.
Walkthrough
I'll present a line-by-line walkthrough here. The full implementation is repeated below.
Assign Default Scheduler
First of all we must assign a default scheduler if none was supplied:
scheduler = scheduler ?? Scheduler.Default;
Arrange Timeout
Now if a time out was requested, we will replace the source with a copy that will simply terminate and send OnCompleted if a message doesn't arrive in timeoutDuration.
if(timeoutDuration != TimeSpan.Zero)
    source = source.Timeout(
        timeoutDuration,
        Observable.Empty<TSource>(),
        scheduler);
If you wish to send a TimeoutException instead, just delete the second parameter to Timeout - the empty stream, to select an overload that does this. Note we can safely share this with all subscribers, so it is positioned outside the call to Observable.Create.
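For reference, that TimeoutException-raising variant is just (a sketch of the same step):
// terminate the stream with a TimeoutException on prolonged silence
if(timeoutDuration != TimeSpan.Zero)
    source = source.Timeout(timeoutDuration, scheduler);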
Create Subscribe handler
We use Observable.Create to build our stream. The lambda function that is the argument to Create is invoked whenever a subscription occurs and we are passed the calling observer (o). Create returns our IObservable<T> so we return it here.
return Observable.Create<TSource>(o => { ...
Initialize some variables
We will track the next expected key value in nextKey, and create a SortedDictionary to hold the out of order messages until they can be sent.
int nextKey = 0;
var buffer = new SortedDictionary<int, TSource>();
Subscribe to the source, and handle messages
Now we can subscribe to the message stream (possibly with the timeout applied). First we introduce the OnNext handler. The next message is assigned to x:
return source.Subscribe(x => { ...
We invoke the keySelector function to extract the key from the message:
var key = keySelector(x);
If the message has an old key (because it exceeded our tolerance for out of order messages) we are just going to drop it and be done with this message (you may want to act differently):
// drop stale keys
if(key < nextKey) return;
Otherwise, we might have the expected key, in which case we can increment nextKey and send the message:
if(key == nextKey)
{
    nextKey++;
    o.OnNext(x);
}
Or, we might have an out of order future message, in which case we must add it to our buffer. If we do this, we must also ensure our buffer hasn't exceeded our tolerance for storing out of order messages - in this case, we will also bump nextKey to the first key in the buffer, which, because it is a SortedDictionary, is conveniently the next lowest key:
else if(key > nextKey)
{
    buffer.Add(key, x);
    if(gapTolerance != 0 && buffer.Count > gapTolerance)
        nextKey = buffer.First().Key;
}
Now regardless of the outcome above, we need to empty the buffer of any keys that are now ready to go. We use a helper method for this. Note that it adjusts nextKey so we must be careful to pass it by reference. We simply loop over the buffer reading, removing and sending messages as long as the keys follow on from each other, incrementing nextKey each time:
private static void SendNextConsecutiveKeys<TSource>(
    ref int nextKey,
    IObserver<TSource> observer,
    SortedDictionary<int, TSource> buffer)
{
    TSource x;
    while(buffer.TryGetValue(nextKey, out x))
    {
        buffer.Remove(nextKey);
        nextKey++;
        observer.OnNext(x);
    }
}
Dealing with errors
Next we supply an OnError handler - this will just pass through any error, including the Timeout exception if you chose to go that way.
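In the Subscribe call this is simply the second argument (as seen in the full implementation below):
o.OnError,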
Flushing the buffer
Finally, we must handle OnCompleted. Here I have opted to empty the buffer - this would be necessary if an out of order message held up messages and never arrived. This is why we need a timeout:
() => {
    // empty buffer on completion
    foreach(var item in buffer)
        o.OnNext(item.Value);
    o.OnCompleted();
});
Full Implementation
Here is the full implementation.
public static IObservable<TSource> Sort<TSource>(
    this IObservable<TSource> source,
    Func<TSource, int> keySelector,
    int gapTolerance = 0,
    TimeSpan timeoutDuration = new TimeSpan(),
    IScheduler scheduler = null)
{
    scheduler = scheduler ?? Scheduler.Default;
    if(timeoutDuration != TimeSpan.Zero)
        source = source.Timeout(
            timeoutDuration,
            Observable.Empty<TSource>(),
            scheduler);
    return Observable.Create<TSource>(o => {
        int nextKey = 0;
        var buffer = new SortedDictionary<int, TSource>();
        return source.Subscribe(x => {
            var key = keySelector(x);
            // drop stale keys
            if(key < nextKey) return;
            if(key == nextKey)
            {
                nextKey++;
                o.OnNext(x);
            }
            else if(key > nextKey)
            {
                buffer.Add(key, x);
                if(gapTolerance != 0 && buffer.Count > gapTolerance)
                    nextKey = buffer.First().Key;
            }
            SendNextConsecutiveKeys(ref nextKey, o, buffer);
        },
        o.OnError,
        () => {
            // empty buffer on completion
            foreach(var item in buffer)
                o.OnNext(item.Value);
            o.OnCompleted();
        });
    });
}

private static void SendNextConsecutiveKeys<TSource>(
    ref int nextKey,
    IObserver<TSource> observer,
    SortedDictionary<int, TSource> buffer)
{
    TSource x;
    while(buffer.TryGetValue(nextKey, out x))
    {
        buffer.Remove(nextKey);
        nextKey++;
        observer.OnNext(x);
    }
}
Test Harness
If you include the rx-testing NuGet package in a console app, the following will give you a test harness to play with:
public static void Main()
{
    var tests = new Tests();
    tests.Test();
}

public class Tests : ReactiveTest
{
    public void Test()
    {
        var scheduler = new TestScheduler();
        var xs = scheduler.CreateColdObservable(
            OnNext(100, 0),
            OnNext(200, 2),
            OnNext(300, 1),
            OnNext(400, 4),
            OnNext(500, 5),
            OnNext(600, 3),
            OnNext(700, 7),
            OnNext(800, 8),
            OnNext(900, 9),
            OnNext(1000, 6),
            OnNext(1100, 12),
            OnCompleted(1200, 0));
        //var results = scheduler.CreateObserver<int>();
        xs.Sort(
            keySelector: x => x,
            gapTolerance: 2,
            timeoutDuration: TimeSpan.FromTicks(200),
            scheduler: scheduler).Subscribe(Console.WriteLine);
        scheduler.Start();
    }
}
Closing comments
There's all sorts of interesting alternative approaches here. I went for this largely imperative approach because I think it's easiest to follow - but there's probably some fancy grouping shenanigans you can employ to do this too. One thing I know to be consistently true about Rx - there are always many ways to skin a cat!
I'm also not entirely comfortable with the timeout idea here - in a production system, I would want to implement some means of checking connectivity, such as a heartbeat or similar. I didn't get into this because obviously it will be application specific. Also, heartbeats have been discussed on these boards and elsewhere before (such as on my blog for example).
Strongly consider using TCP instead if you want reliable ordering - that's what it's for; otherwise, you'll be forced to play a guessing game with UDP and sometimes you'll be wrong.
For example, imagine that you receive the following datagrams in this order: [A, B, D]
When you receive D, how long should you wait for C to arrive before pushing D?
Whatever duration you choose you may be wrong:
What if C was lost during transmission and so it will never arrive?
What if the duration you chose is too short and you end up pushing D but then receive C?
Perhaps you could choose a duration that heuristically works best, but why not just use TCP instead?
Side Note:
MessageReceived.OnNext implies that you're using a Subject<T>, which is probably unnecessary. Consider converting the async UdpClient methods into observables directly instead, or convert them by writing an async iterator via Observable.Create<T>(async (observer, cancel) => { ... }).
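For example, here is a minimal sketch of the second suggestion (the port number is made up, and error/retry handling is omitted): exposing UdpClient.ReceiveAsync as an observable via Observable.Create instead of pumping a Subject<T> from multiple threads.
using System;
using System.Net.Sockets;
using System.Reactive.Linq;

class UdpSource
{
    public static IObservable<UdpReceiveResult> ReceivedDatagrams()
    {
        return Observable.Create<UdpReceiveResult>(async (observer, cancel) =>
        {
            var client = new UdpClient(5000); // hypothetical local port
            using (cancel.Register(() => client.Close()))
            {
                while (!cancel.IsCancellationRequested)
                {
                    // each awaited receive is pushed to subscribers in arrival order
                    observer.OnNext(await client.ReceiveAsync());
                }
            }
        });
    }
}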

Is there an easy way to subscribe to the default error queue in EasyNetQ?

In my test application I can see messages that were processed with an exception being automatically inserted into the default EasyNetQ_Default_Error_Queue, which is great. I can then successfully dump or requeue these messages using the Hosepipe, which also works fine, but requires dropping down to the command line and calling against both Hosepipe and the RabbitMQ API to purge the queue of retried messages.
So I'm thinking the easiest approach for my application is to simply subscribe to the error queue, so I can re-process those messages using the same infrastructure. But in EasyNetQ, the error queue seems to be special. We need to subscribe using a proper type and routing ID, so I'm not sure what these values should be for the error queue:
bus.Subscribe<WhatShouldThisBe>("and-this", ReprocessErrorMessage);
Can I use the simple API to subscribe to the error queue, or do I need to dig into the advanced API?
If the type of my original message was TestMessage, then I'd like to be able to do something like this:
bus.Subscribe<ErrorMessage<TestMessage>>("???", ReprocessErrorMessage);
where ErrorMessage is a class provided by EasyNetQ to wrap all errors. Is this possible?
You can't use the simple API to subscribe to the error queue because it doesn't follow EasyNetQ queue type naming conventions - maybe that's something that should be fixed ;)
But the Advanced API works fine. You won't get the original message back, but it's easy to get the JSON representation which you could de-serialize yourself quite easily (using Newtonsoft.JSON). Here's an example of what your subscription code should look like:
[Test]
[Explicit("Requires a RabbitMQ server on localhost")]
public void Should_be_able_to_subscribe_to_error_messages()
{
    var errorQueueName = new Conventions().ErrorQueueNamingConvention();
    var queue = Queue.DeclareDurable(errorQueueName);
    var autoResetEvent = new AutoResetEvent(false);
    bus.Advanced.Subscribe<SystemMessages.Error>(queue, (message, info) =>
    {
        var error = message.Body;
        Console.Out.WriteLine("error.DateTime = {0}", error.DateTime);
        Console.Out.WriteLine("error.Exception = {0}", error.Exception);
        Console.Out.WriteLine("error.Message = {0}", error.Message);
        Console.Out.WriteLine("error.RoutingKey = {0}", error.RoutingKey);
        autoResetEvent.Set();
        return Task.Factory.StartNew(() => { });
    });
    autoResetEvent.WaitOne(1000);
}
I had to fix a small bug in the error message writing code in EasyNetQ before this worked, so please get a version >= 0.9.2.73 before trying it out. You can see the code example here
Code that works (I took a guess):
The screwiness with the 'foo' is because if I just pass the method HandleErrorMessage2 into the Consume call, the compiler can't figure out whether it returns void or a Task, so it can't figure out which overload to use (VS 2012). Assigning it to a variable first makes it happy.
You will want to capture the return value of the Consume call so you can unsubscribe later by disposing the returned object.
Also note that someone used a name that clashes with a system type (Queue) instead of calling it EasyNetQueue or something, so you have to add a using alias for the compiler, or fully qualify it.
using Queue = EasyNetQ.Topology.Queue;

private const string QueueName = "EasyNetQ_Default_Error_Queue";

public static void Should_be_able_to_subscribe_to_error_messages(IBus bus)
{
    Action<IMessage<Error>, MessageReceivedInfo> foo = HandleErrorMessage2;
    IQueue queue = new Queue(QueueName, false);
    bus.Advanced.Consume<Error>(queue, foo);
}

private static void HandleErrorMessage2(IMessage<Error> msg, MessageReceivedInfo info)
{
}

Apache MINA networking - How to get data from org.apache.mina.core.service.IoHandlerAdapter messageReceived(IoSession, Object)

public void messageReceived(IoSession session, Object message) throws Exception
{
    // do something
}
Can anyone tell me how to get data from the Object?
It's really quite simple: just cast the message to an IoBuffer and pull out the bytes.
// cast message to io buffer
IoBuffer data = (IoBuffer) message;
// create a byte array to hold the bytes
byte[] buf = new byte[data.limit()];
// pull the bytes out
data.get(buf);
// look at the message as a string
System.out.println("Message: " + new String(buf));
Cast message to the object type you used in the client's session.write.

Why is WCF reading input stream to EOF on Close()?

We're using WCF to build a simple web service which our product uses to upload large files over a WAN link. It's supposed to be a simple HTTP PUT, and it's working fine for the most part.
Here's a simplified version of the service contract:
[ServiceContract, XmlSerializerFormat]
public interface IReplicationWebService
{
    [OperationContract]
    [WebInvoke(Method = "PUT", UriTemplate = "agents/{sourceName}/epoch/{guid}/{number}/{type}")]
    ReplayResult PutEpochFile(string sourceName, string guid, string number, string type, Stream stream);
}
In the implementation of this contract, we read data from stream and write it out to a file. This works great, so we added some error handling for cases when there's not enough disk space to store the file. Here's roughly what it looks like:
public ReplayResult PutEpochFile(string sourceName, string guid, string number, string type, Stream inStream)
{
    //Stuff snipped
    try
    {
        //Read from the stream and write to the file
    }
    catch (IOException ioe)
    {
        //IOException may mean no disk space
        try
        {
            inStream.Close();
        }
        // if instream caused the IOException, close may throw
        catch
        {
        }
        _logger.Debug(ioe.ToString());
        throw new FaultException<IOException>(ioe, new FaultReason(ioe.Message), new FaultCode("IO"));
    }
}
To test this, I'm sending a 100GB file to a server that doesn't have enough space for the file. As expected this throws an exception, but the call to inStream.Close() appeared to hang. I checked into it, and what's actually happening is that the call to Close() made its way through the WCF plumbing until it reached System.ServiceModel.Channels.DrainOnCloseStream.Close(), which according to Reflector allocates a Byte[] buffer and keeps reading from the stream until it's at EOF.
In other words, the Close call is reading the entire 100GB of test data from the stream before returning!
Now it may be that I don't need to call Close() on this stream. If that's the case I'd like an explanation as to why. But more importantly, I'd appreciate it if anyone could explain to me why Close() is behaving this way, why it's not considered a bug, and how to reconfigure WCF so that doesn't happen.
.Close() is intended to be a "safe" and "friendly" way of stopping your operation - and it will indeed complete the currently running requests before shutting down - by design.
If you want to throw down the sledgehammer, use .Abort() on your client proxy (or service host) instead. That just shuts down everything without checking and without being nice about waiting for operations to complete.
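As an aside, the usual defensive shutdown pattern the answer alludes to looks something like this (a sketch for any WCF proxy or channel, not specific to the service above):
using System;
using System.ServiceModel;

static class WcfChannelExtensions
{
    // Close the channel politely; if that fails (faulted channel, timeout),
    // fall back to Abort so we never hang on a misbehaving connection.
    public static void CloseOrAbort(this ICommunicationObject channel)
    {
        try
        {
            channel.Close();
        }
        catch (CommunicationException)
        {
            channel.Abort();
        }
        catch (TimeoutException)
        {
            channel.Abort();
        }
    }
}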