With a BLE GattLocalService from Windows.Devices.Bluetooth, acting as the Peripheral with a characteristic that has only the WriteWithoutResponse property and a Plain protection level, we are seeing the GattLocalCharacteristic WriteRequested event raised out of order compared to the order the bytes were sent. We must use only the WriteWithoutResponse property on this characteristic and cannot use Notify, which seems to work fine. The trouble is that we are sending around 10K over BLE with an MTU of 20 through this characteristic and need to reassemble the pieces in order once all bytes are received, and with the various asynchronous callbacks we can't determine the order in which the bytes were originally sent. The same setup works perfectly fine on Android and iOS, but on the UWP implementation we are notified out of order. Understandably, UWP uses several async functions, but not being able to reassemble the bytes in the order they were sent is quite problematic.
Microsoft's GitHub examples only show receiving about 4 bytes on a single characteristic, where order wouldn't really matter. We have looked at the Offset property on the GattWriteRequest, which appears to always be 0, and we have monitored the calls to the GattLocalCharacteristic to confirm they are raised out of order.
We've attempted to adapt from Microsoft's example here: https://learn.microsoft.com/en-us/windows/uwp/devices-sensors/gatt-server#write
Our vb.NET adaptation of this prior example:
Private Async Sub InitializePerihperal()
    Dim serviceResult As GattServiceProviderResult = Await GattServiceProvider.CreateAsync(New Guid(peripheralServiceUUID))
    If serviceResult.Error <> BluetoothError.Success Then
        peripheralResponseReceived = True
        peripheralResponseError = "GattServiceError"
        Exit Sub
    End If
    serviceProvider = serviceResult.ServiceProvider

    Dim characResult As GattLocalCharacteristicResult
    Dim propCentralToPeripheral As New GattLocalCharacteristicParameters
    propCentralToPeripheral.CharacteristicProperties = GattCharacteristicProperties.WriteWithoutResponse
    propCentralToPeripheral.WriteProtectionLevel = GattProtectionLevel.Plain
    characResult = Await serviceProvider.Service.CreateCharacteristicAsync(New Guid(CentralToPeripheralUUID), propCentralToPeripheral)
    characCentralToPeripheral = characResult.Characteristic
End Sub
Private Async Sub CentralResponseReceived(sender As GattLocalCharacteristic, args As GattWriteRequestedEventArgs) Handles characCentralToPeripheral.WriteRequested
    Dim sequence As Int32 = Threading.Interlocked.Increment(PerihperalWriteSequence)
    Using requestDeferral As Windows.Foundation.Deferral = args.GetDeferral
        Dim request As GattWriteRequest = Await args.GetRequestAsync
        Dim requestReader As DataReader = DataReader.FromBuffer(request.Value)
        Dim response(CType(requestReader.UnconsumedBufferLength - 1, Int32)) As Byte
        requestReader.ReadBytes(response)
        'peripheralBlocks is a global ConcurrentDictionary
        peripheralBlocks(sequence) = response
        requestDeferral.Complete()
    End Using
End Sub
We might be missing something, but it seems like either the Offset on the request should reflect the position of the data, or the event should be raised in the order the writes arrived. Above we have attempted to eliminate any effect of GetRequestAsync delays by capturing a call sequence immediately on entry to the handler. Maybe we are just missing something in the API, but we can't seem to find anything.
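For context, once every block has arrived, the reassembly we intend is essentially the following (a C# sketch only, not our production code; peripheralBlocks is the ConcurrentDictionary populated above). It can only recover the original payload if the sequence keys reflect the order the Central actually wrote the blocks, which is exactly what we are not seeing on UWP:
using System.Collections.Concurrent;
using System.Linq;

// Sketch: concatenate the received blocks in sequence order once the transfer completes.
static byte[] Reassemble(ConcurrentDictionary<int, byte[]> peripheralBlocks)
{
    return peripheralBlocks
        .OrderBy(pair => pair.Key)
        .SelectMany(pair => pair.Value)
        .ToArray();
}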
AFAIK, it is only possible to cast a managed array<Byte>^ to a non-managed struct using pin_ptr, like:
void Example(array<Byte>^ bfr) {
    pin_ptr<Byte> ptr = &bfr[0];
    auto data = reinterpret_cast<NonManagedStruct*>(ptr);
    data->Header = 7;
    data->Length = sizeof(*data);
    data->CRC = CalculateCRC(data);
}
However, is it possible in any way with interior_ptr?
I'd rather work on managed data the low-level way (using unions, struct bit-fields, and so on) without pinning the data - I could be holding this data for quite a long time and don't want to harass the GC.
Clarification:
I do not want to copy managed-data to native and back (so the Marshaling way is not an option here...)
You likely won't harass the GC with pin_ptr - it's pretty lightweight unlike GCHandle.
GCHandle::Alloc(someObject, GCHandleType::Pinned) will actually register the object as being pinned in the GC. This lets you pin an object for extended periods of time and across function calls, but the GC has to track that object.
On the other hand, pin_ptr gets translated to a pinned local in IL code. The GC isn't notified about it; it only gets to see that the object is pinned during a collection. That is, it will notice its pinned status when looking for object references on the stack.
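To illustrate the difference, here is a minimal C# sketch (the same GCHandle API is exposed there, and the fixed statement plays the role of pin_ptr's scoped pinning; the names are made up for the example, and the fixed part needs /unsafe):
using System;
using System.Runtime.InteropServices;

static class PinningSketch
{
    // Long-lived pinning: the GC is told about the pin and must track it until Free() is called.
    static IntPtr PinForAWhile(byte[] bfr, out GCHandle handle)
    {
        handle = GCHandle.Alloc(bfr, GCHandleType.Pinned);
        return handle.AddrOfPinnedObject();
    }

    // Scoped pinning: like pin_ptr, the object is only pinned for the duration of the block,
    // and the GC only notices the pin if a collection runs while we are inside it.
    static unsafe void TouchFirstByte(byte[] bfr)
    {
        fixed (byte* p = bfr)
        {
            p[0] = 7;
        }
    }
}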
If you really want to, you can access stack memory in the following way:
[StructLayout(LayoutKind::Explicit, Size = 256)]
public value struct ManagedStruct
{
};
struct NativeStruct
{
char data[256];
};
static void DoSomething()
{
    ManagedStruct managed;
    auto nativePtr = reinterpret_cast<NativeStruct*>(&managed);
    nativePtr->data[42] = 42;
}
There's no pinning at all here, but this is only due to the fact that the managed struct is stored on the stack, and therefore is not relocatable in the first place.
It's a convoluted example, because you could just write:
static void DoSomething()
{
    NativeStruct native;
    native.data[42] = 42;
}
...and the compiler would perform a similar trick under the covers for you.
I am receiving messages over UDP in multiple threads. After each reception I raise MessageReceived.OnNext(message).
Because I am using multiple threads, the messages are raised out of order, which is a problem.
How can I order the raising of the messages by the message counter?
(let's say there is a message.counter property)
Keep in mind that a message can get lost in transit (let's say that if a counter hole is not filled after X messages, I raise the next message anyway)
Messages must be raised ASAP (as soon as the next counter is received)
In stating the requirement for detecting lost messages, you haven't considered the possibility of the last message never arriving. I've added a timeoutDuration which flushes the buffered messages if nothing arrives in the given time; you may want to treat this as an error instead - see the comments below for how to do that.
I will solve this by defining an extension method with the following signature:
public static IObservable<TSource> Sort<TSource>(
    this IObservable<TSource> source,
    Func<TSource, int> keySelector,
    int gapTolerance = 0,
    TimeSpan timeoutDuration = new TimeSpan(),
    IScheduler scheduler = null)
source is the stream of unsorted messages
keySelector is a function that extracts an int key from a message. I assume the first key sought is 0; amend if necessary.
timeoutDuration is discussed above, if omitted, there is no timeout
gapTolerance is the maximum number of messages held back while waiting for an out-of-order message. Pass 0 to hold back any number of messages.
scheduler is the scheduler to use for the timeout; it is supplied for test purposes, and a default is used if not given.
Walkthrough
I'll present a line-by-line walkthrough here. The full implementation is repeated below.
Assign Default Scheduler
First of all we must assign a default scheduler if none was supplied:
scheduler = scheduler ?? Scheduler.Default;
Arrange Timeout
Now if a time out was requested, we will replace the source with a copy that will simply terminate and send OnCompleted if a message doesn't arrive in timeoutDuration.
if(timeoutDuration != TimeSpan.Zero)
    source = source.Timeout(
        timeoutDuration,
        Observable.Empty<TSource>(),
        scheduler);
If you wish to send a TimeoutException instead, just delete the second argument to Timeout (the empty stream) to select an overload that does this. Note we can safely share this with all subscribers, so it is positioned outside the call to Observable.Create.
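For reference, that variant would look something like this; the Timeout overload without a fallback sequence signals a TimeoutException through OnError:
if(timeoutDuration != TimeSpan.Zero)
    source = source.Timeout(
        timeoutDuration,
        scheduler);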
Create Subscribe handler
We use Observable.Create to build our stream. The lambda function that is the argument to Create is invoked whenever a subscription occurs and we are passed the calling observer (o). Create returns our IObservable<T> so we return it here.
return Observable.Create<TSource>(o => { ...
Initialize some variables
We will track the next expected key value in nextKey, and create a SortedDictionary to hold the out of order messages until they can be sent.
int nextKey = 0;
var buffer = new SortedDictionary<int, TSource>();
Subscribe to the source, and handle messages
Now we can subscribe to the message stream (possibly with the timeout applied). First we introduce the OnNext handler. The next message is assigned to x:
return source.Subscribe(x => { ...
We invoke the keySelector function to extract the key from the message:
var key = keySelector(x);
If the message has an old key (because it exceeded our tolerance for out of order messages) we are just going to drop it and be done with this message (you may want to act differently):
// drop stale keys
if(key < nextKey) return;
Otherwise, we might have the expected key, in which case we can increment nextKey and send the message:
if(key == nextKey)
{
    nextKey++;
    o.OnNext(x);
}
Or, we might have an out of order future message, in which case we must add it to our buffer. If we do this, we must also ensure our buffer hasn't exceeded our tolerance for storing out of order messages - in this case, we also bump nextKey to the first key in the buffer, which, because it is a SortedDictionary, is conveniently the next lowest key:
else if(key > nextKey)
{
    buffer.Add(key, x);
    if(gapTolerance != 0 && buffer.Count > gapTolerance)
        nextKey = buffer.First().Key;
}
Now regardless of the outcome above, we need to empty the buffer of any keys that are now ready to go. We use a helper method for this. Note that it adjusts nextKey so we must be careful to pass it by reference. We simply loop over the buffer reading, removing and sending messages as long as the keys follow on from each other, incrementing nextKey each time:
private static void SendNextConsecutiveKeys<TSource>(
    ref int nextKey,
    IObserver<TSource> observer,
    SortedDictionary<int, TSource> buffer)
{
    TSource x;
    while(buffer.TryGetValue(nextKey, out x))
    {
        buffer.Remove(nextKey);
        nextKey++;
        observer.OnNext(x);
    }
}
Dealing with errors
Next we supply an OnError handler. This will just pass through any error, including the TimeoutException if you chose to go that way.
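In the Subscribe call this is simply the observer's own OnError passed straight through (you can see it in context in the full implementation below):
// errors (including the timeout, if configured to throw) go straight to the subscriber
o.OnError,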
Flushing the buffer
Finally, we must handle OnCompleted. Here I have opted to empty the buffer - this would be necessary if an out of order message held up messages and never arrived. This is why we need a timeout:
() => {
    // empty buffer on completion
    foreach(var item in buffer)
        o.OnNext(item.Value);
    o.OnCompleted();
});
Full Implementation
Here is the full implementation.
public static IObservable<TSource> Sort<TSource>(
    this IObservable<TSource> source,
    Func<TSource, int> keySelector,
    int gapTolerance = 0,
    TimeSpan timeoutDuration = new TimeSpan(),
    IScheduler scheduler = null)
{
    scheduler = scheduler ?? Scheduler.Default;
    if(timeoutDuration != TimeSpan.Zero)
        source = source.Timeout(
            timeoutDuration,
            Observable.Empty<TSource>(),
            scheduler);
    return Observable.Create<TSource>(o => {
        int nextKey = 0;
        var buffer = new SortedDictionary<int, TSource>();
        return source.Subscribe(x => {
            var key = keySelector(x);
            // drop stale keys
            if(key < nextKey) return;
            if(key == nextKey)
            {
                nextKey++;
                o.OnNext(x);
            }
            else if(key > nextKey)
            {
                buffer.Add(key, x);
                if(gapTolerance != 0 && buffer.Count > gapTolerance)
                    nextKey = buffer.First().Key;
            }
            SendNextConsecutiveKeys(ref nextKey, o, buffer);
        },
        o.OnError,
        () => {
            // empty buffer on completion
            foreach(var item in buffer)
                o.OnNext(item.Value);
            o.OnCompleted();
        });
    });
}

private static void SendNextConsecutiveKeys<TSource>(
    ref int nextKey,
    IObserver<TSource> observer,
    SortedDictionary<int, TSource> buffer)
{
    TSource x;
    while(buffer.TryGetValue(nextKey, out x))
    {
        buffer.Remove(nextKey);
        nextKey++;
        observer.OnNext(x);
    }
}
Test Harness
If you add the rx-testing NuGet package to a console app, the following will give you a test harness to play with:
public static void Main()
{
    var tests = new Tests();
    tests.Test();
}

public class Tests : ReactiveTest
{
    public void Test()
    {
        var scheduler = new TestScheduler();
        var xs = scheduler.CreateColdObservable(
            OnNext(100, 0),
            OnNext(200, 2),
            OnNext(300, 1),
            OnNext(400, 4),
            OnNext(500, 5),
            OnNext(600, 3),
            OnNext(700, 7),
            OnNext(800, 8),
            OnNext(900, 9),
            OnNext(1000, 6),
            OnNext(1100, 12),
            OnCompleted(1200, 0));
        //var results = scheduler.CreateObserver<int>();
        xs.Sort(
            keySelector: x => x,
            gapTolerance: 2,
            timeoutDuration: TimeSpan.FromTicks(200),
            scheduler: scheduler).Subscribe(Console.WriteLine);
        scheduler.Start();
    }
}
Closing comments
There's all sorts of interesting alternative approaches here. I went for this largely imperative approach because I think it's easiest to follow, but there's probably some fancy grouping shenanigans you could employ to do this too. One thing I know to be consistently true about Rx: there are always many ways to skin a cat!
I'm also not entirely comfortable with the timeout idea here; in a production system, I would want to implement some means of checking connectivity, such as a heartbeat or similar. I didn't get into this because obviously it will be application specific. Also, heartbeats have been discussed before on these boards and elsewhere (on my blog, for example).
Strongly consider using TCP instead if you want reliable ordering - that's what it's for; otherwise, you'll be forced to play a guessing game with UDP and sometimes you'll be wrong.
For example, imagine that you receive the following datagrams in this order: [A, B, D]
When you receive D, how long should you wait for C to arrive before pushing D?
Whatever duration you choose, you may be wrong:
What if C was lost during transmission and so it will never arrive?
What if the duration you chose is too short and you end up pushing D but then receive C?
Perhaps you could choose a duration that heuristically works best, but why not just use TCP instead?
Side Note:
MessageReceived.OnNext implies that you're using a Subject<T>, which is probably unnecessary. Consider converting the async UdpClient methods into observables directly instead, or converting them by writing an async subscribe delegate via Observable.Create<T>(async (observer, cancel) => { ... }).
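For example, here is a minimal sketch of that second approach (the class and method names are purely illustrative):
using System;
using System.Net.Sockets;
using System.Reactive.Linq;
using System.Threading.Tasks;

public static class UdpObservable
{
    // Each subscription runs its own receive loop and pushes datagrams straight to the
    // observer, so no intermediate Subject<T> is required.
    public static IObservable<UdpReceiveResult> ReceivedDatagrams(this UdpClient client)
    {
        return Observable.Create<UdpReceiveResult>(async (observer, cancel) =>
        {
            while (!cancel.IsCancellationRequested)
            {
                var datagram = await client.ReceiveAsync();
                observer.OnNext(datagram);
            }
        });
    }
}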
For measuring execution time of methods, I've seen suggestions to use
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class PerformanceInterceptor {

    @AroundInvoke
    Object measureTime(InvocationContext ctx) throws Exception {
        long beforeTime = System.currentTimeMillis();
        try {
            return ctx.proceed();
        } finally {
            long time = System.currentTimeMillis() - beforeTime;
            // Log time
        }
    }
}
Then put
@Interceptors(PerformanceInterceptor.class)
before whatever method you want measured.
Anyway I tried this and it seems to work fine.
I also added a
public static long countCalls = 0;
to the PerformanceInterceptor class and a
countCalls++;
to measureTime(), which also seems to work OK.
With my newbie hat on, I will ask whether my use of countCalls is OK, i.e. whether Glassfish/JEE6 is OK with me using static variables in a Java class that is used as an interceptor, in particular with regard to thread safety. I know that normally you are supposed to synchronize setting of class variables in Java, but I don't know what the case is with JEE6/Glassfish. Any thoughts?
There is no additional thread safety provided by the container in this case. Each bean instance has its own instance of the interceptor. As a consequence, multiple threads can access the static countCalls at the same time.
That's why you have to guard both reads and writes to it as usual. Another possibility is to use an AtomicLong:
private static final AtomicLong callCount = new AtomicLong();

private long getCallCount() {
    return callCount.get();
}

private void increaseCountCall() {
    callCount.getAndIncrement();
}
As expected, these solutions will only work as long as all of the instances are in the same JVM; for a cluster, shared storage is needed.
I want to add an event handler only if it is not already attached:
If GetHandlers(MyWindow.Closed, AddressOf MyWindow_Closed).Length = 0 Then
    AddHandler MyWindow.Closed, AddressOf MyWindow_Closed
End If
You can't really query the current value of an event's delegate, except in the code that defines the event. What is your intent here? Normally you shouldn't be too concerned with other subscribers. There are ways of hacking past the encapsulation to find the current value, but they are not recommended (it just isn't a good idea).
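For completeness, the type that declares the event can see its own backing delegate, so a check like this is trivial there (hypothetical MyWindow, C#):
using System;

public class MyWindow
{
    public event EventHandler Closed;

    // Only the declaring type can read the backing delegate field like this.
    public bool HasClosedSubscribers
    {
        get { return Closed != null; }
    }
}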
If your concern is whether you are already handling that event with that handler (i.e. you don't want to double-subscribe), then you can always either a: fix the code so it doesn't do this, or b: cheat (C# example; the VB equivalent is RemoveHandler followed by AddHandler):
// remove handler **if subscribed**, then re-subscribe
myWindow.Closed -= MyWindow_Closed;
myWindow.Closed += MyWindow_Closed;
Getting the invocation list is brittle but doable. In simple cases you can just use reflection to get the field and snag the value. But Forms etc. use a sparse technique (to minimise the space taken by events without subscribers), storing handlers in an EventHandlerList; in the case of FormClosed, the handler is keyed via EVENT_FORMCLOSED.
It might make more sense with an example (C#, sorry):
Form form = new Form();
form.FormClosed += delegate { Console.WriteLine("a"); }; // just something, anything
form.FormClosed += delegate { Console.WriteLine("b"); }; // just something, anything

object key = typeof(Form).GetField("EVENT_FORMCLOSED",
    BindingFlags.NonPublic | BindingFlags.Static).GetValue(null);
EventHandlerList events = (EventHandlerList)
    typeof(Component).GetProperty("Events",
        BindingFlags.NonPublic | BindingFlags.Instance).GetValue(form, null);
FormClosedEventHandler handler = (FormClosedEventHandler)events[key];
foreach (FormClosedEventHandler subhandler in handler.GetInvocationList())
{
    subhandler(form, null); // invoke the two handlers separately
}
In the case of an ObservableCollection<T>, the delegate is directly on a field, so less indirection is required:
ObservableCollection<SomeType> list = ...
NotifyCollectionChangedEventHandler handler = (NotifyCollectionChangedEventHandler)
    list.GetType()
        .GetField("CollectionChanged", BindingFlags.Instance | BindingFlags.NonPublic)
        .GetValue(list);
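As with the Form example, you can then walk the invocation list; just remember the field is null when nothing is subscribed:
if (handler != null)
{
    foreach (NotifyCollectionChangedEventHandler subhandler in handler.GetInvocationList())
    {
        Console.WriteLine(subhandler.Method.Name);
    }
}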