CWInterface returning no data - objective-c

I'm trying to collect some information about the current state of the CWInterface (connected BSSID, available access points, ...) and send it periodically (every 5-10 seconds) via UDP to a server.
My problem is that after some time (between 30 and 50 minutes in tests with different collection/sending intervals) the CWInterface stops returning data.
[CWInterface interface] returns nil
[CWInterface interfaceNames] returns an NSSet with 0 entries
[[CWInterface interface] scanForNetworksWithSSID:nil error:&error] also returns an NSSet with 0 entries
What am I doing wrong?
I'm totally out of ideas...

OK, as I already mentioned in a comment on my own question, I switched from CoreWLAN to the private Apple80211.framework.
This seems to work.
My application has now been running for about an hour and a quarter, scanning every few seconds.
Two negative points about using Apple80211 are:
There is no public documentation about how to use it (I used the documentation from http://code.google.com/p/iphone-wireless/, which also works for Mac OS X).
The scans now take about 5 seconds, which is pretty long, but hey, it works...

Related

Google Mock, expect call between 2 given numbers

I'm using Google Mock as a framework for testing a real-time system. I want to verify that the configuration of the sensors is being applied correctly. To do that I send a new frequency to the sensor and expect my sensor data callback to be called x number of times.
Because of this I have to sleep the testing thread for a few seconds, so that my callback can be called a few times.
As a result the expected number of calls is not a fixed number: for example, if I expect my callback to be called once per second and I sleep the thread for 5 seconds, it can actually be called between 4 and 6 times; anything stricter would make the test too restrictive.
This is the problem: I haven't found a way to expect a call count between 4 and 6. I tried the following:
EXPECT_CALL(*handler,Data_Mock(_,_)).Times(::testing::AnyNumber());
EXPECT_CALL(*handler,Data_Mock(_,_)).Times(::testing::AtMost(6));
EXPECT_CALL(*handler,Data_Mock(_,_)).Times(::testing::AtLeast(4));
And
EXPECT_CALL(*handler,Data_Mock(_,_)).Times(::testing::AnyNumber());
EXPECT_CALL(*handler,Data_Mock(_,_)).Times(::testing::AtLeast(4));
EXPECT_CALL(*handler,Data_Mock(_,_)).Times(::testing::AtMost(6));
Try Between from https://github.com/google/googletest/blob/master/googlemock/docs/cheat_sheet.md#cardinalities-cardinalitylist. It exists exactly for the purpose of asserting that a given call happens between m and n times.
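For reference, a minimal sketch of that cardinality with the mock from the question (the surrounding test body and fixture are assumed):
using ::testing::_;
using ::testing::Between;

// Expect the data callback between 4 and 6 times while the test thread sleeps.
EXPECT_CALL(*handler, Data_Mock(_, _)).Times(Between(4, 6));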

SerialPort.ReadByte all of a sudden returns zero instead of the version number of the attached cable

We have been using a USB-to-serial-port converter in our application for a long time. To check whether the correct cable is attached, we start by sending a command that requests the converter to return its version. We then read the returned data using
var version=SerialPort.ReadByte();
which is expected to return &11 for version 1.1.
All of a sudden, on Windows 10 1803 and later versions, this check started to fail because ReadByte() first returns a zero and then, when called a second time, &11.
This change in behaviour must be caused by a Windows update, as we have not changed this part of the code in years.
Can anyone shed some light on what might be going on? Is this a Windows 10 fluke that will be reversed soon, or is our implementation inherently wrong?
EDIT
I replaced ReadByte() with ReadExisting() and that came back with (in VB.Net) vbNullChar & vbNullChar & ChrW(17).
So it seems that the first call to ReadByte returns the two null chars and the second call returns the expected value of &11.
EDIT
There is another very likely cause as well: the two converters we used for testing are from the same batch, and they seemingly respond with two null chars the first time they are read.
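A defensive sketch in C# of what the check could look like given that behaviour (serialPort stands for the already-opened SerialPort instance from the question; the names are illustrative):
// Request the version as before, then skip any leading zero bytes
// before comparing against the expected value.
int b;
do
{
    b = serialPort.ReadByte();   // blocks until a byte arrives or ReadTimeout expires
} while (b == 0);

if (b == 0x11)                   // &11, i.e. version 1.1
{
    // correct cable attached
}
else
{
    // unexpected reply: wrong cable or protocol change
}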

Why are the messages sent over WebRTC received in a different order sometimes?

I set ordered to true; however, when many messages (1000 or more) are sent in a short period of time (< 1 second), they are not all received in the same order.
rtcPeerConnection.createDataChannel("app", {
    ordered: true,
    maxPacketLifeTime: 3000
});
I could provide a minimal example to reproduce this strange behavior if necessary.
I also use bufferedAmountLowThreshold and the associated event to delay sending when the buffered amount gets too big. I chose 2000, but I don't know what the optimal number is. The reason I have so many messages in a short period of time is that I don't want to exceed the maximum amount of data that can be sent at once, so I split the data into 800-byte chunks and send those. Again, I don't know what the maximum size of a single message can be.
const SEND_BUFFERED_AMOUNT_LOW_THRESHOLD = 2000; //Bytes
rtcSendDataChannel.bufferedAmountLowThreshold = SEND_BUFFERED_AMOUNT_LOW_THRESHOLD;
const MAX_MESSAGE_SIZE = 800;
Everything works fine for small data that is not split into too many messages. The error occurs randomly for big files only.
As of 2016-11-01, there is a bug that lets the dataChannel.bufferedAmount value change during the execution of an event loop task. Relying on this value can thus cause unexpected results. It is possible to cache dataChannel.bufferedAmount manually and use that cached value to prevent this issue.
See https://bugs.chromium.org/p/webrtc/issues/detail?id=6628
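A rough sketch of that workaround in the same setup as the question (resumeSending is a hypothetical helper that pulls the next 800-byte chunks from your own queue): keep a locally cached count of queued bytes instead of re-reading dataChannel.bufferedAmount in the middle of a task, and re-synchronise it in the bufferedamountlow handler.
let queuedBytes = 0;  // locally cached copy of the buffered amount

function sendChunk(chunk) {
    queuedBytes += chunk.byteLength;
    rtcSendDataChannel.send(chunk);
}

function shouldPauseSending() {
    // Throttle on the cached value, not on the live bufferedAmount.
    return queuedBytes > SEND_BUFFERED_AMOUNT_LOW_THRESHOLD;
}

rtcSendDataChannel.bufferedAmountLowThreshold = SEND_BUFFERED_AMOUNT_LOW_THRESHOLD;
rtcSendDataChannel.addEventListener("bufferedamountlow", () => {
    // Re-sync the cache once the buffer has drained, then continue sending.
    queuedBytes = rtcSendDataChannel.bufferedAmount;
    resumeSending();
});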

What is the unit for SoftLayer_Virtual_Guest:getBandwidthDataByDate

Do you know what the unit is for SoftLayer_Virtual_Guest:getBandwidthDataByDate?
Bit, byte, or octet?
I found a mismatch between the value returned by the API and the one shown in the portal.
Thanks.
The method that you are using returns an "average" bandwidth usage, but the portal uses another method which returns a "sum" value. So the values will not be the same, but they will be close.
Another thing to point out is that the API does not return bytes per second; it returns the bytes used by the interface in a period of time. From your API result, that period of time is 5 minutes.
So let's convert the data with that information:
646793.0 bytes in 5 minutes
Converting to bytes per second (5 minutes = 300 seconds):
646793.0 / 300 = 2155.976 bytes/second
Converting to bits:
2155.976 * 8 = 17247.808 bits/second
Converting to kilobits (note we are not using 1024):
17247.808 / 1000 = 17.248 kbps
As I said, the value is close but not the same due to the method used. If you are looking for the exact value, you have to use the getSummaryData method; here is an example in Java: Getting bandWidth data in SL
Regards
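For illustration only, the same conversion in a few lines of Java (the counter value and the 5-minute sampling interval are the ones discussed above):
public class BandwidthConversion {
    public static void main(String[] args) {
        double counterBytes = 646793.0;            // "counter" value returned by getBandwidthDataByDate
        double intervalSeconds = 5 * 60;           // 5-minute sampling interval
        double bytesPerSecond = counterBytes / intervalSeconds;
        double kbps = bytesPerSecond * 8 / 1000;   // to bits, then to kilobits (decimal, not 1024)
        System.out.printf("%.3f kbps%n", kbps);    // prints roughly 17.248 kbps
    }
}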
If I'm not wrong, it is in bytes per second.
Here I added an example for April's question.
Portal Bandwidth Graph
I got the data via SoftLayer_Virtual_Guest:getBandwidthDataByDate.
getBandwidthDataByDate's output
It showed 'counter': 646793.0. If the "unit" were bytes per second, then 646793.0 Bps * 8 / 1024 != 16.62 Kbps.

How can I (reasonably) precisely perform an action every N milliseconds?

I have a machine which uses an NTP client to sync to internet time, so its system clock should be fairly accurate.
I'm developing an application which logs data in real time, processes it, and then passes it on. What I'd like to do now is output that data every N milliseconds, aligned with the system clock. So, for example, if I wanted 20 ms intervals, my outputs ought to be something like this:
13:15:05:000
13:15:05:020
13:15:05:040
13:15:05:060
I've seen suggestions for using the Stopwatch class, but that only measures time spans as opposed to looking for specific timestamps. The code to do this runs in its own thread, so it shouldn't be a problem if I need to make some relatively blocking calls.
Any suggestions on how to achieve this with reasonable precision (close to or better than 1 ms would be nice) would be very gratefully received.
I don't know how well it plays with C++/CLR, but you probably want to look at multimedia timers.
Windows isn't really real-time, but this is as close as it gets.
You can get a pretty accurate timestamp out of timeGetTime() when you reduce the timer period. You'll just need some work to convert its return value to a clock time. This sample C# code shows the approach:
using System;
using System.Runtime.InteropServices;

class Program {
    static void Main(string[] args) {
        timeBeginPeriod(1);
        uint tick0 = timeGetTime();
        var startDate = DateTime.Now;
        uint tick1 = tick0;
        for (int ix = 0; ix < 20; ++ix) {
            uint tick2 = 0;
            do { // Burn 20 msec
                tick2 = timeGetTime();
            } while (tick2 - tick1 < 20);
            var currDate = startDate.Add(new TimeSpan((tick2 - tick0) * 10000));
            Console.WriteLine(currDate.ToString("HH:mm:ss:ffff"));
            tick1 = tick2;
        }
        timeEndPeriod(1);
        Console.ReadLine();
    }

    [DllImport("winmm.dll")]
    private static extern int timeBeginPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern int timeEndPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern uint timeGetTime();
}
On second thought, this is just measurement. To get an action performed periodically, you'll have to use timeSetEvent(). As long as you use timeBeginPeriod(), you can get the callback period pretty close to 1 msec. One nicety is that it will automatically compensate when the previous callback was late for any reason.
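A rough sketch of that periodic variant in the same P/Invoke style as the sample above (the 20 ms interval is just an example and error handling is omitted):
using System;
using System.Runtime.InteropServices;

class PeriodicTimer {
    // winmm.dll multimedia timer API
    private delegate void TimeProc(uint id, uint msg, UIntPtr user, UIntPtr dw1, UIntPtr dw2);

    [DllImport("winmm.dll")]
    private static extern int timeBeginPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern int timeEndPeriod(int period);
    [DllImport("winmm.dll")]
    private static extern uint timeSetEvent(uint delay, uint resolution, TimeProc proc, UIntPtr user, uint flags);
    [DllImport("winmm.dll")]
    private static extern uint timeKillEvent(uint timerId);

    private const uint TIME_PERIODIC = 1;

    // Keep a reference so the delegate is not garbage collected while the timer runs.
    private static readonly TimeProc Tick =
        (id, msg, user, dw1, dw2) => Console.WriteLine(DateTime.Now.ToString("HH:mm:ss:fff"));

    static void Main() {
        timeBeginPeriod(1);                                                      // 1 ms timer resolution
        uint timerId = timeSetEvent(20, 1, Tick, UIntPtr.Zero, TIME_PERIODIC);   // fire every 20 ms
        Console.ReadLine();                                                      // run until Enter is pressed
        timeKillEvent(timerId);
        timeEndPeriod(1);
    }
}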
Your best bet is using inline assembly and writing this chunk of code as a device driver.
That way:
You have control over instruction count
Your application will have execution priority
Ultimately you can't guarantee what you want because the operating system has to honour requests from other processes to run, meaning that something else can always be busy at exactly the moment that you want your process to be running. But you can improve matters by using timeBeginPeriod to make it more likely that your process can be switched to in a timely manner, and perhaps by being cunning with how you wait between iterations - e.g. sleeping for most but not all of the time and then using a busy-loop for the remainder.
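For illustration, a hybrid wait along those lines could look like this sketch (it assumes the same usings as the sample above; the 20 ms interval and the 2 ms spin margin are arbitrary choices, and DateTime.UtcNow is used just to keep it short):
// Wait until the next aligned boundary: sleep for most of the interval,
// then busy-wait the last couple of milliseconds for better precision.
static void WaitForNextBoundary(int intervalMs, int spinMarginMs) {
    long interval = intervalMs * TimeSpan.TicksPerMillisecond;   // 1 tick = 100 ns
    long now = DateTime.UtcNow.Ticks;
    long next = (now / interval + 1) * interval;                 // next boundary aligned to the interval

    long sleepTicks = next - now - spinMarginMs * TimeSpan.TicksPerMillisecond;
    if (sleepTicks > 0)
        System.Threading.Thread.Sleep(TimeSpan.FromTicks(sleepTicks));

    while (DateTime.UtcNow.Ticks < next) { /* burn the remainder */ }
}

// Example: call WaitForNextBoundary(20, 2), perform the output, and repeat.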
Try doing this in two threads. In one thread, use something like this to query a high-precision timer in a loop. When you detect a timestamp that aligns to (or is reasonably close to) a 20ms boundary, send a signal to your log output thread along with the timestamp to use. Your log output thread would simply wait for a signal, then grab the passed-in timestamp and output whatever is needed. Keeping the two in separate threads will make sure that your log output thread doesn't interfere with the timer (this is essentially emulating a hardware timer interrupt, which would be the way I would do it on an embedded platform).
CreateWaitableTimer/SetWaitableTimer and a high-priority thread should be accurate to about 1 ms. I don't know why the millisecond field in your example output has four digits; the max value is 999 (since 1000 ms = 1 second).
Since, as you said, this doesn't have to be perfect, there are some things that can be done.
As far as I know, there is no timer that syncs with a specific time, so you will have to compute your next time and schedule the timer for that specific time. If your timer only has delta support, then the delta is easily computed, but this adds more error, since you could easily be kicked off the CPU between the time you compute your delta and the time the timer is entered into the kernel.
As already pointed out, Windows is not a real-time OS. So you must assume that even if you schedule a timer to go off at ":0010", your code might not actually execute until well after that time (for example, ":0540"). As long as you properly handle those issues, things will be "ok".
20 ms is approximately the length of a time slice on Windows. There is no way to reliably hit 1 ms timings in Windows without some sort of RT add-on like Intime. In Windows proper, I think your options are WaitForSingleObject, SleepEx, and a busy loop.