requestLocation() fails to provide course on watch hardware but works in the Sim - GPS

I've written a standalone Apple Watch app which uses GPS. Trying to be a good citizen and minimize power consumption, I am using the single-shot location request method. In the Xcode Sim, this works fine, providing a location AND course and speed (when using one of the recorded movements in the Sim). On real hardware, I get accurate lat and long data back, but course always returns -1. Note: I am not looking for heading data, as there is no magnetometer hardware in the watch. I want course over the ground derived from ∆X,∆Y between GPS fixes. I would have expected a single-shot requestLocation() not to return course even in the Sim, but since it does, why doesn't it work on the real thing? Has anyone else seen this? Is this a bug worth filing a Radar for, or a feature?
I suppose if this approach is known to fail, I could call startUpdatingLocation() until I see a course, then stop updating, but Apple says this is a power-hogging approach.

Continuing with experiments, it appears that the course issue depends on the properties of the CLLocationManager instance: when
locationMgr.desiredAccuracy = kCLLocationAccuracyKilometer
locationMgr.distanceFilter = kCLDistanceFilterNone
locationMgr.activityType = .otherNavigation
no course is reported by the hardware (the Sim does report one), but when
locationMgr.desiredAccuracy = kCLLocationAccuracyBest
locationMgr.distanceFilter = kCLDistanceFilterNone
locationMgr.activityType = .otherNavigation
then the watch does report course. It would seem that distanceFilter should be the property relevant to ∆X,∆Y reports for course, but why should desiredAccuracy matter?
BTW, the original code used an accuracy of 1 km because Apple notes that increasing accuracy increases power drain.
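For reference, here is a minimal sketch of the single-shot setup that did report a course on my hardware, assuming standard CoreLocation delegate wiring; the class and method names are illustrative, not from any Apple sample.

import CoreLocation

final class SingleShotLocator: NSObject, CLLocationManagerDelegate {
    private let locationMgr = CLLocationManager()

    override init() {
        super.init()
        locationMgr.delegate = self
        // With kCLLocationAccuracyKilometer the watch returned course == -1 in my tests;
        // switching to kCLLocationAccuracyBest is what made course appear.
        locationMgr.desiredAccuracy = kCLLocationAccuracyBest
        locationMgr.distanceFilter = kCLDistanceFilterNone
        locationMgr.activityType = .otherNavigation
    }

    func requestFix() {
        locationMgr.requestWhenInUseAuthorization()
        locationMgr.requestLocation()   // single-shot request
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let fix = locations.last else { return }
        // course is -1 when the receiver could not derive a track from successive fixes
        print("lat \(fix.coordinate.latitude) lon \(fix.coordinate.longitude) course \(fix.course)")
    }

    func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {
        print("location error: \(error)")
    }
}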

Related

iBeacon Monitoring with Unreliable Results (didEnterRegion & didExitRegion)

I'm currently working on an iOS app that ranges and monitors an iBeacon in order to be able to do some actions and receive notifications.
Ranging is working flawlessly, but I'm having trouble with the beacon monitoring and the notifications. I've researched the subject quite a bit and I'm aware that the CoreLocation framework often has problems like this, but I was wondering how other devs are fixing/approaching it.
Basically, I'm showing local notifications when the didEnterRegion and didExitRegion methods are fired. Unfortunately, these two methods are being fired quite often (in an unreliable fashion), even when the iBeacon is right next to the device, although sometimes it works perfectly, which makes it all the more annoying.
I've tried lowering the iBeacon advertising interval, and although it helped, it didn't fix the issue completely. Now, I'm trying with a logic filter where I ignore firing the notification if the enter or exit event happened in the last X minutes (I'm thinking of a 'magic' number between 5 and 15).
Is anyone having the same problems? Would adding a 2nd iBeacon to the situation help? (maybe monitor both of them, and filter logically the exit and enter events based on those two inputs?).
I was also thinking of adding another layer of data for showing notifications, maybe based on GPS or WiFi info. Has anyone tried this?
Any other idea? I'm open to any recommendation.
Just in case, I'm using Estimote iBeacons and iOS 9 (Objective-C).
Thanks for your time!
Intermittent region exit/entry events are a common problem, and are typically solved with a timer-based software filter exactly as you suggest. The specifics of how you set up the filter (the minimum time to wait for a re-entry after an exit before processing exit logic) vary for each use case, so it is good to have it under your control.
Understand that region exits are caused by iOS not detecting any Bluetooth advertisements from a beacon in the CLBeaconRegion for 30 seconds. If two detected packets are 31 seconds apart, you will get a region exit and then a region entry one second later.
This commonly happens with low signal levels. If an iOS device is on the outer edge of the beacon's transmission range, only a small percentage of packets will be received. With a beacon transmitting at 1Hz, if 30 packets in a row are missed, the iOS device will get an exit event.
There are several things you can do to reduce this problem in a specific area where you want solid coverage:
Turn your beacon transmitter power up to the maximum. This will give stronger signal levels and fewer missed packets in the area you care about.
Turn the advertising rate to the maximum. Advertising at 10 Hz gives 10x as many packets to receive as 1 Hz.
If needed, add additional beacons with the same identifier to increase coverage.
Of course, there are costs to the above, including reduced battery life at high advertising rates and transmitter power levels.
Even if you do all of the above, you still need the software filter, because there will always be a point where you are on the edge of the nearest beacon's transmission radius.
You can see an example of software filter code in my answer here.
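In case that link isn't handy, here is a minimal sketch of that kind of debounce filter, written in Swift for brevity (the question itself uses Objective-C); the class name and the grace-period value are illustrative, not taken from the linked answer.

import Foundation

final class RegionEventFilter {
    // Illustrative value: how long an "exit" must persist before we believe it
    private let exitGracePeriod: TimeInterval = 60
    private var pendingExit: DispatchWorkItem?
    private(set) var isInside = false

    // Call from locationManager(_:didEnterRegion:)
    func handleEntry(onConfirmedEntry: () -> Void) {
        pendingExit?.cancel()            // a re-entry within the grace period cancels the pending exit
        pendingExit = nil
        if !isInside {
            isInside = true
            onConfirmedEntry()
        }
    }

    // Call from locationManager(_:didExitRegion:)
    func handleExit(onConfirmedExit: @escaping () -> Void) {
        pendingExit?.cancel()
        let work = DispatchWorkItem { [weak self] in
            self?.isInside = false
            onConfirmedExit()            // no re-entry arrived before the grace period elapsed
        }
        pendingExit = work
        DispatchQueue.main.asyncAfter(deadline: .now() + exitGracePeriod, execute: work)
    }
}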
Beacons emit a pulsed signal. Ranging also performs an intermittent scan (roughly every 100 ms). This means that it is possible for your device to miss the beacon for a few seconds in a row, which can cause the results you are experiencing. You can log the beacon's RSSI value in this method:
- (void)locationManager:(CLLocationManager *)manager didRangeBeacons:(NSArray *)iBeacons inRegion:(CLBeaconRegion *)region
I believe you will see a lot of zero values before seeing didExitRegion being called. This isn't a fault in your code or with the beacon. It simply reflects the fact that neither the emitted signal nor the detection is constant; they are pulsed. These problems can occur while the beacon is just sitting on the same desk as your device, and can be exaggerated in a real-world setting when the signals are blocked by physical objects and people.
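As a rough illustration of that logging (shown in Swift rather than the question's Objective-C; the class name is hypothetical):

import CoreLocation

final class BeaconLogger: NSObject, CLLocationManagerDelegate {
    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        for beacon in beacons {
            // rssi == 0 means the beacon was not actually heard during this ranging cycle
            print("major \(beacon.major) minor \(beacon.minor) rssi \(beacon.rssi)")
        }
    }
}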
I would use ranging to determine more accurately whether your beacon is around. Note that ranging, especially in the background, can cause a significant battery draw.

Enabling 10 Hz sampling rate in Ublox modules

I'm using a u-blox NEO-M8N-0-01 GNSS module.
This module supports up to 5 Hz GPS+GLONASS and 10 Hz GPS only.
However, when I try to change the sampling rate (via UBX-CFG-RATE in the Messages view), I can only increase it to 5 Hz (measurement period = 200 ms). Any value below 200 ms is rejected (the box turns pink).
This happens even if I only output the NMEA GxGGA message.
The way I restricted it to GPS only was via UBX-CFG-GNSS.
Has anyone encountered this issue?
Thanks in advance
Roi Yozevitch
You don't say how you are setting the rate; however, going by your description, I'm assuming you are using the u-blox u-center software.
There is a simple explanation for this issue and a simple solution: their software has a bug in it (or wasn't updated to match the final specification of the part).
The solution is to not use u-center; it's the PC software that's complaining, not the receiver. The receiver itself doesn't care what the spec sheet says; it will try its best to run at whatever rate you request.
Sending commands directly, I've managed to get a fairly reliable 10 Hz GPS+GLONASS. There is the occasional missing point, but most of the time it keeps up.
Running GPS only, you can go faster than 10 Hz. If you play with the settings and restrict it to 8 channels, 18-19 Hz is fairly reliable. Unfortunately 20 Hz is pushing it too far; you end up getting positions at 10 Hz.
Obviously when running at these update rates make sure that your baud rate is high enough to cope with the requested messages and rate.
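To illustrate what "sending commands directly" involves, here is a small sketch that builds the UBX-CFG-RATE frame for a 100 ms (10 Hz) measurement period by hand; it is written in Swift to match the other examples on this page, and the serial-port write itself is left out.

// Build a UBX frame: sync chars, class/id, little-endian length, payload, Fletcher checksum
func ubxFrame(msgClass: UInt8, msgId: UInt8, payload: [UInt8]) -> [UInt8] {
    var body: [UInt8] = [msgClass, msgId,
                         UInt8(payload.count & 0xFF), UInt8((payload.count >> 8) & 0xFF)]
    body += payload
    var ckA: UInt8 = 0, ckB: UInt8 = 0
    for byte in body {
        ckA = ckA &+ byte
        ckB = ckB &+ ckA
    }
    return [0xB5, 0x62] + body + [ckA, ckB]
}

// UBX-CFG-RATE (class 0x06, id 0x08): measRate = 100 ms, navRate = 1, timeRef = GPS time
let cfgRatePayload: [UInt8] = [0x64, 0x00,   // measRate: 100 ms, little-endian U2
                               0x01, 0x00,   // navRate: one navigation solution per measurement
                               0x01, 0x00]   // timeRef: 1 = GPS time
let cfgRateFrame = ubxFrame(msgClass: 0x06, msgId: 0x08, payload: cfgRatePayload)
// Write cfgRateFrame to the receiver's serial port, using a baud rate high enough
// for the messages you have enabled at this update rate.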

Debugging methods for finding the location and error that's causing a game to freeze

I recently came across an error that I cannot understand. The game I'm developing using Cocos2D just freezes at a certain random point -- it gets a SIGSTOP -- and I cannot find the reason. What tool can I use (and how do I use it) to find out where the error occurs and what's causing it?
Jeremy's suggestion to stop in the debugger is a good one.
There's a really quick way to investigate a freeze (or any performance issue), especially when it's not easy to reproduce. You have to have a terminal handy (so you'll need to be running in the iOS simulator or on Mac OS X, not on an iOS device).
When the hang occurs pop over to a terminal and run:
sample YourProgramName
(If there are spaces in your program name wrap that in quotes like sample "My Awesome Game".) The output of sample is a log showing where your program is spending time, and if your program is actually hung, it will be pretty obvious which functions are stuck.
I disagree with Aaron Golden's answer above as running on a device is extremely useful in order to have a real-case scenario of where the app freezes. The simulator has more memory and does not reproduce the hardware of the device in an accurate way (for example, the frame rate is in certain cases lower).
"Obviously", you need to connect your device (with a developer profile) on Xcode and look at the console terminal to look for traces that user #AaronGolden suggested.
If those are not enough you might want to enable a general exception breakpoint in Xcode to capture more of the stacktrace messages.
When I started learning Cocos2D my app often froze. This is a list of common causes:
I wasn't using sprite sheets, and hence the frame rate was dropping dramatically
I was using too much memory (too many high-definition sprites; have a look at TexturePacker and use the pvr.ccz or pvr.gz format, which cuts memory allocation in half)
Use Instruments to profile your app for memory issues (for example, look at the Allocations instrument and watch for memory warnings).

Any way to get stable/consistent FPS from Kinect?

I am trying to record Kinect files in .oni format that I will later try to synchronize with other sensors. As such, it is very important that I get a consistent fps, even if some frames are repeats.
From what I can see now, WaitAndUpdateAll does not guarantee that the frame rate is consistent. I will be recording for several minutes (20+), so I need to make sure there is no drift!
Does anyone know if it's possible to lock down the fps of the recording, and if not, how stable the recording fps of the Kinect is? Thanks!
After some investigation of this issue, I put together the following write up on the topic:
http://denislantsman.com/?p=50
Putting it here so interested people can find it and not have to wrestle with this issue.
My guess would be to go with the PCL library, since the developers also work together with the ROS team, where they have to sync sensors a lot. But be warned: I wasn't able to capture XYZRGB clouds at 30 FPS on Windows 7. If you only need XYZ to be captured, you should be fine. Worst case, you have to time-stamp and sync all your data yourself.

Testing Real Time Operating System for Hardness

I have an embedded device (Technologic TS-7800) that advertises real-time capabilities, but says nothing about 'hard' or 'soft'. While I wait for a response from the manufacturer, I figured it wouldn't hurt to test the system myself.
What are some established procedures to determine the 'hardness' of a particular device with respect to real time/deterministic behavior (latency and jitter)?
Being at college, I have access to some pretty neat hardware (good oscilloscopes and signal generators), so I don't think I'll run into any issues in terms of testing equipment, just expertise.
With that kind of equipment, it ought to be fairly easy to sync the o-scope to a steady clock, produce a spike each time the real-time system produces an output, and see how much that spike varies from center. The less the variation, the greater the hardness.
To clarify Bob's answer maybe:
Use the signal generator to generate a pulse at some varying frequency.
A random distribution across some range would be best.
Use the signal generator (trigger signal) to start the scope.
The RTOS has to respond, do its thing and send an output pulse.
Feed the RTOS output into input 2 of the scope.
Set the scope to persist/collect mode.
Get the scope to start on A, stop on B, if you can.
In an ideal world, get it to measure the distribution for you. A LeCroy would.
Start with a much slower trace than you would expect. You need to be able to see slow outliers.
You'll be able to see the distribution.
Assuming a normal distribution, the SD of the response-time variation is the SOFTNESS.
(This won't really happen in practice, but if you don't get outliers it is reasonably useful.)
If there are outliers of large latency, then the RTOS is NOT very hard: it does not meet deadlines well, and is therefore unsuitable for hard real-time work.
Many RTOS-like things have a good left edge to the curve, sloping down like a 1/f curve.
That's indicative of the combined jitters. The thing to look out for is spikes of slow response at the right end of the scope. Keep repeating the experiment with faster traces if there are no outliers, to get a good image of the slope. That should be good for some speculative conclusions in your paper.
If, for your application, a delta of say 1 µs is okay and you measure 0.5 µs, it's all cool.
Anyway, you can publish the results (probably even in the publishing sense, but certainly on the web).
Link from this Question to the paper when you've written it.
Hard real-time has more to do with how your software works than with the hardware on its own. When asking whether something is hard real-time, the question must be applied to the complete system (hardware, RTOS and application). This means hard versus soft real-time is a system design issue.
Under loading exceeding the specification, even a hard real-time system will fail (hopefully with a proper failure indication), while a soft real-time system with low loading could give hard real-time results. How much processing must happen in time and how much pre/post processing can be performed is the real key to hard/soft real-time.
In some real-time applications, some data loss is not a failure; it just has to stay below a certain level. Again, that is a system criterion.
You can generate inputs to the board and have a small application count them and check at what level data starts to be lost. But that gives you a rating specific to that system running that application. As soon as you start doing more processing, your computational load increases and you now have a different hard real-time limit.
This board, running a bare-bones scheduler, will give great, predictable hard real-time performance for most tasks.
Running a full RTOS with a heavy computational load, you will probably only get soft real-time.
Edit after comment
The most efficient and easiest way I have used to measure my software's performance (assuming you use a scheduler) is to use a free-running hardware timer on the board and time-stamp the start and end of my cycle. Or, if you run a full RTOS, time-stamp your acquisition and transition. Save your max time and run an average of the values over a second. If your average is around 50% and your max is within 20% of your average, you are OK. If not, it is time to refactor your application. As your application grows, the cycle time will grow. You can monitor the effect of all your software changes on your cycle time.
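The idea is language-agnostic; the sketch below shows it in Swift (to match the other examples on this page), with a monotonic clock standing in for the board's free-running hardware timer.

import Dispatch

// Monotonic stand-in for a free-running hardware timer; on the target board you
// would read the hardware counter instead.
func nowNanos() -> UInt64 { DispatchTime.now().uptimeNanoseconds }

var maxCycleNs: UInt64 = 0
var sumCycleNs: UInt64 = 0
var samples: UInt64 = 0

for _ in 0..<1_000 {
    let start = nowNanos()

    // ... the periodic real-time work goes here ...

    let cycle = nowNanos() - start
    maxCycleNs = max(maxCycleNs, cycle)
    sumCycleNs += cycle
    samples += 1
}

print("avg cycle \(sumCycleNs / samples) ns, max cycle \(maxCycleNs) ns")
// Per the rule of thumb above: keep the average around half your budget and the
// max within ~20% of the average; otherwise it is time to refactor.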
Another way is to use a hardware timer to generate a cyclical interrupt. If you finish in time, reset the timer. If you miss the deadline, have the interrupt handler signal a failure. This, however, only gives you a warning once your application is already taking too long, but because it relies on hardware and interrupts, you can't miss it.
These solutions also eliminate the requirement to hook up a scope to monitor the output, since the timing information can be displayed in any kind of terminal by a background task. If it is easy to monitor, you will monitor it regularly, catching timing problems as soon as they are introduced rather than trying to solve them at the end.
Hope this helps
I have the same board here at work. It's a slightly-modified 2.6 Kernel, I believe... not the real-time version.
I don't know that I've read anything in the docs yet that indicates that it is meant for strict RTOS work.
I think that this is not a hard real-time device, since it runs no RTOS.
I understand being a geek, but using an oscilloscope to test a computer with Ethernet/USB/other digital ports and a HUGE internal state (RAM) is both ineffective and unreliable.
Instead of watching waveforms, you can connect any PC to the output port and run a proper statistical analysis.
The established procedure (if the input signal is analog by nature) is to test the system against several characteristic inputs - traditionally spikes, step functions and sine waves of different frequencies - and measure the phase shift and variance for each input type. The worst case is then used in the specification of the system.
Again, if you are using standard ports, you can easily generate those on a PC. If the input is truly analog, a separate DAC or simply a good sound card would be needed.
Now, that won't say anything about the OS being real-time: it could be running vanilla Linux or even Win CE and still produce good and stable results in those tests if the hardware is fast enough.
So, you need to simulate heavy and varying loads on the processor, memory and all ports, let it heat up and eat memory for a few hours, and then repeat the tests. If latency stays constant, it's hard real-time. If it varies but never rises above an acceptable limit, under any load and input signal type, it's soft. Otherwise, it's advertisement.
P.S.: The implication is that even for critical systems you don't actually need hard real-time if your hardware is fast enough.