CMPedometerData.distance ... how is this calculated? - cocoa-touch

I have an app that uses CMStepCounter in iOS 7. I am updating it to use the new APIs in iOS 8, primarily CMPedometer. CMPedometer's handler returns a CMPedometerData object that includes distance, speed, and number of steps, instead of just the number of steps that CMStepCounter provided. This is great, but I was wondering if anybody knows how Apple calculates distance.
Previously I had to calculate distance by taking the number of steps and multiplying by the distance per step, which I obtained from a user-entered stride length. People of different sizes have different stride lengths and will travel different distances for the same number of steps. So I was wondering whether Apple takes this into account, or whether they use a hard-coded stride length for everybody. If anybody has figured this out, I would really like to know. Thanks!
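For reference, the pre-iOS 8 approach described above (steps multiplied by a user-supplied stride length) can be sketched as follows; the function name and stride values are illustrative, not part of any Apple API:

```python
def distance_from_steps(steps, stride_length_m):
    """Estimate distance walked, in meters, from a step count and a
    user-supplied stride length (the manual approach described above)."""
    return steps * stride_length_m

# Two users taking the same 1000 steps cover different distances:
short_stride = distance_from_steps(1000, 0.65)  # shorter stride, less distance
long_stride = distance_from_steps(1000, 0.80)   # longer stride, more distance
```

This is exactly why a hard-coded stride length would misestimate distance for users at either end of the height range.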

Related

React Native, iBeacons, RSSI value to distance conversion

How to stabilize the RSSI (Received Signal Strength Indicator) of low energy Bluetooth beacons (BLE) for more accurate distance calculation?
We are trying to develop an indoor navigation system and came across the problem that the RSSI fluctuates so much that the distance estimate is nowhere near the correct value. We tried using an advanced averaging calculator, but to no avail.
The device is constantly receiving RSSI values. How do we filter them and get a mean value? I am completely lost; please help.
Can anyone suggest an npm library or point me in the right direction? I have been searching for many days but have not gotten anywhere.
FRONT END: ReactNative BACKEND: NODEJS
In addition to @davidgyoung's answer, we would like to point out that any filtering method is a compromise between the quality of noise reduction and the time lag introduced by the filtering (depending on the characteristic filtering time your method uses). As @davidgyoung pointed out, if you take a characteristic filtering period T, you will get an average time lag of about T/2.
Thus, I think the best approach to your problem is not to search for the best filtering method but to make changes on the transmitter's end itself.
First, you can increase the number of signals transmitted per second (most modern beacons allow this via the manufacturer's applications and APIs).
Secondly, you can increase the beacon's power (also usually one of the beacon's settings), which typically improves the signal-to-noise ratio.
Finally, you can compare beacons from different vendors. At Navigine we experimented with and tested lots of beacons from multiple manufacturers, and it appears that the signal-to-noise ratio can vary significantly among them. For our part, we recommend taking a look at kontakt.io beacons (https://kontakt.io/) as one of the recognized leaders, with 5+ years of experience in the area.
It is unlikely that you will find a pre-built package that does what you want, as your needs are pretty specific. You will most likely have to write your own filtering code.
A key challenge is to decide the parameters of your filtering, as an indoor nav use case often is impacted by time lag. If you average RSSI over 30 seconds, for example, the output of your filter will effectively give you the RSSI of where a moving object was on average 15 seconds ago. This may be inappropriate for your use case if dealing with moving objects. Reducing the averaging interval to 5 seconds might help, but still introduces time lag while reducing smoothing of noise. A filter called an Auto-Regressive Moving Average Filter might be a good choice, but I only have an implementation in Java so you would need to translate to JavaScript.
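As one concrete illustration of the smoothing-versus-lag trade-off (not the Auto-Regressive Moving Average filter mentioned above, and shown in Python rather than the JavaScript the question calls for), a simple exponentially weighted moving average looks like this; `alpha` is an assumed tuning parameter:

```python
def ewma_filter(rssi_samples, alpha=0.3):
    """Exponentially weighted moving average over raw RSSI readings.
    Smaller alpha smooths more aggressively but reacts more slowly,
    which is exactly the time-lag trade-off described above."""
    smoothed = []
    estimate = None
    for rssi in rssi_samples:
        # The first sample initializes the estimate; later samples blend in.
        estimate = rssi if estimate is None else alpha * rssi + (1 - alpha) * estimate
        smoothed.append(estimate)
    return smoothed
```

Translating this to JavaScript for a React Native app is only a few lines; the hard part is choosing `alpha` for how fast your tracked objects move.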
Finally, do not expect a filter to solve all your problems. Even if you smooth out the noise on the RSSI you may find that the distance estimates are not accurate enough for your use case. Make sure you understand the limits of what is possible with this technology. I wrote a deep dive on this topic here.

Is there a reason why the number of channels/filters and batch sizes in many deep learning models are in powers of 2?

In many models the number of channels is kept in powers of 2. Also the batch-sizes are described in powers of 2. Is there any reason behind this design choice?
There is nothing significant about keeping the channels and batch size as powers of 2. You can use any number you want.
While both could probably be optimized for speed (cache-alignment? optimal usage of CUDA cores?) to be powers of two, I am 95% certain that 99.9% do it because others used the same numbers / it worked.
For both hyperparameters you could choose any positive integer, so what would you try? Keep in mind that each complete evaluation takes at least several hours. Hence I guess that when people play with this parameter, they do something like a binary search: starting from one number, they keep doubling as long as it improves results, until an upper bound is found. At some point the differences are minor, and then it is irrelevant what you choose. And people will wonder less if you write that you used a batch size of 64 than if you write that you used 50. Or 42.
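The informal doubling search described above can be sketched as follows; `score` stands in for a full training-and-evaluation run, which in practice takes hours per call:

```python
def doubling_search(score, start=16, max_value=1024):
    """Keep doubling the hyperparameter (e.g. batch size) while the
    evaluation score improves; stop at the first value that does not
    help, or at the upper bound. A sketch of the heuristic above."""
    best, best_score = start, score(start)
    candidate = start * 2
    while candidate <= max_value:
        candidate_score = score(candidate)
        if candidate_score <= best_score:
            break  # doubling stopped helping
        best, best_score = candidate, candidate_score
        candidate *= 2
    return best
```

Starting from a power of 2 and doubling is why the published values end up as powers of 2, not because those values are inherently special.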

Cluster 3D points into different segments

I'm asking whether there are any ideas on how to cluster different body segments using the depth map from the Kinect device. There are two problems: the first is how to distinguish body parts from each other, for example the lower arm from the upper arm. The second is how to identify a body part when part of it is occluded.
I hope someone can guide me toward a solution.
Many thanks for your kind assistance.
You can use skeleton-recognition middleware (e.g. NiTE) to get the coordinates of the body joints (such as shoulder, elbow, fingertip). After reading the Z (depth) values of the joints, you can consider only the points whose Z value is close to the joints' Z values.
For example if the middleware tells you that the Z value of the hand is 2000mm, you can safely assume that all the pixels/points that are part of fingers and palm will have a Z value around 1900-2100mm, and the wall or desk behind or in front of the user will have a much different Z value. So you can just disregard any point outside 1900-2100mm.
You should also disregard any points that are far from the joints. For example there might be a book that is exactly 2000mm far from the camera, but located far from the user.
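The Z-range and joint-distance filtering described above can be sketched as follows; the point format, tolerances, and function name are assumptions for illustration, not part of any Kinect or NiTE API:

```python
import math

def points_near_joint(points, joint_z_mm, z_tolerance_mm=100,
                      joint_xy=None, xy_radius_mm=300):
    """Keep only depth-map points whose Z value is within z_tolerance_mm
    of the joint's Z, and (optionally) whose XY position lies within
    xy_radius_mm of the joint. points: iterable of (x, y, z) in mm."""
    kept = []
    for x, y, z in points:
        if abs(z - joint_z_mm) > z_tolerance_mm:
            continue  # e.g. the wall or desk behind or in front of the user
        if joint_xy is not None:
            jx, jy = joint_xy
            if math.hypot(x - jx, y - jy) > xy_radius_mm:
                continue  # e.g. a book at the same depth but far from the joint
        kept.append((x, y, z))
    return kept
```

With a hand joint at Z = 2000 mm, this keeps the finger/palm points around 1900-2100 mm and discards both the background and same-depth clutter far from the user.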

Algorithm for reducing GPS track data to discard redundant data?

We're building a GIS interface to display GPS track data, e.g. imagine the raw data set from a guy wandering around a neighborhood on a bike for an hour. A data set like this, with perhaps a new point recorded every 5 seconds, will be large, and displaying it in a browser or on a handheld device will be challenging. Also, displaying every single point is usually unnecessary, since a user can't visually resolve that much data anyway.
So for performance reasons we are looking for algorithms that are good at 'reducing' data like this so that the number of points being displayed is reduced significantly but in such a way that it doesn't risk data mis-interpretation. For example, if our fictional bike rider stops for a drink, we certainly don't want to draw 100 lat/lon points in a cluster around the 7-Eleven.
We are aware of clustering, which works well when looking at a bunch of disconnected points; however, what we need is something that applies to tracks as described above. Thanks.
A more scientific and perhaps more math-heavy solution is to use the Ramer-Douglas-Peucker algorithm to generalize your path. I used it when I studied for my Master of Surveying, so it's a proven thing. :-)
Given your path and the maximum deviation you can tolerate from it, it simplifies the path by reducing the number of points.
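A minimal 2-D sketch of Ramer-Douglas-Peucker, using a perpendicular-distance tolerance (real GPS tracks would first need lat/lon projected to a planar coordinate system):

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification. points is a list of (x, y)
    tuples; epsilon is the maximum allowed perpendicular deviation of a
    dropped point from the simplified path."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    # Find the interior point farthest from the line through the endpoints.
    max_dist, index = 0.0, 0
    for i in range(1, len(points) - 1):
        x0, y0 = points[i]
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = math.hypot(x2 - x1, y2 - y1) or 1.0
        dist = num / den
        if dist > max_dist:
            max_dist, index = dist, i
    if max_dist <= epsilon:
        return [points[0], points[-1]]  # everything in between is droppable
    # Otherwise split at the farthest point and recurse on both halves.
    left = rdp(points[:index + 1], epsilon)
    right = rdp(points[index:], epsilon)
    return left[:-1] + right
```

For the stopped-at-the-7-Eleven case, the tight cluster of points collapses to its entry and exit, while genuine turns in the track survive.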
Typically the best way of doing that is:
1. Determine the minimum number of screen pixels you want between displayed GPS points.
2. Determine the distance represented by each pixel at the current zoom level.
3. Multiply the result of step 1 by the result of step 2 to get the minimum distance you want between displayed coordinates.
4. Starting from the first coordinate in the journey path, read each subsequent coordinate until you've reached the required minimum distance from the current point. Repeat.
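The steps above can be sketched as follows, assuming coordinates already projected to meters (real lat/lon pairs would need a great-circle or projected distance instead of `math.hypot`):

```python
import math

def thin_track(coords, min_pixels, meters_per_pixel):
    """Keep a point only once it is at least min_pixels * meters_per_pixel
    meters away from the last kept point."""
    min_dist = min_pixels * meters_per_pixel   # combine steps 1-3
    kept = [coords[0]]                         # step 4: start at the first point
    for x, y in coords[1:]:
        last_x, last_y = kept[-1]
        if math.hypot(x - last_x, y - last_y) >= min_dist:
            kept.append((x, y))
    return kept
```

Because the threshold depends on the zoom level, zooming in naturally reveals more of the original detail when the track is re-thinned.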

How can I distribute a number of values Normally in Excel VBA

Sorry, I know the question isn't as specific as it could be. I am currently working on a replenishment forecasting system for a clothing company (don't ask why it's in VBA). The module I am currently working on distributes forecasts down to size level. The idea is that the planners can forecast the number of units to sell and then specify a ratio between the sizes.
To make the interface a bit nicer, I was going to give them four options: assess trend, manual entry, Poisson, and Normal. The last two are where I am having an issue. Given a mean and SD, I'd like to produce a ratio (preferably as percentages) across the different sizes. The number of sizes can vary from 1 to ~30, so it's going to need to be a calculation.
If anyone could point me toward a method I'd be eternally grateful; likewise if you have suggestions for a better approach.
Cheers
For the sake of anyone searching this: although it was only a temporary solution, I used probability mass functions to get the ratios. This allowed the user to modify the mean and SD and thus skew the curve as they wished, and I could then use the ratios for my calculations. Poisson also worked with this method, but turned out to be a slightly poor choice in practice.
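A sketch of that approach, shown in Python rather than VBA for brevity: evaluate the normal density at each size index and normalise so the ratios sum to 100%. Treating the sizes as indices 1..n is an assumption; in VBA the density could come from `Application.WorksheetFunction.Norm_Dist` instead of a hand-rolled formula.

```python
import math

def size_ratios(n_sizes, mean, sd):
    """Return percentage ratios across n_sizes size slots by sampling
    a normal density at indices 1..n_sizes and normalising to 100."""
    densities = [
        math.exp(-((i - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))
        for i in range(1, n_sizes + 1)
    ]
    total = sum(densities)
    return [100 * d / total for d in densities]
```

Shifting `mean` moves the peak toward larger or smaller sizes, and widening `sd` flattens the spread, which is exactly the skewing behaviour the planners wanted.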