What does 'resolution' mean in embedded systems?

I'm taking an embedded systems class and I keep seeing this word used as if it meant the clock period. But sometimes it behaves differently from the clock period. It's definitely not a graphics screen resolution. What's the exact definition in terms of embedded systems?

The clock resolution is the smallest unit of time the clock can measure. For example, a clock with a rate of 1 Hz has a resolution of 1 s; one with a rate of 60 Hz has a resolution of about 16.67 ms (16.6 repeating).
resolution = desired unit / clock rate
1000 ms / 60 Hz ≈ 16.67 ms
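As a quick sanity check, here is a minimal C++ sketch of the same arithmetic (the function name and units are just for illustration, not from any API):

#include <cstdio>

// Smallest measurable time step, in milliseconds, for a clock of the
// given frequency in Hz. Purely illustrative helper.
double resolution_ms(double frequency_hz)
{
    return 1000.0 / frequency_hz;
}

int main()
{
    std::printf("1 Hz  -> %.3f ms\n", resolution_ms(1.0));  // 1000.000 ms = 1 s
    std::printf("60 Hz -> %.3f ms\n", resolution_ms(60.0)); // ~16.667 ms
}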

The term has a general meaning relating to granularity, or the minimum increment of something, and it applies to time as well as to images and displays.
In the case of a clock signal there are several related terms that are probably less ambiguous, such as period, frequency, or granularity.

Related

Fabric previous day event count changes after 6-12 hours

It looks like Fabric is recalculating or correcting the count for one of my events (iOS app) 6-12 hours later. The updated count is always lower, by 20-30%.
Mike from Fabric here. Yes, this can happen; the count can be corrected either upward or downward around UTC midnight each day.
We have two layers of processing - speed and batch - that allow us to collect data in real-time, while at the same time correcting any errors that happen during the collection of the data. What you're seeing happen is the correction of the speed data layer by the batch layer.
Because there is less time to calculate the speed-layer information, we use probabilistic algorithms, which let us calculate this information in real time but at a cost of reduced accuracy. For most apps, the correction is extremely small, but there are outliers.
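Purely as a toy illustration of the general speed-vs-batch idea (this is not Fabric's actual pipeline; all names and numbers are made up): a speed layer might extrapolate a count from a sample of events, while a batch layer later recounts them exactly.

#include <cstdio>
#include <vector>

int main()
{
    // Pretend these events arrive in real time.
    const std::vector<int> events(12345, 1);
    const int sample_every = 100;

    // "Speed layer": count every Nth event and scale up. Real systems use far
    // more sophisticated probabilistic structures, but the trade-off is the
    // same: a fast approximate number first.
    long speed_estimate = 0;
    for (std::size_t i = 0; i < events.size(); ++i)
        if (i % sample_every == 0)
            speed_estimate += sample_every;

    // "Batch layer": the exact count, computed later.
    long batch_count = static_cast<long>(events.size());

    std::printf("speed layer estimate: %ld\n", speed_estimate); // 12400 (approximate)
    std::printf("batch layer count:    %ld\n", batch_count);    // 12345 (exact)
}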

Understanding simple simulation and rendering loop

This is an example (pseudo code) of how you could simulate and render a video game.
//simulate 20ms into the future
const long delta = 20;
long simulationTime = 0;
while (true)
{
    while (simulationTime < GetMilliSeconds()) // GetMilliSeconds = wall-clock time
    {
        // the frame we simulated is still in the past
        input = GetUserInput();
        UpdateSimulation(delta, input);
        // we are trying to catch up and eventually pass the wall-clock time
        simulationTime += delta;
    }
    // since my current simulation is in the future and
    // my last simulation is in the past,
    // the current look of the world has to be somewhere in between
    RenderGraphics(InterpolateWorldState(GetMilliSeconds() - simulationTime));
}
Here's my question:
I have 40 ms to go through the outer 'while(true)' loop (which means 25 FPS).
The RenderGraphics method takes 10 ms, so that leaves 30 ms for the inner loop. The UpdateSimulation method takes 5 ms. Everything else can be ignored, since it takes under 0.1 ms.
What is the maximum I can set the variable 'delta' to in order to stay in my time schedule of 40ms (outer loop)?
And why?
This largely depends on how often you want and need to update your simulation state and user input, given the constraints mentioned below. For example, if your game contains internal state based on physical behavior, you need a smaller delta to ensure that movements and collisions, if any, are properly evaluated and reflected in the game state. Likewise, if your user input requires fine-grained evaluation and state updates, you also need smaller delta values; a shooting game with analogue input (e.g. mouse, joystick) would benefit from update frequencies above 30 Hz. If your game does not need such high-frequency evaluation of input and game state, then you can get away with larger delta values, or even simply update the game state whenever player input is detected.
In your specific pseudo-code, the simulation updates in fixed time slices of length delta, which requires each simulation update to be processed in less wall-clock time than the wall-clock time it simulates. Otherwise wall-clock time advances faster than your simulation time can be updated. This ultimately limits delta, depending on how quickly an update of delta simulation time can actually be computed.

This relationship also depends on your use case and may not be linear or constant. For example, physics engines often internally subdivide the delta time they are given into an update rate they can reasonably process, because longer delta times can cause numerical instabilities and harder-to-solve linear systems, raising the processing effort non-linearly. In other use cases, simulation updates may take linear or even constant time. Even so, many (possibly external) events can cause a simulation update to be processed too slowly if it is inherently demanding: loading resources during updates, the operating system scheduling your thread aside, another process run by the user, anti-virus software kicking in, memory pressure, a slow CPU, and so on.

So far I have mostly seen two strategies to avoid this problem or remedy its effects. The first is to simply ignore it, which can work if the simulation update effort is low and the cause of the slowdown is assumed to be temporary. This results in more or less noticeable "slow motion" behavior of your simulation, which in the worst case can let simulation time lag pile up forever. The second strategy is to cap the measured frame time to be simulated at some artificial value, say 1000 ms. This gives smooth behavior as soon as the cause of the slowdown disappears, but has the drawback that the capped simulation time is "lost", which may lead to animation hiccups if not handled or accounted for.

To choose a strategy, analyze your use case: measure the wall-clock time it takes to process simulation updates of delta and x * delta simulation time, and how changing the delta time and the simulation load is reflected in the wall-clock time needed to compute it. That will hint at what the maximum value of delta is for your specific hardware and software environment.
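As a rough sketch of that second strategy, in the same pseudo-code style as the question (maxLagMs is a made-up name and the cap value is arbitrary): clamp how far behind the simulation is allowed to fall, so a long stall cannot make the inner loop spin for seconds.

const long delta = 20;        // fixed simulation step in ms
const long maxLagMs = 1000;   // artificial cap on the time we try to catch up

long simulationTime = GetMilliSeconds();
while (true)
{
    long now = GetMilliSeconds();
    if (now - simulationTime > maxLagMs)
        simulationTime = now - maxLagMs; // "lose" the excess time instead of spiraling

    while (simulationTime < now)
    {
        UpdateSimulation(delta, GetUserInput());
        simulationTime += delta;
    }
    RenderGraphics(InterpolateWorldState(now - simulationTime));
}

The drawback mentioned above still applies: whatever time falls outside the cap is simply dropped, so animations may visibly jump when that happens.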

GPS reported accuracy, error function

Most GPS systems report "accuracy" in units of meters, with the figure varying over orders of magnitude. What does this figure mean? How can it be translated to an error function for estimation, i.e. the probability of an actual position given the GPS reading and its reported accuracy?
According to the Wikipedia article on GPS accuracy, a reading down to 3 meters can be achieved by precisely timing the radio signals arriving at the receiver. This seems to correspond with the tightest error margin reported by e.g. an iPhone. But that wouldn't account for external signal distortion.
It sounds like an error function should have two domains, with a gentle linear slope out to the reported accuracy and then a polynomial or exponential increase further out.
Is there a better approach than to tinker with it? Do different GPS chipset vendors conform to any kind of standard meaning, or do they all provide only some kind of number for the sake of feature parity?
The number reported is usually called HEPE, Horizontal Estimated Position Error. In theory, 67% of the time the measurement should be within HEPE of the true position, and 33% of the time the measurement should be in horizontal error by more than the HEPE.
In practice, no one checks HEPEs very carefully, and in my experience the HEPEs reported for 3- or 4-satellite fixes are much larger than they need to be. That is, in my experience 3-satellite fixes are accurate to within the HEPE distance much more than 67% of the time.
The "assumed" error distribution is circular Gaussian. So in principle you could find the right ratios for a circular Gaussian and derive the 95% probability radius and so on. But for these numbers to be meaningful, you would need to do extensive statistical testing to verify that you are indeed getting around 95%.
The above are my impressions from working in the less accuracy-sensitive parts of GPS over the years. Conceivably, people who work on using GPS for aircraft landing have a better sense of how to predict errors and error rates, but the techniques and methods they use are likely not available in consumer GPS devices.

accelerometer measuring negative peaks of velocity

I'm writing an application for the iPhone 4 and I'm taking values from the accelerometer to compute the current movement from a known initial position.
I've noticed some very strange behavior: often, when I walk holding the phone for a few meters and then stop, I register a negative peak of overall velocity when the handset decelerates. How is that possible if I keep moving in the same direction?
To compute the variation in velocity I just do this:
delta_v = (acc_previous + acc_now)/2 * (1/(updating_frequency))
Say you are moving at a constant 10 m/s. Your acceleration is zero. Let's say, for the sake of simplicity, you sample every 1 second.
If you decelerate over a period of 0.1 seconds, you might get a single reading of around -100 m/s/s, or you might get no reading at all, since the deceleration can fall between two sampling windows. Your formula most likely will not detect any deceleration, or, if it does, you'll get two velocity changes of -50 m/s: (0 + (-100)) / 2 and then ((-100) + 0) / 2, each multiplied by the 1-second interval. Either way you'll get the wrong final velocity.
Something similar can happen at almost any scale. All you need is a short burst of high acceleration or deceleration that you happen to sample (or miss) and your figures are screwed.
Numerical integration is hard. Naive numerical integration of a noisy signal will essentially always produce significant errors and drift (like what you're seeing). People have come up with all sorts of clever ways to deal with this problem, most of which require having some source of reference information other than the accelerometer (think of a Wii controller, which has not only an accelerometer but also the sensor bar on top of the TV).
Note that any MEMS accelerometer is necessarily limited to reporting only a certain band of accelerations; if acceleration goes outside of that band, then you will absolutely get significant drift unless you have some way to compensate for it. On top of that, there is the fact that the acceleration is reported as a discrete quantity, so there is necessarily some approximation error as well as noise even if you do not go outside of the window. When you add all of those factors together, some amount of drift is inevitable.
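To make the aliasing and drift problem concrete, here is a minimal sketch of the same trapezoidal update as in the question's delta_v formula, applied to a deceleration that is much shorter than the sample interval (all numbers are invented for illustration):

#include <cstdio>

int main()
{
    const double dt = 1.0;                     // 1 Hz sampling, for simplicity
    const double acc[] = { 0.0, -100.0, 0.0 }; // m/s^2: one sample caught the braking spike
    double v = 10.0;                           // true initial velocity, m/s

    for (int i = 1; i < 3; ++i)
    {
        // Trapezoidal rule, as in the question: delta_v = (a_prev + a_now)/2 * dt
        double delta_v = (acc[i - 1] + acc[i]) / 2.0 * dt;
        v += delta_v;
        std::printf("after sample %d: v = %.1f m/s\n", i, v); // -40.0, then -90.0
    }
    // The phone actually came to rest (v ~ 0), but the integrated velocity is
    // wildly negative: the brief spike was weighted as if it lasted a full second.
}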
Well, if you move any object in one direction, there's a force involved which accelerates the object.
To make the object come to a halt again, a force in the opposite direction is needed; to be more precise, the velocity change produced by the earlier acceleration has to be cancelled by an equal and opposite one. That's your negative peak.
Not strictly a programming answer, but then again, your question is not strictly a programming question :)
If it's moving at a constant thousand miles per hour, its acceleration is 0. If it's speeding up, the acceleration is positive. If it's slowing down, the acceleration is negative.
You can take the absolute value of the velocity change to discard the sign, if that's what you need:
fabs(delta_v); // use abs for ints

GPS signal cleaning & road network matching

I'm using GPS units and mobile computers to track individual pedestrians' travels. I'd like to "clean" the incoming GPS signal in real time to improve its accuracy. Also, after the fact (not necessarily in real time), I would like to "lock" individuals' GPS fixes to positions along a road network. Do you have any techniques, resources, algorithms, or existing software to suggest on either front?
A few things I am already considering in terms of signal cleaning:
- drop fixes for which num. of satellites = 0
- drop fixes for which speed is unnaturally high (say, 600 mph)
And in terms of "locking" to the street network (which I hear is called "map matching"):
- lock to the nearest network edge based on root mean squared error
- when fixes are far away from road network, highlight those points and allow user to use a GUI (OpenLayers in a Web browser, say) to drag, snap, and drop on to the road network
Thanks for your ideas!
I assume you want to "clean" your data to remove erroneous spikes caused by dodgy readings. This is a basic DSP process. There are several approaches you could take; it depends how clever you want it to be.
At a basic level, yes, you can just look for really large figures, but what is a really large figure? 600 mph is fast, but not if you're on Concorde. While you are looking for a value that is "out of the ordinary", you are effectively hard-coding "ordinary". A better approach is to examine past data to determine what "ordinary" is, and then look for deviations. You might consider calculating the variance of the data over a small local window and then checking whether the z-score of your current reading exceeds some threshold; if so, exclude it.
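A minimal sketch of that idea (the window size and threshold are arbitrary choices, not recommendations): keep a small window of recent speeds and reject a new fix whose z-score against that window exceeds a threshold.

#include <cmath>
#include <deque>

// Returns true when the new speed looks like an outlier relative to recent history.
// Illustrative only; window size and threshold need tuning for the application.
bool is_speed_outlier(std::deque<double>& window, double speed,
                      std::size_t max_window = 20, double z_threshold = 3.0)
{
    if (window.size() >= 3)
    {
        double mean = 0.0;
        for (double s : window) mean += s;
        mean /= window.size();

        double var = 0.0;
        for (double s : window) var += (s - mean) * (s - mean);
        double sd = std::sqrt(var / window.size());

        if (sd > 1e-9 && std::fabs(speed - mean) / sd > z_threshold)
            return true;                     // reject; keep it out of the history
    }
    window.push_back(speed);                 // accept and extend the history
    if (window.size() > max_window) window.pop_front();
    return false;
}

You would call this once per incoming fix with a speed derived from consecutive positions (or the speed field the receiver reports), and drop fixes for which it returns true.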
One note: you should use 3 as the minimum number of satellites, not 0. A GPS receiver needs at least three satellites to calculate a horizontal (2D) position, and four for a 3D fix. Every GPS I have used includes a status flag in the data stream; fewer than 3 satellites is reported as "bad" data in some way.
You should also consider "stationary" data. How will you handle the pedestrian standing still for some period of time? Perhaps waiting at a crosswalk or interacting with a street vendor?
Depending on what you plan to do with the data, you may need to suppress those extra data points or average them into a single point or location.
You mention this is for pedestrian tracking, but you also mention a road network. Pedestrians can travel a lot of places where a car cannot, and, indeed, which probably are not going to be on any map you find of a "road network". Most road maps don't have things like walking paths in parks, hiking trails, and so forth. Don't assume that "off the road network" means the GPS isn't getting an accurate fix.
In addition to Andrew's comments, you may also want to consider interference factors such as multipath, and how they show up in your incoming GPS data stream, e.g. the HDOP values in the GSA sentence of NMEA 0183. In my own GPS controller software, I allow user-specified rejection criteria against a range of QA-related parameters.
I also tend to work on a moving window principle in this regard, where you can consider rejecting data that represents a spike based on surrounding data in the same window.
Read the position fix indicator to see whether the signal is valid (the fix quality field in the $GPGGA sentence, if you parse raw NMEA strings). If it's 0, ignore the message.
Besides that you could look at the combination of HDOP and the number of satellites if you really need to be sure that the signal is very accurate, but in normal situations that shouldn't be necessary.
Of course it doesn't hurt to do some sanity checks on GPS signals (a rough sketch follows this list):
- latitude between -90 and 90;
- longitude between -180 and 180 (or E/W, N/S, 0..90 and 0..180 if you're reading raw NMEA strings);
- speed between 0 and 255 (for normal cars);
- distance to the previous measurement (based on lat/lon) roughly matches the indicated speed;
- time difference with the system time not larger than x (unless the system clock cannot be trusted or relies on GPS synchronisation :-) );
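A rough sketch of some of those checks, assuming the fields have already been parsed out of the NMEA sentences (the struct layout and the HDOP limit are illustrative, not any standard):

// Illustrative only: a parsed GPS fix and a few of the sanity checks listed above.
struct GpsFix {
    int    fix_quality;    // GGA fix quality field: 0 = invalid
    double latitude;       // decimal degrees
    double longitude;      // decimal degrees
    double speed_kmh;      // reported speed
    double hdop;           // horizontal dilution of precision
    int    satellites;     // satellites used in the fix
};

bool looks_sane(const GpsFix& fix)
{
    if (fix.fix_quality == 0)                             return false;
    if (fix.satellites < 3)                               return false;
    if (fix.latitude  < -90.0  || fix.latitude  > 90.0)   return false;
    if (fix.longitude < -180.0 || fix.longitude > 180.0)  return false;
    if (fix.speed_kmh < 0.0    || fix.speed_kmh > 255.0)  return false;
    if (fix.hdop > 10.0)                                  return false; // arbitrary threshold
    return true;
}

The distance-versus-speed and time-difference checks would need the previous fix and the system clock as extra inputs, so they are left out of this sketch.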
To do map matching, you basically iterate through your road segments and check which segment is the most likely given your current position, direction, speed, and possibly previous GPS measurements and matches.
If you're not doing a realtime application, or if a delay in feedback is acceptable, you can even look into the 'future' to see which segment is the most likely.
Doing all of that properly is an art in itself, and this space is too short to go into it deeply.
It's often difficult to decide with 100% confidence which road segment somebody is on. For example, if there are two parallel roads that are equally close to the current position, it becomes a matter of creative heuristics.
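As a very small sketch of the "nearest segment" part only (ignoring direction, speed, and history, which is where the real heuristics live): project the fix onto each candidate segment and keep the closest one. Coordinates are treated as planar here, which is only reasonable once lat/lon have been projected to something like meters over a small area.

#include <cmath>
#include <vector>

struct Point   { double x, y; };   // planar coordinates, e.g. projected meters
struct Segment { Point a, b; };    // one edge of the road network

// Distance from p to the closest point on segment s (plain planar geometry).
double distance_to_segment(const Point& p, const Segment& s)
{
    double dx = s.b.x - s.a.x, dy = s.b.y - s.a.y;
    double len2 = dx * dx + dy * dy;
    double t = len2 > 0.0 ? ((p.x - s.a.x) * dx + (p.y - s.a.y) * dy) / len2 : 0.0;
    t = std::fmax(0.0, std::fmin(1.0, t));
    return std::hypot(p.x - (s.a.x + t * dx), p.y - (s.a.y + t * dy));
}

// Index of the nearest segment (roads must be non-empty). A real map matcher
// would also weight heading, speed, and the previously matched segment.
std::size_t nearest_segment(const Point& p, const std::vector<Segment>& roads)
{
    std::size_t best = 0;
    double best_d = distance_to_segment(p, roads[0]);
    for (std::size_t i = 1; i < roads.size(); ++i)
    {
        double d = distance_to_segment(p, roads[i]);
        if (d < best_d) { best_d = d; best = i; }
    }
    return best;
}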