This is a simple question that I would rather have chatted with someone about, but here it is:
How is heading calculated? I can't figure it out visually.
If the heading is calculated with respect to the Earth's geographic north, does that mean a top view of the Earth? When you are standing somewhere on the surface of the Earth, how can you get a heading direction on a digital device? What are the calculations? Does it involve the sphere at all, or does the device ignore the existence of the sphere and simply keep in mind a simple coordinate, e.g. 90.000 N and 0.000 W?
I don't know why I can't seem to grasp the concept of heading mathematically...
Edit: I think I figured it out. You are treated as a point on the surface; north is always directly "above" you, figuratively speaking, and your heading can deviate from that reference through a full 360 degrees as you rotate about that point on the surface of the Earth.
Precisely speaking, a GPS receiver does not calculate heading.
Heading is the direction in which you are facing.
The more correct term for what a GPS gives you is course, or course over ground.
But modern APIs often mix up heading, course, and bearing.
Heading and course are the same for a land vehicle,
but not for a ship (due to drift).
The main point, though, is that one could think a GPS chip calculates the course/heading by evaluating the old and the new position. This is not true; that would be far too inaccurate.
GPS receivers use the Doppler shift for the speed measurement, and probably for the course calculation as well.
And yes, course and heading are the angle measured clockwise from geographic north (0°).
There's no concept of "heading" in a single coordinate, only in a succession of coordinates generated as something moves, in which case the heading is calculated from the differences between the coordinates.
So if your first coordinate was at 10N50E and the second at 11N50E your device calculates you as traveling due north, thus on a northerly heading.
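For illustration, a minimal sketch of that difference-based calculation, using the standard great-circle forward-azimuth formula (the function name is mine):

```python
import math

def course_between(lat1, lon1, lat2, lon2):
    """Initial course in degrees clockwise from true north, from fix 1 to fix 2.

    Standard great-circle forward-azimuth formula; inputs in decimal degrees.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

# The example above: 10N50E -> 11N50E comes out as due north.
print(course_between(10.0, 50.0, 11.0, 50.0))  # 0.0
```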
More than one question on this one :)
The heading (or yaw angle, in aeronautics), is defined as the angle between the North and the direction faced by the nose of the plane, when the plane is horizontal (pitch and roll angle at zero).
This is also what you could read on a compass (North = 0°, East = 90°, etc.)
Wherever you are on the globe, you should be able to lay a protractor on the ground with its 0° aligned with your current meridian, pointing north, and its 90° aligned with the local parallel, toward the east. Hence you can read your heading everywhere (except at the poles).
In a car, the heading is deduced from the trajectory, by looking at the previously recorded points (Doppler-based speed measurement is not widespread on cheap devices). And, as stated in other answers, what is displayed is not the heading but the track (the direction you are moving in, as opposed to the direction you are facing). Luckily, cars don't drift (most of the time), and so the track is equal to the heading.
In a smartphone, the display of the heading may be assisted by the internal compass.
So a GPS, as such, is not able to assess your heading, unless you use more than one antenna, like in this device, where the phase difference measured between the two antennas enables the device to deduce a pure, GPS-based heading.
Similar to the method suggested in another answer: one way would be to have two GPS antennas and know their position relative to each other. Then you have the location of a straight line on the Earth, and with it a solid direction (perpendicular to that line). You can now calculate your orientation with respect to any datum (e.g. true north, or a reference GPS location).
I have a very basic question. Can we find the bearing, i.e. the angle with respect to north, of a single coordinate, i.e. having the GPS lat/long of a single point, without any other sensors? I searched, but everywhere it is about finding the bearing between two points; I want to know whether we can find the bearing of a single point without knowing the direction.
Secondly, if we then rotate by some number of degrees, can we find the new bearing?
Any reference or link to relevant material would be appreciated.
The bearing is an angle. An angle is a figure formed by two directions, or rays, with a common origin (note that it is not between two points, but between two rays). The bearing is the angle between (1) the direction from some point A to north and (2) the direction from the same point A to another point B.
So yes, you need two points to define a bearing. While the first direction (toward north) is defined by a single point, defining the second direction requires another reference point.
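For the second question: once you do have a bearing (from two points, or from a compass sensor), rotating by some number of degrees is just addition modulo 360. A trivial sketch:

```python
def rotated_bearing(bearing_deg, turn_deg):
    # New bearing after turning turn_deg clockwise (negative = counter-clockwise).
    return (bearing_deg + turn_deg) % 360.0

print(rotated_bearing(350.0, 20.0))   # 10.0
print(rotated_bearing(10.0, -20.0))   # 350.0
```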
I am working on an iOS application using location services. Having a background in experimental physics, I am wondering what exactly horizontalAccuracy in a location found in locationManager:didUpdateToLocation:fromLocation: stands for. The documentation is a bit sparse...
I assume that the accuracy gives a confidence interval based on a Gaussian (or Poisson?) distribution. Thus, with a certain probability, the actual position is within a circle with a radius of horizontalAccuracy, but it could just as well be outside that area. The question is then: how big is that probability? If horizontalAccuracy corresponds to 1σ, I'd have a probability of 68% of being within that circle of radius horizontalAccuracy; but, looking at it the other way around, in nearly one third of the cases the actual position would be outside that area. Thus, in certain cases, I'd rather calculate with 2σ (2*horizontalAccuracy) or even 3σ (3*horizontalAccuracy).
To put it short: is there any indication somewhere, which confidence interval horizontalAccuracy has?
Comment to all who respond "Apple says it is within":
Well, the measurement cannot be exact. It must have a certain level of uncertainty. If you repeat the measurement very often, you will get a distribution of results, probably a Gaussian distribution. This Gaussian has a certain width, which corresponds to the level of uncertainty of the measurements. Measuring the position more often will reduce the uncertainty and thus increase accuracy, but it will never give you a definite interval that the actual position is guaranteed to lie in. You will only ever get a probability. But if the accuracy is 3σ, we have 99.7%, which is close to certain.
To put it short - I doubt the documentation from Apple.
I have been looking for the same information and could not find any answers. The only pointer I have is that on Android, they are using 1σ:
http://developer.android.com/reference/android/location/Location.html#getAccuracy%28%29
To all the non-believers, this link also explains a little bit how the accuracy thing works.
My guess is, the same is true on iOS, but there is no way to be sure - except for asking the guy who wrote the code ;)
Edit:
After some playing around and checking location updates vs. physical location it seems like it is more likely 3σ on iOS. There are two observations that lead me to believe that is true:
On Android locations that come from WiFi triangulation are usually reported as having an accuracy between 20 and 50 meters. On iOS it's between 65 and 165 meters.
When measuring the distance between a reported location and the device's physical location, it has been within the reported accuracy every time so far.
The iOS documentation doesn't specify the probability of containment, but Android reports a one-sigma horizontal accuracy, which they define to represent a 68% probability that the true location is within the circle.
Their explanation is that location errors follow a normal distribution, and therefore ±1σ represents 68% probability. However, 68% is the probability for a one-dimensional normal distribution. In two dimensions, a one-sigma error represents a 39% probability of containment within a circle (the distance error follows a Rayleigh distribution, a.k.a. a chi distribution with two degrees of freedom).
There are two possible explanations.
1. The circle truly represents 68% probability of containment, in which case the Android developers have scaled the one-dimensional sigma by a factor of about 1.5 so that the circle happens to represent 68%. In this case, their choice of 68% is completely arbitrary.
2. The circle actually represents 39% probability of containment. In this case, their description would be correct if you replaced the one-dimensional Gaussian with a two-dimensional one and its associated probability.
I think the second explanation is more likely.
iOS: https://developer.apple.com/library/ios/documentation/CoreLocation/Reference/CLLocation_Class/index.html#//apple_ref/occ/instp/CLLocation/horizontalAccuracy
Android: http://developer.android.com/reference/android/location/Location.html#getAccuracy%28%29
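If you want to convince yourself of the 39% figure, here is a quick Monte Carlo sketch (assuming independent Gaussian errors with equal sigma per axis, as described above):

```python
import math
import random

# Fraction of 2-D Gaussian position errors falling inside a circle of radius
# 1*sigma. Analytically this is the Rayleigh CDF at sigma: 1 - exp(-1/2) ~ 0.393.
random.seed(42)
sigma, n = 1.0, 1_000_000
inside = sum(
    1 for _ in range(n)
    if math.hypot(random.gauss(0, sigma), random.gauss(0, sigma)) <= sigma
)
print(inside / n)          # ~0.393, not 0.68
print(1 - math.exp(-0.5))  # 0.3934... (exact)
```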
It denotes the accuracy level of the location. Example: a horizontalAccuracy of 0 means high accuracy, while a horizontalAccuracy of 500 means low accuracy.
The location services provider updates the location based on the consolidated best value from cellular, WiFi (when on a WiFi connection), and GPS. So the location value will oscillate depending on coverage. You can filter it using this horizontalAccuracy.
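As a sketch of that filtering idea (this is not the actual CoreLocation API; the fix format and the threshold are made up for illustration):

```python
MAX_ACCURACY_M = 50.0  # hypothetical threshold; tune for your application

def usable_fixes(fixes, max_accuracy_m=MAX_ACCURACY_M):
    """Keep only fixes whose reported radius of uncertainty is small enough.

    Each fix is assumed to be a (lat, lon, horizontal_accuracy_m) tuple; a
    negative accuracy marks an invalid fix, as in the Apple docs quoted below.
    """
    return [(lat, lon, acc) for (lat, lon, acc) in fixes
            if 0 <= acc <= max_accuracy_m]
```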
A horizontal accuracy of X indicates that your horizontal position can be X meters off. Remember, the location can be found using GPS, cell tower triangulation, or WiFi location data. CLLocationManager gives you the most accurate location from these three methods, and is saying there is a chance it may be off by at most X meters.
In what way is the documentation sparse?
The radius of uncertainty for the location, measured in meters. (read-only)
The location’s latitude and longitude identify the center of the circle, and this value indicates the radius of that circle. A negative value indicates that the location’s latitude and longitude are invalid.
So your location is within the circle. It isn't outside the circle, or the radius would be bigger. Your assumption about confidence intervals is incorrect.
We're building a GIS interface to display GPS track data, e.g. imagine the raw data set from a guy wandering around a neighborhood on a bike for an hour. A set of data like this, with perhaps a new point recorded every 5 seconds, will be large, and displaying it in a browser or on a handheld device will be challenging. Also, displaying every single point is usually not necessary, since a user can't visually resolve that much data anyway.
So for performance reasons we are looking for algorithms that are good at 'reducing' data like this so that the number of points being displayed is reduced significantly but in such a way that it doesn't risk data mis-interpretation. For example, if our fictional bike rider stops for a drink, we certainly don't want to draw 100 lat/lon points in a cluster around the 7-Eleven.
We are aware of clustering, which is good when looking at a bunch of disconnected points; however, what we need is something that applies to tracks as described above. Thanks.
A more scientific and perhaps more math-heavy solution is to use the Ramer-Douglas-Peucker algorithm to generalize your path. I used it when I studied for my Master of Surveying, so it's a proven thing. :-)
Given your path and the maximum deviation you can tolerate from it, it simplifies the path by reducing the number of points.
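For reference, a compact recursive sketch of the algorithm (assuming points in a projected, planar coordinate system; epsilon is the maximum perpendicular deviation you tolerate):

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker simplification of a list of (x, y) points."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    seg_len = math.hypot(x2 - x1, y2 - y1) or 1e-12  # guard closed loops

    # Find the interior point farthest from the chord between the endpoints.
    max_dist, max_idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x, y = points[i]
        dist = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / seg_len
        if dist > max_dist:
            max_dist, max_idx = dist, i

    if max_dist <= epsilon:
        return [points[0], points[-1]]  # the whole span is within tolerance
    # Otherwise keep that point and simplify the two halves recursively.
    left = rdp(points[:max_idx + 1], epsilon)
    right = rdp(points[max_idx:], epsilon)
    return left[:-1] + right

# The interior points wobble less than epsilon from a straight line, so they go.
print(rdp([(0, 0), (1, 0.1), (2, -0.1), (3, 0)], epsilon=0.5))
# [(0, 0), (3, 0)]
```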
Typically the best way of doing that is:
1. Determine the minimum number of screen pixels you want between displayed GPS points.
2. Determine the distance represented by each pixel at the current zoom level.
3. Multiply answer 1 by answer 2 to get the minimum distance you want between displayed coordinates.
4. Starting from the first coordinate in the journey path, read each next coordinate until you've reached the required minimum distance from the current point. Repeat.
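A sketch of those four steps (the haversine distance is one reasonable choice for the spacing check; function names are mine):

```python
import math

def haversine_m(p1, p2):
    """Great-circle distance in metres between two (lat, lon) fixes."""
    r = 6_371_000.0  # mean Earth radius, metres
    lat1, lat2 = math.radians(p1[0]), math.radians(p2[0])
    dlat = lat2 - lat1
    dlon = math.radians(p2[1] - p1[1])
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def thin_track(points, min_px, metres_per_px):
    """Steps 1-3 give the minimum spacing; step 4 walks the track and keeps
    only fixes at least that far from the last kept fix."""
    min_dist = min_px * metres_per_px
    kept = [points[0]]
    for p in points[1:]:
        if haversine_m(kept[-1], p) >= min_dist:
            kept.append(p)
    if kept[-1] != points[-1]:
        kept.append(points[-1])  # always keep the end of the journey
    return kept
```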
I am trying to write a simple game, but I'm stuck on what I think is simple physics. I have an object at point 0,0,0 that is travelling at, say, 1 unit per second. If I give an instruction that the object must turn 15 degrees per second for 6 seconds (so it ends up 90 degrees right of its starting position), and accelerate at 1 unit per second squared for 4 seconds (so its final speed is 5 units per second), how do I calculate its end point?
I think I know how to answer this for an object that isn't accelerating, because it's just a circle. In the example above, I know that the circumference of the circle is 4 * distance (because it is traversing 1/4 of a circle), and from that I can calculate the radius and angles and use simple trig to solve the answer.
However, because at any given moment in time the object is travelling slightly faster than it was in the previous moment, my end result wouldn't be a circular arc; it would be some sort of spiral. I suppose I could estimate the end point by looping through each step (say 60 steps per second), but this sounds error-prone and inefficient.
Could anybody point me in the right direction?
Your notion of stepping through is exactly what you do.
Almost all games operate under what's known as a "game tick". There are actually a number of different ticks that could be going on.
"Game tick" - each game tick, a set of requests are fired, AI is re-evaluated, and overall the game state has changed.
"physics tick" - each physics tick, each physical object is subject to a state change based on its current physical state.
"graphics tick" - also known as a rendering loop, this is simply drawing the game state to the screen.
The game tick and physics tick often, but do not need to, coincide with each other. You could have a physics tick that moves objects at their current speed along their current movement vector and also applies gravity if necessary (altering their speed), while additional acceleration (perhaps via rocket boosters?) is added in a completely separate loop. With proper multi-threading care, it would fit together nicely. The more decoupled they are, the easier it will be to swap them out for better implementations later anyway.
Simulating via a time-step is how almost all physics is done in real-time gaming. I even used to do thermal modeling for the Department of Defense, and that's how we did our physics modeling there too (we just got to use bigger computers :-) )
Also, this allows you to implement complex rotations in your physics engine. The fewer special cases you have in your physics engine, the fewer things will break.
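To make that concrete for the numbers in the question, a minimal physics-tick sketch (semi-implicit Euler; shrink dt if you need a tighter endpoint):

```python
import math

# Start at the origin at 1 unit/s heading "north" (+y); turn 15 deg/s for
# 6 s and accelerate 1 unit/s^2 for the first 4 s (final speed 5 units/s).
dt = 1.0 / 60.0            # 60 physics ticks per second
x = y = 0.0
speed, heading = 1.0, 0.0  # heading in radians, clockwise from +y

t = 0.0
while t < 6.0:
    if t < 4.0:
        speed += 1.0 * dt                # acceleration phase
    heading += math.radians(15.0) * dt   # constant turn rate
    x += speed * math.sin(heading) * dt  # clockwise from +y: x grows to the right
    y += speed * math.cos(heading) * dt
    t += dt

print(x, y)  # numerical endpoint of the 6-second manoeuvre
```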
What you're asking is really a mathematical rate-of-change question. Every object in motion has a position (x, y, z). If you are able to break the velocity and acceleration down into their components along each axis, your final end point (x1, y1, z1) is the respective outcome of your equations along each axis.
Hope it helps (:
I am writing a physics simulation using Ogre and MOC.
I have a sphere that I shoot from the camera's position and it travels in the direction the camera is facing by using the camera's forward vector.
I would like to know how I can detect the point of collision between my sphere and another mesh.
How would I be able to check for a collision point between the two meshes using MOC or OGRE?
Update: I should have mentioned this earlier. I am unable to use a 3rd-party physics library, as I need to develop this myself (uni project).
The accepted solution here flat out doesn't work. It will only even sort of work if the mesh density is generally high enough that no two points on the mesh are farther apart than the diameter of your collision sphere. Imagine a tiny sphere launched at short range on a random vector at a huge cube mesh. The cube mesh only has 8 verts. What are the odds that the sphere is actually going to hit one of those 8 verts?
This really needs to be done with per-polygon collision. You need to be able to check the intersection of a polygon and a sphere (and additionally a cylinder, if you want to avoid tunneling like reinier mentioned). There are quite a few resources for this online and in book form, but http://www.realtimerendering.com/intersections.html might be a useful starting point.
The comments about optimization are good. Early-out opportunities (perhaps a quick check against a bounding sphere or an axis-aligned bounding volume for the mesh) are essential. Even once you've determined that you're inside a bounding volume, it would probably be a good idea to be able to weed out unlikely polygons (too far away, facing the wrong direction, etc.) from the list of potential candidates.
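As a sketch of the per-polygon test: the sphere hits a triangle iff the closest point on the triangle to the sphere's center is within the radius. This follows the well-known region-based construction from Ericson's Real-Time Collision Detection:

```python
def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle abc (Ericson, Real-Time Collision
    Detection, ch. 5). Points are 3-tuples."""
    def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
    def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]
    def at(o, e1, e2, s, t):  # o + s*e1 + t*e2
        return tuple(o[i] + s * e1[i] + t * e2[i] for i in range(3))

    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                                    # closest to vertex A
    bp = sub(p, b)
    d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                                    # closest to vertex B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return at(a, ab, ac, d1 / (d1 - d3), 0.0)   # on edge AB
    cp = sub(p, c)
    d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                                    # closest to vertex C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return at(a, ab, ac, 0.0, d2 / (d2 - d6))   # on edge AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return at(b, sub(c, b), sub(c, b), w, 0.0)  # on edge BC: b + w*(c-b)
    denom = va + vb + vc
    return at(a, ab, ac, vb / denom, vc / denom)    # inside the face

def sphere_hits_triangle(center, radius, a, b, c):
    q = closest_point_on_triangle(center, a, b, c)
    d2 = sum((center[i] - q[i]) ** 2 for i in range(3))
    return d2 <= radius * radius  # squared lengths: no sqrt needed

# Unit sphere at the origin vs. a triangle lying in the plane z = 0.5:
print(sphere_hits_triangle((0, 0, 0), 1.0,
                           (0, 0, 0.5), (2, 0, 0.5), (0, 2, 0.5)))  # True
```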
I think the best would be to use a specialized physics library.
That said, if I think about this problem, I suspect that it's not that hard:
The sphere has a midpoint and a radius. For every point in the mesh do the following:
check if the point lies inside the sphere;
if it does, check whether it is closer to the center than the previously found point (if any);
if it is, store this point as the collision point.
Of course, this routine will be fairly slow.
A few things to speed it up:
For a first trivial reject, see if the bounding sphere of the mesh collides with your sphere.
Don't calculate the square roots when checking distances; use the squared lengths instead (much faster).
Instead of comparing against every point of the mesh, use a spatial subdivision structure (quadtree/BSP) for the mesh to quickly rule out groups of points.
Ah, and this routine only works if the sphere doesn't travel too fast (relative to the mesh). If it travels very fast and you sample it X times per second, chances are the sphere will have flown right through the mesh without ever colliding. To overcome this, you must use 'swept volumes', which basically turns your sphere into a tube, making the math considerably more complicated.
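A sketch of the routine just described, with the squared-length trick and the bounding-sphere trivial reject (the spatial-subdivision and swept-volume improvements are left out, and the per-polygon caveats from the other answer still apply):

```python
def mesh_vertex_in_sphere(verts, center, radius, bound_center, bound_radius):
    """Naive vertex test from this answer: return the mesh vertex inside the
    sphere that is closest to the sphere's center, or None."""
    # Trivial reject first: do the two bounding spheres even overlap?
    dx, dy, dz = (center[i] - bound_center[i] for i in range(3))
    if dx * dx + dy * dy + dz * dz > (radius + bound_radius) ** 2:
        return None

    best, best_d2 = None, radius * radius  # squared lengths, no sqrt
    for v in verts:
        dx, dy, dz = v[0] - center[0], v[1] - center[1], v[2] - center[2]
        d2 = dx * dx + dy * dy + dz * dz
        if d2 <= best_d2:
            best, best_d2 = v, d2          # inside the sphere and closest so far
    return best
```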