Do the celestial body locations reported by PyEphem represent apparent or actual positions?

I've been using PyEphem for a couple of things, and I was wondering whether the locations of celestial bodies are reported as actual or apparent positions. That is to say, do the locations factor in the time delay for information to reach us?
Thanks,

It corrects for light travel time — here's the code where it does so, in case you want to check its technique:
https://github.com/brandon-rhodes/pyephem/blob/6849cc42dbb52284f9365655ba84cac5497de1f1/libastro-3.7.7/circum.c#L336
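You can also see the effect yourself by comparing the astrometric and apparent coordinates PyEphem reports for the same body. A small check, assuming PyEphem's documented attribute names (a_ra/a_dec, g_ra/g_dec, earth_distance) and the standard figure of roughly 499 seconds of light travel per AU:

    import ephem

    # Compare astrometric vs. apparent geocentric coordinates for Mars.
    mars = ephem.Mars()
    mars.compute('2024/1/1')

    # Both positions are antedated for light travel time; g_ra/g_dec
    # additionally include aberration and nutation (the "apparent" place).
    print('astrometric:', mars.a_ra, mars.a_dec)
    print('apparent:   ', mars.g_ra, mars.g_dec)

    # Rough light delay from the reported Earth distance (in AU):
    # light covers one AU in about 499 seconds.
    print('light delay: %.1f s' % (mars.earth_distance * 499.0))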

Related

Does ZedGraph offer any kind of Level-of-Detail Culling behavior?

I've searched and can't find an answer to this question. I could write the code myself to do it, but I don't want to reinvent the wheel. :)
Since ZedGraph uses an IPointList and its indexer for internal data access, you can assign any kind of data structure to it and dynamically change the data that ZedGraph receives when it calls the indexer.
It's a smart architecture, and naturally, it would be feasible to implement a Level-of-Detail system using a custom IPointList where the number of points is culled based on the xScale and yScale of the GraphPane.
This way you can have millions of points loaded, but when the graph is zoomed out far enough to show all of them, they can be culled so that ZedGraph only draws a few thousand. As the zoom magnification increases, fewer points are culled in the region of interest.
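ZedGraph itself is C#, so the following is only a sketch of the idea in Python, with illustrative names rather than ZedGraph API; logic like this would live behind a custom IPointList indexer. Each pixel column keeps its y-extremes so that peaks survive the reduction (unlike the blind skipping criticized further down):

    def reduce_for_display(points, x_min, x_max, pixel_columns=1000):
        """points: (x, y) tuples sorted by x; returns a reduced list."""
        visible = [p for p in points if x_min <= p[0] <= x_max]
        if len(visible) <= 2 * pixel_columns:
            return visible                      # few enough: draw them all
        out = []
        width = (x_max - x_min) / pixel_columns
        i = 0
        for col in range(pixel_columns):
            right = x_min + (col + 1) * width
            bucket = []
            while i < len(visible) and visible[i][0] <= right:
                bucket.append(visible[i])
                i += 1
            if bucket:
                # keep the extremes of each pixel column so peaks survive
                out.append(min(bucket, key=lambda p: p[1]))
                out.append(max(bucket, key=lambda p: p[1]))
        return out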
I wanted to know if ZedGraph already offers anything like this out of the box. I haven't seen any indication of support for it.
Does anyone know?
I posted about this on Sourceforge and got no response there either.
Then I posted on a fork on Github and got a response. It's here:
https://github.com/ZedGraph/ZedGraph/issues/13
The answer:
There is a naive algorithm that filters points by simply skipping them blindly to reach a target display number.
Of course this naive approach can give a completely wrong impression of what the data looks like when peaks and valleys get dropped in a line graph, for instance. IMHO, an algorithm like that is completely unusable.
So basically, there is no acceptable built-in culling in ZedGraph at the present time.
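For reference, the blind-skipping decimation described there boils down to something like this (a Python sketch with illustrative names, not ZedGraph's actual code):

    # Naive decimation: keep every step-th point to hit a target count.
    # Peaks and valleys that fall between kept samples simply vanish.
    def naive_decimate(points, target):
        step = max(1, len(points) // target)
        return points[::step]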

Indoor positioning

I am trying to get indoor GPS by orienting my floor plan against the actual building in Google Maps. I know perfect accuracy is not possible. Any idea how to do this? Do the maps need to be converted to KML format?
Forget that!
Only with luck can you get GPS signals indoors, probably only near a window, and even then the error is likely to be larger than the size of your building.
You can only try to get the coordinates outside, at the corners of the building.
For precise measurements you would need some averaging of the fixes, which only a few GPS devices offer. For less precision, take a single coordinate, or measure it at different hours or on different days.
Otherwise, you should think about geolocation using Wi-Fi/RF and any other wireless/radio sources that you can precisely locate, since you probably installed them yourself, or at least someone from your company/service is responsible for them and could give you the complete list with coordinates. Then, once you have those radio sources located, you can geolocate devices from radio propagation against those known positions.
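As a sketch of what that last step can look like, here is one common, simple approach in Python: an RSSI-weighted centroid over emitters with known coordinates. The names and the weighting are assumptions for illustration, not a full propagation model:

    # Hypothetical sketch: estimate a device position as the centroid of
    # known emitter positions, weighted by received signal strength.
    # 'scans' maps emitter id -> RSSI in dBm; 'emitters' maps id -> (x, y).
    def weighted_centroid(scans, emitters):
        total_w = x = y = 0.0
        for em_id, rssi in scans.items():
            if em_id not in emitters:
                continue
            w = 10 ** (rssi / 10.0)   # dBm -> linear power, used as weight
            ex, ey = emitters[em_id]
            x += w * ex
            y += w * ey
            total_w += w
        if total_w == 0:
            return None               # no known emitters heard
        return (x / total_w, y / total_w)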
I know that's not the answer you were looking for, but think about it as an alternate one if you really need to locate people inside your building.
PS: I did this at work and it works pretty well (except in some areas where the radio emitters are broken).

Algorithm for reducing GPS track data to discard redundant data?

We're building a GIS interface to display GPS track data, e.g. imagine the raw data set from a guy wandering around a neighborhood on a bike for an hour. A data set like this, with perhaps a new point recorded every 5 seconds, will be large, and displaying it in a browser or on a handheld device will be challenging. Also, displaying every single point is usually not necessary, since a user can't visually resolve that much data anyway.
So for performance reasons we are looking for algorithms that are good at 'reducing' data like this so that the number of points being displayed is reduced significantly but in such a way that it doesn't risk data mis-interpretation. For example, if our fictional bike rider stops for a drink, we certainly don't want to draw 100 lat/lon points in a cluster around the 7-Eleven.
We are aware of clustering, which is good when looking at a bunch of disconnected points, but what we need is something that applies to tracks as described above. Thanks.
A more scientific and perhaps more math-heavy solution is to use the Ramer-Douglas-Peucker algorithm to generalize your path. I used it when I studied for my Master of Surveying, so it's a proven thing. :-)
Given your path and the maximum deviation you can tolerate from it, it simplifies the path by reducing the number of points.
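A minimal recursive implementation looks roughly like this (a Python sketch; for GPS data you would project lat/lon to a planar coordinate system first, and the tolerance epsilon is then in those planar units):

    import math

    def rdp(points, epsilon):
        """Ramer-Douglas-Peucker on a list of (x, y) tuples."""
        if len(points) < 3:
            return list(points)
        (x1, y1), (x2, y2) = points[0], points[-1]

        def dist(p):
            # perpendicular distance from p to the chord (x1,y1)-(x2,y2)
            px, py = p
            dx, dy = x2 - x1, y2 - y1
            norm = math.hypot(dx, dy)
            if norm == 0:
                return math.hypot(px - x1, py - y1)
            return abs(dy * px - dx * py + x2 * y1 - y2 * x1) / norm

        index, dmax = max(
            ((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
            key=lambda t: t[1])
        if dmax <= epsilon:
            return [points[0], points[-1]]   # whole span within tolerance
        left = rdp(points[:index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right             # drop the duplicated split point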
Typically the best way of doing that is (a code sketch follows the list):
1. Determine the minimum number of screen pixels you want between displayed GPS points.
2. Determine the distance represented by each pixel at the current zoom level.
3. Multiply answer 1 by answer 2 to get the minimum distance between coordinates you want to display.
4. Starting from the first coordinate in the journey path, read each subsequent coordinate until you've reached the required minimum distance from the current point, keep it, and repeat.
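A rough Python sketch of those four steps, assuming a haversine helper for the distance between fixes:

    import math

    def haversine_m(p1, p2):
        """Great-circle distance in metres between (lat, lon) pairs."""
        lat1, lon1, lat2, lon2 = map(math.radians, p1 + p2)
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * math.asin(math.sqrt(a))

    def thin_track(points, metres_per_pixel, min_pixels=5):
        min_dist = min_pixels * metres_per_pixel    # steps 1-3
        kept = [points[0]]                          # step 4 starts here
        for p in points[1:]:
            if haversine_m(kept[-1], p) >= min_dist:
                kept.append(p)
        if kept[-1] != points[-1]:
            kept.append(points[-1])                 # always keep the endpoint
        return kept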

Location manager, not accurate even after setting kCLLocationAccuracyBest

Hi there
I am using the location manager and MapKit. I am able to get the current location, but it's not accurate enough. This is my problem:
My current location on the map is, for example, 3.0856333888778926, 101.67204022407532, but the location manager only returns +3.08370327, +101.67506444, which diverges after a few decimal places.
This results in the wrong location (about 1 km away) when I try to show directions.
I have already set the desired accuracy to kCLLocationAccuracyBest.
Any suggestions?
Where did you try it? Indoors, the accuracy of GPS is inherently limited (usually not by 1 km, though; within big cities, reflections from buildings are possible). And another thing: was the measurement done inside the simulator? I'm not sure how the location is determined within it, but in my tests I'm also usually quite far off my actual position.
It may be related to how you have set up your location manager.
Could you please post it here for us to check? Maybe that could help.
Are you on Wi-Fi? This happens to me when I am on Wi-Fi. When I switch to EDGE/3G, everything returns to normal. Try the standard Maps application to see whether it also shows you in the wrong place.
Try kCLLocationAccuracyBestForNavigation. From the documentation: "Use the highest possible accuracy and combine it with additional sensor data. This level of accuracy is intended for use in navigation applications that require precise position information at all times and are intended to be used only while the device is plugged in."

GPS signal cleaning & road network matching

I'm using GPS units and mobile computers to track individual pedestrians' travels. I'd like to "clean" the incoming GPS signal in real time to improve its accuracy. Also, after the fact, and not necessarily in real time, I would like to "lock" individuals' GPS fixes to positions along a road network. Does anyone have techniques, resources, algorithms, or existing software to suggest on either front?
A few things I am already considering in terms of signal cleaning:
- drop fixes for which num. of satellites = 0
- drop fixes for which speed is unnaturally high (say, 600 mph)
And in terms of "locking" to the street network (which I hear is called "map matching"):
- lock to the nearest network edge based on root mean squared error
- when fixes are far away from road network, highlight those points and allow user to use a GUI (OpenLayers in a Web browser, say) to drag, snap, and drop on to the road network
Thanks for your ideas!
I assume you want to "clean" your data to remove erroneous spikes caused by dodgy readings. This is a basic DSP process. There are several approaches you could take; it depends how clever you want it to be.
At a basic level, yes, you can just look for really large figures, but what is a really large figure? 600 mph is fast, but not if you're on Concorde. While you are looking for a value which is "out of the ordinary", you are effectively hard-coding "ordinary". A better approach is to examine past data to determine what "ordinary" is, and then look for deviations. You might want to calculate the variance of the data over a small local window and then see whether the z-score of your current reading exceeds some threshold; if so, exclude it.
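A sketch of that moving-window z-score test in Python (the window size and threshold are arbitrary choices, not recommendations):

    import statistics

    def is_outlier(history, value, threshold=3.0):
        """history: recent accepted readings (a short moving window)."""
        if len(history) < 5:
            return False                 # not enough context yet
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev == 0:
            return value != mean
        return abs(value - mean) / stdev > threshold

In practice you would keep the window as something like a collections.deque(maxlen=20), appending each reading that passes the test.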
One note: you should use 3 as the minimum number of satellites, not 0. A GPS receiver needs at least three satellites to calculate a horizontal position. Every GPS I have used includes a status flag in the data stream; fewer than 3 satellites is reported as "bad" data in some way.
You should also consider "stationary" data. How will you handle the pedestrian standing still for some period of time, perhaps waiting at a crosswalk or interacting with a street vendor?
Depending on what you plan to do with the data, you may need to suppress those extra data points or average them into a single point or location.
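One simple way to do that averaging, sketched in Python (the 15 m radius is an arbitrary choice, and the planar distance approximation only holds for nearby points):

    import math

    def approx_dist_m(p1, p2):
        """Planar approximation of metres between nearby (lat, lon) fixes."""
        mlat = math.radians((p1[0] + p2[0]) / 2)
        dy = (p2[0] - p1[0]) * 111320.0             # metres per degree latitude
        dx = (p2[1] - p1[1]) * 111320.0 * math.cos(mlat)
        return math.hypot(dx, dy)

    def collapse_stationary(points, radius_m=15.0):
        """Average each run of fixes near an anchor into a single point."""
        out, cluster = [], [points[0]]
        for p in points[1:]:
            if approx_dist_m(cluster[0], p) <= radius_m:
                cluster.append(p)                   # still near the anchor fix
            else:
                out.append((sum(q[0] for q in cluster) / len(cluster),
                            sum(q[1] for q in cluster) / len(cluster)))
                cluster = [p]
        out.append((sum(q[0] for q in cluster) / len(cluster),
                    sum(q[1] for q in cluster) / len(cluster)))
        return out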
You mention this is for pedestrian tracking, but you also mention a road network. Pedestrians can travel a lot of places where a car cannot, and, indeed, which probably are not going to be on any map you find of a "road network". Most road maps don't have things like walking paths in parks, hiking trails, and so forth. Don't assume that "off the road network" means the GPS isn't getting an accurate fix.
In addition to Andrew's comments, you may also want to consider interference factors such as multipath and how they show up in your incoming GPS data stream, e.g. as the HDOP values in the GSA sentence of NMEA 0183. In my own GPS controller software, I allow user-specified rejection criteria against a range of QA-related parameters.
I also tend to work on a moving window principle in this regard, where you can consider rejecting data that represents a spike based on surrounding data in the same window.
Read the position-fix indicator to see whether the signal is valid (it's a field in the $GPGGA sentence if you parse raw NMEA strings). If it is 0, ignore the message.
Besides that, you could look at the combination of HDOP and the number of satellites if you really need the signal to be very accurate, but in normal situations that shouldn't be necessary.
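A sketch of that check on a raw $GPGGA sentence in Python (field positions follow NMEA 0183; checksum verification is omitted, and the thresholds are illustrative):

    def gga_is_usable(sentence, min_sats=3, max_hdop=5.0):
        fields = sentence.split(',')
        if not fields[0].endswith('GGA'):
            return False
        if fields[6] in ('', '0'):       # position-fix indicator: 0 = invalid
            return False
        sats = int(fields[7] or 0)
        hdop = float(fields[8] or 99.0)  # horizontal dilution of precision
        return sats >= min_sats and hdop <= max_hdop

    # e.g. gga_is_usable('$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47')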
Of course it doesn't hurt to do some sanity checks on GPS signals (a code sketch follows the list):
- latitude between -90 and 90;
- longitude between -180 and 180 (or E/W, N/S, 0..90 and 0..180 if you're reading raw NMEA strings);
- speed between 0 and 255 (for normal cars);
- distance to the previous measurement (based on lat/lon) roughly matches the indicated speed;
- time difference with the system time not larger than x (unless the system clock cannot be trusted or relies on GPS synchronisation :-) ).
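A Python sketch of those checks; the field names and thresholds here are assumptions for illustration, not a standard:

    import math

    MAX_SPEED = 70.0        # m/s, about 250 km/h; tighten for pedestrians
    MAX_CLOCK_SKEW = 60.0   # seconds

    def passes_sanity_checks(fix, prev, system_time):
        """fix/prev: dicts with lat, lon (deg), speed (m/s), time (s)."""
        if not (-90.0 <= fix['lat'] <= 90.0):
            return False
        if not (-180.0 <= fix['lon'] <= 180.0):
            return False
        if not (0.0 <= fix['speed'] <= MAX_SPEED):
            return False
        if abs(fix['time'] - system_time) > MAX_CLOCK_SKEW:
            return False
        if prev is not None:
            dt = fix['time'] - prev['time']
            if dt > 0:
                # distance travelled should roughly match the reported speed
                mlat = math.radians((fix['lat'] + prev['lat']) / 2)
                dy = (fix['lat'] - prev['lat']) * 111320.0
                dx = (fix['lon'] - prev['lon']) * 111320.0 * math.cos(mlat)
                implied = math.hypot(dx, dy) / dt
                if implied > 2 * max(fix['speed'], 1.0):
                    return False
        return True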
To do map matching, you basically iterate through your road segments and check which segment is the most likely one for your current position, direction, and speed, possibly taking previous GPS measurements and matches into account.
If you're not doing a real-time application, or if a delay in feedback is acceptable, you can even look into the "future" to see which segment is the most likely.
Doing all that properly is an art in itself, and this space is too short to go into it deeply.
It's often difficult to decide with 100% confidence which road segment somebody is on. For example, if there are two parallel roads that are equally close to the current position, it becomes a matter of creative heuristics.
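As a bare starting point, here is a minimal nearest-segment matcher in Python. Planar (projected) coordinates are assumed, and it only uses distance; real map matching would also weigh heading, speed, and previous matches, as described above:

    import math

    def project_onto_segment(p, a, b):
        """Closest point to p on segment a-b, all (x, y) tuples."""
        ax, ay = a
        bx, by = b
        px, py = p
        dx, dy = bx - ax, by - ay
        seg_len2 = dx * dx + dy * dy
        if seg_len2 == 0:
            return a
        t = ((px - ax) * dx + (py - ay) * dy) / seg_len2
        t = max(0.0, min(1.0, t))        # clamp to the segment endpoints
        return (ax + t * dx, ay + t * dy)

    def match_to_network(p, segments):
        """segments: iterable of ((x1, y1), (x2, y2)) road edges."""
        best_seg, best_pt, best_d = None, None, float('inf')
        for a, b in segments:
            q = project_onto_segment(p, a, b)
            d = math.hypot(p[0] - q[0], p[1] - q[1])
            if d < best_d:
                best_seg, best_pt, best_d = (a, b), q, d
        return best_seg, best_pt, best_d   # segment, snapped point, distance

The returned distance also gives you a natural threshold for the "far away from the road network" case mentioned in the question, where you would fall back to manual snapping in the GUI.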