I'm not sure what is going wrong here. It works in some projects, but not in others, and I can't figure out what the difference between them is. To test the problem, I created a point set with a single point at a position I am sure is inside the cube. When I call IndexAtPosition, I sometimes get obviously wrong answers. For example, I sometimes get inline or crossline indexes that are negative or way beyond the maximum index, and the z-dimension index also comes back with a very unrealistic value.
I am fairly certain that my data is all consistent, i.e. same domain and CRS. There must be some settings I'm not checking.
My guess is that your point is something like (x, y, 1000 m) in depth, while the cube is probably in the time domain. So when you look that position up in the cube, you are effectively asking for (x, y, 1000 seconds). That is VERY far out of range and will give you a crazy number for your k index (super high). Depending on the cube's angle from north, your i and j indexes can also be wildly off when the position is that far from reality. You need to have your point in the time domain, or some way to convert it from depth to time.
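To make the scale mismatch concrete, here is a rough sketch (Python, with made-up survey numbers; the origin, sample interval, and index calculation are illustrative only, not the actual Ocean API) of how a depth value read against a time-domain z axis blows up the k index:

```python
# Hypothetical time-domain cube: z axis from 0 to 4 seconds, 4 ms sampling.
z_origin_s = 0.0
sample_interval_s = 0.004
num_samples = 1001            # indexes 0..1000 cover 0..4 s

def k_index(z_s):
    # Nearest-sample index along z (illustrative, not the Ocean API).
    return round((z_s - z_origin_s) / sample_interval_s)

# A depth of 1000 m handed to the time-domain cube is read as "1000 seconds":
print(k_index(1000.0))        # 250000 -- nowhere near the valid 0..1000 range
```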
Unfortunately, I made a mistake: one of my data points was in fact outside the cube area once the cube's rotation was taken into account.
I'm working on a problem that will eventually run on an embedded microcontroller (ESP8266). I need to perform some fairly simple operations on linear equations. I don't need much, but I do need to be able to work with points and linear equations to:
Define an equation for a line, either from two known points or from one point and a gradient
Calculate a new (x, y) point on a line that is a specific distance from another point on that line
Drop a perpendicular onto an equation line from a point
Perform variations of cosine-rule calculations on points and triangle sides defined as equations
I roughed up some code for this a while ago based on high-school "y = mx + c" concepts, but it's flawed (it fails with infinities when lines are vertical), and it's currently in Scala. Since I suspect I'm reinventing a wheel, and that's not my primary goal, I'd like to use someone else's work for this!
I've come across CGAL, and it seems very likely it's capable of all this and more, but I have two questions about it (given that it seems to take ages to understand enough of a huge library like this to be able to answer even simple questions!):
It seems to assert some kind of mathematical perfection in its calculations, but that's not important to me, and my system will be severely memory constrained. Does it use or offer memory-efficient approximations?
Is it possible (and hopefully easy) to separate out just a limited subset of features, or am I going to find the entire library (or a very large subset of it) heading into my memory-limited machine?
And, I suppose the inevitable follow up: are there more suitable libraries I'm unaware of?
TIA!
The problems that you are mentioning sound fairly simple indeed, so I'm wondering if you really need any library at all. Maybe if you post your original code we could help you fix it; your problem sounds like you need to redo a calculation so that it avoids a division by zero.
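For what it's worth, here is a minimal sketch of the usual fix (in Python purely for illustration; your code is in Scala and your target is an ESP8266, so treat this as a sketch of the representation, not a drop-in): store a line in the general form a*x + b*y = c instead of y = mx + c, so vertical lines need no special case.

```python
import math

def line_from_points(p, q):
    """Line through p and q as coefficients (a, b, c) with a*x + b*y = c."""
    (x1, y1), (x2, y2) = p, q
    a = y2 - y1
    b = x1 - x2
    return a, b, a * x1 + b * y1

def line_from_point_gradient(p, m):
    """Line through p with gradient m: y - y0 = m*(x - x0)."""
    x0, y0 = p
    return -m, 1.0, y0 - m * x0

def foot_of_perpendicular(line, p):
    """Drop a perpendicular from p onto the line; vertical lines are fine."""
    a, b, c = line
    x0, y0 = p
    t = (c - a * x0 - b * y0) / (a * a + b * b)
    return x0 + a * t, y0 + b * t

def point_at_distance(line, p, dist):
    """Point on the line at a given distance from p (p assumed on the line)."""
    a, b, _ = line
    length = math.hypot(a, b)
    ux, uy = -b / length, a / length      # unit direction along the line
    return p[0] + ux * dist, p[1] + uy * dist

# Vertical line through (2, 0) and (2, 5) -- no infinities involved:
L = line_from_points((2, 0), (2, 5))
print(foot_of_perpendicular(L, (7, 3)))   # (2.0, 3.0)
print(point_at_distance(L, (2, 0), 5))    # (2.0, 5.0)
```

The same representation also keeps the cosine-rule style calculations straightforward, since directions and lengths fall out of the (a, b) coefficients and point differences.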
As for your point (2) about separating out a limited number of features from CGAL: given the size and the coding style of that project, in my experience that will be significantly more complicated (if possible at all) than fixing your own code.
In case you want to try a simpler library than CGAL, you could have a look at Boost.Geometry.
Regards,
I have a 3D set of points. These points will undergo a series of tiny perturbations (all points will be perturbed at once). Example: if I have 100 points in a box, each point may be moved up to, but no more than, 0.2% of the box width in each iteration of my program.
After each perturbation operation, I want to know the new distance to each point's nearest neighbor.
This needs to use a very fast data structure; I'm optimizing this for speed. It's a somewhat tricky problem because I'm modifying all points at once. Approximate NN algorithms are not suitable for this problem.
I feel like the answer is somewhere between kd-trees and Voronoi tessellations, but I am not an expert on data structures, so I am baffled about what to do. I'm sure this is a very hard problem that would require a lot of research to reach a truly optimal solution, but even something fairly close to optimal will work for me.
Thanks
You can try a quadkey or a space-filling ("monster") curve. It reduces the dimensionality while still filling the space, so nearby points tend to get nearby keys. The Microsoft Bing Maps quadkey documentation is a good place to start learning about it.
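As a rough illustration of that idea (not a complete nearest-neighbour solution), here is a sketch of a 3D Morton (Z-order) key: quantize each coordinate to a grid and interleave the bits, so sorting by key groups spatially close points together. Points adjacent in the sorted order are good nearest-neighbour candidates, but since Z-order locality is not perfect, an exact answer still needs a verification step over a wider window.

```python
def morton_key_3d(x, y, z, box_min, box_size, bits=10):
    """Interleave the bits of quantized x, y, z into a single Z-order key.

    box_min/box_size describe the bounding box; bits is the grid resolution
    per axis (10 bits -> 1024 cells per axis, giving a 30-bit key).
    """
    max_cell = (1 << bits) - 1
    cells = []
    for value, lo in zip((x, y, z), box_min):
        c = int((value - lo) / box_size * max_cell)
        cells.append(min(max(c, 0), max_cell))
    key = 0
    for bit in range(bits):
        for axis, c in enumerate(cells):
            key |= ((c >> bit) & 1) << (3 * bit + axis)
    return key

# Sort points by Morton key; neighbours in this order are NN *candidates*.
points = [(0.1, 0.2, 0.3), (0.11, 0.21, 0.29), (0.9, 0.8, 0.1)]
ordered = sorted(points, key=lambda p: morton_key_3d(*p, box_min=(0, 0, 0), box_size=1.0))
print(ordered)
```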
We're building a GIS interface to display GPS track data, e.g. imagine the raw data set from a guy wandering around a neighborhood on a bike for an hour. A data set like this, with perhaps a new point recorded every 5 seconds, will be large, and displaying it in a browser or on a handheld device will be challenging. Also, displaying every single point is usually not necessary, since a user can't visually resolve that much data anyway.
So for performance reasons we are looking for algorithms that are good at 'reducing' data like this so that the number of points being displayed is reduced significantly but in such a way that it doesn't risk data mis-interpretation. For example, if our fictional bike rider stops for a drink, we certainly don't want to draw 100 lat/lon points in a cluster around the 7-Eleven.
We are aware of clustering, which is good when looking at a bunch of disconnected points; however, what we need is something that applies to tracks as described above. Thanks.
A more scientific and perhaps more math-heavy solution is to use the Ramer-Douglas-Peucker algorithm to generalize your path. I used it when I studied for my Master of Surveying, so it's a proven thing. :-)
Given your path and the maximum deviation you can tolerate from it, it simplifies the path by reducing the number of points.
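A minimal recursive sketch of Ramer-Douglas-Peucker (plain Python; lat/lon is treated as planar coordinates for simplicity, so for real tracks you may want a projected CRS or a metric-aware distance):

```python
import math

def _point_to_segment_distance(p, a, b):
    """Distance from p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping the parameter to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def rdp(points, epsilon):
    """Simplify a polyline, keeping points that deviate more than epsilon."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    # Find the point farthest from the chord a-b.
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_to_segment_distance(points[i], a, b)
        if d > dmax:
            index, dmax = i, d
    if dmax <= epsilon:
        return [a, b]                      # everything in between is close enough
    # Otherwise split at the farthest point and recurse on both halves.
    left = rdp(points[:index + 1], epsilon)
    right = rdp(points[index:], epsilon)
    return left[:-1] + right

track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(rdp(track, epsilon=0.5))
```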
Typically the best way of doing that is:
Determine the minimum number of screen pixels you want between GPS points displayed.
Determine the distance represented by each pixel in the current zoom level.
Multiply answer 1 by answer 2 to get the minimum distance between coordinates you want to display.
Starting from the first coordinate in the journey path, read each subsequent coordinate until you've reached the required minimum distance from the current point, keep that coordinate, and repeat from there (see the sketch below).
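A small sketch of those steps (Python; the pixel gap, metres-per-pixel value, and haversine distance are illustrative assumptions, not tied to any particular mapping API):

```python
import math

def haversine_m(p1, p2):
    """Approximate ground distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def thin_track(points, metres_per_pixel, min_pixel_gap=5):
    """Steps 1-4: keep a point only once it is far enough from the last kept one."""
    min_distance_m = min_pixel_gap * metres_per_pixel   # steps 1-3
    if not points:
        return []
    kept = [points[0]]
    for p in points[1:]:
        if haversine_m(kept[-1], p) >= min_distance_m:   # step 4
            kept.append(p)
    return kept

# Example: at ~10 m per pixel, only keep points at least 50 m apart.
track = [(51.5000, -0.1200), (51.5001, -0.1200), (51.5010, -0.1210), (51.5030, -0.1250)]
print(thin_track(track, metres_per_pixel=10))
```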
I'm using Google Maps and I'm trying to work out the maximum number of points visible in the viewport at a given zoom level.
My naive approach is to get the viewing area (in coordinates) and use that as a "fitting rectangle" and see how many points fit in the area.
I had a look around but I couldn't find any algorithm for "best fit" of random points in a rectangular area.
It seems a quite common problem so I probably don't know the right keywords to use.
Any help in getting me to a solution would be appreciated.
EDIT: thanks for the answers, but I'm afraid I didn't make myself clear. Fitting a rectangle over ALL the points is pretty much a trivial affair (sort them all, get the min/max and voilà).
What I want to know is the maximum number of points that can fit under a FIXED-SIZE rectangle: I've got all my points and a "moving window" of fixed size, and I want to know how many points I can fit inside it at most.
Sorry for the bad initial explanation.
Cheers.
To find a best-fit rectangle over a set of points, and with the assumption that all points in the set need to be within the rectangle, all you need to do is find the min/max in both dimensions.
One way to do this would be to sort the points by their X dimension and take the first and last as the min/max in that dimension, and then repeat the process in the Y dimension to get that min/max. From that information, you have all you need to make a rectangle.
From a computational complexity standpoint, the complexity is 2x the complexity of the sort algorithm used (since you have to sort 2 times) + the complexity of getting the first and last elements of each sorted set, which, if you use an array, for example, is an O(1) operation.
If you use merge sort, and sort into arrays, you have an overall complexity of O(n log n). Broken down into number of operations, you have 2(n log n) + 4.
This won't give you the tightest possible rectangle, because it won't ensure that one side of the rectangle is collinear with at least 2 of the points (for that you will need the rotating calipers algorithm that @Bart Kiers suggests), but it is a much faster approach, since rotating calipers does essentially the same as described here but then rotates the rectangle until one of its edges lines up with 2 of the min/max points.
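A tiny sketch of the axis-aligned version described above (Python); note that a full sort isn't strictly required, since a single min/max pass per dimension already gives the corners:

```python
def bounding_rectangle(points):
    """Axis-aligned rectangle covering all (x, y) points: (xmin, ymin, xmax, ymax)."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return min(xs), min(ys), max(xs), max(ys)

pts = [(2, 3), (5, 1), (4, 7), (0, 2)]
print(bounding_rectangle(pts))   # (0, 1, 5, 7)
```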
Sorry, I know the question isn't as specific as it could be. I am currently working on a replenishment forecasting system for a clothing company (don't ask why it's in VBA). The module I am currently working on handles distribution forecasts down to size level. The idea is that the planners can forecast the number of units to sell, then specify a ratio between the sizes.
In order to make the interface a bit nicer I was going to give them 4 options: assess trend, manual entry, Poisson and Normal. The last two are where I am having an issue. Given a mean and SD I'd like to drop in a ratio (preferably as percentages) between the different sizes. The number of sizes can vary from 1 to ~30, so it's going to need to be a calculation.
If anyone could point me towards a method I'd be eternally grateful; likewise if you have suggestions for a better approach.
Cheers
For the sake of anyone searching for this: whilst it is only a temporary solution, I used probability mass functions to get the ratios. This allowed the user to modify the mean and SD and thus skew the curve as they wished, and I could then use the ratios for my calculations. Poisson also worked with this method, but turned out to be a slightly stupid choice in practice.
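For reference, a small sketch of the Normal-curve version (Python rather than the VBA I used, and indexing the sizes 1..N is just an assumption here): evaluate the density at each size position and normalize so the ratios sum to 100%.

```python
import math

def normal_pdf(x, mean, sd):
    return math.exp(-((x - mean) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

def size_ratios(num_sizes, mean, sd):
    """Percentage split across sizes 1..num_sizes based on a normal curve."""
    weights = [normal_pdf(i, mean, sd) for i in range(1, num_sizes + 1)]
    total = sum(weights)
    return [100 * w / total for w in weights]

# 8 sizes, curve centred on size 4.5 with SD 1.5 -> percentages summing to 100.
print([round(r, 1) for r in size_ratios(8, mean=4.5, sd=1.5)])
```

Swapping normal_pdf for a Poisson probability mass function gives the Poisson variant mentioned above.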