Check if a point is within a polygon - GPS

Given a GeoJSON polygon, such as the one below:
[
[15.520376, 38.231155],
[15.160243, 37.444046],
[15.309898, 37.134219],
[15.099988, 36.619987],
[14.335229, 36.996631],
[13.826733, 37.104531],
[12.431004, 37.61295],
[12.570944, 38.126381],
[13.741156, 38.034966],
[14.761249, 38.143874],
[15.520376, 38.231155]
]
How can I check if a GPS location is within the polygon region?
For example, if the user is at Lat 37.387617, Long 14.458008, how would I go about searching the array?
I don't necessarily need someone to write the code for me; I just don't understand the logic of how I can check. If you have an example (in any language), please point me to it.

This task is called the point-in-polygon test.
Gerve has explained the algorithm that is widely used for this task, but that alone will not help you implement it: there are pitfalls, such as edges that run parallel to the test ray.
One such algorithm is called the Crossings Multiply test, which is an optimized variant.
Source code: CrossingsMultiplyTest (last function in the file)
An overview is given in "Point in Polygon Strategies".
Use longitude for the x coordinate, and latitude for the y coordinate.

I've found an article about the ray-casting algorithm. It's explained pretty well here; the gist of it is (in pseudocode):
count ← 0
foreach side in polygon:
    if ray_intersects_segment(P, side) then
        count ← count + 1
if is_odd(count) then
    return inside
else
    return outside
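For reference, here is a minimal Swift sketch of that pseudocode (the even-odd ray-casting test), assuming the polygon is given as GeoJSON-style [longitude, latitude] pairs; the function name is mine:
import CoreLocation

// Even-odd ray-casting test: cast a horizontal ray from the point and count
// how many polygon edges it crosses. Odd count = inside, even = outside.
func isPoint(_ point: CLLocationCoordinate2D, inPolygon polygon: [[Double]]) -> Bool {
    let x = point.longitude, y = point.latitude
    var inside = false
    var j = polygon.count - 1
    for i in 0..<polygon.count {
        let (xi, yi) = (polygon[i][0], polygon[i][1])
        let (xj, yj) = (polygon[j][0], polygon[j][1])
        // Edge (j, i) straddles the ray's y, and the crossing lies to the right of the point.
        if (yi > y) != (yj > y),
           x < (xj - xi) * (y - yi) / (yj - yi) + xi {
            inside.toggle()
        }
        j = i
    }
    return inside
}
With the polygon and user location from the question:
let polygon = [[15.520376, 38.231155], [15.160243, 37.444046],
               [15.309898, 37.134219], [15.099988, 36.619987],
               [14.335229, 36.996631], [13.826733, 37.104531],
               [12.431004, 37.612950], [12.570944, 38.126381],
               [13.741156, 38.034966], [14.761249, 38.143874],
               [15.520376, 38.231155]]
let user = CLLocationCoordinate2D(latitude: 37.387617, longitude: 14.458008)
isPoint(user, inPolygon: polygon) // true: the point lies inside this polygon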

Related

Building an MKPolygon using the outer boundary of a set of coordinates - How do I split coordinates that fall on either side of a line?

I'm trying to build an MKPolygon using the outer boundary of a set of coordinates.
From what I can tell, there is no delivered functionality to achieve this in MapKit (the MKPolygon methods use all points to build the polygon, including interior points).
After some research I've found that a convex-hull solves this problem.
After looking into various algorithms, the one I can best wrap my head around to implement is QuickHull.
QuickHull takes the two outermost points (e.g. min and max latitude) and draws a line between them. From there, you split your points into two subsets based on which side of that line they fall on, then repeatedly take the point farthest from the line to build triangles, eliminating the points inside them until you are left with the outer boundary.
I can find the outer points just by looking at min/max lat and can draw a line between the two (MKPolyline) - but how would I determine whether a point falls on one side or the other of this MKPolyline?
A follow up question is whether there is a hit test to determine whether points fall within an MKPolygon.
Thanks!
I ended up using a variation of the gift-wrapping algorithm. Certainly not a trivial task.
I'm having trouble with the formatting of the full code, so I'll just list my steps (probably better, because I have some clean-up to do!)
I started with an array of MKPointAnnotations
1) I got the lowest point that is furthest left. To do this, I looped through all of the points and compared lat/lng to find the lowest point. This point will definitely be in the convex hull, so add it to an NSMutableArray that will store our convex hull points (cvp)
2) Get all points to the left of the lowest point and loop through them, calculating the angle from the cvp to each of the remaining points on the left. Whichever has the greatest angle will be the point you need to add to the array. The angle (initial bearing) between two coordinates is:
atan2(sin(lon2-lon1)*cos(lat2), cos(lat1)*sin(lat2) - sin(lat1)*cos(lat2)*cos(lon2-lon1))
For each point found, create a triangle (using the lat from the new point and the long from the previous point) and build a polygon from it. I used this code to do a hit test on my polygon:
BOOL mapCoordinateIsInPolygon = CGPathContainsPoint(polygonView.path, NULL, polygonViewPoint, NO);
If anything was found in the hit test, remove it from the comparison array (all those on the left of the original array minus the hull points)
Once you have at least 3 points in your cvp array, build another polygon with all of the cvp's in the array and remove anything within using the hit test.
3) Once you've worked through all of the left points, create a new comparison array of the remaining points that haven't been eliminated or added to the hull
4) Use the same calculations and polygon tests to remove points and add the cvp's found
At the end, you're left with a list of points that make up your convex hull.
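On the side-of-line part of the original question: the sign of the 2D cross product of (B - A) and (P - A) tells you which side of the line A→B a point P falls on. A minimal sketch, treating longitude as x and latitude as y (the function name is mine, not a MapKit API):
import CoreLocation

// Positive: P is to the left of the line A -> B; negative: to the right;
// zero: P is collinear with A and B.
func sideOfLine(_ p: CLLocationCoordinate2D,
                from a: CLLocationCoordinate2D,
                to b: CLLocationCoordinate2D) -> Double {
    (b.longitude - a.longitude) * (p.latitude - a.latitude)
        - (b.latitude - a.latitude) * (p.longitude - a.longitude)
}
This is also how QuickHull would split the point set into the two subsets on either side of the min/max line.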

What is the best way to get the length of an NSBezierPath?

The path I'm using has no curves; just a series of connected points.
The method I'm currently using involves iterating through the NSBezierPathElement components of the path and getting each point, but this is clumsy - especially because I have to save the last point to get each new distance. If any of you know of a better method, that would be very much appreciated.
Your algorithm sounds exactly right - the length of the path is the sum of the lengths between each pair of points. It's hard to see why you think this is "clumsy", so here is the algorithm in pseudocode just in case:
startPoint <- point[1]
length <- 0
repeat with n <- 2 to number of points
    nextPoint <- point[n]
    length <- length + distanceBetween(startPoint, nextPoint)
    startPoint <- nextPoint // hardly clumsy
end repeat
Maybe you're doing it differently?
The notion of current point is fundamental to NSBezierPath (and SVG, turtle graphics, etc.), with its relative commands (e.g. move relative, line relative) - there is no escaping it!
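For what it's worth, here is a sketch of that loop in Swift, assuming the path really does contain only moveTo/lineTo elements as the question states; the function name is mine:
import AppKit

// Sums the straight-line distances between consecutive points of the path.
// Curves are ignored, since the question states the path has none.
func pathLength(_ path: NSBezierPath) -> CGFloat {
    var length: CGFloat = 0
    var currentPoint = NSPoint.zero
    let points = NSPointArray.allocate(capacity: 3) // an element carries up to 3 points
    defer { points.deallocate() }
    for i in 0..<path.elementCount {
        switch path.element(at: i, associatedPoints: points) {
        case .moveTo:
            currentPoint = points[0]
        case .lineTo:
            length += hypot(points[0].x - currentPoint.x, points[0].y - currentPoint.y)
            currentPoint = points[0]
        default:
            break // .curveTo / .closePath not handled in this sketch
        }
    }
    return length
}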

How can I compare two NSImages for differences?

I'm attempting to gauge the percentage difference between two images.
Having done a lot of reading, I seem to have a number of options, but I'm not sure which method is best for:
Ease of coding
Performance.
The methods I've seen are:
Non-language-specific (academic): Image comparison - fast algorithm
Mac-specific: direct pixel access (http://www.markj.net/iphone-uiimage-pixel-color/)
Does anyone have any advice about what solutions make most sense for the above two cases and have code samples to show how to apply them?
I've had success calculating the difference between two images using the histogram technique mentioned here. redmoskito's answer in the SO question you linked to was actually my inspiration!
The following is an overview of the algorithm I used:
Convert the images to grayscale, so you compare one channel instead of three.
Divide each image into an n * n grid of "subimages". Then, for each subimage pair:
Calculate their colour composition histograms.
Calculate the absolute difference between the two histograms.
The maximum difference found between two subimages is a measure of the two images' difference. Other metrics could also be used (e.g. the average difference between subimages).
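A rough sketch of those steps in Swift, assuming both images have already been converted to grayscale byte buffers of equal dimensions (the NSImage-to-pixels step is omitted; names are mine):
// Splits each image into an n * n grid, histograms each tile, and returns the
// largest histogram difference found across all tile pairs.
func maxSubimageHistogramDifference(_ a: [UInt8], _ b: [UInt8],
                                    width: Int, height: Int, n: Int) -> Int {
    // Histogram (16 intensity buckets) of one tile of the grid.
    func histogram(_ pixels: [UInt8], tileX: Int, tileY: Int) -> [Int] {
        var bins = [Int](repeating: 0, count: 16)
        let tileW = width / n, tileH = height / n
        for y in (tileY * tileH)..<((tileY + 1) * tileH) {
            for x in (tileX * tileW)..<((tileX + 1) * tileW) {
                bins[Int(pixels[y * width + x]) / 16] += 1
            }
        }
        return bins
    }
    var maxDifference = 0
    for tileY in 0..<n {
        for tileX in 0..<n {
            let ha = histogram(a, tileX: tileX, tileY: tileY)
            let hb = histogram(b, tileX: tileX, tileY: tileY)
            // Absolute difference between the two tiles' histograms.
            let difference = zip(ha, hb).reduce(0) { $0 + abs($1.0 - $1.1) }
            maxDifference = max(maxDifference, difference)
        }
    }
    return maxDifference
}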
As tskuzzy noted in his answer, if your ultimate goal is a binary "yes, these two images are (roughly) the same" or "no, they're not", you need some meaningful threshold value. You could produce such a value by passing images into the algorithm and tweaking the threshold based on its output and how similar you think the images are. A form of machine learning, I suppose.
I recently wrote a blog post on this very topic, albeit as part of a larger goal. I also created a simple iPhone app to demonstrate the algorithm. You can find the source on GitHub; perhaps it will help?
It is really difficult to suggest something when you don't tell us more about the images or the variations between them. Are they shapes? Are they different objects, and you want to know the class of object? Are they the same object, and you want to distinguish between instances? Are they faces? Are they fingerprints? Are the objects in the same pose? Under the same illumination?
When you say performance, what exactly do you mean? How large are the images? All in all, it really depends. Given what you've said, if it is only about ease of coding and performance, I would suggest just taking the absolute value of the difference between pixels. That is super easy to code and about as fast as it gets, but it is really unlikely to work for anything other than the most synthetic examples.
That being said, I would also like to point you to DHOG, GLOH, SURF and SIFT.
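For completeness, the absolute-difference approach mentioned above really is only a few lines; a sketch over two equal-sized grayscale buffers (names are mine):
// Mean absolute per-pixel difference: 0 means identical, 255 means maximally
// different. Both buffers must have the same dimensions.
func meanAbsoluteDifference(_ a: [UInt8], _ b: [UInt8]) -> Double {
    precondition(a.count == b.count, "images must be the same size")
    var total = 0
    for i in 0..<a.count {
        total += abs(Int(a[i]) - Int(b[i]))
    }
    return Double(total) / Double(a.count)
}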
You can use the fairly basic subtraction technique that the lads above suggested. @carlosdc has hit the nail on the head with regard to the type of image this basic technique can be used for. I have attached an example so you can see the results for yourself.
The first shows an image from a simulation at some time t. The second image, taken some (simulation) time later (t + dt), was subtracted from the first. The resulting subtraction image (shown in black and white for clarity) then shows how the simulation has changed in that time. This was done as described above and is very powerful and easy to code.
Hope this aids you in some way.
This is some old, nasty FORTRAN, but it should give you the basic approach. It is not that difficult at all. Because I am doing it on a two-colour palette, you would do this operation for R, G and B. That is, compute the intensities or values in each cell/pixel and store them in an array. Do the same for the other image, and subtract one array from the other; this will leave you with a colourful subtraction image. My advice would be to do as the lads suggest above: compute the magnitude of the sum of the R, G and B components so you get just one value per pixel. Write that to an array, do the same for the other image, then subtract. Then create a new range for either R, G or B and map the resulting subtracted array to it; this will give a much clearer picture as a result.
* =============================================================
      SUBROUTINE SUBTRACT(FNAME1,FNAME2,IOS)
* This routine reads two images and subtracts one from the other
* =============================================================
* Common :
      INCLUDE 'CONST.CMN'
      INCLUDE 'IO.CMN'
      INCLUDE 'SYNCH.CMN'
      INCLUDE 'PGP.CMN'
* Input :
      CHARACTER fname1*(sznam),fname2*(sznam)
* Output :
      integer IOS
* Variables:
      logical glue
      character fullname*(szlin)
      character dir*(szlin),ftype*(3)
      integer i,j,nxy1,nxy2
      real si1(2*maxc,2*maxc),si2(2*maxc,2*maxc)
* =================================================================
      IOS = 1
      nomap=.true.
      ftype='map'
      dir='./pictures'
! reading the first image
      if(.not.glue(dir,fname2,ftype,fullname))then
         write(*,31) fullname
         return
      endif
      OPEN(unit2,status='old',name=fullname,form='unformatted',
     &     err=10,iostat=ios)
      read(unit2,err=11)nxy2
      read(unit2,err=11)rad,dxy
      do i=1,nxy2
         do j=1,nxy2
            read(unit2,err=11)si2(i,j)
         enddo
      enddo
      CLOSE(unit2)
! reading the second image
      if(.not.glue(dir,fname1,ftype,fullname))then
         write(*,31) fullname
         return
      endif
      OPEN(unit2,status='old',name=fullname,form='unformatted',
     &     err=10,iostat=ios)
      read(unit2,err=11)nxy1
      read(unit2,err=11)rad,dxy
      do i=1,nxy1
         do j=1,nxy1
            read(unit2,err=11)si1(i,j)
         enddo
      enddo
      CLOSE(unit2)
! subtracting the images
      if(nxy1.eq.nxy2)then
         nxy=nxy1
         do i=1,nxy1
            do j=1,nxy1
               si(i,j)=si2(i,j)-si1(i,j)
            enddo
         enddo
      else
         print *,'SUBTRACT: Different sizes of image arrays'
         IOS=0
         return
      endif
* normal finishing
      IOS=0
      nomap=.false.
      return
* exceptional finishing
 10   write (*,30) fullname
      return
 11   write (*,32) fullname
      return
 30   format('Cannot open file ',72A)
 31   format('Improper filename ',72A)
 32   format('Error reading from file ',72A)
      end
! =============================================================
Hope this is of some use. All the best.
Out of the methods described in your first link, the histogram comparison method is by far the simplest to code and the fastest. However, key-point matching will provide far more accurate results, since you want a precise number describing the difference between two images.
To implement the histogram method, I would do the following:
Compute the red, green, and blue histograms of each image
Add up the differences between each bucket
If the total difference is above a certain threshold, then the images are too dissimilar and the percentage is 0%.
Otherwise, the colors found in the images are similar, so do a pixel-by-pixel comparison and convert the difference into a percentage.
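A sketch of those histogram steps, assuming interleaved RGB byte buffers (3 bytes per pixel); the threshold handling and function names are mine:
// Compares per-channel colour histograms; if the summed bucket differences
// exceed the threshold, the images are treated as entirely different.
func imagesAreSimilar(_ a: [UInt8], _ b: [UInt8], threshold: Int) -> Bool {
    func channelHistograms(_ pixels: [UInt8]) -> [[Int]] {
        var bins = [[Int]](repeating: [Int](repeating: 0, count: 256), count: 3)
        for i in stride(from: 0, to: pixels.count, by: 3) {
            for channel in 0..<3 {
                bins[channel][Int(pixels[i + channel])] += 1
            }
        }
        return bins
    }
    let ha = channelHistograms(a), hb = channelHistograms(b)
    // Add up the differences between each bucket, across all three channels.
    let difference = zip(ha, hb).reduce(0) { sum, channelPair in
        sum + zip(channelPair.0, channelPair.1).reduce(0) { $0 + abs($1.0 - $1.1) }
    }
    return difference <= threshold
}
If this returns true, you would then proceed to the pixel-by-pixel comparison to produce the percentage.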
I don't know any precise algorithms for finding the key points of an image. However, once you find them for each image, you can do a pixel-by-pixel comparison for each of the key points.

How to calculate current position on a great circle path

Given a starting point (origLat, origLon), an ending point (destLat, destLon), and a percentage of the trip completed, how do I calculate the current position (curLat, curLon)?
Aviation Formulary is a great resource which covers this question and more.
MTL provides some good content on great circle computations and some working applets you can use to verify your implementation.
In this case, it should be really simple:
curLat = origLat + percentageOfTripCompleted*(destLat-origLat);
curLon = origLon + percentageOfTripCompleted*(destLon-origLon);
*The fact that the earth is a sphere has little bearing on this problem over short distances. For a long route, though, the true great-circle position requires spherical interpolation, as sketched below.
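For an actual great-circle path, the Aviation Formulary's intermediate-point formula interpolates on the sphere rather than in lat/lon space. A sketch (inputs and outputs in degrees; f is the fraction of the trip completed; the function name is mine):
import Foundation

// Intermediate point at fraction f along the great circle from origin to
// destination (f = 0 gives the origin, f = 1 the destination).
func greatCirclePosition(origLat: Double, origLon: Double,
                         destLat: Double, destLon: Double,
                         f: Double) -> (lat: Double, lon: Double) {
    let toRad = Double.pi / 180
    let (lat1, lon1) = (origLat * toRad, origLon * toRad)
    let (lat2, lon2) = (destLat * toRad, destLon * toRad)
    // Angular distance between the two points (haversine formula).
    let d = 2 * asin(sqrt(pow(sin((lat1 - lat2) / 2), 2)
            + cos(lat1) * cos(lat2) * pow(sin((lon1 - lon2) / 2), 2)))
    guard d > 1e-12 else { return (origLat, origLon) } // coincident points
    let a = sin((1 - f) * d) / sin(d)
    let b = sin(f * d) / sin(d)
    // Interpolate in 3D Cartesian coordinates on the unit sphere.
    let x = a * cos(lat1) * cos(lon1) + b * cos(lat2) * cos(lon2)
    let y = a * cos(lat1) * sin(lon1) + b * cos(lat2) * sin(lon2)
    let z = a * sin(lat1) + b * sin(lat2)
    return (atan2(z, sqrt(x * x + y * y)) / toRad, atan2(y, x) / toRad)
}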

Projectile hit coordinates at the apex of its path

I have a projectile that I would like to pass through specific coordinates at the apex of its path. I have been using a superb equation that giogadi outlined here, plugging the velocity values it produces into Chipmunk's cpBodyApplyImpulse function.
The equation has one drawback that I haven't been able to figure out. It only works when the coordinates that I want to hit have a y value higher than the cannon (where my projectile starts). This means that I can't shoot at a downward angle.
Can anybody help me find a suitable equation that works no matter where the target is in relation to the cannon?
As pointed out above, there isn't any way to make the apex lower than the height of the cannon (without making gravity work backwards). However, it is possible to make the projectile pass through a point below the cannon; the equations are all here. The equation you need to solve is:
angle = arctan((v^2 ± sqrt(v^4 - g*(g*x^2 + 2*y*v^2))) / (g*x))
where you choose a velocity and plug in the x and y positions of the target, assuming the cannon is at (0,0). The ± means that you can choose either root. If the argument to the square root function is negative (an imaginary root), you need a larger velocity. So, if you are "in range" you have two possible angles for any particular velocity (other than in the maximum-range 45-degree case, where the two roots give the same answer).
I suspect one trajectory will tend to 'look' much more sensible than the other, but that's something to play around with once you have something working. You may want to stick with the apex grazing code for the cases where the target is above the cannon.
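A sketch of that equation in code; it returns both roots and also works for targets below the cannon (negative y), as long as the discriminant is non-negative (the function name is mine):
import Foundation

// Launch angles (radians) to hit a target at (x, y) relative to the cannon,
// given launch speed v and gravity g. Returns nil when the target is out of
// range at this speed (negative discriminant: pick a larger velocity).
func launchAngles(v: Double, g: Double, x: Double, y: Double) -> (flat: Double, steep: Double)? {
    let discriminant = pow(v, 4) - g * (g * x * x + 2 * y * v * v)
    guard discriminant >= 0 else { return nil }
    let root = sqrt(discriminant)
    let flat = atan2(v * v - root, g * x)  // the lower, flatter trajectory
    let steep = atan2(v * v + root, g * x) // the higher, lobbed trajectory
    return (flat, steep)
}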