Different distance between two points on iOS and Android - objective-c

I'm trying to measure the distance between two points (longitude, latitude). My problem is that I get different results on iOS than on Android.
I've checked it with this site and the result was that the Android values are correct.
I'm using this Core Location method on CLLocation to get the distance in iOS: distanceFromLocation:
Here are my test locations:
P1: 48.643798, 9.453735
P2: 49.495150, 9.782150
Distance iOS: 97717 m
Distance Android: 97673 m
How is this possible and how can I fix this?

So I was having a different issue and stumbled upon the answer to both of our questions:
On iOS you can do the following:
CLLocationDistance meters1 = [P1 distanceFromLocation:P2];
// meters1 is 97,717
CLLocationDistance meters2 = [P2 distanceFromLocation:P1];
// meters2 is 97,630
I've searched and searched but haven't been able to find a reason for the difference. Since they are the exact same points, the distance should be the same no matter which way you are travelling. I submitted it to Apple as a bug and they closed it as a duplicate, but it has still not been fixed. I would suggest that anyone who wants this fixed also submit it as a bug.
In the meantime, the average of the two is actually the correct value:
meters = (meters1 + meters2) / 2;
// meters (the average of the first two) is 97,673
Apparently Android does not have this problem.
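For reference, here is a minimal sketch of that workaround in Objective-C. The helper name SymmetricDistance is just for illustration; the coordinates in the usage comment are the test points from the question.
#import <CoreLocation/CoreLocation.h>

// Averages the two directed measurements to cancel out the asymmetry described above.
static CLLocationDistance SymmetricDistance(CLLocation *a, CLLocation *b) {
    CLLocationDistance d1 = [a distanceFromLocation:b];
    CLLocationDistance d2 = [b distanceFromLocation:a];
    return (d1 + d2) / 2.0;
}

// Example with the test points from the question:
//   CLLocation *p1 = [[CLLocation alloc] initWithLatitude:48.643798 longitude:9.453735];
//   CLLocation *p2 = [[CLLocation alloc] initWithLatitude:49.495150 longitude:9.782150];
//   CLLocationDistance meters = SymmetricDistance(p1, p2);  // ~97,673 m per the averages above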

The longitude and latitude are not all that you need. You also have to use the same reference model, such as WGS84 or ETRS89.
The earth is not an exact ellipsoid, so distance calculations rely on reference models; none of the models is entirely exact, and depending on which model you use, distances come out somewhat different.
Please make sure you use the same reference model for iOS and Android.

There is more than one way to calculate the distance between long/lat coordinates, depending on how you compensate for the curvature of the earth, and there is no right or wrong approach. Most likely the two platforms use slightly different models.
Here are some formulae for calculating it yourself: http://www.movable-type.co.uk/scripts/latlong.html
If you absolutely need the two platforms to agree, implement your own calculation using one of these formulae; then you can ensure you get the same result on both platforms.
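For example, here is a minimal sketch of the haversine formula from that page in plain C (directly usable from Objective-C). It assumes a spherical earth with the mean radius of 6,371 km, so the result is an approximation that will differ slightly from both platforms' values:
#include <math.h>

static const double kEarthRadiusMeters = 6371000.0; // mean earth radius, spherical model

// Great-circle (haversine) distance between two points given in degrees.
static double HaversineDistance(double lat1, double lon1, double lat2, double lon2) {
    double toRad = M_PI / 180.0;
    double dLat = (lat2 - lat1) * toRad;
    double dLon = (lon2 - lon1) * toRad;
    double a = sin(dLat / 2.0) * sin(dLat / 2.0)
             + cos(lat1 * toRad) * cos(lat2 * toRad) * sin(dLon / 2.0) * sin(dLon / 2.0);
    double c = 2.0 * atan2(sqrt(a), sqrt(1.0 - a));
    return kEarthRadiusMeters * c;
}

// HaversineDistance(48.643798, 9.453735, 49.495150, 9.782150) comes out at roughly 97.6 km.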

Related

How to get user location using the accelerometer, gyroscope, and magnetometer in iPhone?

The simple equations for user location using the built-in inertial measurement unit (IMU), an approach also called pedestrian dead reckoning (PDR), are:
x = x(previous) + stepLength * sin(headingDirection)
y = y(previous) + stepLength * cos(headingDirection)
We can use the motionManager property of the CMMotionManager class to access raw values from the accelerometer, gyroscope, and magnetometer. We can also get attitude values as roll, pitch, and yaw. The step length can be calculated as the double square root (i.e. the fourth root) of the acceleration. However, I'm confused about the heading direction. Some of the published literature uses a combination of magnetometer and gyroscope data to estimate the heading direction. I can see that CLHeading also gives heading information. There are some online tutorials, especially for the Android platform (like this one), that estimate user location, but they do not give a proper mathematical explanation.
I've followed many online resources (this, this, this, and this) to make a PDR app. My app can detect steps and estimates the step length properly; however, its output is full of errors. I think the error is due to the lack of a proper heading direction. I've used the following relation to get the heading direction from the magnetometer:
magnetometerHeading = atan2(-self.motionManager.magnetometerData.magneticField.y, self.motionManager.magnetometerData.magneticField.x);
Similarly, from gyroscope:
gyroscopeHeading += -self.motionManager.gyroData.rotationRate.z * 180 / M_PI;
Finally, I give proportional weights to the previous heading direction, gyroscopeHeading, and magnetometerHeading as follows:
headingDirection = (2*headingDirection/5) + (magnetometerHeading/5) + (2*gyroscopeHeading/5);
I followed this method from a published journal paper. However, I'm getting a lot of error in my results. Is my approach wrong? What exactly should I do to get a proper heading direction such that the localization error is minimal?
Any help would be appreciated.
Thank you.
EDIT
I noticed that while calculating the heading direction from the gyroscope data, I didn't multiply the rotation rate (which is in radians/sec) by the delta time. To fix this, I added the following code:
CMDeviceMotion *motion = self.motionManager.deviceMotion;
[_motionManager startDeviceMotionUpdates];
if(!previousTime)
previousTime = motion.timestamp;
double deltaTime = motion.timestamp - previousTime;
previousTime = motion.timestamp;
Then I updated the gyroscope heading with:
gyroscopeHeading += -self.motionManager.gyroData.rotationRate.z * deltaTime * 180 / M_PI;
The localization result is still not close to the real location. Is my approach correct?
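For context, this is a minimal sketch of the update loop described above, written as a single device-motion handler. The weights, the fixed step length, and the properties (previousTimestamp, gyroscopeHeading, headingDirection, didDetectStep, x, y) are placeholders standing in for the question's variables, not a tested solution:
// Called from the device-motion update handler, e.g. after
// [self.motionManager startDeviceMotionUpdatesToQueue:queue withHandler:...].
- (void)updateWithMotion:(CMDeviceMotion *)motion
{
    // Integrate the gyro yaw rate over the elapsed interval (radians/s -> degrees).
    NSTimeInterval dt = (self.previousTimestamp > 0) ? motion.timestamp - self.previousTimestamp : 0;
    self.previousTimestamp = motion.timestamp;
    self.gyroscopeHeading += -motion.rotationRate.z * dt * 180.0 / M_PI;

    // Tilt-uncompensated magnetometer heading, as in the question.
    CMMagneticField m = self.motionManager.magnetometerData.magneticField;
    double magnetometerHeading = atan2(-m.y, m.x) * 180.0 / M_PI;

    // Weighted fusion of the previous heading and the two sensor headings (weights are arbitrary).
    self.headingDirection = 0.4 * self.headingDirection
                          + 0.2 * magnetometerHeading
                          + 0.4 * self.gyroscopeHeading;

    // Dead-reckoning position update whenever a step has been detected.
    if (self.didDetectStep) {
        double headingRadians = self.headingDirection * M_PI / 180.0;
        double stepLength = 0.7; // placeholder; the question estimates this from acceleration
        self.x += stepLength * sin(headingRadians);
        self.y += stepLength * cos(headingRadians);
        self.didDetectStep = NO;
    }
}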

Fuzzy screenshot comparison with Selenium

I'm using Selenium to automate webpage functional testing. It's important for us to do a pixel-by-pixel comparison when we roll out new code, so we're using Selenium to take screenshots and comparing the base64 encoded strings to see if anything has changed.
We're finding that in practice, it's hard to get complete pixel consistency, especially with images. I would like minor blurriness / rendering artifacts to count as a "pass" instead of a "fail", so I'm wondering if there's a way of doing a fuzzy comparison to make our tests a bit less fragile.
I was thinking of maybe looking at the Levenshtein distance between the base64 strings as a starting point, but I don't really know if that's a good approach, or what the tolerances should be that distinguish "something moved on the page" from "rendering artifact". Any ideas / approaches?
So I ended up going with the ImageMagick command-line tool (because why re-invent image comparison). The "Peak Absolute Error" metric of the "compare" tool tells you how much you have to fuzz pixels before two images are identical. This seems to work well... for an image with slight graphical distortions, there might be a lot of pixels that don't match, but slight fuzzing is enough to make them match; but for two images that are actually different, even though most pixels might match, the ones that don't tend to be very different. Right now I'm checking for a PAE of less than 15% to see if the images should be counted as identical. Command line I'm using is:
compare -metric PAE original.png new.png comparison.png
Documentation on ImageMagick's compare tool is here: http://www.imagemagick.org/script/compare.php
I've been using perceptualdiff, which uses a model of the human visual system to try to avoid reporting unnoticeable changes (the authors used it for renderer regression testing). Usage is quite simple:
perceptualdiff -output diff.ppm baseline.png test.png
(where diff.ppm is a PPM format image highlighting the areas of difference)
The needle regression testing framework has support for using pdiff to compare screenshots:
http://needle.readthedocs.org/en/latest/#engines
Use an image format that does not create compression artifacts (like BMP or PNG); then you can do a per-pixel comparison.
I think you can check each pixel pair with a common Euclidean distance.
To improve performance a little, do not calculate the square root; instead, compare the squares of the distances:
// Maximum color distance allowed to define pixel consistency.
const float maxDistanceAllowed = 5.0f;
// Square of the distance, used in the comparison so no square root is needed.
const float maxD = maxDistanceAllowed * maxDistanceAllowed;

public bool isPixelConsistent(Color pixel1, Color pixel2)
{
    // Squared Euclidean distance between the two colors in 3-dimensional RGB space.
    float distanceSquared = (pixel1.R - pixel2.R) * (pixel1.R - pixel2.R)
                          + (pixel1.G - pixel2.G) * (pixel1.G - pixel2.G)
                          + (pixel1.B - pixel2.B) * (pixel1.B - pixel2.B);
    // If the squared distance is within the allowed square, the pixels are
    // considered consistent and the method returns true.
    return distanceSquared <= maxD;
}
I didn't test the C# code, but it should give you the idea. Give it a few tries and adjust maxDistanceAllowed to your needs.
If anyone else is looking for something similar, there is Depicted-dpxdt. It is designed to be used as part of a CI/CD process.
It combines perceptual diffing with a server, a command-line tool, and a wrapper for PhantomJS.
Its demonstrated functionality includes crawling an entire site and comparing the pages for differences.

How can I compare two NSImages for differences?

I'm attempting to gauge the percentage difference between two images.
Having done a lot of reading, I seem to have a number of options, but I'm not sure which method is best in terms of:
Ease of coding
Performance.
The methods I've seen are:
Non-language-specific (academic): Image comparison - fast algorithm, and Mac-specific direct pixel access: http://www.markj.net/iphone-uiimage-pixel-color/
Does anyone have any advice about what solutions make most sense for the above two cases and have code samples to show how to apply them?
I've had success calculating the difference between two images using the histogram technique mentioned here. redmoskito's answer in the SO question you linked to was actually my inspiration!
The following is an overview of the algorithm I used:
Convert the images to grayscale—compare one channel instead of three.
Divide each image into an n * n grid of "subimages". Then, for each subimage pair:
Calculate their colour composition histograms.
Calculate the absolute difference between the two histograms.
The maximum difference found between two subimages is a measure of the two images' difference. Other metrics could also be used (e.g. the average difference between subimages).
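To make the algorithm concrete, here is a rough sketch in plain C (callable from Objective-C). It assumes you have already rendered each image into an 8-bit grayscale buffer of identical dimensions; the function and parameter names are made up for the example:
#include <stdint.h>
#include <stdlib.h>

// Largest absolute (L1) histogram difference between corresponding grid cells
// of two equally sized 8-bit grayscale images.
static long MaxSubimageHistogramDifference(const uint8_t *imageA, const uint8_t *imageB,
                                           int width, int height, int gridSize)
{
    long maxDifference = 0;
    int cellWidth = width / gridSize;
    int cellHeight = height / gridSize;

    for (int gy = 0; gy < gridSize; gy++) {
        for (int gx = 0; gx < gridSize; gx++) {
            long histA[256] = {0};
            long histB[256] = {0};

            // Build a 256-bin histogram of this cell in each image.
            for (int y = gy * cellHeight; y < (gy + 1) * cellHeight; y++) {
                for (int x = gx * cellWidth; x < (gx + 1) * cellWidth; x++) {
                    histA[imageA[y * width + x]]++;
                    histB[imageB[y * width + x]]++;
                }
            }

            // Absolute difference between the two histograms.
            long difference = 0;
            for (int bin = 0; bin < 256; bin++) {
                difference += labs(histA[bin] - histB[bin]);
            }
            if (difference > maxDifference) {
                maxDifference = difference;
            }
        }
    }
    // Compare this against a tuned threshold to decide "same" or "different".
    return maxDifference;
}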
As tskuzzy noted in his answer, if your ultimate goal is a binary "yes, these two images are (roughly) the same" or "no, they're not", you need some meaningful threshold value. You could produce such a value by passing images into the algorithm and tweaking the threshold based on its output and how similar you think the images are. A form of machine learning, I suppose.
I recently wrote a blog post on this very topic, albeit as part of a larger goal. I also created a simple iPhone app to demonstrate the algorithm. You can find the source on GitHub; perhaps it will help?
It is really difficult to suggest something when you don't tell us more about the images or the variations. Are they shapes? Are they different objects and you want to know what class of objects they are? Are they the same object and you want to distinguish the object instance? Are they faces? Are they fingerprints? Are the objects in the same pose? Under the same illumination?
When you say performance, what exactly do you mean? How large are the images? All in all, it really depends. Given what you've said, if it is only about ease of coding and performance, I would suggest just taking the absolute value of the difference of the pixels. That is super easy to code and about as fast as it gets, but it is really unlikely to work for anything other than the most synthetic examples.
That being said I would like to point you to: DHOG, GLOH, SURF and SIFT.
You can use the fairly basic subtraction technique that the lads above suggested. @carlosdc has hit the nail on the head with regard to the type of image this basic technique can be used for. I have attached an example so you can see the results for yourself.
The first shows an image from a simulation at some time t. A second image, taken some (simulation) time later t + dt, was then subtracted from the first. The resulting subtracted image (shown in black and white for clarity) shows how the simulation has changed in that time. This was done as described above and is very powerful and easy to code.
Hope this aids you in some way.
This is some old, nasty FORTRAN, but it should give you the basic approach. It is not that difficult at all. Because I am doing it on a two-colour palette, you would do this operation for R, G and B. That is, compute the intensities or values in each cell/pixel and store them in an array. Do the same for the other image, and subtract one array from the other; this will leave you with a colourful subtraction image. My advice would be to do as the lads suggest above and compute the magnitude of the sum of the R, G and B components so you get just one value. Write that to an array, do the same for the other image, then subtract. Then create a new range for either R, G or B and map the resulting subtracted array to it; this will give a much clearer picture as a result.
* =============================================================
SUBROUTINE SUBTRACT(FNAME1,FNAME2,IOS)
* This routine reads two images and subtracts one from the other
* =============================================================
* Common :
INCLUDE 'CONST.CMN'
INCLUDE 'IO.CMN'
INCLUDE 'SYNCH.CMN'
INCLUDE 'PGP.CMN'
* Input :
CHARACTER fname1*(sznam),fname2*(sznam)
* Output :
integer IOS
* Variables:
logical glue
character fullname*(szlin)
character dir*(szlin),ftype*(3)
integer i,j,nxy1,nxy2
real si1(2*maxc,2*maxc),si2(2*maxc,2*maxc)
* =================================================================
IOS = 1
nomap=.true.
ftype='map'
dir='./pictures'
! reading first image
if(.not.glue(dir,fname2,ftype,fullname))then
write(*,31) fullname
return
endif
OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
read(unit2,err=11)nxy2
read(unit2,err=11)rad,dxy
do i=1,nxy2
do j=1,nxy2
read(unit2,err=11)si2(i,j)
enddo
enddo
CLOSE(unit2)
! reading second image
if(.not.glue(dir,fname1,ftype,fullname))then
write(*,31) fullname
return
endif
OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
read(unit2,err=11)nxy1
read(unit2,err=11)rad,dxy
do i=1,nxy1
do j=1,nxy1
read(unit2,err=11)si1(i,j)
enddo
enddo
CLOSE(unit2)
! subtracting the images
if(nxy1.eq.nxy2)then
nxy=nxy1
do i=1,nxy1
do j=1,nxy1
si(i,j)=si2(i,j)-si1(i,j)
enddo
enddo
else
print *,'SUBTRACT: Different sizes of image arrays'
IOS=0
return
endif
* normal finishing
IOS=0
nomap=.false.
return
* exceptional finishing
10 write (*,30) fullname
return
11 write (*,32) fullname
return
30 format('Cannot open file ',72A)
31 format('Improper filename ',72A)
32 format('Error reading from file ',72A)
end
! =============================================================
Hope this is of some use. All the best.
Out of the methods described in your first link, the histogram comparison method is by far the simplest to code and the fastest. However, key point matching will provide far more accurate results, since you want a precise number describing the difference between two images.
To implement the histogram method, I would do the following:
Compute the red, green, and blue histograms of each image
Add up the differences between each bucket
If the difference is above a certain threshold, then the percentage is 0%
Otherwise the colors found in the images are similar, so do a pixel-by-pixel comparison and convert the difference into a percentage.
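As a rough sketch of that final pixel-by-pixel step in plain C (assuming both images have already been drawn into same-sized RGBA byte buffers; the names and the per-channel tolerance are only illustrative):
#include <stdint.h>
#include <stdlib.h>

// Percentage (0-100) of pixels whose R, G or B value differs by more than
// `tolerance` between two same-sized RGBA buffers.
static double PercentageOfDifferingPixels(const uint8_t *bufferA, const uint8_t *bufferB,
                                          int width, int height, int tolerance)
{
    long differing = 0;
    long total = (long)width * height;

    for (long i = 0; i < total; i++) {
        const uint8_t *pa = bufferA + i * 4;
        const uint8_t *pb = bufferB + i * 4;
        if (abs(pa[0] - pb[0]) > tolerance ||   // R
            abs(pa[1] - pb[1]) > tolerance ||   // G
            abs(pa[2] - pb[2]) > tolerance) {   // B
            differing++;
        }
    }
    return 100.0 * (double)differing / (double)total;
}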
I don't know any precise algorithms for finding the key points of an image. However once you find them for each image you can do a pixel by pixel comparison for each of the key points.

sample code for Location finder in iphone

I am trying to make an app which uses both the GPS and the magnetometer to find the direction to Mecca (the mosque). It has some special features, such as a date picker for upcoming prayers, prayer times calculated from the current location's time zone, weather for the current location, and more. If anyone has sample code related to this, please reply.
Thanks in advance
The magnetometer can be told just to return the current device heading, via CLLocationManager. It can return unfiltered 3D readings, but there's no reason to use them; from the CLHeading just use the trueHeading property. That'll give you the same information shown in the Compass app.
To work out the heading to Mecca from where you are, you can use the formulas given here. Google Maps gives me a geolocation of about 21.436, 39.832 for Masjid al-Haram (I'm not sure it's terribly accurate, so I've rounded to a deliberately low precision), so you could get the bearing from whatever location CLLocationManager tells you you're at with something like:
#define toRadians(x) ((x)*M_PI / 180.0)
#define toDegrees(x) ((x)*180.0 / M_PI)
...
double y = sin(toRadians(39.832 - currentLocation.longitude)) * cos(toRadians(21.436));
double x = cos(toRadians(currentLocation.latitude)) * sin(toRadians(21.436)) -
           sin(toRadians(currentLocation.latitude)) * cos(toRadians(21.436)) * cos(toRadians(39.832 - currentLocation.longitude));
double bearing = toDegrees(atan2(y, x));
You can then rotate a pointer on screen by the difference between the device's heading and the one you've just calculated. Probably easiest to use a CGAffineTransform on a UIView's transform property.
That's all typed as I answer by the way, not tested. I'd check it against a reliable source before you depend on it.
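For the rotation step, a minimal sketch (self.arrowView is a hypothetical UIImageView whose image points straight up when untransformed, and self.bearing is assumed to hold the value computed above):
- (void)locationManager:(CLLocationManager *)manager didUpdateHeading:(CLHeading *)newHeading
{
    // Rotate the arrow by the difference between the bearing to Mecca and the
    // direction the device is currently facing.
    double degrees = self.bearing - newHeading.trueHeading;
    self.arrowView.transform = CGAffineTransformMakeRotation(toRadians(degrees));
}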

Projectile hit coordinates at the apex of its path

I have a projectile that I would like to pass through specific coordinates at the apex of its path. I have been using a superb equation that giogadi outlined here, plugging the velocity values it produces into Chipmunk's cpBodyApplyImpulse function.
The equation has one drawback that I haven't been able to figure out. It only works when the coordinates that I want to hit have a y value higher than the cannon (where my projectile starts). This means that I can't shoot at a downward angle.
Can anybody help me find a suitable equation that works no matter where the target is in relation to the cannon?
As pointed out above, there isn't any way to make the apex lower than the height of the cannon (without making gravity work backwards). However, it is possible to make the projectile pass through a point below the cannon; the equations are all here. The equation you need to solve is:
angle = arctan((v^2 ± sqrt(v^4 - g*(g*x^2 + 2*y*v^2))) / (g*x))
where you choose a velocity and plug in the x and y positions of the target, assuming the cannon is at (0, 0). The ± means that you can choose either root. If the argument to the square root is negative (an imaginary root), you need a larger velocity. So, if you are "in range", you have two possible angles for any particular velocity (other than in the maximum-range 45-degree case, where the two roots give the same answer).
I suspect one trajectory will tend to 'look' much more sensible than the other, but that's something to play around with once you have something working. You may want to stick with the apex grazing code for the cases where the target is above the cannon.
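For reference, here is a small sketch of solving that equation for both roots in plain C (g is taken as positive, the target (x, y) is measured relative to the cannon, and the function name is just illustrative). The '+' root gives the higher, more lobbed trajectory and the '-' root the flatter one:
#include <math.h>

// Launch angle (radians) needed to hit the target offset (x, y) with muzzle
// velocity v; pass +1.0 or -1.0 for `root` to pick the high or low trajectory.
// Returns NAN when the target is out of range for that velocity.
static double LaunchAngle(double v, double g, double x, double y, double root)
{
    double discriminant = v * v * v * v - g * (g * x * x + 2.0 * y * v * v);
    if (discriminant < 0.0) {
        return NAN; // not reachable; increase v
    }
    return atan((v * v + root * sqrt(discriminant)) / (g * x));
}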