I have multiple sequences of images from my micropipette aspiration experiments that look somewhat comparable to this: https://www.youtube.com/watch?v=HpY_2_e7b6Y
Now I would like to track the length of the tissue within the pipette automatically for all the different images in the sequence.
Does anyone know how this can be done?
Thanks!
You could probably do it with a macro: first, you manually draw a line in the middle of the channel to define the direction. Then, for each image of the movie, you try to find the edge of the pipette and the interface. To do that, you can try to use the "Process>Find Edges" function. Depending on the quality of your images, you might need several steps. Finally, you just need to find the distance between these two points.
Here is a quick-and-dirty macro which kind of gives a not-too-wrong result with the Youtube video:
print("\\Clear");
run("Clear Results");
run("Duplicate...", "title=Stack-1 duplicate range=1-5");
selectWindow("Stack-1");
run("Find Edges", "stack");
waitForUser("Please draw a line and click OK.");
//makeLine(402, 238, 170, 221);
setLineWidth(20);
//for each frame of the movie
for (s = 1; s <= nSlices; s++) {
    setSlice(s);
    // intensity profile along the user-drawn line
    profile = getProfile();
    // positions of the two strongest peaks along the profile (tolerance 15):
    // ideally the pipette edge and the tissue interface
    maxProfile = Array.findMaxima(profile, 15);
    diff = abs(maxProfile[1] - maxProfile[0]);
    print("max at: " + maxProfile[1] + " " + maxProfile[0]);
    print("Slice " + s + ": the difference is " + diff + " px");
    // the time step and pixel scale below are assumptions; adjust them to your
    // frame interval and spatial calibration
    setResult("Time", nResults, s*2);
    setResult("Position", nResults-1, diff/200);
}
I tested it on five frames from the Youtube video (available here for about 1 month, 6MB): it gives roughly consistent results, but it's very approximate and could easily be improved. I think the main idea is correct.
I'm trying to measure the distance between two points (longitude, latitude). My problem is that I get different results on iOS than on Android.
I've checked it with this site and the result was that the Android values are correct.
I'm using this Core Location method to get the distance in iOS: distanceFromLocation:
Here are my test locations:
P1: 48.643798, 9.453735
P2: 49.495150, 9.782150
Distance iOS: 97717 m
Distance Android: 97673 m
How is this possible and how can I fix this?
So I was having a different issue and stumbled upon the answer to both of our questions:
On iOS you can do the following:
meters1 = [P1 distanceFromLocation:P2]
// meters1 is 97,717
meters2 = [P2 distanceFromLocation:P1]
// meters2 is 97,630
I've searched and searched but haven't been able to find a reason for the difference. Since they are exactly the same two points, the distance should be the same no matter which way you are travelling. I submitted it to Apple as a bug; they closed it as a duplicate but still have not fixed it. I would suggest that anyone who wants this fixed also submit it as a bug.
In the meantime, the average of the two is actually the correct value:
meters = (meters1 + meters2)/2
// meters (the average of the first two) is 97,673
Apparently Android does not have this problem.
The longitude and latitude are not all you need. You also have to use the same reference model, such as WGS84 or ETRS89.
The earth is not an exact ellipsoid, so these reference models are approximations; none of them is entirely exact, and depending on which model you use, distances come out slightly different.
Please make sure you use the same reference for iOS and Android.
There is more than one way to calculate the distance between long/lat coordinates, depending on how you compensate for the curvature of the earth, and there's no single right or wrong approach. Most likely the two platforms use slightly different models.
Here are some formulae for calculating it yourself. http://www.movable-type.co.uk/scripts/latlong.html
If you absolutely need them to be the same, just implement your own calculation using one of these formulae, then you can ensure you get the same result on both platforms.
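For example, here is the haversine formula from that page as a quick Python sketch (it assumes a spherical Earth with a mean radius, so it won't match either platform exactly, but it will at least be consistent everywhere):
import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6371008.8):
    """Great-circle distance in metres on a spherical Earth model."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_m * math.asin(math.sqrt(a))

# The two test points from the question come out at roughly 97.6 km:
print(haversine_m(48.643798, 9.453735, 49.495150, 9.782150))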
I'm attempting to gauge the percentage difference between two images.
Having done a lot of reading, I seem to have a number of options, but I'm not sure which method is best for:
Ease of coding
Performance.
The methods I've seen are:
Non-language-specific (academic): Image comparison - fast algorithm
Mac-specific: direct pixel access - http://www.markj.net/iphone-uiimage-pixel-color/
Does anyone have any advice about what solutions make most sense for the above two cases and have code samples to show how to apply them?
I've had success calculating the difference between two images using the histogram technique mentioned here. redmoskito's answer in the SO question you linked to was actually my inspiration!
The following is an overview of the algorithm I used (a rough code sketch follows the list):
Convert the images to grayscale—compare one channel instead of three.
Divide each image into an n * n grid of "subimages". Then, for each pair of corresponding subimages:
Calculate their colour composition histograms.
Calculate the absolute difference between the two histograms.
The maximum difference found between two subimages is a measure of the two images' difference. Other metrics could also be used (e.g. the average difference between subimages).
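Something like this, for example, using NumPy and Pillow (the grid size and bin count here are arbitrary choices, and both images are assumed to have the same dimensions):
import numpy as np
from PIL import Image

def grid_histogram_difference(path_a, path_b, grid=4, bins=64):
    """Maximum per-cell histogram difference between two images, in [0, 1]."""
    a = np.asarray(Image.open(path_a).convert("L"))   # greyscale: one channel, not three
    b = np.asarray(Image.open(path_b).convert("L"))
    h, w = a.shape
    worst = 0.0
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            # colour-composition histogram of each subimage
            ha, _ = np.histogram(a[ys, xs], bins=bins, range=(0, 256))
            hb, _ = np.histogram(b[ys, xs], bins=bins, range=(0, 256))
            # absolute difference between the two histograms, normalised to [0, 1]
            diff = 0.5 * np.abs(ha - hb).sum() / a[ys, xs].size
            worst = max(worst, diff)
    return worst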
As tskuzzy noted in his answer, if your ultimate goal is a binary "yes, these two images are (roughly) the same" or "no, they're not", you need some meaningful threshold value. You could produce such a value by passing images into the algorithm and tweaking the threshold based on its output and how similar you think the images are. A form of machine learning, I suppose.
I recently wrote a blog post on this very topic, albeit as part of a larger goal. I also created a simple iPhone app to demonstrate the algorithm. You can find the source on GitHub; perhaps it will help?
It is really difficult to suggest something when you don't tell us more about the images or the variations between them. Are they shapes? Are they different objects, and you want to know what class they belong to? Are they the same object, and you want to distinguish between object instances? Are they faces? Are they fingerprints? Are the objects in the same pose? Under the same illumination?
When you say performance, what exactly do you mean? How large are the images? All in all, it really depends. Given what you've said, if it's only about ease of coding and performance, I would suggest just taking the absolute value of the difference of the pixels. That is super easy to code and about as fast as it gets, but it's really unlikely to work for anything other than the most synthetic examples.
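For instance, with NumPy and Pillow the absolute pixel difference is only a few lines (a rough sketch; it assumes the two images have the same dimensions):
import numpy as np
from PIL import Image

def percent_pixel_difference(path_a, path_b):
    """Mean absolute per-pixel difference, as a percentage of the full 0-255 range."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    return 100.0 * np.abs(a - b).mean() / 255.0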
That being said I would like to point you to: DHOG, GLOH, SURF and SIFT.
You can use the fairly basic subtraction technique that the lads above suggested. #carlosdc has hit the nail on the head with regard to the type of image this basic technique can be used for. I have attached an example so you can see the results for yourself.
The first image is from a simulation at some time t. The second image, taken some (simulation) time later, t + dt, was subtracted from the first. The subtracted image (shown in black and white for clarity) then shows how the simulation has changed in that time. This was done as described above and is very powerful and easy to code.
Hope this aids you in some way
This is some old nasty FORTRAN, but it should give you the basic approach. It is not that difficult at all. Because I am doing it on a two-colour palette, you would do this operation for R, G and B separately. That is: compute the intensities or values in each cell/pixel and store them in an array; do the same for the other image, then subtract one array from the other, which will leave you with a colourful subtraction image. My advice would be to do as the lads suggest above and compute the magnitude of the sum of the R, G and B components, so you just get one value. Write that to an array, do the same for the other image, then subtract. Then create a new range for either R, G or B and map the resulting subtracted array onto it; this will give a much clearer picture as a result.
* =============================================================
SUBROUTINE SUBTRACT(FNAME1,FNAME2,IOS)
* This routine subtracts one image from another
* =============================================================
* Common :
INCLUDE 'CONST.CMN'
INCLUDE 'IO.CMN'
INCLUDE 'SYNCH.CMN'
INCLUDE 'PGP.CMN'
* Input :
CHARACTER fname1*(sznam),fname2*(sznam)
* Output :
integer IOS
* Variables:
logical glue
character fullname*(szlin)
character dir*(szlin),ftype*(3)
integer i,j,nxy1,nxy2
real si1(2*maxc,2*maxc),si2(2*maxc,2*maxc)
* =================================================================
IOS = 1
nomap=.true.
ftype='map'
dir='./pictures'
! reading first image
if(.not.glue(dir,fname2,ftype,fullname))then
write(*,31) fullname
return
endif
OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
read(unit2,err=11)nxy2
read(unit2,err=11)rad,dxy
do i=1,nxy2
do j=1,nxy2
read(unit2,err=11)si2(i,j)
enddo
enddo
CLOSE(unit2)
! reading second image
if(.not.glue(dir,fname1,ftype,fullname))then
write(*,31) fullname
return
endif
OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
read(unit2,err=11)nxy1
read(unit2,err=11)rad,dxy
do i=1,nxy1
do j=1,nxy1
read(unit2,err=11)si1(i,j)
enddo
enddo
CLOSE(unit2)
! subtracting images
if(nxy1.eq.nxy2)then
nxy=nxy1
do i=1,nxy1
do j=1,nxy1
si(i,j)=si2(i,j)-si1(i,j)
enddo
enddo
else
print *,'SUBTRACT: Different sizes of image arrays'
IOS=0
return
endif
* normal finishing
IOS=0
nomap=.false.
return
* exceptional finishing
10 write (*,30) fullname
return
11 write (*,32) fullname
return
30 format('Cannot open file ',72A)
31 format('Improper filename ',72A)
32 format('Error reading from file ',72A)
end
! =============================================================
Hope this is of some use. All the best.
Out of the methods described in your first link, the histogram comparison method is by far the simplest to code and the fastest. However, since you want a precise number describing the difference between two images, key point matching will provide far more accurate results.
To implement the histogram method, I would do the following (see the sketch after this list):
Compute the red, green, and blue histograms of each image
Add up the differences between corresponding buckets
If the difference is above a certain threshold, then the percentage is 0%
Otherwise the colors found in the images are similar. So then do a pixel by pixel comparison and convert the difference into a percentage.
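A rough sketch of those steps with NumPy and Pillow (the threshold value and bin count are arbitrary, and the images are assumed to have the same dimensions):
import numpy as np
from PIL import Image

def percent_similarity(path_a, path_b, hist_threshold=0.25):
    """Rough 0-100 'how similar' score following the steps above."""
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    # red, green and blue histograms of each image, with the differences between
    # corresponding buckets added up and normalised to [0, 1]
    hist_diff = 0.0
    for c in range(3):
        ha, _ = np.histogram(a[..., c], bins=256, range=(0, 256))
        hb, _ = np.histogram(b[..., c], bins=256, range=(0, 256))
        hist_diff += 0.5 * np.abs(ha - hb).sum() / a[..., c].size
    hist_diff /= 3.0
    # difference above the threshold: the percentage is 0%
    if hist_diff > hist_threshold:
        return 0.0
    # otherwise the colours are similar, so do a pixel-by-pixel comparison and
    # convert the difference into a percentage
    return 100.0 * (1.0 - np.abs(a - b).mean() / 255.0)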
I don't know any precise algorithms for finding the key points of an image. However once you find them for each image you can do a pixel by pixel comparison for each of the key points.
I am trying to make an app which uses both the GPS and the magnetometer to find the direction of Mecca (the mosque). It has some special features, like a date picker for upcoming prayers, the time left until the next prayers calculated from the current location's time zone, the weather at the current location, and some more. If anyone has sample code related to this, please reply.
Thanks in advance
The magnetometer can be told just to return the current device heading, via CLLocationManager. It can also return unfiltered 3D readings, but there's no reason to use them; from CLHeading just use the trueHeading property. That'll give you the same information shown in the Compass app.
To work out the heading to Mecca from where you are, you can use the formulas given here. Google Maps gives me a geolocation of about 21.436, 39.832 for Masjid al-Haram (I'm not sure it's terribly accurate, so I've kept the precision deliberately low), so you could get the bearing from whatever location CLLocationManager tells you you're at with something like:
#define toRadians(x) ((x)*M_PI / 180.0)
#define toDegrees(x) ((x)*180.0 / M_PI)
...
double y = sin(toRadians(39.832 - currentLocation.longitude)) * cos(toRadians(21.436));
double x = cos(toRadians(currentLocation.latitude)) * sin(toRadians(21.436)) -
           sin(toRadians(currentLocation.latitude)) * cos(toRadians(21.436)) * cos(toRadians(39.832 - currentLocation.longitude));
double bearing = toDegrees(atan2(y, x));
You can then rotate a pointer on screen by the difference between the device's heading and the one you've just calculated. Probably easiest to use a CGAffineTransform on a UIView's transform property.
That's all typed as I answer by the way, not tested. I'd check it against a reliable source before you depend on it.
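If you want to sanity-check the numbers outside of iOS, here is the same maths as a quick Python sketch (the Kaaba coordinates are the rough ones above; true_heading is whatever the trueHeading property gives you):
import math

MECCA_LAT, MECCA_LON = 21.436, 39.832   # the rough geolocation used above

def bearing_to_mecca(lat, lon):
    """Initial great-circle bearing from (lat, lon), in degrees from true north."""
    dlon = math.radians(MECCA_LON - lon)
    y = math.sin(dlon) * math.cos(math.radians(MECCA_LAT))
    x = (math.cos(math.radians(lat)) * math.sin(math.radians(MECCA_LAT)) -
         math.sin(math.radians(lat)) * math.cos(math.radians(MECCA_LAT)) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def needle_rotation(true_heading, lat, lon):
    """How far to rotate an on-screen pointer, given the device's true heading."""
    return (bearing_to_mecca(lat, lon) - true_heading) % 360.0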
I have a projectile that I would like to pass through specific coordinates at the apex of its path. I have been using a superb equation that giogadi outlined here, by plugging in the velocity values it produces into chipmunk's cpBodyApplyImpulse function.
The equation has one drawback that I haven't been able to figure out. It only works when the coordinates that I want to hit have a y value higher than the cannon (where my projectile starts). This means that I can't shoot at a downward angle.
Can anybody help me find a suitable equation that works no matter where the target is in relation to the cannon?
As pointed out above, there isn't any way to make the apex be lower than the height of the cannon (without making gravity work backwards). However, it is possible to make the projectile pass through a point below the cannon; the equations are all here. The equation you need to solve is:
angle = arctan((v^2 [+-] sqrt(v^4 - g*(g*x^2 + 2*y*v^2))) / (g*x))
where you choose a velocity and plug in the x and y positions of the target - assuming the cannon is at (0,0). The [+-] thing means that you can choose either root. If the argument to the square root function is negative (an imaginary root) you need a larger velocity. So, if you are "in range" you have two possible angles for any particular velocity (other than in the maximum range 45 degree case where the two roots should give the same answer).
I suspect one trajectory will tend to 'look' much more sensible than the other, but that's something to play around with once you have something working. You may want to stick with the apex grazing code for the cases where the target is above the cannon.
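If it helps, here is that formula as a quick Python sketch (the velocity, gravity and target coordinates are placeholders; use whatever units your physics world uses):
import math

def launch_angles(v, x, y, g=9.81):
    """The two possible launch angles (in radians) to pass through (x, y)
    from a cannon at (0, 0), or None if the target is out of range at speed v."""
    disc = v**4 - g * (g * x**2 + 2 * y * v**2)
    if disc < 0:
        return None                              # need a larger velocity
    root = math.sqrt(disc)
    return (math.atan2(v**2 + root, g * x),      # the high, lobbed trajectory
            math.atan2(v**2 - root, g * x))      # the low, flat trajectory

# Works for targets below the cannon too, e.g. 40 units away and 5 units down:
print(launch_angles(v=30.0, x=40.0, y=-5.0))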