I have two sets of Latitude/Longitude coordinates.
The first one is the location where an equirectangular panorama picture was taken.
The second one is a point in sight, so it appears somewhere on the picture.
I need the x/y coordinates of the second point to draw it on the picture.
In JavaScript.
Example equirectangular panorama picture and application (using bearing and distance instead of Lat/Lon coordinates):
http://www.diy-streetview.org/data/development/20100121/streetview-playerA.html?streetview=small/001sm.jpg
Hints:
The horizon is at half the height of the image.
The camera was 2 meters above ground.
The Image Direction corresponds to the middle of the image, i.e. where the camera "looks to".
It's in the Netherlands, so a negative altitude is not an error; these points really are below the waterline.
Skip the camera height and GPSAltitude if it gets too complicated.
Some data:
001sm.jpg
GPSLatitude: 51.802104
GPSLongitude: 3.929393
GPSAltitude: 1.100000
GPS Image Direction = 260 degrees
002sm.jpg
GPSLatitude: 51.802082
GPSLongitude: 3.929200
GPSAltitude: -2.270000
GPS Image Direction = 265 degrees
003sm.jpg
GPSLatitude: 51.802082
GPSLongitude: 3.928986
GPSAltitude: -3.710000
GPS Image Direction = 275 degrees
004sm.jpg
GPSLatitude: 51.802104
GPSLongitude: 3.928771
GPSAltitude: -3.710000
GPS Image Direction = 270 degrees
Thanks,
Jan
janmartin AT diy-streetview DOT org
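A minimal JavaScript sketch of the projection described in the hints above, assuming a full 360-degree equirectangular image with the horizon at half its height, a flat-earth approximation (fine over a few hundred meters), and the 2 m camera height; all names are illustrative:

function latLonToXY(cam, target, imageDirectionDeg, imgWidth, imgHeight) {
  var R = 6378137;                                    // earth radius in meters
  var toRad = Math.PI / 180;
  var dNorth = (target.lat - cam.lat) * toRad * R;    // meters north of the camera
  var dEast = (target.lon - cam.lon) * toRad * R * Math.cos(cam.lat * toRad);
  var distance = Math.sqrt(dNorth * dNorth + dEast * dEast);
  var bearing = Math.atan2(dEast, dNorth) / toRad;    // degrees clockwise from north
  // Horizontal: the Image Direction is the middle of the picture and the
  // picture spans 360 degrees, so map the bearing offset onto the width.
  var x = (((bearing - imageDirectionDeg + 540) % 360) / 360) * imgWidth;
  // Vertical: the horizon sits at half the image height and a full
  // equirectangular picture spans 180 degrees top to bottom.
  // Height difference: target altitude minus camera altitude, with the camera
  // assumed 2 m above its GPS position; set dUp to 0 to skip altitude entirely.
  var dUp = (target.alt || 0) - ((cam.alt || 0) + 2);
  var elevation = Math.atan2(dUp, distance) / toRad;  // degrees above the horizon
  var y = (0.5 - elevation / 180) * imgHeight;
  return { x: x, y: y };
}

Calling it with the 001sm.jpg values ({ lat: 51.802104, lon: 3.929393, alt: 1.1 }, Image Direction 260), the point you want to mark, and the image's pixel size gives the x/y to draw at.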
Related
I am writing a program. I have, say, a grid of dots on a piece of paper. I fix one end and bend the paper toward the screen, giving me a trapezoidal shape from the camera's point of view. I have the (x,y) camera coordinate of each dot. Is there a simple way I can change these (x,y) to real life (x,y) which should give me a rectangle? I have the camera/real (x,y) of the original flat sheet of paper pre-bend if that helps.
I have looked at "3D Camera coordinates to world coordinates (change of basis?)" and "Transforming screen coordinates from security camera to real world coordinates".
Look up "homography". The transformation from a plane in 3D space to its image as captured by an ideal pinhole camera is a homography. It can be represented as a 3x3 matrix H that transforms the 3D coordinates X of points in the world to their corresponding homogeneous image coordinates x:
x = H * X
where X = [X, Y, 1]^T holds the point's coordinates on the world plane in homogeneous form, and x = [u, v, w]^T is the image point in homogeneous coordinates; the pixel position is recovered as (u/w, v/w).
Given a minimum of 4 matches between world and image points (e.g. the corners of a rectangle) you can estimate the parameters of the matrix H. For details, look up "DLT algorithm". In OpenCV the routine to use is findHomography.
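Once H is estimated, mapping a world-plane point to pixels is just the matrix multiply followed by dividing out the homogeneous scale. A small JavaScript sketch (illustrative, not any particular library's API):

// Apply a 3x3 homography H (row-major array of 9 numbers) to a world-plane
// point (X, Y) and dehomogenize to pixel coordinates.
function applyHomography(H, X, Y) {
  var u = H[0] * X + H[1] * Y + H[2];
  var v = H[3] * X + H[4] * Y + H[5];
  var w = H[6] * X + H[7] * Y + H[8];
  return { x: u / w, y: v / w };   // divide out the homogeneous scale
}

Mapping the other way (image back to the plane, e.g. to flatten the bent sheet into a rectangle) is the same operation with the inverse of H.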
Look at the picture above; you will see a black circle.
The black circle's coordinates are lat 37.530028, lon 126.897453.
If the red rectangle is a square (20 m both vertically and horizontally), I want to know the coordinates of the blue circle.
Please let me know the calculation formula.
Thanks in advance for your answer!
Have a good time :)
This task is solved by first transforming the spherical lat/long coordinates to Cartesian x/y coordinates in meters.
Then you calculate the new location with very basic addition (e.g. x = x - 20, y = y - 20/2).
Then you transform the location back to lat/long coordinates.
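A minimal JavaScript sketch of that round trip, using a local flat-earth approximation that is fine for offsets of a few tens of meters (the example offsets are the -20 m / -10 m from the answer above, applied to the black circle's coordinates):

// Shift a lat/long position by dx meters east and dy meters north.
function offsetLatLon(lat, lon, dxMeters, dyMeters) {
  var R = 6378137;                         // earth radius in meters
  var toRad = Math.PI / 180;
  var dLat = (dyMeters / R) / toRad;                            // meters north -> degrees
  var dLon = (dxMeters / (R * Math.cos(lat * toRad))) / toRad;  // meters east -> degrees
  return { lat: lat + dLat, lon: lon + dLon };
}

var blue = offsetLatLon(37.530028, 126.897453, -20, -20 / 2);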
I have a database (PostgreSQL) with coordinates of lightning bolts.
I want the coordinates that fall inside a sector of a circle (via an SQL query). I have the radius, the angle (Z), and point A of the circle, as well as the coordinates of the extreme points ([lat1, lng1], [lat2, lng2]).
Thanks a lot!
I've got a bunch of thumbnails/icons packed right up next to each other in a texture map / sprite sheet. In terms of pixels, they are being scaled up from 145 pixels square to 238 screen pixels square. I was expecting +-1 or 2 pixels of accuracy on the edges of the box when accessing the texture coordinates, so I'm also drawing a 4 pixel outline on top of the thumbnail to hide this probable artifact. But I'm seeing huge variations in accuracy: sometimes it's off in one direction, sometimes the other.
I've checked over the math and I can't figure out what's happening.
The thumbnail is being scaled up about 1.64 times, so a single pixel off in the source texture coordinate could result in around 2 pixels off on the screen. The 4 pixel white frame on top is being drawn at a 1:1 pixel-to-fragment relationship and is supposed to cover about 2 pixels on either side of the edge of the box. That part is working. Here I've turned off the border to show how far off the texture coordinates are...
I can tweak the numbers manually to make it go away. But I have to shrink the texture coordinate width/height by several source pixels and in some cases add (or subtract) 5 or 6 pixels to the starting point. I really just want the math to work out or to figure out what I'm doing wrong here. This sort of stuff drives me nuts!
A bunch of crap to know.
The shader is doing the texture coordinate offsetting in the vertex shader...
v_fragmentTexCoord0 = vec2((a_vertexTexCoord0.x * u_texScale) + u_texOffset.s, (a_vertexTexCoord0.y * u_texScale) + u_texOffset.t);
gl_Position = u_modelViewProjectionMatrix * vec4(a_vertexPosition,1.0);
This object is a box which is a triangle strip with 2 tris.
Not that it should matter, but the matrix applied to the model isn't doing any scaling. The box is at screen scale. The scaling is happening only in the texture coordinates that are being supplied.
The texture coordinates of the object as seen above run from 0.00 to 0.07, and the shader then adds an offset amount that is different per thumbnail. 0.07 of 2048 is about 143. Originally I had it at 0.0708, which should be closer to 145, but it was worse and showed more like 148 pixels of the texture. To get it to show only 145 source pixels I have to make it 0.06835, which is 140 pixels.
I've tried doing the math in a calculator and typing the numbers in directly. I've also tried entering it as 1305/2048. These are going into GLfloats, not doubles.
This texture map image is PNG and is loaded with these settings:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
but I've also tried GL_LINEAR with no apparent difference.
I'm not having any accuracy problems on other textures (in the same texture map) where I'm not doing the texture scaling.
It doesn't get farther off as the coords get higher. In the image above the NEG MAP thumb is right next to the HEAT MAP thumb; they are off in different directions but correct at the seam.
Here's the offset data for those two:
filterTypes[FT_gradientMap20].thumbTexOffsetS = 0.63720703125;
filterTypes[FT_gradientMap20].thumbTexOffsetT = 0.1416015625;
filterTypes[FT_gradientMap21].thumbTexOffsetS = 0.7080078125;
filterTypes[FT_gradientMap21].thumbTexOffsetT = 0.1416015625;
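For what it's worth, those offsets are exact texel fractions of the 2048-wide sheet (a quick check in JavaScript, assuming the 2048 texture size mentioned above):

0.63720703125 * 2048;   // = 1305  (start column of the first thumbnail)
0.7080078125 * 2048;    // = 1450  (start column of the next one, 145 texels later)
0.1416015625 * 2048;    // = 290   (start row shared by both)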
==== UPDATE ====
Off the bat I realized a couple of things I was doing wrong, which are discussed over here: OpenGL Texture Coordinates in Pixel Space
The width of a single thumbnail is 145. But that spans texels 0-144, with 145 starting the next one. I was using a width of 145, so that's going to be 1 pixel too big. Using the center-of-pixel math above, we should actually go from the center of texel 0 to the center of texel 144: 144.5 - 0.5 = 144.
Using his formula of (2i + 1)/(2N) I made new offset amounts for each of the starting points and used 144/2048 as the width. That made things better, but it was still off in some areas, and again off in one direction sometimes and the other at other times, although consistently so for each x or y position.
Using a width of 143 gives better results. But I could fix them all by just adjusting the numbers manually until they work; I want the math to work out right.
... or maybe it has something to do with min/mag filtering - although I read up on that and what I'm doing seems right for this case.
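For reference, here's how the center-of-pixel formula from the update works out in code, using the 1305-texel start column from the offsets above and a 145-texel thumbnail (a sketch; the actual layout may differ):

var N = 2048;                                     // texels across the sprite sheet
function texRange(startTexel, sizeTexels) {
  var s0 = (2 * startTexel + 1) / (2 * N);                        // center of the first texel
  var s1 = (2 * (startTexel + sizeTexels - 1) + 1) / (2 * N);     // center of the last texel
  return { offset: s0, scale: s1 - s0 };                          // scale works out to (size - 1) / N
}
texRange(1305, 145);   // { offset: 0.637451171875, scale: 0.0703125 }  i.e. 144/2048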
After a lot of experiments and having to create a grid-lined guide texture so I could see exactly how far off each texture was... I finally got it!
It's pretty simple actually.
uniform mat4 u_modelViewProjectionMatrix;
uniform mediump vec2 u_texOffset;
uniform mediump float u_texScale;
attribute vec3 a_vertexPosition;
attribute mediump vec2 a_vertexTexCoord0;
It was the precision of the texture coordinates. By specifying mediump, it just fixed itself. I suspect this would also help solve the problem I was having in this question:
Why is a texture coordinate of 1.0 getting beyond the edge of the texture?
Once I did that, I had to go back to my original 145 width (which still seems wrong, but oh well). And for what it's worth, I then ended up going back to all my original math on the texture coordinates. The "center of pixel" method was showing more of the neighboring pixels than the straight /2048 did.
I am currently trying to figure out a way to calculate the size of a given object with the Kinect,
since I have the following data
angular field of view of the lens
distance
and the width in pixels at an 800*600 resolution.
I believe this should be possible to calculate. Does anyone have the math skills to give me a little help?
With some trigonometry, it should be possible to approximate.
If you draw a right triangle ABC, with the camera at vertex A, the object along the far leg BC, and the right angle at C, then the height of the object is going to be the length of leg BC. The distance to the pixel might be the length of leg AC or of the hypotenuse AB; the Kinect sensor specifications determine which. If you get the distance to the center of a pixel, then it will be AC; if you have distances to pixel corners, then it will be AB.
With A representing the angle at the camera that the pixel subtends, d the length of the hypotenuse, and y the length of the far leg (BC):
sin(A) = y / d
y = d sin(A)
y is the length of the pixel projected onto the object plane. You calculate it by multiplying the sine of the angle by the distance to the object.
Here I confess I do not know the API of the Kinect, and what level of detail it provides. You say you have the angle of the field of view. You might assume each pixel of your 800x600 pixel grid takes up an equal angle of your camera's field of view. If you do, then you can break that field of view into equal pieces to measure the linear size of your object in each pixel.
You also mentioned that you have the distance to the object. I was assuming that you have a distance map for each pixel of the 800x600 grid. If this is incorrect, some calculations can be done to approximate a distance grid for the pixels involving the object of interest if you make some assumptions about the object being measured.
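A minimal JavaScript sketch of that approximation, assuming every pixel spans an equal slice of the horizontal field of view and using the tangent form (which agrees with the sine form above for small angles); the example numbers are illustrative:

// Approximate real-world width of an object from its width in pixels.
// fovDegrees: angular field of view of the lens (horizontal)
// distanceMeters: distance from the camera to the object
// objectPixels: width of the object in the image, in pixels
// imageWidthPixels: horizontal resolution (800 here)
function objectWidth(fovDegrees, distanceMeters, objectPixels, imageWidthPixels) {
  var toRad = Math.PI / 180;
  var angle = (objectPixels / imageWidthPixels) * fovDegrees * toRad; // angle the object subtends
  return 2 * distanceMeters * Math.tan(angle / 2);
}

objectWidth(57, 2.5, 120, 800);   // an object 120 px wide at 2.5 m: roughly 0.37 m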