Lat/lon coordinates (WGS84): conversion to a local x, y plane

Currently I'm trying the following: I have points from Google Earth (WGS84) which I want to transform to a local x, y coordinate system: a tangential plane with y positive from south to north and x positive from west to east.
There is no need for the plane to be part of a global coordinate system beyond the relation (x=0, y=0) = (lat, lon). The scale at which I'm working is on the order of, say, 100 kilometers (a maximum of, for example, 200 km). Very small errors (due to, for example, the curvature of the earth) are acceptable.
I have relatively little understanding of this topic as yet. Can anybody help me out? Where would I need to look, for example?
Thanks!

I haven't worked out the answer mathematically, but I have found that the basemap package (from mpl_toolkits) should help in this respect (it can project from WGS84 to a transverse Mercator projection).
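If you only need the tangent-plane approximation described in the question, a simple equirectangular projection about the chosen origin is usually accurate enough at this scale. Below is a minimal sketch in C++ (my own illustration, not from basemap; it assumes a spherical Earth with mean radius 6371 km, and the function and type names are made up):

#include <cmath>

struct LocalXY { double x; double y; };   // meters; x positive to the east, y positive to the north

// Equirectangular / local tangent plane approximation around (lat0, lon0).
// Good to a small fraction of a percent over 100-200 km, per the question's tolerance.
LocalXY wgs84ToLocal(double lat, double lon, double lat0, double lon0)
{
    const double R = 6371000.0;                        // mean Earth radius in meters (spherical assumption)
    const double d2r = 3.14159265358979323846 / 180.0; // degrees to radians
    LocalXY p;
    p.x = R * (lon - lon0) * d2r * std::cos(lat0 * d2r);   // west -> east
    p.y = R * (lat - lat0) * d2r;                          // south -> north
    return p;
}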

Related

pose estimation: determine whether the rotation and translation matrices are right

Recently I've been struggling with a pose estimation problem using a single camera. I have some 3D points and the corresponding 2D points on the image. I then use solvePnP to get the rotation and translation vectors. The problem is: how can I determine whether the vectors are the right results?
For now, I use an indirect way to do this:
I use the rotation matrix, the translation vector and the world 3D coordinates of a certain point to obtain the coordinates of that point in the camera system. Then all I have to do is determine whether those coordinates are reasonable. I think I know the directions of the x, y and z axes of the camera system.
Is the camera center the origin of the camera system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point in world space along the camera's x-axis direction (the sign then being determined by which side of the camera the point is on)?
The figure below is drawn in world space, while the axes depicted are those of the camera system.

======== How the camera and the point are placed in world space ========

Camera --------------------------------> Z axis
  |                     |
  |                     |  } Xw?
  |                     P(Xw, Yw, Zw)
  |
  v  X axis
My rvec and tvec results seem both right and wrong. For a specific point, the z value seems reasonable; I mean, if this point is about one meter away from the camera in the z direction, then the z value is about 1. But for x and y, based on the location of the point I think x and y should be positive, yet they are negative. What's more, the pattern detected in the original image looks like this:
But using the point coordinates calculated in the camera system and the camera intrinsic parameters, I get an image like this:
The target keeps its pattern, but it has moved from the bottom right to the top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which seems to be right according to this post.
In the case of camera pose estimation, "the value seems reasonable" can be quantified as the backprojection error. That's a measure of how well your resulting rotation and translation map the 3D points onto the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);
// Use the computed solution to project the 3D pattern back into the image
std::vector<cv::Point2f> projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);
// Compute the error of each 2D-3D correspondence
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints[i].x - projectedPattern[i].x;
    float dy = pixelPoints[i].y - projectedPattern[i].y;
    // Euclidean distance between the projected and the measured pixel
    float err = std::sqrt(dx*dx + dy*dy);
    errors.push_back(err);
}
// Here, compute the max or average of your "errors"
// Here, compute max or average of your "errors"
An average backprojection error for a calibrated camera might be in the range of 0 to 2 pixels. Judging by your two pictures, yours would be way more than that. To me it looks like a scaling problem. If I'm right, you compute the projection yourself. Maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I have learned not to follow my imagination :) The first thing I usually do with the returned rVec and tVec is create a 4x4 rigid transformation matrix out of them (I once posted code for that here). This makes things even less intuitive, but it is compact and handy.
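For reference, a minimal sketch of that step (assuming OpenCV types; rVec and tVec are the solvePnP outputs and the variable names are illustrative):

cv::Mat R;
cv::Rodrigues(rVec, R);                    // 3x1 rotation vector -> 3x3 rotation matrix

cv::Mat T = cv::Mat::eye(4, 4, R.type());  // 4x4 rigid transformation, starts as identity
R.copyTo(T(cv::Rect(0, 0, 3, 3)));         // top-left 3x3 block = rotation
tVec.copyTo(T(cv::Rect(3, 0, 1, 3)));      // last column (rows 0 to 2) = translation
// A world point written as a 4x1 homogeneous vector is then mapped into the
// camera frame by pCam = T * pWorld.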
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the coordinates in the camera system are calculated as (xc, yc, zc). Then xc is the distance between the camera and the point in the real world along the camera's x direction.
Next, how can I determine whether the output matrices are right?
1. As #eidelen points out, the backprojection error is one indicative measure.
2. Calculate the points' coordinates in the camera system from their world coordinates and the returned matrices, and check whether they are plausible (see the sketch below).
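For illustration, a minimal sketch of step 2 (assuming OpenCV types; Xw, Yw, Zw are placeholder world coordinates and rVec/tVec are the solvePnP outputs):

cv::Mat R;
cv::Rodrigues(rVec, R);                                    // rotation vector -> rotation matrix
cv::Mat pWorld = (cv::Mat_<double>(3, 1) << Xw, Yw, Zw);   // placeholder world point
cv::Mat pCam = R * pWorld + tVec;                          // the same point in the camera system
// pCam.at<double>(2) should roughly match the known distance along the camera's z axis,
// and the signs of pCam.at<double>(0) and pCam.at<double>(1) should match which side of
// the camera the point lies on.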
So why did I get a wrong result (the pattern remained, but moved to a different region of the image)?
The cameraMatrix parameter of solvePnP() supplies the camera's intrinsic parameters. In the camera matrix you should use width/2 and height/2 for cx and cy, whereas I had used the full width and height of the image. I think that is what caused the error. After I corrected that and re-calibrated the camera, everything seems fine.
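For clarity, this is roughly what such a camera (intrinsic) matrix looks like; fx, fy, w and h are placeholders for your calibrated focal lengths and image size:

// cx and cy are the principal point, roughly the image center (w/2, h/2),
// not the full image width and height.
cv::Mat camIntrinsics = (cv::Mat_<double>(3, 3) <<
    fx,  0.0, w / 2.0,
    0.0, fy,  h / 2.0,
    0.0, 0.0, 1.0);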

Finding coordinates between longitude and latitude

How do I find a few coordinates that lie on a straight line between 2 coordinates?
For example:
Start coordinate: Lat=X1 Long=Y1
End coordinate: Lat=X2 Long=Y2
Make a straight line from X1,Y1 to X2,Y2.
Then find 5 points that are located on that line, spread at equal distances.
Can anyone help me find the algorithm and calculation?
The coordinates are in decimal format, e.g. 50.123456, 6.123456.
Thanks.
There are no "straight lines" on a sphere (or ellipsoid).
Anyway, you'll need to:
Calculate the distance and initial azimuth from (x1,y1) to (x2,y2).
You can use Vincenty's inverse method.
Calculate the coordinates of the points at distances 0.25d, 0.5d and 0.75d from (x1,y1) along that azimuth (plus the points (x1,y1) and (x2,y2) themselves, of course).
You can use Vincenty's direct method.
Both direct and inverse methods are described on Wikipedia.
Extremely accurate implementations of both the direct and inverse problems are available as part of GeographicLib.
Less accurate but much simpler methods are described in the Aviation Formulary.
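For illustration, a minimal sketch using GeographicLib's C++ interface (the endpoint values are placeholders, and the fractions follow the 0.25d/0.5d/0.75d scheme above):

#include <GeographicLib/Geodesic.hpp>
#include <cstdio>

int main()
{
    const GeographicLib::Geodesic& geod = GeographicLib::Geodesic::WGS84();

    double lat1 = 50.123456, lon1 = 6.123456;   // start point (placeholder values)
    double lat2 = 51.123456, lon2 = 7.123456;   // end point (placeholder values)

    // Inverse problem: geodesic distance s12 (meters) and initial azimuth azi1 (degrees)
    double s12, azi1, azi2;
    geod.Inverse(lat1, lon1, lat2, lon2, s12, azi1, azi2);

    // Direct problem: step along the same geodesic at fractions of the total distance
    for (double f : {0.25, 0.5, 0.75}) {
        double lat, lon;
        geod.Direct(lat1, lon1, azi1, f * s12, lat, lon);
        std::printf("%.6f, %.6f\n", lat, lon);
    }
    return 0;
}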

GEOS C API - calculating areas with WGS84 coords (SRID=4326)

I create a polygon where each x/y point is a WGS84-format lat/long value.
The polygons are good approximations to circles and sectors of radius R (each circumference/arc point is a projected lat/long value at distance R from a center/apex coordinate, which I have verified is correct by computing the Haversine distance between the edge and reference points and getting a value of R back).
I use GEOSSetSRID(4326) to indicate that the coords are in WGS84 format, and GEOSGetSRID() confirms the SRID is set.
Use of GEOSArea then gives a value not even remotely close to the expected value.
I do not see what else I can do programmatically.
If I set the points in Cartesian format and then set the SRID to 4326, will GEOS implicitly convert the polygon points to WGS84?
Is the basic GEOS C API incapable of doing the above? Does the SRID have no meaning to the API at all?
Any info/pointers to correct usage/solutions would be much appreciated.
TIA.
The value GEOS gives back is in something like degrees (or square degrees for an area). In actuality, the GEOS API (at least the C++ interface) is unit-agnostic; the units of the result are whatever units you passed in.
In general, multiplying a distance result by roughly 111,000 (meters per degree) gives you a fairly accurate measurement in meters. For an area, you have to multiply by 111,000 squared.
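A rough sketch of that conversion with the GEOS C API (the polygon geometry is assumed to exist already; the 111,000 m per degree factor is only a crude approximation that degrades away from the equator):

// Area comes back in square degrees because the input coordinates are lon/lat.
double areaDeg2 = 0.0;
if (GEOSArea(polygon, &areaDeg2) == 1) {
    // Crude conversion: roughly 111,000 m per degree, squared for an area.
    double areaM2 = areaDeg2 * 111000.0 * 111000.0;
    printf("approximate area: %.1f m^2\n", areaM2);
}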

kinect object measuring

I am currently trying to figure out a way to calculate the size of a given object with the Kinect,
since I have the following data:
angular field of view of the lens
distance
and the width in pixels at 800*600 resolution.
I believe this should be possible to calculate. Does anyone have the math skills to give me a little help?
With some trigonometry, it should be possible to approximate.
If you draw a right triangle ABC with the camera at one vertex (A) and the object at the far side (edge BC), and the right angle at (C), then the height of the object is the length of leg BC. The distance to the pixel might be the length of leg AC or of the hypotenuse AB; the Kinect sensor specifications determine which. If you get the distance to the center of a pixel, it will be AC; if you get distances to pixel corners, the distance will be AB.
With A representing the angle at the camera that the pixel subtends, d being the length of the hypotenuse and y the length of the far leg (edge BC):
sin(A) = y / d
y = d sin(A)
y is the length of the pixel projected onto the object plane. You calculate it by multiplying the sine of the angle by the distance to the object.
Here I confess I do not know the Kinect API and what level of detail it provides. You say you have the angle of the field of view. You might assume each pixel of your 800x600 grid takes up an equal angle of your camera's field of view. If you do, then you can break that field of view into equal pieces to measure the linear size of your object in each pixel.
You also mentioned that you have the distance to the object. I assumed you have a distance map for each pixel of the 800x600 grid. If that is incorrect, some calculations can be done to approximate a distance grid for the pixels covering the object of interest, provided you make some assumptions about the object being measured.
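A minimal sketch of the equal-angle-per-pixel idea described above (the function name and parameters are illustrative; it assumes a pinhole-like model with the distance measured along the optical axis):

#include <cmath>

// Approximate physical width of an object spanning `pixels` columns of an
// `imageWidth`-pixel-wide image, given the horizontal field of view (degrees)
// and the distance to the object (meters).
double approxObjectWidth(double fovDegrees, double distanceMeters,
                         double pixels, double imageWidth)
{
    double fovRad = fovDegrees * 3.14159265358979323846 / 180.0;
    double objectAngle = fovRad * (pixels / imageWidth);   // angle subtended by the object
    // width = 2 * d * tan(angle / 2); close to d * sin(angle) for small angles
    return 2.0 * distanceMeters * std::tan(objectAngle / 2.0);
}

// Example: 60-degree horizontal FOV, object 2 m away spanning 100 of 800 pixels
// double w = approxObjectWidth(60.0, 2.0, 100.0, 800.0);   // roughly 0.26 m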

How to calculate the area of a region composed of multiple coordinates?

As the title says: the coordinate values (latitude and longitude) are known, and these coordinates form a polygonal area. My question is how to calculate the area of that polygon geographically.
Thanks for your help.
First you need to know whether the curvature of the surface is significant. If the area is relatively small, then you can get a good approximation by projecting the coordinates onto a plane:
Determine units of measure per degree of latitude (e.g. meters per degree)
Determine units of measure per degree of longitude at a given latitude (the conversion factor varies as you go north or south)
Convert latitude and longitude pairs to (x,y) pairs in the plane
Use an algorithm to compute the area of a polygon (see the sketch after this answer). See StackOverflow 451425 or Paul Bourke.
If you are calculating a large area then spherical techniques must be used.
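For the planar case, a minimal sketch of the last two steps (the shoelace formula; the Pt type and the conversion to plane coordinates are assumed to come from the steps above):

#include <vector>
#include <cmath>

struct Pt { double x; double y; };   // planar coordinates (e.g. meters) after the conversion step

// Shoelace formula: area of a simple (non-self-intersecting) polygon whose
// vertices are listed in order.
double polygonArea(const std::vector<Pt>& pts)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < pts.size(); ++i) {
        const Pt& a = pts[i];
        const Pt& b = pts[(i + 1) % pts.size()];
        sum += a.x * b.y - b.x * a.y;
    }
    return std::fabs(sum) / 2.0;
}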
If I understand your question correctly, triangulation should help you: basically you break the polygon into triangles in such a way that they don't overlap, and sum their areas.