I have read the Nutiteq API Reference thoroughly and haven't found a built-in method to get the pixel representation of a longitude/latitude on the device. There is nothing under the existing Projections either, so I don't know how to overcome this.
What I want is to draw a circle for my actual GPS location like this, NOT like the n-vertex Polygon in HelloMap3D.
Getting the pixel values of lat, lon and radius at a given zoom level under a given projection is the challenge, because the rest would be calls like this:
...
canvas.drawCircle(longitudeInPixel, latitudeInPixel, radiusInPixel, this.paintStroke); // <- for the blue circumference
canvas.drawCircle(longitudeInPixel, latitudeInPixel, radiusInPixel, this.paintFill); // <- for the blue translucent circle
...
So, how could I turn lat, lon and radius into their pixel representation under Nutiteq?
I thank you all in advance.
MapView has a worldToScreen() method for this; see the Map Calculations page in the Nutiteq Android demo project wiki.
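A rough sketch of how that could feed the drawing calls from the question. Only worldToScreen() comes from the answer above; the projection calls and exact signatures are assumptions to verify against the API reference, and the radius scaling uses the standard Web Mercator ground resolution:
// Metres covered by one pixel at this latitude and zoom level
// (standard Web Mercator ground resolution: 256 px tiles, Earth radius 6378137 m)
static double metersPerPixel(double latDeg, int zoom) {
    return Math.cos(Math.toRadians(latDeg)) * 2.0 * Math.PI * 6378137.0
            / (256.0 * Math.pow(2, zoom));
}
// Assumed usage; method names other than worldToScreen() are hypothetical
MapPos world = mapView.getLayers().getBaseProjection().fromWgs84(lon, lat);
MapPos screen = mapView.worldToScreen(world.x, world.y, 0);
float radiusInPixel = (float) (radiusInMeters / metersPerPixel(lat, zoom));
canvas.drawCircle((float) screen.x, (float) screen.y, radiusInPixel, this.paintStroke);
canvas.drawCircle((float) screen.x, (float) screen.y, radiusInPixel, this.paintFill);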
Recently I've been struggling with a pose-estimation problem with a single camera. I have some 3D points and the corresponding 2D points in the image. I then use solvePnP to get the rotation and translation vectors. The problem is: how can I determine whether the vectors are correct?
Now I use an indirect way to do this:
I use the rotation matrix, the translation vector and the world 3D coordinates of a certain point to obtain that point's coordinates in the camera system. Then all I have to do is determine whether those coordinates are reasonable. I think I know the directions of the x, y and z axes of the camera system.
Is the camera center the origin of the camera system?
Now consider the x component of that point. Is x equivalent to the distance between the camera and the point in world space along the camera's x-axis direction (with the sign determined by which side of the camera the point is on)?
The figure below is in world space, while the axes depicted are in the camera system.
========= How the camera and the point are placed in world space =========
|
|
Camera --------------------------> Z axis
|            |
|            |} Xw?
|            P(Xw, Yw, Zw)
|
v
x-axis
My rvec and tvec results seem both right and wrong. For a specific point the z value seems reasonable: if the point is about one metre away from the camera in the z direction, then the z value is about 1. But for x and y, given the location of the point, I think x and y should be positive, yet they are negative. What's more, the pattern detected in the original image looks like this:
But using the point coordinates calculated in the camera system and the camera's intrinsic parameters, I get an image like this:
The target keeps its pattern, but it has moved from the bottom right to the top left. I cannot understand why.
Yes, the camera center is the origin of the camera coordinate system, which this post also seems to confirm.
In camera pose estimation, "the value seems reasonable" can be made precise as the backprojection error: a measure of how well your resulting rotation and translation map the 3D points to the 2D pixels. Unfortunately, solvePnP does not return a residual error measure, so one has to compute it:
cv::solvePnP(worldPoints, pixelPoints, camIntrinsics, camDistortion, rVec, tVec);
// Use the computed solution to project the 3D pattern back onto the image
std::vector<cv::Point2f> projectedPattern;
cv::projectPoints(worldPoints, rVec, tVec, camIntrinsics, camDistortion, projectedPattern);
// Compute the error of each 2D-3D correspondence
std::vector<float> errors;
for (size_t i = 0; i < pixelPoints.size(); ++i)
{
    float dx = pixelPoints[i].x - projectedPattern[i].x;
    float dy = pixelPoints[i].y - projectedPattern[i].y;
    // Euclidean distance between the projected and the measured pixel
    float err = std::sqrt(dx * dx + dy * dy);
    errors.push_back(err);
}
// Here, compute the max or the average of your "errors"
An average backprojection error for a calibrated camera might be in the range of 0 to 2 pixels. Judging from your two pictures, yours would be far more than that. To me it looks like a scaling problem. If I am right, you compute the projection yourself; maybe you can try cv::projectPoints() once and compare.
When it comes to transformations, I learned not to follow my imagination :) The first thing I usually do with the returned rVec and tVec is to create a 4x4 rigid transformation matrix out of them (I once posted code for that here). This makes things even less intuitive, but in return it is compact and handy.
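For illustration, a minimal sketch of that construction, written with OpenCV's Java bindings (org.opencv.core.*, org.opencv.calib3d.Calib3d) to keep the examples here in one language. The helper name is hypothetical, and rVec/tVec are assumed to be 3x1 CV_64F Mats as returned by solvePnP:
// Pack rVec/tVec into a 4x4 rigid transformation matrix [R|t; 0 0 0 1]
static Mat rigidTransform(Mat rVec, Mat tVec) {
    Mat R = new Mat();
    Calib3d.Rodrigues(rVec, R);          // rotation vector -> 3x3 rotation matrix
    Mat T = Mat.eye(4, 4, CvType.CV_64F);
    R.copyTo(T.submat(0, 3, 0, 3));      // top-left 3x3 block = R
    tVec.copyTo(T.submat(0, 3, 3, 4));   // last column = t
    return T;
}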
Now I know the answers.
Yes, the camera center is the origin of the camera coordinate system.
Consider that the point's coordinates in the camera system are (xc, yc, zc). Then xc is the distance between the camera and the point in the real world, measured along the camera's x direction.
Next, how to determine whether the output matrices are right?
1. As #eidelen points out, the backprojection error is one indicative measure.
2. Calculate the points' camera-system coordinates from their world coordinates and the matrices, and check whether they match what you expect (see the sketch below).
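A sketch of that second check for a single point, again with OpenCV's Java bindings (wx, wy, wz are placeholders for the point's world coordinates):
// Transform a world point into camera coordinates: Xc = R * Xw + t
Mat R = new Mat();
Calib3d.Rodrigues(rVec, R);                  // rotation vector -> 3x3 matrix
Mat Xw = new Mat(3, 1, CvType.CV_64F);
Xw.put(0, 0, wx, wy, wz);
Mat Xc = new Mat();
Core.gemm(R, Xw, 1.0, tVec, 1.0, Xc);        // Xc = 1*R*Xw + 1*tVec
double zc = Xc.get(2, 0)[0];                 // distance along the camera's z axis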
So why did I get a wrong result (the pattern remained but moved to a different region of the image)?
The cameraMatrix parameter of solvePnP() supplies the camera's intrinsic parameters. In the camera matrix you should use width/2 and height/2 for cx and cy, while I had used the full width and height of the image. I think that caused the error. After I corrected that and re-calibrated the camera, everything seems fine.
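In other words, the intrinsic matrix should have the principal point at the image centre (sketch with the Java bindings; fx, fy, width and height are placeholders from your calibration):
// [ fx  0  width/2  ]
// [ 0   fy height/2 ]
// [ 0   0  1        ]
Mat K = Mat.eye(3, 3, CvType.CV_64F);
K.put(0, 0, fx);  K.put(0, 2, width / 2.0);
K.put(1, 1, fy);  K.put(1, 2, height / 2.0);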
MapKit has the function MKMapPointForCoordinate. It accepts a lat/lng as its argument and returns an (x, y) point.
https://developer.apple.com/library/prerelease/ios/documentation/MapKit/Reference/MapKitFunctionsReference/index.html
lat = 59.90738808515509
lng = 10.724523067474365
If we pass the above lat/lng, the function returns
x = 142214284, y = 78089986
I checked the lat/lng against UTM, but it gives a different result:
http://www.latlong.net/lat-long-utm.html
MKMapPointForCoordinate doesn't return UTM coordinates.
Coordinates refer to a position on the earth (a pseudo-sphere), but sometimes you need to do calculations against a 2D map (much simpler) and then convert back to coordinates. This is the goal of the conversion.
So the MKMapPoint struct returned by MKMapPointForCoordinate is a 2D representation of the coordinates, but it doesn't match any known standard.
At this link: https://developer.apple.com/library/prerelease/ios/documentation/MapKit/Reference/MapKitDataTypesReference/index.html#//apple_ref/doc/c_ref/MKMapPoint
in the MKMapPoint documentation, you can read:
The actual units of a map point are tied to the underlying units used
to draw the contents of an MKMapView, but you should never need to
worry about these units directly. You use map points primarily to
simplify computations that would be complex to do using coordinate
values on a curved surface.
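For intuition only, a typical spherical-to-planar mapping looks like the standard Web Mercator point formula below; this is just an illustration of the concept, not documented as what MapKit uses internally (W stands for the width of the flat map in points):
// Illustrative sketch: project lat/lon onto a flat square map of W x W points
static double[] toPlanar(double lat, double lon, double W) {
    double x = (lon + 180.0) / 360.0 * W;
    double sinLat = Math.sin(Math.toRadians(lat));
    double y = (0.5 - Math.log((1 + sinLat) / (1 - sinLat)) / (4 * Math.PI)) * W;
    return new double[] { x, y };
}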
EDIT
For coordinates-to-UTM conversion, in a previous project I used this open-source code:
Look at the picture above and see the black circle.
The black circle's coordinates are lat(37.530028), lon(126.897453).
If the red rectangle is a square (20 m both vertically and horizontally), I want to know the coordinates of the blue circle.
Please let me know the calculation formula.
Thanks in advance for your answer!
Have a good time :)
This task is solved by first transforming the spherical lat/long coordinates into Cartesian x, y coordinates with unit metres.
Then you calculate the new location with very basic addition (x = x - 20, y = y - 20/2).
Then you transform the location back to lat/long coordinates, as sketched below.
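A minimal sketch of that recipe under a flat-earth approximation, which is plenty accurate for offsets of a few tens of metres (6378137 m is the Earth radius; the example offsets at the end are placeholders):
// Move a lat/lon position by dNorth / dEast metres
static double[] offsetMeters(double lat, double lon, double dNorth, double dEast) {
    double r = 6378137.0;
    double newLat = lat + Math.toDegrees(dNorth / r);
    double newLon = lon + Math.toDegrees(dEast / (r * Math.cos(Math.toRadians(lat))));
    return new double[] { newLat, newLon };
}
// E.g. a point 20 m west and 10 m south of the black circle:
double[] blue = offsetMeters(37.530028, 126.897453, -10.0, -20.0);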
I'm a libgdx noob - apologies if this is an obvious question.
Using libgdx I have set up a Perspective camera, looking at origin (camera near = 1f, far = 300f). Viewport extends across the entire screen.
Basically, I would like to know how to convert a 2D screen coordinate (x,y) to the 3D world coordinate (x, y, z) where the Z value is clamped to the camera's near plane.
I think I should use the camera.getPickRay method to get a picking ray for the screen coordinate. I should then get the intersection point of this ray and the camera's near plane to get the world coordinate of the point on the near plane.
I thought that the resulting Ray object's origin property was the near-plane intersection point I was after, but this doesn't seem to be the case.
Am I on the right track?
Camera#unproject() converts a screen coordinate to world space; the z value you pass in selects the depth, from the near plane (z=0) to the far plane (z=1), see the javadocs. Camera#getPickRay() sets the ray's origin member to the unprojected value at z=0, thus on the near plane, see the code. If you don't need the ray (including its direction), then you don't have to calculate the pick ray; you can call the unproject method directly.
Vector3 pointOnNearPlane = camera.unproject(new Vector3(touchX, touchY, 0f));
Likewise, for the point on the far plane:
Vector3 pointOnFarPlane = camera.unproject(new Vector3(touchX, touchY, 1f));
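And if you later do need the ray, for example to hit an arbitrary plane instead of the near plane, libgdx's Intersector can compute the intersection (the ground plane here is just an example):
Ray ray = camera.getPickRay(touchX, touchY);
Plane ground = new Plane(new Vector3(0f, 1f, 0f), 0f); // the y = 0 plane
Vector3 hit = new Vector3();
if (Intersector.intersectRayPlane(ray, ground, hit)) {
    // 'hit' now holds the world-space intersection point
}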
I have an array of (x, y) location points. I don't know how to use them because they're not long/lat.
For example: X=217338, Y=703099
I want to know how to use them with the iPhone SDK, and with which framework.
Thanks in advance!
First you need to know which format your values are in.
If they are not lon/lat, they could be anything: metres, inches, half arm lengths, or even normalized doughnut holes.
In any case you need to come up with a conversion method, because MapKit only understands geo coordinates (long/lat).
Once you have clarified that, you should take a look at the Location Awareness guide from Apple. There are also some other good sources for MapKit material, like raywenderlich.com.
Without knowing what those values represent, there isn't really anything you can do with them. Assuming you can convert them to latitude/longitude values, this is how you'd center your map on that (X, Y) coordinate:
//Import the <MapKit/MapKit.h> and <CoreLocation/CoreLocation.h> frameworks;
//then this goes in your implementation file:
CLLocationCoordinate2D coord = CLLocationCoordinate2DMake(xConvertedToLat, yConvertedToLong);
//Create a region centered on the above coord, spanning 250m north-south and 250m east-west.
//Note: MKCoordinateRegionMakeWithDistance takes metres; MKCoordinateRegionMake takes an MKCoordinateSpan in degrees.
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(coord, 250, 250);
//You should have an MKMapView object
[myMapView setRegion:region animated:YES];
You can iterate through this for each object in your array, but the map will only end up showing the region around the last (x, y) coordinate you set.