Difficulties understanding MapKit Coordinate System - iOS 7

I read the Apple docs:
"A map point is an x and y value on the Mercator map projection."
"A point is a graphical unit associated with the coordinate system of a UIView."
What is the difference, logically, between a CGPoint and an MKMapPoint?
I obviously need CGPoint to display something on the screen.
So why does MapKit need MKMapPoint?

The fact that both the CGPoint and MKMapPoint structs happen to store two floating-point values named x and y is irrelevant.
They are given different names because they logically deal with different coordinate systems, transformations, ranges and scales.
A 2D world map needs a large, fixed coordinate system that allows a latitude and longitude to be converted to a fixed point on the map regardless of what portion is currently being displayed on the screen.
The range of MKMapPoint values is large, since they need to represent the world's coordinates at a high enough resolution (well beyond screen sizes).
However, you don't really need to care about the actual values of an MKMapPoint. Occasionally you may need to convert a CLLocationCoordinate2D to an MKMapPoint (or the other way around), but you shouldn't worry about the values themselves, nor should you store them: the docs recommend against persisting them, since the internal calculation that projects a latitude and longitude onto the 2D map may change between iOS releases.
You use an MKMapPoint only when you are dealing with the map's 2D projection, independent of the device's screen size or which portion of the map is currently displayed.
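For example, a minimal sketch of the round trip (the coordinate is arbitrary; the resulting map-point values are opaque and shouldn't be persisted):
#import <MapKit/MapKit.h>

CLLocationCoordinate2D coord = CLLocationCoordinate2DMake(59.9074, 10.7245);

// Coordinate -> map point on the flat, world-sized 2D projection
MKMapPoint mapPoint = MKMapPointForCoordinate(coord);

// ...and back again
CLLocationCoordinate2D roundTrip = MKCoordinateForMapPoint(mapPoint);

// Map points make planar math simple, e.g. distances without spherical trigonometry:
MKMapPoint other = MKMapPointForCoordinate(CLLocationCoordinate2DMake(59.95, 10.75));
CLLocationDistance meters = MKMetersBetweenMapPoints(mapPoint, other);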
I obviously need CGPoint to display something on the screen.
Yes, but when adding annotations or overlays you generally deal with CLLocationCoordinate2D values and let the map view do the conversion as needed.
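For instance, a sketch of adding a pin (assuming a mapView outlet; the coordinate and title are arbitrary):
// The annotation is specified purely in geographic coordinates;
// the map view converts to screen points internally as the user pans and zooms.
MKPointAnnotation *annotation = [[MKPointAnnotation alloc] init];
annotation.coordinate = CLLocationCoordinate2DMake(59.9074, 10.7245);
annotation.title = @"Somewhere in Oslo"; // hypothetical label
[mapView addAnnotation:annotation];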

MKMapPoint is a geographic point: a latitude and longitude projected onto the flat map. On screen you have a bounded view containing your mapView, and you need to convert your geographic position (coord) to a CGPoint within that view:
// Assuming `location` stores latitude/longitude as NSNumber values
CLLocationCoordinate2D coord;
coord.latitude = location.latitude.doubleValue;
coord.longitude = location.longitude.doubleValue;
// Projected position on the flat 2D map (screen-independent)
MKMapPoint point = MKMapPointForCoordinate(coord);
// Position in the map view's own coordinate space (screen-dependent)
CGPoint cgpoint = [mapView convertCoordinate:coord toPointToView:mapView];

Related

Does anyone know the algorithm of the MKMapPointForCoordinate function in Objective-C MapKit?

MapKit has the function MKMapPointForCoordinate. It accepts a lat/lng pair as its argument and returns a point with x and y values.
https://developer.apple.com/library/prerelease/ios/documentation/MapKit/Reference/MapKitFunctionsReference/index.html
lat = 59.90738808515509
lng = 10.724523067474365
If we pass the above lat/lng, the function returns
x = 142214284, y = 78089986
I checked this lat/lng against UTM but it gives a different result:
http://www.latlong.net/lat-long-utm.html
MKMapPointForCoordinate doesn't return UTM Coordinates.
Coordinates refer to a position on the Earth (an oblate spheroid), but sometimes you need to do calculations against a 2D map (much simpler) and then convert back to coordinates. That is the purpose of this conversion.
So the MKMapPoint struct returned by MKMapPointForCoordinate is a 2D representation of the coordinates, but it doesn't match a standard system such as UTM.
At this link: https://developer.apple.com/library/prerelease/ios/documentation/MapKit/Reference/MapKitDataTypesReference/index.html#//apple_ref/doc/c_ref/MKMapPoint
in the MKMapPoint documentation, you can read:
The actual units of a map point are tied to the underlying units used to draw the contents of an MKMapView, but you should never need to worry about these units directly. You use map points primarily to simplify computations that would be complex to do using coordinate values on a curved surface.
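In practice, the returned values behave like a standard Web Mercator projection scaled to MKMapSizeWorld (268435456 × 268435456 map points). Here is a hedged sketch that closely reproduces the numbers above; it is an approximation for illustration, not Apple's actual implementation:
#import <MapKit/MapKit.h>
#include <math.h>

// Approximate Web Mercator projection onto MapKit's world-sized plane.
// Apple's internal math is undocumented and may change between iOS releases.
static MKMapPoint ApproximateMapPointForCoordinate(CLLocationCoordinate2D coord) {
    double worldWidth = MKMapSizeWorld.width;   // 268435456.0
    double worldHeight = MKMapSizeWorld.height; // 268435456.0
    double x = (coord.longitude + 180.0) / 360.0 * worldWidth;
    double latRad = coord.latitude * M_PI / 180.0;
    double y = (1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / M_PI) / 2.0 * worldHeight;
    return MKMapPointMake(x, y);
}
For the lat/lng in the question, this yields values within a small fraction of a percent of the x and y above.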
EDIT
For coordinate-to-UTM conversion, I used this open source code in a previous project.

How can I turn a MapPos/(long, lat) into pixels under Nutiteq?

I have read the Nutiteq API reference thoroughly and haven't found built-in methods to get the pixel representation of a longitude and latitude on a device. There is nothing under the existing projections, so I don't know how to overcome this issue.
What I want is to draw a circle at my current GPS location, like this,
NOT an n-vertex polygon as in HelloMap3D.
Getting the pixels for lat, lon, and radius at a given zoom level under a given projection is the challenge, because the rest would be calls like this:
...
canvas.drawCircle(longitudeInPixel, latitudeInPixel, radiusInPixel, this.paintStroke); // <- for the blue circumference
canvas.drawCircle(longitudeInPixel, latitudeInPixel, radiusInPixel, this.paintFill); // <- for the blue translucent fill
...
So, how could I turn lat, lon and radius into their pixel representation under Nutiteq?
I thank you all in advance.
MapView has a worldToScreen() method for this; see the Map Calculations page in the Nutiteq Android demo project wiki.

Three.js camera tilt up or down and keep horizon level

camera.rotation.y pans left or right in a predictable manner.
camera.rotation.x looks up or down predictably when camera.rotation.y is at 180 degrees.
But when I change camera.rotation.y to some new value and then change camera.rotation.x, the horizon rotates.
I've looked for an algorithm to adjust for horizon rotation after camera.rotation.x is changed, but haven't found one.
In three.js, an object's orientation can be specified by its Euler rotation, object.rotation. The three components of the rotation represent the rotation in radians around the object's internal x-axis, y-axis, and z-axis, respectively.
The order in which the rotations are performed is specified by object.rotation.order. The default order is "XYZ" -- rotation around the x-axis occurs first, then the y-axis, then the z-axis.
Rotations are performed with respect to the object's internal coordinate system -- not the world coordinate system. This is important. So, for example, after the x-rotation occurs, the object's y- and z- axes will generally no longer be aligned with the world axes. Rotations specified in this way are not unique.
So, for example, if in code you specify,
camera.rotation.y = y_radians; // Y first
camera.rotation.x = x_radians; // X second
camera.rotation.z = 0;
the rotations are applied in the object's rotation.order, not in the order you specified them.
In your case, you may find it more intuitive to set rotation.order to "YXZ", which is equivalent to "heading, pitch, and roll".
For more information about Euler angles, see the Wikipedia article. Three.js follows the Tait–Bryan convention, as explained in the article.
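In matrix terms (a sketch of the convention, applied to column vectors): with rotation.order set to "YXZ", the intrinsic rotations compose as
R = Ry(heading) · Rx(pitch) · Rz(roll)
so the heading rotation happens about the camera's y-axis while that axis still coincides with world "up", which is what keeps the horizon level when pitch is applied afterwards.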
three.js r.61
I've been looking for the same info for a few days now. The trick is: use the regular rotateX to look up/down, but use rotateOnWorldAxis(new THREE.Vector3(0.0, 1.0, 0.0), angle) for the horizontal turn (https://discourse.threejs.org/t/vertical-camera-rotation/15334).

iOS map location with x,y

I have an array of x,y location points. I don't know how to use it because the values aren't longitude/latitude.
For example: X=217338, Y=703099
I want to know how to use these with the iPhone SDK, and with which framework.
Thanks in advance!
First you need to know in which format your values are.
If they are not lon/lat, they could be anything: meters, inches, half arm lengths, or even normalized doughnut holes.
In any case you need to come up with a conversion method, because MapKit only understands geographic coordinates (lat/lon).
Once you have clarified that, take a look at the Location Awareness Programming Guide from Apple. There are also other good sources for MapKit material, such as raywenderlich.com.
Without knowing what those values represent, there isn't really anything you can do with them. Assuming you can convert them to latitude/longitude values, this is how you'd center your map on that (x, y) coordinate:
//Import the <MapKit/MapKit.h> and <CoreLocation/CoreLocation.h> frameworks,
//then this goes in your implementation file.
CLLocationCoordinate2D coord = CLLocationCoordinate2DMake(xConvertedToLat, yConvertedToLong);
//Set the region your map will display, centered on the above coord and spanning
//250 m on each axis (note: MKCoordinateRegionMakeWithDistance takes meters,
//whereas MKCoordinateRegionMake takes degree spans).
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(coord, 250, 250);
//myMapView is your MKMapView object
[myMapView setRegion:region animated:YES];
You can run this for each object in your array, but since each call re-centers the map, you'll only end up seeing the region around the last (x, y) coordinate.
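Alternatively, a hedged sketch that drops a pin per point instead of repeatedly re-centering (convertXYToCoordinate is a hypothetical stand-in for whatever conversion your grid requires, and pointArray is assumed to hold CGPoint values boxed in NSValue):
// Hypothetical conversion for your unknown grid; replace with the real one.
CLLocationCoordinate2D convertXYToCoordinate(double x, double y);

for (NSValue *value in pointArray) {
    CGPoint p = value.CGPointValue;
    // One pin per converted point; the map view positions them on screen.
    MKPointAnnotation *pin = [[MKPointAnnotation alloc] init];
    pin.coordinate = convertXYToCoordinate(p.x, p.y);
    [myMapView addAnnotation:pin];
}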

What's the difference between MKMapRect and MKCoordinateRegion

They both specify a map center and how big the surrounding box is.
So why have both?
Some methods in MKMapView use one and some use the other:
- (MKCoordinateRegion)regionThatFits:(MKCoordinateRegion)region
- (MKMapRect)mapRectThatFits:(MKMapRect)mapRect edgePadding:(UIEdgeInsets)insets
What's their difference?
More importantly, which one should we use to set the region we see?
There is no regionThatFits:edgePadding: by the way.
An MKCoordinateRegion is defined using degree coordinates of type CLLocationCoordinate2D, which represents the latitude and longitude of a point on the surface of the globe.
An MKMapRect represents a flat rectangle on the map's 2D (Mercator) projection, defined in map points (MKMapPoint), not in view coordinates.
You can use functions such as MKCoordinateRegionForMapRect to do the conversions for you.
See http://developer.apple.com/library/ios/#documentation/MapKit/Reference/MapKitFunctionsReference/Reference/reference.html
And to answer your final question: use MKCoordinateRegion, which defines what region of the globe's surface you want to see and, by definition, sets your zoom level.
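A minimal sketch of both (assuming a mapView outlet; the coordinate and distances are arbitrary):
// Center on a coordinate, spanning roughly 1000 m per axis;
// the map view picks the zoom level that fits this region.
CLLocationCoordinate2D center = CLLocationCoordinate2DMake(59.9074, 10.7245);
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(center, 1000, 1000);
[mapView setRegion:region animated:YES];

// The same visible area expressed as an MKMapRect (in map points):
MKMapRect rect = mapView.visibleMapRect;
MKCoordinateRegion sameArea = MKCoordinateRegionForMapRect(rect);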