Convert from latitude, longitude to x, y - gps

I want to convert a GPS location (latitude, longitude) into x,y coordinates.
I found many links about this topic and applied them, but none gives me the correct answer.
I am following these steps to test the answer:
(1) First, I take two positions and calculate the distance between them using maps.
(2) Then I convert the two positions into x,y coordinates.
(3) Then I calculate the distance between the two points in x,y coordinates again,
and check whether it gives me the same result as in step (1).
One of the solutions I found is the following, but it doesn't give me the correct answer:
latitude = Math.PI * latitude / 180;
longitude = Math.PI * longitude / 180;
// adjust position by radians
latitude -= 1.570795765134; // subtract 90 degrees (in radians)
// and switch z and y
xPos = (app.radius) * Math.sin(latitude) * Math.cos(longitude);
zPos = (app.radius) * Math.sin(latitude) * Math.sin(longitude);
yPos = (app.radius) * Math.cos(latitude);
I also tried this link, but it still doesn't work well for me.
Any help on how to convert from (latitude, longitude) to (x, y)?
Thanks,

No exact solution exists
There is no isometric map from the sphere to the plane. When you convert lat/lon coordinates from the sphere to x/y coordinates in the plane, you cannot expect all lengths to be preserved by this operation. You have to accept some kind of deformation. Many different map projections exist, which achieve different compromises between preserving lengths, angles and areas. For smallish parts of the earth's surface, the transverse Mercator projection is quite common. You might have heard about UTM. But there are many more.
The formulas you quote compute x/y/z, i.e. a point in 3D space. But even there you would not get correct distances automatically. The shortest distance between two points on the surface of a sphere would go through that sphere, whereas distances on the earth are mostly geodesic lengths following the surface, so they will be longer.
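For the reference distance in step (1) of such a test, the geodesic (great-circle) distance can be computed directly from the two lat/lon pairs with the haversine formula. A minimal Python sketch, assuming a spherical earth of radius 6371 km (the function name is mine):
import math

EARTH_RADIUS_KM = 6371  # mean earth radius, spherical model

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in km between two points given in degrees
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2)**2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2)**2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))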
Approximation for small areas
If the part of the earth's surface which you want to draw is relatively small, then you can use a very simple approximation. You can simply use the horizontal axis x to denote longitude λ and the vertical axis y to denote latitude φ. The ratio between these should not be 1:1, though. Instead you should use cos(φ0) as the aspect ratio, where φ0 denotes a latitude close to the center of your map. Furthermore, to convert from angles (measured in radians) to lengths, you multiply by the radius of the earth (which in this model is assumed to be a sphere).
x = r λ cos(φ0)
y = r φ
This is the simple equirectangular projection. In most cases, you'll be able to compute cos(φ0) only once, which makes subsequent computations for large numbers of points really cheap.
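A minimal sketch of this projection in Python (the names and test points are mine, chosen as arbitrary examples): project both points with a shared reference latitude φ0, then check the planar distance against the geodesic distance from step (1) of the question.
import math

EARTH_RADIUS_KM = 6371

def equirectangular_xy(lat, lon, lat0):
    # Project lat/lon (degrees) to x/y in km, using lat0 as the reference latitude
    x = EARTH_RADIUS_KM * math.radians(lon) * math.cos(math.radians(lat0))
    y = EARTH_RADIUS_KM * math.radians(lat)
    return x, y

# Two nearby example points; for small separations the planar distance
# below should closely match the great-circle distance between them.
a = (52.5200, 13.4050)
b = (52.5206, 13.4094)
lat0 = (a[0] + b[0]) / 2
xa, ya = equirectangular_xy(a[0], a[1], lat0)
xb, yb = equirectangular_xy(b[0], b[1], lat0)
planar_km = math.hypot(xb - xa, yb - ya)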

I want to share how I managed the problem. I used the equirectangular projection just like @MvG said, but this method gives you X and Y positions relative to the globe (or the entire map), i.e. global positions. In my case, I wanted to convert coordinates within a small area (about 500 m square), so I related the projected point to two reference points, converting their global positions into local (on-screen) positions, like this:
First, I choose two reference points (top-left and bottom-right) enclosing the area onto which I want to project.
Once I have the global reference area in lat and lng, I do the same for the screen positions. The objects containing this data are shown below.
//top-left reference point
var p0 = {
    scrX: 23.69,     // Minimum X position on screen
    scrY: -0.5,      // Minimum Y position on screen
    lat: -22.814895, // Latitude
    lng: -47.072892  // Longitude
};
//bottom-right reference point
var p1 = {
    scrX: 276,       // Maximum X position on screen
    scrY: 178.9,     // Maximum Y position on screen
    lat: -22.816419, // Latitude
    lng: -47.070563  // Longitude
};
var radius = 6371; //Earth radius in km

//## Now I can calculate the global X and Y for each reference point ##\\
// This function converts lat and lng coordinates to GLOBAL X and Y positions
function latlngToGlobalXY(lat, lng){
    //Calculates x based on the cos of the average of the latitudes.
    //Note: the latitude must be converted to radians before taking the cosine.
    //lat/lng themselves can stay in degrees: the constant scale factor cancels
    //out in the screen mapping below; only the x:y aspect ratio matters.
    let x = radius * lng * Math.cos(((p0.lat + p1.lat) / 2) * Math.PI / 180);
    //Calculates y based on latitude
    let y = radius * lat;
    return {x: x, y: y};
}
// Calculate global X and Y for the top-left reference point
p0.pos = latlngToGlobalXY(p0.lat, p0.lng);
// Calculate global X and Y for the bottom-right reference point
p1.pos = latlngToGlobalXY(p1.lat, p1.lng);

/*
 * This gives me the X and Y relative to the map for the 2 reference points.
 * Now we have the global AND screen areas, and we can relate both to the projected point.
 */

// This function converts lat and lng coordinates to SCREEN X and Y positions
function latlngToScreenXY(lat, lng){
    //Calculate global X and Y for the projected point
    let pos = latlngToGlobalXY(lat, lng);
    //Calculate the percentage of the global X position relative to the total global width
    pos.perX = (pos.x - p0.pos.x) / (p1.pos.x - p0.pos.x);
    //Calculate the percentage of the global Y position relative to the total global height
    pos.perY = (pos.y - p0.pos.y) / (p1.pos.y - p0.pos.y);
    //Return the screen position based on the reference points
    return {
        x: p0.scrX + (p1.scrX - p0.scrX) * pos.perX,
        y: p0.scrY + (p1.scrY - p0.scrY) * pos.perY
    };
}
//# The usage is like this #\\
var pos = latlngToScreenXY(-22.815319, -47.071718);
$point = $("#point-to-project");
$point.css("left", pos.x + "em");
$point.css("top", pos.y + "em");
As you can see, I wrote this in JavaScript, but the calculations can be translated to any language.
P.S. I'm applying the converted positions to an HTML element whose id is "point-to-project". To use this piece of code in your project, you should create this element (styled with position: absolute) or change the "usage" block.

Since this page shows up at the top of Google when searching for this same problem, I would like to provide a more practical answer. The answer by MvG is correct, but rather theoretical.
I have made a track-plotting app for the Fitbit Ionic in JavaScript. The code below is how I tackled the problem.
//LOCATION PROVIDER
index.js
var gpsFix = false;
var circumferenceAtLat = 0;
function locationSuccess(pos){
    if(!gpsFix){
        gpsFix = true;
        // meters per degree of longitude at this latitude
        // (0.01745329251 is PI/180; 111305 is roughly meters per degree at the equator)
        circumferenceAtLat = Math.cos(pos.coords.latitude * 0.01745329251) * 111305;
    }
    pos.x = Math.round(pos.coords.longitude * circumferenceAtLat);
    pos.y = Math.round(pos.coords.latitude * 110919); // ~meters per degree of latitude
    plotTrack(pos);
}
plotting.js
function plotTrack(position){
    let x = Math.round((this.segments[i].start.x - this.bounds.minX) * this.scale);
    let y = Math.round((this.bounds.maxY - this.segments[i].start.y) * this.scale); //height needs to be inverted
    //redraw?
    let redraw = false;
    //x or y out of bounds?
    if(position.x > this.bounds.maxX){
        this.bounds.maxX = (position.x - this.bounds.minX) * 1.1 + this.bounds.minX; //increase by 10%
        redraw = true;
    }
    if(position.x < this.bounds.minX){
        this.bounds.minX = this.bounds.maxX - (this.bounds.maxX - position.x) * 1.1;
        redraw = true;
    }
    if(position.y > this.bounds.maxY){
        this.bounds.maxY = (position.y - this.bounds.minY) * 1.1 + this.bounds.minY; //increase by 10%
        redraw = true;
    }
    if(position.y < this.bounds.minY){
        this.bounds.minY = this.bounds.maxY - (this.bounds.maxY - position.y) * 1.1;
        redraw = true;
    }
    if(redraw){
        reDraw();
    }
}
function reDraw(){
    let xScale = device.screen.width / (this.bounds.maxX - this.bounds.minX);
    let yScale = device.screen.height / (this.bounds.maxY - this.bounds.minY);
    if(xScale < yScale) this.scale = xScale;
    else this.scale = yScale;
    //Loop through your objects to redraw all of them
}

For completeness I'd like to add my Python adaptation of @allexrm's code, which worked really well. Thanks again!
import math

radius = 6371  # Earth radius in km

class referencePoint:
    def __init__(self, scrX, scrY, lat, lng):
        self.scrX = scrX
        self.scrY = scrY
        self.lat = lat
        self.lng = lng

# Top-left reference point
p0 = referencePoint(0, 0, 52.526470, 13.403215)
# Bottom-right reference point
p1 = referencePoint(2244, 2060, 52.525035, 13.405809)

# This function converts lat and lng coordinates to GLOBAL X and Y positions
def latlngToGlobalXY(lat, lng):
    # Calculates x based on the cos of the average of the latitudes (in radians)
    x = radius * lng * math.cos(math.radians((p0.lat + p1.lat) / 2))
    # Calculates y based on latitude
    y = radius * lat
    return {'x': x, 'y': y}

# Calculate global X and Y for the two reference points
p0.pos = latlngToGlobalXY(p0.lat, p0.lng)
p1.pos = latlngToGlobalXY(p1.lat, p1.lng)

# This function converts lat and lng coordinates to SCREEN X and Y positions
def latlngToScreenXY(lat, lng):
    # Calculate global X and Y for the projected point
    pos = latlngToGlobalXY(lat, lng)
    # Percentage of the global X position relative to the total global width
    perX = (pos['x'] - p0.pos['x']) / (p1.pos['x'] - p0.pos['x'])
    # Percentage of the global Y position relative to the total global height
    perY = (pos['y'] - p0.pos['y']) / (p1.pos['y'] - p0.pos['y'])
    # Return the screen position based on the reference points
    return {
        'x': p0.scrX + (p1.scrX - p0.scrX) * perX,
        'y': p0.scrY + (p1.scrY - p0.scrY) * perY
    }

pos = latlngToScreenXY(52.525607, 13.404572)
pos['x'] and pos['y'] now contain the translated x & y coordinates of the lat & lng (52.525607, 13.404572).
I hope this is helpful for anyone looking, like me, for a proper solution to the problem of translating lat/lng into a local reference coordinate system.
Best

It's better to convert to UTM coordinates and treat those as x and y.
import utm
u = utm.from_latlon(12.917091, 77.573586)
The result will be (779260.623156606, 1429369.8665238516, 43, 'P')
The first two values can be treated as x,y coordinates (in meters); 43 and 'P' identify the UTM zone, which can be ignored for small areas (up to 668 km wide).
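To run the asker's round-trip test with this approach, convert both points to UTM and take the Euclidean distance between the resulting coordinates; within a single zone this closely matches the on-the-ground distance. A small sketch assuming the same utm package (the second point is an example value):
import math
import utm

e1, n1, zone1, letter1 = utm.from_latlon(12.917091, 77.573586)
e2, n2, zone2, letter2 = utm.from_latlon(12.920000, 77.580000)

# Both points must be in the same zone for the planar distance to be meaningful
assert (zone1, letter1) == (zone2, letter2)
dist_m = math.hypot(e2 - e1, n2 - n1)  # distance in meters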

Related

UTM to WGS 84 conversion issue

I am trying to use ESRI's latest land use data and downloaded a tif image from
https://www.arcgis.com/apps/instant/media/index.html?appid=fc92d38533d440078f17678ebc20e8e2
for example, the one shown here:
https://lulctimeseries.blob.core.windows.net/lulctimeseriespublic/lc2021/16R_20210101-20220101.tif
When I load the image into ArcGIS Pro, the longitude span between A and B (corners) is 6 degrees, which is expected. The tif image is in the WGS84/UTM 16N projection. I want to be able to find the longitude and latitude of each pixel in the tif file, so I converted it to the WGS 84 coordinate system. However, after converting (with GDAL and ArcGIS), the longitude span between A and B is larger than 6 degrees. It looks like the transformer treats each grid cell in the image as equal distance instead of equal longitude/latitude. Am I doing something wrong here?
import geoio
import numpy as np
import osgeo
from osgeo import osr

def wgs_transformer(img):
    """Given a geoio image, return two transformers: image CRS to WGS84 and WGS84 to image CRS.
    They can be used to translate between pixels and lat/long.
    """
    assert isinstance(img, geoio.base.GeoImage), 'img is not a geoio.base.GeoImage object'
    old_cs = osr.SpatialReference()
    old_cs.ImportFromWkt(img.ds.GetProjectionRef())
    # create the new coordinate system
    new_cs = osr.SpatialReference()
    new_cs.ImportFromEPSG(4326)
    # create transform objects to convert between the coordinate systems
    transform_img_wgs = osr.CoordinateTransformation(old_cs, new_cs)
    transform_wgs_img = osr.CoordinateTransformation(new_cs, old_cs)
    return transform_img_wgs, transform_wgs_img

def pixels_to_latlong(img, px, py, transform, arr=None, return_band=False):
    """Given pixel x, y coordinates, return latitude and longitude.
    Args:
        img: geoio.base.GeoImage
        px: float, x coordinate
        py: float, y coordinate
        transform: osgeo.osr.CoordinateTransformation
        arr: np array, band info of the img
        return_band: bool, if True, return band info for the pixel
    Returns:
        latitude, longitude
    """
    assert isinstance(img, geoio.base.GeoImage), 'img is not a geoio.base.GeoImage object'
    assert isinstance(transform, osgeo.osr.CoordinateTransformation), 'transform has to be an osgeo.osr.CoordinateTransformation object'
    band, width, height = img.shape
    assert 0 <= px < width and 0 <= py < height, f'px {px}, py {py} are beyond the img shape ({width}, {height})'
    if return_band:
        assert isinstance(arr, np.ndarray), 'arr needs to be a numpy.ndarray'
    lat, long, _ = transform.TransformPoint(*img.raster_to_proj(px, py))
    if return_band:
        band_val = arr[py-1, px-1]
        return lat, long, band_val
    else:
        return lat, long, 0

Shortest rotation between two vectors not working as expected

def signed_angle_between_vecs(target_vec, start_vec, plane_normal=None):
    start_vec = np.array(start_vec)
    target_vec = np.array(target_vec)
    start_vec = start_vec / np.linalg.norm(start_vec)
    target_vec = target_vec / np.linalg.norm(target_vec)
    if plane_normal is None:
        arg1 = np.dot(np.cross(start_vec, target_vec), np.cross(start_vec, target_vec))
    else:
        arg1 = np.dot(np.cross(start_vec, target_vec), plane_normal)
    arg2 = np.dot(start_vec, target_vec)
    return np.arctan2(arg1, arg2)

from scipy.spatial.transform import Rotation as R

world_frame_axis = input_rotation_object.apply(canonical_axis)
angle = signed_angle_between_vecs(canonical_axis, world_frame_axis)
axis_angle = np.cross(world_frame_axis, canonical_axis) * angle
C = R.from_rotvec(axis_angle)
transformed_world_frame_axis_to_canonical = C.apply(world_frame_axis)
I am trying to align world_frame_axis to canonical_axis by performing a rotation around the normal vector generated by the cross product between the two vectors, using the signed angle between the two axes.
However, this code does not work. If you start with some arbitrary rotation as input_rotation_object you will see that transformed_world_frame_axis_to_canonical does not match canonical_axis.
What am I doing wrong?
I'm not a Python coder so I might be wrong, but one thing to note first: np.linalg.norm returns the length of the vector (a scalar), so lines like
start_vec = start_vec/np.linalg.norm(start_vec)
do normalize correctly; dividing a vector by its norm is the normalization.
The atan2 operands, however, do not look right to me. I would (using math):
a = start_vec  / |start_vec |  // normalized start
b = target_vec / |target_vec|  // normalized end
u = a                          // normalized first axis of the plane
v = cross(u, b)
v = cross(v, u)
v = v / |v|                    // normalized second axis of the plane, perpendicular to u
dx = dot(u, b)                 // target vector in 2D, aligned to start
dy = dot(v, b)
ang = atan2(dy, dx)
Beware that ang might be negated (depending on your notation); if that is the case, either add a minus sign or reverse the order in the cross products from cross(u,v) to cross(v,u). You can also sanity-check the result by comparing it to the unsigned angle:
ang' = acos(dot(a,b))
In absolute value they should be the same (+/- rounding error).
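A direct numpy translation of this recipe might look like the sketch below (the function name is mine). Note that without a reference plane normal, the sign convention comes entirely from how the second axis v is constructed:
import numpy as np

def signed_angle(start_vec, target_vec):
    # Angle from start_vec to target_vec, measured in the plane spanned by both
    a = np.asarray(start_vec, dtype=float)
    b = np.asarray(target_vec, dtype=float)
    a /= np.linalg.norm(a)            # normalized start
    b /= np.linalg.norm(b)            # normalized end
    u = a                             # first in-plane axis
    v = np.cross(np.cross(u, b), u)   # second in-plane axis, perpendicular to u
    v /= np.linalg.norm(v)
    dx = np.dot(u, b)                 # b expressed in the 2D (u, v) frame
    dy = np.dot(v, b)
    return np.arctan2(dy, dx)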

Description of parameters of GDAL SetGeoTransform

Can anyone help me with the parameters for SetGeoTransform? I'm creating raster layers with GDAL, but I can't find a description of the 3rd and 5th parameters for SetGeoTransform. They should define the x and y axes for the cells. I tried to find something about it here and here, but nothing.
I need a description of these two parameters... Is the value in degrees, radians, meters? Or something else?
The geotransform is used to convert from map to pixel coordinates and back using an affine transformation. The 3rd and 5th parameters are used (together with the 2nd and 4th) to define the rotation if your image doesn't have 'north up'.
But most images are north up, and then both the 3rd and 5th parameters are zero.
The affine transform consists of six coefficients returned by
GDALDataset::GetGeoTransform() which map pixel/line coordinates into
georeferenced space using the following relationship:
Xgeo = GT(0) + Xpixel*GT(1) + Yline*GT(2)
Ygeo = GT(3) + Xpixel*GT(4) + Yline*GT(5)
See the section on affine geotransform at:
https://gdal.org/tutorials/geotransforms_tut.html
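Applied literally, that relationship converts pixel/line coordinates to georeferenced coordinates like this (a minimal sketch; gt is the six-element geotransform tuple returned by GetGeoTransform()):
def pixel_to_geo(gt, xpixel, yline):
    # Apply a GDAL affine geotransform to pixel/line coordinates
    xgeo = gt[0] + xpixel * gt[1] + yline * gt[2]
    ygeo = gt[3] + xpixel * gt[4] + yline * gt[5]
    return xgeo, ygeo
For a north-up image gt[2] and gt[4] are zero, so xgeo depends only on the pixel column and ygeo only on the row.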
I did it as in the code below, and as a result I was able to achieve the same effect as with SetGeoTransform.
# new file
dst = gdal.GetDriverByName('GTiff').Create(OUT_PATH, xsize, ysize, band_num, dtype)
# old file
ds = gdal.Open(fpath)
wkt = ds.GetProjection()
gcps = ds.GetGCPs()
dst.SetGCPs(gcps, wkt)
...
dst.FlushCache()
dst = None
Given the information from the aforementioned GDAL data model docs, the 3rd & 5th parameters of SetGeoTransform (x_skew and y_skew respectively) can be calculated from two control points (p1, p2) with known x and y in both "geo" and "pixel" coordinate spaces. p1 should be above-left of p2 in pixel space.
x_skew = sqrt((p1.geox-p2.geox)**2 + (p1.geoy-p2.geoy)**2) / (p1.pixely - p2.pixely)
y_skew = sqrt((p1.geox-p2.geox)**2 + (p1.geoy-p2.geoy)**2) / (p1.pixelx - p2.pixelx)
In short, this is the ratio of the Euclidean distance between the points in geo space to the height (or width) of the image in pixel space.
The units of the parameters are "geo" length / "pixel" length.
Here is a demonstration using the corners of the image stored as control points (gcps):
import gdal
from math import sqrt

ds = gdal.Open(fpath)
gcps = ds.GetGCPs()
assert gcps[0].Id == 'UpperLeft'
p1 = gcps[0]
assert gcps[2].Id == 'LowerRight'
p2 = gcps[2]
y_skew = (
    sqrt((p1.GCPX-p2.GCPX)**2 + (p1.GCPY-p2.GCPY)**2) /
    (p1.GCPPixel - p2.GCPPixel)
)
x_skew = (
    sqrt((p1.GCPX-p2.GCPX)**2 + (p1.GCPY-p2.GCPY)**2) /
    (p1.GCPLine - p2.GCPLine)
)
x_res = (p2.GCPX - p1.GCPX) / ds.RasterXSize
y_res = (p2.GCPY - p1.GCPY) / ds.RasterYSize
ds.SetGeoTransform([
    p1.GCPX,
    x_res,
    x_skew,
    p1.GCPY,
    y_skew,
    y_res,
])

implementing ease in update loop

I want to animate a sprite from point y1 to point y2 with some sort of deceleration, so that when it reaches point y2 the speed of the object is 0 and it completely stops.
I know the two points, and I know the object's starting speed.
The animation time is not so important to me. I can decide on it if needed.
For example: y1 = 0, y2 = 400, v0 = 250 pixels per second (= starting speed)
I read about easing functions, but I didn't understand how to actually implement them in the update loop.
Here's my update loop code, with the place where an easing function should somehow be applied:
-(void)onTimerTick{
    double currentTime = CFAbsoluteTimeGetCurrent();
    float timeDelta = currentTime - self.lastUpdateTime;
    self.lastUpdateTime = currentTime;
    float pixelsToMove = ????; // here needs to be some formula using v0, timeDelta, y2, y1
    sprite.y += pixelsToMove;
}
Timing functions as Bézier curves
An easing timing function is basically a Bézier curve from (0,0) to (1,1) where the horizontal axis is "time" and the vertical axis is "amount of change". Since a cubic Bézier curve is mathematically
start*(1-t)^3 + 3*c1*t(1-t)^2 + 3*c2*t^2(1-t) + end*t^3
you can insert any time value and get the amount of change that should be applied. Note that both time and change are normalized (in the range of 0 to 1).
Note that the variable t is not the time value; t is how far along the curve you have come. The time value is the x value of the point on the curve.
A sample "ease" curve starts off slow, goes faster, and slows down at the end.
Example
Say you are animating the position from (50, 50) to (200, 150) using a curve with control points at (0.6, 0.0) and (0.5, 0.9) and a duration of 4 seconds (the control points are chosen to give such an ease-like curve).
When 1 second of the animation has passed (25% of the total duration), the point on the curve satisfies:
(0.25, y) = (0,0)*(1-t)^3 + 3*(0.6,0)*t(1-t)^2 + 3*(0.5,0.9)*t^2(1-t) + (1,1)*t^3
This means that we can calculate t from the x coordinate:
0.25 = 1.8*t(1-t)^2 + 1.5*t^2(1-t) + t^3
Solving this numerically gives t ≈ 0.1686.
If we then insert that t into
y = 2.7*t^2*(1-t) + t^3
we get the "amount of change" for when 1 second of the duration has passed: y ≈ 0.0686, which means that only about 7% of the value has changed after 25% of the time. This is because the curve starts out slow (it is mostly flat in the beginning).
So finally: 1 second into the animation the position is
(x, y) = (50, 50) + 0.0686 * (200-50, 150-50)
(x, y) = (50, 50) + 0.0686 * (150, 100)
(x, y) = (50, 50) + (10.29, 6.86)
(x, y) = (60.29, 56.86)
I hope this example helps you understand how to use timing functions.
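To turn this into code you need a small solver that finds t for a given time x and then evaluates y at that t. A minimal Python sketch (the function names are mine; it uses bisection, which is valid because x(t) is monotonic for timing curves with control x values in [0, 1]):
def bezier01(t, c1, c2):
    # Cubic Bezier component from 0 to 1 with inner control values c1, c2
    return 3*c1*t*(1-t)**2 + 3*c2*t*t*(1-t) + t**3

def timing_value(x, x1, y1, x2, y2, eps=1e-6):
    # Find t such that bezier01(t, x1, x2) == x, then return y at that t
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if bezier01(mid, x1, x2) < x:
            lo = mid
        else:
            hi = mid
    t = (lo + hi) / 2
    return bezier01(t, y1, y2)

# 25% into the example curve above: returns roughly 0.0686
progress = timing_value(0.25, 0.6, 0.0, 0.5, 0.9)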

zedgraph common majortick for all y axis

I'm using ZedGraph to display multiple y-axes (both YAxis and Y2Axis).
With multiple y-axes it becomes rather hard to compare curves, because each curve has its own major ticks, as in the picture below:
https://dl.dropbox.com/u/70476173/problem.png
I would like the graph to share the same major ticks so that it is easy to compare the curves. I have tried this code:
//majorTickCount = 12.0
var min = Math.Floor(yAxis.Scale.Min);
var max = Math.Ceiling(yAxis.Scale.Max);
var step = (max - min) / majorTickCount;
var wholeStep = step;
max = min + wholeStep * majorTickCount;
//yAxis.Scale.MajorStepAuto = true;
//yAxis.Scale.MajorStepAuto = false;
//yAxis.Scale.MinGrace = 0;
//yAxis.Scale.MaxGrace = 0;
yAxis.Scale.Min = min;
yAxis.Scale.Max = max;
yAxis.Scale.MajorStep = wholeStep;
yAxis.Scale.BaseTic = min;
This seems to create the desired effect, but with a problem:
https://dl.dropbox.com/u/70476173/problem2.png
The red curve's 2nd and 3rd points have the value 6, but as you can see in the picture, the points lie below the major grid line for 6. I believe the problem is that the major step is calculated as 2.5, and the y-axis label displaying 6 really sits at 6.1 or thereabouts.
TL;DR: How do I make all my y-axes share the same major steps, so that they all line up on the same major grid?
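One thing to note about the code above is that var wholeStep = step; never actually rounds the step, so with step = 2.5 the grid lines fall on non-integer values. One option is to round the step up to a whole number and stretch the axis maximum accordingly, applied per axis with the same majorTickCount. Sketched here in Python for brevity (the helper is hypothetical, not part of ZedGraph; assign the results to Scale.Min, Scale.Max and Scale.MajorStep as in the question):
import math

def shared_axis_scale(scale_min, scale_max, major_tick_count):
    # Expand the range so every axis uses major_tick_count whole-valued steps
    lo = math.floor(scale_min)
    hi = math.ceil(scale_max)
    whole_step = max(1, math.ceil((hi - lo) / major_tick_count))
    hi = lo + whole_step * major_tick_count  # stretch max so it stays on the grid
    return lo, hi, whole_step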