I have an arbitrary region, and I need the world-plane coordinates of the intersection between the region and the vertical center line of the image, taking the intersection point with the highest y coordinate.
this is what I have so far:
fill_up (SelectedRegions, RegionFillUp)
get_image_size (Image, w, h)
gen_region_line (RegionLine, 0, w/2, h, w/2)
disp_line (3600, 0, w/2, h, w/2)
intersection (RegionFillUp, RegionLine, RegionIntersection)
EDIT:
I've made some progress and now have all the intersections, but I cannot figure out how to get the last entry of the tuple when it has more than one element...
gen_contour_region_xld (RegionFillUp, Contours, 'border')
get_image_size (Image, w, h)
gen_contour_polygon_xld (Line1, [0,h], [w/2,w/2])
intersection_line_contour_xld (Contours, 0, w/2, h, w/2, rowcoords, columncoords, isOverlapping)
Solved it this way:
fill_up (SelectedRegions, RegionFillUp)
gen_contour_region_xld (RegionFillUp, Contours, 'border')
get_image_size (Image, w, h)
intersection_line_contour_xld (Contours, 0, w/2, h, w/2, rowcoords, columncoords, isOverlapping)
disp_cross (3600, rowcoords, columncoords, 6, 0)
tuple_length (rowcoords, Length)
Position := 0
if (Length > 0)
    Position := rowcoords[Length-1]
endif
image_points_to_world_plane (CamParam, Pose, Position, w/2, 1, X1, Y1)
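Note that taking the last tuple entry assumes the intersection points come back ordered along the line; to be explicit about wanting the point with the largest row, take the maximum instead (in HALCON, tuple_max does this). A minimal sketch of the selection logic in Python, with illustrative values:

import math

# Illustrative values: rows of the intersections along the center line
row_coords = [112.4, 305.7, 489.1]
col = 320  # the vertical center line, i.e. w/2

if row_coords:
    # The row axis grows downward, so the largest row is the lowest
    # point in the image ("highest y" in the question's terms).
    position = max(row_coords)
    point = (position, col)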
I don't know if this question has been asked here before. If so, I'm sorry.
I have a box positioned so that its height, width, and length are all visible. I understand the steps to get vertices, but most examples on the net only describe how to get 4 vertices from a 2D plane. So my question is: how do we get 7 vertices (as in the picture above) and handle them in numpy? How do we differentiate between upper points and lower points?
I will be using Python to determine this.
Here's my attempt to get the 8 corners of the 3d rectangle. I masked on the saturation channel of the HSV color space since that separates out white.
I used findContours to get the contour of the box and then used approxPolyDP to get a six-point approximation (the six visible corners).
From there I approximated the two "hidden" corners via a parallelogram approximation. For each point I looked two points behind and created a fourth point that would make a parallelogram with that side. I then took the centroid of these parallelogram points to guess the corner. I hoped that taking the centroid of the points would help even out the error between the parallelogram assumption and the perspective warping, but it did a poor job.
If you need a better approximation there are probably ways to estimate the perspective warping to get the corners.
import cv2
import numpy as np
import random
def tup(point):
return (int(point[0]), int(point[1]));
# load image
img = cv2.imread("box.jpg");
# reduce size to fit on screen
scale = 0.25;
h,w = img.shape[:2];
h = int(scale*h);
w = int(scale*w);
img = cv2.resize(img, (w,h));
copy = np.copy(img);
# convert to hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV);
h,s,v = cv2.split(hsv);
# make mask
mask = cv2.inRange(s, 30, 255);
# dilate and erode to get rid of small holes
kernel = np.ones((5,5), np.uint8);
mask = cv2.dilate(mask, kernel, iterations = 1);
mask = cv2.erode(mask, kernel, iterations = 1);
# find contours (OpenCV 3.x signature; in OpenCV 2 or 4 it returns (contours, hierarchy))
_, contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE);
contour = contours[0]; # just take the first one
# approx until 6 points
num_points = 999999;
step_size = 0.01;
percent = step_size;
while num_points >= 6:
# get number of points
epsilon = percent * cv2.arcLength(contour, True);
approx = cv2.approxPolyDP(contour, epsilon, True);
num_points = len(approx);
# increment
percent += step_size;
# step back and get the points
# there could be more than 6 points if our step size misses it
percent -= step_size * 2;
epsilon = percent * cv2.arcLength(contour, True);
approx = cv2.approxPolyDP(contour, epsilon, True);
# draw contour
cv2.drawContours(img, [approx], -1, (0,0,200), 2);
# draw points
for point in approx:
point = point[0]; # drop extra layer of brackets
center = (int(point[0]), int(point[1]));
cv2.circle(img, center, 4, (150, 200, 0), -1);
# do parallelogram approx to get the two "hidden" corners to complete our 3d rectangle
proposals = [];
size = len(approx);
for a in range(size):
# get points backwards
two = approx[a - 2][0];
one = approx[a - 1][0];
curr = approx[a][0];
# get vector from one -> two
dx = two[0] - one[0];
dy = two[1] - one[1];
hidden = [curr[0] + dx, curr[1] + dy];
proposals.append([hidden, curr, a, two]);
# debug draw
c = np.copy(copy);
cv2.circle(c, tup(two), 4, (255, 0, 0), -1);
cv2.circle(c, tup(one), 4, (0,255,0), -1);
cv2.circle(c, tup(curr), 4, (0,0,255), -1);
cv2.circle(c, tup(hidden), 4, (255,255,0), -1);
cv2.line(c, tup(two), tup(one), (0,0,200), 1);
cv2.line(c, tup(curr), tup(hidden), (0,0,200), 1);
cv2.imshow("Mark", c);
cv2.waitKey(0);
# draw proposals
for point in proposals:
point = point[0];
center = (point[0], point[1]);
cv2.circle(img, center, 4, (200, 100, 0), -1);
# group points and sum up points
hidden_corners = [[0,0], [0,0]];
for point in proposals:
# get index and update hidden corners
index = point[2] % 2;
pos = point[0];
hidden_corners[index][0] += pos[0];
hidden_corners[index][1] += pos[1];
# divide to get centroid
hidden_corners[0][0] /= 3.0;
hidden_corners[0][1] /= 3.0;
hidden_corners[1][0] /= 3.0;
hidden_corners[1][1] /= 3.0;
# draw new points
for point in proposals:
# unpack
pos = point[0];
parent = point[1];
index = point[2] % 2;
source = point[3];
# draw
color = [random.randint(0, 150) for a in range(3)];
cv2.line(img, tup(hidden_corners[index]), tup(parent), (0,0,200), 2);
cv2.line(img, tup(pos), tup(parent), color, 1);
cv2.line(img, tup(pos), tup(source), color, 1);
cv2.circle(img, tup(hidden_corners[index]), 4, (200, 200, 0), -1);
# show
cv2.imshow("Image", img);
cv2.imshow("Mask", mask);
cv2.waitKey(0);
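To come back to the original question of telling the upper corners from the lower ones: once you have the eight corner points, a simple heuristic is to sort them by their y coordinate and split the list in half. A minimal sketch (assuming the box sits roughly upright in the image, and remembering that image y grows downward; corner_list is a placeholder for the points gathered above):

import numpy as np

# corner_list is a placeholder: the six approxPolyDP corners plus the
# two estimated hidden corners, as (x, y) pairs
corners = np.array(corner_list, dtype=np.float32)

# Image y grows downward, so the visually upper corners have the
# smallest y values
order = np.argsort(corners[:, 1])
upper = corners[order[:4]]  # four upper corners
lower = corners[order[4:]]  # four lower corners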
I've made a list of my own texture objects so that I can access them accordingly. These are the two bitmap images I'm using:
Every time I load my program, it reads the two bitmap files and stores their texture data in my global texture list. The grass tile loads first and the checkerboard with the 1.0 loads after it. The grass tile texture renders. Here is how it looks in my program:
It appears as if it's rotated 180 degrees and flipped horizontally. I've checked my 2D projection and coordinates, and they're fine: up goes towards positive Y and right towards positive X. Also, the colors are correct, so the texture works!
However, if I choose to render the second texture, which is the black/magenta checkerboard, it looks like this in my program:
It's rotated and flipped as well, but the colors aren't being rendered properly either. Why does this happen? Here is my code:
Loading the texture from Bitmap:
Private Function LoadFromBitmap(ByVal Bitmap As Bitmap) As Integer
Dim Tex As Integer
GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest)
GL.GenTextures(1, Tex)
GL.BindTexture(TextureTarget.Texture2D, Tex)
Dim Data As BitmapData = Bitmap.LockBits(New Rectangle(0, 0, Bitmap.Width, Bitmap.Height), ImageLockMode.ReadOnly, Imaging.PixelFormat.Format32bppArgb)
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, Data.Width, Data.Height, 0, OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, Data.Scan0)
Bitmap.UnlockBits(Data)
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, TextureMagFilter.Nearest)
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, TextureMinFilter.Nearest)
Return Tex
End Function
Rendering:
GL.MatrixMode(MatrixMode.Modelview)
GL.LoadIdentity()
GL.Viewport(0, 0, ControlWidth, ControlHeight)
For X As Byte = 0 To EZSize(0) - 1
For Y As Byte = 0 To EZSize(1) - 1
GL.Enable(EnableCap.Texture2D)
GL.BindTexture(TextureTarget.Texture2D, TextureList.Item(1).IntData)
GL.Begin(PrimitiveType.Quads)
GL.TexCoord2(X, Y) : GL.Vertex2(X, Y)
GL.TexCoord2(X + 1, Y) : GL.Vertex2(X + 1, Y)
GL.TexCoord2(X + 1, Y + 1) : GL.Vertex2(X + 1, Y + 1)
GL.TexCoord2(X, Y + 1) : GL.Vertex2(X, Y + 1)
GL.End()
GL.Disable(EnableCap.Texture2D)
Next
Next
GL.LoadIdentity()
GL.Flush()
GraphicsContext.CurrentContext.SwapInterval = True
GlControl1.SwapBuffers()
If texturing is enabled, then by default the color of the texel is multiplied by the current color, because by default the texture environment mode (GL_TEXTURE_ENV_MODE) is GL_MODULATE. See glTexEnv.
This causes the texel colors of the texture to be "mixed" with the last color you set with glColor.
Set a "white" color before you render the texture, to solve your issue:
GL.Color3(Color.White)
The texture is flipped because the lower left window coordinate is (0, 0), while in the bitmap the upper left coordinate is (0, 0). You have to compensate for that by flipping the v component of the texture coordinates:
e.g.:
GL.Enable(EnableCap.Texture2D)
GL.BindTexture(TextureTarget.Texture2D, TextureList.Item(1).IntData)
GL.Color3(Color.White)
GL.Begin(PrimitiveType.Quads)
GL.TexCoord2(X, Y + 1) : GL.Vertex2(X, Y)
GL.TexCoord2(X + 1, Y + 1) : GL.Vertex2(X + 1, Y)
GL.TexCoord2(X + 1, Y) : GL.Vertex2(X + 1, Y + 1)
GL.TexCoord2(X, Y) : GL.Vertex2(X, Y + 1)
GL.End()
GL.Disable(EnableCap.Texture2D)
Alternatively, you can change the texture environment mode to GL_REPLACE with glTexEnv, so the texel color is used unchanged and the current color is ignored:
GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, CInt(TextureEnvMode.Replace))
I have a short question: I don't know which values to pass to this function, and I can't find any useful examples on the internet.
This is my function:
I already set up a node and everything else.
node.rotation = SCNVector4Make(x,y,z,w);
What are the values for x, y, z, and w when I want to turn my object with an angle of 45 degrees?
The first value is for "x"
SCNVector4Make(1,0,0,0)
The second is "Y"
SCNVector4Make(0,1,0,0)
The third is "Z"
SCNVector4Make(0,0,1,0)
The fourth, "w", is the rotation angle in radians. To rotate your object 45 degrees around the x axis, it looks like this:
SCNVector4Make(1,0,0,M_PI/4)
M_PI is π radians, i.e. 180 degrees.
from the SCNNode reference:
The four-component rotation vector specifies the direction of the rotation axis in the first three components and the angle of rotation (in radians) in the fourth.
In Swift 4.2 you can use the following values for 45 degrees rotation in SCNVector4Make(x, y, z, w):
X-axis:
node.rotation = SCNVector4Make(1, 0, 0, .pi/4)
Y-axis:
node.rotation = SCNVector4Make(0, 1, 0, .pi/4)
Z-axis:
node.rotation = SCNVector4Make(0, 0, 1, .pi/4)
Remember, the w parameter must be in radians,
so 3.14159 / 4 ≈ 0.78539 radians
(or 180 / 4 = 45 degrees).
I want to convert GPS location (latitude, longitude) into x,y coordinates.
I found many links about this topic and applied them, but they don't give me the correct answer!
I am following these steps to test the answer:
(1) First, I take two positions and calculate the distance between them using maps.
(2) Then I convert the two positions into x,y coordinates.
(3) Then I again calculate the distance between the two points in x,y coordinates
and see if it gives me the same result as in step (1) or not.
One of the solutions I found is the following, but it doesn't give me the correct answer!
latitude = Math.PI * latitude / 180;
longitude = Math.PI * longitude / 180;
// adjust position by radians
latitude -= 1.570795765134; // subtract 90 degrees (in radians)
// and switch z and y
xPos = (app.radius) * Math.sin(latitude) * Math.cos(longitude);
zPos = (app.radius) * Math.sin(latitude) * Math.sin(longitude);
yPos = (app.radius) * Math.cos(latitude);
I also tried this link, but it still doesn't work well for me!
Any help with how to convert from (latitude, longitude) to (x, y)?
Thanks,
No exact solution exists
There is no isometric map from the sphere to the plane. When you convert lat/lon coordinates from the sphere to x/y coordinates in the plane, you cannot hope that all lengths will be preserved by this operation. You have to accept some kind of deformation. Many different map projections do exist, which can achieve different compromises between preservations of lengths, angles and areas. For smallish parts of earth's surface, transverse Mercator is quite common. You might have heard about UTM. But there are many more.
The formulas you quote compute x/y/z, i.e. a point in 3D space. But even there you'd not get correct distances automatically. The shortest distance between two points on the surface of the sphere would go through that sphere, whereas distances on the earth are mostly geodesic lengths following the surface. So they will be longer.
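The difference matters: here is a small sketch (assuming a perfect sphere of radius 6371 km; city coordinates are rounded) comparing the straight-line 3D distance with the great-circle distance:

import math

R = 6371.0  # mean Earth radius in km (spherical model)

def to_xyz(lat_deg, lon_deg):
    # Convert lat/lon in degrees to a 3D point on the sphere
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (R * math.cos(lat) * math.cos(lon),
            R * math.cos(lat) * math.sin(lon),
            R * math.sin(lat))

def chord(p, q):
    # Straight-line distance through the sphere
    return math.dist(to_xyz(*p), to_xyz(*q))

def great_circle(p, q):
    # Geodesic distance along the surface (spherical law of cosines)
    la1, lo1 = math.radians(p[0]), math.radians(p[1])
    la2, lo2 = math.radians(q[0]), math.radians(q[1])
    c = (math.sin(la1) * math.sin(la2)
         + math.cos(la1) * math.cos(la2) * math.cos(lo2 - lo1))
    return R * math.acos(max(-1.0, min(1.0, c)))

new_york, london = (40.7128, -74.0060), (51.5074, -0.1278)
print(chord(new_york, london))         # ~5390 km, through the Earth
print(great_circle(new_york, london))  # ~5570 km, along the surface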
Approximation for small areas
If the part of the surface of the earth which you want to draw is relatively small, then you can use a very simple approximation. You can simply use the horizontal axis x to denote longitude λ, the vertical axis y to denote latitude φ. The ratio between these should not be 1:1, though. Instead you should use cos(φ0) as the aspect ratio, where φ0 denotes a latitude close to the center of your map. Furthermore, to convert from angles (measured in radians) to lengths, you multiply by the radius of the earth (which in this model is assumed to be a sphere).
x = r λ cos(φ0)
y = r φ
This is the simple equirectangular projection. In most cases, you'll be able to compute cos(φ0) only once, which makes subsequent computations of large numbers of points really cheap.
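A minimal sketch of this projection (assuming a spherical earth of radius 6371 km and angles in degrees; the sample points are two spots in Berlin roughly 1.9 km apart):

import math

R = 6371.0  # spherical Earth radius in km

def equirectangular(lat_deg, lon_deg, lat0_deg):
    # Project lat/lon (degrees) to planar x/y (km) around latitude lat0
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = R * lam * math.cos(math.radians(lat0_deg))
    y = R * phi
    return x, y

# Distances between projected points are good approximations near lat0:
x1, y1 = equirectangular(52.5200, 13.4050, 52.5)
x2, y2 = equirectangular(52.5163, 13.3777, 52.5)
print(math.hypot(x2 - x1, y2 - y1))  # ≈ 1.9 km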
I want to share how I managed the problem. I used the equirectangular projection just like @MvG said, but this method gives you X and Y positions relative to the globe (or the entire map), meaning you get global positions. In my case, I wanted to convert coordinates in a small area (about 500 m square), so I related the projected point to two other reference points, mapping their global positions to local (on-screen) positions, like this:
First, I choose 2 points (top-left and bottom-right) around the area where I want to project, just like this picture:
Once I have the global reference area in lat and lng, I do the same for screen positions. The objects containing this data are shown below.
//top-left reference point
var p0 = {
scrX: 23.69, // Minimum X position on screen
scrY: -0.5, // Minimum Y position on screen
lat: -22.814895, // Latitude
lng: -47.072892 // Longitude
}
//bottom-right reference point
var p1 = {
scrX: 276, // Maximum X position on screen
scrY: 178.9, // Maximum Y position on screen
lat: -22.816419, // Latitude
lng: -47.070563 // Longitude
}
var radius = 6371; //Earth Radius in Km
//## Now I can calculate the global X and Y for each reference point ##\\
// This function converts lat and lng coordinates to GLOBAL X and Y positions
function latlngToGlobalXY(lat, lng){
//Calculates x based on the cos of the average of the latitudes
//(latitudes are in degrees, so convert to radians for Math.cos;
// lat/lng themselves can stay in degrees, since the linear scale
// cancels out in the screen mapping below)
let x = radius*lng*Math.cos((p0.lat + p1.lat)/2 * Math.PI/180);
//Calculates y based on latitude
let y = radius*lat;
return {x: x, y: y}
}
// Calculate global X and Y for top-left reference point
p0.pos = latlngToGlobalXY(p0.lat, p0.lng);
// Calculate global X and Y for bottom-right reference point
p1.pos = latlngToGlobalXY(p1.lat, p1.lng);
/*
* This gives me the X and Y in relation to map for the 2 reference points.
* Now we have the global AND screen areas and then we can relate both for the projection point.
*/
// This function converts lat and lng coordinates to SCREEN X and Y positions
function latlngToScreenXY(lat, lng){
//Calculate global X and Y for projection point
let pos = latlngToGlobalXY(lat, lng);
//Calculate the percentage of Global X position in relation to total global width
pos.perX = ((pos.x-p0.pos.x)/(p1.pos.x - p0.pos.x));
//Calculate the percentage of Global Y position in relation to total global height
pos.perY = ((pos.y-p0.pos.y)/(p1.pos.y - p0.pos.y));
//Returns the screen position based on reference points
return {
x: p0.scrX + (p1.scrX - p0.scrX)*pos.perX,
y: p0.scrY + (p1.scrY - p0.scrY)*pos.perY
}
}
//# The usage is like this #\\
var pos = latlngToScreenXY(-22.815319, -47.071718);
$point = $("#point-to-project");
$point.css("left", pos.x+"em");
$point.css("top", pos.y+"em");
As you can see, I made this in javascript, but the calculations can be translated to any language.
P.S. I'm applying the converted positions to an HTML element whose id is "point-to-project". To use this piece of code in your project, you should create this element (styled with position: absolute) or change the "usage" block.
Since this page shows up at the top of Google when searching for this problem, I would like to provide a more practical answer. The answer by MvG is correct but rather theoretical.
I have made a track plotting app for the Fitbit Ionic in JavaScript. The code below is how I tackled the problem.
//LOCATION PROVIDER
index.js
var gpsFix = false;
var circumferenceAtLat = 0;
function locationSuccess(pos){
  if(!gpsFix){
    gpsFix = true;
    //meters per degree of longitude at this latitude (0.01745... = PI/180)
    circumferenceAtLat = Math.cos(pos.coords.latitude*0.01745329251)*111305;
  }
  pos.x = Math.round(pos.coords.longitude*circumferenceAtLat);
  pos.y = Math.round(pos.coords.latitude*110919);
  plotTrack(pos);
}
plotting.js
function plotTrack(position){
  let x = Math.round((this.segments[i].start.x - this.bounds.minX)*this.scale);
  let y = Math.round((this.bounds.maxY - this.segments[i].start.y)*this.scale); //height needs to be inverted
//redraw?
let redraw = false;
//x or y bounds?
if(position.x>this.bounds.maxX){
this.bounds.maxX = (position.x-this.bounds.minX)*1.1+this.bounds.minX; //increase by 10%
redraw = true;
}
if(position.x<this.bounds.minX){
this.bounds.minX = this.bounds.maxX-(this.bounds.maxX-position.x)*1.1;
redraw = true;
};
if(position.y>this.bounds.maxY){
this.bounds.maxY = (position.y-this.bounds.minY)*1.1+this.bounds.minY; //increase by 10%
redraw = true;
}
if(position.y<this.bounds.minY){
this.bounds.minY = this.bounds.maxY-(this.bounds.maxY-position.y)*1.1;
redraw = true;
}
if(redraw){
reDraw();
}
}
function reDraw(){
let xScale = device.screen.width / (this.bounds.maxX-this.bounds.minX);
let yScale = device.screen.height / (this.bounds.maxY-this.bounds.minY);
if(xScale<yScale) this.scale = xScale;
else this.scale = yScale;
//Loop through your objects to redraw all of them
}
For completeness I'd like to add my Python adaptation of @allexrm's code, which worked really well. Thanks again!
import math

radius = 6371  # Earth radius in km


class referencePoint:
    def __init__(self, scrX, scrY, lat, lng):
        self.scrX = scrX
        self.scrY = scrY
        self.lat = lat
        self.lng = lng


# Top-left and bottom-right reference points (screen and geo coordinates)
p0 = referencePoint(0, 0, 52.526470, 13.403215)
p1 = referencePoint(2244, 2060, 52.525035, 13.405809)


# This function converts lat and lng coordinates to GLOBAL X and Y positions
def latlngToGlobalXY(lat, lng):
    # Calculates x based on the cos of the average of the latitudes
    # (latitudes are in degrees, so convert to radians for math.cos;
    # lat/lng themselves can stay in degrees, since the linear scale
    # cancels out in the screen mapping below)
    x = radius * lng * math.cos(math.radians((p0.lat + p1.lat) / 2))
    # Calculates y based on latitude
    y = radius * lat
    return {'x': x, 'y': y}


# Calculate the global X and Y of both reference points once
p0.pos = latlngToGlobalXY(p0.lat, p0.lng)
p1.pos = latlngToGlobalXY(p1.lat, p1.lng)


# This function converts lat and lng coordinates to SCREEN X and Y positions
def latlngToScreenXY(lat, lng):
    # Calculate global X and Y for the projection point
    pos = latlngToGlobalXY(lat, lng)
    # Calculate the percentage of the global X position in relation to the total global width
    perX = (pos['x'] - p0.pos['x']) / (p1.pos['x'] - p0.pos['x'])
    # Calculate the percentage of the global Y position in relation to the total global height
    perY = (pos['y'] - p0.pos['y']) / (p1.pos['y'] - p0.pos['y'])
    # Returns the screen position based on the reference points
    return {
        'x': p0.scrX + (p1.scrX - p0.scrX) * perX,
        'y': p0.scrY + (p1.scrY - p0.scrY) * perY
    }


pos = latlngToScreenXY(52.525607, 13.404572)
pos['x'] and pos['y'] contain the translated x & y coordinates of the lat & lng (52.525607, 13.404572).
I hope this is helpful for anyone who, like me, is looking for a proper solution to the problem of translating lat/lng into a local reference coordinate system.
Best
It's better to convert to UTM coordinates and treat those as x and y.
import utm
u = utm.from_latlon(12.917091, 77.573586)
The result will be (779260.623156606, 1429369.8665238516, 43, 'P')
The first two values can be treated as x and y coordinates; 43P is the UTM zone, which can be ignored for small areas (zones are up to 668 km wide).
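For the distance test from the question, a short sketch (the second point here is made up for illustration): convert both points and take the planar Euclidean distance, which closely matches the geodesic distance as long as both points fall in the same zone:

import math
import utm

# Two nearby points (illustrative coordinates, same UTM zone)
e1, n1, zone, letter = utm.from_latlon(12.917091, 77.573586)
e2, n2, _, _ = utm.from_latlon(12.920000, 77.580000)

# Planar distance in metres (~770 m here); valid because both points
# share a zone
d = math.hypot(e2 - e1, n2 - n1)
print(d)

# utm.to_latlon converts back if needed
lat, lon = utm.to_latlon(e1, n1, zone, letter)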
I want to animate a sprite from point y1 to point y2 with some sort of deceleration. When it reaches point y2, the speed of the object will be 0, so it will come to a complete stop.
I Know the two points, and I know the object's starting speed.
The animation time is not so important to me. I can decide on it if needed.
for example: y1 = 0, y2 = 400, v0 = 250 pixels per second (= starting speed)
I read about easing functions, but I didn't understand how to actually implement one in the update loop.
Here's my update loop code, with the place that should somehow implement an easing function:
-(void)onTimerTick{
    double currentTime = CFAbsoluteTimeGetCurrent();
    float timeDelta = currentTime - self.lastUpdateTime; // elapsed seconds since last tick
    self.lastUpdateTime = currentTime;
    float pixelsToMove = ???? // here needs to be some formula using v0, timeDelta, y2, y1
    sprite.y += pixelsToMove;
}
Timing functions as Bézier curves
An easing timing function is basically a Bézier curve from (0,0) to (1,1) where the horizontal axis is "time" and the vertical axis is "amount of change". Since a cubic Bézier curve is mathematically defined as
start*(1-t)^3 + 3*c1*t(1-t)^2 + 3*c2*t^2(1-t) + end*t^3
you can insert any time value and get the amount of change that should be applied. Note that both time and change are normalized (in the range of 0 to 1).
Note that the variable t is not the time value, t is how far along the curve you have come. The time value is the x value of the point along the curve.
The curve below is a sample "ease" curve that starts off slow, goes faster and slows down in the end.
If, for example, a third of the time had passed, you would calculate the amount of change that corresponds to it and update the value of the animated property as
currentValue = beginValue + amountOfChange*(endValue-beginValue)
Example
Say you are animating the position from (50, 50) to (200, 150) using a curve with control points at (0.6, 0.0) and (0.5, 0.9) and a duration of 4 seconds (the control points are chosen to be close to those of the image above).
When 1 second of the animation has passed (25% of total duration) the value along the curve is:
(0.25, y) = (0,0)*(1-t)^3 + 3*(0.6,0)*t(1-t)^2 + 3*(0.5,0.9)*t^2(1-t) + (1,1)*t^3
This means that we can calculate t from the x components as:
0.25 = 1.8*t(1-t)^2 + 1.5*t^2(1-t) + t^3
Solving numerically (for example with Wolfram Alpha) gives t ≈ 0.1686.
If we then input that t in
y = 2.7*t^2*(1-t) + t^3
we will get the "amount of change" for when 1 second of the duration has passed.
Solving again gives y ≈ 0.0686, which means that only about 7% of the value has changed after 25% of the time. This is because the curve starts out slow (you can see in the image that it is mostly flat in the beginning).
So finally: 1 second into the animation the position is
(x, y) = (50, 50) + 0.0686 * (200-50, 150-50)
(x, y) = (50, 50) + 0.0686 * (150, 100)
(x, y) = (50, 50) + (10.29, 6.86)
(x, y) = (50+10.29, 50+6.86)
(x, y) = (60.29, 56.86)
I hope this example helps you understanding how to use timing functions.
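To make this concrete in code, here is a minimal sketch (in Python rather than Objective-C; the control points and duration are the ones assumed in the example above) that solves the x equation for t by bisection and then evaluates y, exactly as done by hand above:

def bezier(a, b, c, d, t):
    # Cubic Bézier coordinate for parameter t (Bernstein form)
    s = 1.0 - t
    return a*s**3 + 3*b*t*s**2 + 3*c*t**2*s + d*t**3

def ease(time_fraction, c1x=0.6, c1y=0.0, c2x=0.5, c2y=0.9):
    # Map normalized time (x) to normalized change (y).
    # Solve x(t) = time_fraction for t by bisection; x(t) is
    # monotonic for timing-function control points with x in [0, 1].
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2
        if bezier(0.0, c1x, c2x, 1.0, mid) < time_fraction:
            lo = mid
        else:
            hi = mid
    return bezier(0.0, c1y, c2y, 1.0, (lo + hi) / 2)

# 1 second into a 4 second animation from y1=50 to y2=150:
y1, y2, duration, elapsed = 50.0, 150.0, 4.0, 1.0
y = y1 + ease(elapsed / duration) * (y2 - y1)
print(y)  # ≈ 56.86, matching the worked example

In the question's update loop, you would accumulate the elapsed time each tick, compute the new absolute position this way, and set sprite.y to it (rather than adding a per-tick delta).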