Need a deep explanation of viewport / perspective / frustum calculations - opengl-es-2.0

I have a lot of tutorials and books, but I'm unable to understand how my viewport, my near and far distances, etc. are used to calculate the perspective / frustum matrix.
I have the learningwebgl lessons, but I don't understand what viewport and 3D-space adjustments are made. What is my initial window projection size? Why do I see the triangle and square placed at z = -7?
Another thing I don't understand: does a near plane of 0.001 create the projection window just in front of my nose? So what are my projection window's dimensions?
I need very deep and basic help.
Can anybody help me? Some really useful links? I need graphical examples showing and teaching how the frustum is calculated.
Thanks

There's this
http://games.greggman.com/game/webgl-3d-perspective/
Imagine you're in 2D. You have a canvas that's 200x100 pixels. If you draw at x = 201 it will be off the canvas. Similarly at x = -1 it will be off the canvas.
In WebGL it works in a 3D space that goes from -1 to +1 in x, y and z. The perspective / frustum matrix is the matrix that takes your 3D scene and converts it to this -1 / +1 space. The near and far values define what range in front of the camera gets converted to the -1 / +1 "clip space". Anything outside that range will be clipped, just like in the 2D example.
If you set near to 10 and far to 100, then something at Z = 9 will be clipped because it's too near, and something at Z = 101 will also be clipped because it's too far. More specifically, the near and far settings form a matrix such that a point at Z = near becomes -1 when multiplied by the matrix, and a point at Z = far becomes +1 when multiplied by the matrix.
The viewport setting tells WebGL how to convert from the -1 to +1 space back into pixels.
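To make that concrete, here's a minimal Python sketch (the helper names are mine; the matrix itself is the standard OpenGL-style perspective matrix) showing that a point on the near plane comes out at z = -1 after the perspective divide and a point on the far plane comes out at z = +1:

import math

def perspective(fov_deg, aspect, near, far):
    # Standard OpenGL-style perspective matrix, written out row by row.
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def transform(m, v):
    # Multiply a 4x4 matrix by a 4-component point, then do the perspective divide.
    x, y, z, w = (sum(m[r][c] * v[c] for c in range(4)) for r in range(4))
    return (x / w, y / w, z / w)

m = perspective(60.0, 16.0 / 9.0, 10.0, 100.0)
print(transform(m, (0.0, 0.0, -10.0, 1.0)))   # z -> -1.0: on the near plane
print(transform(m, (0.0, 0.0, -100.0, 1.0)))  # z -> +1.0: on the far plane

Points between near and far come out with z between -1 and +1; everything else is clipped, exactly as described above.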

How to find the point of collision between an irregular shape (built out of 3 circles) and a line

I'm making a program in which many weird shapes are drawn onto a canvas. Right now I'm trying to implement the last, and possibly hardest, one.
For this particular shape I need a way to find the location (on a 2D canvas) where a line hits the shape. The following image is an example of what I have right now.
The black dots are the points that are known to me (I also have the location of the center of the three open circles and the radius of these circles). Each of the three outer lines needs a line towards the center dot, ending at the point where it hits the circle. This shape can be turned 90, 180 or 270 degrees.
The shape should look something like the following:
If you need any other information, please ask me in the comments. I'm not very good at math, so please be gentle, thanks!
If A and B are points forming a line, then you can describe any point on that line using coordinates:
x = t·Ax + (1−t)·Bx
y = t·Ay + (1−t)·By
0 ≤ t ≤ 1
You can also describe the circle with center M and radius r as
(x − Mx)² + (y − My)² = r²
So take the x and y from the equations of the line, and plug them into the equation of the circle. You obtain a quadratic equation in t. Its two solutions describe the two points of intersection between the line and circle. In your example, only one of them lies on the line segment, i.e. satisfies 0 ≤ t ≤ 1. The other describes a point on the extension of the segment past its endpoint. Take the correct value for t back to the equations of the line, and you obtain the x and y coordinates of the point of intersection.
If you don't know up front which circle you want to intersect with a given line, then intersect all three and choose the most appropriate point afterwards. Probably that is the point closest to the outside starting point of the line segment. The same goes in cases where both points of intersection lie on the segment.
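Here's a small Python sketch of that substitution (the function and variable names are my own). It plugs the parametric line into the circle equation, solves the resulting quadratic in t, and keeps only the solutions with 0 ≤ t ≤ 1:

import math

def line_circle_intersections(A, B, M, r):
    # Point on the segment: P(t) = t*A + (1 - t)*B = B + t*(A - B), 0 <= t <= 1.
    dx, dy = A[0] - B[0], A[1] - B[1]
    ex, ey = B[0] - M[0], B[1] - M[1]
    # Substituting into (x - Mx)^2 + (y - My)^2 = r^2 gives a*t^2 + b*t + c = 0.
    a = dx * dx + dy * dy
    b = 2.0 * (ex * dx + ey * dy)
    c = ex * ex + ey * ey - r * r
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []  # the line misses the circle entirely
    roots = [(-b + s * math.sqrt(disc)) / (2.0 * a) for s in (1.0, -1.0)]
    # Keep only the intersections that lie on the segment itself.
    return [(t * A[0] + (1 - t) * B[0], t * A[1] + (1 - t) * B[1])
            for t in roots if 0 <= t <= 1]

# A horizontal segment through a circle of radius 2 centred at (5, 0):
print(line_circle_intersections((0, 0), (10, 0), (5, 0), 2))  # [(3.0, 0.0), (7.0, 0.0)]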

Obtaining 3D location of an object being looked at by a camera with known position and orientation

I am building an augmented reality application and I have the yaw, pitch, and roll for the camera. I want to start placing objects in the 3D environment. I want to make it so that when the user clicks, a 3D point pops up right where the camera is pointed (center of the 2D screen) and when the user moves, the point moves accordingly in 3D space. The camera does not change position, only orientation. Is there a proper way to recover the 3D location of this point? We can assume that all points are equidistant from the camera location.
I am able to accomplish this independently for two axes (OpenGL default orientation). This works for changes in the vertical axis:
x = -sin(pitch)
y = cos(pitch)
z = 0
This also works for changes in the horizontal axis:
x = 0
y = -sin(yaw)
z = cos(yaw)
I was thinking that I should just combine these into:
x = -sin(pitch)
y = sin(yaw) * cos(pitch)
z = cos(yaw)
and that seems to be close, but not exactly correct. Any suggestions would be greatly appreciated!
It sounds like you just want to convert from a rotation vector (pitch, yaw, roll) to a rotation matrix. The conversion can be seen in the Wikipedia article on rotation matrices. The idea is that once you have constructed your matrix, to transform any point you simply compute
final_pos = rot_mat * initial_pose
where final_pos and initial_pose are 3x1 vectors and rot_mat is a 3x3 matrix.
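As a sketch of what that looks like in Python (the axis order and the rest direction (0, 0, -1) are assumptions about your conventions; adjust both to match your setup):

import numpy as np

def rot_x(a):  # pitch: rotation about the x axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # yaw: rotation about the y axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # roll: rotation about the z axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def look_direction(yaw, pitch, roll):
    # Compose one rotation matrix, then rotate the camera's rest direction.
    rot_mat = rot_y(yaw) @ rot_x(pitch) @ rot_z(roll)
    initial_pose = np.array([0.0, 0.0, -1.0])  # OpenGL default: looking down -z
    return rot_mat @ initial_pose

print(look_direction(0.0, 0.0, 0.0))        # [ 0.  0. -1.]
print(look_direction(np.pi / 2, 0.0, 0.0))  # yawed 90 degrees: points down -x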

Barycentric coordinates texture mapping

I want to map textures with correct perspective for 3D rendering. I am using barycentric coordinates to locate points on the faces of triangles. Simple affine interpolation gave me that standard, weird-looking result. This is what I did to correct the perspective, but it seems to have only made the distortion greater:
three triangle vertices: v1, v2, v3
vertex coordinates: v_.x, v_.y, v_.z
texture coordinates: v_.u, v_.v
barycentric coordinates corresponding to the vertices: b1, b2, b3
I am trying to get the correct texture coordinates U and V:
z = b1/v1.z + b2/v2.z + b3/v3.z
U = (b1*v1.u/v1.z + b2*v2.u/v2.z + b3*v3.u/v3.z) / z
V = (b1*v1.v/v1.z + b2*v2.v/v2.z + b3*v3.v/v3.z) / z
This SHOULD work, shouldn't it? Why isn't it working?
EDIT: The response on this page looks useful, but I am unsure what the w coordinate is. Maybe somebody could just explain that, which would also likely solve my problem. http://www.gamedev.net/topic/593669-perspective-correct-barycentric-coordinates/
note: My tags were all wrong at first. That is now fixed.
Okay, this one I DID manage to solve on my own. I was dividing by the z coordinate in screen space. The solution is to divide by the homogeneous w coordinate instead.
Well, that took a while to figure out.
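This is roughly what the fix looks like in Python (the names are mine; the key point is that the third component fed in is the clip-space w of each vertex, the value the rasterizer divided by, not the screen-space z):

def perspective_correct_uv(b, verts):
    # b: the barycentric weights (b1, b2, b3) computed in screen space.
    # verts: three (u, v, w) tuples, where w is each vertex's clip-space w.
    one_over_w = sum(bi / vw for bi, (_, _, vw) in zip(b, verts))
    u = sum(bi * vu / vw for bi, (vu, _, vw) in zip(b, verts)) / one_over_w
    v = sum(bi * vv / vw for bi, (_, vv, vw) in zip(b, verts)) / one_over_w
    return u, v

# Midpoint of an edge whose endpoints sit at different depths: the correct
# u is pulled toward the nearer (smaller-w) vertex, 0.25 instead of 0.5.
print(perspective_correct_uv((0.5, 0.5, 0.0),
                             [(0.0, 0.0, 1.0), (1.0, 0.0, 3.0), (0.0, 1.0, 1.0)]))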

Extract transform and rotation matrices from homography?

I have 2 consecutive images from a camera and I want to estimate the change in camera pose:
I calculate the optical flow:
' Load the two consecutive frames and convert them to grayscale
Const MAXFEATURES As Integer = 100
imgA = New Image(Of [Structure].Bgr, Byte)("pic1.bmp")
imgB = New Image(Of [Structure].Bgr, Byte)("pic2.bmp")
grayA = imgA.Convert(Of Gray, Byte)()
grayB = imgB.Convert(Of Gray, Byte)()
' Scratch buffers for the image pyramids used by pyramidal Lucas-Kanade
imagesize = cvGetSize(grayA)
pyrBufferA = New Emgu.CV.Image(Of Emgu.CV.Structure.Gray, Byte) _
    (imagesize.Width + 8, imagesize.Height / 3)
pyrBufferB = New Emgu.CV.Image(Of Emgu.CV.Structure.Gray, Byte) _
    (imagesize.Width + 8, imagesize.Height / 3)
' Pick strong corners in frame A and refine them to sub-pixel accuracy
features = MAXFEATURES
featuresA = grayA.GoodFeaturesToTrack(features, 0.01, 25, 3)
grayA.FindCornerSubPix(featuresA, New System.Drawing.Size(10, 10), _
    New System.Drawing.Size(-1, -1), _
    New Emgu.CV.Structure.MCvTermCriteria(20, 0.03))
features = featuresA(0).Length
' Track those corners into frame B with pyramidal Lucas-Kanade optical flow
Emgu.CV.OpticalFlow.PyrLK(grayA, grayB, pyrBufferA, pyrBufferB, _
    featuresA(0), New Size(25, 25), 3, _
    New Emgu.CV.Structure.MCvTermCriteria(20, 0.03D), _
    flags, featuresB(0), status, errors)
' Copy the matched point pairs into two N x 2 matrices
pointsA = New Matrix(Of Single)(features, 2)
pointsB = New Matrix(Of Single)(features, 2)
For i As Integer = 0 To features - 1
    pointsA(i, 0) = featuresA(0)(i).X
    pointsA(i, 1) = featuresA(0)(i).Y
    pointsB(i, 0) = featuresB(0)(i).X
    pointsB(i, 1) = featuresB(0)(i).Y
Next
' Fit a homography between the two point sets, rejecting outliers with RANSAC
Dim Homography As New Matrix(Of Double)(3, 3)
cvFindHomography(pointsA.Ptr, pointsB.Ptr, Homography, HOMOGRAPHY_METHOD.RANSAC, 1, 0)
and it looks right, the camera moved leftwards and upwards:
Now I want to find out how much the camera moved and rotated. If I declare my camera position and what it's looking at:
' Create camera location at origin and lookat (straight ahead, 1 in the Z axis)
Location = New Matrix(Of Double)(2, 3)
location(0, 0) = 0 ' X location
location(0, 1) = 0 ' Y location
location(0, 2) = 0 ' Z location
location(1, 0) = 0 ' X lookat
location(1, 1) = 0 ' Y lookat
location(1, 2) = 1 ' Z lookat
How do I calculate the new position and lookat?
If I'm doing this all wrong or if there's a better method, any suggestions would be very welcome, thanks!
For pure camera rotation, R = A⁻¹HA. To prove this, consider the plane-to-image homographies H1 = A and H2 = AR, where A is the camera intrinsic matrix. Then the homography between the two images is H12 = H2·H1⁻¹ = A·R·A⁻¹, from which you can recover R = A⁻¹·H12·A.
Camera translation is harder to estimate. If the camera translates, you have to find a fundamental matrix first (not a homography): x′ᵀFx = 0, and then convert it into an essential matrix E = AᵀFA. Then you can decompose E into rotation and translation, E = [t]ₓR, where [t]ₓ is the skew-symmetric cross-product matrix built from the translation vector. The decomposition is not obvious; see this.
The rotation you get will be exact, while the translation vector can be found only up to scale. Intuitively, this scale ambiguity means that from the two images alone you cannot really say whether the objects are close and small or far away and large. To disambiguate, we may use objects of familiar size, a known distance between two points, etc.
Finally, note that the human visual system has a similar problem: though we "know" the distance between our eyes, when they are converged on an object the disparity is always zero, and from disparity alone we cannot say what the distance is. Human vision relies on triangulation from the eyes' vergence signal to figure out absolute distance.
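As a sketch of the pure-rotation case in Python (the intrinsic matrix values here are made up for the example; substitute your calibrated one):

import numpy as np

# Hypothetical intrinsics: focal length 700 px, principal point (320, 240).
A = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def rotation_from_homography(H, A):
    # Pure rotation: H = A R A^-1, so R = A^-1 H A.
    # A homography is only defined up to scale, and a noisy H gives an R that
    # is only approximately orthonormal, so project back onto a true rotation.
    R = np.linalg.inv(A) @ H @ A
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt

# Round trip: build H from a known 10-degree yaw and recover the rotation.
a = np.radians(10.0)
R_true = np.array([[np.cos(a), 0.0, np.sin(a)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(a), 0.0, np.cos(a)]])
H = A @ R_true @ np.linalg.inv(A)
print(np.allclose(rotation_from_homography(H, A), R_true))  # True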
Well, what you're looking at is, in simple terms, a Pythagorean theorem problem, a² + b² = c². However, when it comes to camera-based applications, things are not very easy to determine accurately. You have found half of the detail you need for "a"; however, finding "b" or "c" is much harder.
The Short Answer
Basically it can't be done with a single camera. But it can be done with two cameras.
The Long-Winded Answer (I thought I'd explain in more depth; no pun intended)
I'll try to explain. Say we select two points within our image and move the camera left. We know the distance from the camera of each point: B1 is 20mm away and B2 is 40mm away. Now let's assume that we process the image and our measurements are A1 = (0,2) and A2 = (0,4), related to B1 and B2 respectively. Now A1 and A2 are not measurements; they are pixels of movement.
What we now have to do is multiply the change in A1 and A2 by a calculated constant, which will be the real-world distance per pixel at B1 and B2. NOTE: each of these constants is different according to the distance B*. This all relates to the angle of view, more commonly called the field of view in photography, at different distances. You can calculate the constant accurately if you know the size of each pixel on the camera's CCD and the f-number of the lens inside the camera.
I would expect this isn't the case, so at different distances you have to place an object whose length you know and see how many pixels it takes up. Close up, you can use a ruler to make things easier. From these measurements you form a curve with a line of best fit, where the x-axis is the distance of the object and the y-axis is the pixel-to-distance constant that you must multiply your movement by.
So how do we apply this curve? Well, it's guesswork. In theory, the larger the measured movement A*, the closer the object is to the camera. In our example, say the constants for A1 and A2 are 5mm and 3mm per pixel respectively; we would then know that point B1 has moved 10mm (2 × 5mm) and B2 has moved 12mm (4 × 3mm). But let's face it: we will never know B, and we will never be able to tell whether a movement of 20 pixels is an object close up not moving far or an object far away moving a much greater distance. This is why things like the Xbox Kinect use additional sensors to get depth information that can be tied to the objects within the image.
What you're attempting could be done with two cameras: since the distance between the cameras is known, the movement can be calculated much more accurately (effectively without using a depth sensor). The maths behind this is quite involved, so I would suggest looking up some journal papers on the subject. If you would like me to explain the theory, I can attempt to.
All my experience comes from designing high-speed video acquisition and image processing for my PhD, so trust me, it can't be done with one camera, sorry. I hope some of this helps.
Cheers
Chris
[EDIT]
I was going to add a comment, but this is easier given the bulk of information:
Since it is the Kinect, I will assume you have relevant depth information associated with each point; if not, you will need to figure out how to get it.
The equation you will need to start off with is the Field of View (FOV) relation:
o/d = i/f
Where:
f is the focal length of the lens, usually given in mm (18, 28, 30, 50 are standard examples)
d is the object's distance from the lens, gathered from the Kinect data
o is the object dimension (the "field of view"), perpendicular to and bisected by the optical axis.
i is the image dimension (the "field stop"), perpendicular to and bisected by the optical axis.
We need to calculate i (a diagonal measurement), where o is our unknown.
We will need the size of a pixel on the CCD. This will be in micrometres (µm), and you will need to find this information out. For now we will take it as 14µm, which is standard for a midrange area-scan camera.
First we need to work out the i horizontal dimension (ih), which is the number of pixels across the width of the sensor multiplied by the size of a CCD pixel (we will use a 640 × 320 sensor):
so: ih = 640 × 14µm = 8960µm
= 8960/1000 = 8.96mm
Now we need the i vertical dimension (iv), same process but with the height:
so: iv = (320 × 14µm) / 1000 = 4.48mm
Now i is found by the Pythagorean theorem, a² + b² = c²:
so: i = sqrt(ih² + iv²)
= 10.02mm
Now we will assume we have a 28mm lens. Again, this exact value will have to be found out. So our equation rearranged to give us o is:
o = (i * d) / f
Remember o will be diagonal (we will assume our object or point is 50mm away):
o = (10.02mm * 50mm) / 28mm
= 17.89mm
Now we need to work out the o horizontal dimension (oh) and the o vertical dimension (ov), as these will give us the real-world distance per pixel that the object has moved. Since i is directly proportional to o, we work out the ratio k:
k = i/o
= 10.02 / 17.89
= 0.56
so:
o horizontal dimension (oh):
oh = ih / k
= 8.96mm / 0.56 = 16mm across the image, i.e. 16mm / 640 pixels ≈ 0.025mm per pixel
o vertical dimension (ov):
ov = iv / k
= 4.48mm / 0.56 = 8mm across the image, i.e. 8mm / 320 pixels ≈ 0.025mm per pixel
Now we have the constants we require, let's use them in an example. If our object at 50mm moves from position (0,0) to (2,4), then the measurements in real life are:
(2 × 0.025mm, 4 × 0.025mm) = (0.05mm, 0.10mm)
Again, the Pythagorean theorem, a² + b² = c²:
total distance = sqrt(0.05² + 0.10²)
≈ 0.11mm
Complicated, I know, but once you have this in a program it's easier (see the sketch below). For every point you will have to repeat at least half the process, as d, and therefore o, changes for every point you're examining.
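Here's that process as a small Python sketch (the function name is mine; the numbers are the ones from the worked example above):

import math

def mm_per_pixel(pixel_um, width_px, height_px, focal_mm, distance_mm):
    # Sensor dimensions in mm, from the pixel size and pixel counts.
    ih = width_px * pixel_um / 1000.0
    iv = height_px * pixel_um / 1000.0
    # o/d = i/f, applied per axis: the field of view at the object's distance.
    oh = ih * distance_mm / focal_mm
    ov = iv * distance_mm / focal_mm
    return oh / width_px, ov / height_px  # real-world mm per pixel

sx, sy = mm_per_pixel(14.0, 640, 320, 28.0, 50.0)  # 0.025mm per pixel each way
dx_px, dy_px = 2, 4                                # measured pixel movement
dx_mm, dy_mm = dx_px * sx, dy_px * sy
print(round(math.hypot(dx_mm, dy_mm), 3), "mm")    # ~0.112mm of real movement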
Hope this gets you on your way,
Cheers
Chris

Objective C Game Geometry question

I'm creating a simple game and have reached a point where I feel helpless. I was good at geometry, but that was long ago in school; now I'm trying to refresh my memory.
Let's say I have the iPad screen. An object's x,y position at one given point in time and its x,y position at another point in time are stored in two variables.
Question:
How do I find the third position of the object at the edge of the screen, given the previous two positions, assuming the object keeps moving in the same direction (along the line from point 1 to point 2)?
Thanks in advance.
Let v1 and v2 be the vectors representing the two points, and let t0 be the time between them. Let t be the current time.
Then our location vector v3 is given by v3 = v1 + (v2 − v1)·t/t0
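As a Python sketch (the names are mine), you can extend that same formula until the point reaches a screen border by solving for the t at which x or y hits an edge:

def point_on_path(p1, p2, t, t0=1.0):
    # v3 = v1 + (v2 - v1)*t/t0: position at time t given positions at 0 and t0.
    s = t / t0
    return (p1[0] + (p2[0] - p1[0]) * s, p1[1] + (p2[1] - p1[1]) * s)

def exit_point(p1, p2, width, height):
    # Smallest positive t at which the ray p1 -> p2 crosses a screen border.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    candidates = []
    if dx: candidates += [(0 - p1[0]) / dx, (width - p1[0]) / dx]
    if dy: candidates += [(0 - p1[1]) / dy, (height - p1[1]) / dy]
    t = min(t for t in candidates if t > 0)
    return point_on_path(p1, p2, t)

# Object seen at (100, 100) then (110, 105) on a 1024x768 screen:
print(exit_point((100, 100), (110, 105), 1024, 768))  # (1024.0, 562.0): right edge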
If the object is moving in the same direction along a horizontal line, the next position given x and y would be
x+1, y
If the object is moving in the same direction in a vertical line it would be
x, y+1
If the object is moving in a diagonal up-right
x+1,y+1
diagonal down-right
x+1, y-1
diagonal down-left
x-1, y-1
diagonal up-left
x-1, y+1
So something general would be:
newPosition = (x+1, y) // if you wish to move forward to the right; try to handle all cases
All the cases above work if the object is moving forward; if it is moving backwards, just swap + for -. Basically, think of the object as moving in a Cartesian coordinate system, where x is horizontal and y is vertical.
I think you can get the idea from these cases ;)