calculate gaze velocity in VR using normalized direction - physics

I'd like to calculate the 3D velocity vector of the user's eye movement in a VR space.
For gaze data, I have a normalized gaze origin and a normalized gaze direction.
How can I use the normalized gaze direction to calculate the velocity in 3D?
Thanks
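Since a normalized direction carries no positional scale of its own, eye-movement speed is commonly reported as angular velocity: the angle between consecutive gaze directions divided by the sampling interval. Below is a minimal sketch under that assumption; the function and variable names (gazeAngularVelocity, dirPrev, dirCurr, dt) are hypothetical, not from any particular SDK.

    // Angular gaze velocity from two consecutive, unit-length gaze directions
    // sampled dt seconds apart. All names are illustrative placeholders.
    #include <algorithm>
    #include <cmath>

    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Returns radians per second; multiply by 180/pi for degrees per second.
    double gazeAngularVelocity(const Vec3& dirPrev, const Vec3& dirCurr, double dt) {
        double c = std::clamp(dot(dirPrev, dirCurr), -1.0, 1.0); // guard acos against rounding
        double angle = std::acos(c);                             // angle between the two samples
        return angle / dt;
    }

If a true 3D (linear) velocity of the point of regard is needed, the gaze ray origin + t * direction first has to be intersected with the scene geometry to obtain a 3D point per sample; the direction alone is not enough for that.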

Related

2D shape detection in 3D pointcloud

I'm working on a project to detect the position and orientation of a paper plane.
To collect the data, I'm using an Intel Realsense D435, which gives me accurate, clean depth data to work with.
Now I arrived at the problem of detecting the 2D paper plane silhouette from the 3D point cloud data.
Here is an example of the data (I put the plane on a stick for testing, this will not be in the final implementation):
https://i.stack.imgur.com/EHaEr.gif
Basically, I have:
A 3D point cloud with points on the plane
A 2D shape of the plane
I would like to calculate what rotations/translations are needed to align the 2D shape to the 3D point cloud as accurately as possible.
I've searched online, but couldn't find a good way to do it. One way would be to use Iterative Closest Point (ICP): first take a calibration point cloud of the plane in a known orientation, then align it with the current orientation. But from what I've heard, ICP doesn't perform well if the point clouds aren't already roughly aligned at the start.
Any help is appreciated! Coding language doesn't matter.
Does your 3D point cloud have outliers? How many, and of what kind?
How did you use ICP exactly?
One way would be using ICP, with a hand-crafted initial guess using
pcl::transformPointCloud (*cloud_in, *cloud_icp, transformation_matrix);
(to mitigate the problem that ICP needs a close initial alignment to work).
What you actually want is the plane model that describes the position and orientation of your point cloud, right?
A good estimator of your underlying plane can be found with RANSAC, e.g. pcl::RandomSampleConsensus with a plane sample-consensus model.
You can then get the computed model coefficients.
Now finding the correct transformation is just: How to calculate transformation matrix from one plane to another?
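A minimal sketch of the RANSAC plane fit suggested above, assuming PCL, an already-filled pcl::PointXYZ cloud, and an arbitrary inlier threshold:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/sample_consensus/ransac.h>
    #include <pcl/sample_consensus/sac_model_plane.h>
    #include <Eigen/Core>

    int main() {
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
        // ... fill cloud from the RealSense frame ...

        // Fit a plane model to the cloud with RANSAC
        pcl::SampleConsensusModelPlane<pcl::PointXYZ>::Ptr model(
            new pcl::SampleConsensusModelPlane<pcl::PointXYZ>(cloud));
        pcl::RandomSampleConsensus<pcl::PointXYZ> ransac(model);
        ransac.setDistanceThreshold(0.005);   // inlier threshold in meters, tune to your sensor noise
        ransac.computeModel();

        Eigen::VectorXf coefficients;         // [a, b, c, d] of the plane ax + by + cz + d = 0
        ransac.getModelCoefficients(coefficients);
        return 0;
    }

The normal (a, b, c) and offset d of that plane are what the linked question then turns into a plane-to-plane transformation.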

Compute road plane normal with an embedded camera

I am developing some computer vision algorithms for vehicle applications.
I am facing a problem and some help would be appreciated.
Let's say we have a calibrated camera attached to a vehicle which captures a frame of the road ahead of the vehicle:
Initial frame
We apply a first filter to keep only the road markers and return a binary image:
Filtered image
Once the road lanes are separated, we can approximate the lanes with linear expressions and detect the vanishing point:
Objective
What I am looking to recover is the equation of the normal n in the image, without any prior knowledge of the rotation matrix and the translation vector. Nevertheless, I assume L1, L2 and L3 lie on the same plane.
In 3D space the problem is quite simple. In the 2D image plane it is more complex, since the camera's projective transformation does not preserve angles, and I am not able to find a way to figure out the equation of the normal.
Do you have any idea about how I could compute the normal?
Thanks,
Pm
No can do, you need a minimum of two independent vanishing points (i.e. vanishing points representing the images of the points at infinity of two different pencils of parallel lines).
If you have them, the answer is trivial: express the image positions of said vanishing points in homogeneous coordinates, normalized by K^-1 (your camera is calibrated). Then their cross product is equal (up to scale) to the normal vector of the 3D plane said pencils define, decomposed in camera coordinates.
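A minimal sketch of that recipe, assuming OpenCV, a known calibration matrix K, and two vanishing points already detected in pixel coordinates (all numeric values below are placeholders):

    #include <opencv2/core.hpp>

    int main() {
        // Calibration matrix K (placeholder values)
        cv::Matx33d K(800,   0, 320,
                        0, 800, 240,
                        0,   0,   1);

        // Two vanishing points in pixel coordinates, written homogeneously
        cv::Vec3d v1(350.0, 200.0, 1.0);
        cv::Vec3d v2(900.0, 210.0, 1.0);

        // Bring them into normalized camera coordinates before taking the cross product
        cv::Vec3d d1 = K.inv() * v1;
        cv::Vec3d d2 = K.inv() * v2;

        // Plane normal (up to scale) in camera coordinates
        cv::Vec3d n = d1.cross(d2);
        double len = cv::norm(n);
        cv::Vec3d nUnit(n[0] / len, n[1] / len, n[2] / len);
        (void)nUnit;
        return 0;
    }

Equivalently, the cross product of the two pixel-coordinate vanishing points gives the horizon line l, and K^T l gives the same normal up to scale, which is the relation used in the answer further below.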
Your information is insufficient, as the others have stated. If your data is coming from a video, a common way to get the road ground plane is to take two or more images, compute the associated homography, and then decompose the homography matrix into the surface normal and relative camera motion. You can do the decomposition with OpenCV's decomposeHomographyMat method. You can compute the homography by associating four or more point correspondences using OpenCV's findHomography method. If it is hard to determine these correspondences, it is also possible to do it with a combination of point and line correspondences (paper); however, this is not implemented in OpenCV.
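A minimal sketch of that homography route, assuming OpenCV and four or more point correspondences on the road plane between two frames (the point values and K are placeholders):

    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <vector>

    int main() {
        // Corresponding road-plane points in two consecutive frames (placeholders)
        std::vector<cv::Point2f> pts1 = {{100.f, 400.f}, {500.f, 410.f}, {120.f, 300.f}, {480.f, 310.f}, {300.f, 350.f}};
        std::vector<cv::Point2f> pts2 = {{110.f, 395.f}, {505.f, 405.f}, {128.f, 298.f}, {486.f, 306.f}, {309.f, 346.f}};

        // Camera calibration matrix (placeholder values)
        cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,  0, 800, 240,  0, 0, 1);

        // Homography induced by the road plane between the two frames
        cv::Mat H = cv::findHomography(pts1, pts2, cv::RANSAC);

        // Decompose into up to four {R, t, n} candidates; the physically valid one
        // still has to be selected, e.g. by requiring points to lie in front of the camera.
        std::vector<cv::Mat> rotations, translations, normals;
        cv::decomposeHomographyMat(H, K, rotations, translations, normals);
        return 0;
    }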
You do not have sufficient information in the example you provide.
If you are wondering "which way is up", one thing you might be able to do is to detect the line l on the horizon. If K is the calibration matrix then K^T l will give you the plane normal in 3D relative to your camera. (The general equation for backprojection of a line l in the image to a plane E through the center of projection is E = P^T l with a 3x4 projection matrix P.)
A better alternative might be to establish a homography to rectify the ground-plane. To do so, however, you need at least four non-collinear points with known coordinates - or four lines, no three of which may be parallel.
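For reference, a tiny sketch of the K^T l relation above, assuming OpenCV and a horizon line already expressed in homogeneous image coordinates (the numbers are placeholders):

    #include <opencv2/core.hpp>

    int main() {
        cv::Matx33d K(800,   0, 320,
                        0, 800, 240,
                        0,   0,   1);
        cv::Vec3d l(0.0, 1.0, -230.0);   // horizon line l: ax + by + c = 0 (placeholder)
        cv::Vec3d n = K.t() * l;         // plane normal in camera coordinates, up to scale
        (void)n;
        return 0;
    }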

Triaxial accelerometer tilt compensation

I have a board which provides acceleration values from a triaxial accelerometer (X, Y, Z; Y is the up vector). I want to get the acceleration direction in the XZ-plane. But the board may be mounted with a tilt. Can I compensate for the tilt, and how would I do that? I appreciate any hint; it would be nice if someone could point me in the right direction.
You need to calibrate all accelerometer products so that they know which direction is normally down. Based on your calibration, you get the true (x,y,z) coordinates in relation to the gravity component. The calibration values have to be added/subtracted from each accelerometer reading.
Alternatively (and less professionally), you could make some sort of adaptive system which continuously saves the (x,y,z) coordinates whenever there is a total acceleration of 1G +/- margins. You can then apply a median filter to the sorted samples and hopefully you'll get the real coordinates of (x,y,z) corresponding to the gravity component. In order for this to be reliable, you'd have to implement some kind of AI, so that the program learns over time and stores the likely coordinates in NVM. Otherwise the program would always fail each time you get a use case where the total acceleration is 1G in any direction.
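A minimal sketch of the calibration idea above: average the readings while the board is at rest to get the measured gravity direction, build the rotation that maps it onto the nominal down axis (-Y, since Y is the up vector), and apply that rotation to every later sample before taking its X and Z components. Plain C++; all names and numbers are illustrative placeholders, and heading about the vertical axis stays ambiguous because gravity alone cannot fix it.

    #include <array>
    #include <cmath>

    using Vec3 = std::array<double, 3>;
    using Mat3 = std::array<std::array<double, 3>, 3>;

    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
    }
    static double dot(const Vec3& a, const Vec3& b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    // Rotation that maps unit vector 'from' onto unit vector 'to' (Rodrigues formula).
    // Assumes the two vectors are not exactly opposite.
    static Mat3 rotationBetween(const Vec3& from, const Vec3& to) {
        Vec3 v = cross(from, to);
        double c = dot(from, to);
        double k = 1.0 / (1.0 + c);
        return {{
            { v[0]*v[0]*k + c,    v[0]*v[1]*k - v[2], v[0]*v[2]*k + v[1] },
            { v[1]*v[0]*k + v[2], v[1]*v[1]*k + c,    v[1]*v[2]*k - v[0] },
            { v[2]*v[0]*k - v[1], v[2]*v[1]*k + v[0], v[2]*v[2]*k + c    }
        }};
    }

    static Vec3 apply(const Mat3& R, const Vec3& x) {
        return { dot(R[0], x), dot(R[1], x), dot(R[2], x) };
    }

    int main() {
        // Averaged reading while the board is at rest (placeholder), normalized to unit length
        Vec3 gMeasured = { 0.10, -0.98, 0.15 };
        double len = std::sqrt(dot(gMeasured, gMeasured));
        for (double& v : gMeasured) v /= len;

        Vec3 gNominal = { 0.0, -1.0, 0.0 };              // Y is up, so gravity nominally points along -Y
        Mat3 tiltCorrection = rotationBetween(gMeasured, gNominal);

        // Apply the stored correction to each subsequent sample, then read off X and Z
        Vec3 raw = { 0.30, -0.95, 0.20 };                // one accelerometer sample (placeholder)
        Vec3 corrected = apply(tiltCorrection, raw);
        double ax = corrected[0], az = corrected[2];     // acceleration in the XZ plane
        (void)ax; (void)az;
        return 0;
    }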

How can the auto-focus of camera be explained using pinhole camera model?

Shifting the auto-focus in a real-world camera doesn't change the focal length, rotation, or any other camera parameter in the pinhole camera model. However, it does shift the image plane and affect the depth of field. How is this possible?
I understand that complex mechanism of real-world camera cannot be easily explained with pinhole camera model. However, I believe that there should be some link between them as we use this simplified model in various real-world computer vision applications.
Short answer: it cannot. The pinhole camera model has no notion of 'focus'.
A more interesting question is, I think, the effect of changing the focusing distance on a pinhole approximation of the lens+camera combination, the approximation itself being estimated, for example, through a camera calibration procedure.
With "ordinary" consumer-type lenses having moderate non-linear distortion, usually one observes significant changes in:
The location of the principal point (which is anyway hard to estimate precisely, and confused with the center of the distortion)
The amount of nonlinear distortion (especially with cheaper lenses and wide FOV).
The "effective" field of view - due to the fact that a change in nonlinear distortion will "pull-in" a wider or thinner view at the edges.
The last item implies a change of the calibrated focal length, and this is sometimes "surprising" for novices, who are taught that a lens's focus and focal length do not mix. To convince yourself that the FOV change is in fact happening, visualize the bounding box of the undistorted image, which is "butterfly"-shaped in the common case of barrel distortion. The pinhole model FOV angle is twice the arctangent of the ratio between the image half-width and the calibrated approximation to the physical focal length (which is the distance between the sensor and the lens's last optical surface). Changing the distortion stretches or squeezes that half-width value.
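For reference, a one-line version of the FOV relation described above, with image width and calibrated focal length expressed in the same units (pixels here; the numbers are placeholders):

    #include <cmath>

    int main() {
        double imageWidth  = 1920.0;    // pixels (placeholder)
        double focalLength = 1400.0;    // calibrated focal length in pixels (placeholder)
        double hfov = 2.0 * std::atan((imageWidth / 2.0) / focalLength);   // horizontal FOV in radians
        (void)hfov;
        return 0;
    }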

How to get one non-manifold mesh with adaptive point distribution

Hi all,
I am trying to obtain a triangle mesh from a point cloud. The mesh is expected to be manifold, the triangles should be well shaped (close to equilateral), and the distribution of the points should be adaptive with respect to the curvature.
There are valuable information provided on this website.
robust algorithm for surface reconstruction from 3D point cloud?
Mesh generation from points with x, y and z coordinates
I tried the Poisson reconstruction algorithm, but the triangles are not well shaped.
So I need to improve the quality of the triangles. I have learned that centroidal Voronoi tessellation (CVT) can achieve that, but I don't know whether the operation will introduce non-manifold vertices and self-intersections. I hope to get some information about this from you.
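For reference, a minimal sketch of the Poisson step mentioned above, assuming PCL and a cloud whose normals still need to be estimated; the parameter values are placeholders, and this only covers the reconstruction, not the CVT or refinement step:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/features/normal_3d.h>
    #include <pcl/search/kdtree.h>
    #include <pcl/surface/poisson.h>
    #include <pcl/common/io.h>
    #include <pcl/PolygonMesh.h>

    int main() {
        pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
        // ... load or fill the point cloud ...

        // Poisson needs normals, so estimate them first
        pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
        pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
        ne.setInputCloud(cloud);
        ne.setSearchMethod(tree);
        ne.setKSearch(20);                                   // neighborhood size (placeholder)
        pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
        ne.compute(*normals);

        pcl::PointCloud<pcl::PointNormal>::Ptr cloudWithNormals(new pcl::PointCloud<pcl::PointNormal>);
        pcl::concatenateFields(*cloud, *normals, *cloudWithNormals);

        // Poisson surface reconstruction
        pcl::Poisson<pcl::PointNormal> poisson;
        poisson.setDepth(8);                                 // octree depth, controls resolution (placeholder)
        poisson.setInputCloud(cloudWithNormals);
        pcl::PolygonMesh mesh;
        poisson.reconstruct(mesh);
        return 0;
    }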
The mesh from the following post looks pretty good.
How to fill polygon with points regularly?
The Delaunay refinement algorithm is used there. Can the Delaunay refinement algorithm be applied to a triangle mesh directly? Or do I first need to compute a Delaunay triangulation of the mesh's point cloud, and then use that triangulation to perform the Delaunay refinement?
Thanks.
Regards
Jogging
I created the image in the mentioned post: You can insert all points into a Delaunay triangulation and then create a Zone object (area) consisting of these triangles. Then you call refine(pZone,...) to get a quality mesh. Other options are to create the Zone from constraint edges or as the result of a boolean operation. However, this library is made for 2D and 2.5D. The 3D version will not be released before 2014.
Do you know the BallPivoting approach?