2D from 3D points in HALCON

Given a 3D point and the internal and external camera parameters of a calibrated stereo camera setup, is there a HALCON operator that gives me the 2D pixel coordinates in each camera?
Regards,
MSK

You can project the 3D coordinates into image coordinates, if the camera parameters are known, using the following:
* Project 3D points into image.
project_3d_point(X, Y, Z, CameraParam, Row, Column)
Please refer to the operator's reference documentation for details.
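For intuition, `project_3d_point` applies the standard pinhole model. The sketch below is plain Python, not HALCON: it assumes the point is already in the camera coordinate system (for a stereo setup you would first transform it by the relative pose), and it omits lens distortion, which the real operator applies from the calibrated parameters. The intrinsics `fx, fy, cx, cy` are the usual focal lengths and principal point in pixels.

```python
def project_point(X, Y, Z, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates.
    Lens distortion is omitted for clarity; HALCON's project_3d_point
    also applies the calibrated distortion model."""
    u = fx * X / Z + cx   # column
    v = fy * Y / Z + cy   # row
    return v, u           # (Row, Column), matching HALCON's output order

# A point on the optical axis lands at the principal point:
# project_point(0, 0, 1, 800, 800, 320, 240) -> (240.0, 320.0)
```

Running this once per camera, with the point transformed into each camera's frame, gives the two pixel positions the question asks about.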

Related

What algorithm do I need to convert a 2D image file into a representative 2D triangle mesh file?

I am looking for some advice to point me in the direction of the algorithm I would need to convert an image file into a mesh. Note that I am not asking to convert from 2D into 3D - the output mesh is not required to have any depth.
By image file I mean a black-and-white image of a relatively simple shape, such as a stick figure, stored in an easy-to-read uncompressed bitmap file. The shape has high contrast between the black and white areas, which helps an algorithm detect its edges.
By static mesh I mean the data needed to construct a typical indexed triangle mesh (a list of vertices and a list of indices) in a modern 3D game engine such as Unreal. The mesh needs to represent the shape of the image in 2D but does not need any 3D depth of its own, i.e. zero thickness. It will ultimately be used in a 3D environment like a cardboard cut-out, for example standing on a ground plane.
The conversion does not need to run in real time - it can be batch processed, with the resulting mesh data then read in by the game engine.
Thanks in advance.
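One common pipeline for this (a sketch of one approach, not a complete answer): threshold the bitmap, trace the boundary contour of the shape, simplify it to a polygon, and then triangulate that polygon into an indexed mesh. The triangulation step can be done with ear clipping; below is a plain-Python sketch for a simple (non-self-intersecting) counter-clockwise polygon. All names are illustrative, and production code would typically use a library triangulator instead.

```python
def triangulate(poly):
    """Ear-clipping triangulation of a simple CCW polygon given as a list
    of (x, y) vertices. Returns triangles as index triples, i.e. the
    index list of an indexed triangle mesh (poly itself is the vertex list)."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive for a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def point_in_tri(p, a, b, c):
        d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
        neg = d1 < 0 or d2 < 0 or d3 < 0
        pos = d1 > 0 or d2 > 0 or d3 > 0
        return not (neg and pos)  # all signs agree -> inside (or on edge)

    idx = list(range(len(poly)))
    tris = []
    while len(idx) > 3:
        for k in range(len(idx)):
            i, j, l = idx[k - 1], idx[k], idx[(k + 1) % len(idx)]
            a, b, c = poly[i], poly[j], poly[l]
            if cross(a, b, c) <= 0:
                continue  # reflex or degenerate vertex: not an ear
            if any(point_in_tri(poly[m], a, b, c)
                   for m in idx if m not in (i, j, l)):
                continue  # another vertex lies inside: not an ear
            tris.append((i, j, l))
            idx.pop(k)    # clip the ear and restart the scan
            break
        else:
            break         # no ear found (degenerate input); bail out
    if len(idx) == 3:
        tris.append(tuple(idx))
    return tris
```

A simple polygon with n vertices yields n - 2 triangles; for a unit square you get two. The resulting vertex and index lists map directly onto the mesh formats game engines such as Unreal consume.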

3D Human Pose Estimation

I am working on human pose estimation.
I am able to generate the 2D coordinates of a person's different joints in an image.
But I need 3D coordinates for the purpose of my project.
Is there any library or code available to generate the 3D coordinates of the joints?
Please help.
For 3D coordinates in pose estimation there is a limitation: you cannot recover a 3D pose from a single (monocular) camera alone. You have two ways to estimate it:
use an RGB-D (red, green, blue and depth) camera such as the Kinect,
or use stereo vision with at least two cameras.
For RGB-D, OpenCV's contrib modules include support for this.
If you want to use stereo vision, the steps are:
1. Get the camera calibration parameters; for calibration you can follow this.
2. Undistort your points using the calibration parameters.
3. Compute the projection matrix of both cameras.
4. Finally, use OpenCV's triangulation to get the 3D coordinates.
For more information about each step, search for stereo vision, camera calibration, triangulation, etc.
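For the special case of a rectified stereo pair, step 4 reduces to simple disparity-based triangulation by similar triangles. The sketch below is plain Python with illustrative names (`f` is the focal length in pixels, `B` the baseline in meters); for the general unrectified case you would use OpenCV's `cv2.triangulatePoints` with the two projection matrices from step 3 instead.

```python
def triangulate_rectified(uL, uR, v, f, cx, cy, B):
    """Recover a 3D point (camera frame of the left camera) from a
    rectified stereo pair.
    uL, uR: column of the point in the left/right image (pixels)
    v:      row of the point (same in both images after rectification)
    f:      focal length in pixels, (cx, cy): principal point
    B:      baseline between the cameras in meters"""
    d = uL - uR        # disparity in pixels
    Z = f * B / d      # depth from similar triangles
    X = (uL - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z
```

Applying this to each 2D joint detected in both views yields the 3D joint coordinates the question asks for, up to calibration and matching accuracy.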

Camera's extrinsic matrix

I am trying to use MATLAB's camera calibrator to calibrate an infrared camera. I was able to get the intrinsic matrix by just feeding around 100 images to the calibrator. But I'm struggling with how to get the extrinsic matrix [R|t].
Since the extrinsic matrix maps the world frame to the camera frame, in theory, when the camera (object) moves, there will be many extrinsic matrices.
In the picture below, if the intrinsic matrix is determined using 50 images, then there are 50 extrinsic matrices, one corresponding to each image. Am I correct?
You are right. Usually, a by-product of an intrinsic calibration is the extrinsic matrix for each pattern observed; this is mostly used to draw the patterns with respect to the camera as in the picture you posted.
What you usually do afterwards is define some external reference frame that makes sense for your application, also known as the 'world' reference frame, and compute the pose of the camera with respect to it. That's the extrinsic matrix you always hear about.
For this, you:
Define the reference frame and take some points with known 3D coordinates on it; this can be a grid drawn on the floor, for example.
Take a picture of the 3D points with the calibrated camera and get a list of the corresponding 2D (image) coordinates of the points.
Use a pose estimation function that takes the camera intrinsic parameters, the 3D points, and the corresponding 2D image points. I am more familiar with OpenCV, but the Matlab function that seems to do the job is: https://www.mathworks.com/help/vision/ref/estimateworldcamerapose.html
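The relationship these steps establish can be written as x ~ K [R|t] X: the extrinsics move a world point into the camera frame, the intrinsics project it to pixels. The sketch below is plain pure-Python matrix math with illustrative values; in practice you would obtain R and t from `estimateWorldCameraPose` (MATLAB) or OpenCV's `solvePnP` rather than hand-rolling anything.

```python
def project_world_point(Xw, K, R, t):
    """Map a world-frame 3D point to pixel coordinates via x ~ K [R|t] X."""
    # camera-frame coordinates: Xc = R @ Xw + t
    Xc = [sum(R[i][j] * Xw[j] for j in range(3)) + t[i] for i in range(3)]
    # perspective division and intrinsics (zero skew assumed)
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return u, v

# Illustrative setup: identity rotation, camera 2 m from the world
# origin looking along +Z.
K = [[500, 0, 320], [0, 500, 240], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0, 0, 2]
# The world origin then projects to the principal point:
# project_world_point([0, 0, 0], K, R, t) -> (320.0, 240.0)
```

Each calibration image gives one such (R, t) pair, which is exactly why 50 pattern views yield 50 extrinsic matrices.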

Pinhole camera model - Finding rotation from optical axis

In the pinhole camera model, is it possible to determine the rotation required from the optical/principal axis (the axis which pierces the image plane) to intersect a given pixel coordinate (u,v)?
I have an image in which I am detecting a marker in space, and I have the intrinsic and extrinsic camera parameters available. I am using the extrinsic parameters to cast a 2D ray into a separately constructed map (which is overhead and 2D); however, I would like the ray angle to change depending on whether the detected marker is to the left or right within the image.
My first thought was to use arctan with the focal length and the u coordinate (x-axis position on the image plane, measured from the image center) to determine an angle; however, I don't think the units of measurement cooperate: one should be in real-world meters and the other is in arbitrary pixels.
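As a note on the units concern: if the focal length is taken from the intrinsic matrix, it is already expressed in pixels, so (u - cx) / fx is dimensionless and the arctan approach works directly. A minimal plain-Python sketch under that assumption (`fx` and `cx` are the usual intrinsics; names are illustrative):

```python
import math

def ray_angle_deg(u, fx, cx):
    """Horizontal angle (degrees) between the optical axis and the ray
    through pixel column u. fx is the focal length in pixels and cx the
    principal point column, both taken from the intrinsic matrix."""
    return math.degrees(math.atan2(u - cx, fx))

# A pixel at the principal point lies on the optical axis:
# ray_angle_deg(320, 800, 320) -> 0.0
# A pixel fx pixels to the right of it is 45 degrees off-axis:
# ray_angle_deg(1120, 800, 320) -> 45.0
```

The sign of the result then tells you whether the marker is left or right of center, which is what the overhead ray needs.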

Camera 2D coords from 3D object coords

I am working on a geometry editor tool and am trying to work out how to get a manipulator's vector coordinates on the screen/camera plane so I can use them for mouse dragging. I have access to the vector's world coordinate matrix (or any object's matrix), the projection matrix, and the camera direction, position, etc.
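A common way to do this is to run the manipulator's world-space endpoints through the combined view-projection matrix, perform the perspective divide, and map the resulting normalized device coordinates to pixels; the screen-space difference between the two projected endpoints is then the 2D drag direction. A plain-Python sketch, assuming a row-major 4x4 matrix and column-vector convention (names illustrative):

```python
def world_to_screen(p_world, view_proj, width, height):
    """Project a world-space point to screen pixels through a 4x4
    view-projection matrix (row-major, column-vector convention)."""
    x, y, z = p_world
    # homogeneous multiply: clip = view_proj @ [x, y, z, 1]
    clip = [view_proj[i][0] * x + view_proj[i][1] * y +
            view_proj[i][2] * z + view_proj[i][3] for i in range(4)]
    ndc_x, ndc_y = clip[0] / clip[3], clip[1] / clip[3]  # perspective divide
    # NDC [-1, 1] -> pixels; y flipped because screen y grows downward
    sx = (ndc_x + 1) * 0.5 * width
    sy = (1 - ndc_y) * 0.5 * height
    return sx, sy
```

For a manipulator axis, project both the gizmo origin and origin + axis, subtract, and normalize; dotting the mouse delta with that 2D vector gives the drag amount along the axis.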