3D Human Pose Estimation - tensorflow

I am working on human pose estimation.
I am able to generate 2D coordinates of the different joints of a person in an image.
But I need 3D coordinates for the purpose of my project.
Is there any library or code available to generate 3D coordinates of joints?
Please help.

For 3D coordinates in pose estimation there is a limitation: you cannot get a 3D pose with only one camera (monocular). You have two ways to estimate it:
use an RGB-D (red, green, blue and depth) camera like the Kinect,
or use stereo vision with at least two cameras.
For RGB-D, OpenCV contrib has a library for that.
But if you want to use stereo vision, you have some steps (a minimal sketch follows below):
1. Get the camera calibration parameters; for calibration you can follow this.
2. Then you should undistort your points using the calibration parameters.
3. Then you should get the projection matrix of both of your cameras.
4. At last, you can use OpenCV triangulation to get the 3D coordinates.
For more info about each step, you can search for stereo vision, camera calibration, triangulation, etc.
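As a minimal sketch of steps 2 to 4 in OpenCV (Python), assuming step 1 has already produced intrinsics K1/K2, distortion coefficients dist1/dist2 and the relative pose R/T from cv2.stereoCalibrate (the numbers below are placeholders, not real calibration data):

import cv2
import numpy as np

# Placeholder calibration results -- in practice these come from
# cv2.calibrateCamera / cv2.stereoCalibrate on your own images (step 1).
K1 = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1.0]])
K2 = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1.0]])
dist1 = np.zeros(5)
dist2 = np.zeros(5)
R = np.eye(3)                         # rotation of camera 2 relative to camera 1
T = np.array([[-0.1], [0.0], [0.0]])  # translation (baseline) in metres

# 2D joint detections (pixels) from each view, shape (N, 1, 2)
pts1 = np.array([[[330.0, 240.0]]], dtype=np.float64)
pts2 = np.array([[[310.0, 240.0]]], dtype=np.float64)

# Step 2: undistort and normalise the detections
pts1_n = cv2.undistortPoints(pts1, K1, dist1).reshape(-1, 2).T
pts2_n = cv2.undistortPoints(pts2, K2, dist2).reshape(-1, 2).T

# Step 3: projection matrices in normalised coordinates
# (camera 1 is the reference frame, camera 2 is offset by [R|T])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([R, T])

# Step 4: triangulate; the result is homogeneous, so divide by w
X_h = cv2.triangulatePoints(P1, P2, pts1_n, pts2_n)
X = (X_h[:3] / X_h[3]).T              # (N, 3) joint positions in camera-1 coordinates
print(X)

The resulting joints are expressed in camera 1's coordinate frame, and the accuracy depends entirely on the quality of the calibration.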

Related

Pose tracking with MediaPipe using two cameras, receiving the same coordinates for corresponding points in both images

I'm fairly new to computer vision and am currently trying to make the following happen, but have had no success so far.
My situation: I want to track different landmarks of a person with MediaPipe. Tracking with a single camera works fine, and so does tracking with two cameras at the same time. What I want is to receive the same coordinates from each camera for a point that has been detected. For example: camera 1 found the landmark of the left shoulder at the x and y coordinates (1,2). For camera 2 the same landmark obviously has different coordinates, let's say (2,3). Ideally there is a way to map or transform the coordinates of camera 2 to camera 1.
The following picture shows the camera setup: Camera setup (I can't post images yet).
So far I've tried to use stereo camera calibration as described here: https://temugeb.github.io/opencv/python/2021/02/02/stereo-camera-calibration-and-triangulation.html. But this doesn't seem to do the trick. I receive a rotation and a translation matrix as output from the calibration, but when I concatenate them into a transformation matrix and multiply it with the coordinates of camera 2, the results don't match the coordinates of camera 1.
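For reference, a small NumPy sketch of that concatenation step, with placeholder values for R and T; in OpenCV's stereoCalibrate convention the pair relates 3D points in the two camera coordinate frames, not the 2D pixel coordinates of the landmarks:

import numpy as np

# Placeholder R, T as returned by cv2.stereoCalibrate. OpenCV's convention is
# X_cam2 = R @ X_cam1 + T, i.e. the pair maps 3D points from camera 1's
# coordinate frame into camera 2's.
R = np.eye(3)
T = np.array([0.2, 0.0, 0.0])

T_1to2 = np.eye(4)                 # 4x4 rigid transform: camera 1 -> camera 2
T_1to2[:3, :3] = R
T_1to2[:3, 3] = T
T_2to1 = np.linalg.inv(T_1to2)     # the direction you want: camera 2 -> camera 1

# Note: this acts on 3D points in camera coordinates (e.g. metres), not on the
# 2D pixel coordinates that MediaPipe returns; relating pixels across views
# needs depth for the point, e.g. from triangulation.
X_cam2 = np.array([0.1, 0.3, 1.5, 1.0])   # a homogeneous 3D point in camera 2's frame
X_cam1 = T_2to1 @ X_cam2
print(X_cam1[:3])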
I've also tried to implement planar homography, but since the observed scene isn't limited to a plane, it isn't working well.
The idea behind this is to increase the probability that the landmarks will be detected and to use both camera streams to build a full set of coordinates for all desired landmarks.
Is it possible to do what I want to do? If so, what's the name of this process?
I'm really grateful for any help. Thanks in advance.

How to highlight the desk in RGB with the assistance of a 3D point cloud?

What I describe next is a preprocessing step for object detection.
I have a desk with some snacks and pasters on it, and a depth camera (Intel RealSense R300). I want to mask out the pasters on the desk (set their pixel values to 0) and also ignore the interfering objects around the desk (not on the desk, but on the ground). To make full use of the depth camera, my idea is to take the depth image and use the RANSAC algorithm to detect the plane of the desk, which avoids the pasters at the same time.
However, the problem now is that I have successfully obtained the plane of the desk with the RANSAC algorithm in the 3D point cloud, but I don't know how to map it onto the 2D RGB image to set the pixel values of the plane to 0.
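A possible sketch of that last step, assuming the plane inliers are already expressed in the RGB camera's coordinate frame and the RGB intrinsics are known (all values below are placeholders, not the R300's actual parameters):

import cv2
import numpy as np

# Placeholder colour frame and plane inliers; in practice the frame comes from
# the RealSense colour stream and the points from the RANSAC plane fit.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
plane_points = np.random.uniform([-0.5, -0.5, 0.8],
                                 [0.5, 0.5, 1.2], (1000, 3)).astype(np.float32)

# Placeholder RGB intrinsics and distortion
K = np.array([[615.0, 0, 320.0], [0, 615.0, 240.0], [0, 0, 1.0]])
dist = np.zeros(5)

# If the cloud is already in the RGB camera frame, rvec/tvec are zero;
# otherwise pass the depth-to-colour extrinsics here.
pix, _ = cv2.projectPoints(plane_points, np.zeros(3), np.zeros(3), K, dist)
pix = pix.reshape(-1, 2).astype(int)

# Zero out the pixels the desk plane projects onto (clipping to image bounds)
h, w = rgb.shape[:2]
valid = (pix[:, 0] >= 0) & (pix[:, 0] < w) & (pix[:, 1] >= 0) & (pix[:, 1] < h)
rgb[pix[valid, 1], pix[valid, 0]] = 0

With a sparse cloud the projected pixels will leave holes, so in practice you would dilate the resulting mask or fill the plane's convex hull.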

Camera's extrinsic matrix

I am trying to use MATLAB's camera calibrator to calibrate an infrared camera. I was able to get the intrinsic matrix by just feeding around 100 images to the calibrator. But I'm struggling with how to get the extrinsic matrix [R|t].
Because the extrinsic matrix maps the world frame to the camera frame, in theory, when the camera (object) is moving, there will be many extrinsic matrices.
In the picture below, if the intrinsic matrix is determined using 50 images, then there are 50 extrinsic matrices, one corresponding to each image. Am I correct?
You are right. Usually, a by-product of an intrinsic calibration is the extrinsic matrix for each pattern observed; this is mostly used to draw the patterns with respect to the camera as in the picture you posted.
What you usually do afterwards is define some external reference frame that makes sense for your application, also known as the 'world' reference frame, and compute the pose of the camera with respect to it. That's the extrinsic matrix you always hear about.
For this, you:
Define the reference frame and take some points with known 3D coordinates on it; this can be a grid drawn on the floor, for example.
Take a picture of the 3D points with the calibrated camera and get a list of the corresponding 2D (image) coordinates of the points.
Use a pose estimation function that takes the camera intrinsic parameters, the 3D points and the corresponding 2D image points (see the sketch below). I am more familiar with OpenCV, but the MATLAB function that seems to do the job is: https://www.mathworks.com/help/vision/ref/estimateworldcamerapose.html
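For comparison, a minimal OpenCV (Python) sketch of that last step; the 3D grid points, the 2D pixel coordinates and the intrinsics below are made-up illustration values:

import cv2
import numpy as np

# Known 3D points in your chosen world frame (e.g. corners of a grid drawn on
# the floor, in metres) -- placeholder values.
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float32)
# Their 2D pixel coordinates in an image taken with the calibrated camera.
image_points = np.array([[320, 400], [520, 395], [515, 300], [325, 305]], dtype=np.float32)

# Intrinsics from the earlier calibration (placeholder values)
K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
dist = np.zeros(5)

# solvePnP gives the pose of the world frame in camera coordinates
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)
extrinsic = np.hstack([R, tvec])   # the 3x4 [R|t] extrinsic matrix
print(extrinsic)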

2D from 3D points in HALCON

Given a 3D point coordinate and the internal and external camera parameters in a calibrated stereo camera setup, is there a HALCON method that gives me the 2D pixel coordinates in each camera image?
Regards,
MSK
You can project the 3D coordinates into the 2D image, if the camera parameters are known, using the following:
* Project 3D points into image.
project_3d_point(X, Y, Z, CameraParam, Row, Column)
Please refer to the description here.

iOS - turning a 2D image into 3D

I was checking out this cool app called Morfo. According to their product description -
Use Morfo to quickly turn a photo of your friend's face into a
talking, dancing, crazy 3D character! Once captured, you can make your
friend say anything you want in a silly voice, rock out, wear makeup,
sport a pair of huge green cat eyes, suddenly gain 300lbs, and more.
So if you take a normal 2D image of Steve Jobs and feed it to this app, it converts it into a 3D model of that image and the user can interact with it.
My questions are as follows:
How are they doing this?
How is this possible in iPad?
Isn't it computationally intensive to render and convert 2D image into 3D?
Any pointers, links to websites, or Objective-C libraries which do this are very much appreciated.
UPDATE: this demo of the product shows how Morfo uses a template mechanism to do the conversion, i.e. after a 2D image is fed in, one needs to set the boundaries of the face, where the eyes are located, and the size and length of the lips. Then it goes off to convert it into a 3D model. How is this part done? What frameworks or libraries might they be using?
This is a broad question, but I can point you in the right direction of how 3D rendering works. Trust me, this is a huge subject with decades of work behind it and too much to put here. I'm not sure how up to speed you are on 3D rendering techniques, so I will give you a basic idea of texturing and point you to a good set of tutorials.
How are they doing this?
The idea is that in 3D rendering, 3D models can be textured with a 2D image known as a texture map. You take a 2D image and wrap it around a 3D model, be that a simple primitive like a sphere or a cube, or something more advanced such as the classic teapot or a model of a human head, etc. A texture can be taken from anywhere: I have used the camera feed in the past to texture meshes with the video from the camera stream, and I have used photos from the camera, which is how they are doing it. So this is how the face is rendered onto the 3D model.
Is this efficient?
On iOS and most mobile devices, 3D rendering uses hardware acceleration via OpenGL ES. In regards to your question, this is really fast, depending on how you implement your render code.
The mapping it uses (the scale/rotate template in the video), as mentioned by anticyclope, allows you to make the texture fit the model and also place the eyes, which are part of their render code.
So if you want to pick this up, I recommend reading Jeff LaMarche's tutorial series "From the Ground Up" as a primer:
http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html
Beyond that, I have read about four books on OpenGL ES, for general design and for platform specifics. I recommend this book:
http://www.amazon.co.uk/iPhone-Programming-Developing-Graphical-Applications/dp/0596804822/ref=sr_1_1?ie=UTF8&qid=1331114559&sr=8-1
In my opinion, here is how they are doing it. Just my thoughts; I haven't seen the application in real life.
They have a 3D model of a human head. When you click on certain points in the 2D image, they adjust the corresponding points in the 3D model so that it represents the specific face's features, such as the distance between the eyes, the width of the lips, and so on. Next, the texture from the 2D image is applied to the 3D model using those control points, so we have a textured 3D model of a human head. Given that our perception is able to reconstruct a 3D shape from 2D images (say, when we look at a 2D photo we still imagine a 3D person), there's no need to reconstruct the 3D shape accurately; the texture will do the work.
There is a step in the rendering of 3D images, called UV mapping, that takes the 3D model and unwraps it along a set of edges; this creates a 2D layout that is used to apply textures to the model.
Now, if you notice, in Morfo you define the edges of the head, eyes, mouth and nose. With this information Morfo knows how to place its texture onto the model it has defined.
The process of loading a texture onto a model is not very complex, and it can be done on any device that supports a technology such as OpenGL.
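To make the UV idea concrete, here is a toy NumPy illustration (not Morfo's code; all values are made up): each mesh vertex stores a (u, v) coordinate into the 2D photo, and the renderer samples the photo at those coordinates to colour the 3D surface.

import numpy as np

# Placeholder "photo" acting as the texture map
texture = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)

# Hypothetical UV coordinates (in [0, 1]) for the three vertices of one triangle
uv = np.array([[0.10, 0.20],
               [0.55, 0.25],
               [0.30, 0.80]])

h, w = texture.shape[:2]
px = (uv[:, 0] * (w - 1)).astype(int)          # u -> column
py = ((1.0 - uv[:, 1]) * (h - 1)).astype(int)  # v -> row (texture origin at bottom-left)
vertex_colours = texture[py, px]               # nearest-neighbour texture lookup
print(vertex_colours)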
Isn't it computationally intensive to render and convert 2D image into 3D?
Apple is sinking billions of dollars into developing custom chipsets, and recent models have impressive performance, considering the battery life and low operating temperature (no fans).