Kinect depthmap distance to sensor position or sensor plane - kinect

The Kinect creates a depthmap by measuring, for every pixel, the 3D Euclidean distance between a point and the sensor position. This depthmap can then be reprojected into 3D camera space, for example as described in http://nicolas.burrus.name/index.php/Research/KinectCalibration
In particular, the z coordinate of the projected point is set to the measured depth of that pixel, which seems wrong to me, because it implies that the depth is measured as the orthogonal distance to the sensor plane, not as the 3D Euclidean distance to the sensor position.
So which one is correct? Distance to sensor plane or distance to sensor position?

The measured depth is calculated as the orthogonal distance to the sensor plane, not as the Euclidean distance to the sensor position, as described in [1].
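To make the distinction concrete, here is a minimal sketch (Python, with approximate example intrinsics rather than an actual calibration) of the reprojection formulas from Burrus's page. It shows that the stored depth is used directly as the z coordinate, i.e. the distance to the sensor plane, which is smaller than the Euclidean distance to the sensor position for off-center pixels:

import numpy as np

# Approximate example intrinsics for the Kinect depth camera
# (assumed values, not a real calibration).
fx, fy = 594.21, 591.04   # focal lengths in pixels
cx, cy = 339.5, 242.7     # principal point in pixels

def reproject(u, v, depth_m):
    # The depth is interpreted as the z coordinate, i.e. the orthogonal
    # distance to the sensor plane.
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

p = reproject(100, 50, 2.0)      # a pixel far from the image center
print(p[2])                      # 2.0   -> distance to the sensor plane
print(np.linalg.norm(p))         # ~2.25 -> Euclidean distance to the sensor position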
[1]: M. Andersen, T. Jensen, P. Lisouski, A. Mortensen, M. Hansen, T. Gregersen, and P. Ahrendt, "Kinect Depth Sensor Evaluation for Computer Vision Applications," Technical Report, Electronics and Computer Engineering, vol. 1, no. 6, 2015.

Related

Cosine factor in directional hemispherical reflectance

Directional hemispherical reflectance is the ratio of exitance to incoming irradiance. It is used to check the degree of energy conservation:
R(l) = exitance / irradiance
I can understand the relation between the BRDF and R(l). The BRDF targets a specific viewing angle, so the numerator is a radiance; R(l) can be obtained by integrating the BRDF over all viewing directions:
R(l) = integral(brdf(l, v) * cos(theta0) * dw0)
where theta0 is the angle between n and v.
My question is: where does the cosine factor come from?
The angle theta0 should be the angle between the incident light and the surface normal. The closer the incident light direction is to the surface normal, the more energy the surface receives (the maximum being the case cos(theta0) = 1).
The formula itself comes from the derivation of the BRDF.
The directional hemispherical reflectance is the albedo of a surface which is being illuminated by a single, directional light source. This means that there is only a single direction for which there is illumination reaching the surface.
Since the lighting term is a delta function, the dimensionality of the integral is reduced and we only have to integrate over the viewing directions. The cosine term projects the differential solid angle onto the surface.
So it's basically a stripped down version of the rendering equation using the simplified light source.
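As a sanity check, here is a small numerical sketch (Python) of the formula above for a Lambertian BRDF f = albedo / pi: integrating f * cos(theta0) over the hemisphere of viewing directions gives back the albedo, which is exactly the energy-conservation property the cosine factor is responsible for:

import numpy as np

albedo = 0.6
brdf = albedo / np.pi                     # constant Lambertian BRDF

# Integrate brdf * cos(theta0) over the hemisphere; dw0 = sin(theta) dtheta dphi.
theta = np.linspace(0.0, np.pi / 2.0, 512)
phi = np.linspace(0.0, 2.0 * np.pi, 512)
t, _ = np.meshgrid(theta, phi, indexing="ij")
integrand = brdf * np.cos(t) * np.sin(t)
R = np.trapz(np.trapz(integrand, phi, axis=1), theta)
print(R)                                  # ~0.6, i.e. equal to the albedo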

lat lon coordinates (WGS84) conversion to local x, y plane

Currently I'm trying the following: I have points from Google Earth (WGS84) which I want to transform to a local x, y coordinate system: a tangential plane with y positive from south to north and x positive from west to east.
There is no need for the plane to be part of a global coordinate system beyond the relation (x=0, y=0) = (lat, lon). The scale at which I'm working is on the order of 100 kilometers (at most, say, 200 km). Very small errors (due to, for example, the curvature of the earth) are acceptable.
I have relatively little understanding of this topic as of yet. Can anybody help me out? Where would I need to look, for example?
Thanks!
I haven't found a purely mathematical answer, but I have found that the basemap package (in mpl_toolkits) should help in this respect (converting from WGS84 to a transverse Mercator projection).
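If dependencies are unwanted, a simple equirectangular (local tangent plane) approximation is usually sufficient at the 100-200 km scale mentioned in the question. A minimal sketch in Python (the 0.1-degree test point is just an example):

import math

EARTH_RADIUS_M = 6371000.0   # mean earth radius in meters

def latlon_to_local_xy(lat, lon, lat0, lon0):
    # Convert WGS84 degrees to meters in a plane centered at (lat0, lon0):
    # x positive from west to east, y positive from south to north.
    lat, lon, lat0, lon0 = (math.radians(v) for v in (lat, lon, lat0, lon0))
    x = EARTH_RADIUS_M * (lon - lon0) * math.cos(lat0)
    y = EARTH_RADIUS_M * (lat - lat0)
    return x, y

print(latlon_to_local_xy(52.1, 5.2, 52.0, 5.0))   # ~ (13.7 km east, 11.1 km north)

For better accuracy over larger extents, a transverse Mercator projection (as provided by basemap or pyproj) is the way to go.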

Getting kinect 3D position value

I'm working with a Kinect 1414 and the kinect4WinSDK library (Processing).
First, I have my depth value as a raw value between 6400 and 30000. I convert it with this:
if (p.raw == 0x0000) p.z = 0.0;        // no depth reading
else if (p.raw >= 0x8000) p.z = 4.0;   // out of range: clamp to 4 m
else p.z = 0.8 + (float(p.raw - 6576) * 0.00012115165336374002280501710376283);  // raw units to meters
However, my x and y values are in the range [0, 1] for x and [-1, 1] for y. I would like these values in meters. Can you help me?
Thanks,
Yoann
The thing is, you need to work out how long one real-world meter is in computer (Kinect) space. The Kinect itself does not measure with respect to any standard unit, so you need to map a real-world meter to a Kinect distance. For this, take a 4-meter distance, mark both ends with the Kinect skeleton hand, and measure the distance in computer space; call it x. The scale factor is then c = x / 4, so a real-world distance d corresponds to a Kinect-space distance of c * d.
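An alternative, if the depth is already in meters (as in the conversion above), is to scale the normalized x and y by the depth and the Kinect v1 field of view (roughly 57 degrees horizontal, 43 degrees vertical). A rough sketch in Python; the assumption that x is normalized to [0, 1] about the image center and y to [-1, 1] is taken from the question, not from the library documentation:

import math

H_FOV = math.radians(57.0)   # approximate horizontal field of view
V_FOV = math.radians(43.0)   # approximate vertical field of view

def to_meters(x_norm, y_norm, z_m):
    # Offset from the optical axis, scaled by depth and half the field of view.
    x_m = (x_norm - 0.5) * 2.0 * z_m * math.tan(H_FOV / 2.0)
    y_m = y_norm * z_m * math.tan(V_FOV / 2.0)
    return x_m, y_m, z_m

print(to_meters(0.75, 0.5, 2.0))   # ~ (0.54 m right, 0.39 m up, 2.0 m away)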

kinect object measuring

I am currently trying to figure out a way to calculate the size of a given object with the Kinect,
since I have the following data:
angular field of view of the lens
distance
and width in pixels at an 800x600 resolution
I believe this should be possible to calculate. Does anyone have the math skills to give me a little help?
With some trigonometry, it should be possible to approximate.
If you draw a right triangle ABC, with the camera at one vertex (A) and the object at the far end (edge BC), where the right angle is at (C), then the height of the object is the length of leg BC. The measured distance to the pixel might be the length of leg AC or of the hypotenuse AB; the Kinect sensor specifications determine which. If you get the distance to the center of a pixel, it will be AC; if you get distances to pixel corners, it will be AB.
With A representing the angle at the camera that the pixel takes up, d the length of the hypotenuse and y the length of the far leg (edge BC):
sin(A) = y / d
y = d sin(A)
y is the length of the pixel projected into the object plane. You calculate it by multiplying the sine of the angle by the distance to the object.
Here I confess I do not know the API of the Kinect and what level of detail it provides. You say you have the angle of the field of view. You might assume that each pixel of your 800x600 grid takes up an equal angle of your camera's field of view. If you do, then you can break that field of view into equal pieces to measure the linear size of your object per pixel.
You also mentioned that you have the distance to the object. I was assuming that you have a distance map for each pixel of the 800x600 grid. If this is incorrect, some calculations can be done to approximate a distance grid for the pixels involving the object of interest if you make some assumptions about the object being measured.
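Putting the equal-angle-per-pixel assumption into numbers, a minimal sketch in Python (the 57-degree horizontal field of view is an example value, not taken from the question):

import math

H_FOV = math.radians(57.0)   # example horizontal field of view
IMAGE_WIDTH = 800            # horizontal resolution from the question

def object_width_m(pixel_width, distance_m):
    angle = (pixel_width / IMAGE_WIDTH) * H_FOV       # angle subtended by the object
    return 2.0 * distance_m * math.tan(angle / 2.0)   # chord at that distance

print(object_width_m(100, 2.0))   # an object 100 px wide at 2 m is ~0.25 m wide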

How to calculate an area composed of multiple coordinates?

As in the topic: the coordinate values (latitude and longitude) are known, and these coordinates compose a polygonal area. My question is: how do I calculate the area of that polygon on the geographic surface?
Thanks for your help.
First you would need to know whether the curvature of the surface is significant. If the area is relatively small, then you can get a good approximation by projecting the coordinates onto a plane:
Determine units of measure per degree of latitude (eg. meters per degree)
Determine units of measure per degree of longitude at a given latitude (the conversion factor varies as you go north or south)
Convert latitude and longitude pairs to (x,y) pairs in the plane
Use an algorithm to compute the area of a polygon (see StackOverflow 451425 or Paul Bourke), as in the sketch after this answer
If you are calculating a large area, then spherical techniques must be used.
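A minimal sketch of the planar approach above in Python, combining an equirectangular projection with the shoelace formula (the test quadrilateral is just an example):

import math

EARTH_RADIUS_M = 6371000.0

def polygon_area_m2(latlon):
    # latlon: list of (lat, lon) vertices in degrees, in order around the polygon.
    lat0 = math.radians(sum(lat for lat, _ in latlon) / len(latlon))
    pts = [(EARTH_RADIUS_M * math.radians(lon) * math.cos(lat0),
            EARTH_RADIUS_M * math.radians(lat)) for lat, lon in latlon]
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):   # shoelace formula
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# A 0.1 x 0.1 degree quadrilateral near the equator: roughly 1.2e8 m^2 (~123 km^2).
print(polygon_area_m2([(0.0, 0.0), (0.0, 0.1), (0.1, 0.1), (0.1, 0.0)]))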
If I understand your question correctly, triangulation should help you: basically, you break the polygon into triangles that don't overlap and sum their areas.