Kinect2 raw depth to distance in meters

How can I convert the Kinect2 raw depth data to distance in meters? The raw depth data was obtained with the Windows Kinect2 SDK, and it is all integer data.
Note: this is Kinect2, not Kinect1.
I already have the equation for Kinect1, but it doesn't match. Can anyone help?
The equation for Kinect1:
if (depthValue < 2047)
{
depthM = 1.0 / (depthValue*-0.0030711016 + 3.3309495161);
}

The Kinect API Overview (2) indicates:
The DepthFrame Class represents a frame where each pixel represents the distance of the closest object seen by that pixel. The data for this frame is stored as 16-bit unsigned integers, where each value represents the distance in millimeters. The maximum depth distance is 8 meters, although reliability starts to degrade at around 4.5 meters. Developers can use the depth frame to build custom tracking algorithms in cases where the BodyFrame Class isn’t enough.
Vangos Pterneas also comments in his blog that the values are in millimeters; he refers to this code. So you could retrieve the depth data from the DepthFrame in C# (extracted partially from this code):
public ushort[] DepthData; // field holding the raw depth values, in millimeters

public override void Update(DepthFrame frame)
{
    ushort minDepth = frame.DepthMinReliableDistance;
    ushort maxDepth = frame.DepthMaxReliableDistance;
    //..
    Width = frame.FrameDescription.Width;
    Height = frame.FrameDescription.Height;
    //...
    // DepthData must be allocated with Width * Height elements before copying
    frame.CopyFrameDataToArray(DepthData);
    // now DepthData contains the depth values in millimeters
}
The minimum and maximum reliable depths are given by the DepthMinReliableDistance and DepthMaxReliableDistance properties, also in millimeters.
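Since the SDK already delivers millimeters, no Kinect1-style calibration formula is needed: dividing by 1000 gives meters. A minimal sketch of that conversion, written here in Java over a plain array (the array name and the unsigned masking are just illustrative; in C# you would divide the ushort values directly):

// Minimal sketch: convert Kinect2 raw depth (unsigned 16-bit millimeters) to meters.
// rawDepth is assumed to hold the values copied out of the DepthFrame.
public static float[] depthToMeters(short[] rawDepth) {
    float[] meters = new float[rawDepth.length];
    for (int i = 0; i < rawDepth.length; i++) {
        int mm = rawDepth[i] & 0xFFFF; // reinterpret the 16-bit value as unsigned
        meters[i] = mm / 1000.0f;      // millimeters -> meters; 0 means "no reading"
    }
    return meters;
}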

Related

Pass audio spectrum to a shader as texture in libGDX

I'm developing an audio visualizer using libGDX.
I want to pass the audio spectrum data (an array containing the FFT of the audio sample) to a shader I took from Shadertoy: https://www.shadertoy.com/view/ttfGzH.
In the GLSL code I expect a uniform containing the data as a texture:
uniform sampler2D iChannel0;
The problem is that I can't figure out how to pass an arbitrary array as a texture to a shader in libGDX.
I already searched on SO and in libGDX's forum, but there isn't a satisfying answer to my problem.
Here is my Kotlin code (that obviously doesn't work xD):
val p = Pixmap(512, 1, Pixmap.Format.Alpha)
val t = Texture(p)
val map = p.pixels
map.putFloat(....) // fill the map with FFT data
[...]
t.bind(0)
shader.setUniformi("iChannel0", 0)
You could simply use the drawPixel method and store your data in the first channel of each pixel just like in the shadertoy example (they use the red channel).
float[] fftData = // your data
Color tmpColor = new Color();
Pixmap pixmap = new Pixmap(fftData.length, 1, Pixmap.Format.RGBA8888);
for(int i = 0; i < fftData.length; i++)
{
tmpColor.set(fftData[i], 0, 0, 0); // using only 1 channel per pixel
pixmap.drawPixel(i, 0, Color.rgba8888(tmpColor));
}
// then create your texture and bind it to the shader
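As a rough sketch of that last step (plain libGDX Java; pixmap and shader come from the code above and the question, the other names are illustrative):

// Upload the pixmap as a texture and hand it to the shader on texture unit 0.
Texture fftTexture = new Texture(pixmap);
fftTexture.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);

fftTexture.bind(0);                 // glActiveTexture(GL_TEXTURE0) + glBindTexture
shader.bind();                      // the ShaderProgram must be bound before setting uniforms (begin() in older libGDX versions)
shader.setUniformi("iChannel0", 0); // the sampler reads from texture unit 0

// If the FFT data changes every frame, redraw the pixmap and re-upload it:
// fftTexture.draw(pixmap, 0, 0);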
To be more efficient and use 4x less memory (and possibly fewer samples, depending on the shader), you could use 4 channels per pixel by splitting your data across the r, g, b and a channels. However, this complicates the shader a bit.
The data passed in the shader example you provided is not arbitrary, though: it has fairly limited precision and ranges between 0 and 1. If you want more precision, you could store each floating-point value across multiple channels (although the IEEE recomposition in the shader may be painful) or pass an integer to be scaled down (fixed point). If you need data between -inf and +inf you could use sigmoid and inverse-sigmoid functions, at the cost of again greatly reducing precision. I believe this technique will work for your example, though, since it only seems to require values between 0 and 1 and precision is not critical because the result is smoothed.

Rendering squares via DirectX 11

Intro
I am trying to render squares in DirectX 11 in the most efficient way. Each square has a color (float3) and a position (float3). A typical count of squares is about 5 million.
I tried 3 ways:
Render raw data
Use geometry shader
Use instanced rendering
Raw data means that each square is represented as 4 vertices in the vertex buffer and two triangles in the index buffer.
Geometry shader and instanced rendering mean that each square has just one vertex in the vertex buffer.
My results (on nvidia GTX960M) for 5M squares are:
Geometry shader 22 FPS
Instanced rendering 30 FPS
Raw data rendering 41 FPS
I expected the geometry shader not to be the most efficient method. On the other hand, I am surprised that instanced rendering is slower than raw data. The computation in the vertex shader is exactly the same: just a multiplication with the transform matrix stored in a constant buffer plus the addition of the Shift variable.
Raw data input
struct VSInput{
float3 Position : POSITION0;
float3 Color : COLOR0;
float2 Shift : TEXCOORD0;// This is xy deviation from square center
};
Instanced rendering input
struct VSInputPerVertex{
float2 Shift : TEXCOORD0;
};
struct VSInputPerInstance{
float3 Position : POSITION0;
float3 Color : COLOR0;
};
Note
For bigger models (20M squares), instanced rendering is more efficient (evidently because of memory traffic).
Question
Why is instanced rendering slower than raw data rendering in the case of 5M squares? Is there another, more efficient way to accomplish this rendering task? Am I missing something?
Edit
StructuredBuffer method
One possible solution is to use a StructuredBuffer, as @galop1n suggested (for details see his answer).
My results (on nvidia GTX960M) for 5M squares
StructuredBuffer 48 FPS
Observations
Sometimes I observed the StructuredBuffer method oscillating between 30 FPS and 55 FPS (accumulated over 100 frames). It seems to be a little unstable; the median is 48 FPS. I did not observe this with the previous methods.
Consider the balance between draw calls and StructuredBuffer sizes. For smaller models I reached the fastest behavior when I used buffers with 1K-4K points. When I tried to render the 5M-square model that way, I had a large number of draw calls and it was not efficient (30 FPS). The best behavior I observed with 5M squares was with 16K points per buffer; 32K and 8K points per buffer seemed to be slower settings.
A small vertex count per instance is usually a good way to under-utilize the hardware. I suggest the variant below instead; it should provide good performance on every vendor.
context->VSSetShaderResources(0, 1, &quadData); // quadData is the SRV over the quad StructuredBuffer
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->Draw(6 * quadCount, 0);
In the vertex shader, you have
struct Quad {
float3 pos;
float3 color;
};
StructuredBuffer<Quad> quads : register(t0);
And to rebuild your quads in the vertex shader:
// shift for each vertex
static const float2 shifts[6] = { float2(-1,-1), ..., float2(1,1) };
void main( uint vtx : SV_VertexID, out YourStuff yourStuff) {
Quad quad = quads[vtx/6];
float2 offs = shifts[vtx%6];
}
Then rebuild the vertex and transform as usual. Note that because you bypass the input assembler stage, if you want to send colors as rgba8 you need to use a uint and unpack it yourself manually. The bandwidth usage will be lower if you have millions of quads to draw.

Does anyone know the algorithm of the MKMapPointForCoordinate function in Objective-C MapKit?

MapKit has the function MKMapPointForCoordinate. It accepts a latitude/longitude as its argument and returns a point (x, y).
https://developer.apple.com/library/prerelease/ios/documentation/MapKit/Reference/MapKitFunctionsReference/index.html
lat = 59.90738808515509
lng = 10.724523067474365
If we pass the above lat/lng, the function returns
x = 142214284, y = 78089986
I checked this lat/lng against UTM but it gives a different result:
http://www.latlong.net/lat-long-utm.html
MKMapPointForCoordinate doesn't return UTM Coordinates.
Coordinates refer to a position on the Earth (a pseudo-sphere), but sometimes you need to do calculations against a 2D map (which is much simpler) and then convert back to coordinates. That is the goal of the conversion.
So the MKMapPoint struct returned by MKMapPointForCoordinate is a 2D representation of the coordinates, but it doesn't match any known standard.
At this link: https://developer.apple.com/library/prerelease/ios/documentation/MapKit/Reference/MapKitDataTypesReference/index.html#//apple_ref/doc/c_ref/MKMapPoint
in the MKMapPoint documentation, you can read:
The actual units of a map point are tied to the underlying units used to draw the contents of an MKMapView, but you should never need to worry about these units directly. You use map points primarily to simplify computations that would be complex to do using coordinate values on a curved surface.
EDIT
For coordinates-to-UTM conversion, in a previous project I used this Open Source Code.

Measuring hip length using kinect

For an app I have to take the measurements of a person. Is there any way to measure the size of their hips from the IR and image data I have?
The depth image stream has an option to mark which pixels belong to each person. You can use that together with the position of the hip joint to measure the hips with a simple method: start at the hip joint position and use a loop to move right and left. Once you have the pixels on the depth image that correspond to the edges of the user's silhouette, convert them to world coordinates to get the distance between them in millimeters. Be warned that you will not get accurate results: things like clothing and accessories can make the Kinect fail to detect the user's shape correctly.
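A rough sketch of that left/right scan is shown below, written in Java over plain arrays since the idea itself is SDK-agnostic; playerIndex, hipX, hipY and player are placeholder names for data you would pull from the Kinect SDK (the per-pixel player index and the hip joint mapped into depth-image coordinates).

// Walk left and right from the hip joint along its depth-image row until the
// player's silhouette ends. playerIndex holds one player id per pixel (0 = none).
static int[] findHipEdges(int[] playerIndex, int width, int hipX, int hipY, int player) {
    int rowStart = hipY * width;
    int left = hipX;
    while (left > 0 && playerIndex[rowStart + left - 1] == player) {
        left--;  // keep moving left while the neighbor still belongs to the player
    }
    int right = hipX;
    while (right < width - 1 && playerIndex[rowStart + right + 1] == player) {
        right++; // keep moving right while the neighbor still belongs to the player
    }
    // These two edge pixels are what you then convert to world coordinates
    // (e.g. with the SDK's coordinate mapper) to measure the distance in millimeters.
    return new int[] { left, right };
}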
To determine if a pixel in the depth frame belongs to a user, take a look at this example from the MSDN (https://msdn.microsoft.com/en-us/library/jj131025.aspx):
private void FindPlayerInDepthPixel(short[] depthFrame)
{
foreach(short depthPixel in depthFrame)
{
int player = depthPixel & DepthImageFrame.PlayerIndexBitmask;
if (player > 0 && this.skeletonData != null)
{
// Found the player at this pixel
// ...
}
}
}

Gravitational Pull

Does anyone know of a tutorial that deals with the gravitational pull of two objects? E.g. a satellite being drawn toward the moon (and possibly slingshotting past it).
I have a small Java game that I am working on and I would like to implement this feature in it.
I have the formula for gravitational attraction between two bodies, but when I try to use it in my game, nothing happens.
There are two objects on the screen, one of which is always stationary while the other moves in a straight line at a constant speed until it comes within the detection range of the stationary object, at which point it should be drawn toward the stationary object.
First I calculate the distance between the two objects, and depending on their masses and this distance, I update the x and y coordinates.
But like I said, nothing happens. Am I not implementing the formula correctly?
I have included some code to show what I have so far.
This is the moment when the particle enters the gate's detection range and should start being pulled towards it:
for (int i = 0; i < particle.length; i++)
{
// **************************************************************************************************
// GATE COLLISION
// **************************************************************************************************
// Getting the instance when a Particle collides with a Gate
if (getDistanceBetweenObjects(gate.getX(), particle[i].getX(), gate.getY(), particle[i].getY()) <=
sumOfRadii(particle[i].getRadius(), barrier.getRadius()))
{
particle[i].calcGravPull(particle[i].getMass(), barrier.getMass(),
getDistanceBetweenObjects(gate.getX(), particle[i].getX(), gate.getY(), particle[i].getY()));
}
And the method in my Particle class that does the movement:
// Calculate the gravitational pull between objects
public void calcGravPull(int mass1, int mass2, double distBetweenObjects)
{
double gravityPull;
gravityPull = GRAV_CONSTANT * ((mass1 * mass2) / (distBetweenObjects * distBetweenObjects));
x += gravityPull;
y += gravityPull;
}
Your formula has problems. You're calculating the gravitational force, and then applying it as if it were an acceleration. Acceleration is force divided by mass, so you need to divide the force by the small object's mass. Therefore, GRAV_CONSTANT * ((mass1) / (distBetweenObjects * distBetweenObjects)) is the formula for acceleration of mass2.
Then you're using it as if it were a positional adjustment, not a velocity adjustment (which an acceleration is). Keep track of the velocity of the moving mass, use that to adjust its position, and use the acceleration to change that velocity.
Finally, you're using the acceleration as a scalar when it's really a vector. Calculate the angle from the moving mass to the stationary mass; if you represent it as the angle from the positive x-axis, the x component of the acceleration is the magnitude times the cosine of the angle, and the y component is the magnitude times the sine of the angle.
That will give you a correct representation of gravity.
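Putting those three fixes together, a minimal sketch of the corrected update could look like this (the method name, the velX/velY fields and the dt parameter are illustrative additions the question's Particle class would need; GRAV_CONSTANT, x and y come from the question):

// Force -> acceleration -> velocity -> position, with the acceleration treated as a vector.
public void applyGravity(double stationaryMass, double stationaryX, double stationaryY, double dt)
{
    double dx = stationaryX - x;
    double dy = stationaryY - y;
    double dist = Math.sqrt(dx * dx + dy * dy);

    // Acceleration of this particle: a = G * M / r^2 (its own mass cancels out).
    double accel = GRAV_CONSTANT * stationaryMass / (dist * dist);

    // Decompose along the direction to the stationary mass
    // (equivalent to multiplying by cos/sin of the angle).
    double ax = accel * dx / dist;
    double ay = accel * dy / dist;

    velX += ax * dt; // acceleration changes velocity
    velY += ay * dt;
    x += velX * dt;  // velocity changes position
    y += velY * dt;
}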
If it does nothing, check the coordinates to see what is happening. Make sure the stationary mass is large enough to have an effect. Gravity is a very weak force, and you'll see no significant effect with anything much smaller than a planetary mass.
Also, make sure you're using the correct gravitational constant for the units you're using. The constant you find in the books is for the MKS system (meters, kilograms, and seconds). If you're using kilometers as the unit of length, you need to rescale the constant accordingly, or alternatively multiply the length by a thousand (converting it back to meters) before plugging it into the formula.
Your algorithm is correct. Probably the gravitational pull you compute is too small to be seen. I'd remove GRAV_CONSTANT and try again.
BTW, you can gain a bit of speed by moving the result of getDistanceBetweenObjects() into a temporary variable.
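For example, reusing the names from the question's loop, the range check and the gravity call could share a single distance computation:

// Compute the distance once per particle and reuse it for both the check and the pull.
double dist = getDistanceBetweenObjects(gate.getX(), particle[i].getX(),
        gate.getY(), particle[i].getY());
if (dist <= sumOfRadii(particle[i].getRadius(), barrier.getRadius()))
{
    particle[i].calcGravPull(particle[i].getMass(), barrier.getMass(), dist);
}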