Object height using Kinect

For example, I am standing in front of my Kinect. The Kinect can identify the joints and exposes them as a data structure; up to this point I am clear.
So, can we define the height as the difference Head joint - ((LeftAnkle + RightAnkle) / 2)?
I have tried trigonometric formulas, but I am facing two problems. One is identifying the person in the view. The second is identifying the exact positions of the top of the head and the bottom of the foot.
I have tried the point cloud, but got lost in how to generate a point cloud specific to a person, i.e. without including the background objects.
Please suggest some ideas for calculating the height of a person using the Kinect.

You can convert the Head joint into the global coordinate system; there is no need to do any math. The y coordinate in global coordinates will be his height, assuming the coordinate system is aligned so that the floor sits at y = 0.
All you need to do is check which pixel the head joint is at and convert the pixel + depth information into world coordinate space in mm.
I don't know what API you are using, but if it is capable of segmenting a human and returning his joints, you are probably using OpenNI/NITE or the Microsoft SDK. Both of them have a function that converts a pixel + depth coordinate into an x, y, z position in mm. I don't know exactly which functions they are, but their names would be something like depth_to_mm or disparity_to_mm. You need to check both documentations to find them, or you can do it yourself.
This site has information on how to convert depth to mm: http://nicolas.burrus.name/index.php/Research/KinectCalibration
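As a minimal sketch of that conversion, assuming the usual pinhole model and the approximate default depth-camera intrinsics reported on the linked page (a per-device calibration will be more accurate):
using System;

public static class DepthToWorld
{
    // Approximate default intrinsics from the linked calibration page (assumed values).
    const double Fx = 594.21, Fy = 591.04;  // focal lengths in pixels
    const double Cx = 339.5,  Cy = 242.7;   // principal point in pixels

    // Converts a depth pixel (u, v) plus its depth in mm to camera-space coordinates in mm.
    public static void Convert(int u, int v, double depthMm,
                               out double x, out double y, out double z)
    {
        x = (u - Cx) * depthMm / Fx;
        y = (v - Cy) * depthMm / Fy;
        z = depthMm;
    }
}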

I extracted the two points (Head and Left Foot, or Right Foot) and found that the Euclidean distance between them gave the height with about a 4-inch variation. My test results are satisfactory, so we are using this approach as a temporary workaround.
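A minimal sketch of that approach, assuming Kinect SDK 1.x names (Skeleton, JointType.Head, JointType.FootLeft):
using System;
using Microsoft.Kinect;

static double HeadToFootDistance(Skeleton skeleton)
{
    // Straight-line (Euclidean) distance between the Head and FootLeft joints.
    SkeletonPoint h = skeleton.Joints[JointType.Head].Position;
    SkeletonPoint f = skeleton.Joints[JointType.FootLeft].Position;
    return Math.Sqrt(Math.Pow(h.X - f.X, 2) +
                     Math.Pow(h.Y - f.Y, 2) +
                     Math.Pow(h.Z - f.Z, 2));
}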

An old question, but I found a very nice explanation and example here.
It also explains that height isn't merely a function of the head and ankle points, but rather a function of the following line segments:
Head - ShoulderCenter
ShoulderCenter - Spine
Spine - HipCenter
HipCenter - KneeLeft or KneeRight
KneeLeft / KneeRight - AnkleLeft / AnkleRight
AnkleLeft / AnkleRight - FootLeft / FootRight
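Putting that list together, a hedged sketch in SDK 1.x terms (joints is the skeleton's joint collection, and Length is a Euclidean distance helper such as the one in the answer below):
// Left leg chosen for illustration; in practice use whichever leg is tracked better.
double height =
      Length(joints[JointType.Head], joints[JointType.ShoulderCenter])
    + Length(joints[JointType.ShoulderCenter], joints[JointType.Spine])
    + Length(joints[JointType.Spine], joints[JointType.HipCenter])
    + Length(joints[JointType.HipCenter], joints[JointType.KneeLeft])
    + Length(joints[JointType.KneeLeft], joints[JointType.AnkleLeft])
    + Length(joints[JointType.AnkleLeft], joints[JointType.FootLeft]);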

Here is a formula for the Kinect SDK 2.0. The full project is available at https://github.com/jhealy/kinect2/tree/master/020-FaceNSkin_HowTallAmI.
using System;
using Microsoft.Kinect;

// Skeleton (SDK 1.x) is now Body in the Kinect SDK 2.0.
public enum BodyHeightMeasurementSystem
{
    Meters = 0, Imperial = 1
}

public static class BodyHeightExtension
{
    // Change this to change the way values are returned; by default everything is meters.
    public static BodyHeightMeasurementSystem MeasurementSystem = BodyHeightMeasurementSystem.Meters;

    /// <summary>
    /// Gets the height of a body in meters.
    /// </summary>
    /// <param name="TargetBody">Extension-method receiver; callers do not pass it explicitly.</param>
    /// <returns>
    /// positive value: height in meters
    /// -1.0 : null body passed in
    /// -2.0 : body not tracked, no height available
    /// </returns>
    public static double Height( this Body TargetBody )
    {
        if ( TargetBody == null ) return -1.0;
        if ( TargetBody.IsTracked == false ) return -2.0;

        // Rough allowance for the distance from the Head joint to the top of the skull.
        const double HEAD_DIVERGENCE = 0.1;

        Joint _head = TargetBody.Joints[JointType.Head];
        Joint _neck = TargetBody.Joints[JointType.Neck];
        // SDK 1.x: skeleton.Joints[JointType.Spine]
        Joint _spine = TargetBody.Joints[JointType.SpineShoulder];
        // SDK 1.x: skeleton.Joints[JointType.HipCenter]; SpineMid is ignored here.
        Joint _waist = TargetBody.Joints[JointType.SpineBase];
        Joint _hipLeft = TargetBody.Joints[JointType.HipLeft];
        Joint _hipRight = TargetBody.Joints[JointType.HipRight];
        Joint _kneeLeft = TargetBody.Joints[JointType.KneeLeft];
        Joint _kneeRight = TargetBody.Joints[JointType.KneeRight];
        Joint _ankleLeft = TargetBody.Joints[JointType.AnkleLeft];
        Joint _ankleRight = TargetBody.Joints[JointType.AnkleRight];
        Joint _footLeft = TargetBody.Joints[JointType.FootLeft];
        Joint _footRight = TargetBody.Joints[JointType.FootRight];

        // Find which leg is tracked more accurately.
        int legLeftTrackedJoints = NumberOfTrackedJoints(_hipLeft, _kneeLeft, _ankleLeft, _footLeft);
        int legRightTrackedJoints = NumberOfTrackedJoints(_hipRight, _kneeRight, _ankleRight, _footRight);

        double legLength = legLeftTrackedJoints > legRightTrackedJoints
            ? Length(_hipLeft, _kneeLeft, _ankleLeft, _footLeft)
            : Length(_hipRight, _kneeRight, _ankleRight, _footRight);

        // Default is meters; convert to feet if imperial was requested.
        double _retval = Length(_head, _neck, _spine, _waist) + legLength + HEAD_DIVERGENCE;
        if (MeasurementSystem == BodyHeightMeasurementSystem.Imperial)
            _retval = MetricHelpers.MetersToFeet(_retval); // helper from the linked project
        return _retval;
    }

    /// <summary>
    /// Returns the upper height of the specified body (head to waist). Useful whenever
    /// Kinect provides a way to track seated users.
    /// </summary>
    /// <param name="TargetBody">The specified body.</param>
    /// <returns>The upper height of the body in meters.</returns>
    public static double UpperHeight( this Body TargetBody )
    {
        Joint _head = TargetBody.Joints[JointType.Head];
        // Used to be ShoulderCenter in SDK 1.x; SpineMid is the closest SDK 2.0 joint.
        Joint _neck = TargetBody.Joints[JointType.SpineMid];
        // .Spine is now .SpineShoulder
        Joint _spine = TargetBody.Joints[JointType.SpineShoulder];
        // HipCenter is now SpineBase
        Joint _waist = TargetBody.Joints[JointType.SpineBase];
        return Length(_head, _neck, _spine, _waist);
    }

    /// <summary>
    /// Returns the length of the segment defined by the specified joints.
    /// </summary>
    /// <param name="p1">The first joint (start of the segment).</param>
    /// <param name="p2">The second joint (end of the segment).</param>
    /// <returns>The length of the segment in meters.</returns>
    public static double Length(Joint p1, Joint p2)
    {
        return Math.Sqrt(
            Math.Pow(p1.Position.X - p2.Position.X, 2) +
            Math.Pow(p1.Position.Y - p2.Position.Y, 2) +
            Math.Pow(p1.Position.Z - p2.Position.Z, 2));
    }

    /// <summary>
    /// Returns the total length of the segments defined by the specified joints.
    /// </summary>
    /// <param name="joints">A collection of two or more joints.</param>
    /// <returns>The length of all the segments in meters.</returns>
    public static double Length(params Joint[] joints)
    {
        double length = 0;
        for (int index = 0; index < joints.Length - 1; index++)
        {
            length += Length(joints[index], joints[index + 1]);
        }
        return length;
    }

    /// <summary>
    /// Given a collection of joints, counts how many of them are tracked accurately.
    /// </summary>
    /// <param name="joints">A collection of joints.</param>
    /// <returns>The number of accurately tracked joints.</returns>
    public static int NumberOfTrackedJoints(params Joint[] joints)
    {
        int trackedJoints = 0;
        foreach (var joint in joints)
        {
            // SDK 1.x: joint.TrackingState == JointTrackingState.Tracked
            if (joint.TrackingState == TrackingState.Tracked)
            {
                trackedJoints++;
            }
        }
        return trackedJoints;
    }

    /// <summary>
    /// Scales the specified joint according to the specified dimensions.
    /// </summary>
    /// <param name="joint">The joint to scale.</param>
    /// <param name="width">Width.</param>
    /// <param name="height">Height.</param>
    /// <param name="MaxX">Maximum X.</param>
    /// <param name="MaxY">Maximum Y.</param>
    /// <returns>The scaled version of the joint.</returns>
    public static Joint ScaleTo(Joint joint, int width, int height, float MaxX, float MaxY)
    {
        // SDK 1.x used SkeletonPoint; SDK 2.0 uses CameraSpacePoint.
        Microsoft.Kinect.CameraSpacePoint position = new Microsoft.Kinect.CameraSpacePoint()
        {
            X = Scale(width, MaxX, joint.Position.X),
            Y = Scale(height, MaxY, -joint.Position.Y),
            Z = joint.Position.Z
        };
        joint.Position = position;
        return joint;
    }

    /// <summary>
    /// Scales the specified joint according to the specified dimensions.
    /// </summary>
    /// <param name="joint">The joint to scale.</param>
    /// <param name="width">Width.</param>
    /// <param name="height">Height.</param>
    /// <returns>The scaled version of the joint.</returns>
    public static Joint ScaleTo(Joint joint, int width, int height)
    {
        return ScaleTo(joint, width, height, 1.0f, 1.0f);
    }

    /// <summary>
    /// Returns the scaled value of the specified position.
    /// </summary>
    /// <param name="maxPixel">Width or height.</param>
    /// <param name="maxBody">Border (X or Y).</param>
    /// <param name="position">Original position (X or Y).</param>
    /// <returns>The scaled value of the specified position.</returns>
    private static float Scale(int maxPixel, float maxBody, float position)
    {
        float value = ((((maxPixel / maxBody) / 2) * position) + (maxPixel / 2));
        if (value > maxPixel)
        {
            return maxPixel;
        }
        if (value < 0)
        {
            return 0;
        }
        return value;
    }
}
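A hedged usage sketch (the event wiring is assumed; a BodyFrameReader's FrameArrived handler that prints the height of each tracked body):
using System;
using Microsoft.Kinect;

void Reader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;
        Body[] bodies = new Body[frame.BodyCount];
        frame.GetAndRefreshBodyData(bodies);
        foreach (Body body in bodies)
        {
            if (body.IsTracked)
                Console.WriteLine("Height: {0:N2} m", body.Height());
        }
    }
}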

Related

How to include function in documentation test example?

I am trying to make a simple documentation example, but it doesn't compile. I tried:
/// Given an axis in 3D returns the indices of the 3 basis axis in the order such
/// that the first index represents forward (the input) the next the side and the final
/// the up. i.e. it computes the even permutation of basis index that creates the
/// rotation that aligns the basis with the selected axis.
///
/// # Arguments
///
/// * index an index from 0-2 selecting the forward axis.
///
/// # Examples
///
/// ```
/// use crate::axis_index_to_basis_indices;
/// let x_index = 0;
/// let y_index = 1;
/// let z_index = 2;
/// let (forward, side, up) = axis_index_to_basis_indices(x_index);
/// assert!(forward == x_index);
/// assert!(side == y_index);
/// assert!(up == z_index);
///
/// let (forward, side, up) = axis_index_to_basis_indices(z_index);
/// assert!(forward == z_index);
/// assert!(side == x_index);
/// assert!(up == y_index);
/// ```
fn axis_index_to_basis_indices(index: i32) -> (i32, i32, i32)
{
    match index
    {
        0 => (0, 1, 2),
        1 => (1, 2, 0),
        2 => (2, 0, 1),
        _ => panic!(),
    }
}
Which gives:
error[E0432]: unresolved import `crate::axis_index_to_basis_indices`
--> dual_contouring.rs:107:5
|
3 | use crate::axis_index_to_basis_indices;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `axis_index_to_basis_indices` in the root
error: aborting due to previous error
For more information about this error, try `rustc --explain E0432`.
Couldn't compile the test.
failures:
dual_contouring.rs - axis_index_to_basis_indices (line 106)
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.02s
error: doctest failed, to rerun pass `--doc`
The official docs are incomplete in this regard. Reading them, one would assume the import should be use doc::name_of_crate, but that doesn't work either.

WebView2 Maps API from point to lat long with higher dpi not accurate

I am using WebView2 with the Google Maps API scripts. At 96 DPI (100% scale) the static map is inserted correctly into AutoCAD, and the latitude and longitude are calculated correctly from the screen point. If I increase the display scale above 100%, the latitude and longitude are calculated incorrectly from the screen point, and the static map is inserted slightly off from the right position in AutoCAD.
What would be the solution for higher DPI?
Here is the script I use:
function getMapType() {
    return map.getMapTypeId();
}

function getMapZoom() {
    return map.getZoom();
}

function getMapCenter() {
    var c = map.getCenter();
    return c.lat() + "|" + c.lng();
}

function getMapProjection() {
    projection = map.getProjection();
    topRight = projection.fromLatLngToPoint(map.getBounds().getNorthEast());
    bottomLeft = projection.fromLatLngToPoint(map.getBounds().getSouthWest());
    scale = 1 << map.getZoom();
}

function getLatLongByScreenPoint(x, y) {
    var c = projection.fromPointToLatLng(new google.maps.Point(x / scale + bottomLeft.x, y / scale + topRight.y));
    return c.lat() + "|" + c.lng();
}
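One likely culprit, offered as an assumption: the map script works in CSS pixels, while the host typically captures the cursor in physical pixels, so at a display scale above 100% the point must be divided by the scale factor before it is handed to getLatLongByScreenPoint. A minimal sketch for a WPF host (the webView control name and the helper are hypothetical):
using System.Globalization;
using System.Threading.Tasks;
using System.Windows;
using Microsoft.Web.WebView2.Wpf;

async Task<string> LatLongAtPhysicalPoint(WebView2 webView, double physicalX, double physicalY)
{
    // DPI scale factor of the hosting window (1.0 at 100%, 1.5 at 150%, ...).
    double scale = PresentationSource.FromVisual(webView)
                       .CompositionTarget.TransformToDevice.M11;
    // Divide the physical coordinates by the scale to get CSS pixels.
    return await webView.CoreWebView2.ExecuteScriptAsync(
        string.Format(CultureInfo.InvariantCulture,
                      "getLatLongByScreenPoint({0}, {1})",
                      physicalX / scale, physicalY / scale));
}
In a WinForms host, Control.DeviceDpi / 96.0 should give the same factor, if I recall correctly.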

Resume an ml-agents training after changing hyperparameters and adding new observation vectors

I am training an agent with ml-agents in Unity. When I change the number of stacked vectors, the observation vectors, and the hyperparameters, I cannot resume the training from the last run, because TensorFlow tells me the lhs and rhs shapes are not the same.
I would like to be able to change the agent scripts and config scripts and resume the training with the new parameters, so as not to lose the past progress the agent made. For the moment I must either restart a new training or leave the number of observation vectors unchanged.
How can I do this?
Thank you very much.
EDIT: Here is an example of what I want to test and the error I got with the RollerBall ml-agents tutorial. See https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Learning-Environment-Create-New.md
GOAL: I want to see the impact of the choice of observation vectors on the agent's training.
I ran a training with the basic agent script given in the tutorial. Here it is:
using System.Collections.Generic;
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class RollerAgent : Agent
{
    Rigidbody rBody;

    void Start()
    {
        rBody = GetComponent<Rigidbody>();
    }

    public Transform Target;

    public override void OnEpisodeBegin()
    {
        if (this.transform.localPosition.y < 0)
        {
            // If the Agent fell, zero its momentum
            this.rBody.angularVelocity = Vector3.zero;
            this.rBody.velocity = Vector3.zero;
            this.transform.localPosition = new Vector3(0, 0.5f, 0);
        }

        // Move the target to a new spot
        Target.localPosition = new Vector3(Random.value * 8 - 4,
                                           0.5f,
                                           Random.value * 8 - 4);
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Target and Agent positions
        sensor.AddObservation(Target.localPosition);
        sensor.AddObservation(this.transform.localPosition);

        // Agent velocity
        sensor.AddObservation(rBody.velocity.x);
        sensor.AddObservation(rBody.velocity.z);
    }

    public float speed = 10;

    public override void OnActionReceived(float[] vectorAction)
    {
        // Actions, size = 2
        Vector3 controlSignal = Vector3.zero;
        controlSignal.x = vectorAction[0];
        controlSignal.z = vectorAction[1];
        rBody.AddForce(controlSignal * speed);

        // Rewards
        float distanceToTarget = Vector3.Distance(this.transform.localPosition, Target.localPosition);

        // Reached target
        if (distanceToTarget < 1.42f)
        {
            SetReward(1.0f);
            EndEpisode();
        }

        // Fell off platform
        if (this.transform.localPosition.y < 0)
        {
            EndEpisode();
        }
    }

    public override void Heuristic(float[] actionsOut)
    {
        actionsOut[0] = Input.GetAxis("Horizontal");
        actionsOut[1] = Input.GetAxis("Vertical");
    }
}
I stopped the training before the agent hit the benchmark.
I then removed the observation vectors concerning the agent's velocity and adjusted the number of observation vectors in Unity from 8 to 6. Here is the new code:
using System.Collections.Generic;
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;

public class RollerAgent : Agent
{
    Rigidbody rBody;

    void Start()
    {
        rBody = GetComponent<Rigidbody>();
    }

    public Transform Target;

    public override void OnEpisodeBegin()
    {
        if (this.transform.localPosition.y < 0)
        {
            // If the Agent fell, zero its momentum
            this.rBody.angularVelocity = Vector3.zero;
            this.rBody.velocity = Vector3.zero;
            this.transform.localPosition = new Vector3(0, 0.5f, 0);
        }

        // Move the target to a new spot
        Target.localPosition = new Vector3(Random.value * 8 - 4,
                                           0.5f,
                                           Random.value * 8 - 4);
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        // Target and Agent positions
        sensor.AddObservation(Target.localPosition);
        sensor.AddObservation(this.transform.localPosition);

        // Agent velocity (removed)
        //sensor.AddObservation(rBody.velocity.x);
        //sensor.AddObservation(rBody.velocity.z);
    }

    public float speed = 10;

    public override void OnActionReceived(float[] vectorAction)
    {
        // Actions, size = 2
        Vector3 controlSignal = Vector3.zero;
        controlSignal.x = vectorAction[0];
        controlSignal.z = vectorAction[1];
        rBody.AddForce(controlSignal * speed);

        // Rewards
        float distanceToTarget = Vector3.Distance(this.transform.localPosition, Target.localPosition);

        // Reached target
        if (distanceToTarget < 1.42f)
        {
            SetReward(1.0f);
            EndEpisode();
        }

        // Fell off platform
        if (this.transform.localPosition.y < 0)
        {
            EndEpisode();
        }
    }

    public override void Heuristic(float[] actionsOut)
    {
        actionsOut[0] = Input.GetAxis("Horizontal");
        actionsOut[1] = Input.GetAxis("Vertical");
    }
}
I ran again with the same ID and RESUMED the training so as to keep the progress made during the last run. But when I pressed the Play button in the Unity editor I got this error:
tensorflow.python.framework.errors_impl.InvalidArgumentError:
Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Assign requires shapes of both tensors to match. lhs shape= [6,128] rhs shape= [8,128]
[[node save_1/Assign_26 (defined at c:\users\jeann\anaconda3\envs\ml-agents-1.0.2\lib\site-packages\mlagents\trainers\policy\tf_policy.py:115)]]
Errors may have originated from an input operation.
I know it may seem like nonsense to reuse the progress of the last training when I use a new brain configuration for the agent; the first network layer's weight matrix has one row per observation input, so shrinking the observation vector from 8 to 6 changes its shape from [8,128] to [6,128] and the checkpoint no longer fits. But in the project I am currently working on, I need to keep the improvement made by the agent even if we change the observation vectors. Is there a way to do this, or is it impossible?
Thank you :)

Intercept of sunrise on an Airplane

I want to calculate the predicted time of closest approach between an aircraft and the sunrise or sunset, keeping in mind:
The airplane is flying south-westbound as sunrise approaches.
The red line is the great-circle track of the airplane.
The blue circle is the airplane.
I want the moment of intersection between the sunrise and the airplane.
1- The sun's declination (latitude) and crossing longitude are known, plus the radius of the sunrise circle, which is approximately 5450 nautical miles, so the sunrise can be drawn as a circle with known centre and radius.
2- I used 2D vector code, which did not work, since a great-circle path cannot be applied to the XY plane.
3- The airplane is flying on a great-circle track, which is curved, and the latitude change is not linear. How can I use the airplane's speed as a velocity vector if the latitude change is not constant?
I used this C# code:
/// Calculate the time of closest approach of two moving circles, and determine whether the circles collide.
///
/// Input:
///   Pa - Position of circle A.
///   Pb - Position of circle B.
///   Va - Velocity of circle A.
///   Vb - Velocity of circle B.
///   Ra - Radius of circle A.
///   Rb - Radius of circle B.
///
/// Returns:
///   collision - True if a collision occurred, else False.
///   The method returns the time to impact if collision is true, else the time of closest approach.

// Set up the initial position, velocity, and size of the circles.
Pa = new Vector2(150, 250);
Pb = new Vector2(600, 400);
Va = new Vector2(450, 0);
Vb = new Vector2(-100, -250);
Ra = 60;
Rb = 30;

public float TimeOfClosestApproach(Vector2 Pa, Vector2 Pb, Vector2 Va, Vector2 Vb, float Ra, float Rb, out bool collision)
{
    Vector2 Pab = Pa - Pb;
    Vector2 Vab = Va - Vb;
    float a = Vector2.Dot(Vab, Vab);
    float b = 2 * Vector2.Dot(Pab, Vab);
    float c = Vector2.Dot(Pab, Pab) - (Ra + Rb) * (Ra + Rb);

    // The quadratic discriminant.
    float discriminant = b * b - 4 * a * c;

    // Case 1:
    // If the discriminant is negative, there are no real roots, so there is no collision. The time of
    // closest approach is then given by the average of the imaginary roots, which is: t = -b / 2a
    float t;
    if (discriminant < 0)
    {
        t = -b / (2 * a);
        collision = false;
    }
    else
    {
        // Cases 2 and 3:
        // If the discriminant is zero, there is exactly one real root, meaning the circles just grazed
        // each other. If the discriminant is positive, there are two real roots, meaning the circles
        // penetrate each other. In that case, the smaller of the two roots is the initial time of
        // impact. We handle these two cases identically.
        float t0 = (-b + (float)Math.Sqrt(discriminant)) / (2 * a);
        float t1 = (-b - (float)Math.Sqrt(discriminant)) / (2 * a);
        t = Math.Min(t0, t1);

        // We also have to check whether the time to impact is negative. If it is, the collision
        // occurred in the past. Since we're only concerned with future events, we say that no
        // collision occurs if t < 0.
        if (t < 0)
            collision = false;
        else
            collision = true;
    }

    // Finally, if the time is negative, set it to zero, because, again, we want this function
    // to respond only to future events.
    if (t < 0)
        t = 0;

    return t;
}
I accept an answer in any language: Java, JS, Objective-C, Swift, or C#.
All I am looking for is the algorithm, and how to represent the airplane's speed as a velocity Vector2D or Vector3D.
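For the velocity-vector part, a hedged sketch under a flat-earth assumption: over a short time step the velocity can be formed from ground speed and the current true track, then re-evaluated each step as the great-circle track curves (the method name is mine, not from any library):
using System;
using System.Numerics;

// East = +X, north = +Y; valid only locally, so recompute as the track changes.
static Vector2 VelocityFromTrack(double groundSpeedKnots, double trueTrackDegrees)
{
    double rad = trueTrackDegrees * Math.PI / 180.0;
    return new Vector2(
        (float)(groundSpeedKnots * Math.Sin(rad)),  // east component
        (float)(groundSpeedKnots * Math.Cos(rad))); // north component
}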

XNA - how to tell if a thumb stick was "twitched" in a certain direction

Is there anything in the API (3 or 4) to tell me if the stick moved in one direction, as in a menu where it's equivalent to hitting a direction on the DPad? There appear to be some Thumbstick* members in the Buttons enum, but I can't find decent documentation on them.
Just want to make sure I'm not missing something obvious before I go and roll my own. Thanks!
There is no XNA method to tell you if a thumbstick was "twitched" this frame.
The easiest method is to store the old thumbstick state. If the state was zero and is now non-zero, it has been twitched.
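A minimal sketch of that idea (the names are mine; a small dead zone keeps sensor noise from registering as a twitch):
Vector2 m_oldLeftStick; // stored between frames

bool WasLeftStickTwitched(GamePadState gamePadState)
{
    const float deadZone = 0.25f;
    Vector2 newLeftStick = gamePadState.ThumbSticks.Left;
    // Twitched if the stick was inside the dead zone last frame and outside it now.
    bool twitched = m_oldLeftStick.Length() <= deadZone
                 && newLeftStick.Length() > deadZone;
    m_oldLeftStick = newLeftStick;
    return twitched;
}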
Addition:
Instead of checking whether the state was zero and is now non-zero, you can use the thumbstick buttons from the enumeration you mention in your question to determine whether the stick has been "twitched". In this case you are treating the stick like a DPad and have to test each direction independently. The following code shows this method:
private void ProcessUserInput()
{
    GamePadState gamePadState = GamePad.GetState(PlayerIndex.One);

    if (m_lastGamePadState.IsButtonUp(Buttons.LeftThumbstickUp) && gamePadState.IsButtonDown(Buttons.LeftThumbstickUp))
    {
        PrevMenuItem();
    }

    if (m_lastGamePadState.IsButtonUp(Buttons.LeftThumbstickDown) && gamePadState.IsButtonDown(Buttons.LeftThumbstickDown))
    {
        NextMenuItem();
    }

    m_lastGamePadState = gamePadState;
}
The thumbsticks on an Xbox 360 controller can be pushed "in" like buttons, which map to GamePadButtons.LeftStick and GamePadButtons.RightStick. These are obviously not what you want.
Here is the code that I use for detecting "presses" in any direction (where padLeftPushActive is stored between frames):
Vector2 padLeftVector = gamePadState.ThumbSticks.Left;
bool lastPadLeftPushActive = padLeftPushActive;

// Hysteresis: activate above 0.85, release below 0.75, so the press doesn't flicker.
if (padLeftVector.Length() > 0.85f)
    padLeftPushActive = true;
else if (padLeftVector.Length() < 0.75f)
    padLeftPushActive = false;

if (!lastPadLeftPushActive && padLeftPushActive)
{
    DoSomething(Vector2.Normalize(padLeftVector));
}
Is the GamePadState.ThumbSticks property what you're looking for?
Here's the solution I came up with, in case it's useful for anyone:
enum Stick {
    Left,
    Right,
}

GamePadState oldState;
GamePadState newState;

/// <summary>
/// Checks if a thumbstick was quickly tapped in a certain direction.
/// This is useful for navigating menus and other situations where
/// we treat a thumbstick as a D-Pad.
/// </summary>
/// <param name="which">Which stick to check: left or right</param>
/// <param name="direction">A vector in the direction to check.
/// The length, which should be between 0.0 and 1.0, determines
/// the threshold.</param>
/// <returns>True if a twitch was detected</returns>
public bool WasStickTwitched(Stick which, Vector2 direction)
{
    if (direction.X == 0 && direction.Y == 0)
        return false;

    Vector2 sold, snew;
    if (which == Stick.Left)
    {
        sold = oldState.ThumbSticks.Left;
        snew = newState.ThumbSticks.Left;
    }
    else
    {
        sold = oldState.ThumbSticks.Right;
        snew = newState.ThumbSticks.Right;
    }

    // The stick counts as "past the threshold" when every non-zero component of
    // direction is exceeded in that direction (the ratio test handles the signs).
    Vector2 twitch = snew;
    bool x = (direction.X == 0 || twitch.X / direction.X > 1);
    bool y = (direction.Y == 0 || twitch.Y / direction.Y > 1);
    bool tnew = x && y;

    twitch = sold;
    x = (direction.X == 0 || twitch.X / direction.X > 1);
    y = (direction.Y == 0 || twitch.Y / direction.Y > 1);
    bool told = x && y;

    // A twitch is a rising edge: past the threshold now, but not last frame.
    return tnew && !told;
}