Calculating the center of mass of a body being tracked using Kinect? - kinect

I am working with the Kinect for my research project. I have previously calculated joint angles and joint coordinates from the Kinect, and now I would like to calculate the center of mass of the body being tracked.
Any ideas would be appreciated, and code snippets would be immensely helpful.
I owe a lot to Stack Overflow; without the community's help this would not have been possible.
Thanks in advance.
Please find below the code where I want to include this center-of-mass function. This function tracks the skeleton.
Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
{
    using (SkeletonFrame skeletonFrameData = e.OpenSkeletonFrame())
    {
        if (skeletonFrameData == null)
        {
            return null;
        }
        skeletonFrameData.CopySkeletonDataTo(allSkeletons);
        // get the first tracked skeleton
        Skeleton first = (from s in allSkeletons
                          where s.TrackingState == SkeletonTrackingState.Tracked
                          select s).FirstOrDefault();
        return first;
    }
}
I have tried using the code below in my project, but it does not fit into my code (it will not compile as posted). Can anyone please help me include the center-of-mass code?
foreach (SkeletonData data in skeletonFrame.Skeletons)
{
    SkeletonFrame allskeleton = e.SkeletonFrame;
    // Count passive and active persons, up to six in the group
    int numberOfSkeletonsT = (from s in allskeleton.Skeletons
                              where s.TrackingState == SkeletonTrackingState.Tracked
                              select s).Count();
    int numberOfSkeletonsP = (from s in allskeleton.Skeletons
                              where s.TrackingState == SkeletonTrackingState.PositionOnly
                              select s).Count();
    int totalSkeletons = numberOfSkeletonsP + numberOfSkeletonsT;
    //Console.WriteLine("TotalSkeletons = " + totalSkeletons);
    //======================================================
    if (data.TrackingState == SkeletonTrackingState.PositionOnly)
    {
        foreach (Joint joint in data.Joints)
        {
            if (joint.Position.Z != 0)
            {
                // data.Position is the skeleton's overall position
                // (roughly its center of mass as reported by the SDK)
                double centerofmassX = data.Position.X;
                double centerofmassY = data.Position.Y;
                double centerofmassZ = data.Position.Z;
                Console.WriteLine(centerofmassX + ", " + centerofmassY + ", " + centerofmassZ);
            }
        }
    }
}

See a couple of resources here:
http://mathwiki.ucdavis.edu/Calculus/Vector_Calculus/Multiple_Integrals/Moments_and_Centers_of_Mass#Three-Dimensional_Solids
http://www.slideshare.net/GillianWinters/center-of-mass-presentation
http://en.wikipedia.org/wiki/Locating_the_center_of_mass
Basically, no matter what, you are going to need to find the mass of your user. This can be a simple input; then you can determine how much weight the person puts on each foot and use the equations described in all of these sources. Another option may be to use plumb lines on a planar 2D representation of the user; however, that won't be an accurate 3D center of mass.
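If you want a rough estimate directly from the skeleton, one common approximation is a mass-weighted average of the tracked joint positions. Below is a minimal C# sketch (it assumes using System.Collections.Generic;); the segment fractions are illustrative placeholders standing in for published anthropometric data, and hanging each body segment's mass on a single joint is a deliberate simplification:
// Hypothetical sketch: approximate the body's center of mass as a
// mass-weighted average of joint positions. The fractions below are
// placeholders, not measured anthropometric values.
SkeletonPoint ApproximateCenterOfMass(Skeleton skeleton, double totalMass)
{
    // Assumed per-joint mass fractions (they sum to 1.0).
    var segmentFraction = new Dictionary<JointType, double>
    {
        { JointType.Head, 0.08 }, { JointType.Spine, 0.40 },
        { JointType.HipCenter, 0.14 },
        { JointType.ElbowLeft, 0.03 }, { JointType.ElbowRight, 0.03 },
        { JointType.HandLeft, 0.02 }, { JointType.HandRight, 0.02 },
        { JointType.KneeLeft, 0.10 }, { JointType.KneeRight, 0.10 },
        { JointType.FootLeft, 0.04 }, { JointType.FootRight, 0.04 },
    };
    double mx = 0, my = 0, mz = 0, m = 0;
    foreach (Joint joint in skeleton.Joints)
    {
        double fraction;
        if (joint.TrackingState != JointTrackingState.Tracked ||
            !segmentFraction.TryGetValue(joint.JointType, out fraction))
            continue;
        double jointMass = totalMass * fraction;
        mx += joint.Position.X * jointMass;
        my += joint.Position.Y * jointMass;
        mz += joint.Position.Z * jointMass;
        m += jointMass;
    }
    if (m == 0)
        return skeleton.Position; // fall back to the SDK's skeleton position
    // The weighted average of the point masses is their center of mass.
    return new SkeletonPoint { X = (float)(mx / m), Y = (float)(my / m), Z = (float)(mz / m) };
}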
Here is an example of how to find what amount of mass is on each foot, using the equation found at http://www.vitutor.com/geometry/distance/line_plane.html:
Vector3 v = new Vector3(skeleton.Joints[JointType.Head].Position.X, skeleton.Joints[JointType.Head].Position.Y, skeleton.Joints[JointType.Head].Position.Z);
double mass = 70.0; // kg - example value; in practice take the user's mass as input
double leftM, rightM;
// FloorClipPlane is a Tuple<float, float, float, float> holding the
// floor plane coefficients (A, B, C, D)
double A = sFrame.FloorClipPlane.Item1,
       B = sFrame.FloorClipPlane.Item2,
       C = sFrame.FloorClipPlane.Item3;
// angle between the head vector and the floor plane, converted to degrees
double angle = Math.Asin(Math.Abs(A * v.X + B * v.Y + C * v.Z) / (Math.Sqrt(A * A + B * B + C * C) * Math.Sqrt(v.X * v.X + v.Y * v.Y + v.Z * v.Z))) * 180.0 / Math.PI;
if (angle == 90.0)
{
    leftM = mass / 2.0;
    rightM = mass / 2.0;
}
else
{
    double distanceFrom90 = 90.0 - angle;
    if (distanceFrom90 > 0)
    {
        double leftMultiple = distanceFrom90 / 90.0;
        leftM = mass * leftMultiple;
        rightM = mass - leftM;
    }
    else
    {
        double rightMultiple = -distanceFrom90 / 90.0; // make the multiple positive
        rightM = rightMultiple * mass;
        leftM = mass - rightM;
    }
}
This is of course assuming that the user is on both feet, but you could modify the code to create a new plane based off the user's feet instead of the automatic one generated by the Kinect.
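If you go that route, a plane through three points can be built from a cross product. A rough sketch follows (hypothetical: it picks the two foot joints plus the left ankle as the third point, which only makes sense while the user is standing with feet apart so the points are not collinear):
// Approximate a stance plane from foot/ankle joints instead of the
// SDK's FloorClipPlane. Assumes all three joints are tracked.
Vector3 p1 = new Vector3(skeleton.Joints[JointType.FootLeft].Position.X,
    skeleton.Joints[JointType.FootLeft].Position.Y,
    skeleton.Joints[JointType.FootLeft].Position.Z);
Vector3 p2 = new Vector3(skeleton.Joints[JointType.FootRight].Position.X,
    skeleton.Joints[JointType.FootRight].Position.Y,
    skeleton.Joints[JointType.FootRight].Position.Z);
Vector3 p3 = new Vector3(skeleton.Joints[JointType.AnkleLeft].Position.X,
    skeleton.Joints[JointType.AnkleLeft].Position.Y,
    skeleton.Joints[JointType.AnkleLeft].Position.Z);
// Normal of the plane through p1, p2, p3: n = (p2 - p1) x (p3 - p1)
Vector3 normal = Vector3.Cross(p2 - p1, p3 - p1);
// Plane equation: planeA*x + planeB*y + planeC*z + planeD = 0
double planeA = normal.X, planeB = normal.Y, planeC = normal.Z;
double planeD = -Vector3.Dot(normal, p1);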
To then find the center of mass, you have to choose a datum. I would choose the head, as that is the top of the person and you can measure down from it easily:
double distanceFromDatumLeft = Math.Sqrt(Math.Pow(headX - footLeftX, 2) + Math.Pow(headY - footLeftY, 2) + Math.Pow(headZ - footLeftZ, 2));
double distanceFromDatumRight = Math.Sqrt(Math.Pow(headX - footRightX, 2) + Math.Pow(headY - footRightY, 2) + Math.Pow(headZ - footRightZ, 2));
double momentLeft = distanceFromDatumLeft * leftM;
double momentRight = distanceFromDatumRight * rightM;
double momentSum = momentLeft + momentRight;
//measured in units from the datum
double centerOfGravity = momentSum / mass;
You can then of course show this on the screen by plotting a point centerOfGravity units below the head.
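To draw that, later 1.x SDKs let you map a skeleton-space point back to image coordinates via the sensor's CoordinateMapper. A small sketch, assuming sensor is your KinectSensor, head is skeleton.Joints[JointType.Head], and treating the offset as purely vertical for simplicity:
// Project a skeleton-space point (here: centerOfGravity below the head,
// the datum used above) into depth-image pixel coordinates.
SkeletonPoint comPoint = new SkeletonPoint
{
    X = head.Position.X,
    Y = (float)(head.Position.Y - centerOfGravity),
    Z = head.Position.Z
};
DepthImagePoint screenPoint = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
    comPoint, DepthImageFormat.Resolution640x480Fps30);
// screenPoint.X and screenPoint.Y are pixel coordinates you can plot.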

Related

Unable to detect if a rectangle is touched in libgdx

I have a rectangle with a sprite on it, and I have to detect whether the touch position lies within the rectangle.
This is my code:
if (Gdx.input.isTouched())
{
int x1 = Gdx.input.getX();
int y1 = Gdx.input.getY();
Vector3 inputs = new Vector3(x1, y1, 0);
gamecam.unproject(inputs);
Gdx.app.log("x" + inputs.x, "y" + inputs.y);
Gdx.app.log("rect" + rectangle.x, "rect" + rectangle.y);
if(rectangle.contains(inputs.x,inputs.y))
{
Gdx.app.log("x" + inputs.x, "y" + inputs.y);
Gdx.app.log("rect" + rectangle, "rect" + rectangle.y);
}
}
Rectangle definition:
BodyDef bdef = new BodyDef();
bdef.type = BodyDef.BodyType.StaticBody;
b2body = screen.getWorld().createBody(bdef);
rectangle = new Rectangle();
rectangle.setHeight(55);
rectangle.setWidth(55);
PolygonShape head = new PolygonShape();
rectangle.setX(300);
rectangle.setY(10);
bdef.position.set((rectangle.getX() - rectangle.getWidth() / 2) / MyJungleGame.PPM, (rectangle.getY() - rectangle.getHeight() / 2) / MyJungleGame.PPM);
head.setAsBox(rectangle.getWidth() / 2 / MyJungleGame.PPM, rectangle.getHeight() / 2 / MyJungleGame.PPM);
FixtureDef fdef = new FixtureDef();
fdef.shape = head;
setPosition(b2body.getPosition().x - getWidth() / 2, b2body.getPosition().y - getHeight() / 2);
This is my output:
The small rectangle at the bottom of the screen is the rectangle I created. But nothing happens when I click it. I checked the coordinates and here is the log:
x-0.925: y-0.5625
rect300.0: rect10.0
x-0.925: y-0.5625
rect300.0: rect10.0
x-0.925: y-0.5625
I tried checking the touch using the method below:
if (inputs.x > sprite.getX() && inputs.x < sprite.getX() + sprite.getWidth())
{
if (inputs.y > sprite.getY() && inputs.y < sprite.getY() + sprite.getHeight())
{
Gdx.app.log("sprite touched", "");
}
}
This too doesn't work. Any idea where I made the mistake? Please help. Thanks in advance.
Since you are using Box2D, detecting collisions the usual way is more complicated for new users.
However, looking at your code...
I would advise taking these coordinates in consideration with the PPM of your world:
int x1 = Gdx.input.getX();
int y1 = Gdx.input.getY();
Vector3 inputs = new Vector3(x1, y1, 0);
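Note the unit mismatch your log already shows: the unprojected touch is around x = -0.925, y = -0.5625 (world units, metres), while rectangle.x is 300.0 and rectangle.y is 10.0 (pixels). A contains() test between the two can never succeed. Divide the rectangle's pixel values by MyJungleGame.PPM, or keep the touch in pixel space, so both sides use the same units.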
Also, if you are going to build a collision system with Box2D, you should use this: http://www.aurelienribon.com/blog/2011/07/box2d-tutorial-collision-filtering/

Bidirectional path tracing

I'm making a bidirectional path tracer and I have some trouble.
To be clear:
1) One point light
2) All objects are diffuse
3) All objects are spheres, even walls (they are very large)
4) NO MIS WEIGHTING
The light emission is a 3D vector. The BRDF of a sphere is a 3D vector. Hard coded.
In the main function below I generate EyePath and LightPath then I connect them. At least I try.
In this post I will talk about the main function, then the eye path, then the light path. The connecting function will be discussed once the eye path and light path are correct.
First questions:
Is the generation of the first light point correct?
Do I need to compute this point according to the emission of the light source, or is it just the emission? The line is commented where I'm filling the Vertices structure.
Do I need to translate fromLight in order to put it on the sphere?
The code below is from the main function. Above it there are two for loops going through all pixels. camera.o is the eye; cameraRayDir is the direction to the current pixel.
//The path light starting point is at the same position as the light
Ray fromLight(Vec(0, 24.3, 0), Vec());
Sphere light = spheres[7];
#define PDF 0.15915494309 // 1 / (2 * PI)
for(int i = 0; i < samps; ++i)
{
std::vector<Vertices> PathEye;
std::vector<Vertices> PathLight;
Vec cameraRayDir = cx * (double(x) / w - .5) + cy * (double(y) / h - .5) + camera.d;
Ray rayEye(camera.o, cameraRayDir.norm());
// Hemisphere oriented towards the top
fromLight.d = generateRayInHemisphere(fromLight.o,Vec(0,1,0)).d;
double f = clamp(n.dot(fromLight.d.norm()));
Vertices vert;
vert.d = fromLight.d;
vert.x = fromLight.o;
vert.id = 7;
vert.cos = f;
vert.n = Vec(0,1,0).norm();
// this one ?
//vert.couleur = spheres[7].e * f / PDF;
// Or this one ?
vert.couleur = spheres[7].e;
PathLight.push_back(vert);
int sizeEye = generateEyePath(PathEye, rayEye, maxDepth);
int sizeLight = generateLightPath(PathLight, fromLight, maxDepth);
for (int s = 0; s < sizeLight; ++s)
{
for (int t = 1; t < sizeEye; ++t)
{
int depth = t + s - 1;
if ((s == 0 && t == 0) || depth < 0 || depth > maxDepth)
continue;
pixelValue = pixelValue + connectPaths(PathEye, PathLight, s, t);
}
}
}
For the eye path I intersect the geometry, then I compute the illumination according to the distance to the light. The colour is black if the point is in shadow.
Second question: for the eye path and the direct illumination, is the computation good? I've seen in many codes that people use the pdf even in direct illumination, but I'm only using a point light and spheres.
int generateEyePath(std::vector<Vertices>& v, Ray eye, int maxDepth)
{
double t;
int id = 0;
Vertices vert;
int RussianRoulette;
while(v.size() <= maxDepth)
{
if(distribRREye(generatorRREye) < 10)
break;
// Intersect all the geometry
// id is the id of the intersected geometry in an array
intersect(eye, t, id);
const Sphere& obj = spheres[id];
// Intersection point
Vec x = eye.o + eye.d * t;
// normal
Vec n = (x - obj.p).norm();
Vec direction = light.p - x;
// Shadow ray
Ray RaytoLight = Ray(x, direction.norm());
const float distance = direction.length();
// shadow
const bool visibility = intersect(RaytoLight, t, id);
const Sphere &lumiere = spheres[id];
float degree = clamp(n.dot((lumiere.p - x).norm()));
// If the intersected geometry is not a light, then in shadow
if(lumiere.e.x == 0)
{
vert.couleur = Vec();
}
else // else we compute the colour
// obj.c is the brdf, lumiere.e is the emission
vert.couleur = (obj.c).mult(lumiere.e / (distance * distance)) * degree;
vert.x = x;
vert.id = id;
vert.n = n;
vert.d = eye.d.norm();
vert.cos = degree;
v.push_back(vert);
eye = generateRayInHemisphere(x,n);
}
return v.size();
}
For the light path, I compute each point according to the previous one and the values at that point, as in a common path tracer.
Third question: is the colour computation good?
int generateLightPath(std::vector<Vertices>& v, Ray fromLight, int maxDepth)
{
double t;
int id = 0;
Vertices vert;
Vec previous;
while(v.size() <= maxDepth)
{
if(distribRRLight(generatorRRLight) < 10)
break;
previous = v.back().couleur;
intersect(fromLight, t, id);
// intersected geometry
const Sphere& obj = spheres[id];
// Intersection point
Vec x = fromLight.o + fromLight.d * t;
// normal
Vec n = (x - obj.p).norm();
double f = clamp(n.dot(fromLight.d.norm()));
// obj.c is the brdf
vert.couleur = previous.mult(((obj.c / M_PI) * f) / PDF);
vert.x = x;
vert.id = id;
vert.n = n;
vert.d = fromLight.d.norm();
vert.cos = f;
v.push_back(vert);
fromLight = generateRayInHemisphere(x,n);
}
return v.size();
}
For the moment I get this result (render omitted).
The connecting function will come once the eye path and light path are good.
Thank you all.
Try the spherical reference scene mentioned in the paper below. I think you can then work out most of your questions by yourself, since it has an analytical solution.
https://www.researchgate.net/publication/221546261_Testing_Monte-Carlo_Global_Illumination_Methods_with_Analytically_Computable_Scenes
It would save you time to implement and verify your understanding of path tracing and light tracing first, then try to combine them with weights.

Filter latitude and longitude records based on a given latitude and longitude within n kilometres [duplicate]

I have data with latitude and longitude stored in my SQLite database, and I want to get the nearest locations to the parameters I put in (ex. My current location - lat/lng, etc.).
I know that this is possible in MySQL, and I've done quite some research that SQLite needs a custom external function for the Haversine formula (calculating distance on a sphere), but I haven't found anything that is written in Java and works.
Also, if I want to add custom functions, I need the org.sqlite.jar (for org.sqlite.Function), and that adds unnecessary size to the app.
The other side of this is that I need the ORDER BY function from SQL, because displaying the distance alone isn't much of a problem - I already did it in my custom SimpleCursorAdapter - but I can't sort the data, because I don't have the distance column in my database. That would mean updating the database every time the location changes, and that's a waste of battery and performance. So if someone has any idea on sorting the cursor by a column that's not in the database, I'd be grateful too!
I know there are tons of Android apps out there that use this function, but can someone please explain the magic.
By the way, I found this alternative: Query to get records based on Radius in SQLite?
It's suggesting to make 4 new columns for cos and sin values of lat and lng, but is there any other, not so redundant way?
1) First, filter your SQLite data with a good approximation to decrease the amount of data you need to evaluate in your Java code. Use the following procedure for this purpose:
To have a deterministic threshold and a more accurate filter on the data, it is better to calculate the four locations that are radius metres to the north, west, east and south of your central point in your Java code, and then check easily with the less-than and greater-than SQL operators (>, <) whether the points in your database are in that rectangle or not.
The method calculateDerivedPosition(...) calculates those points for you (p1, p2, p3, p4 in the picture).
/**
 * Calculates the end-point from a given source at a given range (meters)
 * and bearing (degrees). This method uses simple geometry equations to
 * calculate the end-point.
 *
 * @param point
 *            Point of origin
 * @param range
 *            Range in meters
 * @param bearing
 *            Bearing in degrees
 * @return End-point from the source given the desired range and bearing.
 */
public static PointF calculateDerivedPosition(PointF point,
double range, double bearing)
{
double EarthRadius = 6371000; // m
double latA = Math.toRadians(point.x);
double lonA = Math.toRadians(point.y);
double angularDistance = range / EarthRadius;
double trueCourse = Math.toRadians(bearing);
double lat = Math.asin(
Math.sin(latA) * Math.cos(angularDistance) +
Math.cos(latA) * Math.sin(angularDistance)
* Math.cos(trueCourse));
double dlon = Math.atan2(
Math.sin(trueCourse) * Math.sin(angularDistance)
* Math.cos(latA),
Math.cos(angularDistance) - Math.sin(latA) * Math.sin(lat));
double lon = ((lonA + dlon + Math.PI) % (Math.PI * 2)) - Math.PI;
lat = Math.toDegrees(lat);
lon = Math.toDegrees(lon);
PointF newPoint = new PointF((float) lat, (float) lon);
return newPoint;
}
And now create your query:
PointF center = new PointF(x, y);
final double mult = 1; // mult = 1.1; is more reliable
PointF p1 = calculateDerivedPosition(center, mult * radius, 0);
PointF p2 = calculateDerivedPosition(center, mult * radius, 90);
PointF p3 = calculateDerivedPosition(center, mult * radius, 180);
PointF p4 = calculateDerivedPosition(center, mult * radius, 270);
strWhere = " WHERE "
+ COL_X + " > " + String.valueOf(p3.x) + " AND "
+ COL_X + " < " + String.valueOf(p1.x) + " AND "
+ COL_Y + " < " + String.valueOf(p2.y) + " AND "
+ COL_Y + " > " + String.valueOf(p4.y);
COL_X is the name of the column in the database that stores latitude values and COL_Y is for longitude.
So you have some data that are near your central point with a good approximation.
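(This works because p1..p4 lie radius metres due north, east, south and west of the centre, so the rectangle they bound circumscribes the radius circle: the WHERE clause returns a superset of the points you want, and step 2 below trims the corners. The mult = 1.1 margin guards against the small error of treating latitude/longitude as a flat grid.)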
2) Now you can loop on these filtered data and determine if they are really near your point (in the circle) or not using the following methods:
public static boolean pointIsInCircle(PointF pointForCheck, PointF center,
        double radius) {
    return getDistanceBetweenTwoPoints(pointForCheck, center) <= radius;
}
public static double getDistanceBetweenTwoPoints(PointF p1, PointF p2) {
double R = 6371000; // m
double dLat = Math.toRadians(p2.x - p1.x);
double dLon = Math.toRadians(p2.y - p1.y);
double lat1 = Math.toRadians(p1.x);
double lat2 = Math.toRadians(p2.x);
double a = Math.sin(dLat / 2) * Math.sin(dLat / 2) + Math.sin(dLon / 2)
* Math.sin(dLon / 2) * Math.cos(lat1) * Math.cos(lat2);
double c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
double d = R * c;
return d;
}
Enjoy!
I used and customized this reference and completed it.
Chris's answer is really useful (thanks!), but will only work if you are using rectilinear coordinates (e.g. UTM or OS grid references). If using degrees for lat/lng (e.g. WGS84), the above only works at the equator. At other latitudes, you need to decrease the impact of longitude on the sort order. (Imagine you're close to the north pole: a degree of latitude is still the same as it is anywhere, but a degree of longitude may only be a few feet. That will make the sort order incorrect.)
If you are not at the equator, pre-calculate the fudge-factor, based on your current latitude:
<fudge> = Math.pow(Math.cos(Math.toRadians(<lat>)),2);
Then order by:
((<lat> - LAT_COLUMN) * (<lat> - LAT_COLUMN) +
(<lng> - LNG_COLUMN) * (<lng> - LNG_COLUMN) * <fudge>)
It's still only an approximation, but much better than the first one, so sort order inaccuracies will be much rarer.
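As a worked example: at latitude 60°, <fudge> = cos(60°)^2 = 0.25. A degree of longitude there spans only half the ground distance of a degree of latitude, and because the sort expression uses squared differences, the longitude term must be scaled by 0.5^2 = 0.25, which is exactly what the fudge factor does.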
I know this has been answered and accepted but thought I'd add my experiences and solution.
Whilst I was happy to do a haversine function on the device to calculate the accurate distance between the user's current position and any particular target location, there was a need to sort and limit the query results in order of distance.
The less-than-satisfactory solution is to return the lot and sort and filter after the fact, but this would result in a second cursor and many unnecessary results being returned and discarded.
My preferred solution was to pass in a sort order of the squared delta values of the long and lats:
((<lat> - LAT_COLUMN) * (<lat> - LAT_COLUMN) +
(<lng> - LNG_COLUMN) * (<lng> - LNG_COLUMN))
There's no need to do the full haversine just for a sort order, and there's no need to square-root the results, so SQLite can handle the calculation.
EDIT:
This answer is still receiving love. It works fine in most cases, but if you need a little more accuracy, please check out the answer by @Teasel above, which adds a "fudge" factor that fixes inaccuracies that increase as the latitude approaches 90.
In order to increase performance as much as possible, I suggest improving @Chris Simpson's idea with the following ORDER BY clause:
ORDER BY (<L> - <A> * LAT_COL - <B> * LON_COL + LAT_LON_SQ_SUM)
In this case you should pass the following values from code:
<L> = center_lat^2 + center_lon^2
<A> = 2 * center_lat
<B> = 2 * center_lon
You should also store LAT_LON_SQ_SUM = LAT_COL^2 + LON_COL^2 as an additional column in the database, populating it when inserting your entities. This slightly improves performance when extracting a large amount of data.
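This works because expanding the squared distance used in the other answers gives the same quantity, just regrouped:
(center_lat - LAT_COL)^2 + (center_lon - LON_COL)^2
    = (center_lat^2 + center_lon^2) - (2 * center_lat) * LAT_COL - (2 * center_lon) * LON_COL + (LAT_COL^2 + LON_COL^2)
    = <L> - <A> * LAT_COL - <B> * LON_COL + LAT_LON_SQ_SUM
so at query time SQLite only evaluates two multiplications and a few additions per row.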
Try something like this:
//locations to calculate difference with
Location me = new Location("");
Location dest = new Location("");
//set lat and long of comparison obj
me.setLatitude(_mLat);
me.setLongitude(_mLong);
//init to circumference of the Earth
float smallest = 40008000.0f; //m
//var to hold id of db element we want
Integer id = 0;
//step through results
while(_myCursor.moveToNext()){
//set lat and long of destination obj
dest.setLatitude(_myCursor.getFloat(_myCursor.getColumnIndexOrThrow(DataBaseHelper._FIELD_LATITUDE)));
dest.setLongitude(_myCursor.getFloat(_myCursor.getColumnIndexOrThrow(DataBaseHelper._FIELD_LONGITUDE)));
//grab distance between me and the destination
float dist = me.distanceTo(dest);
//if this is the smallest dist so far
if(dist < smallest){
//store it
smallest = dist;
//grab its id
id = _myCursor.getInt(_myCursor.getColumnIndexOrThrow(DataBaseHelper._FIELD_ID));
}
}
After this, id contains the item you want from the database so you can fetch it:
//now we have traversed all the data, fetch the id of the closest event to us
_myCursor = _myDBHelper.fetchID(id);
_myCursor.moveToFirst();
//get lat and long of nearest location to user, used to push out to map view
_mLatNearest = _myCursor.getFloat(_myCursor.getColumnIndexOrThrow(DataBaseHelper._FIELD_LATITUDE));
_mLongNearest = _myCursor.getFloat(_myCursor.getColumnIndexOrThrow(DataBaseHelper._FIELD_LONGITUDE));
Hope that helps!

How to use a skeletal joint to act as a cursor using bounds (no gestures)

I just want to be able to do something when my skeletal joint (x, y, z) coordinates are over the (x, y, z) coordinates of the button. I have the following code, but somehow it doesn't work properly: as soon as my hand moves it will do something, without my hand reaching the button.
if (skeletonFrame != null)
{
//int skeletonSlot = 0;
Skeleton[] skeletonData = new Skeleton[skeletonFrame.SkeletonArrayLength];
skeletonFrame.CopySkeletonDataTo(skeletonData);
Skeleton playerSkeleton = (from s in skeletonData where s.TrackingState == SkeletonTrackingState.Tracked select s).FirstOrDefault();
if (playerSkeleton != null)
{
Joint rightHand = playerSkeleton.Joints[JointType.HandRight];
handPosition = new Vector2((((0.5f * rightHand.Position.X) + 0.5f) * (640)), (((-0.5f * rightHand.Position.Y) + 0.5f) * (480)));
var rightHands = playerSkeleton.Joints[JointType.HandRight];
var rightHandsX = rightHands.Position.X;
var rightHandsY = rightHands.Position.Y;
var rightHandsZ = rightHands.Position.Z;
if (Math.Sqrt(Math.Pow(rightHandsX - position.X, 2) + Math.Pow(rightHandsY - position.Y, 2)) < 20)
{
// Exit();
}
if (Math.Sqrt(Math.Pow(rightHandsX - start_bttn.Bounds.X, 1) + Math.Pow(rightHandsY - start_bttn.Bounds.Y, 1)) < 10)
{
currentGameState = GameState.Selection;
// Exit();
}
if ((rightHandsX < GraphicsDevice.Viewport.Width / 2 + 150 && rightHandsX > GraphicsDevice.Viewport.Width / 2 - 75) && (rightHandsY > GraphicsDevice.Viewport.Height / 2 && rightHandsY < GraphicsDevice.Viewport.Height / 2 + 50))
{
currentGameState = GameState.Selection;
}
}
Here is my hand tracking function. See if it does what you want, or gets you closer...
private void TrackHandMovement(Skeleton skeleton)
{
Joint leftHand = skeleton.Joints[JointType.HandLeft];
Joint rightHand = skeleton.Joints[JointType.HandRight];
Joint leftShoulder = skeleton.Joints[JointType.ShoulderLeft];
Joint rightShoulder = skeleton.Joints[JointType.ShoulderRight];
Joint rightHip = skeleton.Joints[JointType.HipRight];
// the right hand joint is being tracked
if (rightHand.TrackingState == JointTrackingState.Tracked)
{
// the hand is sufficiently in front of the shoulder
if (rightShoulder.Position.Z - rightHand.Position.Z > 0.4)
{
double xScaled = (rightHand.Position.X - leftShoulder.Position.X) / ((rightShoulder.Position.X - leftShoulder.Position.X) * 2) * SystemParameters.PrimaryScreenWidth;
double yScaled = (rightHand.Position.Y - rightShoulder.Position.Y) / (rightHip.Position.Y - rightShoulder.Position.Y) * SystemParameters.PrimaryScreenHeight;
// the hand has moved enough to update screen position (jitter control / smoothing)
if (Math.Abs(rightHand.Position.X - xPrevious) > MoveThreshold || Math.Abs(rightHand.Position.Y - yPrevious) > MoveThreshold)
{
RightHandX = xScaled;
RightHandY = yScaled;
xPrevious = rightHand.Position.X;
yPrevious = rightHand.Position.Y;
// reset the tracking timer
trackingTimerCounter = 10;
}
}
}
}
There is a bit of math in there to translate the hand position to the screen position. Different strokes for different folks, but my logic is:
Shoulders = top of screen
Hips = bottom of screen
Left shoulder = left-most on screen
To get the right-most screen position, I take the distance between the left and right shoulders and add it to the right shoulder.
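In code, that right-edge reference looks like the following small sketch (reusing the rightShoulder and leftShoulder joints from the method above):
// The x-position that maps to the right edge of the screen is the right
// shoulder plus one shoulder-width; this is why the denominator above is
// (rightShoulder.Position.X - leftShoulder.Position.X) * 2.
float shoulderWidth = rightShoulder.Position.X - leftShoulder.Position.X;
float rightEdgeX = rightShoulder.Position.X + shoulderWidth;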

Getting the minimum number of MKMapRects from an MKPolygon

So I have a function that takes two MKMapRects, where the second intersects the first one. The function creates an MKPolygon that is the first rect without the intersecting parts:
-(void) polygons:(MKMapRect)fullRect exclude:(MKMapRect)excludeArea{
NSLog(#"Y is: %f height: %f",excludeArea.origin.y,excludeArea.size.height);
double top = excludeArea.origin.y - fullRect.origin.y;
double lft = excludeArea.origin.x - fullRect.origin.x;
double btm = (fullRect.origin.y + fullRect.size.height) - (excludeArea.origin.y + excludeArea.size.height);
double rgt = (fullRect.origin.x + fullRect.size.width) - (excludeArea.origin.x + excludeArea.size.width);
double ot = fullRect.origin.y, it = (ot + top);
double ol = fullRect.origin.x, il = (ol + lft);
double ob = (fullRect.origin.y + fullRect.size.height), ib = (ob - btm);
double or = (fullRect.origin.x + fullRect.size.width), ir = (or - rgt);
MKMapPoint points[11] = {{ol,it}, {ol,ot}, {or,ot}, {or,ob}, {ol,ob}, {ol,it}, {il,it}, {ir,it}, {ir,ib}, {il,ib}, {il,it}};
MKPolygon *polygon = [MKPolygon polygonWithPoints:points count:11];
}
And my question is now: how do I get the minimum number of MKMapRects from this MKPolygon? I have done some googling as well as looking through the forum, but haven't found anything.
EDIT:
So the goal is the following:
I have an MKMapRect, rect1, and a list of rectangles, rectList, containing MKMapRects that intersect rect1. What I want to do is create a rectilinear MKPolygon from rect1, remove the surface of all the MKMapRects in rectList from it, and then create the minimum number of MKMapRects from the resulting rectilinear MKPolygon.
Right now the problem is the following: I am able to create a polygon when removing one MKMapRect from rect1, but I don't know how to remove the subsequent map rects from rect1, and I don't know how to extract the minimum set of MKMapRects from the polygon created.
Best regards
Peep
I'm not sure if this is what you're looking for or if I understand the question fully, but if all you need to know is the minimum number of rectangles in a polygon that's created by subtracting one rectangle from another, you should be able to do it by counting how many corner points of the second rectangle are contained in the first rectangle. In code (written out with MKMapRect):
int minNumRects(MKMapRect r1, MKMapRect r2) {
    // the four corners of r2
    MKMapPoint corners[4] = {
        MKMapPointMake(MKMapRectGetMinX(r2), MKMapRectGetMinY(r2)),
        MKMapPointMake(MKMapRectGetMaxX(r2), MKMapRectGetMinY(r2)),
        MKMapPointMake(MKMapRectGetMinX(r2), MKMapRectGetMaxY(r2)),
        MKMapPointMake(MKMapRectGetMaxX(r2), MKMapRectGetMaxY(r2))
    };
    int numPointsContained = 0;
    for (int i = 0; i < 4; i++) {
        if (MKMapRectContainsPoint(r1, corners[i])) {
            numPointsContained++;
        }
    }
    if (numPointsContained == 1) {
        return 2; // r2 overlaps a corner of r1
    } else if (numPointsContained == 2) {
        return 3; // r2 cuts into one side of r1
    } else if (numPointsContained == 4) {
        return 4; // r2 is strictly inside r1
    } else {
        return 0;
    }
}
P.S. - This assumes that the rectangles are axis-aligned, but as far as I know that's the case with MKMapRects.
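To see why that mapping holds for the leftover polygon (r1 minus r2): one contained corner means r2 overlaps a corner of r1, leaving an L-shape that decomposes into 2 rectangles; two contained corners mean r2 cuts into one side of r1, leaving a U-shape needing 3; four contained corners mean r2 is strictly inside r1, leaving a ring needing 4.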