Get coordinates of head in Kinect [closed] - kinect

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
I am making an app in OpenGL (C++) with the Kinect.
I want to get the coordinates of the head (skeleton).
I saw the function:
void CSkeletalViewerApp::Nui_DrawSkeletonSegment( NUI_SKELETON_DATA * pSkel, int numJoints, ... ),
but I don't know how to use it or how to extract the coordinates of the head.

Judging from the code snippet you posted, we can assume you are using Microsoft's Kinect for Windows SDK.
The coordinates of the joints are stored in the SkeletonPositions member of the NUI_SKELETON_DATA structure. Instances of this structure can be found in the SkeletonData member of the NUI_SKELETON_FRAME structure, which is provided whenever the skeleton tracking engine finishes tracking.
Of course, this will only work if the sensor is initialized properly. Have a look at the sample projects that come with the SDK, and read Microsoft's online documentation.
Also, be aware that the Kinect's coordinate system has its origin at the sensor and provides coordinate values ranging roughly from -2.2 to 2.2 on the x-axis, from -1.6 to 1.6 on the y-axis, and from 0.0 to 4.0 on the z-axis (depth). Thus, you might need to apply some transformations.
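A minimal sketch of the joint lookup and the range mapping described above. The `SkeletonPositions` member and the `NUI_SKELETON_POSITION_HEAD` index match the SDK, but the types here are simplified stand-ins (the real ones live in NuiApi.h), and the normalization is only an illustration of the kind of transform you might apply:

```cpp
#include <cmath>

// Simplified stand-ins for the SDK types; the real Vector4 and
// NUI_SKELETON_DATA are declared in NuiApi.h.
struct Vector4 { float x, y, z, w; };
const int NUI_SKELETON_POSITION_COUNT = 20;
const int NUI_SKELETON_POSITION_HEAD = 3;

struct SkeletonData {  // stands in for NUI_SKELETON_DATA
    Vector4 SkeletonPositions[NUI_SKELETON_POSITION_COUNT];
};

// Pull the head joint out of a tracked skeleton.
Vector4 headPosition(const SkeletonData& skel) {
    return skel.SkeletonPositions[NUI_SKELETON_POSITION_HEAD];
}

// Map a Kinect-space coordinate into OpenGL normalized device
// coordinates, assuming the approximate ranges quoted above:
// x in [-2.2, 2.2], y in [-1.6, 1.6], z (depth) in [0.0, 4.0].
Vector4 kinectToNdc(const Vector4& p) {
    Vector4 out;
    out.x = p.x / 2.2f;                 // roughly [-1, 1]
    out.y = p.y / 1.6f;                 // roughly [-1, 1]
    out.z = p.z / 4.0f * 2.0f - 1.0f;   // [0, 4] -> [-1, 1]
    out.w = 1.0f;
    return out;
}
```

In the real SDK you would get the `NUI_SKELETON_DATA` instances from the `SkeletonData` array of an `NUI_SKELETON_FRAME`, after checking that the skeleton's tracking state indicates it is actually tracked.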

Related

Draw along a path detection in iOS [closed]

Closed 9 years ago.
I am writing a handwriting app for kids.
I need to enforce writing along a defined path, like in the Dora ABC apps. For example, writing the letter A.
When the user touches and draws, the stroke should appear only while the finger is along the defined path, and the app should detect whether the user followed that path.
My paths are defined with UIBezierPath.
I tried using CGRectContainsPoint, but it seems to require too much code once I have so many letters of the alphabet.
Any suggestion is much appreciated.
Thanks!
You could try another approach:
Use custom gestures in iOS. With this feature you can define gestures for the letters of the alphabet and use them for detection. Demo code is available at https://github.com/britg/MultistrokeGestureRecognizer-iOS
Or, if you want to implement it from scratch: http://blog.federicomestrone.com/2012/01/31/creating-custom-gesture-recognisers-for-ios/
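Whichever route you take, the core of "only draw while the finger is on the path" is a point-to-path distance test against a flattened version of the Bezier path. The geometry is language-agnostic; here it is sketched in C++ with hypothetical names (in the app itself you would flatten the UIBezierPath into points and run the same check in Objective-C):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Distance from point p to the line segment ab.
double distToSegment(Pt p, Pt a, Pt b) {
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;
    // Project p onto ab, clamped to the segment's endpoints.
    double t = (len2 == 0) ? 0 : ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2;
    t = std::fmax(0.0, std::fmin(1.0, t));
    double cx = a.x + t * dx, cy = a.y + t * dy;
    return std::hypot(p.x - cx, p.y - cy);
}

// True if the touch lies within `tolerance` of the flattened path,
// given as a polyline of sample points along the letter's stroke.
bool isOnPath(Pt touch, const std::vector<Pt>& path, double tolerance) {
    for (size_t i = 0; i + 1 < path.size(); ++i)
        if (distToSegment(touch, path[i], path[i + 1]) <= tolerance)
            return true;
    return false;
}
```

Because the letters are data (point lists) rather than code, this avoids writing per-letter containment logic, which is the problem the CGRectContainsPoint approach runs into.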

How to identify and extract vector graphics from PDF using xpdf library? [closed]

Closed 9 years ago.
Does anyone have sample code demonstrating how to extract vector graphics objects (such as those representing charts and flow diagrams) from a PDF using the xpdf library? There doesn't seem to be any documentation available on the web for the xpdf library, nor could I find any sample code that uses the library to extract information from a PDF. I am going through xpdf's code base, but any pointers to its documentation or sample code would be very helpful.
The OutputDev class has virtual member definitions for stroke, fill, clip, and so on. Just implement those and extract the path and colour information from GfxState. You'll find path iteration in the OutputDev-based classes in the xpdf code base, such as TextOutputDev or ImageOutputDev.
Edit: this OutputDev may give you the example you need.
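The pattern the answer describes, sketched with simplified stand-ins: xpdf's real `OutputDev` and `GfxState` (in OutputDev.h and GfxState.h) carry far more state (subpaths, transforms, full colour spaces), but the shape of the solution is a subclass that records paths instead of rendering them:

```cpp
#include <string>
#include <vector>

// Simplified stand-ins for xpdf's GfxState/GfxPath; the real classes
// expose the current path, CTM, and colour state through accessors.
struct PathPoint { double x, y; };
struct FakeGfxState {
    std::vector<PathPoint> path;  // current path, already flattened
    double r = 0, g = 0, b = 0;   // current stroke/fill colour
};

// Mimics the OutputDev interface: the PDF interpreter calls these
// virtual hooks as it executes the page's drawing operators.
class OutputDevLike {
public:
    virtual ~OutputDevLike() {}
    virtual void stroke(FakeGfxState* state) = 0;
    virtual void fill(FakeGfxState* state) = 0;
};

// Collects every path handed to stroke()/fill() instead of drawing it.
class VectorCollector : public OutputDevLike {
public:
    struct Op { std::string kind; std::vector<PathPoint> path; };
    std::vector<Op> ops;
    void stroke(FakeGfxState* s) override { ops.push_back({"stroke", s->path}); }
    void fill(FakeGfxState* s) override { ops.push_back({"fill", s->path}); }
};
```

With the real library, you would pass your subclass to the page-rendering call (the way TextOutputDev and ImageOutputDev are used) and iterate the `GfxPath` inside each hook.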

What is SensorKinect? [closed]

Closed 10 years ago.
So, from the following link (Difference between OpenNi and NITE), I understand what OpenNI and NITE are. What is SensorKinect, and how does it fit into the picture? From my current understanding, we don't need it.
In case anyone is wondering, I intend to use these libraries for skeleton and depth tracking.
SensorKinect is an OpenNI module that allows OpenNI to talk to the Kinect. Basically, OpenNI and NITE are middleware, and SensorKinect is the hardware driver.
If you're using a Kinect, you need it. If you are using a PrimeSense sensor, you don't.

General approach for a Mac OS X drawing app (with Obj-C) [closed]

Closed 10 years ago.
I am planning to make a simple OS X drawing/painting app in Objective-C/Cocoa, and I'm thinking that the best approach is (in a nutshell) to use Quartz in an NSView subclass.
Question: should I look into using OpenGL, or will Quartz do the trick? Would using OpenGL mean a big performance advantage?
The app would be very basic and should (for example) be able to:
- paint in color
- paint with bitmap textures
- use gradient fills
- support programmatic paint brushes
There's already "an app for that" - it's the Sketch example available from Apple's developer site.

How do I implement an eraser tool for a drawing/painting app? [closed]

Closed 11 years ago.
I am trying to develop a drawing/painting app for my portfolio.
The functions I have now are "Write", "Change Size", and "Change Color".
Now I am trying to implement an eraser tool that will completely erase what's drawn. What I have done so far is copy the code I used for writing, using white as the color, but instead of erasing the drawing it just paints over it. Is that the right way, or is there a better way to implement this?
I honestly don't know too much about OpenGL as of this writing, but I suspect that painting over with white is the wrong thing to do. Consider what happens if you decide to allow users to change the background color: your code gets more complicated. What if you decide to add support for gradients?
I suggest finding a way to simply clear the data at a given point, for example where your brush is. You might want to look at an open-source graphics program, like GIMP, to see how they do it.
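One common way to "clear the data" is to keep the drawing on its own transparent layer and have the eraser set pixels back to fully transparent, so the background (solid, gradient, or otherwise) shows through unchanged. A minimal sketch over a raw RGBA buffer, with all names hypothetical (in OpenGL you would get a similar effect by drawing the brush into a texture or FBO with a blend mode that zeroes the destination):

```cpp
#include <cstdint>
#include <vector>

// A tiny RGBA8 canvas; the drawing layer is assumed to be composited
// over the background, so "erased" simply means alpha = 0.
struct Canvas {
    int w, h;
    std::vector<uint8_t> px;  // 4 bytes per pixel: R, G, B, A
    Canvas(int w_, int h_) : w(w_), h(h_), px(w_ * h_ * 4, 0) {}
};

// Erase by clearing pixels under a circular brush instead of painting
// the background colour, so the background stays free to change later.
void erase(Canvas& c, int cx, int cy, int radius) {
    for (int y = cy - radius; y <= cy + radius; ++y) {
        for (int x = cx - radius; x <= cx + radius; ++x) {
            if (x < 0 || y < 0 || x >= c.w || y >= c.h) continue;
            int dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy > radius * radius) continue;  // round brush
            uint8_t* p = &c.px[(y * c.w + x) * 4];
            p[0] = p[1] = p[2] = p[3] = 0;  // fully transparent
        }
    }
}
```

The key design point is that erasing is a destructive operation on the drawing layer, not another paint stroke, so it stays correct no matter what is behind that layer.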