Kinect Windows SDK 1.5 missing references - kinect

I'm a newcomer to the Kinect environment, and I was trying to modify some classes but couldn't find them:
SkeletonData skeleton;
Dictionary jointMap;
SkeletonData and JointID are not found as references.
What I'm trying to do is apply this example: http://www.youtube.com/watch?v=g-3EQ6xcFM8&feature=related
This is the source code used for the modification, but I don't know where to put it to get it working:
http://codepaste.net/8j3pef
I need to display the angle of each joint of the skeleton, so if anyone could help me or send me a project doing this for just one joint, I'll then apply it to the others.
Many thanks in advance

Some of the APIs have changed and you will need to migrate your code.
SkeletonData is now Skeleton.
JointID is now JointType.
See: http://robrelyea.wordpress.com/2012/02/01/k4w-code-migration-from-beta2-to-v1-0-managed/
Scroll down to see Skeleton API Changes
I don't have a project for you, sorry.
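Not the code from the linked paste, but a minimal sketch (assuming the Kinect for Windows SDK 1.5 managed API, Microsoft.Kinect) of how the angle at a joint can be computed once you have a tracked Skeleton; the joints used at the end are just an example:

using System;
using Microsoft.Kinect;

static class JointAngles
{
    // Angle in degrees at "center", formed by the segments center->a and center->b.
    public static double AngleAt(Skeleton skeleton, JointType a, JointType center, JointType b)
    {
        SkeletonPoint pa = skeleton.Joints[a].Position;
        SkeletonPoint pc = skeleton.Joints[center].Position;
        SkeletonPoint pb = skeleton.Joints[b].Position;

        // Vectors from the center joint out to the two neighbouring joints.
        double v1x = pa.X - pc.X, v1y = pa.Y - pc.Y, v1z = pa.Z - pc.Z;
        double v2x = pb.X - pc.X, v2y = pb.Y - pc.Y, v2z = pb.Z - pc.Z;

        double dot = v1x * v2x + v1y * v2y + v1z * v2z;
        double len1 = Math.Sqrt(v1x * v1x + v1y * v1y + v1z * v1z);
        double len2 = Math.Sqrt(v2x * v2x + v2y * v2y + v2z * v2z);
        if (len1 == 0 || len2 == 0) return 0;

        double cos = Math.Max(-1.0, Math.Min(1.0, dot / (len1 * len2)));
        return Math.Acos(cos) * 180.0 / Math.PI;
    }
}

For example, inside a SkeletonFrameReady handler, the right elbow angle would be JointAngles.AngleAt(skeleton, JointType.ShoulderRight, JointType.ElbowRight, JointType.WristRight); the same call with other joint types gives the remaining angles.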

Related

Obtaining Sensor Data and motor control values?

I am currently using Webots and am new to the software framework. I need to implement a robot and get the sensor data and motor control values from it. The robot is self-made, not one of the robots from the already-implemented tutorials. Can someone elaborate on how to get those values? I am trying to implement it in C++, so could someone help me with the syntax of the code to obtain the values?
You should start by following the Webots tutorials; there is one specifically about controllers which explains exactly what you are trying to do and is available in C++: https://cyberbotics.com/doc/guide/tutorial-4-more-about-controllers?tab-language=c++
There is also a tutorial for building your own robot: https://cyberbotics.com/doc/guide/tutorial-6-4-wheels-robot?tab-language=c++
In any case, I would recommend following at least tutorials 1 to 6 to get familiar with Webots.

Xamarin Forms - How to get camera stream and play with it?

For a while now, without success, I have been trying to achieve a cross-platform solution that lets me use a custom camera with custom functionality. However, no one on the internet seems to have gotten it done on every platform (often only Android & iOS are implemented, but not UWP), and I still don't understand why...
For the past months I've been searching for how to make something like a dependency service from which you can get the stream/frames of the camera and, once you have them, put them into a Xamarin.Forms.Image.
This design would allow developers to implement functions inside the dependency service, such as recording video or taking pictures from the native camera stream.
You could say, "But you can already use the NuGet package Xam.Plugin.Media from James Montemagno." Yes, but with his package you call the native built-in camera, so you can't implement your own design or your own functionality.
So my question is: does someone have any tips or any project that can help realize this idea? If I can make it work, I will create a project on my public GitHub to help future people who would like to build it.
Thanks for any help.
PS: Here are some results of the research I've done: https://forums.xamarin.com/discussion/comment/284359/#Comment_284359
This article looks to be similar to what you are after:
Full Page Camera in Xamarin
It derives a camera page from ContentPage, then creates platform-specific custom renderers based on PageRenderer.
Bonus - there is source code on GitHub
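To make that pattern concrete, here is a rough sketch (not the article's actual code; CameraDemo, CameraPage and CameraPageRenderer are made-up names): the shared project declares an essentially empty page, and each platform registers its own renderer for it, which is where the native camera work (AVCaptureSession, Camera2, MediaCapture) goes.

using Xamarin.Forms;

namespace CameraDemo
{
    // Shared, platform-agnostic page. It stays empty; each platform's
    // PageRenderer replaces it with a live camera preview.
    public class CameraPage : ContentPage
    {
        public CameraPage()
        {
            Title = "Camera";
            BackgroundColor = Color.Black;
        }
    }
}

// In the iOS project (and similarly on Android/UWP) a renderer is registered:
//
// [assembly: ExportRenderer(typeof(CameraDemo.CameraPage),
//                           typeof(CameraDemo.iOS.CameraPageRenderer))]
// namespace CameraDemo.iOS
// {
//     public class CameraPageRenderer : Xamarin.Forms.Platform.iOS.PageRenderer
//     {
//         protected override void ViewDidLoad()
//         {
//             base.ViewDidLoad();
//             // Set up AVCaptureSession here, add its preview layer to
//             // NativeView, and expose capture/record methods as needed.
//         }
//     }
// }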

Create Kinect skeleton for comparison

I'm going to build an application where the user is supposed to try to mimic the static pose of a person in a picture, so I'm thinking that a Kinect is the most suitable way to get information about the user's pose.
I have found answers here on Stack Overflow suggesting that the comparison of the two skeletons (the skeleton defining the pose in the picture and the skeleton of the user) is best done by comparing the joint angles, etc. I was thinking that the SDK would already provide some functionality for comparing skeleton poses, but I haven't found any information confirming that.
One thing makes me very unsure:
Is it possible to manually define a skeleton so I can somehow create the static pose from the picture? Or do I need to record it with the help of Kinect Studio? I would really prefer some tool for creating the poses by hand...
If you are looking for users to pose and be recognized for making the correct pose, you can follow these few steps to implement it in C#.
You can refer to the sample project Controls Basics-WPF provided by Microsoft in SDK Browser v2.0 (Kinect for Windows).
Steps:
1. Record in Kinect Studio 2 a clip of the pose you want to detect.
2. Open Visual Gesture Builder to train your clips (tagging the frames where the pose is correct).
3. Build the VGB solution in Visual Gesture Builder to produce a .gbd file (this is imported into your project as the file that GestureDetector.cs reads; see the sketch after this answer).
4. Code your own logic in GestureResultView.cs for what happens when the user matches a pose.
5. Start off with one pose, then turn the files into an array to loop over when you have multiple poses.
I would prefer this way instead of coding out the exact skeleton joints of the poses.
Cheers!
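For steps 3-5 above, here is a minimal sketch of how the trained database is usually wired up, loosely following the shape of GestureDetector.cs in the Controls Basics-WPF sample; the file name "MyPose.gbd" and the class name PoseDetector are placeholders.

using System;
using Microsoft.Kinect;
using Microsoft.Kinect.VisualGestureBuilder;

class PoseDetector : IDisposable
{
    private readonly VisualGestureBuilderFrameSource source;
    private readonly VisualGestureBuilderFrameReader reader;

    public PoseDetector(KinectSensor sensor)
    {
        source = new VisualGestureBuilderFrameSource(sensor, 0);
        reader = source.OpenReader();
        reader.IsPaused = true;

        // Load every gesture/pose trained into the .gbd database built in VGB.
        using (var db = new VisualGestureBuilderDatabase(@"Database\MyPose.gbd"))
        {
            source.AddGestures(db.AvailableGestures);
        }

        reader.FrameArrived += OnFrameArrived;
    }

    // Set this from the tracking id of the Body you want to evaluate.
    public ulong TrackingId
    {
        set { source.TrackingId = value; reader.IsPaused = value == 0; }
    }

    private void OnFrameArrived(object sender, VisualGestureBuilderFrameArrivedEventArgs e)
    {
        using (var frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null || frame.DiscreteGestureResults == null) return;

            foreach (var pair in frame.DiscreteGestureResults)
            {
                if (pair.Value.Detected)
                {
                    // Matching pose found: pair.Key.Name, pair.Value.Confidence.
                    // This is where GestureResultView-style UI updates would go.
                }
            }
        }
    }

    public void Dispose()
    {
        reader.Dispose();
        source.Dispose();
    }
}

The TrackingId comes from the Body delivered by a BodyFrameReader, and the Detected/Confidence values are what your GestureResultView logic reacts to; with several poses you would loop over an array of databases or gestures as described in step 5.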

Kinect Hand Gestures

I have been working with Kinect gestures for a while now, and so far the tools available to create gestures are limited to tracking entire body movements, for instance swiping your arm left and right. The joint types available in the original Kinect SDK include elbows, wrists, hands, shoulders, etc., but don't include finer details like the index finger, thumb, and middle finger. I mention all this because I am trying to create gestures involving only hand movements (like a victory sign or thumb up/down). Can anyone guide me through this? Is there a blog or website where code for hand movements is available?
I was developing an application with Kinect a year ago, and back then it was very hard or nearly impossible to do that. Now Google shows me projects like this, so be sure to check it out. If you generally want to focus on hand gestures, I really advise you to use LEAP Motion.
My friends at SigmaRD have developed something called the SigmaNIL Framework. You can get it from the OpenNI website.
It offers "HandSegmentation", "HandSkeleton", "HandShape" and "HandGesture" modules which may cover your needs.
Also check out the rest of the OpenNI Middleware and Libraries that you can download from their website. Some of them also work with the Microsoft SDK.

Import Unity along with UIKit project

We are working on a game based on Apple's MapKit. Choosing to attack certain venues on the map triggers built-in minigames, which are currently made in cocos2d. We want to replace one of these minigames with a Unity-based game, so I wonder if it's possible to run code generated by Unity alongside the rest of the project.
I don't want deep integration between Unity and the rest of the project, just to start the Unity-based game and stop it when the player finishes the minigame. Do you know if this is possible and what the steps are to achieve it?
Thank you