Create Kinect skeleton for comparison

I'm going to build an application where the user tries to mimic a static pose of a person in a picture. So I'm thinking that a Kinect is the most suitable way to get information about the user's pose.
I have found answers here on Stack Overflow suggesting that the comparison of the two skeletons (the skeleton defining the pose in the picture and the skeleton of the user) is best done by comparing joint angles and the like. I expected the SDK to already provide some functionality for comparing skeleton poses, but I haven't found any.
One thing makes me very unsure:
Is it possible to define a skeleton manually, so that I can somehow build the static pose from the picture? Or do I need to record it with the help of Kinect Studio? I would really prefer some tool for creating the poses by hand...

If you want the user to strike a pose and have it recognized as correct, you can follow these steps to implement it in C#.
You can refer to the sample project Controls Basics-WPF provided by Microsoft in the SDK Browser v2.0 (Kinect for Windows).
Steps:
1. Record the pose you want to detect in Kinect Studio v2.0.
2. Open Visual Gesture Builder to train your clips (tag the parts of each clip that show the correct pose).
3. Build the .vgbsln in Visual Gesture Builder to produce a .gbd file (this is imported into your project as the database file that GestureDetector.cs reads).
4. Code your own logic for what happens when the user matches a pose in GestureResultView.cs.
5. Start off with one pose, then move the .gbd files into an array to loop over when you have multiple poses.
I would prefer this approach over coding out the exact skeleton joints of the poses.
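That said, if you do want to compare joint angles by hand, the math itself is small. Here is a minimal, library-agnostic sketch (Python for brevity; the joint names follow the Kinect JointType naming, but the choice of joint triples and the tolerance are assumptions you would tune for your poses):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by segments b->a and b->c,
    given 3D joint coordinates as (x, y, z) tuples."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.acos(dot / (n1 * n2))

def poses_match(pose_a, pose_b, tolerance_deg=15.0):
    """Compare two poses, each a dict of joint name -> (x, y, z),
    by the angles at a few chosen joints (elbows here as an example)."""
    triples = [("ShoulderRight", "ElbowRight", "WristRight"),
               ("ShoulderLeft", "ElbowLeft", "WristLeft")]
    for a, b, c in triples:
        ang_a = joint_angle(pose_a[a], pose_a[b], pose_a[c])
        ang_b = joint_angle(pose_b[a], pose_b[b], pose_b[c])
        if abs(math.degrees(ang_a - ang_b)) > tolerance_deg:
            return False
    return True
```

Because angles are invariant to where the user stands and how tall they are, this avoids having to normalize raw joint positions between the reference skeleton and the live one.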
Cheers!

Related

Media Foundation - Custom Media Source & Sensor Profile

I am writing an application for previewing, capturing and snapshotting camera input. To this end I am using Media Foundation for the input. One of the requirements is that this works with a Blackmagic Intensity Pro 4K capture card, which behaves similarly to a normal camera.
Media Foundation is unfortunately unable to create an IMFMediaSource object from this device. Some research led me to believe that I could implement my own media source.
Then I started looking at samples, and tried to unravel the documentation.
At that point I encountered some questions:
Does anyone know if what I am trying to do is possible?
A Windows example shows a basic implementation of a source, but uses IMFSensorProfile. What is a sensor profile, and what should I use it for? There is almost no documentation about it.
Can somebody explain how implementing a custom media source works in terms of what actually happens on the inside? Am I simply defining my own format, or does it allow me to pull my own frames from the camera and process them myself? I tried following the MSDN guide, but no luck so far.
Specifics:
I am using WPF with C#, but I can write C++ and use it from C#.
Rendering to screen uses Direct3D9.
The capture card specs can be found on their site (BlackMagic Intensity Pro 4K).
The specific problem that occurs is that I can acquire the IMFActivator for the device, but I am not able to activate it. On activation, an MF_E_INVALIDMEDIATYPE error occurs.
The IMFActivator can tell me that the device should output a UYVY format.
My last resort is using the DeckLinkAPI, but since I am working with several different types of cameras, I do not want to be stuck with another dependency.
Any pointers or help would be appreciated. Let me know if anything is unclear or needs more detail.
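One aside on the UYVY point: when debugging raw buffers it can help to unpack a frame by hand to confirm you are interpreting the format correctly. A minimal sketch of how UYVY (4:2:2, byte order U0 Y0 V0 Y1 per pixel pair) maps to RGB, using the standard BT.601 integer conversion; this is a sanity check in Python, not Media Foundation code:

```python
def uyvy_to_rgb(frame, width, height):
    """Unpack a packed UYVY frame (bytes-like) into a flat list of
    (r, g, b) tuples. Each 4-byte group U0 Y0 V0 Y1 encodes two pixels
    that share one chroma sample (BT.601 studio-swing conversion)."""
    def clamp(v):
        return max(0, min(255, v))
    rgb = []
    for i in range(0, width * height * 2, 4):
        u, y0, v, y1 = frame[i], frame[i + 1], frame[i + 2], frame[i + 3]
        d, e = u - 128, v - 128          # chroma offsets
        for y in (y0, y1):               # two luma samples per group
            c = y - 16
            rgb.append((clamp((298 * c + 409 * e + 128) >> 8),
                        clamp((298 * c - 100 * d - 208 * e + 128) >> 8),
                        clamp((298 * c + 516 * d + 128) >> 8)))
    return rgb
```

If hand-decoded frames look right but the source still refuses to activate, that points the MF_E_INVALIDMEDIATYPE problem at media type negotiation rather than the pixel data itself.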

Visual Basic Windows Forms App and capturing video

I have a simple VB Forms-based shuffleboard scoreboard app. I would like to add two overhead live video feeds that show the two scoring areas. This is a one-off program that will never see the outside of my basement. I have played with a couple of the available video-capture libraries, but there is no way I can justify purchasing a license just to get rid of the demo popups.
My C++ skills are near nonexistent and the VB App is well developed and very user friendly, so I really don't want to rebuild the project. All I really want is the background of the scoreboard to be the two video feeds. (This is so that spectators can see the scoring area without having to stand at the table.)
I have been toying with the DirectX libraries and C++ using Visual Studio 2019. I can achieve the two camera views, but cannot find a way to incorporate that into VB; I cannot seem to get the DirectX extensions to expose themselves when I go back to VB. Is this even doable in Visual Basic, or am I going to have to rebuild in C++?

Image comparison and visual testing for a Windows desktop application in C# with WinAppDriver

Please help me choose a tool for testing a watermark/image overlay. The transparency can be 0%, so it should not be a problem.
The application under test is a WPF desktop application on Windows, and the autotests are written in C# with WinAppDriver. Right now it looks like I have to take a screenshot of a specific element and compare the actual image with the ideal sample using a mask.
The product under test is a video camera with the ability to insert a logotype/watermark and/or additional details (date/name/address) on the image and video. The task is to automatically verify the correctness of the inserted logo and details in the image/video (size, color, whether the logo was mirrored after insertion, whether a name was rendered badly, and so on).
At the moment I am thinking about using OpenCV or Sikuli. I know that Appium had something similar but it probably won't work with my driver.
It is also unclear how and what can be tested with video. Should I just take one frame at random and test it as an image?
Many thanks for your help and suggestions!
Perhaps not a complete answer to your questions, but a few words on how Sikuli works and what might be a disadvantage, if I understand your needs correctly. First of all, Sikuli uses OpenCV internally by calling the Imgproc.matchTemplate() function. There is not much control over it from Sikuli, but you can set a minimum similarity score that varies between 0 (everything matches) and 1 (pixel-perfect comparison). Given that you intend to use it on patterns originating from video, you'd want to be somewhere in the middle. Having said that, I am not sure what quality of comparison you'd like to obtain, so I am not sure the minimum similarity by itself will be enough.
Another thought is to integrate the OpenCV library itself into your code and use it directly. This is not an easy task, and some basic understanding of image processing techniques might be required.
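To make the similarity score less abstract, here is a minimal pure-Python sketch of the normalized correlation coefficient that matchTemplate (with TM_CCOEFF_NORMED) computes at each candidate offset; in practice you would call OpenCV rather than this, but it shows what the 0-to-1 score actually measures:

```python
import math

def similarity(image, template):
    """Normalized correlation coefficient between two equal-size grayscale
    images (lists of pixel rows). Returns ~1.0 for identical content (up to
    brightness/contrast shifts), ~0.0 for unrelated content, -1.0 for an
    inverted copy. Both images must contain some variation."""
    a = [p for row in image for p in row]
    b = [p for row in template for p in row]
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    da = [p - mean_a for p in a]
    db = [p - mean_b for p in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0
```

Note that because the means are subtracted out, a uniform brightness change in the screenshot will not lower the score, which is usually what you want for watermark checks but can hide some rendering defects.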

plotting robot path

I am writing code for RRT (rapidly exploring random trees), which is a sampling-based motion planning algorithm. I wrote the code in MATLAB, but now I am writing it in C++.
I want to know how to plot the robot path in real time, together with all the obstacles.
What I want is this: I want to see my robot traversing the space. So basically it's about the graphics. I am trying to use SFML, but I am having problems with it. Some people suggested OpenCV or OpenGL, but I think they are not easy to use. I am looking for a simple-to-use library.
If OpenCV or OpenGL is the answer, then please tell me what specifically I need to use in those libraries. I am working on Linux (Ubuntu 11.10).
You might want to look into using MATLAB's compiler to generate a standalone application directly from your M-code. That way you don't have to rewrite everything from scratch.
I have used the following link a couple of times just to refresh my memory:
http://technologyinterface.nmsu.edu/5_1/5_1f/5_1f.html
E.g., if you have made an M-function with the following content (example adapted from the link):
function y = PolyValue(poly, x)
% Evaluate the polynomial `poly` at the points in `x`,
% e.g. PolyValue([1 2 -1 4 -5], [5, 6])
y = polyval(poly, x);
you could use the command
mcc -m PolyValue
to compile the program.
This command would then give you the files necessary for implementation in a larger C++ program.
It should even support GUI elements and graphs.
Something like http://www.ros.org/news/2011/01/open-motion-planning-library-ompl-released.html
may be what you are looking for.
I've worked with both OpenCV for some image recognition projects and OpenGL for rendering displays, and whether you go with a library like the one above or render it yourself really depends on how complex the display needs to be. Ask yourself some questions about how many different obstacle scenarios you are looking at. Are there a large number of possible shapes for the obstacles and the robot? Is the problem deterministic (in terms of both the robot's movement and the environment)?
As for OpenGL and OpenCV not being easy for those new to them, this is very much the case, but choosing to work in C++ makes the problem harder for beginners anyway. As another user mentioned, wrapping your MATLAB code instead of throwing it away may be a viable option. Even running the MATLAB Engine in the background to execute your scripts from C++ may be viable, if speed is not a critical factor. See http://au.mathworks.com/help/matlab/matlab_external/introducing-matlab-engine.html for more information.
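Whichever library you settle on, the RRT bookkeeping itself is small and independent of the rendering. A minimal 2D sketch (Python for brevity; the in_obstacle callback, bounds, and step size are placeholders for your own environment); "plotting in real time" then reduces to drawing one segment from each new node to its parent per iteration, which any of SFML, OpenCV, or OpenGL can do:

```python
import math
import random

def nearest(tree, point):
    """Index of the tree node closest to a sampled point."""
    return min(range(len(tree)), key=lambda i: math.dist(tree[i][0], point))

def steer(src, dst, step=0.5):
    """Move from src toward dst by at most `step`."""
    d = math.dist(src, dst)
    if d <= step:
        return dst
    t = step / d
    return (src[0] + t * (dst[0] - src[0]), src[1] + t * (dst[1] - src[1]))

def rrt(start, goal, in_obstacle, bounds, iters=2000, step=0.5):
    """Grow an RRT from start. Each tree entry is (point, parent_index);
    the root has parent -1. Stops early if a node lands within `step`
    of the goal. bounds = ((xmin, xmax), (ymin, ymax))."""
    tree = [(start, -1)]
    for _ in range(iters):
        sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        i = nearest(tree, sample)
        new = steer(tree[i][0], sample, step)
        if not in_obstacle(new):
            tree.append((new, i))      # <- draw segment tree[i][0] -> new here
            if math.dist(new, goal) <= step:
                tree.append((goal, len(tree) - 1))
                return tree
    return tree
```

Once the tree reaches the goal, the path is recovered by walking parent indices back from the last node to the root.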

3d files in vb.net

I know this will be a difficult question, so I am not necessarily looking for a direct answer but maybe a tutorial or a point in the right direction.
What I am doing is programming a robot that will be controlled by a remote operator. We have a 3D rendering of the robot in SolidWorks. What I am looking to do is get the 3D file into VB (probably using DX9) and be able to manipulate it in code, so that the remote operator has a better idea of what the robot is doing. The operator will also have live video to look at, but that doesn't really matter for this question.
Any help would be greatly appreciated. Thanks!
Sounds like a tough idea to implement. For VB you are stuck with MDX 1.1 (comes with the DirectX SDK) or SlimDX (or another third-party managed DirectX wrapper). The latest XNA (the replacement for MDX 1.1/2.0b) is only available to C# coders. You can try some workarounds, but they're not recommended and you won't get much community support. That is the minimum you need to get your VB app to display some 3D content.
If you want to save yourself some trouble, you could use a ready-made game engine to simplify the job. Try Ogre and its managed wrapper MOgre. It was one of the candidates for my project, but I ended up with SlimDX because Ogre doesn't support video very well. Since video is not one of your requirements, you can seriously consider it. Most samples will be in C# as well, so you'll need to convert them to VB.Net, but that won't be hard.
Here comes the harder part: you need to export your model from SolidWorks to the DirectX format (*.x). A quick search on Google only turned up a few paid tools for that, so you might need to spend a bit on one, or spend more time looking for free converter tools.
That's about it. If you have more questions, post again. Good luck!
I'm not sure what the real question is, but what I suspect you are trying to do is manipulate a SolidWorks model of a robot with some sort of manual input. Assuming that is the correct question, there are two aspects that need to be dealt with:
1) The SolidWorks module: Once the model of the robot is working properly in SolidWorks, a program can be written in VB.Net that manipulates the positional mates for each of the joints. Also using VB, a window can be programmed with slide bars etc. that will allow the operator to "remotely" control the robot. Once this is done, there is a great opportunity to set up a table that stores the sequential steps. When completed, the VB program could be further developed to let the robot "cycle" through a sequence of moves. If any obstacles are also added to the model, this would be a great tool for collision detection and offline training.
2) If the question also includes incorporating a physical operator pendant, there are a number of potential solutions. Ideally the robot software would provide a VB library for communicating with and commanding the robot programmatically. If so, the VB code could be developed with a "run" mode in which the SolidWorks robot is controlled by the operator pendant instead of the controls in the VB window (as mentioned above). This would allow the operator to work "offline" with a virtual robot.
Hope this helps.