Kinect v2 XAML performance vs WPF performance

I've recently acquired a new MS Kinect v2 for Windows, and I'm experimenting with it to learn how it works and how I would approach my future ideas for it.
So far I'm only trying out the samples that come with the Kinect Browser (downloaded with the new SDK), using an almost-new Toshiba C55 notebook (i5 2.5 GHz, 8 GB RAM, NVIDIA 710M).
I've tried the "Coordinate Mapping Basics" sample, which comes in several forms (D2D, XAML, HTML and WPF). This sample simply removes the background using the depth frame.
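(For context, as I understand it from the D2D (C++) version of the sample, the core of that background-removal technique boils down to something like the hedged sketch below. This is an illustration, not the sample's actual code; the function and buffer names are my own, and error handling is omitted.)

```cpp
#include <Kinect.h>   // Kinect for Windows SDK 2.0 (C++ API)
#include <vector>

// Illustrative only: keep a color pixel when its mapped depth-space
// coordinate falls on a tracked body in the body index frame.
void RemoveBackground(ICoordinateMapper* mapper,
                      const UINT16* depthBuffer, int depthW, int depthH,
                      const BYTE* bodyIndexBuffer,    // one byte per depth pixel
                      BYTE* colorBgra, int colorW, int colorH)
{
    std::vector<DepthSpacePoint> depthPoints(colorW * colorH);
    mapper->MapColorFrameToDepthSpace(
        depthW * depthH, depthBuffer,
        static_cast<UINT>(depthPoints.size()), depthPoints.data());

    for (int i = 0; i < colorW * colorH; ++i)
    {
        const DepthSpacePoint p = depthPoints[i];
        const int dx = static_cast<int>(p.X + 0.5f);
        const int dy = static_cast<int>(p.Y + 0.5f);

        // 0xFF in the body index frame means "no tracked body here".
        const bool onBody = dx >= 0 && dx < depthW && dy >= 0 && dy < depthH
                         && bodyIndexBuffer[dy * depthW + dx] != 0xFF;

        colorBgra[i * 4 + 3] = onBody ? 0xFF : 0x00;  // alpha out the background
    }
}
```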
I've tried all the versions so far, and the XAML sample runs very, very slowly, while the rest run very smoothly.
So I tried an external sample from GitHub that does essentially the same thing, also using XAML, and it runs just as slowly.
Since I'm not used to developing for MS platforms, I don't know whether it is really a hardware problem or whether XAML has higher requirements, and I can't figure out why it behaves so badly only with XAML.
I tried to find similar questions, but didn't find any that seemed useful for my case.
I know it's probably my fault, but I don't know why... Maybe a misunderstanding of the whole setup?
The external sample I found: https://github.com/Vangos/kinect-2-background-removal
Also tried the CoordinateMapper from the same GitHub, same issue: https://github.com/Vangos/kinect-2-coordinate-mapping
Thank you all.
UPDATE:
After developing and deploying the WPF app successfully, I started to check the Kinect's performance on Windows RT, and I found lots of problems at the memory level. Windows 8.1 RT is slow and does not support the Kinect v2 very well, at least on my test hardware. These problems may lead to the symptoms described in this other question I found: Kinect camera freeze
This testing also made me notice that the new Kinect v2 is VERY sensitive to ambient temperature.
Hope this helps some Stack Overflow developers with similar problems :).

The Coordinate Mapper XAML and Coordinate Mapper WPF samples both use XAML. The version marked "XAML" is a Windows Store app; the version marked "WPF" is a Windows desktop app. I didn't see much of a difference between the two on my machine until I ran the Performance and Diagnostics tools in Visual Studio 2013. I suggest running them and creating an analysis report; that will help you discover what exactly is causing the differences.

Related

Opening Kinect datasets and/or SDK Samples

I am very new to Kinect programming and have been tasked with understanding several methods of 3D point cloud stitching using the Kinect and OpenCV. While waiting for the Kinect sensor to be shipped over, I am trying to run the SDK samples on some data sets.
I am really clueless as to where to start, so I downloaded some datasets here, and I do not understand how I am supposed to view or parse them. I tried running the Kinect SDK samples (DepthBasic-D2D) in Visual Studio, but the only thing that appears is a white screen with a screenshot button.
There seems to be very little documentation on how all these things work, so I would appreciate it if anyone could point me to the right resources on how to obtain and parse depth maps, or how to get the SDK samples to work.
The Point Cloud Library (PCL) is a good starting point for handling point cloud data obtained using the Kinect and the OpenNI driver.
OpenNI is, among other things, open-source software that provides an API for communicating with vision and audio sensor devices (such as the Kinect). Using OpenNI you can access the raw data acquired with your Kinect and use it as input for PCL software that processes the data. In other words, OpenNI is an alternative to the official Kinect SDK, compatible with many more devices, and with great support and tutorials!
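To give an idea of how little glue is involved, here is a minimal sketch of streaming clouds from a Kinect through PCL's OpenNI grabber, adapted from PCL's own grabber tutorial. It assumes PCL was built with OpenNI support; I haven't tested this exact listing.

```cpp
#include <pcl/io/openni_grabber.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <boost/thread/thread.hpp>
#include <iostream>

// Called by the grabber each time a new organized point cloud arrives.
void cloudCallback(const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr& cloud)
{
    std::cout << "got a cloud with " << cloud->points.size() << " points\n";
}

int main()
{
    pcl::OpenNIGrabber grabber;

    boost::function<void (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr&)> f =
        &cloudCallback;
    grabber.registerCallback(f);

    grabber.start();
    while (true)  // clouds are delivered on the grabber's own thread
        boost::this_thread::sleep(boost::posix_time::seconds(1));
}
```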
There are plenty of tutorials out there like this, this and these.
Also, this question is highly related.

Cocos2dx performance issue on Windows Phone 8

I'm trying to port an Android/iOS game to Windows Phone 8 (cocos2d-x v2.2). I'm using the exact same code base that I used for Android and iOS. The game functions just fine, but I'm facing a major FPS drop. The game runs flawlessly at 60 FPS on Android and iOS, but I'm getting roughly 35 FPS on WP8. Does this have anything to do with the differences between OpenGL and DirectX?
I doubt it's related to the game's logic and calculations, because when the game starts on Windows Phone, it runs at 60 FPS on the main menu, which has about 5 sprites. But as I add more sprites to the screen, say about 30 of them (the average number of sprites when I'm IN the game), the FPS rapidly drops to the 35-40 range. Note that there are no schedulers or update functions running at this point. I ran the same test on Android, but the FPS didn't drop. Does the WP8 port of cocos2d-x suck?
Any help, comments or redirection to useful articles would be appreciated.
Thank you.
In case anyone runs into a similar issue: I reduced the number of children in the scene and deployed the build in release mode, which gave a major boost to the FPS. I also had a bunch of float-to-string and int-to-string conversions happening every frame inside the update function; that was eating away at the processing speed too.
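For anyone wondering what that last fix looks like in practice, here is a hedged sketch against the cocos2d-x 2.x API: cache the last value you rendered and only do the int-to-string conversion when it changes, instead of formatting a string on every update tick. The class and member names are made up for illustration.

```cpp
#include "cocos2d.h"
USING_NS_CC;

class HudLayer : public CCLayer
{
public:
    virtual bool init()
    {
        if (!CCLayer::init()) return false;
        m_score = 0;
        m_lastShownScore = -1;  // force one refresh on the first frame
        m_scoreLabel = CCLabelTTF::create("0", "Arial", 24);
        addChild(m_scoreLabel);
        scheduleUpdate();
        return true;
    }

    virtual void update(float dt)
    {
        // ... game logic updates m_score somewhere ...

        // Only pay for the conversion when the value actually changed.
        if (m_score != m_lastShownScore)
        {
            m_scoreLabel->setString(
                CCString::createWithFormat("%d", m_score)->getCString());
            m_lastShownScore = m_score;
        }
    }

    CREATE_FUNC(HudLayer);

private:
    CCLabelTTF* m_scoreLabel;
    int m_score;
    int m_lastShownScore;
};
```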
Actually, the cocos2d-x port for WP8 is OK, but outdated. Cocos2d-x is now at 3.0 beta, but the WP8 port was left at 2.0 alpha.
Anyway... in Cocos there are some recursive drawing functions which are very heavy on the CPU, and also keep in mind that even though WP8 is supposed to support arrays, lists, maps, etc., they are very slow on WP8.
And since you've come to this subject, please let me know if you managed to successfully get cocos2d-x running in an XAML+D3D interop project. I am getting tons of crashes.
EDIT: Indeed, the recursive calls which process (draw or update) child CCNodes are very heavy on the device. However, after putting cocos2d-x v2.0 alpha for WP8 into a XAML+D3D interop project, I found a whole lot of memory-related issues. Apparently, after doing this (or just because I don't know how to properly configure my VS project and allow loose addressing), a lot of uninitialized pointers and data cause memory overlaps, leading to major crashes.
This only proves that it truly was an alpha release :) Too bad no newer version of cocos2d-x for WP8 is available.

Kinect hangs up suddenly after working well for a few seconds. How can I fix it?

I tried using "Kinect for Windows" on my Mac. The environment setup seems to have gone well, but something seems to be wrong. When I start samples such as
OpenNI-Bin-Dev-MacOSX-v1.5.4.0/Samples/Bin/x64-Release/Sample-NiSimpleViewer
or others, the sample application starts and seems to work quite well at the beginning, but after a few seconds (10 to 20), the motion shown on the application's screen halts and never resumes. It seems that at some point the application becomes unable to fetch data from the Kinect.
I don't know whether the libraries, their dependencies, or the Kinect hardware itself (invisibly broken or something) is at fault, and I really want to know how to determine which it is.
Could anybody tell me how I can fix this issue?
My environment is shown below:
Mac OS X v10.7.4 (MacBook Air, Core i5 1.6 GHz, 4 GB of memory)
Xcode 4.4.1
Kinect for Windows
OpenNI-Bin-Dev-MacOSX-v1.5.4.0
Sensor-Bin-MacOSX-v5.1.2.1
I followed the instructions here about libusb: http://openkinect.org/wiki/Getting_Started#Homebrew
Also, when I try using libfreenect (I know it's separate from OpenNI+SensorKinect), its sample applications say "Number of devices found: 0", which makes no sense to me since my Kinect is certainly connected to the MBA...
Unless you're booting into Windows, forget about Kinect for Windows.
Regarding libfreenect and OpenNI: in most cases you'll use one or the other, so think about which functionality you need.
If it's basic RGB+depth image access (and possibly motor and accelerometer access), libfreenect is your choice.
If you need RGB+depth images plus skeleton tracking and (hand) gestures (but no motor or accelerometer access), use OpenNI. Note that if you use the unstable (dev) versions, you should use Avin's SensorKinect driver.
The easiest thing to do is a nice clean install of OpenNI.
Also, if it helps, you can use a creative-coding framework like Processing or OpenFrameworks.
For Processing I recommend SimpleOpenNI
For OpenFrameworks you can use ofxKinect, which ties into libfreenect, or ofxOpenNI. Download the OpenFrameworks package on the FutureTheatre Kinect Workshop wiki, as it includes both addons and some really nice examples.
When you connect the Kinect to the machine, have you provided external power to it? The device will appear connected to a computer on USB power alone, but it will not be able to transfer data, as it needs the external power supply.
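A quick way to rule this out is to check whether libfreenect can even enumerate the sensor. If the count printed below is 0, the problem sits below OpenNI: cabling, power, or the sensor model. A minimal sketch, untested here:

```cpp
#include <libfreenect.h>   // from OpenKinect/libfreenect
#include <cstdio>

int main()
{
    freenect_context* ctx = nullptr;
    if (freenect_init(&ctx, nullptr) < 0)
    {
        std::fprintf(stderr, "freenect_init failed\n");
        return 1;
    }
    // 0 here means the sensor never enumerated on USB at all.
    std::printf("Number of devices found: %d\n", freenect_num_devices(ctx));
    freenect_shutdown(ctx);
    return 0;
}
```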
Also, which Kinect sensor are you using? If it is a new Kinect device (designed for Windows), it may have a different device signature, which may cause the OpenNI drivers to play up. I'm not 100% sure on this one, but I've only ever tried OpenNI with an Xbox 360 sensor.

Getting started with image processing on Mac OS X

I recently moved from a PC to a MacBook Pro. I'm starting to go through tutorials on Objective-C and developing in Cocoa. I do a lot of image processing algorithm development work (pixel-by-pixel manipulation) in my day job, so I'd like to create a test image processing app or two for OS X. I'm struggling to figure out where to start. Let's say I want to create a simple application (that I could reuse) that does the following:
Load an image via an open-file option within the file menu
Display it within the GUI
Click a button to apply pixel-by-pixel processing
Update the displayed image
Save the processed image via the save option within the file menu
Any pointers or links would be most appreciated.
Thanks
Other info:
I'm pretty familiar with OpenCV on Linux - I haven't looked at using it within the Objective-C/Cocoa/Xcode environment yet, though - I'm not even sure whether that would be a good idea?
I guess it would be nice to use GPU acceleration as well, but I'm not familiar with OpenGL/OpenCL - so I might have to put that one on the long finger for the moment.
As you are on the Apple platform, you should look into the Core Image framework - it will provide you with most of the pre-baked cookies ready to be consumed in your application.
For more advanced purposes, you can start off with OpenCV.
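Since you already know OpenCV from Linux, it may help to see that the basic load, process pixel by pixel, save loop carries over unchanged on the Mac. A minimal sketch (the file names are placeholders):

```cpp
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.png");
    if (img.empty()) return 1;

    // Pixel-by-pixel manipulation: invert each BGR channel.
    for (int y = 0; y < img.rows; ++y)
        for (int x = 0; x < img.cols; ++x)
        {
            cv::Vec3b& p = img.at<cv::Vec3b>(y, x);
            p[0] = 255 - p[0];
            p[1] = 255 - p[1];
            p[2] = 255 - p[2];
        }

    cv::imwrite("output.png", img);
    return 0;
}
```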
Best of luck!!
As samfisher suggests, OpenCV is not that hard to get working on the Mac, and Core Image is a great Cocoa framework for doing GPU-accelerated image processing. I'm working on porting my GPUImage framework from iOS to the Mac - it's entirely geared around making accelerated image processing easy to work with - but unfortunately the port isn't working right now.
If you're just getting started on the Mac, one tool that I can point out which you might overlook is Quartz Composer. You have to download the separate Graphics Tools package from Apple's developer site to install Quartz Composer, because it's no longer shipped with Xcode.
Quartz Composer is a graphical development tool that lets you drag and drop modules, connect inputs and outputs, and do rapid development of some fairly interesting things. One task it's great for is doing rapid prototyping of image processing, either using Core Image or OpenGL shaders. I've even heard of people using OpenCV with this using custom patches. You can easily connect an image or camera source into a filter chain, then edit the filters and see live updates as you work on them, without requiring a compile-run cycle.
If you want some sample QC projects to play with, I have a couple of them linked from this article I wrote a couple of years ago. They both do the same color-based object tracking, with one using Core Image and the other OpenGL shaders. You can dig into that and play around to see how that works, without having to get too far into writing any code.

OpenKinect Maturity

I'm interested in writing some homebrew code for the Microsoft Kinect. I have a few applications which I think would translate well to the platform. I've been toying with the idea of giving it a shot using the OpenKinect drivers and libraries. Obviously this would be a lot of work, but I am wondering just how much. Does anyone have experience with OpenKinect? Do you get only the raw video/audio data from the device, or has anyone written higher-level abstractions to make common tasks easier?
The OpenKinect library is basically a driver — at least for now — so don't expect many high-level functions from it. You will more or less get the raw data from both the depth and the video cameras.
This is basically an array received in a callback function each time a frame arrives.
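To make that concrete, here is a hedged sketch of the callback pattern using the libfreenect C API. The mode-selection calls follow the current API and may differ in older releases; this listing is untested here.

```cpp
#include <libfreenect.h>
#include <cstdio>
#include <cstdint>

// libfreenect hands you a raw depth buffer every frame via this callback.
void depth_cb(freenect_device* dev, void* depth, uint32_t timestamp)
{
    const uint16_t* buf = static_cast<const uint16_t*>(depth);
    std::printf("frame @ %u, center depth value = %u\n",
                timestamp, buf[240 * 640 + 320]);  // 640x480 depth frame
}

int main()
{
    freenect_context* ctx;
    freenect_device* dev;
    if (freenect_init(&ctx, nullptr) < 0 ||
        freenect_open_device(ctx, &dev, 0) < 0) return 1;

    freenect_set_depth_callback(dev, depth_cb);
    freenect_set_depth_mode(dev,
        freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM,
                                 FREENECT_DEPTH_11BIT));
    freenect_start_depth(dev);

    // Pump USB events; frames arrive in depth_cb.
    while (freenect_process_events(ctx) >= 0) { }

    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}
```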
You can give it a try by following the instructions provided on the OpenKinect website; it's really quick to install and try, and you can play a bit with the glview application provided to get a feel for what's possible.
I've set up a few demos using OpenCV and got pretty cool results, even though I didn't have much background in computer vision, so I can only encourage you to try it yourself!
Alternatively, if you're looking for more advanced functions, the OpenNI framework was just released this week and provides some impressive high-level algorithms such as skeleton tracking and some gesture recognition. Part of the framework consists of proprietary algorithms from PrimeSense (like the powerful skeleton tracking module...). I haven't tried it yet and don't know how well it integrates with the Kinect and the different OSes, but since a bunch of people from different groups (OpenKinect, Willow Garage...) are working hard on it, that shouldn't be an issue within a week.
Elaborating further on what Jules Olleon wrote: I've worked with OpenNI (http://www.openni.org) and the algorithms on top of it (NITE), and I highly recommend using these frameworks. Both frameworks are well documented and come with numerous samples from which you can start out.
Basically, OpenNI abstracts the lower-level details of working with the sensor and its driver for you, and gives you a convenient way to get what you want from a "generator" (e.g. xn::DepthGenerator for getting the raw depth data). OpenNI is open source and free to use in any application. It also handles platform abstraction for you; as of today, OpenNI is supported and works fine on Windows 32/64 and Linux, and is in the process of being ported to OS X. Bindings are available for multiple programming languages (C, C++, .NET, Python, and a few others, I believe).
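As an illustration of that generator pattern, here is a minimal sketch using the OpenNI 1.x C++ wrapper to read one raw depth map. It is untested here, trims all error handling, and exact calls may vary slightly between 1.x releases.

```cpp
#include <XnCppWrapper.h>
#include <cstdio>

int main()
{
    xn::Context context;
    if (context.Init() != XN_STATUS_OK) return 1;

    xn::DepthGenerator depth;          // the "generator" for raw depth data
    if (depth.Create(context) != XN_STATUS_OK) return 1;

    context.StartGeneratingAll();
    context.WaitOneUpdateAll(depth);   // block until one new depth frame

    xn::DepthMetaData md;
    depth.GetMetaData(md);
    // md holds the raw depth map; index it by (x, y) in pixels.
    std::printf("%ux%u, center depth: %u mm\n",
                (unsigned)md.XRes(), (unsigned)md.YRes(),
                (unsigned)md(md.XRes() / 2, md.YRes() / 2));

    context.Release();
    return 0;
}
```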
NITE provides additional interfaces built on top of OpenNI which give you higher-level results (e.g. hand-point tracking, skeletons, scene analysis, etc.). You'll want to check the subtleties of NITE's license regarding when and where you can use it, but it's still probably the easiest and fastest way to get analysis (e.g. a skeleton) for now. NITE is closed source, so PrimeSense needs to supply a binary version for you to use. Currently, Windows and Linux versions are available.
I haven't worked with OpenKinect, but I've been working with OpenNI and SensorKinect for a few months now for my research. If you plan to work with raw data from the Kinect, they work great at giving you depth and video (they don't support motor control). I've used them with C++ and OpenGL on both Windows 64-bit and Ubuntu 32-bit with almost no modifications to the code. It's very easy to learn if you know basic C++. Installing it might be a bit of a headache, though.
For more advanced features such as skeleton detection, gesture recognition, etc., I highly recommend using middleware such as NITE with OpenNI, or the ones provided here: Middlewares developed around OpenNI, rather than re-inventing the wheel. NITE is also very easy to use once you have OpenNI working; e.g. joint recognition is something around 10-20 extra lines of code.
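Those "10-20 extra lines" look roughly like the sketch below (OpenNI/NITE 1.x C++ wrapper, assuming a context and user generator are already set up and skeleton calibration has completed; untested here):

```cpp
#include <XnCppWrapper.h>
#include <cstdio>

// Print the tracked user's head position once skeleton tracking is active.
void printHeadPosition(xn::UserGenerator& userGen, XnUserID user)
{
    if (!userGen.GetSkeletonCap().IsTracking(user)) return;

    XnSkeletonJointPosition head;
    userGen.GetSkeletonCap().GetSkeletonJointPosition(user, XN_SKEL_HEAD, head);

    if (head.fConfidence > 0.5f)  // ignore low-confidence estimates
        std::printf("head at (%.0f, %.0f, %.0f) mm\n",
                    head.position.X, head.position.Y, head.position.Z);
}
```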
Something that I would recommend to my younger self is to learn and work with a basic game engine (e.g. Unity) rather than directly with OpenGL. It would give you much better and more enjoyable graphics and less hassle, and would also let you easily integrate your program with other tools such as PhysX. I haven't tried any, but I know there are some plugins for using Kinect drivers in Unity.