Green or white screen with Processing + Kinect

I am on Windows 7 x64 Pro.
My Kinect (Xbox 360) is connected and the OpenNI and NITE samples run well. When I use Processing 2.1, a sketch with Kinect runs without any error notification, but nothing appears on screen and the window stays gray or white... Yet all simple Processing samples (without Kinect) run well. What is wrong? I'm disappointed.
Installed:
OpenNI-Win32-1.5.7-Dev.msi
NITE-Win32-1.5.2-Dev.msi
SensorKinect093-Bin-Win32-v5.1.2.1.msi
SimpleOpenNI 1.96
KinectSDK-v1.7
Ms Visual Studio 2012
Processing 2.1
I tried
OpenNIx64-1.5.7-Dev.msi
NiTE-Windows-x64-2.0.0
Sensor-Win64-5.1.6.6
but I have the same problem.
The Kinect runs well (all samples from OpenNI and NITE), but with Processing 2.1 (or 2.0.3 and 1.5.1) it still doesn't work: nothing on screen (gray or white) and no error notification.
All simple Processing samples (without Kinect) run well...
Thank you for your help

I would recommend that you use the J4K (Java for Kinect) library in Processing. J4K is an open-source library that communicates with the native Microsoft Kinect SDK. You can download the j4k-processing.zip from here: j4k-processing.zip
There is a green-screen example in the examples folder (you need to unzip j4k-processing.zip) called example5_basicAugmentedReality. This example shows you placed in the middle of a simple 3D virtual scene. In addition, there are other examples for 2D interaction, reading the values of the accelerometer, etc.

Try using SimpleOpenNI 0.27 with Processing 1.5.1.
You can download SimpleOpenNI 0.27 from this link.
If that doesn't solve your problem, you should post more details about your setup, such as the version of SimpleOpenNI, the Kinect driver, etc.

Related

Kinect Infrared Camera Not working

I am using the Kinect 2 with the newest available SDK, version 2.0. Everything works except IR; I tested it with both the SDK infrared demo example and Kinect Studio, and both result in a black screen. I also looked at the code: capturing Kinect IR frames does not produce any errors, the frames just consist of all minimum values.
This is quite weird, as I thought IR was used to calculate depth, and I can successfully read depth information. Also, I checked (with my cellphone camera) that the IR emitter is turned on and off correctly; the data is just not received correctly for some reason.
After encountering the problem I did a fresh install on another computer, as I suspected I had broken my system somehow. I got exactly the same results after installing the Kinect SDK. Everything except infrared works.
Has anyone seen this kind of behaviour before?
Check this out:
https://social.msdn.microsoft.com/Forums/en-US/70dcceb7-8d2f-485f-b3e9-f2d4b399fbe7/kinect-v2-infrared-not-working?forum=kinectv2sdk
Try updating your graphics card drivers.
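If you want to rule out a rendering-only problem first, here is a minimal console sketch (not one of the official samples, just an illustration of the v2 API) that prints the value range of every infrared frame. On a working sensor the maximum moves around; the symptom described above would show a constant minimum value.

    using System;
    using System.Linq;
    using Microsoft.Kinect;

    class IrProbe
    {
        static void Main()
        {
            KinectSensor sensor = KinectSensor.GetDefault();
            FrameDescription desc = sensor.InfraredFrameSource.FrameDescription;
            ushort[] irData = new ushort[desc.Width * desc.Height];

            InfraredFrameReader reader = sensor.InfraredFrameSource.OpenReader();
            reader.FrameArrived += (s, e) =>
            {
                using (InfraredFrame frame = e.FrameReference.AcquireFrame())
                {
                    if (frame == null) return;
                    frame.CopyFrameDataToArray(irData);
                    // A healthy sensor shows a wide, changing spread of values;
                    // "all minimum values" means min and max stay pinned together.
                    Console.WriteLine("IR min/max: " + irData.Min() + " / " + irData.Max());
                }
            };

            sensor.Open();
            Console.ReadLine();
        }
    }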

Kinect v2 XAML performance vs WPF performance

I've recently acquired a new MS Kinect v2 for Windows, and I'm messing with it in order to learn how it works and how I would approach my future ideas for it.
For now, I'm only playing with the samples that come with the Kinect SDK Browser (downloaded with the new SDK), using an almost-new Toshiba C55 notebook (i5 2.5 GHz, 8 GB RAM, NVIDIA 710M).
The thing is that I've tried the "Coordinate Mapping Basics" sample, which comes in several forms (D2D, XAML, HTML and WPF). This sample just removes the background using the depth frame.
I've tried all the versions so far, and the XAML sample runs very, very slowly... while the rest run very smoothly...
So I tried external code taken from GitHub which technically does the same thing, also using XAML. It also runs far too slowly.
Since I'm not used to developing for MS platforms, I don't know if it is really a hardware problem, or if XAML has higher requirements, and I cannot figure out why it behaves so badly only with XAML.
I tried to find similar questions, but didn't find any that seemed useful for my case.
I know it is probably my fault, but I don't know why... Maybe a misunderstanding of the whole setup?
The external sample I found: https://github.com/Vangos/kinect-2-background-removal
Also tried the CoordinateMapper from the same GitHub, same issue: https://github.com/Vangos/kinect-2-coordinate-mapping
Thank you all.
UPDATE:
After developing and deploying the WPF app successfully, I started to check the performance of the Kinect under Windows RT, and I found lots of problems at the memory level. W8.1 RT is slow and does not support Kinect v2 very well, at least on my test hardware. These problems may lead to the symptoms described in this other question I found: Kinect camera freeze
This issue also made me notice that the new Kinect v2 is VERY sensitive to ambient temperature.
Hope this helps some Overflowed developers with similar problems :).
The Coordinate Mapper XAML and Coordinate Mapper WPF samples both use XAML. The version marked "XAML" is a Windows Store App. The version marked "WPF" is a Windows Desktop app. I didn't see much of a difference on my machine between the two until I ran the Performance and Diagnostics tools in Visual Studio 2013. I suggest running them and creating an analysis report. That will help you discover what exactly is causing the differences.
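If you want a rough number before digging into the profiler reports, a small standalone probe like the sketch below (my own illustration, not part of the samples) counts how many multi-source frames arrive per second outside of any UI, which gives you a baseline to compare against what the XAML and WPF apps actually manage to render.

    using System;
    using System.Diagnostics;
    using Microsoft.Kinect; // the Windows Store (XAML) flavour uses WindowsPreview.Kinect instead

    class FrameRateProbe
    {
        static readonly Stopwatch watch = Stopwatch.StartNew();
        static int frameCount;

        static void Main()
        {
            KinectSensor sensor = KinectSensor.GetDefault();
            MultiSourceFrameReader reader =
                sensor.OpenMultiSourceFrameReader(FrameSourceTypes.Depth | FrameSourceTypes.Color);

            reader.MultiSourceFrameArrived += (s, e) =>
            {
                // Count delivered frames and report once per second
                frameCount++;
                if (watch.ElapsedMilliseconds >= 1000)
                {
                    Console.WriteLine("frames per second: " + frameCount);
                    frameCount = 0;
                    watch.Restart();
                }
            };

            sensor.Open();
            Console.ReadLine();
        }
    }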

Kinect - 2 features simultaneously

I am trying to extract 2 features from the Kinect :
Captured video - I followed this guide:
http://social.msdn.microsoft.com/Forums/en-US/kinectsdk/thread/4ee6e7ca-123d-4838-82b6-e5816bf6529c
and succeeded in using the Kinect as a webcam, then used DirectShow to capture the video. Works just fine.
Skeleton - I use the Kinect SDK 1.7 and the skeleton feature works great!
The Problem: Those 2 features don't work simultaneously
Each one of them works great by itself, but they just don't work together.
I have also tried checking the captured video in Skype's video settings section while running Skeleton Basics from the "Kinect for Windows Developer Toolkit 1.7".
Do you know why that happens, and how can I fix the problem so that I can use the 2 features simultaneously?
Thanks a lot,
Guy.
This shouldn't happen. I'm also working on a virtual dressing room concept and I can access the Kinect joints and the video stream at the same time. I'm using XNA, so it's easy for me to grab the video buffer:
    kinectSensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
    kinectSensor.SkeletonStream.Enable(new TransformSmoothParameters());
I don't know what your approach is.
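For reference, here is a rough self-contained sketch of the same idea using only the official SDK 1.7 managed API (no DirectShow webcam driver involved), which is probably why it works: both streams are enabled on the same KinectSensor instance. The smoothing values are arbitrary placeholders you would tune yourself.

    using System;
    using Microsoft.Kinect;

    class ColorPlusSkeleton
    {
        static void Main()
        {
            // Grab the first connected sensor (SDK 1.x API)
            KinectSensor sensor = KinectSensor.KinectSensors[0];

            // Enable both streams on the same sensor instance before starting it
            sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
            sensor.SkeletonStream.Enable(new TransformSmoothParameters
            {
                Smoothing = 0.5f,
                Correction = 0.5f,
                Prediction = 0.5f,
                JitterRadius = 0.05f,
                MaxDeviationRadius = 0.04f
            });

            Skeleton[] skeletons = new Skeleton[sensor.SkeletonStream.FrameSkeletonArrayLength];

            // AllFramesReady fires when a matching pair of color and skeleton frames is available
            sensor.AllFramesReady += (s, e) =>
            {
                using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
                using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
                {
                    if (colorFrame == null || skeletonFrame == null) return;

                    byte[] pixels = new byte[colorFrame.PixelDataLength];
                    colorFrame.CopyPixelDataTo(pixels);
                    skeletonFrame.CopySkeletonDataTo(skeletons);

                    Console.WriteLine("color bytes: " + pixels.Length + ", skeleton slots: " + skeletons.Length);
                }
            };

            sensor.Start();
            Console.ReadLine();
            sensor.Stop();
        }
    }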

Kinect hangs up suddenly after working pretty well for a few seconds. How can I fix it?

I tried using "Kinect for Windows" on my Mac. The environment setup seems to have gone well, but something seems to be wrong. When I start some samples such as
OpenNI-Bin-Dev-MacOSX-v1.5.4.0/Samples/Bin/x64-Release/Sample-NiSimpleViewer
or others, the sample application starts and seems to work quite well at the beginning, but after a few seconds (10 to 20 seconds), the movement shown in the application's window halts and never resumes. It seems that the application becomes unable to fetch data from the Kinect after a certain point, once some seconds have passed.
I don't know whether the libraries, their dependencies, or the Kinect hardware itself (invisibly broken or something) is at fault, and I really want to know how to find out which it is.
Could anybody tell me how I can fix this issue, please?
My environment is shown below:
Mac OS X v10.7.4 (MacBook Air, Core i5 1.6 GHz, 4 GB of memory)
Xcode 4.4.1
Kinect for Windows
OpenNI-Bin-Dev-MacOSX-v1.5.4.0
Sensor-Bin-MacOSX-v5.1.2.1
I followed instruction here about libusb: http://openkinect.org/wiki/Getting_Started#Homebrew
and when I try using libfreenect (I know it's separate from OpenNI + SensorKinect), its sample applications say "Number of devices found: 0", which makes no sense to me since I definitely connected my Kinect to the MBA...
Unless you're booting into Windows, forget about Kinect for Windows.
Regarding libfreenect and OpenNI: in most cases you'll use one or the other, so think about which functionality you need.
If it's basic RGB + depth image access (and possibly motor and accelerometer), libfreenect is your choice.
If you need the RGB + depth images plus skeleton tracking and (hand) gestures (but no motor or accelerometer access), use OpenNI. Note that if you use the unstable (dev) versions, you should use Avin's SensorKinect driver.
The easiest thing to do is a nice clean install of OpenNI.
Also, if it helps, you can use a creative coding framework like Processing or OpenFrameworks.
For Processing I recommend SimpleOpenNI.
For OpenFrameworks you can use ofxKinect, which ties into libfreenect, or ofxOpenNI. Download the OpenFrameworks package on the FutureTheatre Kinect Workshop wiki, as it includes both addons and some really nice examples.
When you connect the Kinect device to the machine, have you provided external power to it? The device will appear connected to the computer on USB power alone, but it will not be able to transfer data, as it needs the external power supply.
Also, which Kinect sensor are you using? If it is a new Kinect device (designed for Windows), it may have a different device signature, which may cause the OpenNI drivers to play up. I'm not 100% sure on this one, but I've only ever tried OpenNI with an Xbox 360 sensor.

recognizing facial expressions using Kinect SDK

I am trying to do some work using Kinect and the Kinect SDK.
I was wondering whether it is possible to detect facial expressions (e.g. wink, smile, etc.) using the Kinect SDK, or to get raw data that can help in recognizing these.
Can anyone kindly suggest any links for this? Thanks.
I am also working on this and I considered 2 options:
Face.com API:
there is a C# client library and there are a lot of examples in their documentation
EmguCV:
this guy talks about basic face detection using EmguCV and the Kinect SDK, and you can use it to recognize faces
I have currently stopped developing this, but if you complete it, please post a link to your code.
This is currently not featured within the Kinect for Windows SDK due to the limitations of Kinect in producing high-resolution images. That being said, libraries such as OpenCV and AForge.NET have been successfully used for finger and face detection, both on the raw images returned by Kinect and on RGB video streams from webcams. I would use these computer vision libraries as a starting point.
Just a note: MS is releasing the "Kinect for PC" along with a new SDK version in February. This has a new "Near Mode" which will offer better resolution for close-up images. Face and finger recognition might be possible with this. You can read an MS press release here, for example:
T3.com
The new Kinect SDK 1.5 has been released and contains face detection and tracking.
You can download the latest SDK here,
and check this website for more details about Kinect face tracking.
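As a rough illustration of what the SDK 1.5+ face tracking API gives you for expressions, the sketch below is condensed from the way the Developer Toolkit's FaceTrackingBasics-WPF sample wires things up; it reads the animation unit coefficients each frame. The thresholds are invented and only meant to show the idea, not tuned values.

    using System;
    using System.Linq;
    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.FaceTracking; // ships with the Developer Toolkit face tracking samples

    class ExpressionProbe
    {
        static void Main()
        {
            KinectSensor sensor = KinectSensor.KinectSensors[0];
            sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
            sensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
            sensor.SkeletonStream.Enable();

            byte[] colorPixels = new byte[sensor.ColorStream.FramePixelDataLength];
            short[] depthPixels = new short[sensor.DepthStream.FramePixelDataLength];
            Skeleton[] skeletons = new Skeleton[sensor.SkeletonStream.FrameSkeletonArrayLength];
            FaceTracker faceTracker = new FaceTracker(sensor);

            sensor.AllFramesReady += (s, e) =>
            {
                using (var c = e.OpenColorImageFrame())
                using (var d = e.OpenDepthImageFrame())
                using (var k = e.OpenSkeletonFrame())
                {
                    if (c == null || d == null || k == null) return;
                    c.CopyPixelDataTo(colorPixels);
                    d.CopyPixelDataTo(depthPixels);
                    k.CopySkeletonDataTo(skeletons);

                    // Pick the first tracked user for face tracking
                    Skeleton user = skeletons.FirstOrDefault(sk => sk.TrackingState == SkeletonTrackingState.Tracked);
                    if (user == null) return;

                    FaceTrackFrame face = faceTracker.Track(
                        sensor.ColorStream.Format, colorPixels,
                        sensor.DepthStream.Format, depthPixels,
                        user);
                    if (!face.TrackSuccessful) return;

                    // Animation units are rough expression coefficients, roughly -1..+1
                    var au = face.GetAnimationUnitCoefficients();
                    if (au[AnimationUnit.LipStretcher] > 0.4f)
                        Console.WriteLine("looks like a smile");
                    if (au[AnimationUnit.BrowLower] > 0.5f)
                        Console.WriteLine("looks like a frown");
                }
            };

            sensor.Start();
            Console.ReadLine();
            sensor.Stop();
        }
    }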