Kinect infrared camera not working

I am using the Kinect 2 with the newest available SDK, version 2.0. Everything works except IR: I tested it with both the SDK's infrared demo example and Kinect Studio, and both show a black screen. I also looked at the code, and capturing Kinect IR frames does not raise any errors; the frames just consist entirely of minimum values.
This is quite strange, as I thought IR was used to calculate depth, and I can read depth information successfully. I also checked (with my cellphone camera) that the IR emitter is turned on and off correctly, so the data just isn't being received correctly for some reason.
After encountering the problem I did a fresh install on another computer, as I suspected I had broken my system somehow. I got exactly the same results after installing the Kinect SDK: everything except infrared works.
Has anyone seen this kind of behaviour before?
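For concreteness, here is a minimal sketch (C#, Kinect SDK 2.0) of the kind of check described above: open an InfraredFrameReader and print the minimum and maximum pixel values of each frame. The API names are from the Microsoft.Kinect assembly; the diagnostic printout itself is only illustrative.

using System;
using System.Linq;
using Microsoft.Kinect;

class IrCheck
{
    static void Main()
    {
        // Open the default sensor and subscribe to infrared frames.
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();
        InfraredFrameReader reader = sensor.InfraredFrameSource.OpenReader();

        reader.FrameArrived += (s, e) =>
        {
            using (InfraredFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                ushort[] data = new ushort[frame.FrameDescription.LengthInPixels];
                frame.CopyFrameDataToArray(data);
                // If every value sits at the minimum, the IR stream is effectively black.
                Console.WriteLine("IR min={0} max={1}", data.Min(), data.Max());
            }
        };

        Console.ReadLine();   // keep the process alive while frames arrive
        reader.Dispose();
        sensor.Close();
    }
}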

Check this out:
https://social.msdn.microsoft.com/Forums/en-US/70dcceb7-8d2f-485f-b3e9-f2d4b399fbe7/kinect-v2-infrared-not-working?forum=kinectv2sdk
Try updating your graphics card drivers.

Related

Kinect getting started & troubleshooting

I am planning to design a gesture-based virtual trial room using the Kinect for Xbox. I am new to Kinect and Android application development. Which software do I need to download to get started? I have downloaded the Kinect SDK v2.0.
Which software is used to write the code here? I have also downloaded Brekel Kinect Pro Body Trial v1.38 (32-bit), which recognises gestures.
Which platform is good for getting started with gestures?
Some people say OpenCV, others OpenNI or OpenKinect. I am not able to differentiate between them. Could someone give me some idea about them?
For now, go with the Kinect for Windows SDK; nothing else is needed to get the Kinect started. You can track all the joints and even control the Kinect's tilt motor, but I don't think you can do anything with Android. To get started, you can refer to this video series, since it explains everything starting from the device itself.
Kinect getting started
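To give an idea of what this looks like in practice, here is a short sketch (C#, Kinect for Windows SDK 1.x) that enables joint (skeleton) tracking and moves the tilt motor. It only illustrates the standard Microsoft.Kinect calls; it is not a complete application.

using System;
using System.Linq;
using Microsoft.Kinect;

class SkeletonAndTilt
{
    static void Main()
    {
        // Pick the first connected sensor.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(k => k.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += (s, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);
                // Each tracked skeleton exposes 20 joints (head, hands, knees, ...).
            }
        };

        sensor.Start();
        sensor.ElevationAngle = 10;   // tilt the sensor head up by 10 degrees

        Console.ReadLine();
        sensor.Stop();
    }
}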

Kinect - 2 features simultaneously

I am trying to use two features of the Kinect:
Captured video - I followed this guide:
http://social.msdn.microsoft.com/Forums/en-US/kinectsdk/thread/4ee6e7ca-123d-4838-82b6-e5816bf6529c
and succeeded in using the Kinect as a webcam, then used DirectShow to capture the video. Works just fine.
Skeleton - I use the Kinect SDK 1.7 and the skeleton feature works great!
The problem: these two features don't work simultaneously.
Each of them works great by itself, but they just don't work together.
I have also tried checking the captured video in Skype's video settings section while running Skeleton Basics from the Kinect for Windows Developer Toolkit 1.7.
Do you know why this happens and how I can fix it, so I can enjoy the two features simultaneously?
Thanks a lot,
Guy.
That shouldn't happen. I'm also working on a virtual dressing room concept and I can access the Kinect joints and the video stream at the same time. I'm using XNA, so it's easy for me to get the video buffer after enabling both streams on the same sensor:
kinectSensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
kinectSensor.SkeletonStream.Enable(new TransformSmoothParameters());
I don't know what your approach is.
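For comparison, here is a sketch (C#, Kinect SDK 1.7) of reading the color and skeleton streams from the same KinectSensor instance, rather than mixing DirectShow webcam capture with the SDK. The handler body is only illustrative.

using System;
using System.Linq;
using Microsoft.Kinect;

class ColorPlusSkeleton
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(k => k.Status == KinectStatus.Connected);
        if (sensor == null) return;

        // Enable both streams on the one sensor object.
        sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
        sensor.SkeletonStream.Enable();

        sensor.AllFramesReady += (s, e) =>
        {
            using (ColorImageFrame color = e.OpenColorImageFrame())
            using (SkeletonFrame skel = e.OpenSkeletonFrame())
            {
                if (color == null || skel == null) return;

                byte[] pixels = new byte[color.PixelDataLength];
                color.CopyPixelDataTo(pixels);            // BGRA video frame

                Skeleton[] skeletons = new Skeleton[skel.SkeletonArrayLength];
                skel.CopySkeletonDataTo(skeletons);       // joints for up to six bodies
            }
        };

        sensor.Start();
        Console.ReadLine();
        sensor.Stop();
    }
}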

Color and depth stream don't work anymore

Until yesterday afternoon I was using a Kinect for Xbox 360 with my computer, a MacBook Pro 15" Late 2011, whose specifications are available here. I use Windows 7, installed natively (no virtual machines). The SDK version I had installed was 1.0.
All of a sudden, the Kinect stopped working today. Initially I thought it was an error in my code, but I noticed that the program got stuck right at the beginning, when I called the KinectSensor.Start() method.
I started looking for information on the internet. I read about a solution based on reinstalling the drivers; it did not work, so I then tried installing version 1.6 of the SDK. Unfortunately, that did not work either.
At this point I read that there can be compatibility issues with certain USB host controllers, such as the Intel 5 Series/3400 Series Chipset USB host controller. In my case, however, there should be no such problem (there never was one until yesterday).
To check whether the problem was really due to the sensor rather than my application, I ran one of the test applications provided with the SDK, Kinect Explorer, and encountered the same problem there. After waiting about a minute after Kinect Explorer starts, I can see neither the color stream, nor the depth stream, nor any skeleton information. The only thing I can do is tilt the Kinect up and down, changing the angle of its neck, and even the microphone array seems to work properly.
I read two interesting posts about this kind of problem: this and this, neither of which has been answered.
In the first of the two links, the user who reports the problem says the hardware was compromised. I thought the same thing myself, until I started Kinect Explorer again, initially with the sensor unplugged. Once the program had started, I plugged in the cable and noticed that Kinect Explorer marked the sensor as Connected. After a short initialization phase I could see the color stream again, while the depth stream showed an image of uniform color (green-gray).
This situation lasted a few seconds, after which the image froze and the problem came back. Also, the FPS value sometimes drops from 30 to 29.
I am able to reproduce this latter situation only after keeping the Kinect unplugged for a while (10 minutes is enough).
How can I solve this strange and frustrating problem? Is it possible to restore the Kinect sensor and make it work again, or do I have to conclude that the sensor is irretrievably broken?
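As a starting point for narrowing this down, a small diagnostic sketch (C#, Kinect SDK 1.x) can print the sensor status before calling Start() and listen for status changes; a status such as NotPowered, InsufficientBandwidth or Error would point toward a power/USB problem rather than the application code. The printout format is my own; the API calls are from Microsoft.Kinect.

using System;
using Microsoft.Kinect;

class StatusCheck
{
    static void Main()
    {
        // Report any change in connection state (unplug, replug, power loss, ...).
        KinectSensor.KinectSensors.StatusChanged += (s, e) =>
            Console.WriteLine("Status changed: " + e.Status);

        foreach (KinectSensor sensor in KinectSensor.KinectSensors)
        {
            // NotPowered, InsufficientBandwidth or Error here points away from
            // the application code and toward USB, power, or hardware.
            Console.WriteLine("Sensor found, status: " + sensor.Status);

            if (sensor.Status == KinectStatus.Connected)
            {
                sensor.Start();                 // the call that reportedly hangs
                Console.WriteLine("Start() returned, streams can now be enabled");
                sensor.Stop();
            }
        }

        Console.ReadLine();
    }
}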

Kinect hangs suddenly after working well for a few seconds. How can I fix it?

I tried using "Kinect for Windows" on my Mac. The environment setup seems to have gone well, but something appears to be wrong. When I start one of the samples, such as
OpenNI-Bin-Dev-MacOSX-v1.5.4.0/Samples/Bin/x64-Release/Sample-NiSimpleViewer
or others, the sample application starts and seems to work quite well at first, but after a few seconds (10 to 20), the image in the application window freezes and never updates again. It seems that at some point the application becomes unable to fetch data from the Kinect.
I don't know whether the libraries, their dependencies, or the Kinect hardware itself is at fault (the hardware could be invisibly broken), and I really want to know how to determine which it is.
Could anybody tell me how I can fix this issue?
My environment is shown below:
Mac OS X v10.7.4 (MacBook Air, Core i5 1.6 GHz, 4 GB of memory)
Xcode 4.4.1
Kinect for Windows
OpenNI-Bin-Dev-MacOSX-v1.5.4.0
Sensor-Bin-MacOSX-v5.1.2.1
I followed instruction here about libusb: http://openkinect.org/wiki/Getting_Started#Homebrew
Also, when I try using libfreenect (I know it's separate from OpenNI+SensorKinect), its sample applications say "Number of devices found: 0", which makes no sense to me since my Kinect is definitely connected to the MBA...
Unless you're booting into Windows, forget about Kinect for Windows.
Regarding libfreenect and OpenNI: in most cases you'll use one or the other, so think about which functionality you need.
If it's basic RGB+depth image access (and possibly motor and accelerometer control), libfreenect is your choice.
If you need RGB+depth images plus skeleton tracking and (hand) gestures (but no motor or accelerometer access), use OpenNI. Note that if you use the unstable (dev) versions, you should use Avin's SensorKinect driver.
The easiest thing to do is a nice clean install of OpenNI.
Also, if it helps, you can use a creative coding framework like Processing or OpenFrameworks.
For Processing I recommend SimpleOpenNI.
For OpenFrameworks you can use ofxKinect, which ties into libfreenect, or ofxOpenNI. Download the OpenFrameworks package on the FutureTheatre Kinect Workshop wiki, as it includes both addons and some really nice examples.
When you connect the Kinect to the machine, have you provided external power to it? The device will appear connected to the computer on USB power alone, but it will not be able to transfer data, as it needs the external power supply.
Also, which Kinect sensor are you using? If it is a new Kinect device (designed for Windows), it may have a different device signature, which may cause the OpenNI drivers to play up. I'm not 100% sure about this, but I've only ever tried OpenNI with an Xbox 360 sensor.

Can the Kinect SDK be run with saved Depth/RGB videos, instead of a live Kinect?

This question relates to the Kaggle/CHALEARN Gesture Recognition challenge.
You are given a large training set of matching RGB and depth videos that were recorded from a Kinect. I would like to run the Kinect SDK's skeletal tracking on these videos, but after a lot of searching I haven't found a conclusive answer as to whether this can be done.
Is it possible to use the Kinect SDK with previously recorded Kinect video, and if so, how? Thanks for the help.
It is not a feature of the SDK itself; however, you can use something like the Kinect Toolbox OSS project (http://kinecttoolbox.codeplex.com/), which provides skeleton record and replay functionality (so you don't need to stand in front of your Kinect each time). You do, however, still need a Kinect plugged into your machine to use the runtime.
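To illustrate, here is a very rough sketch of how the Kinect Toolbox's record/replay feature is typically used. The class and member names below (KinectRecorder, KinectReplay, KinectRecordOptions, and the Kinect.Toolbox.Record namespace) are quoted from memory of the Toolbox project and may not match the current release exactly, so treat them as assumptions and check the project documentation.

using System.IO;
using Kinect.Toolbox.Record;   // assumed namespace of the Toolbox record/replay classes
using Microsoft.Kinect;

class RecordReplaySketch
{
    // While a real Kinect is running, record skeleton frames to a stream.
    static KinectRecorder StartRecording(Stream output)
    {
        KinectRecorder recorder = new KinectRecorder(KinectRecordOptions.Skeletons, output);
        // Inside the sensor's SkeletonFrameReady handler you would then call:
        //   recorder.Record(skeletonFrame);
        return recorder;
    }

    // Replay the recorded frames later, without standing in front of the sensor.
    static void Replay(Stream input)
    {
        KinectReplay replay = new KinectReplay(input);
        replay.SkeletonFrameReady += (s, e) =>
        {
            // e.SkeletonFrame exposes the replayed skeleton data (assumed member name).
        };
        replay.Start();
    }
}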