Kinect IR Emitter: Continuous or Pulsed?

I'm a student intern with a work project that uses Kinect cameras. Unfortunately I can't go into project details due to confidentiality, but I need to know whether the IR dot array emitted by the Kinect's IR projector is a continuous stream or pulsed. I'm asking only about the emitted IR light, not the reception by the IR camera. The light would be shining on other IR sensors within the environment that detect when something passes through their field of view, but I have been told that it would not interfere as long as the stream is continuous.
I would appreciate any help/information you guys could give.

The Kinect 360 camera projects a static pattern of unevenly distributed points. As far as I know, the pattern is continuous, not pulsed.
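If you want to check this yourself, a quick sanity test is to grab two consecutive IR frames and diff them. Here is a minimal Python sketch using the libfreenect Python wrapper (the freenect module); the device index and the 8-bit IR format are assumptions about your setup:

```python
# Sanity check: grab two IR frames and diff them. A static, continuously
# projected dot pattern should differ only by sensor noise between frames.
# Assumes the libfreenect Python wrapper is installed and a Kinect 360 is
# attached as device 0; VIDEO_IR_8BIT is the wrapper's 8-bit IR format.
import numpy as np
import freenect

ir1, _ = freenect.sync_get_video(0, freenect.VIDEO_IR_8BIT)
ir2, _ = freenect.sync_get_video(0, freenect.VIDEO_IR_8BIT)

diff = np.abs(ir1.astype(int) - ir2.astype(int))
print("mean frame-to-frame difference:", diff.mean())  # near zero => static
```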


Can you use DAQmx without an NI optical encoder?

Does DAQmx solely work with NI hardware, or can it work with other brands of equipment? If it cannot, how would I start to make a block diagram for an optical encoder that stores the position of a stepper motor? Sorry for the newbie question, thank you.
As @AndreTec mentioned, an Arduino would be a good solution if you have a moderate-speed motor. You can connect your encoder outputs to two interrupt pins.
However, I encourage you to use plain serial communication between the Arduino and LabVIEW to avoid data loss, since to the best of my knowledge no LabVIEW add-on deals with interrupts.
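If you go the plain-serial route, the host side just does line-based reads. Here is a minimal Python sketch using pyserial as a stand-in for LabVIEW's VISA Read; the port name, baud rate, and one-count-per-line message format are all assumptions about your Arduino sketch:

```python
# Minimal host-side sketch of the serial approach, using pyserial as a
# stand-in for LabVIEW's VISA Read. Assumes the Arduino's two interrupt
# pins count quadrature edges and that it prints one position per line,
# e.g. "1234\n", at 115200 baud on COM3 (all assumptions).
import serial

with serial.Serial("COM3", 115200, timeout=1) as port:
    while True:
        line = port.readline().strip()
        if line:
            position = int(line)  # encoder count reported by the Arduino
            print("stepper position:", position)
```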

Use PsychoPy to activate MS Kinect (or other IR devices)

Is there a way to activate the MS Kinect from within PsychoPy?
I am using PsychoPy for an experiment in which infrared (IR) cameras capture participants' movements. I want to automatically send a marker that is visible to the IR cameras from the PsychoPy environment. The idea is to use the Kinect or another USB IR device (e.g., an Asus PrimeSense sensor, or a simple USB-mounted IR LED) to send markers for certain events in my experiment (meaning whenever event X happens in PsychoPy, an IR signal should be emitted from the Kinect or another IR device).
(I cannot use the audio jack to trigger an IR LED, since I need the audio output for the experiment.)
Thanks!
PsychoPy can send signals over the serial or parallel ports or connect to specific equipment like LabJack. See the API docs here: http://www.psychopy.org/api/serial.html, http://www.psychopy.org/api/parallel.html, http://www.psychopy.org/api/hardware.html.
If you can find or build a piece of hardware with LEDs that can respond to one of the ways in which PsychoPy can communicate, then yes, you could control IR LED pulses as required.
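For example, if your IR LED is wired to a parallel-port pin, a minimal PsychoPy sketch could look like this (the port address and pin number are assumptions for your machine, and the LED driver circuit is your own hardware):

```python
# Sketch: pulse a parallel-port pin from PsychoPy to drive an external IR LED.
# The port address (0x0378) and the choice of pin 2 are assumptions.
from psychopy import core, parallel

port = parallel.ParallelPort(address=0x0378)

def ir_marker(duration=0.01):
    """Raise pin 2 for `duration` seconds so the IR cameras see a flash."""
    port.setPin(2, 1)
    core.wait(duration)
    port.setPin(2, 0)

# ... inside your trial loop, whenever event X happens:
ir_marker()
```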

Turning off the Kinect IR emitter

I'd like to use two XBOX Kinect cameras simultaneously, but their IR signals interfere.
Therefore, I'd like to be able to switch their IR emitters on and off in an alternating fashion. I read that the Microsoft SDK has a ForceInfraredEmitterOff flag that can be set, but that it does not work with Xbox Kinects.
Other sources on the web say that the IR emitter stops when you stop the depth stream. However, that does not seem to work.
I'm using the openni2 python bindings provided by PrimeSense. Is there any way I can shut the IR emitters off with that?
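Not a confirmed answer, but with the PrimeSense bindings the usual place to look is the generic device property setter (assuming the bindings expose Device.set_property, which wraps oniDeviceSetProperty) with the vendor-specific emitter property from PS1080.h. A sketch of the idea; the property ID is deliberately left for you to fill in from your OpenNI2 headers, and whether the Kinect driver honors it at all is exactly the open question:

```python
import ctypes
from primesense import openni2

# Vendor-specific property ID: look up XN_MODULE_PROPERTY_EMITTER_STATE in
# PS1080.h for your OpenNI2 distribution and paste the value here.
EMITTER_STATE = None

def set_emitter(dev, on):
    """Ask the driver to switch the IR emitter; drivers may refuse."""
    if EMITTER_STATE is None:
        raise RuntimeError("fill in EMITTER_STATE from PS1080.h first")
    dev.set_property(EMITTER_STATE, ctypes.c_int(1 if on else 0))

openni2.initialize()
dev = openni2.Device.open_any()
depth = dev.create_depth_stream()
depth.start()                 # emitter normally lights with the depth stream

if EMITTER_STATE is not None:
    set_emitter(dev, False)   # hand the scene over to the second Kinect
    set_emitter(dev, True)

depth.stop()
openni2.unload()
```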

Libfreenect VS OpenNI

So I know this question has been asked before, but most of the other answers date from when both OpenNI and libfreenect were still being developed. My questions are:
1) What state are they in now?
2) What are the differences between the two (pros, cons, and anything else)?
3) Specifically for skeleton tracking, which is better and gives more data about the skeleton? (For example, the Microsoft SDK gives data for 20 joints; is it the same in these two, more, less?)
Libfreenect is mainly a driver which exposes the Kinect device's features:
- depth stream
- IR stream
- color(RGB) stream
- motor control
- LED control
- accelerometer
It does not provide any advanced processing features like scene segmentation, skeleton tracking, etc.
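To illustrate, here is roughly what that device-level access looks like through libfreenect's Python wrapper (a sketch assuming one Kinect 360 on index 0 and the wrapper built with NumPy support):

```python
# libfreenect from Python: raw frames plus motor/LED control, no tracking.
import freenect

ctx = freenect.init()
dev = freenect.open_device(ctx, 0)
freenect.set_tilt_degs(dev, 10)            # motor control
freenect.set_led(dev, freenect.LED_GREEN)  # LED control
freenect.close_device(dev)
freenect.shutdown(ctx)

# The synchronous helpers manage the device themselves:
depth, _ = freenect.sync_get_depth()       # 11-bit depth as a NumPy array
rgb, _ = freenect.sync_get_video()         # RGB frame
freenect.sync_stop()
```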
On the other hand, OpenNI allows generic access to the Kinect's features (mainly the image streams), but also provides rich processing features such as:
- scene segmentation
- skeleton tracking
- hand detection and tracking
- gesture recognition
- user interface elements
etc.
but no low-level control of device features like the motor/LED/accelerometer.
As opposed to libfreenect, which AFAIK works only with the Kinect sensor, OpenNI works with the Kinect but also with other sensors, such as the Asus Xtion Pro and Carmine.
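For comparison, here is a minimal depth grab through OpenNI2's Python bindings (the primesense package); open_any() attaches to whichever supported sensor is present, so the identical code runs against a Kinect, Xtion Pro, or Carmine:

```python
# Depth grab via OpenNI2: device-agnostic access to the image streams.
import numpy as np
from primesense import openni2

openni2.initialize()
dev = openni2.Device.open_any()   # Kinect, Xtion Pro, Carmine...
depth = dev.create_depth_stream()
depth.start()

frame = depth.read_frame()
buf = frame.get_buffer_as_uint16()
img = np.frombuffer(buf, dtype=np.uint16).reshape(frame.height, frame.width)
print("depth at image center (mm):", img[frame.height // 2, frame.width // 2])

depth.stop()
openni2.unload()
```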
You've mentioned the Kinect SDK. It's good to bear in mind that there are multiple Kinect sensors:
- Kinect for Xbox
- Kinect for Windows
The Kinect for Windows sensor, for example, allows a near mode and has a longer range.
I don't know how the skeleton tracking differs.
Also, there is an MS Kinect-OpenNI bridge project, and OpenNI2 plays nice with the Kinect.

How to detect heart pulse rate without using any instrument in iOS sdk?

I am currently working on an application where I need to find the user's heart rate. I found plenty of applications that do this, but I am not able to find a single private or public API that supports it.
Is there any framework available that could be helpful here? I was also wondering whether the UIAccelerometer class could help, and what level of accuracy it could achieve.
How would one implement this feature: by putting a finger on the iPhone camera, by placing the microphone on the jaw or wrist, or some other way?
Is there any way to detect changes in blood circulation and find the heart rate from that, or from UIAccelerometer? Any API or sample code? Thank you.
There is no API for detecting heart rate; these apps do it in a variety of ways.
Some use the accelerometer to measure how the device shakes with each pulse. Others use the camera lens, with the flash on, and detect blood moving through the finger by measuring the changes in light level.
Various DSP techniques can be used to discern very low-level periodic signals from a long enough set of samples taken at an appropriate sample rate (accelerometer readings or reflected light color).
Some of the advanced math functions in the Accelerate framework API can be used as building blocks for these various DSP techniques. An explanation would require several chapters of a Digital Signal Processing textbook, so that might be a good place to start.
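To make the DSP idea concrete, here is a small sketch in Python/NumPy as a stand-in for Accelerate's vDSP FFT routines: it recovers a pulse rate from mean-brightness samples by finding the spectral peak in the plausible heart-rate band. The 30 Hz sample rate and 30-second window are assumptions:

```python
# Estimate heart rate from a series of mean brightness samples taken from
# the camera with the flash on (NumPy here standing in for vDSP).
import numpy as np

def heart_rate_bpm(brightness, fs=30.0):
    """Estimate heart rate from evenly spaced brightness samples."""
    x = brightness - np.mean(brightness)      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # Only consider physiologically plausible rates: 40-200 bpm.
    band = (freqs >= 40 / 60.0) & (freqs <= 200 / 60.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

# Synthetic check: a 72 bpm signal buried in noise.
fs = 30.0
t = np.arange(0, 30, 1 / fs)
samples = np.sin(2 * np.pi * (72 / 60.0) * t) + 0.5 * np.random.randn(t.size)
print(round(heart_rate_bpm(samples, fs)), "bpm")
```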