A virtual Kinect device

Like Daemon Tools, which creates a virtual drive for loading ISO files, is there any way to compile and run our Kinect programs (basically for testing) without the Kinect device? I understand that the code would normally require the Kinect's depth and IR sensors.

Try Fakenect from OpenKinect:
http://openkinect.org/wiki/Fakenect
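Fakenect stands in for libfreenect and replays a pre-recorded dump (captured once on a machine that does have a Kinect), so an unmodified libfreenect program runs without hardware. As a rough sketch, assuming libfreenect's sync wrapper is installed and treating the recording path as an example, a tiny depth test you could replay looks like this:

```c
/* Sketch: a libfreenect program that can run against Fakenect playback.
 * After recording a session on real hardware, replay it along the lines of:
 *   fakenect /path/to/recording ./depth_test
 * (the wrapper points the program at the recording instead of a device) */
#include <stdio.h>
#include <stdint.h>
#include <libfreenect/libfreenect_sync.h>

int main(void)
{
    void *depth;
    uint32_t timestamp;

    /* Grab one 640x480 frame of 11-bit depth data from device 0 --
     * the code cannot tell a real Kinect from Fakenect playback. */
    if (freenect_sync_get_depth(&depth, &timestamp, 0,
                                FREENECT_DEPTH_11BIT) != 0) {
        fprintf(stderr, "no depth frame (no device or recording found?)\n");
        return 1;
    }
    uint16_t *pixels = (uint16_t *)depth;
    printf("center pixel raw depth: %u (timestamp %u)\n",
           pixels[240 * 640 + 320], timestamp);
    freenect_sync_stop();
    return 0;
}
```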

I don't think that is possible. The device is what interacts with the real world, so how are you going to get depth data without it and check whether your program works?

Limit USB power output

I work with an embedded device that has a USB host port. I would like to connect an iPhone to it and communicate via USB. I have done development on this and ported the functionality to connect to usbmux on the iPhone, and I have successful communication; however, there is another problem.
All development was done with the iPhone connected to a powered USB hub that was connected to my device. As soon as I connected it directly, the iPhone began, after enumeration, to drain the battery of my embedded device, causing a voltage drop that makes my device turn off.
I know that after enumeration USB devices can draw up to 500 mA from the USB port, but I was wondering if there is a way to limit that to 100 mA (while still having the iPhone registered).
I found various questions about controlling the voltage on the data pins or VCC of the USB port, and I understand that's not possible; I'm looking for a software solution (although hardware solutions are welcome).
tl;dr: Is there a way to supply the iPhone with less than 500 mA after enumeration? Could I do this in software, or do I need a hardware solution? I don't want to turn the port on/off, just limit the power draw of the iPhone.
NOTE: I am using Windows CE 6.0; if this is something that can only be done by modifying the drivers, or by having direct access, that is no problem.
P.S. If there is a way to do this in *nix (or some other open-source OS) so that I could look at the source code and port it to Windows CE, please let me know.
When a device shares its available configurations (see USB chapter 9), it specifies how much power it requires for each configuration. The host should look at all the available configurations and choose which one it wants.
In practice, however, these things don't work so smoothly.
The last time I looked at this, Windows always chose the first configuration. MacOS always chose the lowest power configuration (or highest, I can't remember). I never looked at WinCE or Linux.
If you're writing/modifying the driver, you can set your own rules for which configuration to choose, including looking for one that's 'self powered'. The iPhone, however, might only have one descriptor that always requests 500 mA, bus powered. If so, then you're pretty much screwed, since there's no way to let the iPhone know it's not OK to draw power.
That being said, I believe all the iPhone accessories are actually USB hosts (as opposed to USB devices), and given that they don't always supply power, the iPhone must be capable of enumerating as self-powered.
I like the answer by Russ Schultz but I want to add another one:
No.
The configuration descriptor of the peripheral device, the iPhone in this case, contains bMaxPower. If you enumerate this device, you also accept its power demand. It is not possible to supply less, let's say 300 mA, once you have already enumerated the device with the 500 mA descriptor, if that is what you wanted.
If the device provides multiple configurations, you are, as Russ mentioned, free to write a driver that selects the configuration with less power. Hopefully, the device will then only consume the granted power.
Many peripheral devices just don't care. Most devices only provide one configuration with 500 mA. And there are a lot of devices that simply consume more than they declare...
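Since the question asks about an open-source starting point to port: here is a minimal host-side sketch with libusb-1.0 on Linux (not WinCE; the VID/PID are placeholders, and this is an illustration rather than a fix) that inspects each configuration's bMaxPower and selects the lowest-power one. If the iPhone only exposes a single 500 mA configuration, as both answers suggest, there is nothing better to select:

```c
/* Sketch: list a device's configurations and pick the lowest-power one.
 * Assumes libusb-1.0; the VID/PID below are placeholders. */
#include <stdio.h>
#include <libusb-1.0/libusb.h>

int main(void)
{
    libusb_context *ctx = NULL;
    libusb_init(&ctx);

    /* Placeholder IDs: replace with the real vendor/product ID. */
    libusb_device_handle *h = libusb_open_device_with_vid_pid(ctx, 0x05ac, 0x0000);
    if (!h) { fprintf(stderr, "device not found\n"); return 1; }

    libusb_device *dev = libusb_get_device(h);
    struct libusb_device_descriptor dd;
    libusb_get_device_descriptor(dev, &dd);

    int best = -1, best_ma = 501;
    for (int i = 0; i < dd.bNumConfigurations; i++) {
        struct libusb_config_descriptor *cfg;
        if (libusb_get_config_descriptor(dev, i, &cfg) != 0)
            continue;
        int ma = cfg->MaxPower * 2;          /* bMaxPower is in 2 mA units */
        printf("config %d: bMaxPower = %d mA\n", cfg->bConfigurationValue, ma);
        if (ma < best_ma) { best_ma = ma; best = cfg->bConfigurationValue; }
        libusb_free_config_descriptor(cfg);
    }

    /* If the device only offers one 500 mA configuration (likely for the
     * iPhone), this changes nothing -- as both answers above point out. */
    if (best >= 0)
        libusb_set_configuration(h, best);

    libusb_close(h);
    libusb_exit(ctx);
    return 0;
}
```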

Kinect depth data ONLY

Is there a way in Linux (Raspbian) to capture only the depth data stream from a Kinect? I'm trying to reduce the amount of processing needed to capture Kinect information, so I want to ship the data stream to another computer to assemble the data.
Note:
I have freenect installed, but anything that requires OpenGL will not run on Raspbian.
I have installed this example, which captures the data stream with a black-and-white visual depth display.
librekinect is a Linux kernel module that lets you use the depth image like a standard webcam. It's known to work with the Raspberry Pi.
But if you want to use libfreenect for full video/depth/motor support, you'll need a more powerful board like the ODROID XU-3 Lite. By the way, libfreenect only requires OpenGL for some of its examples; the rest of the project compiles and runs fine without it.
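To illustrate the depth-only path with libfreenect, here is a minimal sketch (no OpenGL anywhere; the frame count and the stub in the callback are placeholders for shipping the buffer over the network):

```c
/* Sketch: depth-only capture with libfreenect, no OpenGL required.
 * Shipping the buffer to another machine is left as a stub. */
#include <stdio.h>
#include <stdint.h>
#include <libfreenect/libfreenect.h>

static volatile int frames = 0;

static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp)
{
    /* 'depth' is a 640x480 buffer of 11-bit depth values (uint16_t).
     * Here you would write it to a socket instead of processing locally. */
    frames++;
}

int main(void)
{
    freenect_context *ctx;
    freenect_device *dev;

    if (freenect_init(&ctx, NULL) < 0) return 1;
    if (freenect_open_device(ctx, &dev, 0) < 0) return 1;

    freenect_set_depth_callback(dev, depth_cb);
    freenect_set_depth_mode(dev,
        freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
    freenect_start_depth(dev);          /* note: video is never started */

    while (frames < 100 && freenect_process_events(ctx) >= 0)
        ;                                /* pump USB events until 100 frames */

    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    printf("captured %d depth frames\n", frames);
    return 0;
}
```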

What's the best way to determine if a HID device driver can be written in user space on OSX?

I need to write a number of drivers for both HID USB devices and some old serial devices. The drivers are to pull data off the device and then send it over to an application that consumes it. Since the Apple docs mention that a lot of USB and HID communication can be done from user space, I had assumed that I would not need to write a kernel extension, at least not for the HID devices. Could someone tell me a more solid way to determine this?
Thanks!
If you're writing a single application that must talk to one or more USB HID devices you may well find you can just access the devices straight from the application using the application-level USB APIs.
A kernel driver would be more for something like a networking or mass storage device that needs to integrate with the kernel to be available to multiple applications.
This Apple document Common QA and Roadmap for USB Software Development on Mac OS X goes into some detail on the matter and links to example code too.
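As a practical feasibility check, a short user-space sketch against the IOHIDManager API (part of IOKit's HID family; the matching dictionary is left NULL here to match every HID device) can enumerate devices and receive input reports without any kernel extension:

```c
/* Sketch: read HID input values from user space via IOHIDManager.
 * Build with: cc hid.c -framework IOKit -framework CoreFoundation */
#include <stdio.h>
#include <IOKit/hid/IOHIDManager.h>

static void input_cb(void *context, IOReturn result,
                     void *sender, IOHIDValueRef value)
{
    IOHIDElementRef elem = IOHIDValueGetElement(value);
    printf("usage page 0x%x, usage 0x%x, value %ld\n",
           IOHIDElementGetUsagePage(elem),
           IOHIDElementGetUsage(elem),
           (long)IOHIDValueGetIntegerValue(value));
}

int main(void)
{
    IOHIDManagerRef mgr =
        IOHIDManagerCreate(kCFAllocatorDefault, kIOHIDOptionsTypeNone);

    IOHIDManagerSetDeviceMatching(mgr, NULL);   /* NULL = match all HID devices */
    IOHIDManagerRegisterInputValueCallback(mgr, input_cb, NULL);
    IOHIDManagerScheduleWithRunLoop(mgr, CFRunLoopGetCurrent(),
                                    kCFRunLoopDefaultMode);

    if (IOHIDManagerOpen(mgr, kIOHIDOptionsTypeNone) != kIOReturnSuccess) {
        fprintf(stderr, "IOHIDManagerOpen failed\n");
        return 1;
    }

    CFRunLoopRun();   /* input_cb fires as HID reports arrive */
    return 0;
}
```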

Kinect hangs up suddenly after working pretty well for a few seconds. How can I fix it?

I tried using "Kinect for Windows" on my Mac. The environment set-up seems to have gone well, but something seems to be wrong. When I start samples such as
OpenNI-Bin-Dev-MacOSX-v1.5.4.0/Samples/Bin/x64-Release/Sample-NiSimpleViewer
or others, the sample application starts and seems to work quite well at the beginning, but after a few seconds (10 to 20), the motion shown on the application's screen halts and never resumes. It seems that the application becomes unable to fetch data from the Kinect after a certain point.
I don't know whether the libraries, their dependencies, or the Kinect hardware itself (invisibly broken, perhaps) is at fault, and I really want to know how to detect which it is.
Could anybody tell me how I can fix this issue?
My environment is shown below:
Mac OS X v10.7.4 (MacBook Air, Core i5 1.6 GHz, 4 GB of memory)
Xcode 4.4.1
Kinect for Windows
OpenNI-Bin-Dev-MacOSX-v1.5.4.0
Sensor-Bin-MacOSX-v5.1.2.1
I followed the instructions here about libusb: http://openkinect.org/wiki/Getting_Started#Homebrew
and when I try using libfreenect (I know it's separate from OpenNI+SensorKinect), its sample applications say "Number of devices found: 0", which makes no sense to me since I have certainly connected my Kinect to the MBA...
Unless you're booting into Windows, forget about Kinect for Windows.
Regarding libfreenect and OpenNI: in most cases you'll use one or the other, so think about which functionality you need.
If it's basic RGB+depth image access (and possibly motor and accelerometer access), libfreenect is your choice.
If you need RGB+depth images plus skeleton tracking and (hand) gestures (but no motor or accelerometer access), use OpenNI. Note that if you use the unstable (dev) versions, you should use Avin's SensorKinect driver.
The easiest thing to do is a nice clean install of OpenNI.
Also, if it helps, you can use a creative coding framework like Processing or OpenFrameworks.
For Processing I recommend SimpleOpenNI
For OpenFrameworks you can use ofxKinect, which ties to libfreenect, or ofxOpenNI. Download the OpenFrameworks package on the FutureTheatre Kinect Workshop wiki, as it includes both addons and some really nice examples.
When you connect the Kinect device to the machine, have you provided external power to it? The device will appear connected to a computer on USB power alone, but it will not be able to transfer data, as it needs the external power supply.
Also, which Kinect sensor are you using? If it is a new Kinect device (designed for Windows), it may have a different device signature, which may cause the OpenNI drivers to play up. I'm not 100% sure on this one, but I've only ever tried OpenNI with an Xbox 360 sensor.
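One quick way to tell a power or driver problem apart from a bug in your own code is to ask libfreenect how many devices it can enumerate; a minimal sketch (assuming libfreenect is installed) is:

```c
/* Sketch: check whether libfreenect can enumerate the Kinect at all.
 * 0 devices usually points to power/cabling/driver issues, not your code. */
#include <stdio.h>
#include <libfreenect/libfreenect.h>

int main(void)
{
    freenect_context *ctx;
    if (freenect_init(&ctx, NULL) < 0) {
        fprintf(stderr, "freenect_init failed\n");
        return 1;
    }
    printf("Number of devices found: %d\n", freenect_num_devices(ctx));
    freenect_shutdown(ctx);
    return 0;
}
```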

Suitable embedded system to be used for image processing and GPS/GSM

I am working on a project where I would like to install an embedded system in a certain location. The system is provided with a camera and has to perform image-processing functions on the images obtained from it.
The system must also be fitted with GPS and GSM modules.
I am in the process of choosing the hardware needed. I am thinking of using a BeagleBoard or an FPGA; which one is more suitable for my application? Do you recommend other boards? Do you know of any GSM or GPS modules that can be interfaced with these boards?
Thank you
If your image-processing algorithms are very CPU-intensive, I'd suggest you consider FPGAs. Otherwise, the BeagleBoard is fine.
What is the interface to your camera? USB / FireWire / I2C / other? If the Beagle Board supports what you need, and can handle the processing, that's probably the easiest way to go - FireWire and USB interfaces are not exactly trivial to do on an FPGA, unless you can get a board and a matching Linux distro for it, where everything is configured and working out of the box (and it's probably going to be expensive then...).
GPS modules typically connect over a simple serial connection, so that shouldn't be an issue for either solution.
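To illustrate the GPS side: most modules stream line-oriented NMEA sentences over that serial connection, so the host code is small. A minimal POSIX sketch (the device path /dev/ttyUSB0 and 9600 baud are assumptions that depend on your module and wiring) might be:

```c
/* Sketch: read NMEA sentences from a GPS module over a serial port.
 * /dev/ttyUSB0 and 9600 baud are typical but depend on your module. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>

int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                     /* raw mode: no line editing/echo */
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tcsetattr(fd, TCSANOW, &tio);

    char line[256];
    size_t n = 0;
    char c;
    while (read(fd, &c, 1) == 1) {
        if (c == '\n') {
            line[n] = '\0';
            /* $GPGGA carries fix quality, latitude/longitude and altitude */
            if (strncmp(line, "$GPGGA", 6) == 0)
                printf("%s\n", line);
            n = 0;
        } else if (n < sizeof(line) - 1 && c != '\r') {
            line[n++] = c;
        }
    }
    close(fd);
    return 0;
}
```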