Kinect motor control via Processing

I'm hacking the Kinect using some SimpleOpenNI-based Processing apps for a talk I plan to give soon, and I found an API that appears to control the motor. There is a moveKinect method that appears to have been added to the main ContextWrapper interface, but I can't seem to get it to work. Looking through the SVN history and release notes, it appears to have been added last year with a note explaining that it doesn't work with the newest drivers (5.1.02, Linux64). I've tried calling the method with values in degrees and in radians, but nothing happens: no error and no movement. Has anyone else played with this? I'm running the second-to-latest Processing 2.0 build (the link to Processing 2.0.1 doesn't work) and the latest SimpleOpenNI package I could download.

SimpleOpenNI is a wrapper for OpenNI, which provides access to the RGB/IR/depth streams and to the middleware for body/hand detection, but it does not expose hardware such as the LED, accelerometer, or motor.
You should try Kinect P5, which uses libfreenect behind the scenes and supports motor control. Bear in mind you won't have support for the middleware.
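If it helps to see what that amounts to underneath, here is a minimal sketch against the libfreenect C API, which is what Kinect P5 wraps. The device index 0 and the error handling are illustrative, and the header path varies by install; the tilt range is roughly -30 to +30 degrees:

    #include <stdio.h>
    #include <libfreenect/libfreenect.h>  /* header path varies by install */

    int main(void)
    {
        freenect_context *ctx;
        freenect_device *dev;

        /* Initialise libfreenect and open the first Kinect on the bus. */
        if (freenect_init(&ctx, NULL) < 0) {
            fprintf(stderr, "freenect_init failed\n");
            return 1;
        }
        if (freenect_open_device(ctx, &dev, 0) < 0) {
            fprintf(stderr, "could not open device 0\n");
            freenect_shutdown(ctx);
            return 1;
        }

        /* Tilt the head to +15 degrees. */
        freenect_set_tilt_degs(dev, 15.0);

        freenect_close_device(dev);
        freenect_shutdown(ctx);
        return 0;
    }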
If you need both middleware and hardware access, you can try openFrameworks with the ofxOpenNI addon. It has a hardware class that works on OS X and Linux (run as sudo), allowing use of both the middleware and the motor.

Related

Media Foundation - Custom Media Source & Sensor Profile

I am writing an application for previewing, capturing, and snapshotting camera input, and I am using Media Foundation for the input side. One of the requirements is that this works with a Blackmagic Intensity Pro 4K capture card, which behaves similarly to a normal camera.
Media Foundation is unfortunately unable to create an IMFMediaSource object from this device. Some research led me to believe that I could implement my own media source.
Then I started looking at samples, and tried to unravel the documentation.
At that point I encountered some questions:
Does anyone know if what I am trying to do is possible?
A Windows example shows a basic implementation of a source, but uses IMFSensorProfile. What is a sensor profile, and what should I use one for? There is almost no documentation about it.
Can somebody explain how implementing a custom media source works, in terms of what actually happens on the inside? Am I simply creating my own format, or does it allow me to pull my own frames from the camera and process them myself? I tried following the MSDN guide, but no luck so far.
Specifics:
Using WPF with C#, but I can write C++ and use it from C#.
Rendering to screen uses Direct3D9.
The capture card specs can be found on their site (Blackmagic Intensity Pro 4K).
The specific problem that occurs is that I can acquire the IMFActivator for the device, but I am not able to activate it. On activation, an MF_E_INVALIDMEDIATYPE error occurs.
The IMFActivator can tell me that the device should output a UYVY format.
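For reference, this is roughly the enumeration/activation path where that error surfaces; a minimal C++ sketch with the error handling trimmed, assuming device index 0 is the capture card:

    #include <windows.h>
    #include <mfapi.h>
    #include <mfidl.h>
    #include <cstdio>

    #pragma comment(lib, "mf.lib")
    #pragma comment(lib, "mfplat.lib")
    #pragma comment(lib, "mfuuid.lib")
    #pragma comment(lib, "ole32.lib")

    int main()
    {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);
        MFStartup(MF_VERSION);

        // Ask for every video capture device the system knows about.
        IMFAttributes *attrs = nullptr;
        MFCreateAttributes(&attrs, 1);
        attrs->SetGUID(MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
                       MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID);

        IMFActivate **devices = nullptr;
        UINT32 count = 0;
        MFEnumDeviceSources(attrs, &devices, &count);

        if (count > 0) {
            // This is the call that fails with MF_E_INVALIDMEDIATYPE for
            // the Intensity Pro 4K: the activator exists, but it cannot
            // produce a media source from the device.
            IMFMediaSource *source = nullptr;
            HRESULT hr = devices[0]->ActivateObject(IID_PPV_ARGS(&source));
            printf("ActivateObject returned 0x%08lx\n", (unsigned long)hr);
            if (source) source->Release();
        }

        for (UINT32 i = 0; i < count; ++i) devices[i]->Release();
        CoTaskMemFree(devices);
        if (attrs) attrs->Release();
        MFShutdown();
        CoUninitialize();
        return 0;
    }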
My last resort is using the DeckLinkAPI, but since I am working with several different types of cameras, I do not want to be stuck with another dependency.
Any pointers or help would be appreciated. Let me know if anything is unclear or needs more detail.

Failed Win8 App Certification: 3.10 - If your app includes an ARM or a Neutral package it must support Microsoft Direct3D feature level 9_1

My C# app uses a C++ WinRT component I've written to get a list of system fonts using DirectX.
This is based on this example:
http://msdn.microsoft.com/en-us/library/dd756582(v=VS.85).aspx
My app is published in the Store, but my latest update failed the review process on point 3.10, complaining about my use of Direct3D and how it might not run on ARM tablets. As far as I know I'm not using Direct3D, and the only DirectX feature I'm using is GetSystemFontCollection.
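For context, that call is DirectWrite's font enumeration, along these lines (a minimal desktop C++ sketch; no Direct3D device is ever created):

    #include <dwrite.h>
    #include <cstdio>

    #pragma comment(lib, "dwrite.lib")

    int main()
    {
        // Create a shared DirectWrite factory. Note this is DirectWrite,
        // not Direct3D, so no D3D feature level is involved.
        IDWriteFactory *factory = nullptr;
        HRESULT hr = DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED,
                                         __uuidof(IDWriteFactory),
                                         reinterpret_cast<IUnknown **>(&factory));
        if (FAILED(hr)) return 1;

        IDWriteFontCollection *fonts = nullptr;
        if (SUCCEEDED(factory->GetSystemFontCollection(&fonts, FALSE))) {
            printf("system font families: %u\n", fonts->GetFontFamilyCount());
            fonts->Release();
        }
        factory->Release();
        return 0;
    }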
How can I make sure I don't fail this requirement and do I need to remove some rogue reference in my component to Direct3D?
Also, why am I failing this now, when it passed before?
Did you target all three platforms, or choose Any CPU in your release?
Does this page help:
http://msdn.microsoft.com/en-gb/library/windows/apps/hh994923.aspx
It looks like you may have inadvertently requested a higher feature level.
I submitted again and included a note to the tester explaining that my app didn't use any Direct 3D features, and I told them the exact DirectX function I did use.
I still failed, but the Direct3D reason was no longer one of the reasons.
Apparently my app is crashing, which was another failure reason the first time round; I had assumed that was related to the Direct3D problem. I can't reproduce the crash, but at least I now know I can stop looking at my use of DirectX. That was a red herring.

PlayStation Eye with LabVIEW

Does anyone know how to integrate the PlayStation Eye with LabVIEW? Can a driver somehow be used to allow LabVIEW to recognize it as a webcam?
You should be able to do this with NI Vision (install IMAQdx and the Vision Development Module); it seems to be a DirectShow device, which IMAQdx can handle. Alternatively, try the code found on this page, which uses the original DLLs: http://www.labviewforum.de/thread-21279.html
As there are no official DLLs for the PS3 Eye on Windows, the only options are to use the third-party drivers from Code Laboratories or to interface with the hardware directly via raw USB commands. Code Laboratories' PS3 implementation, however, does not seem to be fully conformant with the DirectShow standard. You can get a PS3 Eye to work with LabVIEW (via DirectShow and IMAQ), but you will be limited in the usable frame rates.
I tried to interface with the Code Laboratories DLL directly, but got stuck on a strange error with the second function I tried (see the thread already referenced: http://www.labviewforum.de/thread-21279.html). However, it seems there is now a VI Package available for the PS3 Eye that supports LabVIEW under OS X at the full available frame rate. More information can be found here:
http://labview.epfl.ch/
Hope this helps.
Best Regards,
Jan

Supporting rotation sensors in Symbian across multiple devices in one executable

I'm puzzling over how some applications appear to support both of the rotation sensor APIs for Symbian, specifically the Sensor API and the Sensor Framework (both the 5th ed. version and the 3rd ed. FP2 backport).
For example, I believe that Gravity supports rotation on the N95 and also on newer models from the same binary (though I could be wrong there).
If I use the Sensor Framework, my app will not install on an N95 (it gives me a System Error -1), whereas if I use the Sensor API (RRSensor) it will only install on an N95 and no other phones. This is most likely due to the libraries available on those devices.
I am trying to find some way of abstracting things such that I can use exactly the same binary for all devices. The only alternative I can see is trying to use ECOM plugins and then installing the relevant library using conditionals in my PKG file.
Does anyone know of a better/easier way?
If you need to use different APIs, I suggest making multiple DLLs that implement the same interface and selectively installing the right one depending on the device ID; SIS files allow that.
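A hedged sketch of that pattern in plain C++ (all names here are hypothetical): the app talks only to one abstract interface, each DLL implements it against a different sensor API, and the PKG file installs whichever DLL matches the device.

    // Common header, compiled into the app and into both DLLs.
    // RotationListener and RotationInput are made-up names for illustration.
    class RotationListener {
    public:
        virtual void OnRotation(int xDeg, int yDeg, int zDeg) = 0;
        virtual ~RotationListener() {}
    };

    class RotationInput {
    public:
        virtual int Start(RotationListener &aListener) = 0;  // begin delivering events
        virtual void Stop() = 0;
        virtual ~RotationInput() {}
    };

    // Each DLL exports a factory with this signature at the same ordinal;
    // one builds on the Sensor Framework, the other on the S60 3rd ed.
    // Sensor API (RRSensorApi).
    extern "C" RotationInput *NewRotationInput();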

How do I get input from an Xbox 360 controller?

I'm writing a program that needs to take input from an Xbox 360 controller. The input will then be sent wirelessly to an RC helicopter that I am building.
So far, I've learned that this can be done using either the XInput library from DirectX, or the Input framework in XNA.
I'm wondering if there are any other options available. The scope of my program is rather small, and having to install a large gaming library like DirectX or XNA seems excessive. Furthermore, I'd like the program to be cross-platform rather than Microsoft-specific.
Is there a simple lightweight way I can grab the controller input with something like Python?
Edit to answer some comments:
The copter will have 6 total propellers, arranged in 3 co-axial pairs. Basically, it will be very similar to this, only it will cost about $1,000 rather than $15,000. It will use an Arduino for onboard processing, and Zigbee for wireless control.
The 360 controller was selected because it is well designed. It is very ergonomic and has all of the control inputs needed. For those familiar with helicopter controls, the left joystick will control the collective, the right joystick will control the pitch and roll, and the analog triggers will control the yaw. The analog triggers are a big feature of the 360 controller; the PlayStation controllers and most others do not have them.
I have a webpage for the project, but it is still pretty sparse. I do plan on documenting the whole design though, so eventually it will be interesting.
http://tricopter.googlecode.com
On a side note, would it kill Google to have a blog feature for googlecode projects?
I would like the 360 controller input program to run on both Linux and Windows if possible. Eventually, though, I'd like to hook the controller directly to an embedded microcontroller board (such as an Arduino) so that I don't have to go through a computer, but it's not a high priority at the moment.
It is not all that difficult. As mentioned earlier, you can use the SDL library to read the status of the Xbox controller and then do whatever you'd like with the values.
There is an SDL tutorial at http://sdl.beuc.net/sdl.wiki/Handling_Joysticks which is fairly useful; a minimal polling sketch follows the axis list below.
Note that an Xbox controller has the following:
two joysticks:
left joystick is axes 0 & 1;
left trigger is axis 2;
right joystick is axes 3 & 4;
right trigger is axis 5
one hat (the D-pad)
11 SDL buttons
two of them are the joystick center presses
two triggers (act as axes, see above)
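Here is a minimal SDL 1.2 polling sketch in C++ that opens the first pad and reads the stick axes using the numbering above (device index 0 and the 200-iteration loop are just for illustration):

    #include <SDL/SDL.h>
    #include <cstdio>

    int main(int, char **)
    {
        if (SDL_Init(SDL_INIT_JOYSTICK) < 0) {
            fprintf(stderr, "SDL_Init: %s\n", SDL_GetError());
            return 1;
        }
        if (SDL_NumJoysticks() < 1) {
            fprintf(stderr, "no joystick found\n");
            return 1;
        }

        SDL_Joystick *pad = SDL_JoystickOpen(0);   // first controller
        printf("%s: %d axes, %d buttons\n", SDL_JoystickName(0),
               SDL_JoystickNumAxes(pad), SDL_JoystickNumButtons(pad));

        for (int i = 0; i < 200; ++i) {            // poll for ~10 seconds
            SDL_JoystickUpdate();                  // refresh state, no event loop needed
            Sint16 lx = SDL_JoystickGetAxis(pad, 0);   // left stick X
            Sint16 ly = SDL_JoystickGetAxis(pad, 1);   // left stick Y
            Sint16 rx = SDL_JoystickGetAxis(pad, 3);   // right stick X
            Sint16 ry = SDL_JoystickGetAxis(pad, 4);   // right stick Y
            printf("\rL(%6d, %6d)  R(%6d, %6d)", lx, ly, rx, ry);
            fflush(stdout);
            SDL_Delay(50);
        }

        SDL_JoystickClose(pad);
        SDL_Quit();
        return 0;
    }

On Linux this builds with something like g++ pad.cpp `sdl-config --cflags --libs`.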
The upcoming SDL v1.3 will also support force feedback (a.k.a. haptics).
I assume, since this thread is several years old, you have already done something, so this post is primarily to inform future visitors.
PyGame can read joysticks, which is what the X360 controller shows up as on a PC.
Well, if you really don't want to add a dependency on DirectX, you can use the old Windows joystick API; see Windows Multimedia -> Joystick Reference in the Platform SDK.
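As a rough sketch of that route, it comes down to a single winmm call with no DirectX dependency (JOYSTICKID1 is just the first attached device):

    #include <windows.h>
    #include <mmsystem.h>
    #include <cstdio>

    #pragma comment(lib, "winmm.lib")

    int main()
    {
        JOYINFOEX ji;
        ji.dwSize  = sizeof(ji);
        ji.dwFlags = JOY_RETURNALL;   // poll every axis/button/POV at once

        // JOYSTICKID1 is the first attached joystick device.
        if (joyGetPosEx(JOYSTICKID1, &ji) != JOYERR_NOERROR) {
            fprintf(stderr, "no joystick attached\n");
            return 1;
        }
        printf("X=%lu Y=%lu Z=%lu buttons=0x%08lx\n",
               ji.dwXpos, ji.dwYpos, ji.dwZpos, ji.dwButtons);
        return 0;
    }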
The standard free cross-platform game library is Simple DirectMedia Layer, originally written to port Windows games to Unix (Linux) systems. It's a very basic, lightweight API that tends to support a minimal subset of features on each system, and it has bindings for most major languages. It has very basic joystick and gamepad support (no force feedback, for example), but it might be sufficient for your needs.
Perhaps the Mono.Xna library has added GamePad support, which would provide the cross platform functionality you were looking for:
http://code.google.com/p/monoxna/
As for the concern about the library being too heavyweight: sure, for this option it may be true. However, it could open up opportunities to do some nice visualization in the future.
Disclaimer: I'm not familiar with the status of the Mono.Xna project, so it may not have added this feature yet. But still, 'tis an option :-)