I have been working with Kinect gestures for a while now, and so far the available tools for creating gestures are limited to tracking whole-body movements, for instance swiping your arm to the left or right. The joint types available in the original Kinect SDK include the elbows, wrists, hands, shoulders, etc., but not finer details like the index finger, thumb, and middle finger. I mention all this because I am trying to create gestures involving only hand movements (like a victory sign or a thumbs up/down). Can anyone guide me through this? Is there a blog or website where code for hand movements is available?
I was developing an application with the Kinect a year ago, and back then it was very hard or nearly impossible to do this. Now Google shows me projects like this, so be sure to check it out. If you generally want to focus on hand gestures, I really advise you to use the Leap Motion.
My friends at SigmaRD have developed something called the SigmaNIL Framework. You can get it from the OpenNI website.
It offers "HandSegmentation", "HandSkeleton", "HandShape" and "HandGesture" modules which may cover your needs.
Also check out the rest of the OpenNI Middleware and Libraries that you can download from their website. Some of them also work with the Microsoft SDK.
I have a project just starting up that requires the kind of expertise I have none of (yet!). Basically, I want to be able to use the user's webcam to track the position of their index finger, and make a particular graphic follow their finger around, including scaling and rotating (side to side of course, not up and down).
As I said, this requires the kind of expertise I have very little of - my experience consists mostly of PHP and some Javascript. Obviously I'm not expecting anyone to solve this for me, but if anyone was able to direct me to a library or piece of software that could get me started, I'd appreciate it.
Cross compatibility is of course always preferred but not always possible.
Thanks a lot!
I suggest you start by reading about OpenCV:
OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision, developed by the Intel Russia research center in Nizhny Novgorod, and now supported by Willow Garage and Itseez. It is free for use under the open-source BSD license. The library is cross-platform. It focuses mainly on real-time image processing.
But first, since PHP and JavaScript are mainly used for web development, you should start reading about a programming language that is supported by OpenCV, such as C, C++, Java, Python, or even C# (using EmguCV).
Also, here are some nice tutorials to get you started with hand gesture recognition using OpenCV. Link
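To give you a feel for what those tutorials build up to, here is a minimal sketch, in Python with the cv2 bindings (my choice, the same calls exist in C++ and EmguCV), that segments a hand by skin colour and pulls out its contour. The webcam index, the HSV range, and the OpenCV 4.x findContours return signature are all assumptions you will likely need to adjust:

```python
# Minimal sketch: isolate a hand via HSV skin-colour thresholding and find its contour.
# Assumes a webcam at index 0 and fairly uniform lighting; the HSV range below is only
# a rough starting point and will need tuning for your skin tone and environment.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
lower_skin = np.array([0, 30, 60], dtype=np.uint8)
upper_skin = np.array([20, 150, 255], dtype=np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_skin, upper_skin)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # OpenCV 4.x signature: returns (contours, hierarchy); 3.x returns three values.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)   # assume the largest blob is the hand
        hull = cv2.convexHull(hand)                 # convexity defects between hull and contour
        cv2.drawContours(frame, [hand], -1, (0, 255, 0), 2)   # are a common way to count fingers
        cv2.drawContours(frame, [hull], -1, (255, 0, 0), 2)

    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

From there, counting convexity defects between the hull and the contour is the usual next step for distinguishing gestures like a fist from an open hand.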
Good luck!
I know this has been all over the net. I browsed a lot and found plenty of information about gesture recognition; however, I only ended up more confused, because most of it seems to rely on the official Microsoft Kinect SDK, which I don't want to use.
I need to develop for Linux and was wondering whether OpenNI and NITE give me the flexibility to recognize gestures.
I am trying to develop a sign language program, which recognizes gestures and draws things (spheres, cubes, etc.) on screen.
Could anyone give me clear guidelines for beginning this project? I have no clue where to start.
Any help is welcomed.
Thanks
To get started with understanding gestures, I suggest checking out a blog post I made a while back:
http://www.exceptontuesdays.com/gestures-with-microsoft-kinect-for-windows-sdk-v1-5/
This, plus the post it links to, goes into what makes up a "gesture" and how to implement one using the Kinect for Windows SDK.
You can also check out the Kinect Toolbox, which also does gestures for the official Kinect SDK:
http://kinecttoolbox.codeplex.com
These will give you a good understanding of how to deal with gestures. You can then move those concepts and ideas into an OpenNI environment.
If in your sign language system you want to use hand gestures, have a look at Kinect 3D handtracking. If it involves whole-body movement, then have a look at KineticSpace. Both tools work on Linux, but the first one also requires a CUDA-enabled GPU.
I think gesture recognition does not depend on whether you use the official Kinect SDK or OpenNI. If you can get the skeleton data or the depth image from the Kinect, you can extract a gesture or posture from the relations between joints over time. As far as I know, all of those SDKs provide that information.
I developed for the Kinect on Windows, but the principles are the same. I still suggest learning the principles of gesture recognition first; then you can find a way to do the recognition with other SDKs.
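As a rough, framework-agnostic illustration of that idea: a gesture is just a constraint on joint positions evaluated over a window of frames. The joint-dictionary format, the thresholds, and the assumption that x decreases as the hand moves to the user's left are all invented for this sketch; check your SDK's coordinate convention.

```python
# Illustrative sketch of skeleton-based gesture detection (here, a right-hand swipe left).
# Each frame you pass in a dict like {'right_hand': (x, y, z), 'right_shoulder': (x, y, z)}
# in metres; this format is a stand-in for whatever your SDK actually returns.
from collections import deque

WINDOW = 30        # roughly one second of frames at 30 fps
MIN_TRAVEL = 0.4   # metres the hand must travel to count as a swipe

class SwipeLeftDetector:
    def __init__(self):
        self.history = deque(maxlen=WINDOW)

    def update(self, joints):
        hand = joints.get('right_hand')
        shoulder = joints.get('right_shoulder')
        if hand is None or shoulder is None:
            self.history.clear()      # lost tracking, start over
            return False

        # Store the hand x-position relative to the shoulder so the user can stand anywhere.
        self.history.append(hand[0] - shoulder[0])
        if len(self.history) < WINDOW:
            return False

        samples = list(self.history)
        travel = samples[0] - samples[-1]   # positive if the hand moved left overall
        steady = all(b <= a + 0.02 for a, b in zip(samples, samples[1:]))  # no big reversals
        return travel > MIN_TRAVEL and steady
```

You feed it one joint dictionary per frame and act when update() returns True; static postures work the same way, except the constraint is on a single frame instead of a window.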
I'm developing a typical "Windows GUI" based app for iPhone using MONO technologies. I need to add a little AR based functionality to it. It is just about opening up the camera, and showing information to the user regarding nearby businesses.
How can I do this using mono?
Of course it is possible. I have created such a project and it works very nicely. It is quite complicated, and I would need three pages to explain it, plus time to do it, which I do not have.
In general, you need to look into:
CLLocationManager for location and compass.
MapKit, if you want to provide reverse geocoding information.
Implement an overlay view over the UIImagePickerController, which will act as your canvas.
And of course, drawing.
I hope these guidelines will get you started.
We're in the process of developing a desktop application which needs to record the user's screen once he clicks a button. I read a tutorial about Adobe AIR, which says it is easy to do with AIR: http://www.adobe.com/devnet/air/flex/articles/air_screenrecording.html
But our preference is Titanium as we've explored it a little bit. So I want to know: is that even possible? If yes, how can we get started?
There's also an interesting solution which uses Java applet for recording, as demonstrated here: http://www.screencast-o-matic.com/create?step=info&sid=default&itype=choose
But again, we're not sure about Java and would like to know how it can be done, or if it's even possible to run a Java applet in Titanium.
When you say "record screen", I'm assuming you mean video. Correct?
The only way to do this in Titanium Desktop right now is to take a bunch of screenshots and string them together (encoding would probably need to be done server-side).
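For the encoding half of that approach, a rough server-side sketch could look like the following. Python and OpenCV's VideoWriter are my own assumptions here (command-line ffmpeg would do the same job), and the frame folder, naming scheme, and frame rate are made up for the example:

```python
# Rough server-side sketch: stitch uploaded screenshots (frames/frame_0001.png,
# frames/frame_0002.png, ...) into a single video file.
import glob
import cv2

frames = sorted(glob.glob("frames/frame_*.png"))
if not frames:
    raise SystemExit("no screenshots found")

first = cv2.imread(frames[0])
height, width = first.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
# 5 fps, since client-side screenshots will arrive far slower than real video.
writer = cv2.VideoWriter("recording.mp4", fourcc, 5.0, (width, height))

for path in frames:
    img = cv2.imread(path)
    if img is not None and img.shape[:2] == (height, width):
        writer.write(img)

writer.release()
```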
Depending on how long your videos need to be, this probably won't work for you. I'm also not confident in how quickly you could capture screenshots, and if it would have a high enough frame rate to be usable.
Past that, a module could be developed for Desktop to support some native APIs to record video. That's not something I see on the horizon, though.
I hope this helps, albeit a rather dismal answer. -Dawson
I'm writing a program that needs to take input from an XBox 360 controller. The input will then be sent wirelessly to an RC Helicopter that I am building.
So far, I've learned that this can be done using either the XInput library from DirectX, or the Input framework in XNA.
I'm wondering if there are any other options available. The scope of my program is rather small, and having to install a large gaming library like DirectX or XNA seems excessive. Furthermore, I'd like the program to be cross-platform rather than Microsoft-specific.
Is there a simple lightweight way I can grab the controller input with something like Python?
Edit to answer some comments:
The copter will have 6 total propellers, arranged in 3 co-axial pairs. Basically, it will be very similar to this, only it will cost about $1,000 rather than $15,000. It will use an Arduino for onboard processing, and Zigbee for wireless control.
The 360 controller was selected because it is well designed. It is very ergonomic and has all of the control inputs needed. For those familiar with helicopter controls, the left joystick will control the collective, the right joystick will control the pitch and roll, and the analog triggers will control the yaw. The analog triggers are a big feature of the 360 controller; the PS and most other controllers do not have them.
I have a webpage for the project, but it is still pretty sparse. I do plan on documenting the whole design though, so eventually it will be interesting.
http://tricopter.googlecode.com
On a side note, would it kill Google to have a blog feature for googlecode projects?
I would like the 360 controller input program to run on both Linux and Windows if possible. Eventually, though, I'd like to hook the controller directly to an embedded microcontroller board (such as an Arduino) so that I don't have to go through a computer, but that's not a high priority at the moment.
It is not all that difficult. As the earlier answer mentioned, you can use the SDL library to read the status of the Xbox controller and then do whatever you'd like with it.
There is an SDL tutorial at http://sdl.beuc.net/sdl.wiki/Handling_Joysticks which is fairly useful.
Note that an Xbox controller has the following:
two joysticks:
  left joystick is axes 0 & 1;
  left trigger is axis 2;
  right joystick is axes 3 & 4;
  right trigger is axis 5
one hat (the D-pad)
11 SDL buttons (two of them are joystick center presses)
two triggers (which act as axes, see above)
The upcoming SDL v1.3 will also support force feedback (a.k.a. haptics).
I assume, since this thread is several years old, you have already done something, so this post is primarily to inform future visitors.
PyGame can read joysticks, which is what the X360 controller shows up as on a PC.
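A minimal polling loop along these lines (untested against an actual 360 pad on my side) would be a reasonable starting point. The axis indices follow the layout in the SDL answer above, but they can differ by driver and platform, so print them out and verify:

```python
# Minimal PyGame sketch that polls the first joystick; the wired 360 controller
# shows up as a standard joystick on both Windows and Linux.
import pygame

pygame.init()
pygame.joystick.init()

if pygame.joystick.get_count() == 0:
    raise SystemExit("no joystick found")

pad = pygame.joystick.Joystick(0)
pad.init()

clock = pygame.time.Clock()
while True:
    pygame.event.pump()                      # refresh the joystick state
    collective = -pad.get_axis(1)            # left stick, vertical
    roll = pad.get_axis(3)                   # right stick, horizontal (may differ by driver)
    pitch = pad.get_axis(4)                  # right stick, vertical
    yaw = pad.get_axis(5) - pad.get_axis(2)  # right trigger minus left trigger
    print(f"collective={collective:+.2f} roll={roll:+.2f} "
          f"pitch={pitch:+.2f} yaw={yaw:+.2f}")
    clock.tick(30)                           # ~30 Hz is plenty for sending commands to the Arduino
```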
Well, if you really don't want to add a dependency on DirectX, you can use the old Windows Joystick API -- Windows Multimedia -> Joystick Reference in the platform SDK.
The standard free cross-platform game library is Simple DirectMedia Layer (SDL), originally written to port Windows games to Unix (Linux) systems. It's a very basic, lightweight API that tends to support the minimal subset of features on each system, and it has bindings for most major languages. It has very basic joystick and gamepad support (no force feedback, for example), but it might be sufficient for your needs.
Perhaps the Mono.Xna library has added GamePad support, which would provide the cross-platform functionality you were looking for:
http://code.google.com/p/monoxna/
As far as the concerns about the library being too heavyweight go: sure, for this option that may be true ... however, it could open up opportunities to do some nice visualization in the future.
disclaimer: I'm not familiar with the status of the mono xna project, so it may not have added this feature yet. But still, 'tis an option :-)