I am not sure if this has been tried before but I am trying to use Kinect and detect gestures made by the Nao robot.
I have made a Kinect application, a gesture-based picture viewer, and it detects humans fine (obviously it does!). What I wanted to try (lazy as I am) was to see if I could use some (say, voice) command to tell the Nao to do a swipe-right gesture and have my application identify that gesture. The Nao can easily identify my command and perform a gesture. The problem, however, is that when I put the Nao in front of the Kinect sensor, the Kinect does not track it.
What I want to know is: are there some basics behind the Kinect's human body motion tracking that essentially fail when a robot is placed in front of it instead of a human?
PS: I have kept the Nao at the right distance from the sensor. I have also checked that the entire robot is in the sensor's field of view.
The NAO robot doesn't have the same proportions as a human, and moreover it is not the size of a human being (it is too short). For those reasons, classic skeleton detection doesn't recognize NAO as a human.
To make this work, you would have to take an existing skeleton detection algorithm and change its thresholds and constants. Sadly, I haven't heard of that kind of algorithm being open source...
Just let me know...
Related
I am trying to make a basic rhythm game in Godot, but with unique controls. A few years ago, I played a cool game called Fast Like a Fox. The controls were unique: you tapped on the back of your device to move your character, not on the screen. I thought the controls were cool, and I want to try to replicate them in a simple one-button rhythm game for mobile. Does anyone know if it would be possible for Godot to take that kind of input, either with a built-in function or something else?
Games like that read the accelerometer (and maybe other sensors), which Godot supports through its accelerometer, gravity, and gyroscope input methods. Accelerometers are accurate enough to read passwords as they're being typed, so you can even get a rough estimate of where the user is tapping. That is what the Fast Like a Fox use case comes down to: internally they poll the sensor and raise an event when particular changes happen on one or more axes. In your case, if you simply care about the user tapping anywhere, it might be enough to treat any sudden change as an event.
Try writing an app that displays the delta of each axis measurement, then tap your phone around; you'll figure it out. Remember to test under various conditions (device held upside down, lying on a bed, sitting on a chair, lying on one's side, etc.), since different axes will register the changes.
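To make the "treat any sudden change as an event" idea concrete, here is a minimal sketch in plain Python. The threshold, the cooldown, and the sample data are all made up for illustration; in Godot you would read the real acceleration vector each frame from the input API instead of iterating over a list.

```python
# Sketch of back-tap detection: flag a "tap" whenever the change in
# acceleration between consecutive sensor polls exceeds a threshold.
TAP_THRESHOLD = 4.0  # m/s^2 of sudden change; tune on a real device

def detect_taps(samples):
    """Return the indices at which a sudden jolt (tap) occurred.

    samples: list of (x, y, z) accelerometer readings, one per poll.
    """
    taps = []
    prev = samples[0]
    cooldown = 0
    for i, cur in enumerate(samples[1:], start=1):
        # Magnitude of the change across all three axes, so detection
        # works regardless of the device's orientation.
        delta = sum((c - p) ** 2 for c, p in zip(cur, prev)) ** 0.5
        if cooldown:
            cooldown -= 1  # ignore the rebound right after a tap
        elif delta > TAP_THRESHOLD:
            taps.append(i)
            cooldown = 1
        prev = cur
    return taps

# Simulated readings: resting near gravity on z, with one jolt at index 3.
readings = [
    (0.0, 0.0, 9.8),
    (0.1, 0.0, 9.8),
    (0.0, 0.1, 9.7),
    (2.5, 0.3, 14.0),  # the tap
    (0.1, 0.0, 9.8),
]
print(detect_taps(readings))  # [3]
```

The cooldown matters: when the device springs back to rest after a jolt, that rebound is itself a large delta, so without a short refractory period a single tap would register twice.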
I have a mouse from Speedlink that is able to do a lot of things, like changing the colours of its LEDs, but I can only do those things with the software provided by Speedlink.
Is it possible to code your own software that controls the LED lights of the mouse?
Yes, but you would need the hardware specifications to know what has to be sent to the mouse for it to accept the commands you're looking for. Usually these things are not published or readily accessible.
I bought two Microsoft Basic Optical mice, identical to the one which performed the functions I needed really well. The first one would grab and flip the grid, with the object staying in one position within the grid, on a right click of the mouse. I am using the mice with the Blender 2.79 3D modelling app. I plugged the two new mice into two computers to try them out; they would NOT do the grid grabbing and flipping, though they would perform the other functions. The grid-grabbing function is important because it lets you move the scene and inspect the model, as if you were walking around a solid object in the real world.
Good day,
I would like to take this opportunity to give my many thanks to the people of stackoverflow.com.
I have been new to coding (.NET) over the past year, and I have always found stackoverflow to be a fantastic base of knowledge for learning. I spent the last couple of weeks working, in depth, on a speech recognition project I am going to use with the upcoming release of Media Browser 3. Originally, the idea was to build a recognizer and have it control media. However, as I moved through the different namespaces for speech recognition, it led me into the realm of the Microsoft Kinect sensor. The more I use the Kinect device, the more I would like to use some of the skeleton tracking it has to offer. Which leads me to my question.
I am not interested in building a WPF application that displays a window of what the Kinect is seeing. This is part of a Forms application, in which I would like to support only two or three gestures.
The idea is for it to watch for three gestures and simulate a key press on the keyboard.
So first I enable the skeleton frame before the audio for the recognizer, because I had read on here somewhere that enabling the skeleton after the audio cancels the audio for some reason.
Then I add some event handlers to my form.
I added a skeletonFrameReady event handler.
I suppose my main questions would be: am I on the right track with skeleton tracking? Is it possible to do this from a Forms application without trying to draw the skeleton?
Thank you again,
I hope I made sense, sorry for my ignorance.
Ben
It is possible, of course. For gesture recognition you can compare the positions of the joints in the method that the skeletonFrameReady event calls (which is invoked several times per second).
If you want to recognize complex gestures (like waving a hand), I suggest you take a look at this page http://blogs.msdn.com/b/mcsuksoldev/archive/2011/08/08/writing-a-gesture-service-with-the-kinect-for-windows-sdk.aspx and download the sample code there (the link is hidden in the last paragraph :).
The main idea is checking predefined gesture segments in the correct order: if segment1 succeeds, look at segment2; if segment2 is paused, keep checking segment2 until it either succeeds or fails.
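The segment-ordering idea can be sketched language-agnostically; here is a minimal Python version. The class and function names, the pause limit, and the fake "frame" format (a dict of joint x-positions) are all illustrative assumptions, not the Kinect SDK's actual types, but the succeeded/pausing/failed state machine mirrors the approach described in the linked post.

```python
# A gesture is an ordered list of segment checks; each check inspects the
# current skeleton frame and reports whether its condition has been met.
SUCCEEDED, PAUSING, FAILED = "succeeded", "pausing", "failed"
MAX_PAUSES = 10  # frames a segment may stall before the gesture resets

class GestureDetector:
    def __init__(self, segments):
        self.segments = segments
        self.current = 0
        self.pauses = 0

    def update(self, frame):
        """Feed one skeleton frame; return True when the full gesture completes."""
        result = self.segments[self.current](frame)
        if result == SUCCEEDED:
            self.current += 1
            self.pauses = 0
            if self.current == len(self.segments):
                self.reset()
                return True
        elif result == PAUSING:
            self.pauses += 1
            if self.pauses > MAX_PAUSES:
                self.reset()
        else:  # FAILED
            self.reset()
        return False

    def reset(self):
        self.current = 0
        self.pauses = 0

# Toy "swipe right" with two segments: the hand starts left of the
# shoulder, then ends up right of it.
def hand_left_of_shoulder(f):
    return SUCCEEDED if f["hand_x"] < f["shoulder_x"] else FAILED

def hand_right_of_shoulder(f):
    if f["hand_x"] > f["shoulder_x"]:
        return SUCCEEDED
    return PAUSING  # hand still travelling

swipe = GestureDetector([hand_left_of_shoulder, hand_right_of_shoulder])
frames = [{"hand_x": -0.3, "shoulder_x": 0.0},
          {"hand_x": 0.1, "shoulder_x": 0.0},
          {"hand_x": 0.4, "shoulder_x": 0.0}]
print([swipe.update(f) for f in frames])  # [False, True, False]
```

In a real Forms application you would call `update` from the skeletonFrameReady handler with the tracked joint positions, and fire your simulated key press whenever it returns True.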
Hope this helps.
I recently purchased a Magic Mouse. It is fantastic and full of potential. Unfortunately, it is seriously hindered by the software support. I want to fix that. I have done quite a lot of research and these are my findings regarding the event chain thus far:
The Magic Mouse sends full multitouch events to the system.
Multitouch events are processed in the MultitouchSupport.framework (Carbon)
The events are interpreted in the framework and sent up to the system as normal events
When you scroll with one finger it sends actual scroll wheel events.
When you swipe with two fingers it sends a swipe event.
No NSTouch events are sent up to the system. You cannot use the NSTouch API to interact with the mouse.
After I discovered all of the above, I disassembled the MultitouchSupport.framework binary and, with some googling, figured out how to insert a callback of my own into the chain so I would receive the raw touch event data. If you enumerate the list of devices, you can attach to each device (trackpad and mouse). This finding would enable us to create a framework for using multitouch on the mouse, but only in a single application. See my post here: Raw Multitouch Tracking.
I want to add new functionality to the mouse across the entire system, not just a single app.
In an attempt to do so, I figured out how to use Event Taps to see if the lowest level event tap would allow me to get the raw data, interpret it, and send up my own events in its place. Unfortunately, this is not the case. The event tap, even at the HID level, is still a step above where the input is being interpreted in MultitouchSupport.framework.
See my event tap attempt here: Event Tap - Attempt Raw Multitouch.
An interesting side note: when a multitouch event is received, such as a swipe, the default case is hit and prints out an event number of 29. The header shows 28 as being the max.
On to my question, now that you have all the information and have seen what I have tried: what would be the best approach to extending the functionality of the Magic Mouse? I know I need to insert something at a low enough level to get the input before it is processed and predefined events are dispatched. So, to boil it down to single-sentence questions:
Is there some way to override the default callbacks used in MultitouchSupport.framework?
Do I need to write a kext and handle all the incoming data myself?
Is it possible to write a kext that sits on top of the kext that is handling the input now, and filters it after that kext has done all the hard work?
My first goal is to be able to dispatch a middle button click event if there are two fingers on the device when you click. Obviously there is far, far more that could be done, but this seems like a good thing to shoot for, for now.
Thanks in advance!
-Sastira
How does what is happening in MultitouchSupport.framework differ between the Magic Mouse and a glass trackpad? If it is based on IOKit device properties, I suspect you will need a KEXT that emulates a trackpad but actually communicates with the mouse. Apple have some documentation on Darwin kernel programming and kernel extensions specifically:
About Kernel Extensions
Introduction to I/O Kit Device Driver Design Guidelines
Kernel Programming Guide
(Personally, I'd love something that enabled pinch magnification and more swipe/button gestures; as it is, the Magic Mouse is a functional downgrade from the Mighty Mouse's four buttons and [albeit ever-clogging] 2D scroll wheel. Update: last year I wrote Sesamouse to do just that, and it does NOT need a kext (just a week or two staring at hex dumps :-) See my other answer for the deets and source code.)
Sorry I forgot to update this answer, but I ended up figuring out how to inject multitouch and gesture events into the system from userland via Quartz Event Services. I'm not sure how well it survived the Lion update, but you can check out the underlying source code at https://github.com/calftrail/Touch
It requires two hacks: using the private Multitouch framework to get the device input, and injecting undocumented CGEvent structures into Quartz Event Services. It was incredibly fun to figure out how to pull it off, but these days I recommend just buying a Magic Trackpad :-P
I've implemented a proof-of-concept of a userspace, customizable multi-touch event wrapper.
You can read about it here: http://aladino.dmi.unict.it/?a=multitouch (see it in the Wayback Machine)
--
all the best
If you get to that point, you may want to consider making the middle click three fingers on the mouse instead of two. I've thought about this middle-click issue with the Magic Mouse, and I notice that I often leave my 2nd finger on the mouse even though I am only pressing for a left click. So a "2 finger" click might be mistaken for a single left click, and it would also require more effort from the user, who would always have to keep the 2nd finger off the mouse. Therefore, if it's possible to detect, three fingers would cause less confusion and fewer headaches. I wonder where the first "middle button click" solution will come from, as I am anxious for my middle-click Exposé feature to return :) Best of luck.
I just bought a Magic Mouse and I like it very much. As a Mac developer it's even cooler. But there's one problem: is there already an API available for it? I want to use it in one of my applications to, for example, detect the user's finger positions, swipe or stretch gestures, etc...
Does anyone know if there's an API for it (and how to use it)?
The Magic Mouse does not use the NSTouch API. I have been experimenting with it and attempting to capture touch information. I've had no luck so far. The only touch method that is common to both the mouse and the trackpad is the swipeWithEvent: method. It is called for a two finger swipe on the device only.
It seems the touch input from the mouse is being interpreted somewhere else, then forwarded on to the public API. I have yet to find the private API that is actually doing the work.
Have a look here: http://www.iphonesmartapps.org/aladino/?a=multitouch
There's a fully working proof-of-concept using the CGEventPost method.
--
all the best!
I have not tested it, but I would be shocked if it didn't use NSTouch. NSTouch is the API you use to interact with the multi-touch trackpads on current MacBook Pros (and the new MacBooks that came out this week). You can check out the LightTable sample project to see how it is used.
It is part of AppKit, but it is a Snow Leopard only API.
I messed around with the app below before getting my Magic Mouse. I was surprised to find that the app also tracked the multi-touch points on the mouse.
There is a link in the comments to some source code that obtains the raw data similarly, but there is no source for this particular app.
http://lericson.blogg.se/code/2009/november/multitouch-on-unibody-macbooks.html