Extending Functionality of Magic Mouse: Do I Need a kext? - objective-c

I recently purchased a Magic Mouse. It is fantastic and full of potential. Unfortunately, it is seriously hindered by the software support. I want to fix that. I have done quite a lot of research and these are my findings regarding the event chain thus far:
The Magic Mouse sends full multitouch events to the system.
Multitouch events are processed in MultitouchSupport.framework (a private framework)
The events are interpreted in the framework and sent up to the system as normal events
When you scroll with one finger it sends actual scroll wheel events.
When you swipe with two fingers it sends a swipe event.
No NSTouch events are sent up to the system. You cannot use the NSTouch API to interact with the mouse.
After I discovered all of the above, I disassembled the MultitouchSupport.framework binary and, with some googling, figured out how to insert a callback of my own into the chain so I would receive the raw touch event data. If you enumerate the list of multitouch devices, you can attach a callback to each one (trackpad and mouse). This would let us build a framework for using multitouch on the mouse, but only within a single application. See my post here: Raw Multitouch Tracking.
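Roughly, the attach-a-callback approach looks like the sketch below. The declarations are reverse-engineered from the private framework (they circulate in the community), so the function names and especially the MTTouch field layout are best guesses that may differ between OS releases; you also have to link against /System/Library/PrivateFrameworks/MultitouchSupport.framework.

#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

// Reverse-engineered declarations for the private MultitouchSupport.framework.
// These are community best guesses; the struct layout in particular may vary
// between OS releases.
typedef struct { float x, y; } MTPoint;
typedef struct { MTPoint position, velocity; } MTVector;

typedef struct {
    int frame;
    double timestamp;
    int identifier, state, fingerID, handID;
    MTVector normalized;      // position and velocity, normalized to [0, 1]
    float size;
    int zero1;
    float angle, majorAxis, minorAxis;
    MTVector mm;              // position and velocity in millimeters
    int zero2[2];
    float density;
} MTTouch;

typedef void *MTDeviceRef;
typedef int (*MTContactCallbackFunction)(int device, MTTouch *touches, int numTouches,
                                         double timestamp, int frame);

CFMutableArrayRef MTDeviceCreateList(void);
void MTRegisterContactFrameCallback(MTDeviceRef device, MTContactCallbackFunction callback);
void MTDeviceStart(MTDeviceRef device, int unused);

static int touchCallback(int device, MTTouch *touches, int numTouches,
                         double timestamp, int frame) {
    for (int i = 0; i < numTouches; i++) {
        printf("touch %d at (%.3f, %.3f)\n", touches[i].identifier,
               touches[i].normalized.position.x, touches[i].normalized.position.y);
    }
    return 0;
}

int main(void) {
    // Attach the callback to every multitouch device (trackpad and Magic Mouse alike).
    CFArrayRef devices = MTDeviceCreateList();
    for (CFIndex i = 0; i < CFArrayGetCount(devices); i++) {
        MTDeviceRef device = (MTDeviceRef)CFArrayGetValueAtIndex(devices, i);
        MTRegisterContactFrameCallback(device, touchCallback);
        MTDeviceStart(device, 0);
    }
    CFRunLoopRun(); // keep running so the callbacks keep arriving
    return 0;
}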
I want to add new functionality to the mouse across the entire system, not just a single app.
In an attempt to do so, I figured out how to use event taps to see whether the lowest-level tap would allow me to get the raw data, interpret it, and send up my own events in its place. Unfortunately, this is not the case: the event tap, even at the HID level, is still a step above where the input is being interpreted in MultitouchSupport.framework.
See my event tap attempt here: Event Tap - Attempt Raw Multitouch.
An interesting side note: when a multitouch event such as a swipe is received, the default case of my tap's switch statement is hit and prints an event type of 29, even though the header lists 28 as the maximum.
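For reference, the tap itself is just a stripped-down, listen-only tap at the HID level, something like this sketch (not my exact code):

#include <ApplicationServices/ApplicationServices.h>
#include <stdio.h>

// Listen-only event tap at the HID level. Even this low, Magic Mouse input has
// already been interpreted by MultitouchSupport.framework: you see scroll and
// gesture events, not raw touches.
static CGEventRef tapCallback(CGEventTapProxy proxy, CGEventType type,
                              CGEventRef event, void *refcon) {
    switch (type) {
        case kCGEventScrollWheel:
            printf("scroll wheel event\n");
            break;
        default:
            // Multitouch gestures (swipes, etc.) arrive here as the undocumented type 29.
            printf("event type %d\n", (int)type);
            break;
    }
    return event; // listen-only: pass everything through unchanged
}

int main(void) {
    CFMachPortRef tap = CGEventTapCreate(kCGHIDEventTap, kCGHeadInsertEventTap,
                                         kCGEventTapOptionListenOnly,
                                         kCGEventMaskForAllEvents,
                                         tapCallback, NULL);
    if (!tap) return 1; // HID-level taps need root or assistive-device access
    CFRunLoopSourceRef source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes);
    CGEventTapEnable(tap, true);
    CFRunLoopRun();
    return 0;
}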
On to my question, now that you have all the information and have seen what I have tried: what would be the best approach to extending the functionality of the Magic Mouse? I know I need to insert something at a low enough level to get the input before it is processed and predefined events are dispatched. So, to boil it down to single-sentence questions:
Is there some way to override the default callbacks used in MultitouchSupport.framework?
Do I need to write a kext and handle all the incoming data myself?
Is it possible to write a kext that sits on top of the kext that is handling the input now, and filters it after that kext has done all the hard work?
My first goal is to be able to dispatch a middle button click event if there are two fingers on the device when you click. Obviously there is far, far more that could be done, but this seems like a good thing to shoot for, for now.
Thanks in advance!
-Sastira

How does what happens in MultitouchSupport.framework differ between the Magic Mouse and a glass trackpad? If it is based on IOKit device properties, I suspect you will need a kext that emulates a trackpad but actually communicates with the mouse. Apple has some documentation on Darwin kernel programming and on kernel extensions specifically:
About Kernel Extensions
Introduction to I/O Kit Device Driver Design Guidelines
Kernel Programming Guide
(Personally, I'd love something that enabled pinch magnification and more swipe/button gestures; as it is, the Magic Mouse is a functional downgrade from the Mighty Mouse's four buttons and [albeit ever-clogging] 2D scroll ball. Update: last year I wrote Sesamouse to do just that, and it does NOT need a kext, just a week or two of staring at hex dumps :-) See my other answer for the details and source code.)

Sorry I forgot to update this answer, but I ended up figuring out how to inject multitouch and gesture events into the system from userland via Quartz Event Services. I'm not sure how well it survived the Lion update, but you can check out the underlying source code at https://github.com/calftrail/Touch
It requires two hacks: using the private Multitouch framework to get the device input, and injecting undocumented CGEvent structures into Quartz Event Services. It was incredibly fun to figure out how to pull it off, but these days I recommend just buying a Magic Trackpad :-P
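As a much simpler illustration of user-space injection than the undocumented gesture structures Touch uses, posting an ordinary synthetic middle-button click (the original poster's first goal) needs nothing but public Quartz Event Services calls. A minimal sketch:

#include <ApplicationServices/ApplicationServices.h>

// Post a synthetic middle-button click at the given screen location using
// the public Quartz Event Services API.
static void postMiddleClick(CGPoint location) {
    CGEventRef down = CGEventCreateMouseEvent(NULL, kCGEventOtherMouseDown,
                                              location, kCGMouseButtonCenter);
    CGEventRef up   = CGEventCreateMouseEvent(NULL, kCGEventOtherMouseUp,
                                              location, kCGMouseButtonCenter);
    CGEventPost(kCGHIDEventTap, down);
    CGEventPost(kCGHIDEventTap, up);
    CFRelease(down);
    CFRelease(up);
}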

I've implemented a proof of concept of a customizable user-space multi-touch event wrapper.
You can read about it here: http://aladino.dmi.unict.it/?a=multitouch (the page is archived in the Wayback Machine)
--
all the best

If you get to that point, you may want to consider making the middle click three fingers on the mouse instead of two. I've thought about this middle-click issue with the Magic Mouse, and I notice that I often leave my second finger on the mouse even though I am only pressing for a left click. So a "2 finger" click might be mistaken for a single left click, and it would also require more effort from the user, who would always have to keep the second finger off the mouse. Therefore, if it's possible to detect, three fingers would cause less confusion and fewer headaches. I wonder where the first "middle button click" solution will come from, as I am anxious for my middle-click Exposé feature to return :) Best of luck.

Related

How to monitor for swipe gesture globally in OS X

I'd like to make an OS X application that runs in the background and performs some function when a four-finger swipe down is detected on the trackpad.
Seems easy enough. Apple's docs show almost exactly this here. Their example monitors for mouse down events. As a simple test, I put the following in applicationDidFinishLaunching: in my AppDelegate.
void (^handler)(NSEvent *e) = ^(NSEvent *e) {
    NSLog(@"Left Mouse Down!");
};
[NSEvent addGlobalMonitorForEventsMatchingMask:NSLeftMouseDownMask handler:handler];
This works as expected. However, changing NSLeftMouseDownMask to NSEventMaskSwipe does not work. What am I missing?
Well, the documentation for NSEvent's +addGlobalMonitorForEventsMatchingMask:handler: gives a list of the events it supports, and NSEventMaskSwipe is not listed, so it's to be expected that it doesn't work.
While the API obviously supports tracking gestures locally within your own application (through NSResponder), I believe gestures can't be tracked globally by design. Unlike key combinations, there are far fewer forms/types of gestures, essentially only:
pinch in/out (NSEventTypeMagnify)
rotations (NSEventTypeRotation)
directional swipes with X amount of fingers (NSEventTypeSwipe)
There's not as much freedom. With keys, you have plenty of modifiers (control, option, command, shift) plus the full range of alphanumeric keys, which makes for plenty of possible combinations, so it's easier to avoid conflicts between local and global events. Similarly, mouse events are region-based; clicking in one region can easily be differentiated from clicking in another (from both the program's and the user's point of view).
Because touch events allow so few combinations, I believe Apple might purposely be restricting global usage (as in, one app responding to one or more gestures for the whole system) to its own features (Mission Control, Dashboard, etc.).
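For completeness, the local tracking that does work looks roughly like this minimal sketch of an NSView subclass; it only fires while your own window has focus:

#import <Cocoa/Cocoa.h>

// Minimal sketch: handling gestures locally in an NSView subclass (10.6+).
// These methods are only called while your own window is active; they are not global.
@interface GestureView : NSView
@end

@implementation GestureView

- (void)swipeWithEvent:(NSEvent *)event {
    // deltaX/deltaY are -1, 0 or +1 for swipe events.
    NSLog(@"swipe: deltaX=%f deltaY=%f", [event deltaX], [event deltaY]);
}

- (void)magnifyWithEvent:(NSEvent *)event {
    NSLog(@"pinch: magnification=%f", [event magnification]);
}

- (void)rotateWithEvent:(NSEvent *)event {
    NSLog(@"rotate: rotation=%f", [event rotation]);
}

@end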

using kinect skeleton - no interest in wpf drawing

Good day,
I would like to take this opportunity to give my many thanks to the people of stackoverflow.com.
I have been new to coding and .NET over the past year, and I have always found Stack Overflow to be a fantastic base of knowledge for learning. I spent the last couple of weeks working, in depth, on a speech recognition project I am going to use with the upcoming release of Media Browser 3. Originally, the idea was to build a recognizer and have it control media. However, as I moved through the different namespaces for speech recognition, it led me into the realm of the Microsoft Kinect sensor. The more I use the Kinect device, the more I would like to use some of the skeleton tracking it has to offer. Which leads me to my question.
I am not interested in building a WPF application that displays a window of what the Kinect is seeing. This is part of a Forms application, in which I would like to support only two or three gestures.
The idea is for it to watch for three gestures and simulate a key press on the keyboard.
So first I enable the skeleton frame before the audio for the recognizer, because I had read on here somewhere that enabling the skeleton after the audio cancels the audio for some reason.
Then I add some event handlers to my form.
I added a SkeletonFrameReady event handler.
I suppose my main questions would be: am I on the right track with skeleton tracking? Is it possible to do this from a Forms application without trying to draw the skeleton?
Thank you again,
I hope I made sense, sorry for my ignorance.
Ben
It is possible, of course. For gesture recognition you can compare the positions of the joints in the handler for the SkeletonFrameReady event, which is called several times per second.
If you want to recognize complex gestures (like waving a hand), I suggest you take a look at this page http://blogs.msdn.com/b/mcsuksoldev/archive/2011/08/08/writing-a-gesture-service-with-the-kinect-for-windows-sdk.aspx and download the sample code there. (which is hidden in the last paragraph :)
The main idea is checking for predefined gesture segments in the correct order (if segment 1 succeeds, look at segment 2; if segment 2 is paused, check segment 2 again until it either succeeds or fails).
Hope this helps.

Is there a way to get push to scroll functionality in Windows 8 metro Apps?

In the Windows 8 Consumer Preview, moving the mouse towards the left or right edge in the start screen causes the content to scroll.
The standard controls (and the currently released preview apps) do not seem to support this.
Is there a way to make this work?
I asked this question at TechEd North America this year, after one of the sessions given by Paul Gusmorino, a lead program manager for the UI platform.
His answer was that no, apps can't do push-against-the-edge-to-scroll. WinJS and WinRT/XAML apps don't even get the events you would need to implement it yourself. Apps get events at the level of the mouse pointer, and once the mouse pointer hits the edge of the screen, it can't move any farther and you don't get any more events. (Well, it might wiggle up and down a little bit, but not if it hit a corner. At any rate, it's not good enough to scroll the way the Start screen does.)
He mentioned that, if you were writing a C++/DirectX app, you would be able to get the raw mouse input you needed to do this yourself -- you can get low-level "device moved by DX,DY" rather than the high-level "pointer moved to X,Y". I'm guessing this is how the Start screen does it, though I didn't think to ask.
But no, it's not built-in, it's not something you can implement yourself (unless you write your app in low-level C++/DirectX), and it sounds like they have no plans to add it before Windows 8 ships.
Personally, I think it's pretty short-sighted of them to have apps feel crippled compared to the Start screen, but evidently they're not concerned about little things like usability. </rant>
You can do the following to get information about mouse movement even when the pointer is at the screen edge, and use the delta information to scroll your content.
using Windows.Devices.Input;

var mouse = MouseDevice.GetForCurrentView();
mouse.MouseMoved += mouse_MouseMoved;

private void mouse_MouseMoved(MouseDevice sender, MouseEventArgs args)
{
    tb.Text = args.MouseDelta.X.ToString();
}

Keyboard shortcuts in iOS?

Is it possible to capture command-key sequences in 3rd party iPad/iPhone apps?
Long version:
On my excellent journey of discovery vis-a-vis my new iPad with its gleaming keyboard dock, I discovered, much to my joy, that when editing text in standard-issue text views, commands ranging from ⌘C/⌘V for copy and paste to ^A, ^B, ^E and friends for line and character jumping work.
So far so good, yeah? The problem is, this enthralling behaviour seems limited to text fields, and more specifically, standard-issue text fields. What I would really like is to capture events like these for my own use.
An issue I often find with a lot of apps is that they tend to be either close to useless, or at least cumbersome, without the keyboard dock (e.g. the iWork suite), or close to useless, or at least cumbersome, with the keyboard dock (most applications that don't rely heavily on text input but rather on touch gestures, which is to say most other applications, period).
Many games, for instance Civilization Revolution and similar, would benefit massively from the simple addition of the ability to use the arrow keys to move units and the enter key to end a turn.
So the question, then, as stated above: Is there a way to capture and respond to these events in order to offer an alternative to touch commands for those that desire this and have the hardware?
Disclaimer: I have no intention of developing applications that rely exclusively on keyboard input, of course, and nor should anyone else. The touch interface is paramount. It's just not always completely practical.
The only way (that I know of) to get input from the keyboard in iOS is using the UITextInput protocol. Unfortunately, the protocol doesn't give you the raw keys that were pressed; instead it sends you messages like "insert this string" and "move the caret to this position." So knowing that an arrow key was pressed would require you to do some digging.
As for shortcuts with modifier keys, like copy/paste or undo/redo, Apple only seems to support the basics and doesn't allow you to create custom ones. They use methods in UIResponder: -canPerformAction:withSender: and the undoManager property.
So if I were writing a game and wanted to take advantage of the keyboard, I would subclass UIResponder, have it conform to the UITextInput protocol, and then make it the first responder. This, however, will probably bring up the software keyboard if a physical one is not present.
My own disclaimer: I haven't done all the hard work to use UITextInput in a way it wasn't meant to be used, so I don't know how feasible it would be to actually get it working. And I don't really want to. Rather, let's all file bug reports to get Apple to create an API that allows us to get more precise input from the keyboard.
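To make the idea concrete, here is a minimal sketch of a lighter-weight variant: adopting UIKeyInput (the small protocol that UITextInput extends) in a view and making it first responder. As noted above, this is an off-label use, and the software keyboard will appear whenever no hardware keyboard is attached.

#import <UIKit/UIKit.h>

// Minimal sketch: a view adopting UIKeyInput to receive keystrokes from a
// hardware keyboard. You only get inserted text and delete through this path;
// arrow keys and modifier combinations are not delivered, as described above.
@interface KeyCatcherView : UIView <UIKeyInput>
@end

@implementation KeyCatcherView

- (BOOL)canBecomeFirstResponder { return YES; }

- (BOOL)hasText { return NO; }

- (void)insertText:(NSString *)text {
    NSLog(@"key input: %@", text); // e.g. @"a", or @"\n" for return
}

- (void)deleteBackward {
    NSLog(@"delete pressed");
}

@end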
In iOS 7, the UIResponder property keyCommands and the class UIKeyCommand were added to support shortcuts. Simply override keyCommands to return an array of UIKeyCommand and you should be good to go.
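A minimal sketch of that (the selectors moveLeft:, moveRight: and endTurn: are just hypothetical names for the game scenario from the question):

#import <UIKit/UIKit.h>

// Minimal sketch (iOS 7+): a view controller that declares hardware-keyboard
// shortcuts by overriding keyCommands.
@interface GameViewController : UIViewController
@end

@implementation GameViewController

- (BOOL)canBecomeFirstResponder {
    return YES; // key commands are only delivered along the responder chain
}

- (NSArray *)keyCommands {
    return @[
        [UIKeyCommand keyCommandWithInput:UIKeyInputLeftArrow
                            modifierFlags:0
                                   action:@selector(moveLeft:)],
        [UIKeyCommand keyCommandWithInput:UIKeyInputRightArrow
                            modifierFlags:0
                                   action:@selector(moveRight:)],
        [UIKeyCommand keyCommandWithInput:@"\r"
                            modifierFlags:0
                                   action:@selector(endTurn:)]
    ];
}

- (void)moveLeft:(UIKeyCommand *)sender  { /* move the selected unit left */ }
- (void)moveRight:(UIKeyCommand *)sender { /* move the selected unit right */ }
- (void)endTurn:(UIKeyCommand *)sender   { /* end the current turn */ }

@end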
Worth mentioning: Though the details are currently under NDA, Apple is adding support for keyboard shortcuts/events in iOS 7.
I suspect it will work similarly to how it works on Mac OS X, which is briefly described in this answer to a similar question.

Apple Magic Mouse Api

I just bought a Magic Mouse and I like it pretty much. But as a Mac developer it's even cooler. There's one problem, though: is there already an API available for it? I want to use it for one of my applications. For example, detecting the user's finger positions, swipe or stretch gestures, etc.
Does anyone know if there's an API for it (and how to use it)?
The Magic Mouse does not use the NSTouch API. I have been experimenting with it and attempting to capture touch information. I've had no luck so far. The only touch method that is common to both the mouse and the trackpad is the swipeWithEvent: method. It is called for a two finger swipe on the device only.
It seems the touch input from the mouse is being interpreted somewhere else, then forwarded on to the public API. I have yet to find the private API that is actually doing the work.
Take a look here: http://www.iphonesmartapps.org/aladino/?a=multitouch
there's a full working proof-of-concept using the CGEventPost method.
--
all the best!
I have not tested it, but I would be shocked if it didn't use NSTouch. NSTouch is the API you use to interact with the multi-touch trackpads on current MacBook Pros (and the new MacBooks that came out this week). You can check out the LightTable sample project to see how it is used.
It is part of AppKit, but it is a Snow Leopard only API.
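For reference, using NSTouch on a supported trackpad looks roughly like this minimal sketch (Snow Leopard and later):

#import <Cocoa/Cocoa.h>

// Minimal sketch of the NSTouch API in an NSView subclass on a multi-touch trackpad.
@interface TouchView : NSView
@end

@implementation TouchView

- (instancetype)initWithFrame:(NSRect)frame {
    if ((self = [super initWithFrame:frame])) {
        [self setAcceptsTouchEvents:YES]; // opt in to touch event delivery
    }
    return self;
}

- (void)touchesBeganWithEvent:(NSEvent *)event {
    NSSet *touches = [event touchesMatchingPhase:NSTouchPhaseBegan inView:self];
    for (NSTouch *touch in touches) {
        // normalizedPosition is in [0, 1] relative to the device surface
        NSLog(@"touch began at %@", NSStringFromPoint(touch.normalizedPosition));
    }
}

- (void)touchesMovedWithEvent:(NSEvent *)event {
    NSLog(@"%lu touches moving",
          (unsigned long)[[event touchesMatchingPhase:NSTouchPhaseMoved inView:self] count]);
}

@end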
I messed around with the app below before getting my Magic Mouse. I was surprised to find that it also tracked the multi-touch points on the mouse.
There is a link in the comments to some source that gets the raw data similarly, but there is no source to this actual app.
http://lericson.blogg.se/code/2009/november/multitouch-on-unibody-macbooks.html