Is it possible to capture command-key sequences in 3rd party iPad/iPhone apps?
Long version:
On my excellent journey of discovery vis-à-vis my new iPad with its gleaming keyboard dock, I discovered, much to my joy, that when editing text in standard-issue text views, commands ranging from ⌘C/⌘V for copy and paste to ^A, ^B, ^E and friends for line and character jumping all work.
So far so good, yeah? Problem is, this enthralling behaviour seems limited to text fields, and more specifically, standard issue text fields. What I would really like is to capture events like these for my own use.
An issue I often find with a lot of apps is that they tend to either be close to useless, or at least cumbersome, without the keyboard dock (e.g. the iWork Suite), or close to useless, or at least cumbersome, with the keyboard dock (most other applications that don't rely heavily on text input, but rather touch gestures [that is to say, most other applications period]).
Many games, Civilization Revolution for instance, would benefit massively from just the simple addition of the ability to use the arrow keys to move units and the Enter key to end the turn.
So the question, then, as stated above: Is there a way to capture and respond to these events in order to offer an alternative to touch commands for those that desire this and have the hardware?
Disclaimer: I have no intention of developing applications that rely exclusively on keyboard input, of course, nor should anyone else. The touch interface is paramount. It's just not always completely practical.
The only way (that I know of) to get input from the keyboard in iOS is through the UITextInput protocol. Unfortunately, the protocol doesn't give you the raw keys that were pressed; instead it sends you messages like "insert this string" and "move the caret to this position." So figuring out that an arrow key was pressed requires some digging.
As for shortcuts with modifier keys, like copy/paste or undo/redo, Apple only seems to support the basics and doesn't allow you to create custom ones. These are routed through UIResponder: the -canPerformAction:withSender: method and the undoManager property.
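To illustrate how those basics are wired up, here is a minimal sketch (the class name is hypothetical) of a responder that opts in to the standard copy and paste actions, which is what ⌘C/⌘V get routed to when a hardware keyboard is attached:

```objc
#import <UIKit/UIKit.h>

@interface KeyAwareView : UIView
@end

@implementation KeyAwareView

- (BOOL)canBecomeFirstResponder {
    return YES; // must be in the responder chain to receive the actions
}

- (BOOL)canPerformAction:(SEL)action withSender:(id)sender {
    // Advertise only the standard actions we actually implement.
    if (action == @selector(copy:) || action == @selector(paste:)) {
        return YES;
    }
    return [super canPerformAction:action withSender:sender];
}

- (void)copy:(id)sender {
    [UIPasteboard generalPasteboard].string = @"whatever is selected"; // placeholder
}

- (void)paste:(id)sender {
    NSLog(@"pasted: %@", [UIPasteboard generalPasteboard].string);
}

@end
```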
So if I were writing a game and wanted to take advantage of the keyboard, I would subclass UIResponder, have it conform to the UITextInput protocol, and then make it the first responder. This, however, will probably bring up the software keyboard if a physical one is not present.
My own disclaimer: I haven't done all the hard work to use UITextInput in a way it wasn't meant to be used, so I don't know how feasible it would be to actually get it working. And I don't really want to. Rather, let's all file bug reports to get Apple to create an API that allows us to get more precise input from the keyboard.
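For what it's worth, here is a minimal, untested sketch of that first-responder idea using UIKeyInput, the smaller protocol that UITextInput extends. It is enough to see character-producing keys, though as noted it will summon the software keyboard when no hardware one is attached:

```objc
#import <UIKit/UIKit.h>

// Hypothetical class: becomes first responder and listens for keystrokes
// via UIKeyInput instead of implementing all of UITextInput.
@interface KeyCatcherView : UIView <UIKeyInput>
@end

@implementation KeyCatcherView

- (BOOL)canBecomeFirstResponder {
    return YES;
}

#pragma mark - UIKeyInput

- (BOOL)hasText {
    return NO; // we display nothing; we only listen
}

- (void)insertText:(NSString *)text {
    // Every character-producing keypress lands here.
    if ([text isEqualToString:@"\n"]) {
        NSLog(@"return pressed, e.g. end turn");
    } else {
        NSLog(@"key: %@", text);
    }
}

- (void)deleteBackward {
    NSLog(@"delete pressed");
}

@end
```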
In iOS 7, the UIResponder property keyCommands and the class UIKeyCommand were added to support hardware-keyboard shortcuts. Simply override keyCommands to return an array of UIKeyCommand objects and you should be good to go.
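A sketch of what that override might look like in a game's view controller; the action methods and key choices here are illustrative, and the responder needs to be in the active responder chain:

```objc
- (NSArray *)keyCommands {
    return @[
        [UIKeyCommand keyCommandWithInput:UIKeyInputLeftArrow
                            modifierFlags:0
                                   action:@selector(moveUnitLeft:)],
        [UIKeyCommand keyCommandWithInput:@"\r"
                            modifierFlags:0
                                   action:@selector(endTurn:)],
        [UIKeyCommand keyCommandWithInput:@"s"
                            modifierFlags:UIKeyModifierCommand
                                   action:@selector(saveGame:)],
    ];
}

- (void)moveUnitLeft:(UIKeyCommand *)sender { /* arrow key: move selection */ }
- (void)endTurn:(UIKeyCommand *)sender      { /* return key: end the turn */ }
- (void)saveGame:(UIKeyCommand *)sender     { /* ⌘S */ }
```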
Worth mentioning: Though the details are currently under NDA, Apple is adding support for keyboard shortcuts/events in iOS 7.
I suspect it will work similarly to how it works on Mac OS X, which is briefly described in this answer to a similar question.
Related
I am trying to display multiple key combinations for a menu item in Cocoa. These are most commonly known as "chords".
For example I want to add a menu item that looks like:
"Action1 Control K, F" or "MenuItem2 K,L"
Would this be possible in Objective-C through the standard API? From what I've seen, the closest thing to this on macOS would be using custom views. Would that be the way to go for allowing this functionality?
The standard API does not support handling chords: it does not allow setting a chord as a key equivalent, and consequently it cannot display a chord as a key equivalent either.
If you need that functionality, you need to implement it entirely yourself. Just make your own NSView object and assign it to the view property of the NSMenuItem. As documented, you will then have to draw everything yourself:
A menu item with a view does not draw its title, state, font, or other standard drawing attributes, and assigns drawing responsibility entirely to the view. Keyboard equivalents and type-select continue to use the key equivalent and title as normal.
Source: https://developer.apple.com/documentation/appkit/nsmenuitem/1514835-view?language=objc
Whether this is a plain NSView filled with subviews, created programmatically or loaded from a NIB file, or a subclass of NSView that draws everything itself is up to you; all these variations will work. Usually it's easiest to use a NIB file, building the menu's look in Interface Builder with Auto Layout.
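A rough sketch of the drawing route, assuming a hypothetical ChordMenuItemView; highlighting, mouse tracking, and actually detecting the chord's second keystroke are all left out:

```objc
#import <Cocoa/Cocoa.h>

@interface ChordMenuItemView : NSView
@property (copy) NSString *title;
@property (copy) NSString *chord; // e.g. @"⌃K, F"
@end

@implementation ChordMenuItemView

- (void)drawRect:(NSRect)dirtyRect {
    // Title on the left, chord on the right, roughly where a standard
    // item would draw its single key equivalent.
    NSDictionary *attrs = @{ NSFontAttributeName : [NSFont menuFontOfSize:0] };
    [self.title drawAtPoint:NSMakePoint(20, 3) withAttributes:attrs];
    NSSize chordSize = [self.chord sizeWithAttributes:attrs];
    [self.chord drawAtPoint:NSMakePoint(NSMaxX(self.bounds) - chordSize.width - 10, 3)
             withAttributes:attrs];
}

@end

// Usage (sketch):
//   NSMenuItem *item = [[NSMenuItem alloc] initWithTitle:@"" action:@selector(act:) keyEquivalent:@""];
//   ChordMenuItemView *v = [[ChordMenuItemView alloc] initWithFrame:NSMakeRect(0, 0, 220, 19)];
//   v.title = @"Action1"; v.chord = @"⌃K, F";
//   item.view = v;
```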
Yet keep in mind that this breaks Apple's Human Interface Guidelines. It violates users' expectations, since none of their other apps offer anything comparable: in macOS a menu item has one key equivalent or it has none. It also breaks users' ability to customize the key equivalent the way they are used to doing for all other applications (System Preferences > Keyboard > Shortcuts > App Shortcuts).
Generally you should not replace standard system UI with your own unless you have a very good reason for doing so, as that always breaks user expectations, certain system functionality won't work as expected (e.g. accessibility features), and it destroys the uniform look and feel of the system. It also forfeits behavior the system provides automatically, as you can see in macOS 10.14 (Mojave), where all standard UI supports Dark Mode out of the box: if you used only standard UI, your app supports Dark Mode without any modification, yet all custom UI needs to be adapted for Dark Mode by hand.
Yes, you'll need to use a custom view. NSMenuItem only displays the first character of its keyEquivalent.
I am learning Cocoa, and I am creating an application that will require a layout similar to the screenshot below (this seems like a very common layout approach).
What kind of controls/architecture would this type of Cocoa application be?
I'm still in my early stages of learning/reading, and so far I only know of document-based applications, but this layout doesn't look like a document-based app since it doesn't really require multiple open windows.
If it isn't document-based, is there a name for other design patterns or layouts?
From what I know so far, I would describe it like this:
I would be grateful if someone could give me a detailed overview of the high level design for an app like this i.e. things like: # of panels, views used, controls, controllers etc?
Also, a few quick sub-questions:
What kind of menu controls are those in the left pane, which expand and display sub-elements?
When the preferences window is displayed, what is the effect called where it appears in an animated way (like Address Book does), starting as a small window that expands to its full size?
You are right that this is probably not a document based application, as they open documents in new windows by default.
To lay out the window like that, there’d be an NSSplitView containing the three panes. Each pane may optionally contain a view loaded from an NSViewController, which can help keep the code modularised; whether that’s appropriate depends on what you’re trying to do.
The left pane would be an NSOutlineView (an NSTableView subclass) and the middle an NSTableView, but I’m not sure exactly how the right-hand side view would be created (lots of custom NSViews and other things, possibly a WebView).
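Sketched programmatically for concreteness (all names illustrative, and an existing window is assumed; in practice you would likely build this in IB):

```objc
// A horizontal split view holding the three panes described above.
NSSplitView *splitView = [[NSSplitView alloc] initWithFrame:[window.contentView bounds]];
[splitView setVertical:YES]; // panes arranged side by side
[splitView setAutoresizingMask:NSViewWidthSizable | NSViewHeightSizable];

NSScrollView *sourcePane = [[NSScrollView alloc] init];
[sourcePane setDocumentView:[[NSOutlineView alloc] init]]; // left: source list

NSScrollView *listPane = [[NSScrollView alloc] init];
[listPane setDocumentView:[[NSTableView alloc] init]];     // middle: item list

NSView *detailPane = [[NSView alloc] init];                // right: custom detail view

[splitView addSubview:sourcePane];
[splitView addSubview:listPane];
[splitView addSubview:detailPane];
[window.contentView addSubview:splitView];
```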
That popover options window is possibly an NSPopover (which contains an NSViewController), but that’s only available on OS X 10.7 and later, so it may also be totally custom for backwards compatibility and easier customisation.
Also note this is a fairly complicated example you’ve given, with lots of custom controls that are probably harder to create than they look:
Getting the outline views on the left to show unread counts and icons is (from memory) not built into AppKit, so that was all custom-created. To do things like that, you’ll need a solid understanding of NSCell vs NSView, and ideally also know about Core Animation layer-backed views and what to use for different aspects.
The window has a taller-than-usual title bar. This means the developer probably had to do some crazy stuff to get it to work, if not create the whole window from scratch.
That’s just the start. There’s lots of really nice design in there that’s custom and done from scratch.
Designing Mac apps can be hard sometimes. AppKit is pretty old (dating back to the NeXT days) and has lots of legacy stuck in it. UIKit on iOS, on the other hand, is quite nice – Apple clearly learned from the past and made things much better.
I’ve hardly touched on the controllers and model behind all that. There are lots of different ways you could do it. For persistence, you could use Core Data, SQLite, or NSKeyedArchiver, just to name a few. Brent Simmons (past developer of another RSS reader, NetNewsWire) wrote some interesting blog posts about that:
http://inessential.com/2010/02/26/on_switching_away_from_core_data
http://inessential.com/2011/09/22/core_data_revisited
The way you design your model & controllers really depends on the specific problem. Cocoa really forces you to stick to MVC though – if you don’t, things are guaranteed to end up messy.
I hope that all helps! I’m really only just learning myself too.
Apple refers to this type of application design as Single-window, library- (or “shoebox”) style and gives a number of recommendations for this design choice in the docs.
(see Mac App Programming Guide)
I'm in the midst of porting a Win32 app to Cocoa. Wherever possible, I'm using IB, since... well, it's way easier in every way possible, obviously. One catch: the designer and the Win32 dev set up all the button assets on a massive "sprite sheet", such that you move around the viewport to determine button state. Similar to how Yahoo does CSS sprites on their home page (http://d.yimg.com/a/i/ww/met/pa_icons/20100309/spr_apps_us.png).
Can IB be set up to handle this kind of sprite strip with the default buttons, or are we SOL on this one? I can certainly fire something up programmatically that would do this, but I would like to incorporate as much of the default button behavior and selector hookup in IB as possible.
Thoughts?
Josh
This isn't supported in IB because it is really not the Cocoa way of setting button images. I understand why you would use sprites in CSS, but in a native program (on any platform) it seems really unnecessary and inefficient.
I honestly think it would be much less work for you to forget about using the sprites. Out of curiosity, are these buttons going to be for standard user interactions, or something more along the lines of buttons for a game? If it is for standard user interactions (open file, change font, etc.) then I strongly recommend using the stock buttons as much as possible, although I understand that this might be out of your control. The reason is that the worst-ported apps are usually the ones that try to keep visual fidelity with their Windows counterparts.
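That said, if the sprite sheet is non-negotiable, slicing it into per-state images at load time is simple enough that you can still hook up plain NSButtons in IB and hand them images in code. A rough sketch (asset and button names hypothetical):

```objc
NSImage *sheet = [NSImage imageNamed:@"button_sprites"]; // the big sheet

// Copy one region of the sheet into a standalone image.
NSImage *(^slice)(NSRect) = ^NSImage *(NSRect region) {
    NSImage *result = [[NSImage alloc] initWithSize:region.size];
    [result lockFocus];
    [sheet drawInRect:NSMakeRect(0, 0, region.size.width, region.size.height)
             fromRect:region
            operation:NSCompositeSourceOver
             fraction:1.0];
    [result unlockFocus];
    return result;
};

// e.g. a 100x30 button whose normal and pressed states are stacked vertically
[myButton setImage:slice(NSMakeRect(0, 30, 100, 30))];
[myButton setAlternateImage:slice(NSMakeRect(0, 0, 100, 30))];
```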
I recently purchased a Magic Mouse. It is fantastic and full of potential. Unfortunately, it is seriously hindered by the software support. I want to fix that. I have done quite a lot of research and these are my findings regarding the event chain thus far:
The Magic Mouse sends full multitouch events to the system.
Multitouch events are processed in the MultitouchSupport.framework (Carbon)
The events are interpreted in the framework and sent up to the system as normal events
When you scroll with one finger it sends actual scroll wheel events.
When you swipe with two fingers it sends a swipe event.
No NSTouch events are sent up to the system. You cannot use the NSTouch API to interact with the mouse.
After I discovered all of the above, I disassembled the MultitouchSupport.framework binary and, with some googling, figured out how to insert a callback of my own into the chain so I would receive the raw touch event data. If you enumerate the list of devices, you can attach a callback to each device (trackpad and mouse). This finding would enable us to create a framework for using multitouch on the mouse, but only in a single application. See my post here: Raw Multitouch Tracking.
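For reference, the attach-a-callback step looks roughly like this. Everything here is reconstructed from public reverse-engineering write-ups of the private MultitouchSupport.framework, so treat every name and signature as an unsupported assumption that may break between OS releases:

```objc
#import <CoreFoundation/CoreFoundation.h>

// Private-framework declarations (reconstructed; not in any public header).
typedef void *MTDeviceRef;
typedef int (*MTContactCallbackFunction)(int device, void *touches, int numTouches,
                                         double timestamp, int frame);
CFArrayRef MTDeviceCreateList(void);
void MTRegisterContactFrameCallback(MTDeviceRef device, MTContactCallbackFunction callback);
void MTDeviceStart(MTDeviceRef device, int unknown);

static int touchCallback(int device, void *touches, int numTouches,
                         double timestamp, int frame) {
    printf("device %d: %d touches at %f\n", device, numTouches, timestamp);
    return 0;
}

static void attachToAllMultitouchDevices(void) {
    CFArrayRef devices = MTDeviceCreateList(); // trackpad and Magic Mouse
    for (CFIndex i = 0; i < CFArrayGetCount(devices); i++) {
        MTDeviceRef device = (MTDeviceRef)CFArrayGetValueAtIndex(devices, i);
        MTRegisterContactFrameCallback(device, touchCallback);
        MTDeviceStart(device, 0);
    }
}
```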
I want to add new functionality to the mouse across the entire system, not just a single app.
In an attempt to do so, I figured out how to use Event Taps to see if the lowest level event tap would allow me to get the raw data, interpret it, and send up my own events in its place. Unfortunately, this is not the case. The event tap, even at the HID level, is still a step above where the input is being interpreted in MultitouchSupport.framework.
See my event tap attempt here: Event Tap - Attempt Raw Multitouch.
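For context, the tap itself looked something like this. This is the documented Quartz Event Services API, and even at kCGHIDEventTap it only ever sees the already-interpreted scroll/swipe events:

```objc
#import <ApplicationServices/ApplicationServices.h>

static CGEventRef tapCallback(CGEventTapProxy proxy, CGEventType type,
                              CGEventRef event, void *refcon) {
    if (type == kCGEventScrollWheel) {
        int64_t delta = CGEventGetIntegerValueField(event, kCGScrollWheelEventDeltaAxis1);
        printf("scroll delta: %lld\n", (long long)delta);
    }
    return event; // pass everything through unmodified
}

int main(void) {
    CFMachPortRef tap = CGEventTapCreate(kCGHIDEventTap,       // lowest public level
                                         kCGHeadInsertEventTap,
                                         kCGEventTapOptionDefault,
                                         CGEventMaskBit(kCGEventScrollWheel),
                                         tapCallback, NULL);
    CFRunLoopSourceRef source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes);
    CGEventTapEnable(tap, true);
    CFRunLoopRun();
    return 0;
}
```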
An interesting side note: when a multitouch event is received, such as a swipe, the default case is hit and prints out an event number of 29. The header shows 28 as being the max.
On to my question, now that you have all the information and have seen what I have tried: what would be the best approach to extending the functionality of the Magic Mouse? I know I need to insert something at a low enough level to get the input before it is processed and predefined events are dispatched. So, to boil it down to single sentence questions:
Is there some way to override the default callbacks used in MultitouchSupport.framework?
Do I need to write a kext and handle all the incoming data myself?
Is it possible to write a kext that sits on top of the kext that is handling the input now, and filters it after that kext has done all the hard work?
My first goal is to be able to dispatch a middle button click event if there are two fingers on the device when you click. Obviously there is far, far more that could be done, but this seems like a good thing to shoot for, for now.
Thanks in advance!
-Sastira
How does what happens in MultitouchSupport.framework differ between the Magic Mouse and a glass trackpad? If it is based on IOKit device properties, I suspect you will need a kext that emulates a trackpad but actually communicates with the mouse. Apple has some documentation on Darwin kernel programming and kernel extensions specifically:
About Kernel Extensions
Introduction to I/O Kit Device Driver Design Guidelines
Kernel Programming Guide
(Personally, I'd love something that enabled pinch magnification and more swipe/button gestures; as it is, the Magic Mouse is a functional downgrade from the Mighty Mouse's four buttons and [albeit ever-clogging] 2D scroll wheel. Update: last year I wrote Sesamouse to do just that, and it does NOT need a kext (just a week or two staring at hex dumps :-) See my other answer for the deets and source code.)
Sorry I forgot to update this answer, but I ended up figuring out how to inject multitouch and gesture events into the system from userland via Quartz Event Services. I'm not sure how well it survived the Lion update, but you can check out the underlying source code at https://github.com/calftrail/Touch
It requires two hacks: using the private Multitouch framework to get the device input, and injecting undocumented CGEvent structures into Quartz Event Services. It was incredibly fun to figure out how to pull it off, but these days I recommend just buying a Magic Trackpad :-P
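The documented half of that, posting a synthetic event into the system, looks like the sketch below; the gesture injection in Touch goes further by filling in undocumented CGEvent fields. This targets the middle-click goal mentioned earlier in the thread:

```objc
#import <ApplicationServices/ApplicationServices.h>

// Post a synthetic middle-button click at a given screen location.
static void postMiddleClick(CGPoint where) {
    CGEventRef down = CGEventCreateMouseEvent(NULL, kCGEventOtherMouseDown,
                                              where, kCGMouseButtonCenter);
    CGEventRef up   = CGEventCreateMouseEvent(NULL, kCGEventOtherMouseUp,
                                              where, kCGMouseButtonCenter);
    CGEventPost(kCGHIDEventTap, down);
    CGEventPost(kCGHIDEventTap, up);
    CFRelease(down);
    CFRelease(up);
}
```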
I've implemented a proof-of-concept of userspace customizable multi-touch events wrapper.
You can read about it here: http://aladino.dmi.unict.it/?a=multitouch (archived in the Wayback Machine)
--
all the best
If you get to that point, you may want to consider making the middle click three fingers on the mouse instead of two. I've thought about this middle-click issue with the Magic Mouse, and I notice that I often leave my second finger on the mouse even though I am only pressing for a left click. So a "two-finger" click might be mistaken for a single left click, and it would also require more effort from the user in always having to keep the second finger off the mouse. Therefore, if it's possible to detect, three fingers would cause less confusion and fewer headaches. I wonder where the first "middle button click" solution will come from, as I am anxious for my middle-click Exposé feature to return :) Best of luck.
I just bought a Magic Mouse and I like it pretty much. And as a Mac developer it's even cooler. But there's one problem: is there already an API available for it? I want to use it in one of my applications, for example to detect the user's finger positions, swipe or stretch gestures, etc...
Does anyone know if there's an API for it (and how to use it)?
The Magic Mouse does not use the NSTouch API. I have been experimenting with it and attempting to capture touch information, with no luck so far. The only touch method that is common to both the mouse and the trackpad is swipeWithEvent:, which is called only for a two-finger swipe on the device.
It seems the touch input from the mouse is being interpreted somewhere else, then forwarded on to the public API. I have yet to find the private API that is actually doing the work.
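For completeness, receiving that one public gesture is just an override on any NSResponder in the chain, e.g.:

```objc
// In an NSView subclass; called for a two-finger swipe on the Magic Mouse.
- (void)swipeWithEvent:(NSEvent *)event {
    // deltaX/deltaY are nonzero on the swipe axis; verify the sign
    // convention empirically for your OS version.
    if ([event deltaX] != 0) {
        NSLog(@"horizontal swipe: %g", [event deltaX]);
    } else if ([event deltaY] != 0) {
        NSLog(@"vertical swipe: %g", [event deltaY]);
    }
}
```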
Have a look here: http://www.iphonesmartapps.org/aladino/?a=multitouch
There's a full working proof-of-concept using the CGEventPost method.
--
all the best!
I have not tested it, but I would be shocked if it didn't use NSTouch. NSTouch is the API you use to interact with the multi-touch trackpads on current MacBook Pros (and the new MacBooks that came out this week). You can check out the LightTable sample project to see how it is used.
It is part of AppKit, but it is a Snow Leopard only API.
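For reference, this is how NSTouch is used on the trackpads; per the findings earlier in the thread, these callbacks apparently never fire for the Magic Mouse:

```objc
// In an NSView subclass (Snow Leopard API).
- (void)awakeFromNib {
    [self setAcceptsTouchEvents:YES]; // opt in to raw touch delivery
}

- (void)touchesBeganWithEvent:(NSEvent *)event {
    NSSet *touches = [event touchesMatchingPhase:NSTouchPhaseBegan inView:self];
    for (NSTouch *touch in touches) {
        // normalizedPosition is in [0,1] x [0,1] across the device surface.
        NSLog(@"touch began at %@", NSStringFromPoint(touch.normalizedPosition));
    }
}
```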
I messed around with the app below before getting my Magic Mouse. I was surprised to find that it also tracked the multitouch points on the mouse.
There is a link in the comments to some source code that reads the raw data similarly, but there is no source for the actual app itself.
http://lericson.blogg.se/code/2009/november/multitouch-on-unibody-macbooks.html