Catch key events in an NSWindow - Objective-C

I'm trying to build a very basic web browser with 3 WebViews (2 hidden and 1 visible at all times).
I'd like to switch between these 3 webviews by pressing CMD+1, CMD+2, CMD+3.
I have created a basic Cocoa app, added the 3 WebViews, referenced the WebKit framework, and I'm up and running; this part is working.
Now I wonder:
1) How do I catch key events? It seems so overly complicated that browsing the event-handling docs gave me a headache.
[rant]Coming from someone who has done lots of Windows Forms, GTK, Qt and Java/C#/C++ work, it seems that Xcode gets worse with every release by moving everything around, creating 3 different ways of achieving the same thing, etc. Every time I have to use it, it's like I have to learn everything all over again.[/rant]
2) How do I specifically catch CMD+number keys?
This is just for a quick productivity app I'm building to use in conjunction with JIRA (project management).
I'd appreciate if somebody could point me in the good direction.
Every time I stumble upon a good tutorial, it's either outdated or written for iOS, which most of the time no longer uses the same APIs as OS X.
Sorry about the rant, and thanks for your help!

What you are looking to override is the NSResponder method keyDown:. I would recommend subclassing WebView and creating your own keyDown: method (make certain to call [super keyDown:theEvent] somewhere in your implementation, though).
Now, within your keyDown: implementation, to look for the Command key: NSEvent objects respond to the modifierFlags method, and one of those flags is NSCommandKeyMask.
E.G.:
NSUInteger flags = [theEvent modifierFlags];
if (flags & NSCommandKeyMask) {
    // Got it! (test the Command bit rather than comparing for equality,
    // since other modifier bits may also be set)
}
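Putting it together, here's a minimal sketch of such a WebView subclass handling CMD+1/2/3; the WebViewSwitching protocol and switchToWebViewAtIndex: method are assumptions standing in for however you wire up the three web views:

#import <Cocoa/Cocoa.h>
#import <WebKit/WebKit.h>

// Hypothetical protocol: implemented by whatever object owns the three web views.
@protocol WebViewSwitching <NSObject>
- (void)switchToWebViewAtIndex:(NSUInteger)index;
@end

@interface MyWebView : WebView
@end

@implementation MyWebView

- (void)keyDown:(NSEvent *)theEvent
{
    NSUInteger flags = [theEvent modifierFlags];
    NSString *keys = [theEvent charactersIgnoringModifiers];

    if ((flags & NSCommandKeyMask) && [keys length] == 1) {
        unichar c = [keys characterAtIndex:0];
        if (c >= '1' && c <= '3') {
            // CMD+1, CMD+2 or CMD+3: hand off to the controller and swallow the event.
            id<WebViewSwitching> controller = (id<WebViewSwitching>)[[self window] delegate];
            [controller switchToWebViewAtIndex:(NSUInteger)(c - '1')];
            return;
        }
    }
    [super keyDown:theEvent]; // let WebView handle everything else
}

@end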

Don't worry about what kind of views they are.
Put each one inside a tab of an NSTabView, with or without tab control displayed.
With tab control, set the desired key combo per tab in IB.
With or without the tab control, create a menu item in a menu in the menu bar for each tab, with the same key combos. This has long been the recommended approach: Apple recommends adding a menu bar item for everything that has a keyboard shortcut, and the menu bar will receive the keys before anything else.
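For the menu-bar route, a rough sketch of adding such items in code (run it once the main menu exists, e.g. in applicationDidFinishLaunching:; the selectTab: action and the tag-based dispatch are assumptions):

NSMenu *viewMenu = [[NSMenu alloc] initWithTitle:@"View"];

for (NSUInteger i = 0; i < 3; i++) {
    NSString *title = [NSString stringWithFormat:@"Web View %lu", (unsigned long)(i + 1)];
    NSMenuItem *item = [[NSMenuItem alloc] initWithTitle:title
                                                  action:@selector(selectTab:)
                                           keyEquivalent:[NSString stringWithFormat:@"%lu", (unsigned long)(i + 1)]];
    item.keyEquivalentModifierMask = NSCommandKeyMask; // CMD+1, CMD+2, CMD+3
    item.tag = i;                                      // tells selectTab: which tab to show
    [viewMenu addItem:item];
}

NSMenuItem *viewMenuItem = [[NSMenuItem alloc] initWithTitle:@"View" action:NULL keyEquivalent:@""];
viewMenuItem.submenu = viewMenu;
[[NSApp mainMenu] addItem:viewMenuItem];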

Related

NSApplication mainMenu returns nil

The problem I'm having is that I cannot add a menu to my app programmatically!
Here's where I'm at:
In the app delegate, in
applicationDidFinishLaunching:
create a window and make it key and order it front.
EDIT: (if I log [NSApplication sharedApplication].mainMenu here, it prints (null)) anyway...
create an NSMenu object and call [[NSApplication sharedApplication] setMainMenu:myMenu]
(I also tried [[NSApplication sharedApplication] setMenu:myMenu])
build/run
The menu is not there!
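A minimal sketch of the steps above (assuming ARC and a window property on the app delegate; nothing here goes beyond what the question describes):

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
    // Create a window and bring it to the front.
    self.window = [[NSWindow alloc] initWithContentRect:NSMakeRect(0, 0, 480, 320)
                                              styleMask:(NSTitledWindowMask | NSClosableWindowMask)
                                                backing:NSBackingStoreBuffered
                                                  defer:NO];
    [self.window makeKeyAndOrderFront:nil];

    // With the nib's menu removed, this logs (null).
    NSLog(@"mainMenu = %@", [NSApplication sharedApplication].mainMenu);

    // Create an NSMenu and try to install it as the main menu.
    NSMenu *myMenu = [[NSMenu alloc] initWithTitle:@"MainMenu"];
    [[NSApplication sharedApplication] setMainMenu:myMenu];
    // ...but the menu bar still only shows the app name with no submenus.
}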
EDIT2:
(in case it's still not clear:)
Make an OS X app, remove the menu object from the nib, and run; you'll still see a menu bar up there with the name of your app. You click it, it turns blue, but there are no submenus. Now how do I get a pointer to that?
You won't be able to do this, as the OS X menus conform to the Aqua layout. Is there any reason why you'd remove it completely?
It's probably going to be a nightmare for a few reasons:
1) In the standard 'Aqua menu' you have menus like 'Services' which are handled by the system and not by the application.
2) Apple is specific about its design guidelines, menus aren't mentioned, and I'd highly doubt Apple would like you changing the Aqua layout.
I remember once upon a time coming across a discussion which mentioned setAppleMenu etc., but that was back in the Tiger days (I think).
Edit: you won't be able to get a 'pointer' to it using documented APIs; it's system-driven, not application-driven, i.e. it complies with Aqua.
Personally, I'd remove all of the menu items which can be changed in a sandboxed app, i.e. in user land, and add/remove the various menu items yourself.
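A rough sketch of that last suggestion: tweak the nib-provided main menu at runtime rather than replacing it (the menu titles and the doSomething: action are assumptions based on the default template):

NSMenu *mainMenu = [NSApp mainMenu];

// Remove a top-level menu you don't want (e.g. "Format" from the template).
NSInteger formatIndex = [mainMenu indexOfItemWithTitle:@"Format"];
if (formatIndex != -1) {
    [mainMenu removeItemAtIndex:formatIndex];
}

// Add your own item to an existing menu.
NSMenu *fileMenu = [[mainMenu itemWithTitle:@"File"] submenu];
NSMenuItem *custom = [[NSMenuItem alloc] initWithTitle:@"Do Something"
                                                action:@selector(doSomething:)
                                         keyEquivalent:@"d"];
[fileMenu addItem:custom];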

UIWindow makeKeyWindow: versus makeKeyAndVisible:

My app is using a 2nd UIWindow to show a special screen when iOS wants to take a screenshot of the app.
By accident I used [UIWindow makeKeyWindow] on my main window when I wanted to remove the 2nd window again. This really should be makeKeyAndVisible instead, but I'm wondering why it worked at all.
I mean: most of the time (99%), my 2nd window was removed as expected and my main window became visible.
I'm asking because I'm wondering if I have really found the problem or if there might still be something else?
Or could it be that the method was incorrectly bound in (previous) MonoTouch versions?
Each method maps to the selector of the same name: makeKeyAndVisible and makeKeyWindow.
Or could it be that the method was incorrectly bound in (previous) MonoTouch versions?
Git history shows (me ;-) that neither has changed since it was first added (more than two years ago).
Documentation about the former states:
You can also hide and reveal a window using the inherited hidden property of UIView.
Maybe this happens in your code (or even within the iOS code).
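In Objective-C terms, a rough sketch of the two paths (the window property names here are hypothetical):

// Showing the special screen:
self.privacyWindow.hidden = NO;
[self.privacyWindow makeKeyWindow];   // key status only; visibility is governed by hidden

// Removing it again:
self.privacyWindow.hidden = YES;      // hide the 2nd window
[self.mainWindow makeKeyAndVisible];  // make the main window key *and* visible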

Popup window similar to Xcode's code completion?

I would like to build something similar to the code-completion feature in Xcode 4. (The visual style and behavior, not the data structure type work required for code completion).
As the user is typing, a pop-up window presents other word choices that can be selected.
The feature in action: [screenshot omitted].
I'm not exactly sure where to start. I am mainly concerned with the visual appearance of the window and how I should populate the list with a given set of words. Later I will get into making the window follow the cursor around the screen, etc.
I am mainly looking for an overview of how to display such data in a "window", and how to customize its appearance so it looks like a nice little informational popup rather than a full-on OS X window.
Just add a subview to your current view that happens to be a table view. Programmatically make it visible on an event (such as mouseDown), and adjust its position based on where you want it. You will need to implement the proper delegate/datasource methods, but it should be pretty straightforward. You will also need a source for the words you want to use in your autocomplete; put them in an array or something for your table view datasource to go through.
Like I said, it's not terribly hard, provided you are comfortable using table views and adding views to your existing view. If this doesn't explain enough, leave a comment and I can flesh this answer out more.
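A minimal sketch of that setup, assuming it lives in a view controller that implements the table view datasource/delegate methods (all the names here are placeholders):

NSScrollView *completionScroll =
    [[NSScrollView alloc] initWithFrame:NSMakeRect(0, 0, 200, 120)];
NSTableView *completionTable =
    [[NSTableView alloc] initWithFrame:[completionScroll bounds]];

NSTableColumn *column = [[NSTableColumn alloc] initWithIdentifier:@"word"];
[completionTable addTableColumn:column];
[completionTable setHeaderView:nil];     // no header makes it look more like a popup
[completionTable setDataSource:self];    // self supplies the word list
[completionTable setDelegate:self];

[completionScroll setDocumentView:completionTable];
[completionScroll setHidden:YES];        // hidden until the user starts typing
[self.view addSubview:completionScroll];

// Later, on a key event:
[completionScroll setHidden:NO];
[completionScroll setFrameOrigin:NSMakePoint(80, 200)];  // position near the insertion point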
Add a (completions) subview to your view and set its hidden property to YES so it starts out invisible. Create a separate AutoComplete object that includes the subview as a property and fills it with potential completions. Your controller can respond to key-typed events and give the last word (the substring from the end of the text back to the first preceding blank) to the AutoComplete on each event. The basic logic in AutoComplete could be something like:
Given an AutoComplete with a list of known words "dog, spaghetti, minute, horse, spare, speed"
When asked to complete a fragment "sp"
Then the following words should be offered as potential completions, "spaghetti, spare, speed"
This implies that you need to instantiate it with a list of words (this could be done in the init of your controller) and create a method -(NSArray*) completeFragment:(NSString*)fragment;. You could drop this into an OCUnit test case and iterate on your implementation of AutoComplete until you get it right. You would then write a method that takes the completions from AutoComplete and lists them in the subview. Even better, you might create currentWord and potentialCompletions properties on AutoComplete that get updated as you send it newFragment:(NSString*)fragment; messages. Throw that into OCUnit and work it out, then use the potentialCompletions property to drive updates of the subview (which is probably best modeled as a custom table view).
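A minimal sketch of that AutoComplete object; the class and method names follow the answer, while the prefix-matching implementation is an assumption:

@interface AutoComplete : NSObject
- (instancetype)initWithWords:(NSArray *)words;
- (NSArray *)completeFragment:(NSString *)fragment;
@end

@implementation AutoComplete {
    NSArray *_words;
}

- (instancetype)initWithWords:(NSArray *)words
{
    if ((self = [super init])) {
        _words = [words copy];
    }
    return self;
}

- (NSArray *)completeFragment:(NSString *)fragment
{
    if ([fragment length] == 0) return @[];
    // Case- and diacritic-insensitive prefix match.
    NSPredicate *startsWith =
        [NSPredicate predicateWithFormat:@"SELF BEGINSWITH[cd] %@", fragment];
    return [_words filteredArrayUsingPredicate:startsWith];
}

@end

// Mirroring the Given/When/Then example:
// AutoComplete *ac = [[AutoComplete alloc] initWithWords:
//     @[@"dog", @"spaghetti", @"minute", @"horse", @"spare", @"speed"]];
// [ac completeFragment:@"sp"];   // -> spaghetti, spare, speed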

Why does the execution order of touchesBegan, target-action and touchesEnded change with fast touches of UIButton?

UPDATE: With the blush of shame I discovered that the order had nothing to do with the speed of tapping. I was calling the visual code before the super touchesEnded:withEvent: call, which is why, if you tapped really fast, the display never got a chance to draw the highlighted state before it was dismissed again. Because the code being run was blocking the main thread for just a few milliseconds, the highlighted state would normally stay visible until the main thread unblocked again, whereas if you tapped really fast it looked like nothing had happened at all. Moving the super call up to the top of the overridden method fixed it all. Sorry; if any moderator sees this post, it can be deleted. shame
This problem must have been asked a thousand times on SO, yet I can't find an explanation that matches my specific issue.
I have a UIButton subclass with a custom design. Of course, the design is custom enough that I can't just use the regular setSomething:forControlState: methods. I need a different background color on touch, for one, and some icons that need to flash.
To implement these view changes, I (counter-intuitively) put the display code in (A) touchesBegan:withEvent: and (Z) touchesEnded:withEvent:, before calling their respective super methods. It feels weird, but it works as intended, or so it seemed at first.
After using addTarget:action:forControlEvents: to bind UIControlEventTouchUpInside to the method (X) itemTapped:, I would expect these methods to always fire in the order (A)(X)(Z). However, if you tap the screen really fast (or click the mouse in the simulator), they fire in the order (A)(Z)(X), where (A) and (Z) follow each other in such rapid succession that the visual feedback for tapping is invisible. This is unwanted behavior. It also can't be the way to go, since so many apps need similar behavior, right?
So my question to you is: what am I doing wrong? One thing I'm guessing is that the visual appearance of the buttons shouldn't be manipulated in touchesBegan:withEvent: and touchesEnded:withEvent:, but then where? Or am I missing some other well-known fact?
Thanks for the nudge,
Eric-Paul.
I don't know why the order is different, but here are 2 suggestions to help deal with it.
What visual changes are you making to the button? If it's things like changing the title/image/background image, you can do all of this by modifying the highlighted state of the button. You can set properties like the title and background image per state. When the user's finger is down on the button, the highlighted state is turned on, so any changes you make to this state will be visible at that time. Do note that if you're making use of the selected state on the button, you'll also need to set up the visual appearance for UIControlStateHighlighted|UIControlStateSelected, otherwise it will default back to inheriting from normal when both highlighted & selected are on.
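For example, a sketch of the per-state setup (the image names are placeholders):

UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
[button setTitle:@"Tap me" forState:UIControlStateNormal];
[button setBackgroundImage:[UIImage imageNamed:@"button-normal"]
                  forState:UIControlStateNormal];
[button setBackgroundImage:[UIImage imageNamed:@"button-pressed"]
                  forState:UIControlStateHighlighted];
// Cover the combined state too if you also use 'selected',
// otherwise it falls back to the normal appearance:
[button setBackgroundImage:[UIImage imageNamed:@"button-pressed-selected"]
                  forState:(UIControlStateHighlighted | UIControlStateSelected)];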
The other suggestion is to ditch touchesBegan:withEvent: and touchesEnded:withEvent: and switch over to the methods inherited from UIControl, namely beginTrackingWithTouch:withEvent: and endTrackingWithTouch:withEvent:. You may also want to implement continueTrackingWithTouch:withEvent: and use the touchInside property to turn off your visual tweaks if the touch leaves the control.
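A rough sketch of what those overrides could look like in the UIButton subclass; applyPressedLook: is a hypothetical helper for the custom drawing:

- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    [self applyPressedLook:YES];                 // show the custom "down" state
    return [super beginTrackingWithTouch:touch withEvent:event];
}

- (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    [self applyPressedLook:self.touchInside];    // drop the effect if the finger leaves the control
    return [super continueTrackingWithTouch:touch withEvent:event];
}

- (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    [self applyPressedLook:NO];                  // restore the normal look
    [super endTrackingWithTouch:touch withEvent:event];
}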

OOP: Designing a menu system

I am currently trying to create a menu system for a game and cannot arrive at any really sound way to do it. There are several menu screens, each of them non-trivial, so that I would like to keep these as separate classes. The main problem I am having is passing control between these menu screens.
I tried building each of the screens as a singleton and calling one screen from the other directly, i.e. something like [[MainMenu instance] display] in Objective-C. This is a bit messy, because (1) I have to write the singleton boilerplate code for each of the menu screens and (2) the classes become dependent on each other; sometimes I have to code around circular dependencies, etc.
I thought about making the classes fully static to get around the instance management (which is a bit extra in this case, since there really is just one instance of each screen). But this also looks quite ugly, especially with Objective C having to “fake” class variables by declaring them static.
Then I thought about some “manager” class that would create the instances and pass the control around, but I am not sure introducing an extra class would solve the problem, especially if this class was to be named Manager :-)
I should note that I do have a working system, it just doesn’t feel very nice. By which I mean there is a bit of code duplication going on, if I am not careful the thing might hang, and so on. Any ideas? I am aware that this is underspecified, so that the discussion will probably be more of a brainstorming, but I am interested in the ideas anyway, even if they do not outright solve my problem.
Update: Thank You all for the ideas. What I did in the end:
I reworked the menu contents (buttons, graphics, etc.) to fit under one interface called ScreenView. This is a general interface that looks like this:
@protocol ScreenView
- (void) draw;
- (BOOL) handlesPoint: (CGPoint) p;
- (void) appearWithAnimation;
- (void) disappearWithAnimation;
- (BOOL) hasFinishedAnimating;
@optional
- (void) fingerDown;
- (void) fingerUp;
@end
Thanks to this protocol I was able to throw away all the specific menu screens and create a general menu screen that takes a list of subviews to display and handles all the presentation like drawing, transitions, events and such. This general menu screen does not get subclassed much, because most of the menu screens are happy simply displaying a list of subviews. This would be the V in MVC.
Then I also created a controller class that handles all the events for a certain menu screen. (Obviously the C in MVC.) The root controller class handles the instance management, transitions between menus and some other little things. Most of the menu screens get a customized subclass of the controller that handles the events from the buttons and other subviews.
The number of classes went up, but the code is much cleaner, does not repeat itself and is less prone to errors. The instance management is still not perfect, but I'm reasonably happy with the design. Once again, thanks to all who answered.
One of the tricks I learned for decent design is to always separate your data from your code. This will do WONDERS for your specific problem.
By this I mean that the menu items (strings) and the relationships between the menus should be stored somewhere, either in an array or in a separate file (and read into an array).
You then use this array to instantiate all your menu classes.
Once you recode it to work this way (I've done this with menus), all your code will fall into place, and you'll factor out around 90% of your menuing code (each menu will no longer be its own class, just the same class instantiated with its own unique data).
The targets of the menu items are stored in the "data" as well (as method pointers or class instances).
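A small sketch of what that data could look like; the generic MenuScreen class, its initializer and the selector strings are assumptions, only the data-driven principle comes from the answer:

// Every screen is the same class, instantiated with its own data.
NSArray *mainMenuItems = @[
    @{ @"title" : @"New Game", @"action" : @"startNewGame" },
    @{ @"title" : @"Options",  @"action" : @"showOptions"  },
    @{ @"title" : @"Credits",  @"action" : @"showCredits"  },
];
MenuScreen *mainMenu = [[MenuScreen alloc] initWithItems:mainMenuItems target:self];

// Inside the generic MenuScreen, a tapped item is dispatched straight from its data:
//   SEL action = NSSelectorFromString(item[@"action"]);
//   [self.target performSelectorOnMainThread:action withObject:nil waitUntilDone:NO];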
I think a MenuManager class would be the way to go. You'd have one Menu base class from which all the menu screens derive, and the manager would have a pointer to the currently active menu screen. It could also, for example, keep track of previous menu screens for easy use of back buttons on menu screens in arbitrary menu screen calls. Maybe just use a std::vector for that, so you don't have to recreate the previous menu screens when going back (this would also prevent loss of entered information, like in an Options menu with an Advanced submenu).
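In Objective-C the stack the answer describes could simply be an NSMutableArray; a rough sketch, reusing the ScreenView protocol from the update above (MenuManager itself and its method names are assumptions):

@interface MenuManager : NSObject
- (id<ScreenView>)activeScreen;
- (void)pushScreen:(id<ScreenView>)screen;
- (void)popScreen;   // "back" button
@end

@implementation MenuManager {
    NSMutableArray *_stack;   // previous screens stay alive, so their state is preserved
}

- (instancetype)init
{
    if ((self = [super init])) {
        _stack = [NSMutableArray array];
    }
    return self;
}

- (id<ScreenView>)activeScreen
{
    return [_stack lastObject];
}

- (void)pushScreen:(id<ScreenView>)screen
{
    [[_stack lastObject] disappearWithAnimation];
    [_stack addObject:screen];
    [screen appearWithAnimation];
}

- (void)popScreen
{
    if ([_stack count] < 2) return;              // keep at least the root screen
    [[_stack lastObject] disappearWithAnimation];
    [_stack removeLastObject];
    [[_stack lastObject] appearWithAnimation];   // the previous screen reappears with its state intact
}

@end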
Putting all the contents of the menus into a dictionary, dumping it to a plist, and reading it back as needed by the menu screens is likely the simplest route, but in all honesty you should consider taking a more MVC-centric approach to solving the problem. The screens should be for the presentation of data, not the storage of it. If you provide a clean separation of the data from the views, the problem solves itself.