UIWindow makeKeyWindow versus makeKeyAndVisible

My app is using a 2nd UIWindow to show a special screen when iOS is about to take a screenshot of the app.
By accident I used [UIWindow makeKeyWindow] on my main window when I wanted to remove the 2nd window again. This really should be makeKeyAndVisible instead, but I'm wondering why it worked at all.
I mean: most of the time (99%), my 2nd window was removed as expected and my main window became visible.
I'm asking because I'm wondering whether I have really found the problem, or whether there might still be something else.
Or could it be that the method was incorrectly bound in (previous) MonoTouch versions?

Each method maps to the Objective-C selector of the same name: makeKeyAndVisible and makeKeyWindow, respectively.
Or could it be that the method was incorrectly bound in (previous) MonoTouch versions?
The Git history shows (me ;-) that neither binding has changed since they were first added (more than two years ago).
Documentation about the former states:
You can also hide and reveal a window using the inherited hidden property of UIView.
Maybe this happens in your code (or even within the iOS code).
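
For illustration, a minimal sketch of the difference between the two calls, assuming an app delegate with mainWindow and privacyWindow properties (the names are made up for this example):

// Showing the overlay: makeKeyAndVisible both unhides the window
// and makes it the key window.
- (void)showPrivacyScreen
{
    self.privacyWindow.windowLevel = UIWindowLevelAlert;
    [self.privacyWindow makeKeyAndVisible];
}

// Removing it again: makeKeyWindow only changes key status, never
// visibility, so the overlay must be hidden explicitly. If the
// overlay gets hidden elsewhere (e.g. via the UIView hidden property
// the docs mention), makeKeyWindow alone would appear to "work".
- (void)hidePrivacyScreen
{
    self.privacyWindow.hidden = YES;
    [self.mainWindow makeKeyWindow];
}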

Related

Catch key events in an NSWindow

I'm trying to build a very basic web browser with 3 WebViews (2 hidden and 1 visible at all times).
I'd like to switch between these 3 WebViews by pressing Cmd+1, Cmd+2, Cmd+3.
I have created a basic Cocoa app, added 3 WebViews to it, referenced the WebKit framework, and I'm up and running; this part is working.
Now I wonder:
1) How do I catch key events? It seems so overly complicated that browsing the event structure docs gave me a headache.
[rant]Coming from lots of Windows Forms, GTK, Qt and Java/C#/C++ work, it seems that Xcode gets worse with every release by moving everything around, creating 3 different ways of achieving the same thing, etc. Every time I have to use it, it's like I have to learn everything all over again.[/rant]
2) How do I specifically catch Cmd+number?
This is just for a quick productivity app I'm building to use in conjunction with JIRA (project management).
I'd appreciate it if somebody could point me in the right direction.
Every time I stumble upon a good tutorial, it turns out to be outdated or written for iOS development, which mostly doesn't use the same APIs as OS X anymore.
Sorry about the rant, and thanks for your help!
What you are looking to override is the NSResponder method keyDown:, and what I would recommend is subclassing WebView and creating your own keyDown: implementation (make certain to call [super keyDown:theEvent] somewhere in it, though).
Now, within your keyDown: implementation, to look for the command key: NSEvent objects respond to the modifierFlags method, and one of the flags is NSCommandKeyMask.
E.g.:
NSUInteger flags = [theEvent modifierFlags];
// Test the bit rather than comparing for equality; other
// (device-dependent) modifier bits may be set at the same time.
if( flags & NSCommandKeyMask ){
    // Got it!
}
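
Putting the pieces together, here's a hedged sketch of such a subclass; the class name and the notification are invented for this example, and charactersIgnoringModifiers is used to read which digit was pressed:

#import <WebKit/WebKit.h>

@interface SwitchingWebView : WebView
@end

@implementation SwitchingWebView

- (void)keyDown:(NSEvent *)theEvent
{
    NSUInteger flags = [theEvent modifierFlags] & NSDeviceIndependentModifierFlagsMask;
    NSString *chars = [theEvent charactersIgnoringModifiers];

    if (flags == NSCommandKeyMask && [chars length] == 1) {
        unichar key = [chars characterAtIndex:0];
        if (key >= '1' && key <= '3') {
            // Broadcast which of the three views should become visible;
            // the owning controller observes this and swaps views.
            [[NSNotificationCenter defaultCenter]
                postNotificationName:@"SwitchWebViewNotification"
                              object:self
                            userInfo:@{@"index": @(key - '1')}];
            return;
        }
    }
    [super keyDown:theEvent]; // forward everything else, as noted above
}

@end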
Don't worry about what kind of views they are.
Put each one inside a tab of an NSTabView, with or without tab control displayed.
With tab control, set the desired key combo per tab in IB.
With or without it, create a menu item in the menu bar for each tab, using the same key combos. This has long been the recommended approach: Apple suggests adding a menu bar item for everything that has a keyboard shortcut, and the menu bar receives key equivalents before anything else.
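
A hedged sketch of the menu-bar route, assuming an app delegate with viewMenu (a menu already installed in the menu bar) and tabView outlets; note that key equivalents on NSMenuItem default to the Command modifier:

- (void)applicationDidFinishLaunching:(NSNotification *)notification
{
    for (NSUInteger i = 0; i < 3; i++) {
        NSString *key = [NSString stringWithFormat:@"%lu", (unsigned long)(i + 1)];
        NSMenuItem *item = [[NSMenuItem alloc] initWithTitle:[@"Web View " stringByAppendingString:key]
                                                      action:@selector(selectTabFromMenu:)
                                               keyEquivalent:key];
        [item setTag:i]; // the tag doubles as the tab index
        [self.viewMenu addItem:item];
    }
}

- (void)selectTabFromMenu:(NSMenuItem *)sender
{
    [self.tabView selectTabViewItemAtIndex:[sender tag]];
}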

Display waiting screen when user selects UITableView row

When the user selects a row in my table, I need to load a bunch of data with CoreData. This takes several seconds (at least when running on the simulator - haven't tested on a device yet, but I imagine it will still be pretty long). I want to display a loading popover screen (I'm developing for iPad) when the user selects a row, then have it disappear once the data is loaded.
When working with UIButtons, I've always done this by wiring up both a Touch Down and a Touch Up Inside action. I put the code that displays the popover in the Touch Down handler, then do all my actual work (in this case, loading from Core Data) in the Touch Up handler. At the end of the Touch Up handler, I close the popover.
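
For reference, that two-action wiring is just the following (the selector names here are placeholders):

[button addTarget:self action:@selector(showLoadingPopover:)
 forControlEvents:UIControlEventTouchDown];
[button addTarget:self action:@selector(loadDataAndDismissPopover:)
 forControlEvents:UIControlEventTouchUpInside];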
Is there a way to split up touches in a table view the same way as ones on a button?
So while I was typing up the question, I came across something in the Apple docs for tableView:willSelectRowAtIndexPath:, and it looks like it is there to address this very issue. I didn't see any other topics on this on SO, so I figured I'd make use of the "answer your own question" Q&A feature here, so people looking for this in the future will find it a little more easily.
Huge FYI block:
I should note that tableView:willSelectRowAtIndexPath: is actually supposed to return an NSIndexPath *, unlike tableView:didSelectRowAtIndexPath: (which returns void). However, I originally had
- (void)tableView:(UITableView *)tableView willSelectRowAtIndexPath:(NSIndexPath *)indexPath // WRONG: should return NSIndexPath *
{
    //stuff
}
Note the void return type. Xcode (version 4.5) allowed this without any errors or warnings (and even offered it in the autocomplete list). When I ran the code this way, I always hit an invisible breakpoint at libobjc.A.dylib`objc_autorelease (maybe because I have the "All Exceptions" breakpoint enabled?). I couldn't get past this breakpoint, I never got any debug information in the debugger console, and the app never terminated. I happened to look at the Apple doc again and noticed that the method is supposed to return an NSIndexPath (specifically, the index path the table should interpret as the one that was selected), but if I hadn't happened to see it there, I never would have known what the problem was.
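
In other words, the pattern looks roughly like this (the popover and loading helper methods are hypothetical names):

- (NSIndexPath *)tableView:(UITableView *)tableView willSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    // Cheap UI work only: get the loading popover on screen.
    [self showLoadingPopoverForIndexPath:indexPath];
    return indexPath; // returning nil would cancel the selection
}

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    // The slow Core Data work happens here, after the popover is up.
    [self loadDataForIndexPath:indexPath];
    [self dismissLoadingPopover];
}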

JTRevealSideBar dismiss sidebar

I have implemented JTRevealSideBar in my project and it's working quite well, but I have been trying to figure out whether it's possible to do something like the Facebook app does, where if the user taps the main view controller while the sidebar is revealed, the sidebar is dismissed. Does anybody know if this is possible to implement?
Never having used it before, I will say: probably.
That said, the GitHub page for the pod you mention says it's no longer being supported, since its use was discouraged at WWDC14. One of the alternatives JT mentions is PKRevealController 2, which seems to be fairly simple to use.
The reason I bring it up at all is that devs will usually give you some hint about how to do what you're asking for in one of the project's main header files. For example, PKRevealController.h lists the property
/// Whether to use the front view's entire visible area to allow pan based reveal.
@property (nonatomic, assign, readwrite) BOOL recognizesPanningOnFrontView;
which is exactly what you'd want to set to YES in your project. I would recommend taking a look at the header files in the JTRevealSideBar pod to see if there is something similar.
Now, I have used MMDrawerController before (it's pretty great!), and it similarly has a close-gesture mask that can include MMCloseDrawerGestureModeBezelPanningCenterView (the user can close the drawer by starting a pan anywhere within the bezel of the center view).
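
With MMDrawerController, for instance, that behaviour is nearly a one-liner; a sketch, assuming centerViewController and sideViewController already exist (the mask names come from MMDrawerController.h):

MMDrawerController *drawer =
    [[MMDrawerController alloc] initWithCenterViewController:centerViewController
                                    leftDrawerViewController:sideViewController];
// Facebook-style behaviour: a tap or a bezel pan on the center view
// closes the drawer again.
drawer.closeDrawerGestureModeMask = MMCloseDrawerGestureModeTapCenterView |
                                    MMCloseDrawerGestureModeBezelPanningCenterView;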
So you see, you'll just have to do a little digging. Otherwise you'll need to implement a pan gesture recognizer... but I can't really say for sure where you'll be putting it in your particular implementation.
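
If the pod offers nothing built in, a fallback sketch: attach a recognizer to the main view controller's view while the sidebar is revealed, and remove it when the sidebar closes (dismissSidebar: is a hypothetical method on your controller):

UITapGestureRecognizer *tap =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(dismissSidebar:)];
[self.view addGestureRecognizer:tap];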

iOS 3 - UITabBarItems disappear from UITabBar after a memory warning occurs

At a great number of requests from people using older iOS hardware, I'm currently refactoring and optimizing my app so it will work on iOS 3. That being said I've got a glitch with my UITabBar that I can replicate on all of the iPhone 3G units I've tested it on.
The glitch appears to have been fixed in iOS 4, but I was wondering if before that time, anyone else had this glitch as well and had figured out a (relatively elegant) workaround for it.
The problem is what you can see below: when a memory warning occurs and all of the offscreen views are released, and I then bring a view controller with a tab bar back on screen, all of the UITabBarItems that are supposed to be in it are gone. As far as I can see, they're not being drawn at all, i.e. tapping the tab bar has no effect. After setting breakpoints and examining the UITabBar and its items in memory, they're all still there (i.e. not getting released); they're just not getting redrawn when the UITabBar is re-created in the controller's loadView method.
My app works similarly to the official Twitter app in that I implemented my own version of UITabBarController so I could properly control its integration with a parent UINavigationController. I set it up as closely as possible to the original UITabBarController class though, with each child view controller managing its own UITabBarItem and initializing it inside its init methods. Once the child view controllers are passed to my TabController object via an accessor method, their tabBarItems are accessed and added to the UITabBar view.
Has anyone seen this behaviour before and know of a way I can fix it? I'm hoping there's a really simple fix for this since it already works in iOS 4, so I don't want to hack it up too badly.
Thanks a lot!
After a bit of research, I think I found a solution to this. It's not the elegant solution I was after, but it definitely works.
I'm guessing that after a memory warning is triggered, something happens to the UITabBarItem objects that basically renders them corrupt. I tried a lot of things (flushing out the UITabBar, re-creating the controllers array, etc.), but nothing worked.
I finally discovered that if you completely destroy the UITabBarItems and allocate new ones in their place, then those ones will work. :)
So my final solution was to add an extra condition to the viewDidLoad method of my controller: if the detected system is iOS 3 and there is already an array of UITabBarItems, it goes through each one, copies out all of the properties needed, destroys it, allocates a new one, and then copies the old properties over to the new one.
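
A hedged sketch of that viewDidLoad workaround, written for manual reference counting since this targets iOS 3 (isRunningOniOS3 stands in for whatever version check you use):

if (isRunningOniOS3 && [self.tabBar.items count] > 0) {
    NSMutableArray *freshItems = [NSMutableArray array];
    for (UITabBarItem *oldItem in self.tabBar.items) {
        // Copy the needed properties into a brand-new item.
        UITabBarItem *newItem = [[[UITabBarItem alloc] initWithTitle:oldItem.title
                                                               image:oldItem.image
                                                                 tag:oldItem.tag] autorelease];
        newItem.badgeValue = oldItem.badgeValue;
        [freshItems addObject:newItem];
    }
    // Replace the old (corrupt) items wholesale.
    [self.tabBar setItems:freshItems animated:NO];
}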
I'm still going to keep an eye out for a better solution (I think there's a bit of overhead in this method), but thankfully at this stage, iOS 3 legacy support is becoming less and less of an issue. :)

Why does the execution order of touchesBegan, target-action and touchesEnded change with fast touches of UIButton?

UPDATE: With the blush of shame I discovered that the order had nothing to do with the speed of tapping. I was calling the visual code before the super touchesEnded:withEvent: call, which is why, if you tapped really fast, the display never got a chance to draw the highlighted state before it was dismissed again. When the code actually blocked the main thread for a few milliseconds, the highlighted state stayed visible until the main thread unblocked, whereas if you tapped really fast, it looked like nothing happened at all. Moving the super call up to the top of the overridden method fixed it all. Sorry, if any moderator sees this post it can be deleted. shame
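
Concretely, the fix was just to move the super call to the top of the override (the display helper name here is made up):

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesEnded:touches withEvent:event]; // first, not last
    [self updateAppearanceForTouchUp]; // the custom display code
}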
This problem must have been asked a thousand times on SO, yet I can't find an explanation that matches my specific issue.
I have a UIButton subclass with a custom design. Of course, the design is custom enough that I can't just use the regular setSomething:forState: methods. I need a different background color on touch, for one, and some icons that need to flash.
To implement these view changes, I (counter-intuitively) put the display code in (A) touchesBegan:withEvent: and (Z) touchesEnded:withEvent:, before calling their respective super methods. It feels weird, but it works as intended, or so it seemed at first.
After addTarget:action:forControlEvents: was used to bind UIControlEventTouchUpInside to the method (X) itemTapped:, I would expect these methods to always fire in the order (A)(X)(Z). However, if you tap the screen really fast (or click the mouse really fast in the simulator), they fire in the order (A)(Z)(X), where (A) and (Z) follow each other in such rapid succession that the visual feedback for the tap is invisible. This is unwanted behavior, and it can't be the way to go, since so many apps need similar behavior, right?
So my question to you is: what am I doing wrong? One thing I'm guessing is that the visual appearance of the buttons shouldn't be manipulated in touchesBegan:withEvent: and touchesEnded:withEvent:, but then where? Or am I missing some other well-known fact?
Thanks for the nudge,
Eric-Paul.
I don't know why the order is different, but here are two suggestions to help deal with it.
What visual changes are you making to the button? If it's things like changing the title, image, or background image, you can do all of this by modifying the highlighted state of the button: properties like the title and background image can be set per state. When the user's finger is down on the button, the highlighted state is turned on, so any changes you make to that state will be visible at that time. Do note that if you're also making use of the selected state on the button, you'll need to set up the visual appearance for UIControlStateHighlighted|UIControlStateSelected as well; otherwise it will fall back to inheriting from the normal state when both highlighted & selected are on.
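
For example (the image names are placeholders), a setup along these lines replaces the touch-handler overrides entirely:

[button setBackgroundImage:[UIImage imageNamed:@"button-normal"]
                  forState:UIControlStateNormal];
[button setBackgroundImage:[UIImage imageNamed:@"button-pressed"]
                  forState:UIControlStateHighlighted];
// Cover the combined state too if the selected state is in use,
// or it falls back to the normal appearance:
[button setBackgroundImage:[UIImage imageNamed:@"button-pressed-selected"]
                  forState:UIControlStateHighlighted | UIControlStateSelected];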
The other suggestion is to ditch touchesBegan:withEvent: and touchesEnded:withEvent: and switch over to using the methods inherited from UIControl, namely beginTrackingWithTouch:withEvent: and endTrackingWithTouch:withEvent:. You may also want to implement continueTrackingWithTouch:withEvent: and use the touchInside property to turn off your visual tweaks if the touch leaves the control.
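
A sketch of that tracking approach inside the UIButton subclass (applyPressedLook / applyNormalLook are hypothetical appearance helpers):

- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    [self applyPressedLook];
    return [super beginTrackingWithTouch:touch withEvent:event];
}

- (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    // Drop the pressed look if the finger slides off the control.
    if (self.touchInside) {
        [self applyPressedLook];
    } else {
        [self applyNormalLook];
    }
    return [super continueTrackingWithTouch:touch withEvent:event];
}

- (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    [self applyNormalLook];
    [super endTrackingWithTouch:touch withEvent:event];
}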