NSSegmentedCell Subclass and Custom Geometry/Layout Impossible? - objective-c

A Tale of Two Subclasses
By Ben Stock
Prologue
I'm in the process of making a really nice looking set of controls which automatically change their appearance depending on the type of window they're used in (e.g. if you drop a button in a normal window, it looks like any other standard Aqua button; if you drop it on an NSPanel with a window mask of NSHUDWindowMask, however, it'll automatically switch its style to look good on a HUD background). So far, I've subclassed NSButton, NSTextField, NSSlider, and NSSearchField. Last night I started on NSTabView, only to be slammed down by its lack of customizability. It's a real pain in the ass, but I'm a developer, so I'm used to finding my own way. The first thing I think to do is add an instance of NSSegmentedControl in place of the private tabs used by NSTabView. So far, so good. I've got the buttons selectable, they automatically update when new NSTabViewItems are added, and they work just like the real thing.
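For context, the window-dependent styling described above can be driven by a check along these lines. This is only a minimal sketch, not the author's actual code; the HUDAwareButton class and its hudStyle flag are assumptions made for illustration.

@interface HUDAwareButton : NSButton
@property (nonatomic) BOOL hudStyle; // assumed flag that the drawing code reads
@end

@implementation HUDAwareButton

- (void)viewDidMoveToWindow
{
    [super viewDidMoveToWindow];
    // Switch to the HUD look whenever the button lands in a HUD panel.
    self.hudStyle = ([[self window] styleMask] & NSHUDWindowMask) != 0;
    [self setNeedsDisplay:YES];
}

@end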
And the Pain Begins …
Finally, I start to style my segments, and … WTF have I gotten myself into‽ I should've just gone into acting or something. Objective-C development is slowly taking years off my life. No matter what I do, the "tracking areas" used by NSSegmentedCell don't seem to update when my segment widths change. So when my widths change, my artwork changes with them, but the actual tracking area doesn't (even when I override -updateTrackingAreas). It's really hard to explain, so I decided to draw my segment rectangles behind and in front of the ones drawn by super in -drawSegment:inFrame:withView:. Here's a screenshot with my art drawn on top of the underlying tracking areas:
And here's super's implementation above my segment rects:
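The overlay in those screenshots comes from an override along the following lines. This is only a sketch of the debugging technique described above, assuming an NSSegmentedCell subclass; the fill color and compositing operation are placeholders.

- (void)drawSegment:(NSInteger)segment
            inFrame:(NSRect)frame
           withView:(NSView *)controlView
{
    // Let super draw its idea of the segment first ...
    [super drawSegment:segment inFrame:frame withView:controlView];

    // ... then flood the frame this subclass was handed, so the two
    // rectangles can be compared visually.
    [[[NSColor redColor] colorWithAlphaComponent:0.3] setFill];
    NSRectFillUsingOperation(frame, NSCompositeSourceOver);
}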
I've tried overriding everything I can think of. Here are a few of the methods I've overridden (and un-overridden); a sketch of a couple of them follows the list:
-cellSize (NSSegmentedCell)
-cellSizeForBounds: (NSSegmentedCell)
-sizeToFit (NSSegmentedControl)
-intrinsicContentSize (NSSegmentedControl)
-setWidth:forSegment: (NSSegmentedControl/Cell)
-startTrackingAt:inView: (NSSegmentedCell)
-continueTracking:at:inView: (NSSegmentedCell)
-stopTracking:at:inView:mouseIsUp: (NSSegmentedCell)
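For illustration, two of the overrides listed above might look roughly like this. These are sketches of the kind of thing tried, not the post's actual code, and the tracking-area nudge in -setWidth:forSegment: is just one guess at forcing a recalculation.

- (NSSize)cellSizeForBounds:(NSRect)aRect
{
    // Start from super's answer and adjust for any custom segment padding.
    NSSize size = [super cellSizeForBounds:aRect];
    return size;
}

- (void)setWidth:(CGFloat)width forSegment:(NSInteger)segment
{
    [super setWidth:width forSegment:segment];
    // Ask the host view to rebuild its tracking areas after a width change.
    [[self controlView] updateTrackingAreas];
}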
At this point, some of those methods in the above list are still using my overrides and some aren't. I've mixed and matched, deleted, simplified, rewritten, and refactored, and no matter what I do, the underlying rectangles don't change. I love Apple as much as the next guy, but their view of customization needs to change. I can't stand not being able to understand what's going on in the implementation of all these stupid controls. Not to mention the fact that I still can't fully wrap my head around Auto Layout (which is about the most un-"auto" thing I've ever dealt with), but that's a post for another day. Anyway, if anybody could help a brotha out, I'd be super grateful. Sorry for ranting and thanks for reading!
P.S. None of these things are finished, so please don't be too hard on a few pixel imperfections. ;-)

Related

NSCell vs NSView: when many controls are needed

I am aware that Apple is deprecating the use of NSCell in favour of NSView (see AppKit 10.10 release notes). It was previously recommended that NSCell be used for performance reasons when many controls were needed.
I have spent considerable time implementing a custom control that required many subviews, and the performance using NSView-based subviews was not good. See the related Stack Overflow discussion "What are the practical limits in terms of number of NSView-type instances you can have in a window?" I was struggling with 1000-2000 in-memory objects (which doesn't seem like a lot). What is the actual reason for this limitation?
One thing that confuses me on the above is view-based Cocoa NSTableViews. You can create table views with more than 1000-2000 cells, and they don't seem to suffer poor loading or scrolling performance. If each of the cells is an NSView, how is this achieved?
If there are practical limits, then what are Apple thinking when they say they are deprecating usage of NSCells? I am sure they are aware that some controls need a large number of subviews.
Further, a (probably outdated) Apple Developer Guide gives the following explanation of the difference between NSView and NSCell, which I need explained further:
"Because cells are lighter-weight than controls, in terms of inherited data and behavior, it is more efficient to use a multi-cell control rather than multiple controls."
Inherited data: this would surely only cause "bloat" if the data was actually being used, and it would only be used if you needed it?
Inherited behavior: methods that you don't use in a class/object surely can't cause any overhead?
What is the real difference between the lightweight NSCell and the heavyweight NSView, other than that it just seems to be conventionally accepted?
(I would really like to know.)
A brief and incomplete answer:
NSCells are about drawing state, and not much else. NSViews must draw, but also must maintain and update all kinds of other information, such as their layout, responding to user input events, etc.
Consider the amount of calculation that must happen if you resize a view that contains many hundreds of subviews: every single subview position and size must be maintained according to the existing constraints for the view layout. That alone is quickly a very large amount of processing.
NSCells, in contrast, don't exist in a layout in that way. Their only job is to draw information in a given rectangle whenever asked to do so.
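To make the contrast concrete, a bare-bones cell can be little more than a draw method. The class below is invented for illustration and is not from the question:

@interface BadgeCell : NSCell
@end

@implementation BadgeCell

// Essentially the cell's entire job: render itself into the rect it is given.
- (void)drawInteriorWithFrame:(NSRect)cellFrame inView:(NSView *)controlView
{
    [[NSColor lightGrayColor] setFill];
    NSRectFill(cellFrame);
    [[self stringValue] drawInRect:cellFrame withAttributes:nil];
}

@end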
One thing that confuses me on the above is view-based Cocoa NSTableViews. You can create table views with more than 1000-2000 cells, and they don't seem to suffer poor loading or scrolling performance. If each of the cells is an NSView, how is this achieved?
NSTableViews re-use views. The only views actually generated are those that are visible, plus maybe a row of views above and a row below the visible area. When scrolling the table, the object value associated with a given row of views is changed so that the views in the row display different content.
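A sketch of that reuse pattern using the standard view-based table API; the reuse identifier and the items array are assumptions for illustration:

- (NSView *)tableView:(NSTableView *)tableView
   viewForTableColumn:(NSTableColumn *)tableColumn
                  row:(NSInteger)row
{
    // Ask the table for a recycled view first; a new one is only created
    // when nothing is available for reuse.
    NSTableCellView *cellView = [tableView makeViewWithIdentifier:@"ItemCell"
                                                            owner:self];
    cellView.textField.stringValue = self.items[row]; // same view, new content
    return cellView;
}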

Understanding the use of addChildViewController

I'm working with some code that I need to refactor. A view controller is acting as a container for two other view controllers, and will swap between them, as shown in the code below.
This may not be the best design. Swapping the view controllers in this way might not be required. I understand that. However, as I work with this code I want to further understand what happens with the addChildViewController call. I haven't been able to find the answer in Apple's docs or in related questions here (probably an indication that the design needs to change).
Specifically - how does the container view controller handle a situation where it is asked to add a child view controller, which it has already added? Does it recognise that it has already added that view controller object?
E.g. if the code below is inside a method - and that method is called twice...
[self addChildViewController:viewControllerB];
[self.view addSubview:viewControllerB.view];
[viewControllerB didMoveToParentViewController:self];
[viewControllerA willMoveToParentViewController:nil];
[viewControllerA.view removeFromSuperview];
[viewControllerA removeFromParentViewController];
Thanks,
Gavin
In general, Apple's guidelines for view controller "containment" (where one view controller contains another) should be followed to determine whether you actually need to implement containment at all.
In particular, worrying about adding the same child view controller twice is like worrying about presenting the same view controller twice. If you've really thought things through, you shouldn't need to face that problem. Your hunch is correct.
I agree that Apple's docs should be more up-front about what happens with weird parameters or when called out of sequence, but it may also be a case of not wanting to tie themselves to an error-correcting design that will cause trouble down the road. When you work out a design that doesn't ever call these methods in the wrong way, you solve the problem correctly and make yourself independent of whatever error correction they may or may not have - even more important if you consider that, since it's not documented, that error correction may work differently in the future, breaking your app.
Going even a bit further, you'll notice that Apple's container view controllers can't get into an invalid state (at least not easily with the public API). With a UITabBarController, switching from one view controller to another is an atomic operation, and the tab bar controller at any point in time knows exactly what's going on. The most it ever has to do is remove the active one and show the new one. The only time it blows everything out of the water is when you tell it "you should blow everything out of the water and start using these view controllers instead".
Coding for anything else, like removing all views or all view controllers no matter what, may in some cases seem expedient or robust, but it's quite the opposite, since in effect one end of your code doesn't trust the other end of your code to keep its part of the deal. In any situation where that actually helps you, it means that you've let people add view controllers willy-nilly without the control that you should desire, and in that case, that's the problem you should fix.
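In that spirit, one way to keep the containment bookkeeping from the question in a single, atomic method might look like the sketch below. The -showChildViewController: name and the currentChild property are assumptions, not part of the question's code.

- (void)showChildViewController:(UIViewController *)newChild
{
    if (newChild == self.currentChild) {
        return; // already showing it; the same child is never added twice
    }

    UIViewController *oldChild = self.currentChild;

    [oldChild willMoveToParentViewController:nil];
    [self addChildViewController:newChild];

    newChild.view.frame = self.view.bounds;
    [self.view addSubview:newChild.view];
    [oldChild.view removeFromSuperview];

    [oldChild removeFromParentViewController];
    [newChild didMoveToParentViewController:self];

    self.currentChild = newChild;
}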

Extend Gesture Length

I've come across a problem to which the simplest solution would be to extend the length of a UIGestureRecognizer. What I mean is that I need the iOS device to still believe the user has their finger on the screen for about 0.1 seconds after they release it. I need the device to think that the finger is in the exact same position for those 0.1 seconds as it was when the user released it.
Any help as to whether this is possible would be greatly appreciated!
Thank you!!!
EDIT:
Sorry for the late reply, I've been really busy with work.
To elaborate, I'm using a set of classes made by Alan Quartermain called AQGridView. It's a class that strongly resembles UITableView; however, it displays data in a grid instead of a list. There appears to be a bug where (if I understand correctly, and I may very well not) the grid's data is reloaded before the delegate method that's called when a user ends a gesture has finished, if the user releases their finger very quickly while dragging a cell from one grid index to another. This causes a graphical glitch (which can be recreated in the springboard example that comes with the class set) where the dragged cell appears to settle one cell before or after its appropriate location, and then quickly jumps to its proper location. I believe this is because there is a brief period, when the user releases their finger, during which the grid's count is one less than it is when the cell settles.
This is a poor explanation of the problem, but the best I could come up with. As well, I'm a relatively new developer and could be way off on the cause of the problem. That is why I believe the most appropriate fix would be to extend the gesture length by a very small amount. If anyone wants to take a look at the AQGridView classes (https://github.com/AlanQuatermain/AQGridView/) I would really appreciate it! But if possible, a simpler fix would just be to simulate the touch that the user made right before they released their finger, so that the desired animation occurs.
Inside the touchesEnded:withEvent: method, start a timer for 0.1 seconds and then perform your selector.
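A minimal sketch of that suggestion, assuming a UIResponder subclass (a view or gesture recognizer); -finishGesture is a hypothetical method standing in for whatever should happen after the delay:

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesEnded:touches withEvent:event];
    // Defer the real end-of-gesture work by a tenth of a second.
    [self performSelector:@selector(finishGesture)
               withObject:nil
               afterDelay:0.1];
}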
You can even subclass UIGestureRecognizer and implement your own gestures. Check the first answer to this question:
See: custom iOS gesture

Why does the execution order of touchesBegan, target-action and touchesEnded change with fast touches of UIButton?

UPDATE: With the blush of shame I discovered that the order had nothing to do with the speed of tapping. I was calling the visual code before the super touchesEnded:withEvent: call, which was why, if you tapped really fast, the display never got a chance to draw the highlighted state before it was dismissed again. Because the code that was actually at fault blocked the main thread for just a few milliseconds, the highlighted state would stay visible until the main thread unblocked again, whereas if you tapped really fast, it looked like nothing happened at all. Moving the super call up to the top of the overridden method fixed it all. Sorry, if any moderator sees this post it can be deleted. shame
This problem must have been asked a thousand times on SO, yet I can't find the explanation to match my specific issue.
I have a UIButton subclass with a custom design. Of course the design is custom enough that I can't just use the regular setSomething:forControlState: methods. I need a different background color on touch, for one, and some icons that need to flash.
To implement these view changes, I (counter-intuitively) put the display code in (A) touchesBegan:withEvent: and (Z) touchesEnded:withEvent:, before calling their respective super methods. Feels weird, but it works as intended, or so it seemed at first.
After addTarget:action:forControlEvents: was used to bind UIControlEventTouchUpInside to the method (X) itemTapped:, I would expect these methods to always fire in the order (A)(X)(Z). However, if you tap the screen real fast (or click the mouse in the simulator), they fire in the order (A)(Z)(X), where (A) and (Z) follow each other in such rapid succession that the whole visual feedback for tapping is invisible. This is unwanted behavior. This also can't be the way to go, for so many apps need similar behavior, right?
So my question to you is: what am I doing wrong? One thing I'm guessing is that the visual appearance of the buttons shouldn't be manipulated in touchesBegan:withEvent: and touchesEnded:withEvent:, but then where? Or am I missing some other well-known fact?
Thanks for the nudge,
Eric-Paul.
I don't know why the order is different, but here are two suggestions to help deal with it.
What visual changes are you making to the button? If it's things like changing title/image/background image, you can do all this by modifying the highlighted state of the button. You can set a few properties like title and background image per-state. When the user's finger is down on the button, the highlighted state is turned on, so any changes you make to this state will be visible at this time. Do note that if you're making use of the selected state on the button, then you'll need to also set up the visual appearance for UIControlStateHighlighted|UIControlStateSelected, otherwise it will default back to inheriting from Normal when both highlighted & selected are on.
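For example, the per-state configuration described above might look like this (the image names are placeholders):

[button setTitle:@"Buy" forState:UIControlStateNormal];
[button setBackgroundImage:[UIImage imageNamed:@"buy-normal"]
                  forState:UIControlStateNormal];
[button setBackgroundImage:[UIImage imageNamed:@"buy-pressed"]
                  forState:UIControlStateHighlighted];
// If the selected state is also in play, cover the combined state explicitly,
// otherwise it falls back to the normal appearance:
[button setBackgroundImage:[UIImage imageNamed:@"buy-pressed-selected"]
                  forState:UIControlStateHighlighted | UIControlStateSelected];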
The other suggestion is to ditch touchesBegan:withEvent: and touchesEnded:withEvent: and switch over to using the methods inherited from UIControl, namely beginTrackingWithTouch:withEvent: and endTrackingWithTouch:withEvent:. You may also want to implement continueTrackingWithTouch:withEvent: and use the touchInside property to turn off your visual tweaks if the touch leaves the control.
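A sketch of that second suggestion, in a UIButton (or other UIControl) subclass; -applyPressedLook: is a hypothetical method standing in for the custom drawing changes:

- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    [self applyPressedLook:YES];
    return [super beginTrackingWithTouch:touch withEvent:event];
}

- (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    // Drop the pressed look if the finger drags outside the control.
    [self applyPressedLook:self.touchInside];
    return [super continueTrackingWithTouch:touch withEvent:event];
}

- (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    [self applyPressedLook:NO];
    [super endTrackingWithTouch:touch withEvent:event];
}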

How to structure a very large Objective-C class?

I have a fairly complex window backed by a controller class that is, unsurprisingly, growing to meet the needs of the window it manages. While I believe I am sticking to proper MVC, I'm still having problems managing a fairly largish controller class.
How do you breakdown your objects? Maybe use Categories? For example, one category to handle the bottom part of the window, another category to handle my NSOutlineView, another category to handle a table, and so on and so forth?
Any ideas or suggestions are welcome.
It sounds like it's a complex window controller that's growing to unmanageable proportions? This is getting to be a more common issue because of applications which, like the iApps, do most of their work in a single window.
As of Leopard, the recommended way of breaking it down is to factor out each part of the window into its own NSViewController subclass. So, for example, you'd have a view controller for your outline view, and a view controller for each of your content views, etc.
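Roughly, the window controller then just owns one NSViewController per region of the window and drops each controller's view into a placeholder view. The class, nib, and outlet names below are invented for illustration, not from the question:

// In the window controller:
self.sidebarController = [[SidebarViewController alloc] initWithNibName:@"Sidebar"
                                                                  bundle:nil];
self.sidebarController.representedObject = [self document];

// Place the controller's view inside the placeholder view in the window.
[self.sidebarController.view setFrame:[self.sidebarContainer bounds]];
[self.sidebarContainer addSubview:self.sidebarController.view];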
Also, I'd like to second the use of #pragma marks to divide code files up into segments, and in addition to categories, I also like to use class extensions for private methods.
It's a simple answer, but the code folding feature of the Xcode IDE can be handy for focusing your attention on sections of a class. Another little thing that might help is going to View->Code Folding and turning on Focus Follows Selection. This makes it so the background color of the scope of your current selection is white while everything else is shades of gray.
Categories are ideal for this. Create a new file for each category, and group them by functionality, as you suggested.
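A sketch of what that split might look like, with one category per area of the window and each pair of files named after the area (all names here are illustrative):

// MyWindowController+OutlineView.h
@interface MyWindowController (OutlineView)
- (void)reloadSidebarOutline;
@end

// MyWindowController+OutlineView.m
@implementation MyWindowController (OutlineView)

- (void)reloadSidebarOutline
{
    // Outline-view-specific logic lives here, out of the main implementation file.
}

@end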
I've tried using Categories in situations like this and I just end up confusing myself, wondering how in the world I'm calling that method when it's "obviously" not in the class I'm looking at.
I'd recommend liberal use of #pragma mark in your source code. Makes it super-easy to browse through all your methods.