Is it possible to observe -visibleRect - objective-c

I would like to be notified whenever a certain NSView's - (NSRect)visibleRect changes, because I want to do some fancy subview layout based on the visible rect. Frankly, right now I'm stumped; -visibleRect doesn't emit KVO notifications (which makes sense), and there doesn't seem to be a way to find out whether the visible rect has changed without explicitly calling -visibleRect.
Is this at all possible? (or is it a terrible, terrible idea?)

I think you can either override -[NSView updateTrackingAreas] or listen for NSViewDidUpdateTrackingAreasNotification. Those may happen on more occasions than just a change of the visible rect, but they should happen for any change of the visible rect. I think.
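A minimal sketch of that approach in your NSView subclass; the _lastVisibleRect ivar and the -layoutForVisibleRect: method are hypothetical names for your own state and layout code, not AppKit API:

- (void)updateTrackingAreas
{
    [super updateTrackingAreas];
    NSRect visible = [self visibleRect];
    // Only re-run the fancy layout when the visible rect actually changed.
    if (!NSEqualRects(visible, _lastVisibleRect)) {
        _lastVisibleRect = visible;
        [self layoutForVisibleRect:visible];
    }
}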
That said, it may be a terrible idea. Hard to know. :)

Another option, available on 10.5 and later, is the -viewWillDraw method, which is called just before the view (and its subviews) are drawn. You can fetch the view's visible rect and perform layout before calling [super viewWillDraw].
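A sketch of that approach, again caching the last-seen rect (the _lastVisibleRect ivar is a hypothetical name) so the layout only re-runs when something changed:

- (void)viewWillDraw
{
    NSRect visible = [self visibleRect];
    if (!NSEqualRects(visible, _lastVisibleRect)) {
        _lastVisibleRect = visible;
        // Adjust subview frames here, before super descends into subviews.
    }
    [super viewWillDraw];
}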

Ken's suggestion of listening for tracking-area changes feels hacky but seems to work, although it only triggers after the resize is complete. If you need updates during resizing, as I did, overriding -[NSView resizeWithOldSuperviewSize:] will do that.
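Roughly like this, as a sketch; -layoutForVisibleRect: is a stand-in for whatever layout you need to do:

- (void)resizeWithOldSuperviewSize:(NSSize)oldSize
{
    [super resizeWithOldSuperviewSize:oldSize];
    // The visible rect may have changed mid-resize; re-run layout now.
    [self layoutForVisibleRect:[self visibleRect]];
}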

The -updateTrackingAreas solution does not appear viable on Mojave, at least for NSScrollView: Mojave does not always call -updateTrackingAreas while scrolling an NSScrollView. I haven't tested other OS versions or other view types.

Related

Cocoa: Update views outside NSWindow's content view during live window resizing?

I have a standard NSWindow with a toolbar. One of the toolbar's items is a custom view -- specifically, an NSTextField. (More specifically, it's a timer app -- the timer's controls as well as the digital display are all within the toolbar, with other stuff in the window's content area. The NSTextField is the digital display.)
Ordinarily, I just update the timer every second by changing the 'stringValue' property of the NSTextField, which causes it to update itself. But during a live window resize, even though the code that updates the 'stringValue' property is running (which I have verified with NSLog), the NSTextField doesn't draw itself again until the window resizing is done. Meanwhile, the stuff inside the content area is updating itself just fine.
I've tried all the ways I know to tell the NSTextField to draw itself, but it just refuses to happen until the live resize is done. Any ideas? Obviously it must be possible somehow, as the toolbar gets resized along with the rest of the window -- so you'd think it would be possible to force it to redraw one or more of its subviews as it is moving them around. I'm assuming I can hack this together by subclassing something, but my Cocoa-fu is not yet strong enough to figure out the easiest/most proper way to do so.
Thanks in advance...
EDIT: I kind of figured out a solution -- it's not great but it mostly works for now. It's in my comments below.
Just invoke -[NSWindow displayIfNeeded] after marking the view as needing display. I encountered this problem when implementing the Mac driver for Wine (an open-source project for running Windows software on OS X and other Unix-like OSes).
http://source.winehq.org/source/dlls/winemac.drv/cocoa_window.m?v=wine-1.7.11#L1905
(That's LGPL code, so you'll want to think carefully before copying it. But you can learn implementation techniques from it without worry.)
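In the timer case, that amounts to something like this after each tick; timerField and newTimeString are placeholder names for your text field and display string:

timerField.stringValue = newTimeString;
[timerField setNeedsDisplay:YES];
// Force the window to flush pending drawing now, even during a live resize.
[[timerField window] displayIfNeeded];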

layoutSubviews being called repeatedly on iOS 6 after CATransaction

I inherited an overly complicated project (so I don't know all of its inner workings), and I'm running into a bug. Certain parts of my app have some long animations done with CATransaction, and it seems to be causing layoutSubviews to be called repeatedly while the animations are active. This doesn't happen on iOS 5, where everything looks correct, but on iOS 6 it gets called nonstop and interferes with a lot of the layout of the view. The stack trace is all hidden/grayed out, but it does seem to begin with CA::Transaction::commit()
Did anything with CATransaction change between iOS versions to cause something like this?
See this post: UIView/CALayer: Transform triggers layoutSubviews in superview
Apple answered me via TSI:
why am I seeing this behavior?
is this an inconsistency, or am I misunderstanding some core concepts?
A view will be flagged for layout whenever the system feels something has changed that requires the view to re-calculate the frames of its subviews. This may occur more often than you'd expect and exactly when the system chooses to flag a view as requiring layout is an implementation detail.
why does it cascade upwards the view hierarchy?
Generally, changing a geometric property of a view (or layer) will trigger a cascade of layout invalidations up the view hierarchy because parent views may have Auto Layout constraints involving the modified child. Note that Auto Layout is active in some form regardless of whether you have explicitly enabled it.
how can I avoid superview to layoutSubviews every time I'm changing transform?
There is no way to bypass this behavior. It's part of UIKit's internal bookkeeping that is required to keep the view hierarchy consistent.
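A quick way to see this cascade for yourself is to override -layoutSubviews in a parent view and log, then animate a child's transform; per the behavior described above, on iOS 6 the log fires repeatedly during the animation. (ObservedView is just a hypothetical name.)

@interface ObservedView : UIView
@end

@implementation ObservedView
- (void)layoutSubviews
{
    [super layoutSubviews];
    // Fires on every layout invalidation that reaches this view.
    NSLog(@"layoutSubviews, bounds = %@", NSStringFromCGRect(self.bounds));
}
@end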
Sounds like an Auto Layout issue. Does the view or any of its subviews use Auto Layout? Auto Layout is nice, but it doesn't seem very fast or efficient, so it may cause issues when animating.
Of course, it may be necessary for the subviews to be laid out at each step of the animation if the size or shape of the view is changing in a way that affects subview placement or size. Consider the animation and what effects it has.

How do I move one UIView during rotation in iOS?

I'm having difficulty figuring out how to deal with different device orientations for one screen of my iPad app. Here's my situation:
~ All of this screen is rotating perfectly using springs and struts, except for one label. The problem with this label is that I want it to move in an unorthodox manner (diagonally), so springs and struts (or resizing masks) will not work.
~ The way that I'm considering doing this is as follows:
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)newInterfaceOrientation duration:(NSTimeInterval)duration
{
    if (UIInterfaceOrientationIsLandscape(newInterfaceOrientation)) {
        self.myScreenLabel.frame = CGRectMake(600, 0, 400, 100);
    } else {
        self.myScreenLabel.frame = CGRectMake(...); // something
    }
}
I would also put a check in viewDidLoad with similar logic. If in portrait mode, put label at... else put label at....
I think that this will work; however, I'm kinda wondering if there's a better-designed way to do this. The method above has hard-coded numbers everywhere; is there a better way? Also, this method does not take advantage of the fact that I have my label positioned perfectly in the storyboard for portrait mode; it's just landscape that I need to change.
Any suggestions on better design?
First question: I've implemented behavior on orientation change with your approach with no bad results, as long as you're okay with the method being triggered right before the rotation happens. Alternatively, you can use NSNotifications to add a trigger on orientation change (note these notifications are only delivered after you call [[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications]):
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(didRotate:) name:UIDeviceOrientationDidChangeNotification object:nil];
Then add a method like:
- (void)didRotate:(NSNotification *)notification
{
    UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
    if (orientation == UIDeviceOrientationLandscapeLeft) {
        // your code here
    }
}
Regarding your second question, positioning frames like that is scary, but I think you realize that. Instead, position the element relative to something else, just like a strut would (a strut effectively says: always keep the widget at a fixed distance from whatever it is strutted against). So use the window's frame, the view's frame, or some other UIView subclass in the view to position the object against, rather than absolute numbers.
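For instance, a sketch of computing the label's landscape frame from the view's bounds instead of from absolute screen coordinates; the 20-point margin and the 400x100 size are just illustrative values:

CGRect bounds = self.view.bounds;
CGFloat margin = 20.0;
// Pin the label to the top-right corner, whatever the view's current size.
self.myScreenLabel.frame = CGRectMake(CGRectGetMaxX(bounds) - 400.0 - margin,
                                      CGRectGetMinY(bounds) + margin,
                                      400.0, 100.0);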
A couple of thoughts:
If iOS 5+, you might want to use viewWillLayoutSubviews, which is probably even more important given the comments in the iOS 6 release notes about modal views and the screen-reorientation methods. This also has the advantage that your code is in one spot. Since I still support pre-5.0 (though I won't for much longer), I actually check the iOS version dynamically and invoke my viewWillLayoutSubviews from the other methods on pre-5.0; otherwise I let viewWillLayoutSubviews just do the heavy lifting.
If your landscape orientation is radically different, you probably want to pursue Creating an Alternate Landscape Interface. I've never done this, but it seems like it's up your alley. There are also postings on SO about using different NIBs for different orientations. Not sure this makes sense in a storyboard environment, though.
For those controls that we occasionally have to move around or resize based upon screen dimensions, I think most of us do it with viewWillLayoutSubviews. That's the entire purpose of that method (though I generally use it for labels whose height changes based upon the data contents and the screen width). I had never stopped to think that there might be another way. If you only have one control that you're moving around as you change your orientation, maybe you could create two additional, hidden controls: one for where you want your visible control in portrait, the other for where you want it in landscape (in IB, you can toggle your view's orientation to facilitate laying out controls). Then, in your viewWillLayoutSubviews, set the frame of your actual visible control to the frame of one of those two hidden controls (depending upon orientation, of course), as in the sketch below. That gets you out of the business of hardcoding frame coordinates in your code and takes advantage of the benefits of IB. This whole suggestion might be too cute by half, but it's an alternative approach if you don't want to go through the effort of my second point.
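A sketch of that hidden-placeholder idea; portraitPlaceholder and landscapePlaceholder are hypothetical outlets to hidden views you've laid out in IB:

- (void)viewWillLayoutSubviews
{
    [super viewWillLayoutSubviews];
    BOOL landscape = UIInterfaceOrientationIsLandscape(self.interfaceOrientation);
    // Snap the visible label onto whichever hidden placeholder matches
    // the current orientation.
    self.myScreenLabel.frame = landscape ? self.landscapePlaceholder.frame
                                         : self.portraitPlaceholder.frame;
}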

Clicking through NSWindow/CALayer

So I'm working on an issue I have when trying to do some simple animation using CAKeyframeAnimation, and I believe my problem is more related to not fully understanding how NSWindow, NSView, and CALayer work together. 
I have two main objects in question: MyContainerWindow (an NSWindow subclass) and MyMovableView (an NSView subclass). My goal is to be able to animate MyMovableView back and forth across the screen, while maintaining the ability to click on anything through MyContainerWindow unless you are clicking on wherever MyMovableView currently is. I am able to accomplish the first part fine, by calling -addAnimation:forKey: on myMovableView.layer, and everything is great except that I can't click through MyContainerWindow. I could make the window smaller, but then the animation would be clipped by the bounds of the window.
Important points: 
1) MyContainerWindow is initialized (via initWithContentRect:styleMask:backing:defer:) with [[NSScreen mainScreen] frame], NSBorderlessWindowMask, buffered backing, and defer:NO
2) I call setWantsLayer:YES on MyMovableView
3) MyContainerWindow is clear, and I want it to behave as if there weren't a window at all; I just need it to give me a larger canvas to animate on.
Is there something obvious I'm missing to be able to click through an NSWindow?
Thanks in advance!
My solution in this scenario was actually to use:
[self setIgnoresMouseEvents:YES];
I originally was hoping to be able to retain the mouse events on the specific CALayer that I'm animating, but upon some further research I understand this comes with the cost of custom drawing everything from scratch, which is not ideal for this particular project.
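Put together, a minimal sketch of a transparent, click-through, screen-sized window, assuming this runs wherever you create MyContainerWindow:

MyContainerWindow *window =
    [[MyContainerWindow alloc] initWithContentRect:[[NSScreen mainScreen] frame]
                                         styleMask:NSBorderlessWindowMask
                                           backing:NSBackingStoreBuffered
                                             defer:NO];
[window setOpaque:NO];
[window setBackgroundColor:[NSColor clearColor]];
// Clicks pass straight through to whatever is behind the window.
[window setIgnoresMouseEvents:YES];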

Objective-C: Trying to implement my own dragging in a UIScrollView, running into tons of issues

I'm trying to override the default behavior in a UITableView (which is in fact a UIScrollView subclass). Basically, my table takes up a third of the screen, and I'd like to be able to drag items from the table to the rest of the screen — both by holding and then dragging, and also by dragging perpendicular to the table. I was able to implement the first technique with a bit of effort using the default UIScrollView touchesShouldBegin/touchesShouldCancel and touchesBegan/Moved/Ended-Cancelled, but the second technique is giving me some serious trouble.
My problem is this: I'd like to be able to detect a drag, but I also want to be able to scroll when not dragging. In order to do this, I have to perform my dragging detection up to and including the point when touchesShouldCancel is called. (This is because touchesShouldCancel is the branching point in which the UIScrollView decides whether to continue passing on touches to its subviews or to scroll.) Unfortunately, UIScrollView's cancellation radius is pretty tiny, and if the user touches a cell and then moves their finger really quickly, only touchesBegan is called. (If the user moves slowly, we usually get a few touchesMoved before touchesShouldCancel is called.) As a result, I have no way of calculating the touch direction, since I only have the one point from touchesBegan.
If I could query a touch at any given instant rather than having to rely on the touch callbacks, I could fix this pretty easily, but as far as I know I can't do that. The problem could also be fixed if I could make the scroll view cancel (and subsequently call touchesShouldCancel) at my discretion, or at least delay the call to touchesShouldCancel, but I can't do that either.
Instead, I've been trying to capture a couple of touchesBegan/Moved calls (2 or 3 at most) in a separate overlay view over the UITableView and then forwarding my touches to the table. That way, my table is guaranteed to already know the dragging direction when touchesShouldCancel is called. You can read about variations on this method here:
http://theexciter.com/articles/touches-and-uiscrollview-inside-a-uitableview.html
http://forums.macrumors.com/showthread.php?t=640508
(Yes, they do things a bit differently, but I think the crux is forwarding touches to the UITableView after pre-processing is done.)
Unfortunately, this doesn't seem to work. Calling my table view with touchesBegan/Moved/Ended-Cancelled doesn't move the table, nor does forwarding them to the table's hitTest view (which by my testing is either the table itself or a table cell). What's more, I checked what the cells' nextResponder is, and it turns out to be the table, so that wouldn't work either. By my understanding, this is because UIScrollView, at some point in the near past, switched over to using gesture recognizers to perform its vital dragging/scrolling detection, and to my knowledge, you can't forward touches as you would normally when gesture recognizers are involved.
Here's another thing: even though gesture recognizers were officially released in 3.2, they're still around in 3.1.3, though you can't use the API. I think UIScrollView is already using them in 3.1.3.
Whew! So here are my questions:
The nextResponder method described in the two links above seems pretty recent. Am I doing something wrong, or has the implementation of UIScrollView really fundamentally changed since then?
Is there any way to forward touches to a class with UIGestureRecognizers, ensuring that the recognizers have a chance to handle the touches?
I can solve the problem by adding my own UIGestureRecognizer that detects the dragging angle to my table view, and then making sure that every gesture recognizer added before that in table.gestureRecognizers depends on mine finishing. (There are 3 default UIScrollView gesture recognizers, I think. A few are private API classes, but all are UIGestureRecognizer subclasses, obviously.) I'm not handling any of the private gesture recognizers by name, but I'm still manipulating them and also using my knowledge of UIScrollView's internals, which aren't documented by Apple. Could my app get rejected for this?
What do I do for 3.1.3? The UIScrollView is apparently already using gesture recognizers, but I can't actually access them because the API is only available in 3.2.
Thank you!
Okay, I finally figured out an answer to my problem. Two answers, actually.
Convoluted solution: subclass UIWindow and override sendEvent: to store the last touch's location. (Overriding sendEvent: is one of the examples given in the Event Handling Guide.) Then the scroll view can query the window for the last touch location when touchesShouldCancel is called.
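A sketch of that window subclass; TouchTrackingWindow and lastTouchLocation are hypothetical names:

@interface TouchTrackingWindow : UIWindow
@property (nonatomic, assign) CGPoint lastTouchLocation;
@end

@implementation TouchTrackingWindow
- (void)sendEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    if (touch) {
        self.lastTouchLocation = [touch locationInView:nil];
    }
    [super sendEvent:event]; // always forward; we only observe
}
@end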
Easier solution: shortly after, I noticed that Facebook's Three20 library was storing UITouches without retaining them. I always thought that you shouldn't keep UITouch objects around beyond local scope, but Apple's docs only explicitly prohibit retention. ("A UITouch object is persistent throughout a multi-touch sequence. You should never retain an UITouch object when handling an event. If you need to keep information about a touch from one phase to another, you should copy that information from the UITouch object.") Therefore, it might be legal to simply store the initial UITouch in the table and query its new position when touchesShouldCancel is called.
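As a sketch of that easier approach, in a UITableView subclass; initialTouch and initialPoint are hypothetical ivars, and this assumes a vertically scrolling table where a mostly-horizontal move means "drag out of the table":

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    initialTouch = [touches anyObject]; // stored, deliberately not retained
    initialPoint = [initialTouch locationInView:self];
    [super touchesBegan:touches withEvent:event];
}

- (BOOL)touchesShouldCancelInContentView:(UIView *)view
{
    CGPoint current = [initialTouch locationInView:self];
    CGFloat dx = fabs(current.x - initialPoint.x);
    CGFloat dy = fabs(current.y - initialPoint.y);
    // Mostly-horizontal movement: return NO so our touches keep flowing
    // to the cell instead of being cancelled in favor of scrolling.
    return dx <= dy;
}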
Unfortunately, in the worst-case scenario, both of these techniques only give me two sample points, which isn't a very accurate measurement of direction. It would be much better if I could simply delay the table's touch processing or call touchesShouldCancel manually, but as far as I can tell doing that is either very hacky or outright impossible/illegal.