Core Graphics: drawRect: not getting called frequently enough (Objective-C)

In my application, I have a UIViewController with a subclassed UIView (and several other elements) inside of it. In the UIView subclass, called DrawView, my drawRect: method draws a table-like grid and plots an array of CGPoints on it. When the user taps the screen, touchesBegan:withEvent: finds the closest grid point to the touch, adds a point to the array that drawRect: draws from, and calls [self setNeedsDisplay]. As the user moves their finger around the screen, it checks whether the point changed from the last location, then updates the point and calls [self setNeedsDisplay] as necessary.
This works great in the Simulator. However, on a real iPhone it runs very slowly: as you move your finger around, the dot lags behind. I have read that running the calculations for where to place the points on a different thread can improve performance. Does anyone know from experience whether that is actually the case? Any other suggestions to reduce lag?

Any other suggestions to reduce lag?
Yes: don't use -drawRect:. The reasons are long and complicated, but in short, when UIKit sees that you've implemented -drawRect: in your UIView subclass, rendering goes through the slow, software-based rendering path. When you draw with CALayer objects and composited views instead, you get hardware-accelerated graphics, which can make your app far more performant.
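For example, instead of replotting everything in -drawRect: on every touch, you could render the marker into its own small layer once and then simply move that layer as the finger moves. A rough sketch, assuming a dotLayer property and a closestGridPointTo: snapping helper on the view (both hypothetical):

#import <QuartzCore/QuartzCore.h>

// One-time setup (e.g. in the view's initializer): a small layer for the dot.
- (void)setUpDotLayer {
    CALayer *dot = [CALayer layer];
    dot.bounds = CGRectMake(0, 0, 10, 10);
    dot.cornerRadius = 5.0;
    dot.backgroundColor = [UIColor redColor].CGColor;
    [self.layer addSublayer:dot];
    self.dotLayer = dot;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint snapped = [self closestGridPointTo:[[touches anyObject] locationInView:self]];

    // Just move the layer; nothing is redrawn, so -drawRect: never runs.
    [CATransaction begin];
    [CATransaction setDisableActions:YES];   // suppress the implicit 0.25s animation
    self.dotLayer.position = snapped;
    [CATransaction commit];
}

The grid itself can live in a layer (or view) of its own that is drawn once and never invalidated while the finger moves.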


setNeedsDisplay does not trigger drawRect in subviews as expected

I'm struggling with setNeedsDisplay. I thought it was supposed to trigger calls of drawRect: for the view it is called on and for the hierarchy below it (as long as it lies within the view's bounds), but I'm not finding that to be the case. Here is my setup:
From the application delegate, I create a view whose size is a square that covers essentially the whole screen real estate. This view is called TrollCalendarView. There is not much that happens with TrollCalendarView except for a rotation triggered by the compass.
There are 7 subviews of TrollCalendarView called PlatformView intended to contain 2D draw objects arranged around the center of TrollCalendarView in a 7-sided arrangement. So when the iPad is rotated, these 7 views rotate such that they are always oriented with the cardinal directions.
Each of the PlatformView subviews contains 3 subviews called Tower. Each tower contains 2D draw objects implemented in drawRect:.
So, in summary, I have TrollCalendarView with an empty drawRect:, and subviews PlatformView and PlatformView -> Tower that each have drawRect: implementations. Additionally, Tower lies within the bounds of PlatformView, and PlatformView lies within the bounds of TrollCalendarView.
In TrollCalendarView I've added a swipe recognizer. When a swipe happens, a property is updated and I call [self setNeedsDisplay], but nothing seems to happen. I added NSLog entries to the drawRect: method in each of these views, and only TrollCalendarView's drawRect: is called. Ironically, that is the one view whose drawRect: is empty.
There is no xib file.
What do I need to do to ensure the drawRect method in the other subviews is called? Is there documentation somewhere that describes all the nuances that could affect this?
I'm struggling with setNeedsDisplay. I thought it was supposed to trigger calls of drawRect for the view for which it is called and the hierarchy below that if it's within the view's bounds
No, that is not the case. Where did you get that idea?
-setNeedsDisplay applies only to the view it is sent to. If you need to invalidate other views, you have to add some code to send -setNeedsDisplay to them, too. That's all there is to it.
I think this is an optimization in the framework; if your subviews don't need to draw again, this is a major performance win. Keep in mind that almost anything animatable (moving, scaling, etc.) does not require drawRect:.
If you know that all of your subviews should be redrawn (and not simply moved), then override setNeedsDisplay in your main view like this:
- (void)setNeedsDisplay {
    // Forward the invalidation to every subview, then invalidate this view too.
    [self.subviews makeObjectsPerformSelector:@selector(setNeedsDisplay)];
    [super setNeedsDisplay];
}
I have tested this, and it causes all subviews to be redrawn as well. Please note that you will earn efficiency karma points if you filter your subviews and only send that message to the ones that actually need to be redrawn... and even more if you can figure out how not to need to redraw them at all. :-)
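If you do want that filtering, one rough sketch of the idea (not part of the original answer) is to forward setNeedsDisplayInRect: only to the subviews whose frames actually intersect the dirty rectangle:

- (void)setNeedsDisplayInRect:(CGRect)rect {
    for (UIView *subview in self.subviews) {
        if (CGRectIntersectsRect(subview.frame, rect)) {
            // Convert the dirty rect into the subview's coordinate space first.
            [subview setNeedsDisplayInRect:[self convertRect:rect toView:subview]];
        }
    }
    [super setNeedsDisplayInRect:rect];
}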

Clicking through NSWindow/CALayer

So I'm working on an issue I have when trying to do some simple animation using CAKeyframeAnimation, and I believe my problem is more related to not fully understanding how NSWindow, NSView, and CALayer work together. 
I have two main objects in question: MyContainerWindow (an NSWindow subclass) and MyMovableView (an NSView subclass). My goal is to animate MyMovableView back and forth across the screen while maintaining the ability to click through MyContainerWindow onto anything beneath it, unless you are clicking wherever MyMovableView currently is. I can accomplish the first part fine by calling -addAnimation:forKey: on myMovableView.layer, and everything is great except that I can't click through MyContainerWindow. I could make the window smaller, but then the animation would be clipped by the bounds of the window.
Important points: 
1) MyContainerWindow is initialized with a content rect of [[NSScreen mainScreen] frame], style mask NSBorderlessWindowMask, backing NSBackingStoreBuffered, and defer:NO
2) I call setWantsLayer:YES on MyMovableView
3) MyContainerWindow is clear, and I want it to behave as if there wasn't a window at all, but I need it so I have a larger canvas to animate on.
Is there something obvious I'm missing to be able to click through an NSWindow?
Thanks in advance!
My solution in this scenario was actually to use:
[self setIgnoresMouseEvents:YES];
I originally was hoping to be able to retain the mouse events on the specific CALayer that I'm animating, but upon some further research I understand this comes with the cost of custom drawing everything from scratch, which is not ideal for this particular project.
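For reference, a minimal sketch of the click-through window setup described above (the class name comes from the question; the rest is assumed):

NSRect screenFrame = [[NSScreen mainScreen] frame];
MyContainerWindow *window =
    [[MyContainerWindow alloc] initWithContentRect:screenFrame
                                          styleMask:NSBorderlessWindowMask
                                            backing:NSBackingStoreBuffered
                                              defer:NO];
[window setOpaque:NO];
[window setBackgroundColor:[NSColor clearColor]];
[window setIgnoresMouseEvents:YES];   // clicks fall through to whatever is underneath
[window orderFront:nil];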

Objective-C: Trying to implement my own dragging in a UIScrollView, running into tons of issues

I'm trying to override the default behavior in a UITableView (which is in fact a UIScrollView subclass). Basically, my table takes up a third of the screen, and I'd like to be able to drag items from the table to the rest of the screen — both by holding and then dragging, and also by dragging perpendicular to the table. I was able to implement the first technique with a bit of effort using the default UIScrollView touchesShouldBegin/touchesShouldCancel and touchesBegan/Moved/Ended-Cancelled, but the second technique is giving me some serious trouble.
My problem is this: I'd like to be able to detect a drag, but I also want to be able to scroll when not dragging. In order to do this, I have to perform my dragging detection up to and including the point when touchesShouldCancel is called. (This is because touchesShouldCancel is the branching point in which the UIScrollView decides whether to continue passing on touches to its subviews or to scroll.) Unfortunately, UIScrollView's cancellation radius is pretty tiny, and if the user touches a cell and then moves their finger really quickly, only touchesBegan is called. (If the user moves slowly, we usually get a few touchesMoved before touchesShouldCancel is called.) As a result, I have no way of calculating the touch direction, since I only have the one point from touchesBegan.
If I could query a touch at any given instant rather than having to rely on the touch callbacks, I could fix this pretty easily, but as far as I know I can't do that. The problem could also be fixed if I could make the scroll view cancel (and subsequently call touchesShouldCancel) at my discretion, or at least delay the call to touchesShouldCancel, but I can't do that either.
Instead, I've been trying to capture a couple of touchesBegan/Moved calls (2 or 3 at most) in a separate overlay view over the UITableView and then forwarding my touches to the table. That way, my table is guaranteed to already know the dragging direction when touchesShouldCancel is called. You can read about variations on this method here:
http://theexciter.com/articles/touches-and-uiscrollview-inside-a-uitableview.html
http://forums.macrumors.com/showthread.php?t=640508
(Yes, they do things a bit differently, but I think the crux is forwarding touches to the UITableView after pre-processing is done.)
Unfortunately, this doesn't seem to work. Calling my table view with touchesBegan/Moved/Ended-Cancelled doesn't move the table, nor does forwarding them to the table's hitTest view (which by my testing is either the table itself or a table cell). What's more, I checked what the cells' nextResponder is, and it turns out to be the table, so that wouldn't work either. By my understanding, this is because UIScrollView, at some point in the near past, switched over to using gesture recognizers to perform its vital dragging/scrolling detection, and to my knowledge, you can't forward touches as you would normally when gesture recognizers are involved.
Here's another thing: even though gesture recognizers were officially released in 3.2, they're still around in 3.1.3, though you can't use the API. I think UIScrollView is already using them in 3.1.3.
Whew! So here are my questions:
The nextResponder method described in the two links above seems pretty recent. Am I doing something wrong, or has the implementation of UIScrollView really fundamentally changed since then?
Is there any way to forward touches to a class with UIGestureRecognizers, ensuring that the recognizers have a chance to handle the touches?
I can solve the problem by adding to my table view my own UIGestureRecognizer that detects the dragging angle, and then making sure that every gesture recognizer added before it in table.gestureRecognizers depends on mine finishing. (There are 3 default UIScrollView gesture recognizers, I think. A few are private API classes, but all are UIGestureRecognizer subclasses, obviously.) I'm not handling any of the private gesture recognizers by name, but I'm still manipulating them and also relying on knowledge of UIScrollView's internals, which aren't documented by Apple. Could my app get rejected for this?
What do I do for 3.1.3? The UIScrollView is apparently already using gesture recognizers, but I can't actually access them because the API is only available in 3.2.
Thank you!
Okay, I finally figured out an answer to my problem. Two answers, actually.
Convoluted solution: subclass UIWindow and override sendEvent: to store the last touch's location. (Overriding sendEvent: is one of the examples given in the Event Handling Guide.) Then the scroll view can query the window for the last touch when touchesShouldCancel is called.
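A rough sketch of that approach; the subclass and property names are made up for illustration:

// Hypothetical UIWindow subclass that records where the most recent touch was,
// so other objects (such as the scroll view) can query it later.
@interface TouchRecordingWindow : UIWindow
@property (nonatomic, assign) CGPoint lastTouchLocation;
@end

@implementation TouchRecordingWindow

- (void)sendEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    if (touch) {
        self.lastTouchLocation = [touch locationInView:self];
    }
    [super sendEvent:event];   // always forward the event as usual
}

@end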
Easier solution: shortly after, I noticed that Facebook's Three20 library was storing UITouches without retaining them. I always thought that you shouldn't keep UITouch objects around beyond local scope, but Apple's docs only explicitly prohibit retention. ("A UITouch object is persistent throughout a multi-touch sequence. You should never retain an UITouch object when handling an event. If you need to keep information about a touch from one phase to another, you should copy that information from the UITouch object.") Therefore, it might be legal to simply store the initial UITouch in the table and query its new position when touchesShouldCancel is called.
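A sketch of that second idea, assuming a vertically scrolling table so that a mostly horizontal move is treated as a drag out of the table (the subclass name and thresholding are made up):

@interface DraggableTableView : UITableView {
    UITouch *_initialTouch;     // deliberately NOT retained, per Apple's docs
    CGPoint  _initialLocation;  // where that touch started
}
@end

@implementation DraggableTableView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    _initialTouch = [touches anyObject];
    _initialLocation = [_initialTouch locationInView:self];
    [super touchesBegan:touches withEvent:event];
}

- (BOOL)touchesShouldCancelInContentView:(UIView *)view {
    // Ask the stored touch where it is now and compare with where it started;
    // a mostly horizontal move is a drag, anything else is an ordinary scroll.
    CGPoint now = [_initialTouch locationInView:self];
    BOOL horizontalDrag = fabs(now.x - _initialLocation.x) > fabs(now.y - _initialLocation.y);
    return !horizontalDrag;   // YES = cancel content touches and scroll instead
}

@end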
Unfortunately, in the worst case scenario, both of these techniques only give me 2 sample points, which isn't a very accurate measurement of direction. It would be much better if I could simply delay the table's touch processing or call touchesShouldCancel manually, but as far as I can tell it's either very hacky or outright impossible/illegal to do that.

Looking for guidance on detecting overlap of UIImageViews

The scenario is that I have a UIViewController containing multiple "InteractiveUIImageViews" (inherited from UIImageView), each containing its own UIImage. In InteractiveUIImageView I have implemented touchesBegan, touchesMoved and touchesEnded to handle their movement and behaviour on screen. Certain objects of this type will be set as 'containers' (think recycle bin), the objective being that when one image is dragged onto it, it will be removed from the screen and placed inside it, to be potentially retrieved later.
My current thinking is to call a new method on the UIViewController from the touchesEnded method of my InteractiveUIImageView, but being new to all this I'm not really sure how to go about that (i.e. calling a method on the 'parent'), or indeed whether this is the best way to achieve what I want.
Thanks in advance for any advice.
I'm afraid your question is (to me at least) a bit unclear. I gather that you are trying to drag a UIImageView around a scene and drop it into drop locations.
What is unclear is your class hierarchy. I believe you are going about it the wrong way: you shouldn't have to subclass UIImageView at all.
Instead I would urge you to let the UIViewController manage the movement of the images. When you touch down on an image, you also touch down on its parent (containing) view.
What you then have to do is reposition the UIImageView (all handled by the UIViewController) as you drag it across the screen. When you let go, check on touch up whether your finger was inside your drop zone.
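A rough sketch of that approach; imageViews, draggedImageView and dropZoneView are hypothetical properties on the view controller:

// In the UIViewController subclass. User interaction is assumed to be disabled
// on the image views themselves, so touches reach the controller's view.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self.view];
    for (UIImageView *imageView in self.imageViews) {
        if (CGRectContainsPoint(imageView.frame, point)) {
            self.draggedImageView = imageView;   // remember which image was grabbed
            break;
        }
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Keep the grabbed image view under the finger.
    self.draggedImageView.center = [[touches anyObject] locationInView:self.view];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // On touch up, check for overlap with the 'container' and stash the image.
    if (self.draggedImageView &&
        CGRectIntersectsRect(self.draggedImageView.frame, self.dropZoneView.frame)) {
        self.draggedImageView.hidden = YES;
        // ...store a reference so the image can be retrieved later...
    }
    self.draggedImageView = nil;
}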

Is there an optimal way to render images in Cocoa? I'm using setNeedsDisplay

Currently, any time I manually move a UIImage (by handling the touchesMoved event), the last thing I call in that event is [self setNeedsDisplay], which effectively redraws the entire view.
My images are also animated, so every time a frame of animation changes, I have to call setNeedsDisplay.
I find this horrific, since I don't expect the iPhone/Cocoa to be able to perform such frequent full-screen redraws very quickly.
Is there an optimal, more efficient way that I could be doing this?
Perhaps somehow telling Cocoa to update only a particular region of the screen (the rect occupied by the image)?
setNeedsDisplayInRect: does exactly what you need.
See documentation at developer.apple.com
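For example, a minimal sketch inside the view that does the drawing, assuming an imageFrame property that tracks the image's current rect (hypothetical bookkeeping):

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self];

    CGRect oldFrame = self.imageFrame;
    CGRect newFrame = oldFrame;
    newFrame.origin.x = point.x - newFrame.size.width  / 2.0;
    newFrame.origin.y = point.y - newFrame.size.height / 2.0;
    self.imageFrame = newFrame;

    // Invalidate only the union of the old and new rects, not the whole view.
    [self setNeedsDisplayInRect:CGRectUnion(oldFrame, newFrame)];
}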