Objective-C: Trying to implement my own dragging in a UIScrollView, running into tons of issues

I'm trying to override the default behavior in a UITableView (which is in fact a UIScrollView subclass). Basically, my table takes up a third of the screen, and I'd like to be able to drag items from the table to the rest of the screen — both by holding and then dragging, and also by dragging perpendicular to the table. I was able to implement the first technique with a bit of effort using the default UIScrollView touchesShouldBegin/touchesShouldCancel and touchesBegan/Moved/Ended-Cancelled, but the second technique is giving me some serious trouble.
My problem is this: I'd like to be able to detect a drag, but I also want to be able to scroll when the user isn't dragging. To do this, I have to perform my drag detection up to and including the point when touchesShouldCancel is called. (touchesShouldCancel is the branching point at which the UIScrollView decides whether to keep passing touches on to its subviews or to scroll.) Unfortunately, UIScrollView's cancellation radius is pretty tiny, and if the user touches a cell and then moves their finger really quickly, only touchesBegan is called. (If the user moves slowly, we usually get a few touchesMoved calls before touchesShouldCancel.) As a result, I have no way of calculating the touch direction, since I only have the one point from touchesBegan.
If I could query a touch at any given instant rather than having to rely on the touch callbacks, I could fix this pretty easily, but as far as I know I can't do that. The problem could also be fixed if I could make the scroll view cancel (and subsequently call touchesShouldCancel) at my discretion, or at least delay the call to touchesShouldCancel, but I can't do that either.
Instead, I've been trying to capture the first couple of touchesBegan/Moved calls (2 or 3 at most) in a separate overlay view over the UITableView and then forward my touches to the table. That way, my table is guaranteed to already know the dragging direction by the time touchesShouldCancel is called. You can read about variations on this method here:
http://theexciter.com/articles/touches-and-uiscrollview-inside-a-uitableview.html
http://forums.macrumors.com/showthread.php?t=640508
(Yes, they do things a bit differently, but I think the crux is forwarding touches to the UITableView after pre-processing is done.)
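For concreteness, here's roughly what that overlay approach looks like. This is a sketch with illustrative names (TouchOverlayView and its tableView outlet), not code from either link, and as the next paragraph explains, it no longer works:

    @interface TouchOverlayView : UIView
    @property (nonatomic, assign) UITableView *tableView; // non-retained, pre-ARC style
    @end

    @implementation TouchOverlayView
    @synthesize tableView;

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        // Record the first sample point for direction detection here,
        // then hand the touches to the table manually.
        [self.tableView touchesBegan:touches withEvent:event];
    }

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        [self.tableView touchesMoved:touches withEvent:event];
    }

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        [self.tableView touchesEnded:touches withEvent:event];
    }

    @end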
Unfortunately, this doesn't seem to work. Calling my table view's touchesBegan/Moved/Ended-Cancelled directly doesn't move the table, nor does forwarding the touches to the table's hitTest view (which, by my testing, is either the table itself or a table cell). What's more, I checked the cells' nextResponder, and it turns out to be the table, so that wouldn't work either. My understanding is that UIScrollView, at some point in the recent past, switched over to using gesture recognizers for its vital dragging/scrolling detection, and to my knowledge you can't forward touches the normal way when gesture recognizers are involved.
Here's another thing: even though gesture recognizers were only officially released in 3.2, they're already present in 3.1.3, though you can't use the API. I believe UIScrollView is already using them in 3.1.3.
Whew! So here are my questions:
1. The nextResponder method described in the two links above seems pretty recent. Am I doing something wrong, or has the implementation of UIScrollView really fundamentally changed since then?
2. Is there any way to forward touches to a class that uses UIGestureRecognizers, ensuring that the recognizers have a chance to handle the touches?
3. I can solve the problem by adding to my table view my own UIGestureRecognizer that detects the dragging angle, and then requiring every gesture recognizer already in table.gestureRecognizers to wait for mine to fail (via requireGestureRecognizerToFail:). (There are 3 default UIScrollView gesture recognizers, I think. A few are private API classes, but all are UIGestureRecognizer subclasses, obviously.) I'm not handling any of the private gesture recognizers by name, but I'm still manipulating them, and I'm relying on knowledge of UIScrollView's internals, which Apple doesn't document. Could my app get rejected for this?
4. What do I do for 3.1.3? UIScrollView is apparently already using gesture recognizers there, but I can't actually access them because the API is only available in 3.2.
Thank you!

Okay, I finally figured out an answer to my problem. Two answers, actually.
Convoluted solution: subclass UIWindow and override sendEvent to store the last touch's location. (Overriding sendEvent is one of the examples given in the Event Handling Guide.) The scroll view can then query the window for the last touch location when touchesShouldCancel is called.
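A minimal sketch of that window subclass (the class and property names here are mine, not from the Event Handling Guide):

    @interface TouchTrackingWindow : UIWindow
    @property (nonatomic, assign) CGPoint lastTouchLocation;
    @end

    @implementation TouchTrackingWindow
    @synthesize lastTouchLocation;

    - (void)sendEvent:(UIEvent *)event {
        // Remember where the most recent touch was before dispatching the
        // event normally; any view can then ask the window for this point.
        UITouch *touch = [[event allTouches] anyObject];
        if (touch) {
            self.lastTouchLocation = [touch locationInView:self];
        }
        [super sendEvent:event];
    }

    @end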
Easier solution: shortly after, I noticed that Facebook's Three20 library was storing UITouches without retaining them. I always thought that you shouldn't keep UITouch objects around beyond local scope, but Apple's docs only explicitly prohibit retention. ("A UITouch object is persistent throughout a multi-touch sequence. You should never retain an UITouch object when handling an event. If you need to keep information about a touch from one phase to another, you should copy that information from the UITouch object.") Therefore, it might be legal to simply store the initial UITouch in the table and query its new position when touchesShouldCancel is called.
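If that's allowed, the table-side sketch might look like this. DraggableTableView and the direction test are illustrative; touchesShouldCancelInContentView: is the real UIScrollView override point:

    @interface DraggableTableView : UITableView {
        UITouch *_initialTouch;   // stored but deliberately not retained
        CGPoint _initialLocation;
    }
    @end

    @implementation DraggableTableView

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        _initialTouch = [touches anyObject];
        // Window coordinates, so the table's own scrolling can't skew the math.
        _initialLocation = [_initialTouch locationInView:nil];
        [super touchesBegan:touches withEvent:event];
    }

    - (BOOL)touchesShouldCancelInContentView:(UIView *)view {
        // The UITouch object is updated in place, so its location reflects the
        // finger's current position even if no touchesMoved has arrived yet.
        CGPoint current = [_initialTouch locationInView:nil];
        CGFloat dx = current.x - _initialLocation.x;
        CGFloat dy = current.y - _initialLocation.y;
        if (fabs(dx) > fabs(dy)) {
            return NO;  // mostly horizontal: treat as a drag out of the table
        }
        return [super touchesShouldCancelInContentView:view];
    }

    @end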
Unfortunately, in the worst case scenario, both of these techniques only give me 2 sample points, which isn't a very accurate measurement of direction. It would be much better if I could simply delay the table's touch processing or call touchesShouldCancel manually, but as far as I can tell it's either very hacky or outright impossible/illegal to do that.

Related

Following touch / using gesture recognizer translation on custom interactive transitioning

I created a custom transition for a navigation controller where, as the user pans up, the next controller's view is revealed below while the current controller's view moves upward. I want that view to move by following the touch (as if it were glued to the finger at the touch point), but I don't know how to pass that translation from the pan gesture recognizer to the object that implements UIViewControllerAnimatedTransitioning. Well, I do, but I cannot access it from inside the [UIView animateWithDuration ...] block. (It seems that block is executed once; I thought it would be executed as the percentage of completion changes.) How can I accomplish this?
To ask the question a different way: in the Photos app in iOS 7, when you are looking at a photo, touch with two fingers and pinch/move, and you will see that the photo follows your finger movements. Is there any example code for this?
You'll need to create a separate animation controller as a subclass of UIPercentDrivenInteractiveTransition to go along with your custom transition animation. This is the class that will calculate the percentage of how complete your animation is. There's too much to explain in a single SO answer, but have a look at the docs here. You can also refer to one of my implementations of a custom transition animation with interactive abilities here to see it in action.
Croberth's answer is correct. You actually have two choices.
If you want to keep your custom animation, then use a UIPercentDrivenInteractiveTransition and keep updating it as the gesture proceeds, as in this example of mine:
https://github.com/mattneub/Programming-iOS-Book-Examples/blob/master/bk2ch06p296customAnimation2/ch19p620customAnimation1/AppDelegate.m
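The core of that approach, sketched under iOS 7 assumptions. Here self.interactor is an assumed property that your navigation controller delegate must also return from -navigationController:interactionControllerForAnimationController:, and NextViewController is a placeholder:

    - (void)handlePan:(UIPanGestureRecognizer *)pan {
        CGFloat translation = [pan translationInView:pan.view].y;
        CGFloat percent = MAX(0.0, MIN(1.0, -translation / pan.view.bounds.size.height));

        switch (pan.state) {
            case UIGestureRecognizerStateBegan:
                self.interactor = [[UIPercentDrivenInteractiveTransition alloc] init];
                [self.navigationController pushViewController:[NextViewController new] animated:YES];
                break;
            case UIGestureRecognizerStateChanged:
                // This is what makes the view track the finger: the transition's
                // progress is updated continuously as the gesture moves.
                [self.interactor updateInteractiveTransition:percent];
                break;
            case UIGestureRecognizerStateEnded:
                if (percent > 0.5) [self.interactor finishInteractiveTransition];
                else               [self.interactor cancelInteractiveTransition];
                self.interactor = nil;
                break;
            default:
                [self.interactor cancelInteractiveTransition];
                self.interactor = nil;
                break;
        }
    }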
However, I prefer to split the controller into two separate cases: if we are interactive (using a gesture), I just keep updating the view positions myself, manually, as the gesture proceeds, including completing or reversing it at the end, as in this code:
https://github.com/mattneub/Programming-iOS-Book-Examples/blob/master/bk2ch06p300customAnimation3/ch19p620customAnimation1/AppDelegate.m

Using IBAction Instead of touchesBegan:

Is there any real difference between setting an IBAction to handle your touch event vs. using touchesBegan:, touchesMoved:, etc.? What considerations would cause one to be preferred over the other?
Accessibility
If by IBAction you mean attaching to control events like UIControlEventTouchUpInside, there is quite a bit of "magic" attached to control events that would take some work to duplicate with raw touch events.
Most obviously, if you touch a UIButton, then drag a short distance off the button before releasing, the button still sends its UIControlEventTouchUpInside event. The distance was chosen through usability experiments: how far can someone's finger slip while they still think they're pressing the button?
I suspect that using control events will also make it easier for iOS 6 Guided Access and other accessibility aids to understand your app.
Separating the View from the Model
Using control events means that the View doesn't need to know what effect it has when tapped. This is considered a good thing in the Model-View-Controller paradigm. Ideally your Controller will receive the event, and update the Model to suit.
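For example, a minimal sketch (the names here are hypothetical): the button reports only a control event, and the Controller decides what it means:

    - (void)viewDidLoad {
        [super viewDidLoad];
        [self.saveButton addTarget:self
                            action:@selector(saveTapped:)
                  forControlEvents:UIControlEventTouchUpInside];
    }

    - (void)saveTapped:(UIButton *)sender {
        // The Controller updates the Model; the View never touches it.
        [self.document save];
    }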
Conclusion
If you can implement your interaction with control events, it's better to do so.
Sometimes your control needs complex interaction, though. If you're implementing something like finger painting with multi-touch, you're going to need to know exactly where and when touches happen. In that case, implement your interaction with touchesBegan, touchesMoved, touchesEnded and touchesCancelled.
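A bare-bones sketch of that case (the stroke-recording helper is hypothetical):

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        for (UITouch *touch in touches) {
            CGPoint previous = [touch previousLocationInView:self];
            CGPoint current  = [touch locationInView:self];
            [self addStrokeSegmentFrom:previous to:current]; // hypothetical helper
        }
        [self setNeedsDisplay];
    }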

Core Graphics: drawRect: not getting called frequently enough

In my application, I have a UIViewController with a subclassed UIView (and several other elements) inside of it. Inside the UIView, called DrawView, my drawRect: method draws a table-grid type thing and plots an array of CGPoints on the grid. When the user taps the screen, touchesBegan:withEvent: finds the closest grid point to the touch, adds a point to the array that drawRect: draws from, and calls [self setNeedsDisplay]. As the user moves their finger around the screen, it checks whether the point has changed from the last location, then updates the point and calls [self setNeedsDisplay] as necessary.
This works great in the Simulator. However, on a real iPhone it runs very slowly; as you move your finger around, the dot lags behind. I have read that running the calculations for where to place the points on a different thread can improve performance. Does anyone with experience know this for a fact? Any other suggestions to reduce lag?
Any other suggestions to reduce lag?
Yes: don't use -drawRect:. The reasons are long and complicated, but basically, when UIKit sees that you've implemented -drawRect: in your UIView subclass, rendering goes through a really slow software-based rendering path. When you draw with CALayer objects and composited views, you can get hardware-accelerated graphics, which can make your app far more performant.
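As a hedged illustration of the layer-based alternative (dotLayer and nearestGridPointTo: are assumed to exist in your view), moving a CALayer avoids -drawRect: entirely for the one part that changes every frame:

    #import <QuartzCore/QuartzCore.h>

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        CGPoint p = [[touches anyObject] locationInView:self];
        [CATransaction begin];
        [CATransaction setDisableActions:YES]; // no implicit animation: instant tracking
        self.dotLayer.position = [self nearestGridPointTo:p]; // hypothetical helper
        [CATransaction commit];
    }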

How quickly does the iPad respond to touches?

I'm talking about how much time can be expected to elapse between the user touching the screen and something like touchesBegan being called. (Or something lower-level, if such a thing is available.) Sub-millisecond? Multiple milliseconds? Tens?
I'm pretty sure touchesBegan is called very quickly (meaning, with whatever minimal delay exists in the event path). I've noticed in my code that I get a single touchesBegan for a two-fingered touch with both touches already in the list. I assume that I don't touch both fingers down together very precisely but from what I've seen the touchesBegan event is delivered within a few milliseconds. I suspect Apple holds the touches very briefly before passing them along in order to batch them for more efficient handling, possibly also using that delay to filter out accidental touches (sort of like de-bouncing a mechanical switch).
I use a touchesBegan in my view to freeze my scrolling animations and my perception is that the scrolling stops immediately when I touch the screen.
The real delays come from the gesture recognizers. They often have to wait to see if you've moved far enough to signify a pan or if you've held long enough to be holding or if you've released without dragging to signify a tap. Those delays can be substantial of course, though they're still only a fraction of a second in my experience.

Any "fundamentals-oriented" example of NSScroller out there?

I'm looking for some kind of a basic, straightforward example of how to work with a pair of NSScrollers in an NSScrollView with a custom NSView.
There are sporadic examples out there, largely consisting of contrived examples with programmatically created interfaces, or based on the assumption that the developer is working with a typical image view or text view. Even the Sketch example is based on an NSView that uses the Page Setup from the Print dialog for its bounds, with everything managed by Cocoa. So there's no real discussion or example anywhere of how to make it all work with a custom Model (though that may be part of the problem, because what does one base the Model on?). Even Apple's own documentation is dodgy here.
Essentially, I have a subclassed NSView embedded in an NSScrollView (per the Scroll View Guide), in which a user can click to create, edit and delete objects, much like in an illustration program. The Model is those objects, which are just data wrappers that record their positions for drawRect: to use. The height and width are based on custom values that are translated into pixels as needed.
My problem is that all of the examples I have found are based on either a text editor or an image viewer, or use the standard document sizes from the Page Setup dialog. Because these are common document types, Cocoa basically manages them for the developer, so the interaction code is more or less hidden (or I'm just not seeing it for what it is). My project doesn't fit any of those molds, and I have no need for printing. Thrusting my Model into the documentView property wouldn't work.
I'm just looking for a simple example on how to initialize the NSScrollers with a custom, object-oriented Model (the documentView), and handle scrolling and updating based on user action, such as when the user drags a smattering of objects off to the left or down or the window gets resized. I think I'm close to getting it all together, but I'm missing the jumping off point that ties the controls to document.
(Not that it matters in a Cocoa question, but when I did this in REALbasic, I would simply calculate and apply the MaxX, MaxY to a ScrollBar's Maximum value based on user actions, watch the position in the ScrollBar when the user clicks, and draw as needed. NSScrollers in the NSScrollView context aren't nearly as obvious to me, it seems.)
I appreciate the time taken by everyone, but I'm updating with more information in the hopes of getting an answer I can use. I'm sorry, but none of this is making sense; Apple's documents are obtuse, and perhaps I'm missing something painfully obvious here...
I have an array of objects sitting in a subclassed NSDocument; they are data holders that tell drawRect what and where to draw. This is straight from the Sketch example. Sketch uses the document sizes from the Page Setup dialog, so there's nothing to show here. I'm fine with Cocoa handling the state of the scroll bars, but how do I hook the scroll view up to see the editor's initial state held in the NSDocument, and updates to those objects as the user edits? Do I calculate my own NSRect and pass that to the NSScrollView? Where and how? Do I do this in my custom NSView (which is embedded in the NSScrollView) or in my NSDocument's init? The NSScrollView isn't created programmatically (there's no easy way of doing that), so it's all sitting in Interface Builder waiting to be hooked up. I'm missing the hook-up bit.
Perhaps I'm wearing my "I don't get it" cap this week, but this can't be this difficult. Illustration apps, MIDI Editors, and countless other similar custom apps do this all the time.
SOLVED (mostly):
I think I have this sorted out now, though it's probably not the best implementation.
My document class now has an NSRect DocumentRect property that looks at all of its objects and gives back a new NSRect based on their locations. I call it in my subclassed NSView's mouse event handlers with
[self setFrame:[[self EditorDocument] DocumentRect]];
This updates the size of the View based on user interaction, and the window now handles the scrolling where it didn't before. At this point I'm figuring out how to get the frame to expand while dragging, but at least I now have the fundamental concept I was missing.
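For reference, the DocumentRect property amounts to unioning the bounds of every model object (SketchObject and the objects array are illustrative names):

    - (NSRect)DocumentRect {
        NSRect rect = NSZeroRect;
        for (SketchObject *object in [self objects]) {
            rect = NSUnionRect(rect, [object bounds]);
        }
        return rect;
    }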
The answer given pointed me in the direction I needed to go (documentView requires a view, which led me to look at the NSView class), so Peter gets the credit. Thanks so much for the help.
The document view isn't a model, it's a view. That's why it's called the document view.
The reason there are so few examples on working with NSScrollers directly is because you normally don't. You work with NSScrollView and let it handle the scrollers for you.
All you need to do is make a view big enough to show the entire model, then set that as the document view of the scroll view. From there, it should Just Work. You don't need to manage any of the scrolling-related numbers yourself; Cocoa handles them for you.
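A minimal sketch of that setup, assuming a scrollView outlet from the nib and a hypothetical CanvasView sized to fit the model:

    - (void)windowControllerDidLoadNib:(NSWindowController *)controller {
        [super windowControllerDidLoadNib:controller];
        CanvasView *canvas = [[CanvasView alloc] initWithFrame:[self DocumentRect]];
        [scrollView setDocumentView:canvas]; // Cocoa wires up the scrollers itself
    }

From then on, growing the model just means growing the document view's frame; the scrollers update automatically.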
For details, see the Scroll View Programming Guide.