Following the touch / using the gesture recognizer's translation in a custom interactive transition - objective-c

I created a custom transition for a navigation controller where, as the user pans up, the next controller's view is revealed below while the current controller's view moves upward. I want that view to move by following the touch (as if it were glued to the finger at the touch point), but I don't know how to pass the translation from the pan gesture recognizer to the object that implements UIViewControllerAnimatedTransitioning. Well, I do, but I cannot access it from inside the [UIView animateWithDuration:...] block (it seems that block is executed only once; I thought it would be executed repeatedly as the percentage of completion changes). How can I accomplish this?
To ask the question in a different way: in the Photos app on iOS 7, when you are looking at a photo, touch with two fingers and pinch/move, and you will see that it follows the finger movements. Is there any example code for this?

You'll need to create a separate interaction controller as a subclass of UIPercentDrivenInteractiveTransition to go along with your custom transition animation. This is the class that calculates the percentage of how complete your animation is. There's too much to explain in a single SO answer, but have a look at the docs here. You can also refer to one of my implementations of a custom transition animation with interactive abilities here to see it in action.
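For concreteness, here is a minimal sketch of a pan handler feeding a UIPercentDrivenInteractiveTransition (the interactionController and nextViewController properties are placeholder names). Note that the animation block in animateTransition: really does run only once; the percent-driven object works by scrubbing that already-committed animation, so the gesture's translation reaches the animation through updateInteractiveTransition: rather than through the block:

// Pan handler on the current view controller. Assumes the navigation
// controller's delegate returns self.interactionController from
// -navigationController:interactionControllerForAnimationController:.
- (void)handlePan:(UIPanGestureRecognizer *)pan
{
    CGFloat translation = -[pan translationInView:pan.view].y; // upward pan
    CGFloat percent = MAX(0.0, MIN(1.0, translation / pan.view.bounds.size.height));

    switch (pan.state) {
        case UIGestureRecognizerStateBegan:
            self.interactionController = [[UIPercentDrivenInteractiveTransition alloc] init];
            // Kick off the transition; nextViewController is hypothetical.
            [self.navigationController pushViewController:self.nextViewController animated:YES];
            break;
        case UIGestureRecognizerStateChanged:
            // Glue the view to the finger: scrub the animation to the
            // fraction of the screen the finger has travelled.
            [self.interactionController updateInteractiveTransition:percent];
            break;
        case UIGestureRecognizerStateEnded:
            if (percent > 0.5) {
                [self.interactionController finishInteractiveTransition];
            } else {
                [self.interactionController cancelInteractiveTransition];
            }
            self.interactionController = nil;
            break;
        default: // cancelled or failed
            [self.interactionController cancelInteractiveTransition];
            self.interactionController = nil;
            break;
    }
}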

Croberth's answer is correct. You actually have two choices.
If you want to keep your custom animation, then use a UIPercentDrivenInteractiveTransition and keep updating it as the gesture proceeds, as in this example of mine:
https://github.com/mattneub/Programming-iOS-Book-Examples/blob/master/bk2ch06p296customAnimation2/ch19p620customAnimation1/AppDelegate.m
However, I prefer to split the controller up into two separate cases: if we are interactive (using a gesture), then I just keep updating the view positions myself, manually, as the gesture proceeds, including completing or reversing it at the end, as in this code:
https://github.com/mattneub/Programming-iOS-Book-Examples/blob/master/bk2ch06p300customAnimation3/ch19p620customAnimation1/AppDelegate.m
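With either approach, the wiring point is the same delegate hook; a minimal sketch (the interacting flag and property names are assumptions):

// Return the percent-driven object only while a gesture is in flight;
// returning nil makes the same animation run non-interactively.
- (id<UIViewControllerInteractiveTransitioning>)navigationController:(UINavigationController *)navigationController
                         interactionControllerForAnimationController:(id<UIViewControllerAnimatedTransitioning>)animationController
{
    return self.interacting ? self.interactionController : nil;
}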

iOS 7 Weather App-Like Transition/Animations

I would like to implement an iOS Weather app-like transition: a list view where tapping a list item expands it to a detail view, and pinching a list item also expands it to the detail view, plus slide left and right transitions. Please let me know how I can implement that.
Thanks in advance.
Here is a blog post I found that explains Apple's new transitioning API in iOS 7; go through it and read it.
In short, here are the steps:
1 - Set a transition delegate on a controller
There are 3 types of transitions you might want to customise:
UINavigationController push & pop transitions
UITabBarController tab-change transitions
any modal presentation with presentViewController:animated:completion:
Each of these 3 cases offers its own 'transition delegate' protocol:
UINavigationControllerDelegate
UITabBarControllerDelegate
UIViewControllerTransitioningDelegate
When, from somewhere in your code, you use the presentation methods:
pushViewController:animated: or popViewControllerAnimated:
setViewControllers:animated:
presentViewController:animated:completion:
then these delegates are asked for what I call an 'animator' if an animation is required.
What I'm calling an 'animator' is an object conforming to the protocol <UIViewControllerAnimatedTransitioning> (or <UIViewControllerInteractiveTransitioning> in the case of an interactive, gesture-driven transition). This decouples the animation from your UIViewControllers (which might already have plenty of code inside them).
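For instance, with a navigation controller the delegate hook looks like this (PushAnimator is a hypothetical animator class, sketched under step 2 below):

// UINavigationControllerDelegate: hand back an 'animator' for pushes only.
- (id<UIViewControllerAnimatedTransitioning>)navigationController:(UINavigationController *)navigationController
                                  animationControllerForOperation:(UINavigationControllerOperation)operation
                                               fromViewController:(UIViewController *)fromVC
                                                 toViewController:(UIViewController *)toVC
{
    if (operation == UINavigationControllerOperationPush) {
        return [[PushAnimator alloc] init];
    }
    return nil; // nil means 'use the default transition'
}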
2 - Write the 'animator'
This is the object responsible for animating the transition. It can be a view controller, or a completely new NSObject.
In the case of a UINavigationController, you could define different animators for the push and pop operations.
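A bare-bones skeleton of such an animator (the class name and properties are placeholders):

// An 'animator' is just a plain object that knows how to animate the transition.
@interface PushAnimator : NSObject <UIViewControllerAnimatedTransitioning>
// Step 3 below adds properties like these for the views involved:
@property (nonatomic, weak) UIView *originView;
@property (nonatomic, weak) UIView *targetView;
@end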
3 - Add the properties you need for your animation to your animator, and code the animation
The 'animator' might implement different protocols, depending on which transition you're trying to customise.
In the case of non-interactive animations, these are the methods:
- (NSTimeInterval)transitionDuration:(id<UIViewControllerContextTransitioning>)transitionContext : defines the duration of the animation
- (void)animateTransition:(id<UIViewControllerContextTransitioning>)transitionContext : this is where the beef goes; see the example code in the link above, and the sketch after this step
- (void)animationEnded:(BOOL)transitionCompleted : for any clean-up after your animation has played
In your case, you might want to add some 'origin' and 'target' UIView properties to your animator class (as weak properties, of course!)
Then, when you detect which view was tapped by the user (in your UITableViewDelegate or UICollectionViewDelegate didSelect methods), you tell your animator so that it can animate with THAT specific frame, and then call 'push', 'pop' or 'presentViewController', depending on your navigation logic.
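To make step 3 concrete, here is a minimal sketch of a non-interactive animator (a plain slide-up, as an illustration only; it is not the blog's exact code):

@implementation PushAnimator

- (NSTimeInterval)transitionDuration:(id<UIViewControllerContextTransitioning>)transitionContext
{
    return 0.4;
}

- (void)animateTransition:(id<UIViewControllerContextTransitioning>)transitionContext
{
    UIViewController *toVC = [transitionContext viewControllerForKey:UITransitionContextToViewControllerKey];
    UIView *container = [transitionContext containerView];

    // Start the incoming view below the screen, then slide it up into place.
    CGRect finalFrame = [transitionContext finalFrameForViewController:toVC];
    toVC.view.frame = CGRectOffset(finalFrame, 0, container.bounds.size.height);
    [container addSubview:toVC.view];

    [UIView animateWithDuration:[self transitionDuration:transitionContext]
                     animations:^{
                         toVC.view.frame = finalFrame;
                     }
                     completion:^(BOOL finished) {
                         // Always report completion, noting whether we were cancelled.
                         [transitionContext completeTransition:![transitionContext transitionWasCancelled]];
                     }];
}

@end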
You can definitely pull this off with the transitioning API.
Check out this project, I think it will help:
https://github.com/chefnobody/Colors
I was able to do it using this example from Ash Furrow at Teehan + Lax: http://www.teehanlax.com/blog/custom-uiviewcontroller-transitions/ with some modifications:
To augment this example to get the pinch/pull table view cell separation animation, you would need to identify the table view cell that was selected (or "selected" relative to the pinch gesture). Then, in -animateTransition:, you animate the table view cells above and below the selected cell out of view, revealing your detail view controller. Remember, to animate back to the table view from the detail view, you need to know (during the "pop") which cell would be selected (scrolling it back into view if it's not already visible) and then animate the cells surrounding it from off screen back into view.
As for the swipe interaction between the different cities, you would implement a different interaction controller that handles the transitions there. Again, you can probably follow Furrow's example and figure out how to pull it off.

Stop a UIGestureRecognizer from operating on UIButtons inside a UIView

I've attached a UITapGestureRecognizer to a UIView in my application. If the user double-taps it, the buttons inside that view get randomly rearranged. Working fine, lovely.
However, the user can also trigger this by double-tapping one of the buttons themselves, or even by tapping two buttons on different parts of the screen.
Is there a sensible/easy way to have this double-tap only work if the two taps are within x number of pixels and on the view itself, not on any elements within it, such as these UIButtons?
I think the usual way to do this is with gestureRecognizer:shouldReceiveTouch:. Check out this question for a lengthy discussion and all the details.
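The core of that approach is a one-method delegate; a minimal sketch, assuming the double-tap recognizer's delegate is set to the object implementing this:

// UIGestureRecognizerDelegate: ignore taps that land on any UIControl
// (UIButtons included) so the double-tap only fires on the view itself.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
       shouldReceiveTouch:(UITouch *)touch
{
    return ![touch.view isKindOfClass:[UIControl class]];
}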
One way to do this would be to attach a single-tap gesture recognizer to the buttons -- this will preempt the buttons' normal touch events, so you would have to put the button's action method in the gesture recognizer's action method. Then you would add a dependency so that the double-tapper only fires if the single-tapper fails:
[self.doubleTapper requireGestureRecognizerToFail:self.tapper];

Getting around Subclassing UIView and Using Pictures in Storyboard

I'm trying to write my first iPhone app, and I'm running into a sort of design struggle. What I want to do is have a grid of icons, and when you touch one, all the icons above and to the left of it "activate" and all the ones below and to the right "deactivate." If an icon is activated it shows one picture, and if it's not activated it shows another.
The problem I have is that I want to assign a gesture recognizer to each of these individual icons, and when an icon is tapped, it needs to call a function that updates my grid of icons. But in order to update properly, the function needs to receive as arguments the location of the image in the grid, and there's no way to call a function with arguments as part of a gesture recognizer.
So really all I need to do is extend UIImageView to hold two extra integers and the grid it's contained in, and then I could have the following code:
imageView.userInteractionEnabled = YES;
UITapGestureRecognizer *tapgr = [[UITapGestureRecognizer alloc] initWithTarget:self
                                                                        action:@selector(handleTap)];
[imageView addGestureRecognizer:tapgr];
...
- (void)handleTap
{
    [self.grid updateTableFromRow:self.row andCol:self.col];
}
So I suppose this is one way of doing it, but I'm told that I'm not supposed to extend classes in Objective-C, that I should build them from the ground up. In that case, I would just make a custom view with all the properties and/or instance variables I need, and I would fill this custom view with the UIImageView.
This is mostly fine, except when it comes to building my Storyboard. I put all the code that manages and creates this table of icons (programmatically) in another custom view, GridOfIconsView. So on the Storyboard I drag out a custom view and set it to be a GridOfIconsView, but then I just see a big white rectangle, and I really want to be able to visualize my app in the Storyboard. I know that I can drag the actual image files I use for the icons onto the Storyboard and set them to be a custom view, but then how does that work? Is that image just a background to the custom view? Would I be able to change it programmatically? So if the activated image was a green square but the deactivated one was red, and I initially dragged out red squares onto the Storyboard, would I have access to that red square image?
And a more concerning issue is that I want to manage all these icons in a data structure, either as a 2D array (id icons[][]) or an NS(Mutable?)Array of NS(Mutable?)Arrays. Either way, how could I initialize the data structure to contain links to all of these icons? The grid will probably be 8x8 or 10x10, and there's no way I'm going to have 64-100 @property declarations connecting these icons. I'm thinking the only sensible way to organize this is programmatically, but then, still, how can I visualize it in the Storyboard?
First, it's completely fine to extend classes in Objective-C, and it's done all the time. UIView, UIViewController, UIControl, etc. were all designed specifically to be subclassed and extended.
However, there are two ways you can do this that are much simpler than extending the class. First, you can have your grid as you already do, where each view has a gesture recognizer attached that calls back to a method on the view controller. Then you can set a tag on each view (or even just use the view's frame for identification) and read that from the callback method (the gesture recognizer is passed to the callback method). For example, let's say you had a grid of 4x4 views and you simply numbered them starting in the top-left, advancing each column to the right and then each row, from 0 to 15; you could easily identify the view as such:
// The system automatically passes the gesture recognizer as the only parameter
- (void)handleTap:(UIGestureRecognizer *)gestureRecognizer
{
    NSInteger viewNumber = [[gestureRecognizer view] tag];
    // do something with this view
}
The other way you can do it is to have a single gesture recognizer on the parent view, and then in your -handleTap: callback you query the position of the tap in the view. If the position is within the frame of any of your views, you know which one it is and what to do with it. If not, you can ignore the tap. This solution requires slightly more math, but also requires far less maintenance and far fewer gesture recognizers that need to be wired up. I would recommend it over tagging your views.
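A sketch of that second approach (iconViews, columns, and updateTableFromRow:andCol: are hypothetical names borrowed from the question):

// Single tap recognizer on the parent view: work out which icon, if any,
// contains the tap, assuming the icons are direct subviews of that view.
- (void)handleTap:(UITapGestureRecognizer *)recognizer
{
    CGPoint point = [recognizer locationInView:recognizer.view];
    for (NSUInteger index = 0; index < self.iconViews.count; index++) {
        UIImageView *icon = self.iconViews[index];
        if (CGRectContainsPoint(icon.frame, point)) {
            NSUInteger row = index / self.columns; // assumes row-major numbering
            NSUInteger col = index % self.columns;
            [self.grid updateTableFromRow:row andCol:col];
            return;
        }
    }
    // The tap landed between icons; ignore it.
}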

Objective-C: Trying to implement my own dragging in a UIScrollView, running into tons of issues

I'm trying to override the default behavior in a UITableView (which is in fact a UIScrollView subclass). Basically, my table takes up a third of the screen, and I'd like to be able to drag items from the table to the rest of the screen — both by holding and then dragging, and also by dragging perpendicular to the table. I was able to implement the first technique with a bit of effort using the default UIScrollView touchesShouldBegin/touchesShouldCancel and touchesBegan/Moved/Ended-Cancelled, but the second technique is giving me some serious trouble.
My problem is this: I'd like to be able to detect a drag, but I also want to be able to scroll when not dragging. In order to do this, I have to perform my dragging detection up to and including the point when touchesShouldCancel is called. (This is because touchesShouldCancel is the branching point in which the UIScrollView decides whether to continue passing on touches to its subviews or to scroll.) Unfortunately, UIScrollView's cancellation radius is pretty tiny, and if the user touches a cell and then moves their finger really quickly, only touchesBegan is called. (If the user moves slowly, we usually get a few touchesMoved before touchesShouldCancel is called.) As a result, I have no way of calculating the touch direction, since I only have the one point from touchesBegan.
If I could query a touch at any given instant rather than having to rely on the touch callbacks, I could fix this pretty easily, but as far as I know I can't do that. The problem could also be fixed if I could make the scroll view cancel (and subsequently call touchesShouldCancel) at my discretion, or at least delay the call to touchesShouldCancel, but I can't do that either.
Instead, I've been trying to capture a couple of touchesBegan/Moved calls (2 or 3 at most) in a separate overlay view over the UITableView and then forwarding my touches to the table. That way, my table is guaranteed to already know the dragging direction when touchesShouldCancel is called. You can read about variations on this method here:
http://theexciter.com/articles/touches-and-uiscrollview-inside-a-uitableview.html
http://forums.macrumors.com/showthread.php?t=640508
(Yes, they do things a bit differently, but I think the crux is forwarding touches to the UITableView after pre-processing is done.)
Unfortunately, this doesn't seem to work. Calling my table view with touchesBegan/Moved/Ended-Cancelled doesn't move the table, nor does forwarding them to the table's hitTest view (which by my testing is either the table itself or a table cell). What's more, I checked what the cells' nextResponder is, and it turns out to be the table, so that wouldn't work either. By my understanding, this is because UIScrollView, at some point in the near past, switched over to using gesture recognizers to perform its vital dragging/scrolling detection, and to my knowledge, you can't forward touches as you would normally when gesture recognizers are involved.
Here's another thing: even though gesture recognizers were officially released in 3.2, they're still around in 3.1.3, though you can't use the API. I think UIScrollView is already using them in 3.1.3.
Whew! So here are my questions:
The nextResponder method described in the two links above seems pretty recent. Am I doing something wrong, or has the implementation of UIScrollView really fundamentally changed since then?
Is there any way to forward touches to a class with UIGestureRecognizers, ensuring that the recognizers have a chance to handle the touches?
I can solve the problem by adding my own UIGestureRecognizer that detects the dragging angle to my table view, and then making sure that every gesture recognizer added before that in table.gestureRecognizers depends on mine finishing. (There are 3 default UIScrollView gesture recognizers, I think. A few are private API classes, but all are UIGestureRecognizer subclasses, obviously.) I'm not handling any of the private gesture recognizers by name, but I'm still manipulating them and also using my knowledge of UIScrollView's internals, which aren't documented by Apple. Could my app get rejected for this?
What do I do for 3.1.3? The UIScrollView is apparently already using gesture recognizers, but I can't actually access them because the API is only available in 3.2.
Thank you!
Okay, I finally figured out an answer to my problem. Two answers, actually.
Convoluted solution: subclass UIWindow and override sendEvent: to store the last touch's location. (Overriding sendEvent: is one of the examples given in the Event Handling Guide.) Then the scroll view can query the window for the last touch location when touchesShouldCancel is called.
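A sketch of that window subclass (the class and property names are mine):

// UIWindow subclass: record every touch location as events flow through.
@interface TouchTrackingWindow : UIWindow
@property (nonatomic, assign) CGPoint lastTouchPoint;
@end

@implementation TouchTrackingWindow

- (void)sendEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    if (touch) {
        self.lastTouchPoint = [touch locationInView:self];
    }
    [super sendEvent:event]; // always forward, or nothing receives touches
}

@end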
Easier solution: shortly after, I noticed that Facebook's Three20 library was storing UITouches without retaining them. I always thought that you shouldn't keep UITouch objects around beyond local scope, but Apple's docs only explicitly prohibit retention. ("A UITouch object is persistent throughout a multi-touch sequence. You should never retain a UITouch object when handling an event. If you need to keep information about a touch from one phase to another, you should copy that information from the UITouch object.") Therefore, it might be legal to simply store the initial UITouch in the table and query its new position when touchesShouldCancel is called.
Unfortunately, in the worst-case scenario, both of these techniques give me only 2 sample points, which isn't a very accurate measurement of direction. It would be much better if I could simply delay the table's touch processing or call touchesShouldCancel manually, but as far as I can tell that's either very hacky or outright impossible/illegal.
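Still, for concreteness, here is a minimal sketch of the easier, stored-touch variant (the subclass name and the "mostly horizontal means drag" heuristic are my assumptions):

// UITableView subclass: remember the initial touch (without retaining it)
// and use its current position to decide between scrolling and dragging.
@interface DraggableTableView : UITableView
@property (nonatomic, weak) UITouch *initialTouch; // stored, never retained
@property (nonatomic, assign) CGPoint initialTouchPoint;
@end

@implementation DraggableTableView

- (BOOL)touchesShouldBegin:(NSSet *)touches withEvent:(UIEvent *)event inContentView:(UIView *)view
{
    self.initialTouch = [touches anyObject];
    self.initialTouchPoint = [self.initialTouch locationInView:self];
    return [super touchesShouldBegin:touches withEvent:event inContentView:view];
}

- (BOOL)touchesShouldCancelInContentView:(UIView *)view
{
    // Only two sample points, as noted above: where the touch began and
    // where it is now.
    CGPoint current = [self.initialTouch locationInView:self];
    CGFloat dx = fabs(current.x - self.initialTouchPoint.x);
    CGFloat dy = fabs(current.y - self.initialTouchPoint.y);
    BOOL horizontalDrag = dx > dy;
    // NO keeps delivering touches to the cell (our drag); YES lets the
    // table cancel them and scroll normally.
    return !horizontalDrag;
}

@end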

Using hitTest logic only for touchesBegan and NOT gesture recognizers

I have been developing a simple game for iOS which involves dragging and uses rotation and other gesture recognizers. Dragging is realized through touchesBegan/Moved/Ended, and rotation through a recognizer.
The views are irregularly shaped, and the view borders sometimes overlap, so I implemented Ole Begemann's UIImage+ColorAtPixel in my picture view and overrode the pointInside:withEvent: method in the main element view. pointInside: invokes the method in the picture view, which checks the alpha at the touch point and returns NO if the transparent section has been touched. Essentially, hitTest: ignores this branch.
But the side effect is that hitTest: ignores all touches on the transparent section, and the rotation recognizer only works on the non-transparent zone. For some views, which are too small in size, it becomes impossible to use the rotation gesture :(
Is there any way to somehow avoid this problem and use the hitTest logic only for touchesBegan? I tried to work the solution out, but it seems that hitTest: works strictly before any touch handling.
Checking the transparency in touchesBegan works, but when you touch a transparent section that overlaps the non-transparent section of another view, the latter doesn't receive the touch.
I just can't figure out the trick...
Thank you in advance for any help!
I would make the dragging use a UIPanGestureRecognizer, so that you can implement the delegate method -gestureRecognizer:shouldReceiveTouch: to return NO when your pan recognizer is considering a touch in the transparent area. Leave the delegate method unimplemented (or return YES) for your rotation recognizer so that it receives everything.
In addition, using gesture recognizers for both kinds of actions has other benefits, like the ability to specify dependencies with -requireGestureRecognizerToFail:.
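A sketch of that delegate method for the pan (drag) recognizer (panRecognizer and pictureView are placeholder names; colorAtPixel: stands in for the UIImage+ColorAtPixel lookup already in the project, and any view-to-image coordinate conversion is assumed to happen elsewhere):

// Refuse touches that start on a transparent pixel, but only for the pan.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
       shouldReceiveTouch:(UITouch *)touch
{
    if (gestureRecognizer != self.panRecognizer) {
        return YES; // the rotation recognizer keeps receiving everything
    }
    CGPoint point = [touch locationInView:self.pictureView];
    UIColor *pixelColor = [self.pictureView.image colorAtPixel:point];
    return CGColorGetAlpha(pixelColor.CGColor) > 0.0;
}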
Try checking whether the UIEvent parameter passed to pointInside:withEvent: when it comes from the gesture recognizer is different from the one passed when it is called from touchesBegan/Moved/Ended.
If it is different, then I guess this solves your problem.
Just put a breakpoint or an NSLog in pointInside: to see the event parameter in each case and see if you can differentiate.
Good luck!