I am doing some drawing on a CALayer and want the user to be able to single-tap different parts of the drawing to trigger a response. I looked into gesture recognizers, but it seems they need to be tied to a UIView. Any idea how I can get my desired behavior using CALayers?
You need a responder to be able to respond to touches. From the view that hosts this layer (at some point up the tree, a view must host it), you can use -[CALayer hitTest:] to find the deepest sublayer under the touch point.
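As a minimal sketch of that approach, assuming a tap recognizer is attached to the hosting view and each tappable sublayer has its name property set (both are assumptions, not from the question):

- (void)handleTap:(UITapGestureRecognizer *)tap {
    UIView *host = tap.view; // the view hosting the drawing layers
    // -[CALayer hitTest:] takes a point in the *superlayer's* coordinate
    // space; for a view's root layer, that is the superview's space.
    CGPoint point = [tap locationInView:host.superview];
    CALayer *hit = [host.layer hitTest:point];
    if (hit != nil && hit != host.layer) {
        NSLog(@"Tapped sublayer: %@", hit.name);
    }
}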
I created a custom transition for a navigation controller where, as the user pans up, the next controller's view is revealed below while the current controller's view moves upward. I want that view to follow the touch (as if it were glued to the finger at the touch point), but I don't know how to pass that translation from the pan gesture recognizer to the object that implements UIViewControllerAnimatedTransitioning. Well, I do, but I cannot access it from inside the [UIView animateWithDuration:...] block (it seems that block is executed once; I thought it would be executed as the percentage of completion changes). How can I accomplish this?
To ask the question in a different way: in the Photos app in iOS 7, when you are looking at a photo, touch with two fingers and pinch/move, and you will see that the photo follows the finger movements. Any example code for this?
You'll need to create a separate interaction controller, typically a UIPercentDrivenInteractiveTransition (or a subclass of it), to go along with your custom transition animation. This is the object that tracks how complete your animation is, as a percentage. There's too much to explain in a single SO answer, but have a look at the docs here. You can also refer to one of my implementations of a custom transition animation with interactive abilities here to see it in action.
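As a rough sketch of the wiring (self.interactionController and nextViewController are placeholders for your own setup, not API names): a pan recognizer drives the interaction controller, and your navigation controller delegate returns that same object from -navigationController:interactionControllerForAnimationController:.

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    // How far up the finger has moved, as a fraction of the view's height.
    CGFloat percent = -[pan translationInView:pan.view].y / pan.view.bounds.size.height;
    percent = MAX(0.0, MIN(1.0, percent));
    switch (pan.state) {
        case UIGestureRecognizerStateBegan:
            self.interactionController = [UIPercentDrivenInteractiveTransition new];
            [self.navigationController pushViewController:nextViewController animated:YES];
            break;
        case UIGestureRecognizerStateChanged:
            [self.interactionController updateInteractiveTransition:percent];
            break;
        case UIGestureRecognizerStateEnded:
        case UIGestureRecognizerStateCancelled:
            if (pan.state == UIGestureRecognizerStateEnded && percent > 0.5) {
                [self.interactionController finishInteractiveTransition];
            } else {
                [self.interactionController cancelInteractiveTransition];
            }
            self.interactionController = nil;
            break;
        default:
            break;
    }
}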
Croberth's answer is correct. You actually have two choices.
If you want to keep your custom animation, then use a UIPercentDrivenInteractiveTransition and keep updating it as the gesture proceeds, as in this example of mine:
https://github.com/mattneub/Programming-iOS-Book-Examples/blob/master/bk2ch06p296customAnimation2/ch19p620customAnimation1/AppDelegate.m
However, I prefer to split the controller into two separate cases: if we are interactive (using a gesture), then I just keep updating the view positions myself, manually, as the gesture proceeds, including completing or reversing it at the end, as in this code:
https://github.com/mattneub/Programming-iOS-Book-Examples/blob/master/bk2ch06p300customAnimation3/ch19p620customAnimation1/AppDelegate.m
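Sketched very roughly, that manual branch might look like this (currentView and the halfway threshold are illustrative, not taken from the linked project):

- (void)handlePan:(UIPanGestureRecognizer *)pan {
    // currentView: the outgoing controller's view (placeholder ivar).
    CGFloat dy = [pan translationInView:pan.view].y;
    CGFloat height = pan.view.bounds.size.height;
    if (pan.state == UIGestureRecognizerStateChanged) {
        // Glue the outgoing view to the finger (dy is negative panning up).
        currentView.transform = CGAffineTransformMakeTranslation(0, MIN(dy, 0));
    } else if (pan.state == UIGestureRecognizerStateEnded) {
        BOOL commit = dy < -height / 2; // past halfway: finish, else snap back
        [UIView animateWithDuration:0.25 animations:^{
            currentView.transform = commit
                ? CGAffineTransformMakeTranslation(0, -height)
                : CGAffineTransformIdentity;
        } completion:^(BOOL finished) {
            // Complete or unwind the controller swap here.
        }];
    }
}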
I am working on an app that works rather like MS Paint (something to draw lines with, etc.).
I have a white UIView, which is basically where the user draws. On top of this UIView I set up a UIImage, which is gray, with 0.4 alpha. I want this UIImage to be used as a blotting paper. The idea is to disable touch when the user puts the palm of their hand on this area, so it's more comfortable to draw (multitouch is disabled, and with this "blotting paper" you won't accidentally draw something with your palm...).
Even if I bring the UIImage to the front, on top of the view, and even if I disable user interaction on this UIImage, it is still possible to draw on the UIView behind the UIImage (kind of strange!).
I do not understand what's happening: it seems that the image is transparent to touches, and that the UIView "behind" is still active, even though it's overlaid by the UIImage?!
Any help/indication/idea would be much appreciated! Thank you :-)
Have you set the "userInteractionEnabled" property of the UIImage to "NO"?
You may actually want to do the opposite. When you disable user interaction or touches, the view basically becomes invisible to touches and they are passed on to the next view.
In your case you do want userInteractionEnabled because you want the view to catch those touches.
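As a minimal sketch of that (the view names and frame are illustrative):

UIView *palmRest = [[UIView alloc] initWithFrame:palmRestFrame];
palmRest.backgroundColor = [[UIColor grayColor] colorWithAlphaComponent:0.4];
palmRest.userInteractionEnabled = YES; // catch touches instead of letting them fall through
[drawingView.superview addSubview:palmRest]; // added last, so it sits on top

Because the view implements no touch handling itself, the touches it catches simply die there, which is exactly the blotting-paper behavior you want.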
You have to disable user interaction on the UIImageView, not the UIImage (UIImage isn't a view and has no such property), and it should work.
Edit:
Or you could be sneaky and just add an empty view over it. Use the same frame size so it overlaps perfectly, and that's it. You'll be able to see everything you need, and since it's not a subview of the drawing view, there will be no interaction with it: touches will get registered but won't have any effect. :P
No better ideas unless you post some of your code...
OK, so I managed to do what I wanted to! YAY!
I have 3 different classes:
StrokesViewController (UIViewController) - the view controller
StrokesView (UIView) - the view where the user draws the strokes.
BlottingPaper (UIView) - the blotting paper.
I have a XIB file "linked" to all three.
I created this new class, called "BlottingPaper", a subclass of UIView. The .h and .m files are actually empty (I just #import <Foundation/Foundation.h>).
User interaction is enabled on BlottingPaper.
I do not use exclusiveTouch on this class.
In the XIB file, I just put a view on top of StrokesView and set its class to BlottingPaper (modify the alpha as I want, blablabla...).
And that's it! When I put the palm of my hand on it, it doesn't draw anything in the area where my hand is, but I can still draw with my finger on the rest of the StrokesView!
In addition to Dancreek's response, you should be setting buvard.userInteractionEnabled = YES; so that it captures interaction.
You should also set buvard.exclusiveTouch = YES; so that buvard is the only view which will receive touch events.
When you remove buvard you should set buvard.exclusiveTouch = NO; so that other views will regain their ability to receive touches.
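Putting those three pieces together, assuming buvard is a plain UIView you add over the strokes view while the palm is resting (strokesView is borrowed from the poster's class names):

// showing the blotting paper
buvard.userInteractionEnabled = YES; // buvard swallows the palm's touches
buvard.exclusiveTouch = YES;         // no other view gets touches while buvard tracks them
[strokesView.superview addSubview:buvard];

// hiding it again
buvard.exclusiveTouch = NO; // restore normal touch delivery
[buvard removeFromSuperview];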
I've got a UIView that I've subclassed to be the main view used throughout my app. In it, I have two subviews: banner and container. Banner is basically a place to put an ad or a disclaimer or whatever. Container is meant to act as the primary view, to which you can add, remove and whatever as if it were the only view.
Right now, I'm just overriding the methods of the parent view and sending the calls to the container view. I'm wondering if there is an easier way to do this, without having to write out stuff like this for every method:
- (void)addSubview:(UIView *)view {
    // Hand the subview straight off to the container.
    [container addSubview:view];
}
Maybe something that lets the view delegate all method calls to a specific subview, rather than responding to them itself.
Anyone know if this is possible?
I'm a little confused by the question.
The responder chain is present and passes UI events up through the visible views on screen, by hierarchy. It may be useful to read a little about the responder chain, because by design it passes events from the deepest view up to the highest (root) view, which is the opposite of the direction you're seeking (if I'm reading this right).
If you need to forward events from a superview to a subview, then to respect principles of encapsulation, you should define appropriate actions in your subview's interface and configure your superview to target those actions in the subview.
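For the container question specifically, a hedged sketch of that idea: rather than mirroring UIView's whole API, give the wrapper one or two explicit pass-through methods (all names here are illustrative):

@interface MainView : UIView
@property (nonatomic, strong) UIView *bannerView;
@property (nonatomic, strong) UIView *containerView;
- (void)addContentSubview:(UIView *)view;
@end

@implementation MainView
- (void)addContentSubview:(UIView *)view {
    // A deliberately separate method: overriding -addSubview: itself is
    // risky, because UIKit also calls it internally (e.g. when this class
    // adds bannerView and containerView).
    [self.containerView addSubview:view];
}
@end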
I'm trying to determine when the back button is pressed in a navigation-based app. Would hitTest be used for that? I've seen references to hitTest, but I'm not exactly sure what it is or how to code for it. Any help is much appreciated. Thanks!!
No. Hit testing is the (recursive) process by which UIKit determines which view receives touch events. You shouldn't need to participate in it or invoke it.
If you're using a UINavigationController, it will do the right thing to transition between view controllers when the back button is pressed. If an individual view controller needs to know when it is transitioning off-screen, it should override -viewWillDisappear: and -viewDidDisappear:. See the documentation for those methods for more information.
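A minimal sketch of that override; -isMovingFromParentViewController is what distinguishes being popped (the back button) from other kinds of disappearance:

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    if ([self isMovingFromParentViewController]) {
        // This controller is being popped off the navigation stack,
        // which is what happens when the back button is tapped.
        NSLog(@"Back button (or another pop) is removing this controller");
    }
}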
I have been developing a simple game for iOS that involves dragging and using rotation and other gesture recognizers. Dragging is implemented through touchesBegan/Moved/Ended, and rotation through a recognizer.
The views are irregularly shaped, and the view borders sometimes overlap, so I implemented Ole Belgeman's UIImage+ColorAtPixel in my picture view and overrode pointInside:withEvent: in the main element view. pointInside: invokes the method in the picture view, which checks the alpha at the touch point and returns NO if a transparent section has been touched. Essentially, hitTest ignores that branch.
But the side effect is that hitTest ignores all touches on the transparent sections, so the rotation recognizer only works in the non-transparent zone. For some views that are too small, it becomes impossible to use the rotation gesture :(
Is there any way to avoid this problem and apply the hitTest logic only to touchesBegan? I tried to work out a solution, but it seems that hitTest runs strictly before any touch handling.
Checking the transparency in touchesBegan works, but when you touch a transparent section that overlaps the non-transparent section of another view, the latter doesn't receive the touch.
I just can't figure out the trick...
Thank you in advance for any help!
I would make the dragging use a UIPanGestureRecognizer, so that you can implement the delegate method -gestureRecognizer:shouldReceiveTouch: to return NO when your pan recognizer is considering a touch in the transparent area. Leave the method unimplemented (or return YES) for your rotation recognizer so that it receives everything.
In addition, using gesture recognizers for both kinds of actions has other benefits, like the ability to specify dependencies with -requireGestureRecognizerToFail:.
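A sketch of that delegate split, assuming your view is the pan recognizer's delegate and has an -isPointTransparent: helper built on the color-at-pixel category mentioned in the question (the helper name is hypothetical):

- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
       shouldReceiveTouch:(UITouch *)touch {
    if (gestureRecognizer == self.panRecognizer) {
        CGPoint point = [touch locationInView:self];
        // Pan only begins on opaque pixels; hypothetical helper.
        return ![self isPointTransparent:point];
    }
    return YES; // the rotation recognizer receives every touch
}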
Try checking whether the UIEvent parameter passed to pointInside:withEvent: when it comes from the gesture recognizer is different from the one passed when it is called from touchesBegan/Moved/Ended.
If it is different, then I guess that solves your problem.
Just put a breakpoint or an NSLog in pointInside:withEvent: to see the event parameter in each case and check whether you can differentiate.
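For example, a purely diagnostic override might look like:

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    // Log each call to see whether recognizer-driven hit tests arrive
    // with a different event than the ones from raw touch delivery.
    NSLog(@"pointInside: %@ event: %@", NSStringFromCGPoint(point), event);
    return [super pointInside:point withEvent:event];
}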
Good Luck!