I'm writing a multitouch gesture recognition library for a non-iOS platform, but UIGestureRecognizer and the concepts around it seem pretty solid, so I'm using it as a reference in some ways.
One thing is unclear to me: the requireGestureRecognizerToFail: method. Can anyone point out any potential use-cases other than double/triple/n-tap over single tap? I understand its purpose and even wrote that kind of implementation, but eventually removed the whole thing, because IMHO the code smells a bit if you try to include it in the base gesture recognizer class (even though it works perfectly in the double-tap/single-tap scenario). To me it seems much cleaner to write an extra couple of lines of code to work around the single-tap/double-tap situation once you actually have it, rather than build this very specific thing into the base gesture class... But maybe I'm missing some other scenarios? Have you met any?
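For reference, the canonical single/double-tap setup in UIKit looks roughly like this (a sketch; the handler names are made up):

UITapGestureRecognizer *doubleTap =
    [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleDoubleTap:)];
doubleTap.numberOfTapsRequired = 2;

UITapGestureRecognizer *singleTap =
    [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleSingleTap:)];
singleTap.numberOfTapsRequired = 1;

// Without this line, a double tap would also fire the single-tap handler.
// With it, the single tap fires only after the double tap has definitively failed.
[singleTap requireGestureRecognizerToFail:doubleTap];

[view addGestureRecognizer:doubleTap];
[view addGestureRecognizer:singleTap];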
OK, so I accidentally found another use-case: swipe on a UIScrollView (the scroll view's pan gesture recognizer requires the swipe to fail): developer.apple.com/videos/wwdc/2011/?id=104, around minute 30.
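In that session, the idea is roughly the following (a sketch; scrollView and the handler are assumed to exist):

UISwipeGestureRecognizer *swipe =
    [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(handleSwipe:)];
[scrollView addGestureRecognizer:swipe];

// The pan (scrolling) begins only once the swipe has failed, so a quick
// horizontal flick is reported as a swipe instead of starting a scroll.
[scrollView.panGestureRecognizer requireGestureRecognizerToFail:swipe];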
Was my question really that unclear, or are there just not many iOS developers around here?
A Tale of Two Subclasses
By Ben Stock
Prologue
I'm in the process of making a really nice-looking set of controls which automatically change their appearance depending on the type of window they're used in (e.g. if you drop a button in a normal window, it looks like any other standard Aqua button; if you drop it on an NSPanel with a window mask of NSHUDWindowMask, however, it'll automatically switch its style to look good on a HUD background). So far, I've subclassed NSButton, NSTextField, NSSlider, and NSSearchField. Last night I started on NSTabView, only to be slammed down by its lack of customizability. It's a real pain in the ass, but I'm a developer, so I'm used to finding my own way. The first thing I think to do is add an instance of NSSegmentedControl in place of the private tabs used by NSTabView. So far, so good. I've got the buttons selectable, they automatically update when new NSTabViewItems are added, and they work just like the real thing.
And the Pain Begins …
Finally, I start to style my segments, and … WTF have I gotten myself into‽ I should've just gone into acting or something. Objective-C development is slowly taking years off my life. No matter what I do, the "tracking areas" used by NSSegmentedCell don't seem to update when my segment widths change. So when my widths change, my artwork does, too; however, the actual tracking areas don't update (even when I override -updateTrackingAreas). It's really hard to explain, so I decided to draw my segment rectangles behind and in front of the ones drawn by super in -drawSegment:inFrame:withView:. Here's a screenshot with my art drawn on top of the underlying tracking areas:
And here's super's implementation above my segment rects:
I've tried overriding everything I can think of. Here are a few of the methods I've overridden (and un-overridden):
-cellSize (NSSegmentedCell)
-cellSizeForBounds: (NSSegmentedCell)
-sizeToFit (NSSegmentedControl)
-intrinsicContentSize (NSSegmentedControl)
-setWidth:forSegment: (NSSegmentedControl/Cell)
-startTrackingAt:inView: (NSSegmentedCell)
-continueTracking:at:inView: (NSSegmentedCell)
-stopTracking:at:inView:mouseIsUp: (NSSegmentedCell)
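For context, the debug drawing mentioned earlier looks something like this in my NSSegmentedCell subclass (a rough sketch, not the shipping code):

- (void)drawSegment:(NSInteger)segment inFrame:(NSRect)frame withView:(NSView *)controlView {
    // Flood each segment rect with translucent red so my custom widths can be
    // compared against the frames super actually draws (and tracks) with.
    [[[NSColor redColor] colorWithAlphaComponent:0.3] set];
    NSRectFillUsingOperation(frame, NSCompositeSourceOver);
    [super drawSegment:segment inFrame:frame withView:controlView];
}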
At this point, some of those methods in the above list are still using my overrides and some aren't. I've mixed and matched, deleted, simplified, rewritten, and refactored, and no matter what I do, the underlying rectangles don't change. I love Apple as much as the next guy, but their view of customization needs to change. I can't stand not being able to understand what's going on in the implementation of all these stupid controls. Not to mention the fact that I still can't fully wrap my head around Auto Layout (which is about the most un-"auto" thing I've ever dealt with), but that's a post for another day. Anyway, if anybody could help a brotha out, I'd be super grateful. Sorry for ranting and thanks for reading!
P.S. None of these things are finished, so please don't be too hard on a few pixel imperfections. ;-)
I'm trying to get to grips with using Chipmunk (not the Obj-C version) with UIKit components on iOS, and still struggling immensely.
I'm trying to establish how, in the ChipmunkColorMatch example in the documentation, the UIButton instances are actually hooked up to any of the physics calculations. I see that the UIButtons are created inside the Ball class, and some of their properties are set (type, image, etc.), but I'm not seeing where the cpBody or cpShape (whichever it is) actually gets attached to that UIButton. I assume it needs to be, or else none of the physics would be reflected in the UI.
I've looked at the SimpleObjectiveChipmunk tutorial on the website too, but because it uses libraries unavailable to me (the Obj-C libraries), I can't establish how it works there, either. Again, I see a UIButton being created and positioned on-screen, but I don't see how the cpBody (or in that case, ChipmunkBody) is linked to the button in any way.
Could anyone shed some light on how this works? Effectively what I'm going to need are some UIButton instances which can be flicked around, but I've not even got as far as working out how to create forces yet, since I can't get the bodies hooked up to the buttons.
Much obliged, thanks in advance.
EDIT: I should also point out that I'm not using, and don't want to use, cocos2d in this project at all. I've seen tutorials using that, but it's a third layer of confusion to add in. Thanks!
Assuming this source is the project you're asking about, it looks like the magic happens in Ball's sync method -- it creates a CGAffineTransform representing the translation and rotation determined by the physics engine, and applies that to the button.
In turn, that method is called by the view controller's draw: method, which is timed to occur on every frame using CADisplayLink, and updates the physics engine before telling each Ball to sync.
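In other words, the pattern is roughly this (a sketch with assumed property names, not the project's exact code):

// View controller: step the physics each frame, then push results into UIKit.
- (void)startAnimation {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(draw:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)draw:(CADisplayLink *)link {
    cpSpaceStep(self.space, 1.0f / 60.0f);   // advance the simulation
    for (Ball *ball in self.balls) {
        [ball sync];                          // copy body state onto each button
    }
}

// Ball: mirror the cpBody's position and rotation onto the UIButton.
- (void)sync {
    cpVect pos = cpBodyGetPos(self.body);     // Chipmunk 6 C API
    cpFloat angle = cpBodyGetAngle(self.body);
    self.button.transform = CGAffineTransformRotate(
        CGAffineTransformMakeTranslation(pos.x, pos.y), angle);
}

So the body is never "attached" to the button in any formal sense; the link is just this per-frame copy of position and angle from the physics body to the view.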
I'm working with some code that I need to refactor. A view controller is acting as a container for two other view controllers, and will swap between them, as shown in the code below.
This may not be the best design. Swapping the view controllers in this way might not be required. I understand that. However, as I work with this code I want to further understand what happens with the addChildViewController: call. I haven't been able to find the answer in Apple's docs or in related questions here (probably an indication that the design needs to change).
Specifically - how does the container view controller handle a situation where it is asked to add a child view controller, which it has already added? Does it recognise that it has already added that view controller object?
E.g. if the code below is inside a method - and that method is called twice...
// Add the new child and its view, then tell it the move is complete.
[self addChildViewController:viewControllerB];
[self.view addSubview:viewControllerB.view];
[viewControllerB didMoveToParentViewController:self];

// Notify the old child it's leaving, then tear it down.
[viewControllerA willMoveToParentViewController:nil];
[viewControllerA.view removeFromSuperview];
[viewControllerA removeFromParentViewController];
Thanks,
Gavin
In general, Apple's guidelines for view controller "containment" (where one view controller contains another) should be followed to determine whether you need to implement containment at all.
In particular, worrying about adding the same child view controller twice is like worrying about presenting the same view controller twice. If you've really thought things through, you shouldn't need to face that problem. Your hunch is correct.
I agree that Apple's docs should be more up-front about what happens with weird parameters or when called out of sequence, but it may also be a case of not wanting to tie themselves to an error-correcting design that will cause trouble down the road. When you work out a design that doesn't ever call these methods in the wrong way, you solve the problem correctly and make yourself independent of whatever error correction they may or may not have - even more important if you consider that, since it's not documented, that error correction may work differently in the future, breaking your app.
Going even a bit further, you'll notice that Apple's container view controllers can't get into an invalid state (at least not easily with public API). With a UITabBarController, switching from one view controller to another is an atomic operation, and the tab bar controller at any point in time knows exactly what's going on. The most it ever has to do is remove the active one and show the new one. The only time it blows everything out of the water is when you tell it "blow everything out of the water and start using these view controllers instead".
Coding for anything else, like removing all views or all view controllers no matter what, may in some cases seem expedient or robust, but it's quite the opposite: in effect, one end of your code doesn't trust the other end to keep its part of the deal. In any situation where that actually helps you, it means you've let people add view controllers willy-nilly without the control you should want, and that's the problem you should fix.
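To make that concrete, here's one way (names are hypothetical) to funnel the swap through a single method so the container always knows its one current child, and calling it twice with the same controller is a harmless no-op:

- (void)showChildViewController:(UIViewController *)newChild {
    if (newChild == self.currentChild) {
        return; // already showing it; nothing to do
    }
    UIViewController *oldChild = self.currentChild;

    [oldChild willMoveToParentViewController:nil];

    [self addChildViewController:newChild];
    newChild.view.frame = self.view.bounds;
    [self.view addSubview:newChild.view];
    [newChild didMoveToParentViewController:self];

    [oldChild.view removeFromSuperview];
    [oldChild removeFromParentViewController];

    self.currentChild = newChild;
}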
Currently, when selecting components or swiping a UIPickerView, there's a lengthy animation before the selection settles, with a "gravity" effect near values. Is there a simple way to speed up this animation? I've looked at the delegate protocols as well as UIPickerView's methods and properties. Will I have to subclass and override the animation method? Any help would be useful.
There is no way to do this. If you'd like for there to be a way to do this, please file a bug asking for it.
Also, relying on implementation details and a particular internal view hierarchy, as Fabian suggests, is a really excellent way to introduce a ton of fragility into your application and open up the possibility of your app breaking in the future, should UIKit ever change anything.
I don't know of a way to achieve that using public API, but UIPickerView uses a UIPickerTableView as a subview somewhere in its view hierarchy. That is a subclass of UITableView which is a subclass of UIScrollView which has a decelerationRate property.
You shouldn't use private API, though. If you really need this and it's not for an App Store app this might be okay, but you should be careful and code defensively.
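For illustration only, the kind of fragile traversal being warned against looks like this (it leans entirely on undocumented internals and can break with any iOS update):

// DANGER: relies on UIPickerView's private view hierarchy. Do not ship this.
static void AdjustPickerDeceleration(UIView *view) {
    for (UIView *subview in view.subviews) {
        if ([subview isKindOfClass:[UIScrollView class]]) {
            // The picker's internal table view is ultimately a UIScrollView.
            ((UIScrollView *)subview).decelerationRate = UIScrollViewDecelerationRateFast;
        }
        AdjustPickerDeceleration(subview); // recurse into nested subviews
    }
}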
I don't have 50 rep, so I can't comment on this (which is where this should really go). This question shouldn't have been downvoted, since it's legitimate. The valid answer is "no, you can't do that without private API hacks", but the question is still valid.
I am working to integrate a current iOS application with an analytics suite. One of the analytics items we will use in our UX analysis is a complete record of all gestures (at least ones that are recognized through a UIGestureRecognizer subclass). My goal is to add this hook into the analytics suite without having to subclass each gesture recognizer.
My initial thought was to write a category that had an override for an existing method on UIGestureRecognizer, but I couldn't find a safe way to do that (and I also learned that there is no way to call the class's existing implementation of that method without method swizzling).
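For completeness, the swizzling route I was trying to avoid would look something like this; a sketch only, with made-up method names, using a category's +load to exchange implementations:

#import <objc/runtime.h>

@implementation UIGestureRecognizer (Analytics)

+ (void)load {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        Method original = class_getInstanceMethod(self, @selector(initWithTarget:action:));
        Method swizzled = class_getInstanceMethod(self, @selector(analytics_initWithTarget:action:));
        method_exchangeImplementations(original, swizzled);
    });
}

- (instancetype)analytics_initWithTarget:(id)target action:(SEL)action {
    // Implementations are exchanged, so this actually calls the original init.
    UIGestureRecognizer *recognizer = [self analytics_initWithTarget:target action:action];
    // Every recognizer gets an extra target that forwards to analytics.
    [recognizer addTarget:recognizer action:@selector(analytics_track:)];
    return recognizer;
}

- (void)analytics_track:(UIGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateRecognized) {
        // Hypothetical analytics call -- substitute the real suite's API here.
        NSLog(@"gesture: %@", NSStringFromClass([recognizer class]));
    }
}

@end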
My next approach would have been to use +poseAsClass: and simply have a subclass of UIGestureRecognizer pose as UIGestureRecognizer, adding a target on init. However, I then learned that posing is deprecated (and has been for a while), so I abandoned this approach as well.
Obviously, I could subclass each gesture recognizer we are using, but I feel that doesn't take advantage of the dynamic nature of obj-c.
Is there a good way to accomplish this?
After research, I don't think there is a clean way to do this. I ended up subclassing all of the gesture recognizers to accomplish this shared functionality.