Adding a Target for All UIGestureRecognizers - objective-c

I am working to integrate a current iOS application with an analytics suite. One of the analytics items that we will use in our UX analysis is a complete record of all gestures (at least those recognized through a UIGestureRecognizer subclass). My goal is to add this hook into the analytics suite without having to subclass each gesture recognizer.
My initial thought was to write a category that had an override for an existing method on UIGestureRecognizer, but I couldn't find a safe way to do that (and I also learned that there is no way to call the class's existing implementation of that method without method swizzling).
My next thought was to use poseAsClass: and simply have a subclass of UIGestureRecognizer pose as UIGestureRecognizer, adding a target on init. However, I then learned that posing is deprecated (and has been for a while), so I abandoned this approach as well.
Obviously, I could subclass each gesture recognizer we are using, but I feel that doesn't take advantage of the dynamic nature of obj-c.
Is there a good way to accomplish this?

After research, I don't think there is a clean way to do this. I ended up subclassing all of the gesture recognizers to accomplish this shared functionality.
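For reference, a minimal sketch of that subclassing approach, assuming a hypothetical AnalyticsTracker singleton with a gestureRecognized: method; the same pattern has to be repeated for each recognizer class you use:

    #import <UIKit/UIKit.h>
    // Assumes a hypothetical AnalyticsTracker class is imported elsewhere.

    @interface TrackedTapGestureRecognizer : UITapGestureRecognizer
    @end

    @implementation TrackedTapGestureRecognizer

    - (instancetype)initWithTarget:(id)target action:(SEL)action
    {
        self = [super initWithTarget:target action:action];
        if (self) {
            // Every instance also reports to the shared analytics object.
            [self addTarget:[AnalyticsTracker sharedTracker]
                     action:@selector(gestureRecognized:)];
        }
        return self;
    }

    @end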

Related

Speed up Animation for UIPickerView Scrolling

Currently, when selecting components or swiping a UIPickerView, there is a lengthy animation before the selection settles, with a "gravity" effect near values. Is there a simple way to speed up this animation? I've looked at the delegate protocols as well as UIPickerView's methods and properties. Will I have to subclass and override the animation method? Any help will be useful.
There is no way to do this. If you'd like for there to be a way to do this, please file a bug asking for it.
Also, relying on implementation details and a particular internal view hierarchy, as Fabian suggests, is a really excellent way to introduce a ton of fragility into your application and open the possibility of your app breaking in the future, should UIKit ever change anything.
I don't know of a way to achieve that using public API, but UIPickerView uses a UIPickerTableView as a subview somewhere in its view hierarchy. That is a subclass of UITableView which is a subclass of UIScrollView which has a decelerationRate property.
You shouldn't use private API, though. If you really need this and it's not for an App Store app this might be okay, but you should be careful and code defensively.
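For illustration only, a sketch of the fragile private-hierarchy approach described above; it walks the picker's subviews looking for the internal scroll view. This relies on undocumented internals and may break in any iOS update:

    // Recursively find the internal UIScrollView and make it decelerate
    // faster. Fragile: depends on UIPickerView's private view hierarchy.
    static void SpeedUpPickerScrolling(UIView *view)
    {
        for (UIView *subview in view.subviews) {
            if ([subview isKindOfClass:[UIScrollView class]]) {
                ((UIScrollView *)subview).decelerationRate =
                    UIScrollViewDecelerationRateFast;
            }
            SpeedUpPickerScrolling(subview);
        }
    }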
I don't have 50 rep, so I can't comment on this (which is where this should really go). This question shouldn't have been downvoted, since it is legitimate. The valid answer is "no, you can't do that without private API hacks", but the question is still valid.

Using viewDid/WillMoveToSuperview to setup an NSView

I'd like to know which is the best way to setup an NSView.
The only method suitable for this purpose seems to be viewDidMoveToSuperview.
In that method I can add subviews, and in viewWillMoveToSuperview I can do geometry operations on the frame, etc.
But these are only my suppositions... I can't find any useful documentation that explains which is the best place to perform setup operations.
What do you think about that?
The reason you don't find any documentation on where to set up your NSViews is probably that you can set up views, add subviews, etc. in pretty much any method, as long as it is called on the main thread.
For simple apps, applicationDidFinishLaunching: of the application delegate is a useful place.
When the app grows, you might want to consider doing this lazily, when a new window is opened or when a view is added.
For normal apps, you won't need to do anything in viewWillMoveToSuperview/viewDidMoveToSuperview.
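As a minimal sketch of the simple case, assuming the app delegate has a window outlet and a hypothetical ChartView subclass of NSView:

    // In the app delegate. ChartView is an illustrative NSView subclass.
    - (void)applicationDidFinishLaunching:(NSNotification *)notification
    {
        // Create and add the view once the app is up and running.
        ChartView *chart =
            [[ChartView alloc] initWithFrame:NSMakeRect(20.0, 20.0, 400.0, 300.0)];
        [self.window.contentView addSubview:chart];
    }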

Registering all view controllers for NSNotifications

I have a custom graphic that is to be displayed to a user when an event occurs. The graphic needs to be displayed on whichever view controller is currently being shown to the user.
The way I have programmed it so far is by adding to ALL view controllers:
1) the .h file for the custom graphic class
2) an observer for the NSNotification event that is raised
3) the method which actually draws the graphic.
This doesn't feel like a very efficient way of doing things, and I was wondering if anyone has a better approach?
To me it sounds like you've done it in a fairly sane way. The only other way I can think of is to add the graphic to the window, which would then overlay the current view controller, and you'd only need one object listening for the notification; you could use the app delegate, for instance. But then you would have to handle rotation of the screen yourself when adding the graphic on top.
What you are doing is correct. The only thing you can improve is to move the graphics-drawing part into the custom graphic class (if you are not already doing so). Just make a UIViewController member variable in the graphics class, set it to the view controller currently being displayed once you receive the notification, and the class will then draw itself based on the view controller you set.
The reason it doesn't feel efficient is that you're duplicating a lot of code. That's more work at the outset, and it creates a maintenance headache. You should be taking advantage of the inheritance that's built into object oriented languages, including Objective-C.
If you want all your view controllers to share some behavior, then implement that behavior in a common superclass. Derive all your other view controllers from that superclass, and they'll all automatically get the desired behavior. Your superclass's initializer can take care of registering the view controller for the notification(s) that you care about, and -dealloc can unregister it. This way, you don't have to clutter up each view controller with the same repeated code, and if you want to change the code you only have to do it in one place.
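A minimal sketch of that superclass, assuming ARC; the notification name and handler are hypothetical placeholders:

    @interface BaseViewController : UIViewController
    @end

    @implementation BaseViewController

    - (instancetype)initWithNibName:(NSString *)nibNameOrNil
                             bundle:(NSBundle *)nibBundleOrNil
    {
        self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
        if (self) {
            // Every subclass automatically listens for the event.
            [[NSNotificationCenter defaultCenter]
                addObserver:self
                   selector:@selector(eventOccurred:)
                       name:@"MyAppEventNotification"
                     object:nil];
        }
        return self;
    }

    - (void)dealloc
    {
        [[NSNotificationCenter defaultCenter] removeObserver:self];
    }

    - (void)eventOccurred:(NSNotification *)notification
    {
        // Draw the custom graphic on this controller's view here.
    }

    @end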

UIGestureRecognizer requireGestureRecognizerToFail scenarios

I'm writing a multitouch gesture recognition library for a non-iOS platform, but UIGestureRecognizer and the whole concept around it seem pretty solid, so in some ways I use it as a reference.
One thing is unclear to me: the requireGestureRecognizerToFail: method. Can anyone point to any potential use-cases other than a double/triple/n-tap over a single tap? I do understand its purpose and even wrote that kind of implementation, but eventually removed it entirely, because IMHO the code smells a bit if you try to include it in the base gesture recognition class (even though it works perfectly in the double-tap scenario). To me it seems much cleaner to add an extra couple of lines of code to work around the single-tap/double-tap situation (once you actually have it), rather than include this very specific thing in the base gesture class... But maybe I'm missing some other scenarios? Have you met any?
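For reference, the canonical single-tap/double-tap case on iOS looks roughly like this (the handler selectors are illustrative):

    UITapGestureRecognizer *doubleTap =
        [[UITapGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handleDoubleTap:)];
    doubleTap.numberOfTapsRequired = 2;

    UITapGestureRecognizer *singleTap =
        [[UITapGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handleSingleTap:)];
    // The single tap fires only after the double tap has failed.
    [singleTap requireGestureRecognizerToFail:doubleTap];

    [view addGestureRecognizer:doubleTap];
    [view addGestureRecognizer:singleTap];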
OK, so I accidentally found another use-case: swipe on a UIScrollView (the scroll view's pan gesture recognizer requires the swipe to fail): developer.apple.com/videos/wwdc/2011/?id=104, around minute 30.
Was my question so unclear, or are there just not many iOS developers here?

How to get a UIView to talk to its controller

I'm relatively new to Objective-C and Cocoa... I've been trying to understand how to correctly implement the MVC pattern in Cocoa/Cocoa Touch for a long time now... I understand the idea behind it; it makes complete sense conceptually: a model holds the data, a view is what the user sees and can interact with, and the controller acts as the bridge between the two. View can't talk to the model, model can't talk to the view. Got it.
What doesn't make sense to me is how to use MVC efficiently… if the user can only interact with the view, and does something to interact with it (e.g., for an iPhone app, the user taps or drags within a subclass of UIView, triggering the "touchesBegan" and "touchesMoved" methods, etc.), how does the view communicate these events to the controller?
I've looked at countless examples and forums online, but have yet to find a simplified all-purpose way of achieving this… I know how to communicate with a controller through buttons, sliders, and other things that you can connect to an outlet, but for things that don't have a target-action mechanism, what's the best way to do it?
Thanks in advance for any suggestions regarding what to do, or where to look.
The standard way in Cocoa to do this is the delegate pattern (cf. UITableViewDelegate). Your view class would declare a delegate protocol and the controller sets itself as the view's delegate. The view then calls one of the delegate methods you defined whenever it wants to communicate something to the controller.
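A minimal sketch of that pattern, with illustrative names (DrawingView and its delegate method are not real UIKit API):

    @class DrawingView;

    @protocol DrawingViewDelegate <NSObject>
    - (void)drawingView:(DrawingView *)view didReceiveTouchAt:(CGPoint)point;
    @end

    @interface DrawingView : UIView
    @property (nonatomic, weak) id<DrawingViewDelegate> delegate;
    @end

    @implementation DrawingView

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
    {
        CGPoint point = [[touches anyObject] locationInView:self];
        // Forward the event to whoever is acting as the delegate
        // (typically the view controller).
        [self.delegate drawingView:self didReceiveTouchAt:point];
    }

    @end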
An alternative would be to implement the target-action mechanism for your view yourself. You get this more or less for free if you subclass from UIControl (just call sendActionsForControlEvents:) but it is quite easy to implement a system that works the same way for any custom class.
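Roughly, the UIControl route looks like this (KnobControl is an illustrative name):

    @interface KnobControl : UIControl
    @end

    @implementation KnobControl

    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    {
        // ... update internal state from the touch ...
        // Fire the standard target-action machinery.
        [self sendActionsForControlEvents:UIControlEventValueChanged];
    }

    @end

    // The controller then hooks itself up just like it would for a UISlider:
    // [knob addTarget:self action:@selector(knobChanged:)
    //        forControlEvents:UIControlEventValueChanged];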
(Edit: I suppose a third way is to have the controller observe properties of the view (with KVO). This wouldn't work well to communicate touch events but it is a feasible way if you want to notify the controller about a state change or something like that.)
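A rough sketch of that KVO variant, where selectionCount is a hypothetical KVO-compliant property on the custom view:

    // In the controller, e.g. in viewDidLoad (remember to call
    // removeObserver:forKeyPath: before the view goes away):
    [self.customView addObserver:self
                      forKeyPath:@"selectionCount"
                         options:NSKeyValueObservingOptionNew
                         context:NULL];

    // The controller is notified whenever the property changes:
    - (void)observeValueForKeyPath:(NSString *)keyPath
                          ofObject:(id)object
                            change:(NSDictionary *)change
                           context:(void *)context
    {
        if ([keyPath isEqualToString:@"selectionCount"]) {
            // React to the view's state change here.
        }
    }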