What is the difference between beginTrackingWithTouch and touchesBegan? - objective-c

Can someone please explain the high-level difference between these two methods? In particular, when would you use one over the other, and is there any overlap in terms of the purposes of these methods?
They seem like they serve the same purpose but don't appear to be related at all in documentation, and this has me confused.

beginTrackingWithTouch:withEvent:
1) Declared on UIControl; you override it in a UIControl subclass.
2) Sent to the control when a touch related to the given event enters the control’s bounds.
3) Used to provide custom tracking behavior (for example, to change the highlight appearance).
To do this, override one or all of the following methods: beginTrackingWithTouch:withEvent:, continueTrackingWithTouch:withEvent:, endTrackingWithTouch:withEvent: (see the sketch below).
touchesBegan:withEvent:
1) Declared on UIResponder; you override it in a subclass of UIResponder (UIView, UIViewController, and so on).
2) Tells the receiver when one or more fingers touch down in a view or window.
3) There are two general kinds of events: touch events and motion events.
The primary event-handling methods for touches are touchesBegan:withEvent:, touchesMoved:withEvent:, touchesEnded:withEvent:, and touchesCancelled:withEvent:.
The parameters of these methods associate touches with their events—especially touches that are new or have changed—and thus allow responder objects to track and handle the touches as the delivered events progress through the phases of a multi-touch sequence.
Any time a finger touches the screen, is dragged on the screen, or lifts from the screen, a UIEvent object is generated. The event object contains UITouch objects for all fingers on the screen or just lifted from it.
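
To make the distinction concrete, here is a minimal sketch of a UIControl subclass that customizes tracking only; the class name and alpha values are hypothetical:

    #import <UIKit/UIKit.h>

    // Hypothetical control that dims itself while a touch is being tracked.
    @interface FadingControl : UIControl
    @end

    @implementation FadingControl

    - (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
        self.alpha = 0.5;  // custom highlight appearance
        return YES;        // YES means "keep sending me tracking messages"
    }

    - (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
        // Only look highlighted while the finger is inside the control.
        self.alpha = self.touchInside ? 0.5 : 1.0;
        return YES;
    }

    - (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
        [super endTrackingWithTouch:touch withEvent:event];
        self.alpha = 1.0;
        // Note: no sendActionsForControlEvents: here; UIControl fires the
        // target/action machinery on its own, as the next answer explains.
    }

    @end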

Having just run into this today, I think the key difference is that beginTrackingWithTouch and friends are only for tracking - not anything else - in particular not for target/action handling. So if you override touchesBegan, then you'd also be responsible for calling sendActionsForControlEvents when touches ended. But if you use beginTrackingWithTouch, that's handled for free.
I discovered this by implementing beginTrackingWithTouch (for a custom button control) thinking it was just a sideways replacement for handling touchesBegan. So in endTrackingWithTouch, I called sendActionsForControlEvents if touchInside was true. The end result was that the action was called twice: first the built-in mechanism sent the action, then I called it. In my case I was only interested in customizing highlighting, so I took out the call to sendActionsForControlEvents, and all is good.
Summary: use beginTrackingWithTouch when all you need to do is customize tracking, and use touchesBegan when you need to customize the target/action handling (or other low-level details) as well.
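
By contrast, here is a minimal sketch of the touchesBegan route, where sending the action becomes your job; the class name is hypothetical, and super is deliberately not called so the built-in tracking stays out of the way:

    #import <UIKit/UIKit.h>

    // Hypothetical control that handles raw touches itself.
    @interface RawTouchControl : UIControl
    @end

    @implementation RawTouchControl

    // Deliberately not calling super: UIControl drives its tracking (and its
    // automatic action dispatch) from these methods, and we are replacing it.
    - (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
        self.highlighted = YES;
    }

    - (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
        self.highlighted = NO;
        CGPoint point = [[touches anyObject] locationInView:self];
        // Because we bypassed the built-in tracking, we must fire the
        // target/action mechanism ourselves.
        if (CGRectContainsPoint(self.bounds, point)) {
            [self sendActionsForControlEvents:UIControlEventTouchUpInside];
        }
    }

    - (void)touchesCancelled:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
        self.highlighted = NO;
    }

    @end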

If I understand the Apple documentation correctly:
beginTracking:
Use the provided event information to detect which part of your control was hit and to set up any initial state information
So, it's more for control state configuration.
touchesBegan:
Many UIKit classes override this method and use it to handle the corresponding touch events
This method is more for touch event handling.

Related

What is the equivalent of Java's Robot class in Cocoa?

I would like to use Cocoa to control user input and mouse movement. In Java, I can do these things using the Robot class. Which library/class should I check in the Cocoa framework? Thanks.
In order to move the mouse programmatically you can use Quartz Display Services, and CGWarpMouseCursorPosition in particular. For more information check out this chapter: Controlling the Mouse Cursor.
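A minimal sketch of both moving and clicking; the warp call is what the answer names, while the CGEventPost click synthesis is an extra that, on modern macOS, assumes the process has been granted accessibility permission:

    #import <ApplicationServices/ApplicationServices.h>

    int main(void) {
        // Move the cursor to (200, 300) in global display coordinates.
        CGWarpMouseCursorPosition(CGPointMake(200.0, 300.0));

        // Synthesize a left click at the same point.
        CGPoint where = CGPointMake(200.0, 300.0);
        CGEventRef down = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseDown,
                                                  where, kCGMouseButtonLeft);
        CGEventRef up = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseUp,
                                                where, kCGMouseButtonLeft);
        CGEventPost(kCGHIDEventTap, down);
        CGEventPost(kCGHIDEventTap, up);
        CFRelease(down);
        CFRelease(up);
        return 0;
    }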
In Cocoa, mouse movements can be tracked by a combination of NSWindow, NSView and NSResponder.
You probably know about the responder chain, where each object in the chain gets a chance to respond, depending on whether it is the first responder.
There are a few methods you will want to check:
mouseDown:
mouseDragged:
mouseUp:
mouseMoved: etc.
For more, read Handling Mouse Events.
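
Put together, a minimal sketch of an NSView subclass overriding these methods; the class name is hypothetical:

    #import <Cocoa/Cocoa.h>

    // Hypothetical view that logs mouse activity.
    @interface MouseTrackingView : NSView
    @end

    @implementation MouseTrackingView

    - (void)mouseDown:(NSEvent *)event {
        NSPoint p = [self convertPoint:event.locationInWindow fromView:nil];
        NSLog(@"mouseDown at %@", NSStringFromPoint(p));
    }

    - (void)mouseDragged:(NSEvent *)event {
        NSLog(@"mouseDragged");
    }

    - (void)mouseUp:(NSEvent *)event {
        NSLog(@"mouseUp");
    }

    // mouseMoved: is only delivered if you opt in, e.g. via
    // [self.window setAcceptsMouseMovedEvents:YES] or an NSTrackingArea.
    - (void)mouseMoved:(NSEvent *)event {
        NSLog(@"mouseMoved");
    }

    @end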

Use of first responders and first responder vs. target-action

I understand the use and need of target-actions.
But I encountered this concept of "First Responder".
Can someone explain why it is needed? What can it do that can't be done using target-actions?
In an app, the responder object that first receives many kinds of events is known as the first responder. It receives key events, motion events, and action messages, among others. (Mouse events and multitouch events first go to the view that is under the mouse pointer or finger; that view might or might not be the first responder.) The first responder is typically the view in a window that an app deems best suited for handling an event. To receive an event, the responder must also indicate its willingness to become first responder; it does this in different ways for each platform.
When you design your app, it’s likely that you want to respond to events dynamically. For example, a touch can occur in many different objects onscreen, and you have to decide which object you want to respond to a given event and understand how that object receives the event.
When a user-generated event occurs, UIKit creates an event object containing the information needed to process the event. Then it places the event object in the active app’s event queue. For touch events, that object is a set of touches packaged in a UIEvent object. For motion events, the event object varies depending on which framework you use and what type of motion event you are interested in.
An event travels along a specific path until it is delivered to an object that can handle it. First, the singleton UIApplication object takes an event from the top of the queue and dispatches it for handling. Typically, it sends the event to the app’s key window object, which passes the event to an initial object for handling. The initial object depends on the type of event.
Touch events. For touch events, the window object first tries to deliver the event to the view where the touch occurred. That view is known as the hit-test view. The process of finding the hit-test view is called hit-testing, which is described in “Hit-Testing Returns the View Where a Touch Occurred” (a sketch follows below).
Motion and remote control events. With these events, the window object sends the shaking-motion or remote control event to the first responder for handling. The first responder is described in “The Responder Chain Is Made Up of Responder Objects.”
The ultimate goal of these event paths is to find an object that can handle and respond to an event. Therefore, UIKit first sends the event to the object that is best suited to handle the event. For touch events, that object is the hit-test view, and for other events, that object is the first responder.
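
For touch events, hit-testing is what picks that initial view. A minimal sketch of a view that observes the process; the class name is hypothetical:

    #import <UIKit/UIKit.h>

    // Hypothetical view that logs which subview wins hit-testing.
    @interface HitTestLoggingView : UIView
    @end

    @implementation HitTestLoggingView

    - (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
        UIView *hit = [super hitTest:point withEvent:event];
        NSLog(@"hit-test view: %@", hit);
        return hit;
    }

    @end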

What's the preferred event to handle the end of user interaction with a UIControl?

I have a view with multiple dynamically created UITextFields and UISegmentedControls on it (but for the purposes of this question, there could also be UIButtons, UISwitches, UISliders, or anything else that inherits from UIControl). I want to perform an action whenever the user finishes interacting with any of the controls, regardless of which subclass of control it belongs to. From looking at other questions, I think I want to use addTarget:action:forControlEvents: to add observers to each of my controls after they are created, but I don't know which event I'm looking for. I've tried all the ones listed in the Apple docs that seemed relevant, but none of them seem to be triggered every time. I'm looking for something like .LostFocus in VBA, but I can't seem to find out what that is; I know there is a becomeFirstResponder method to make a control active, but I can't find anything like a "lostFirstResponder" event.
I suppose I could use isKindOfClass to tell what kind of control it is, and set up my event accordingly, but that seems a little sloppy and I feel like there should be a more direct way to do it. I could also probably set up a UITapGestureRecognizer and build up something that way, but that still feels like a workaround and not really the way it's supposed to be done.
If you're willing to subclass, you can override -resignFirstResponder to detect lost "focus", and act accordingly. This is probably only useful for things like textfields which can hold first responder status, and would not work for UISwitch for instance.
Since all UIControl objects are just UIViews, you can also override touchesEnded to detect the end of interaction with these elements, although the more accepted way is to add your dismissal-handler method as an action for all the UIControlEvents that indicate the end of interaction, or just UIControlEventValueChanged (see the sketch below).
More info on UIResponder here from Apple's Documentation:
https://developer.apple.com/library/ios/documentation/uikit/reference/UIResponder_Class/Reference/Reference.html#//apple_ref/occ/instm/UIResponder/resignFirstResponder
Many UIKit classes have delegate methods that indicate when interactions have ended, for instance UITextField has a textFieldDidEndEditing method. UITextView has similar methods.
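
A minimal sketch of that target/action approach; the class and selector names are hypothetical, and the event mask should be adjusted to the controls actually in use:

    #import <UIKit/UIKit.h>

    // Hypothetical view controller that watches all of its controls.
    @interface InteractionWatcher : UIViewController
    @end

    @implementation InteractionWatcher

    - (void)observeControl:(UIControl *)control {
        // One registration covers taps, value changes, and end-of-editing.
        [control addTarget:self
                    action:@selector(controlInteractionEnded:)
          forControlEvents:(UIControlEventTouchUpInside |
                            UIControlEventTouchUpOutside |
                            UIControlEventValueChanged |
                            UIControlEventEditingDidEnd)];
    }

    - (void)controlInteractionEnded:(id)sender {
        NSLog(@"interaction ended on %@", sender);
    }

    @end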

Sending an event up the UIResponder chain?

I just created a custom UIViewController with some user actions, such as touches. I would like to handle the user interaction in the parent object; in other words, the one that created the view controller.
From other languages I am used to using events that are pushed up, so my parent object would have some kind of listener on the view controller object that it can react to.
How is that type of interaction handled in Objective-C?
This can be done by 1) responder chain, 2) notifications and 3) delegates.
All UI objects form the responder chain, starting from the currently focused element, then its parent view, and so on, usually up to the application object. By sending an action to the special First Responder object in your nib, you send it along the responder chain until someone handles it. You can use this mechanism for firing events without knowing who will handle them or when. This is similar to the HTML event model.
Notifications sent by NSNotificationCenter can be received by any number of listeners. This is the closest analog to e.g. C# events.
Delegation is the simplest mechanism, sending the event to one single object. The class declares a weak property named delegate that can be assigned any object, and a protocol that this object is supposed to implement. Many classes use this approach; the main problem is that you can't have more than one listener this way.
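
For the responder-chain option on iOS, an action sent to a nil target travels the chain until some responder handles it. A minimal sketch; the selector is hypothetical:

    // With a nil target, UIKit delivers the action to the first responder
    // and then up the responder chain until a responder implements
    // -handleCustomAction:.
    [[UIApplication sharedApplication] sendAction:@selector(handleCustomAction:)
                                               to:nil
                                             from:self
                                         forEvent:nil];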
You should look into delegation for interactions between two view controllers. You need to understand how it works first.
https://developer.apple.com/library/mac/#documentation/General/Conceptual/DevPedia-CocoaCore/Delegation.html
It sounds like you need to implement a delegate protocol, which will allow your 'child' view controller to communicate back to its 'parent'.
See http://developer.apple.com/library/ios/#documentation/General/Conceptual/DevPedia-CocoaCore/Delegation.html
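
A minimal sketch of such a protocol; all names here are hypothetical:

    #import <UIKit/UIKit.h>

    @class ChildViewController;

    // Protocol the parent adopts to hear about the child's events.
    @protocol ChildViewControllerDelegate <NSObject>
    - (void)childViewController:(ChildViewController *)child
                 didSelectValue:(NSString *)value;
    @end

    @interface ChildViewController : UIViewController
    @property (nonatomic, weak) id<ChildViewControllerDelegate> delegate;
    @end

    @implementation ChildViewController
    - (void)handleTap {
        // Push the event up to whoever created this controller.
        [self.delegate childViewController:self didSelectValue:@"tapped"];
    }
    @end

The parent assigns itself as child.delegate and implements the protocol method to receive the callback.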

-makeFirstResponder: usage

I am fairly new to Cocoa programming, and I would like to ask if anyone can explain how the
-(BOOL)makeFirstResponder:(NSResponder *)responder; method works. I was planning on using it for NSEvent, but can anyone show me how to implement it?
I am trying to use the NSResponder class to get me a working -keyDown method.
NSResponder is one of the fundamental classes in Cocoa. Any class that can respond to events like key presses or menu commands should be a subclass of NSResponder. Each responder keeps track of its "next responder", and each window keeps track of the object that's currently the "first responder". When an event happens in a window, a message is sent to the first responder. If that object handles the message, great. If not, it passes it along to its next responder. This is known as the "responder chain."
Normally, you don't mess much with the responder chain in Cocoa. The first responder is mostly determined by user actions, such as clicking on a control.
It doesn't make sense to 'use it for NSEvent'. NSEvent isn't a responder, but something that enables responders to do their job.
If you describe more clearly what you're trying to accomplish, I'm sure we can point you in the right direction.
You don't usually implement -makeFirstResponder:; you call it to set the input focus to a view. What is it that you really want to achieve?
I am trying to use the NSResponder class to get me a working keyDown method.
That doesn't make sense. “Use” a class?
If you want to respond to key events, you normally should do that in a view that should be capable of becoming the first responder (see the NSView docs).
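A minimal sketch of such a view; the class name is hypothetical:

    #import <Cocoa/Cocoa.h>

    // Hypothetical view that handles key presses.
    @interface KeyHandlingView : NSView
    @end

    @implementation KeyHandlingView

    // Opt in, so the window can make this view the first responder.
    - (BOOL)acceptsFirstResponder {
        return YES;
    }

    - (void)keyDown:(NSEvent *)event {
        NSLog(@"keyDown: %@", event.characters);
    }

    @end

    // Elsewhere, to give the view input focus explicitly:
    // [window makeFirstResponder:keyView];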
See also the Event-Handling Guide, the View Programming Guide, and the video for session 145 (“Key Event Handling in Cocoa Applications”) from the WWDC 2010 session videos (which you should be able to access through your developer account even if you didn't go to WWDC last year).