I understand the use and need of target-actions.
But I encountered this concept of "First Responder".
Can someone explain why it is needed? What can it do that can't be done using target-actions?
In an app, the responder object that first receives many kinds of events is known as the first responder. It receives key events, motion events, and action messages, among others. (Mouse events and multitouch events first go to the view that is under the mouse pointer or finger; that view might or might not be the first responder.) The first responder is typically the view in a window that an app deems best suited for handling an event. To receive an event, the responder must also indicate its willingness to become first responder; it does this in different ways for each platform.
When you design your app, it’s likely that you want to respond to events dynamically. For example, a touch can occur in many different objects onscreen, and you have to decide which object should respond to a given event and understand how that object receives it.
When a user-generated event occurs, UIKit creates an event object containing the information needed to process the event. Then it places the event object in the active app’s event queue. For touch events, that object is a set of touches packaged in a UIEvent object. For motion events, the event object varies depending on which framework you use and what type of motion event you are interested in.
An event travels along a specific path until it is delivered to an object that can handle it. First, the singleton UIApplication object takes an event from the top of the queue and dispatches it for handling. Typically, it sends the event to the app’s key window object, which passes the event to an initial object for handling. The initial object depends on the type of event.
Touch events. For touch events, the window object first tries to deliver the event to the view where the touch occurred. That view is known as the hit-test view. The process of finding the hit-test view is called hit-testing, which is described in “Hit-Testing Returns the View Where a Touch Occurred.”
Motion and remote control events. For these events, the window object sends the shaking-motion or remote control event to the first responder for handling. The first responder is described in “The Responder Chain Is Made Up of Responder Objects.”
The ultimate goal of these event paths is to find an object that can handle and respond to an event. Therefore, UIKit first sends the event to the object that is best suited to handle the event. For touch events, that object is the hit-test view, and for other events, that object is the first responder.
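To make this concrete, here is a minimal Objective-C sketch of a view that volunteers to receive such events; the class name is invented for illustration:

    // A hypothetical view that opts into first-responder status so it
    // can receive motion events such as shakes.
    @interface ShakeableView : UIView
    @end

    @implementation ShakeableView

    // A responder must explicitly agree to become first responder.
    - (BOOL)canBecomeFirstResponder {
        return YES;
    }

    // Sent to the first responder when a shake motion ends.
    - (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
        if (motion == UIEventSubtypeMotionShake) {
            NSLog(@"Shake delivered to the first responder");
        } else {
            [super motionEnded:motion withEvent:event];
        }
    }

    @end

Somewhere appropriate (for example in a view controller's viewDidAppear:) you would call becomeFirstResponder on that view so the window actually routes these events to it.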
For more info, look here...
Can someone please explain the high-level difference between these two methods? In particular, when would you use one over the other, and is there any overlap in terms of the purposes of these methods?
They seem like they serve the same purpose but don't appear to be related at all in the documentation, and this has me confused.
beginTrackingWithTouch:withEvent:
1) Declared by UIControl; you override it in a subclass.
2) Sent to the control when a touch related to the given event enters the control’s bounds.
3) Override it to provide custom tracking behavior (for example, to change the highlight appearance). To do this, use one or all of the following methods: beginTrackingWithTouch:withEvent:, continueTrackingWithTouch:withEvent:, endTrackingWithTouch:withEvent:.
touchesBegan:withEvent:
1) Declared by UIResponder; you override it in a subclass.
2) Tells the receiver when one or more fingers touch down in a view or window.
3) There are two general kinds of events: touch events and motion events.
The primary event-handling methods for touches are touchesBegan:withEvent:, touchesMoved:withEvent:, touchesEnded:withEvent:, and touchesCancelled:withEvent:.
The parameters of these methods associate touches with their events—especially touches that are new or have changed—and thus allow responder objects to track and handle the touches as the delivered events progress through the phases of a multi-touch sequence.
Any time a finger touches the screen, is dragged on the screen, or lifts from the screen, a UIEvent object is generated. The event object contains UITouch objects for all fingers on the screen or just lifted from it.
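To see what overriding the UIResponder-level methods looks like in practice, here is a minimal sketch; the class name is invented:

    // Raw touch handling at the UIResponder level.
    @interface TouchLoggingView : UIView
    @end

    @implementation TouchLoggingView

    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        UITouch *touch = [touches anyObject];
        CGPoint point = [touch locationInView:self];
        NSLog(@"Touch began at %@", NSStringFromCGPoint(point));
    }

    - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
        NSLog(@"Touch ended");
    }

    @end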
Having just run into this today, I think the key difference is that beginTrackingWithTouch and friends are only for tracking - not anything else - in particular not for target/action handling. So if you override touchesBegan, then you'd also be responsible for calling sendActionsForControlEvents when touches ended. But if you use beginTrackingWithTouch, that's handled for free.
I discovered this by implementing beginTrackingWithTouch (for a custom button control) thinking it was just a sideways replacement for handling touchesBegan. So in endTrackingWithTouch, I called sendActionsForControlEvents if touchInside was true. The end result was that the action was called twice, because first the built-in mechanism sent the action, then I called it. In my case, I'm just interested in customizing highlighting, so I took out the call to sendActionsForControlEvents, and all is good.
Summary: use beginTrackingWithTouch when all you need to do is customize tracking, and use touchesBegan when you need to customize the target/action handling (or other low-level details) as well.
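As a sketch of that summary (the control class is invented for illustration), a control that only customizes its highlight overrides the tracking methods and leaves action dispatch to the built-in mechanism:

    // Custom tracking only: UIControl still sends its target/action
    // events (e.g. UIControlEventTouchUpInside) by itself.
    @interface FadingButton : UIControl
    @end

    @implementation FadingButton

    - (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
        self.alpha = 0.5; // custom highlight
        return YES;       // keep receiving tracking updates
    }

    - (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
        [super endTrackingWithTouch:touch withEvent:event];
        self.alpha = 1.0;
        // Deliberately no sendActionsForControlEvents: here; as described
        // above, the built-in mechanism already fires the action.
    }

    @end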
If I understand the Apple documentation properly:
beginTracking:
Use the provided event information to detect which part of your control was hit and to set up any initial state information
So, it's more for control state configuration.
touchesBegan:
Many UIKit classes override this method and use it to handle the corresponding touch events
This method is more for touch event handling.
I'm learning about design patterns and I have stumbled upon a question I really don't know how to find an answer to. In the observer design pattern class diagrams, I have seen that a Concrete Observer usually has a reference to the Subject. But who sets the value of that reference? And how does the attach function get called? Do observers call it themselves, using the subject reference they hold, or does somebody else set the subject and then attach the observer to it? I've looked for examples, but I'm still having trouble finding the best way to implement this.
The observer is the component that wants to be notified about changes or events of the subject. It decides to observe the subject and adds itself to the list of observers that the subject maintains.
The typical use-case is a graphical panel containing a button. The graphical panel creates a button and adds it to itself. And it wants to display a dialog box every time the button is clicked. So it adds itself as an observer of the button, and the button notifies the panel when it's clicked.
In this example, the observer creates the object it observes. But there are situations where that is not the case, and when the reference to the subject is passed as an argument to its constructor or one of its methods. This is irrelevant to the principle of the observer pattern itself.
The Subject is an object which controls some event or has some property which the Observers are interested in. The Observers register themselves with the Subject to express that interest, and the Subject keeps a list of those registered Observers.
When the Subject's property changes, or the event of interest occurs, the Subject iterates through its list of registered Observers and notifies them of the change or event.
The specifics of how the Observers are notified can vary. It may be that they have a well-known method that gets called, or they may name a custom method to be called, specifying it as part of the registration process.
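To ground that in code, here is a minimal Objective-C sketch of the pattern; all names are invented for illustration:

    // The contract every observer implements.
    @protocol DemoObserver <NSObject>
    - (void)subjectDidChange:(id)subject;
    @end

    // The subject keeps the list of registered observers.
    @interface DemoSubject : NSObject
    - (void)attachObserver:(id<DemoObserver>)observer;
    - (void)notifyObservers;
    @end

    @implementation DemoSubject {
        // A production version would hold observers weakly
        // (e.g. NSHashTable) to avoid retain cycles.
        NSMutableArray *_observers;
    }

    - (instancetype)init {
        if ((self = [super init])) {
            _observers = [NSMutableArray array];
        }
        return self;
    }

    // Typically the observer calls this on itself:
    // [subject attachObserver:self]
    - (void)attachObserver:(id<DemoObserver>)observer {
        [_observers addObject:observer];
    }

    - (void)notifyObservers {
        for (id<DemoObserver> observer in _observers) {
            [observer subjectDidChange:self];
        }
    }

    @end

Whoever wires the two together (often the observer itself, using the subject reference it was given or created) makes the attach call; the subject never needs to know any concrete observer type.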
I come from a .NET web application background and have just started iOS development. The initial design of my app focuses on NSNotificationCenter. I was reasonably happy with this until I saw various posts explaining how reaching for NSNotificationCenter is a common newbie mistake.
Here is a simplified version of the problem I am trying to address:
My application is trying to show a list of messages that are populated using web service calls, think Facebook messaging.
When the app is first loaded it pulls a collection of messages from the server and displays them in a table to the user. The user can add new messages (which get sent back over the API) and the app can receive Push Notifications about new messages which are added to the feed.
The messages are never persisted to disk so I'm just using POCOs for the model to keep things simple.
I have a MessageFeedController which is responsible for populating the message feed view (in a storyboard). I also have a message feed model, which stores the currently retrieved values and has various methods:
- (void)loadFromServer;
- (void)createMessage:(DCMMessage *)message;
- (void)addMessage:(DCMMessage *)message;
- (NSArray *)messages;
- (int)unreadMessages;
The current implementation I have is this:
Use case 1 : Initial Load
When the view first appears the "loadFromServer" method is called. This populates the messages collection and posts a notification through NSNotificationCenter.
The controller observes this notification, and when it is received it populates the table view.
Use Case 2: New Message
When a user clicks the "add" button a new view appears, they enter their message, hit send and then the view is dismissed.
This calls the createMessage method on the model, which calls the API
Once we have a response, the model posts the notification through NSNotificationCenter
Once again the MessageFeedController listens for this event and re-populates the table
Use Case 3: Push Message
A push notification is received while the app is open, with the new message details
The AppDelegate (or some other class) will call the addMessage method on the model, to add it to the collection
Once again, assuming the MessageFeed view is open, it re-populates
In all three cases the MessageFeed view is updated. In addition to this, a BadgeManager also listens for these notifications; it has the responsibility of setting the app icon badge and the tab bar badge, and in both cases the badge number relates to the number of unread messages.
It's also possible that another view is open and listening for these notifications; this view holds a summary of messages, so it needs to know when the collection changes.
Right, thanks for sticking with me, my question is: does this seem like a valid use of NSNotificationCenter, or have I misused it?
One concern I have is that I'm not 100% sure what will happen if the messages collection changes half-way through re-populating the message table. The only time I could see this happening is if a push notification was received about a new message. In this case would the population of the table have to finish before acting upon the NSNotification anyway?
Thanks for your help
Dan.
In other words, you're posting a notification whenever the message list is updated. That's a perfectly valid use of NSNotificationCenter.
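For reference, a minimal sketch of that flow, with an invented notification name:

    // In the model, wherever the messages collection changes:
    [[NSNotificationCenter defaultCenter]
        postNotificationName:@"DCMMessagesDidChangeNotification"
                      object:self];

    // In MessageFeedController, e.g. in viewDidLoad:
    [[NSNotificationCenter defaultCenter]
        addObserver:self
           selector:@selector(messagesDidChange:)
               name:@"DCMMessagesDidChangeNotification"
             object:nil];

    // The handler re-populates the table:
    - (void)messagesDidChange:(NSNotification *)notification {
        [self.tableView reloadData];
    }

    // Don't forget to unregister, e.g. in dealloc:
    [[NSNotificationCenter defaultCenter] removeObserver:self];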
Another option is to use Key-Value Observing.
Your controller (and anyone else) can register as an observer of the "messages" property, and will be notified whenever that property changes. On the model side, you get KVO for free; simply calling a method named setMessages: will trigger the KVO change notification. You can also trigger the notification manually, and, if so desired, the KVO notification can include which indexes of the array were added, removed, or changed.
KVO is a standardized way to do these kinds of change notifications. It's particularly important when implementing an OS X app using Cocoa Bindings.
NSNotificationCenter is more flexible in that you can bundle any additional info with each notification.
It's important to ensure that your messages list is only updated on the main thread, and that the messages list is never modified without also posting a corresponding change notification. Your controller should also take care to ignore these notifications whenever it is not the top-most view controller or not on screen. It's not uncommon to register for change notifications in viewWillAppear: and unregister in viewWillDisappear:.
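A short sketch of the KVO route, assuming the controller holds a reference to the model in a property called model (that name is invented):

    // Register: the controller observes the model's "messages" property.
    [self.model addObserver:self
                 forKeyPath:@"messages"
                    options:NSKeyValueObservingOptionNew
                    context:NULL];

    // Receive the change notifications:
    - (void)observeValueForKeyPath:(NSString *)keyPath
                          ofObject:(id)object
                            change:(NSDictionary *)change
                           context:(void *)context {
        if ([keyPath isEqualToString:@"messages"]) {
            [self.tableView reloadData];
        }
    }

    // Unregister, e.g. in viewWillDisappear: or dealloc:
    [self.model removeObserver:self forKeyPath:@"messages"];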
In my opinion, a delegate protocol would be a much better fit for this scenario. Consider the case where your "API layer" needs to be used across many view controllers in an application. If another developer were introduced to your code, they would have to hunt around for NSNotificationCenter subscriptions instead of just following a clean, interface-like protocol.
That being said, your code will work just fine and this is a valid use of NSNotificationCenter. It is just my personal preference for 'cleaner' code to use a protocol-based approach. Take a look around the iOS SDK itself and you will see scenarios where Apple uses both protocols and notifications. I find it much easier to comprehend and use protocols than to dig around and work out which notification I must listen for.
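As a sketch of that protocol-based alternative (the protocol and method names are invented):

    @class DCMMessageFeedModel;

    @protocol DCMMessageFeedModelDelegate <NSObject>
    - (void)messageFeedModelDidUpdateMessages:(DCMMessageFeedModel *)model;
    @end

    @interface DCMMessageFeedModel : NSObject
    @property (nonatomic, weak) id<DCMMessageFeedModelDelegate> delegate;
    @end

    // Inside the model, wherever the collection changes:
    [self.delegate messageFeedModelDidUpdateMessages:self];

The controller adopts the protocol, sets itself as the model's delegate, and reloads the table in the delegate method.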
NSNotificationCenter runs the receivers' code synchronously as soon as a notification is posted, so a new message arriving during repopulation would join the back of that execution queue. On the whole it seems valid to me, and it keeps a reasonable degree of separation between the view controllers and the model.
Depending on the number of classes that are likely to want to listen for the same information arriving, you may want to use a delegate pattern, maybe keeping a dictionary of delegate objects. I personally don't feel that this scales as well, though, and you also have to take care of nil-ing out delegates when a page is deallocated to avoid crashes. To sum up, your approach seems good to me.
I just created a custom UIViewController with some user actions like touch. I would like to handle the user interaction in the parent object, in other words the one that created the view controller.
From other languages I am used to using events that are pushed up: my parent object would have some kind of listener on its reference to the view controller object, which it could react to.
How would that type of interaction be handled in Objective-C?
This can be done by 1) the responder chain, 2) notifications, and 3) delegates.
All UI objects form the responder chain, starting from the currently focused element, then its parent view, and so on, usually up to the application object. By sending an action to the special First Responder object in your nib, you throw it down the responder chain until someone handles it. You can use this mechanism to fire events without knowing who will handle them or when. This is similar to the HTML event model.
Notifications sent by NSNotificationCenter can be received by any number of listeners. This is the closest analog to, for example, C# events.
Delegation is the simplest mechanism: sending an event to one single object. The class declares a weak property named delegate that can be assigned any object, and a protocol that this object is supposed to implement. Many classes use this approach; the main problem is that you can't have more than one listener this way.
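A minimal sketch of that delegate mechanism for this question's parent/child scenario, with invented names:

    @class ChildViewController;

    @protocol ChildViewControllerDelegate <NSObject>
    - (void)childViewController:(ChildViewController *)child
         didReceiveTouchAtPoint:(CGPoint)point;
    @end

    @interface ChildViewController : UIViewController
    @property (nonatomic, weak) id<ChildViewControllerDelegate> delegate;
    @end

    // Inside ChildViewController, when the touch is handled:
    [self.delegate childViewController:self didReceiveTouchAtPoint:point];

The parent creates the child, sets child.delegate = self, and implements the protocol method; that is the Objective-C equivalent of the event listener you are used to.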
You should look into delegation for interactions between two view controllers. You need to understand how it works first.
https://developer.apple.com/library/mac/#documentation/General/Conceptual/DevPedia-CocoaCore/Delegation.html
It sounds like you need to implement a delegate protocol, which will allow your 'child' view controller to communicate back to its 'parent'.
See http://developer.apple.com/library/ios/#documentation/General/Conceptual/DevPedia-CocoaCore/Delegation.html
I am fairly new to Cocoa programming and I would like to ask if anyone can explain to me how the
-(BOOL)makeFirstResponder:(NSResponder *)responder; method works. I was planning on using it for NSEvent, but can anyone show me how to implement it?
I am trying to use the NSResponder class to get me a working -keyDown method.
NSResponder is one of the fundamental classes in Cocoa. Any class that can respond to events like key presses or menu commands should be a subclass of NSResponder. Each responder keeps track of its "next responder", and each window keeps track of the object that's currently the "first responder". When an event happens in a window, a message is sent to the first responder. If that object handles the message, great. If not, it passes it along to its next responder. This is known as the "responder chain."
Normally, you don't mess much with the responder chain in Cocoa. The first responder is mostly determined by user actions, such as clicking on a control.
It doesn't make sense to 'use it for NSEvent'. NSEvent isn't a responder, but something that enables responders to do their job.
If you describe more clearly what you're trying to accomplish, I'm sure we can point you in the right direction.
You don't usually implement -makeFirstResponder:; you call it to set the input focus to a view. What is it that you really want to achieve?
I am trying to use the NSResponder class to get me a working keyDown method.
That doesn't make sense. “Use” a class?
If you want to respond to key events, you should normally do that in a view that is capable of becoming the first responder (see the NSView docs).
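For example, a minimal sketch (class name invented) of a view that can become first responder and handle key events:

    // A custom view that accepts key events.
    @interface KeyView : NSView
    @end

    @implementation KeyView

    // Agree to become first responder so key events can reach this view.
    - (BOOL)acceptsFirstResponder {
        return YES;
    }

    - (void)keyDown:(NSEvent *)event {
        NSLog(@"Key pressed: %@", [event charactersIgnoringModifiers]);
    }

    @end

    // Elsewhere, this is where -makeFirstResponder: comes in:
    // [window makeFirstResponder:keyView];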
See also the Event-Handling Guide, the View Programming Guide, and the video for session 145 (“Key Event Handling in Cocoa Applications”) from the WWDC 2010 session videos (which you should be able to access through your developer account even if you didn't go to WWDC last year).