I'm using an NSDatePicker and NSLevelIndicator to set and display certain values of an object. I don't want to use bindings. My first thought was to set the delegate of the date picker/level indicator to my controller class so that I can be notified when either of them is changed. However, NSDatePicker and NSLevelIndicator don't have a delegate (at least, none that I can see in Interface Builder). How, then, do I keep track of when these controls are changed?
NSControl and its subclasses use the target / action mechanism to alert you when their value changes. Some delegate protocols work in a similar manner, but in general delegates are used to modify the behavior of an object, while target / action alerts your controller of a change in a UI control.
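For example, here is a minimal sketch of that wiring done in code (you can also connect the action in Interface Builder); the outlet and action names below are placeholders, not anything from the question:
#import <Cocoa/Cocoa.h>

// Controller that is notified via target/action whenever either control changes.
@interface MyController : NSObject
@property (weak) IBOutlet NSDatePicker *datePicker;
@property (weak) IBOutlet NSLevelIndicator *levelIndicator; // must be editable to respond to clicks
@end

@implementation MyController

- (void)awakeFromNib {
    // Each control messages its target with its action when the user changes it.
    [self.datePicker setTarget:self];
    [self.datePicker setAction:@selector(dateChanged:)];
    [self.levelIndicator setTarget:self];
    [self.levelIndicator setAction:@selector(levelChanged:)];
}

- (void)dateChanged:(NSDatePicker *)sender {
    NSLog(@"New date: %@", sender.dateValue);
}

- (void)levelChanged:(NSLevelIndicator *)sender {
    NSLog(@"New level: %f", sender.doubleValue);
}

@end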
I'm trying to make a NSTouchBar in an SDL application and I need to attach a responder to the NSWindow object (that's the only access SDL gives into the Cocoa windowing system).
https://developer.apple.com/reference/appkit/nstouchbar
If you explicitly adopt the NSTouchBarProvider protocol in an object,
you must also explicitly send the associated key-value observing
notifications within NSTouchBar methods; this lets the system respond
appropriately to changes in the bar.
What does that mean and how do I do it? I see lots of documentation about how to subscribe to the notifications, but not how to send them?
Right now I have:
@interface MyTouchBarResponder : NSResponder <NSTouchBarDelegate>
- (id)init;
- (NSTouchBar *)makeTouchBar;
- (nullable NSTouchBarItem *)touchBar:(NSTouchBar *)touchBar makeItemForIdentifier:(NSTouchBarItemIdentifier)identifier;
@property(strong, readonly) NSTouchBar *touchBar;
@end
and I'm attaching it to the window with the code from a previous question I asked here: How to create an NSTouchBar from an NSWindow object?
touchBarResponder.nextResponder = window.nextResponder;
window.nextResponder = touchBarResponder;
but my callbacks aren't ever being called (I put exit(0) in them to make it very obvious). When I hack the code directly into the SDL library, things work as expected, but that's not a viable permanent solution.
Thank you.
First, your custom responder should conform to NSTouchBarProvider (in the above, you declare the touchBar property, but not the explicit conformance)
Second, you want to make sure that your custom responder is in the responder chain of the window (whether as the first responder or just later in the chain). After adjusting the responder chain with your above code, call -makeFirstResponder: on the window and pass in either some view in the window (if you need that view to be first responder) or the custom responder object itself. You should then verify that the window's firstResponder is that object.
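In code, that might look something like this (a sketch using the window and touchBarResponder from the question):
touchBarResponder.nextResponder = window.nextResponder;
window.nextResponder = touchBarResponder;
[window makeFirstResponder:touchBarResponder];

// Sanity check: this should print your custom responder.
NSLog(@"first responder: %@", window.firstResponder);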
With these in place, you should get at least one call to touchBar after the window is shown and made key.
To answer the question about the key-value observing notifications: those are needed when you want to change the actual NSTouchBar object returned from touchBar. In the general case this isn't necessary: it never matters for a static touch bar, and even in the dynamic case you can usually just set defaultItemIdentifiers on the previously created touch bar and it will update. However, should you need to replace the touch bar object, you must ensure that -willChangeValueForKey: and -didChangeValueForKey: are sent for touchBar when the return value changes. This developer documentation on KVO goes into much more detail.
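Putting both points together, a rough sketch of the responder might look like the following. The replaceTouchBar: method and the _touchBar ivar are made-up names for illustration, not required API:
@interface MyTouchBarResponder : NSResponder <NSTouchBarProvider, NSTouchBarDelegate>
@property (strong, readonly) NSTouchBar *touchBar;
@end

@implementation MyTouchBarResponder {
    NSTouchBar *_touchBar; // backing storage for the readonly property
}

- (NSTouchBar *)touchBar {
    return _touchBar;
}

// Hypothetical helper for swapping in a different bar at runtime.
- (void)replaceTouchBar:(NSTouchBar *)newBar {
    // Bracket the change with KVO notifications so the system sees the swap.
    [self willChangeValueForKey:@"touchBar"];
    _touchBar = newBar;
    [self didChangeValueForKey:@"touchBar"];
}

@end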
Can someone please explain the high-level difference between these two methods? In particular, when would you use one over the other, and is there any overlap in terms of the purposes of these methods?
They seem like they serve the same purpose but don't appear to be related at all in documentation, and this has me confused.
beginTrackingWithTouch:withEvent:
1) subclass UIControl
2) Sent to the control when a touch related to the given event enters the control’s bounds.
3) To provide custom tracking behavior (for example, to change the highlight appearance).
To do this, use one or all of the following methods: beginTrackingWithTouch:withEvent:, continueTrackingWithTouch:withEvent:, endTrackingWithTouch:withEvent:
touchesBegan:withEvent:
1) subclass UIResponder
2) Tells the receiver when one or more fingers touch down in a view or window.
3) There are two general kinds of events: touch events and motion events.
The primary event-handling methods for touches are touchesBegan:withEvent:, touchesMoved:withEvent:, touchesEnded:withEvent:, and touchesCancelled:withEvent:.
The parameters of these methods associate touches with their events—especially touches that are new or have changed—and thus allow responder objects to track and handle the touches as the delivered events progress through the phases of a multi-touch sequence.
Any time a finger touches the screen, is dragged on the screen, or lifts from the screen, a UIEvent object is generated. The event object contains UITouch objects for all fingers on the screen or just lifted from it.
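For reference, a bare-bones sketch of the UIResponder side, i.e. a UIView subclass overriding the touch methods (the class name is a placeholder):
#import <UIKit/UIKit.h>

@interface TouchLoggingView : UIView
@end

@implementation TouchLoggingView

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    [super touchesBegan:touches withEvent:event];
    NSLog(@"touch down at %@", NSStringFromCGPoint([touches.anyObject locationInView:self]));
}

- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:touches withEvent:event];
    NSLog(@"touch up");
}

@end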
Having just run into this today, I think the key difference is that beginTrackingWithTouch and friends are only for tracking - not anything else - in particular not for target/action handling. So if you override touchesBegan, then you'd also be responsible for calling sendActionsForControlEvents when touches ended. But if you use beginTrackingWithTouch, that's handled for free.
I discovered this by implementing beginTrackingWithTouch (for a custom button control) thinking it was just a sideways replacement for handling touchesBegan. So in endTrackingWithTouch, I called sendActionsForControlEvents if touchInside was true. The end result was that the action was called twice, because first the built-in mechanism sent the action, and then I called it again. In my case, I'm just interested in customizing highlighting, so I took out the call to sendActionsForControlEvents, and all is good.
Summary: use beginTrackingWithTouch when all you need to do is customize tracking, and use touchesBegan when you need to customize the target/action handling (or other low-level details) as well.
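As an illustration, here's a minimal sketch of that first case: a custom control that only customizes highlighting and leaves action dispatch to UIControl's built-in machinery (the class name is a placeholder):
@interface HighlightControl : UIControl
@end

@implementation HighlightControl

- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    self.alpha = 0.5; // custom highlight only; no action sending here
    return [super beginTrackingWithTouch:touch withEvent:event];
}

- (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    self.alpha = 1.0;
    // Do NOT call sendActionsForControlEvents: here; UIControl already sends
    // UIControlEventTouchUpInside when the touch ends inside the control.
    [super endTrackingWithTouch:touch withEvent:event];
}

- (void)cancelTrackingWithEvent:(UIEvent *)event {
    self.alpha = 1.0;
    [super cancelTrackingWithEvent:event];
}

@end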
If I properly understand the Apple documentation:
beginTracking:
Use the provided event information to detect which part of your control was hit and to set up any initial state information
So, it's more for control state configuration.
touchesBegan:
Many UIKit classes override this method and use it to handle the corresponding touch events
This method is more for touch event handling.
I have a UIBarButtonItem. When it receives a message it cannot handle, I want it to forward that message to a particular view controller.
I thought I might be able to accomplish this using the bar button item's forwardingTargetForSelector: method, but apparently no such property is found on objects of type UIBarButtonItem. (Point of terminology: Does that mean forwardingTargetForSelector: is a private property? edit: Wait, I think I'm confused... methods with a colon at the end aren't properties... so can you ever make public a method (like a getter/setter) to which parameters are passed?)
And does that mean that in order to set the value of forwardingTargetForSelector: I must do it from within the .m file of the object for which I want to set it? Which would mean that I would have to subclass my UIBarButtonItem?
And if so, why is this not a public property of NSObjects?
And moreover, what's the best way to achieve my forwarding goal, preferably avoiding subclassing?
additional information:
It all stems from my inclination to reuse a single action in response to various instances of an identical button being pressed. The action is currently contained in my delegate (see How should I implement [almost] identical actions used throughout various VCs? (Answer: use a category)) and varies only in that it should send a presentViewController message to the view controller that instantiated the button that sent the action. Thus, in the action, I can send a presentViewController message to sender, which is an instance of the button, and I want to be able to forward that message to the view controller that created that instance of the button, which I can do if I set each button's forwarding property immediately after it is instantiated in its respective view controller.
I hoped to avoid the "why" just to make the question shorter, but there ya go.
forwardingTargetForSelector: is not really a property; it's more like a question the runtime asks an instance when the instance doesn't respond to a message.
It can't be a property in the @property/declared-property sense, because each selector could have a different target; there would need to be a mapping behind it. That's just not how declared properties work.
UIBarButtonItem descends from NSObject, and it inherits this method along with all the others, but you can't "set" the forwarding target for a selector from outside an instance (without creating some other machinery to allow you to do so, anyway -- possible, but not available by default).
In order to utilize this method, yes, you have to implement it in the class that is doing the forwarding. This does indeed mean subclassing. It also means that the forwarding instance needs to have a reference to the object to which it is forwarding; this requires careful design.
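For illustration only, a sketch of what that subclassing route could look like; the forwardingViewController property is invented here to hold the reference mentioned above, it is not an existing API:
@interface ForwardingBarButtonItem : UIBarButtonItem
@property (weak, nonatomic) UIViewController *forwardingViewController; // hypothetical
@end

@implementation ForwardingBarButtonItem

- (id)forwardingTargetForSelector:(SEL)aSelector {
    // The runtime asks this only when the item itself can't handle the message.
    if ([self.forwardingViewController respondsToSelector:aSelector]) {
        return self.forwardingViewController;
    }
    return [super forwardingTargetForSelector:aSelector];
}

@end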
forwardingTargetForSelector: is all but certainly not the correct way to achieve whatever your goal is. In general, in fact, it's a bit esoteric.
I'm not sure exactly what problem you're trying to solve ("making a button forward messages it doesn't respond to" is still rather general -- in particular, why is that necessary?), so it's hard to be more precise.
I have a view with multiple dynamically created UITextFields and UISegmentedControls on it (but for the purposes of this question, there could also be UIButtons, UISwitches, UISliders, or anything else that inherits from UIControl). I want to perform an action whenever the user finishes interacting with any of the controls, regardless of which subclass of UIControl it belongs to. From looking at other questions, I think I want to use addTarget:action:forControlEvents: to add observers to each of my controls after they are created, but I don't know which event I'm looking for. I've tried all the ones listed in the Apple Docs here that seemed relevant, but none of them seem to be triggered every time. I'm looking for something like .LostFocus in VBA, but I can't seem to find out what that is - I know there is a becomeFirstResponder method to make a control active, but I can't find anything like a "lostFirstResponder" event.
I suppose I could use isKindOfClass to tell what kind of control it is, and set up my event accordingly, but that seems a little sloppy and I feel like there should be a more direct way to do it. I could also probably set up a UITapGestureRecognizer and build up something that way, but that still feels like a workaround and not really the way it's supposed to be done.
If you're willing to subclass, you can override -resignFirstResponder to detect lost "focus", and act accordingly. This is probably only useful for things like textfields which can hold first responder status, and would not work for UISwitch for instance.
Since all UIControl objects are just UIViews, you can also override touchesEnded to detect the end of interaction with these elements, although the more accepted way is to add your dismissal handler method as an action for all the UIControlEvents that indicate the end of interaction, or just UIControlEventValueChanged.
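For example, a sketch of that approach, wired up somewhere like viewDidLoad (the control properties and the handler name are placeholders):
- (void)wireUpControls {
    [self.textField addTarget:self action:@selector(controlFinished:)
             forControlEvents:UIControlEventEditingDidEnd];
    [self.segmentedControl addTarget:self action:@selector(controlFinished:)
                    forControlEvents:UIControlEventValueChanged];
    [self.slider addTarget:self action:@selector(controlFinished:)
          forControlEvents:UIControlEventValueChanged];
}

- (void)controlFinished:(UIControl *)sender {
    NSLog(@"Finished interacting with %@", sender);
}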
More info on UIResponder here from Apple's Documentation:
https://developer.apple.com/library/ios/documentation/uikit/reference/UIResponder_Class/Reference/Reference.html#//apple_ref/occ/instm/UIResponder/resignFirstResponder
Many UIKit classes have delegate methods that indicate when interactions have ended; for instance, UITextField's delegate protocol (UITextFieldDelegate) has a textFieldDidEndEditing: method. UITextView has similar delegate methods.
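A quick sketch of that delegate route for text fields (the class name is a placeholder; remember to set each field's delegate to the controller):
@interface MyViewController : UIViewController <UITextFieldDelegate>
@end

@implementation MyViewController

- (void)textFieldDidEndEditing:(UITextField *)textField {
    // Called when the field resigns first responder status.
    NSLog(@"Done editing: %@", textField.text);
}

@end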
I just created a custom UIViewController with some user actions like touch. I would like to handle the user interaction in the parent object, in other words the one that created the view controller.
From other languages I am used to using events that are pushed up, so my parent object would register some kind of listener on its reference to the ViewController object and react to those events.
How would that type of interaction be handled in Objective-C?
This can be done with 1) the responder chain, 2) notifications, and 3) delegates.
All UI objects form the responder chain, starting from the currently focused element, then its parent view, and so on, usually up to the application object. By sending an action to the special First Responder object in your nib, you send it up the responder chain until someone handles it. You can use this mechanism for firing events without knowing who will handle them or when. This is similar to the HTML event model.
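For example, in UIKit you can fire an action with a nil target and it will travel up the responder chain until some responder implements it (the selector name here is made up for illustration):
[[UIApplication sharedApplication] sendAction:@selector(childViewDidSomething:)
                                           to:nil   // nil target = start at the first responder and walk the chain
                                         from:self
                                     forEvent:nil];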
Notifications sent through NSNotificationCenter can be received by any number of listeners. This is the closest analog to, for example, C# events.
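A small sketch of the notification route (the notification name and handler are placeholders):
static NSString * const MyChildActionNotification = @"MyChildActionNotification";

// In the child view controller, when the interaction happens:
[[NSNotificationCenter defaultCenter] postNotificationName:MyChildActionNotification object:self];

// In the parent, e.g. in viewDidLoad (remember to removeObserver: in dealloc):
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(childDidAct:)
                                             name:MyChildActionNotification
                                           object:nil];

- (void)childDidAct:(NSNotification *)note {
    NSLog(@"Child reported an action: %@", note.object);
}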
Delegation is the simplest mechanism: the event is sent to one single object. The class declares a weak property named delegate that can be assigned any object, and a protocol that this object is expected to implement. Many classes use this approach; the main limitation is that you can't have more than one listener this way.
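A minimal sketch of the delegate pattern for this child-to-parent case (all names are placeholders):
@class ChildViewController;

@protocol ChildViewControllerDelegate <NSObject>
- (void)childViewControllerDidReceiveTouch:(ChildViewController *)child;
@end

@interface ChildViewController : UIViewController
@property (weak, nonatomic) id<ChildViewControllerDelegate> delegate;
@end

@implementation ChildViewController

- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    [super touchesEnded:touches withEvent:event];
    [self.delegate childViewControllerDidReceiveTouch:self];
}

@end

// The parent adopts ChildViewControllerDelegate and assigns itself:
// childVC.delegate = self;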
You should look into delegation (delegates) for interactions between two view controllers. You need to understand how it works first.
https://developer.apple.com/library/mac/#documentation/General/Conceptual/DevPedia-CocoaCore/Delegation.html
It sounds like you need to implement a delegate protocol, which will allow your 'child' view controller to communicate back to its 'parent'.
See http://developer.apple.com/library/ios/#documentation/General/Conceptual/DevPedia-CocoaCore/Delegation.html