I need to call a block on a [cmd+double-click] event from NSTableView. I know about the -setDoubleAction: API to set the selector for the double-click event and the -keyDown: NSResponder method for handling key press events.
What I need is kind of a combination of these 2 in a single customised event handler. Any pointers would be really appreciated.
Is there any way to register for such a custom event setting a callback selector?
I'm not sure if that is good enough for you, but I'd declare a BOOL property, set it to YES in keyDown for the desired key, and set it back to NO in keyUp.
In the action for the double-click event I'd check that BOOL property: if it's YES, do the thing; otherwise return.
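Roughly, in Swift, that could look like the sketch below (a minimal sketch; class, property, and method names are illustrative). One wrinkle: a modifier-only key press such as cmd is delivered via -flagsChanged:, not -keyDown:, so the flag is tracked there.

import Cocoa

// Hypothetical NSTableView subclass that tracks whether cmd is held.
class CommandAwareTableView: NSTableView {
    private(set) var commandKeyIsDown = false

    // Modifier-only key presses arrive via flagsChanged(with:), not keyDown(with:).
    // (The view must be in the responder chain to receive this.)
    override func flagsChanged(with event: NSEvent) {
        commandKeyIsDown = event.modifierFlags.contains(.command)
        super.flagsChanged(with: event)
    }
}

class TableController: NSObject {
    @IBOutlet weak var tableView: CommandAwareTableView!

    override func awakeFromNib() {
        super.awakeFromNib()
        tableView.target = self
        tableView.doubleAction = #selector(tableViewDoubleClicked(_:))
    }

    @objc func tableViewDoubleClicked(_ sender: CommandAwareTableView) {
        guard sender.commandKeyIsDown else { return } // plain double-click: ignore
        // cmd+double-click: call your block here
    }
}

An alternative that avoids the subclass entirely: inside the double-click action, check NSApp.currentEvent?.modifierFlags.contains(.command), since the current event at that point is the mouse click that triggered the action.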
I'd like to listen in to the text change event for an NSTextField, so I looked at the docs and saw a textDidChange event. So, I connect my text field to a delegate and implement the method, run the app, and nothing happens. I also tried textDidEndEditing, but to no avail.
What are these methods for, exactly? How can one trigger them? Changing the text and tabbing out of the field or pressing enter doesn't do anything. After a bit of googling I found a controlTextDidChange method (in the NSControl parent class of NSTextField) so I implemented it, and it worked (though it had to be an override func, not just a plain func).
Handling the text change event in .NET would be a cinch, just switch to the events panel and double click on "changed" and lo and behold, it creates a method stub into which I can handle the event.
Obviously, I'm an Xcode newb comparatively. What's the typical way to go about handling events in Xcode/Swift? Did I go about it the wrong way, and is there a better/easier way?
Thanks.
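For reference, a minimal Swift sketch of the controlTextDidChange setup described above (the class name is illustrative; the essential step is connecting the text field's delegate outlet to this object). In recent SDKs these methods are declared on NSTextFieldDelegate, so no override is needed; in older SDKs they were category methods on NSObject, which is why Xcode insisted on override func.

import Cocoa

// Assumes the NSTextField's `delegate` outlet is connected to an
// instance of this class (e.g. in Interface Builder).
class FieldController: NSObject, NSTextFieldDelegate {

    // Fires on every keystroke while the field is being edited.
    func controlTextDidChange(_ notification: Notification) {
        if let field = notification.object as? NSTextField {
            print("Text is now: \(field.stringValue)")
        }
    }

    // Fires when editing ends (tab out, return, click elsewhere).
    func controlTextDidEndEditing(_ notification: Notification) {
        print("Editing ended")
    }
}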
I understand the use and need of target-actions.
But I encountered this concept of "First Responder".
Can someone explain why it is needed? What can it do that can't be done using target-actions?
In an app, the responder object that first receives many kinds of events is known as the first responder. It receives key events, motion events, and action messages, among others. (Mouse events and multitouch events first go to the view that is under the mouse pointer or finger; that view might or might not be the first responder.) The first responder is typically the view in a window that an app deems best suited for handling an event. To receive an event, the responder must also indicate its willingness to become first responder; it does this in different ways for each platform.
When you design your app, it’s likely that you want to respond to events dynamically. For example, a touch can occur in many different objects onscreen, and you have to decide which object you want to respond to a given event and understand how that object receives the event.
When a user-generated event occurs, UIKit creates an event object containing the information needed to process the event. Then it places the event object in the active app’s event queue. For touch events, that object is a set of touches packaged in a UIEvent object. For motion events, the event object varies depending on which framework you use and what type of motion event you are interested in.
An event travels along a specific path until it is delivered to an object that can handle it. First, the singleton UIApplication object takes an event from the top of the queue and dispatches it for handling. Typically, it sends the event to the app’s key window object, which passes the event to an initial object for handling. The initial object depends on the type of event.
Touch events. For touch events, the window object first tries to deliver the event to the view where the touch occurred. That view is known as the hit-test view. The process of finding the hit-test view is called hit-testing, which is described in "Hit-Testing Returns the View Where a Touch Occurred."
Motion and remote control events. With these events, the window object sends the shaking-motion or remote control event to the first responder for handling. The first responder is described in "The Responder Chain Is Made Up of Responder Objects."
The ultimate goal of these event paths is to find an object that can handle and respond to an event. Therefore, UIKit first sends the event to the object that is best suited to handle the event. For touch events, that object is the hit-test view, and for other events, that object is the first responder.
For more info, look here...
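To make the first-responder part concrete, here is a minimal Swift sketch (the class name is illustrative): a view controller that volunteers to be first responder so it receives shake-motion events, which UIKit sends to the first responder rather than to a hit-test view.

import UIKit

class ShakeViewController: UIViewController {

    // A responder must indicate its willingness to become first responder.
    override var canBecomeFirstResponder: Bool { true }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        becomeFirstResponder()
    }

    // Motion events arrive here because this object is the first responder,
    // not because the shake "happened over" any particular view.
    override func motionEnded(_ motion: UIEvent.EventSubtype, with event: UIEvent?) {
        if motion == .motionShake {
            print("Device was shaken")
        }
    }
}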
I've connected a UITextField to an action method for a number of events. To let me check what the specific event was, I used the sender:event: action signature. However, I am now at a loss as to how to determine what the actual event that triggered the method was. What's the secret?
The "event" is the one passed to your action method, but I assume your real question is which of the UIControlEvents caused the action. UIEvent and UIControlEvents are unrelated. The target/action pattern provides a UIEvent. If you want to handle different UIControlEvents differently, you should implement different actions for them.
Remember, the target/action mechanism comes from UIResponder. UIControlEvents are related to UIControl.
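For example, a minimal Swift sketch (outlet and method names are illustrative): register one action per UIControl.Event, and each method already knows which control event fired, with no need to inspect the UIEvent.

import UIKit

class FormViewController: UIViewController {
    @IBOutlet var nameField: UITextField!

    override func viewDidLoad() {
        super.viewDidLoad()
        // One action per control event.
        nameField.addTarget(self, action: #selector(textChanged(_:)), for: .editingChanged)
        nameField.addTarget(self, action: #selector(editingEnded(_:)), for: .editingDidEnd)
    }

    @objc private func textChanged(_ sender: UITextField) {
        print("editingChanged: \(sender.text ?? "")")
    }

    @objc private func editingEnded(_ sender: UITextField) {
        print("editingDidEnd")
    }
}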
I'm a bit confused. I don't understand what code actually gets executed when I implement the INotifyPropertyChanged interface.
As I imagine it, the chain goes like this:
my class implements INotifyPropertyChanged => every property's setter calls the NotifyPropertyChanged method => the PropertyChangedEventHandler is invoked => ???
And I wonder what code makes my control rerender.
Thanks.
The control will subscribe to the event when it binds. When you raise the event, the control will check whether the property that's been changed is one of the ones it cares about. If it is, it will fetch the new value of the property, and rerender itself.
Of course, the handler doesn't have to be to do with controls rerendering - they can do anything. It's just a way of saying, "Hey, property X has changed its value... if you care about that, do something." You can add your own handlers very easily, just like any other event handlers.
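A minimal C# sketch of that chain (class and property names are illustrative; the binding engine, e.g. WPF, is the code that actually subscribes and rerenders):

using System.ComponentModel;

public class PersonViewModel : INotifyPropertyChanged
{
    private string _name;

    // The binding engine subscribes a handler to this event when it binds.
    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            // Invoking the handler notifies every subscriber that "Name"
            // changed; a bound control then re-reads Name and rerenders.
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
        }
    }
}

And you can subscribe your own handler just like any other event, e.g. viewModel.PropertyChanged += (s, e) => Console.WriteLine(e.PropertyName);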
I cannot find documentation about the way in which Interface Builder determines the Sent Message outlets for the graphical connections between components triggering events and messages of other components.
I want to generate components encapsulating Finite State Automata. The input part is simple: just define IBAction messages and you can connect them in Interface Builder. The tricky part is obviously the other end of such connections.
I want to provide for each event triggered by the FSM a distinct outlet, like the 'selector' outlet of a NSButton (listed under 'Sent Messages' on the 'Connections' tab of the inspector).
How do I specify such interfaces programmatically and can I specify more than one of these?
Or is this approach not suitable; would Notifications be a better way? (I am used to graphical connections from VisualAge and Parts, so I would prefer them, but in Interface Builder the support for such connections seems somewhat limited.)
Thanks in advance
The first part of my question has been answered in the question 'Send An Action Cocoa - IBAction'. I am still looking for a possibility to define more than one 'Sent Message'.
When you implement your method as an IBAction, the object that generated the message (the sender) is passed to the method. So if I have a button on my interface that says "Logout" and an action on some controller object named logout:, and I have wired these up, the method receives the instance of the button that triggered it. For example:
- (IBAction)logout:(id)sender
{
    // sender is the instance of whichever wired button triggered
    // this action. We just NSLog() it for now.
    NSLog(@"-[%@ logout:%@]", self, sender);
}
Other objects may call this action as well, and may pass themselves as the sender or may pass nil. The details of this would be left up to you as the designer.
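On the second part (more than one 'Sent Message'): as far as I know, Interface Builder's inspector only lists the single target/action pair of NSControl and friends, so custom pairs can't be wired graphically without plugin support. You can, however, reproduce the mechanism programmatically. A Swift sketch under that assumption (all names are illustrative), firing each FSM event through NSApplication's sendAction(_:to:from:) the way a control fires its action:

import Cocoa

// Hypothetical controller wrapping a finite state machine, exposing one
// target/action pair per FSM event instead of NSControl's single pair.
class FSMController: NSObject {
    weak var startedTarget: AnyObject?
    var startedAction: Selector?

    weak var finishedTarget: AnyObject?
    var finishedAction: Selector?

    func fsmDidStart() { fire(startedAction, to: startedTarget) }
    func fsmDidFinish() { fire(finishedAction, to: finishedTarget) }

    private func fire(_ action: Selector?, to target: AnyObject?) {
        guard let action = action else { return }
        // A nil target sends the action down the responder chain,
        // just like a control with no explicit target.
        _ = NSApp.sendAction(action, to: target, from: self)
    }
}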