I'm working on a custom button component. I've been reading through the documentation for NSControl and NSCell, and the programming topics associated with them.
What I can't figure out is how to correctly use mouse events in NSCells. I see that NSControl subclasses NSView, which inherits from NSResponder, so implementing mouse events in the control and forwarding them to the cell is simple. But the documentation states that when a control needs to be placed in a table, the cell for the control is used, not the control.
The methods available on the cell are somewhat well documented, but it's confusing to work out how to implement them. Ultimately, what I'm trying to figure out is how to replicate a push button: a down image is shown while the mouse is down, until the mouse comes back up. I've already got the images, text, etc. working fine.
I've been experimenting and messing around with different combinations of the mouse methods for NSCell, but just can't get it right.
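Here's roughly the kind of thing I've been trying, as a minimal sketch; the MyButtonCell name and the highlight handling are just my guesses at what's needed:

@interface MyButtonCell : NSActionCell
@end

@implementation MyButtonCell

+ (BOOL)prefersTrackingUntilMouseUp {
    // Keep the tracking session alive until mouse up, even if the
    // cursor leaves the cell's frame.
    return YES;
}

- (BOOL)startTrackingAt:(NSPoint)startPoint inView:(NSView *)controlView {
    [self setHighlighted:YES];               // show the "down" image
    [controlView setNeedsDisplay:YES];
    return YES;                              // keep receiving tracking messages
}

- (void)stopTracking:(NSPoint)lastPoint at:(NSPoint)stopPoint inView:(NSView *)controlView mouseIsUp:(BOOL)flag {
    [self setHighlighted:NO];                // back to the "up" image
    [controlView setNeedsDisplay:YES];
}

@end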
- Is using NSCell for mouse events in this respect even correct?
- Is using the mouse events from NSControl and forwarding to the cell the right way?
- If I use mouse events from the control and forward them to the cell, how does a table know that I need certain mouse events forwarded to my cell?
- If the mouse events are needed from an NSControl, can a table use an NSControl?
Any takers?
Related
I'm experiencing some event handling issues when attempting to use arrow keys without modifiers as key equivalents for menu items in the main menu bar. The problem is that the main menu bar handles the key down event as a key equivalent before a tableView is able to. When the tableView is the first responder, the up/down arrow keys do not change the tableView's selection but instead trigger the key equivalent in the main menu bar.
The reason for this is that the incoming keyDown event for an arrow key is first passed to performKeyEquivalent: on the target window, which in turn passes the event down the responder chain. NSTableView does not respond to it, so the event bubbles back up to the application, which next dispatches it to the main menu via performKeyEquivalent:, and thus the event is consumed.
If the main menu does not have a matching key equivalent, the event goes back to the window and down the chain via keyDown:, which the tableView does respond to and handles correctly.
This is documented by Apple (more or less) in their Event Handling Guide.
Is there a proper way to handle key equivalents like arrow keys without modifiers such that they both appear in the menu item when it's being displayed, but are also properly consumed by any subviews that might handle them?
I've tried various tricks, but each one has numerous pros and cons:
NSMenu delegate
One can implement the delegate method menuHasKeyEquivalent:forEvent:target:action:, but it appears that you have to implement it for the entire main menu. While you could easily filter out the arrow keys, you'd also have to validate every other key equivalent, which isn't very practical.
Subclass NSApplication
You can override sendEvent: in NSApplication (a stripped-down sketch appears after this list), but the logic for keeping track of where you are in the event handling chain gets a bit hairy.
NSEvent tap
Similar to subclassing NSApplication. Things are a bit cleaner here because I can cheat and have the event tap a bit closer to the tableView, but you're still left with a lot of logic to determine when the tap should consume the event and "force-feed" it to the tableView versus when you should let the event be handled normally.
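For reference, here's what the minimal core of that sendEvent: override might look like, with all the hairy chain tracking left out; the MyApplication name and the bare first-responder check are my simplifications:

@interface MyApplication : NSApplication
@end

@implementation MyApplication

- (void)sendEvent:(NSEvent *)theEvent {
    if ([theEvent type] == NSKeyDown &&
        ([theEvent modifierFlags] & NSDeviceIndependentModifierFlagsMask) == 0 &&
        [[theEvent charactersIgnoringModifiers] length] > 0) {
        unichar key = [[theEvent charactersIgnoringModifiers] characterAtIndex:0];
        NSResponder *responder = [[self keyWindow] firstResponder];
        if ((key == NSUpArrowFunctionKey || key == NSDownArrowFunctionKey) &&
            [responder isKindOfClass:[NSTableView class]]) {
            // Deliver the arrow key straight to the table, skipping the
            // key-equivalent pass that would hand it to the main menu.
            [responder keyDown:theEvent];
            return;
        }
    }
    [super sendEvent:theEvent];
}

@end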
I'm curious if anyone has any suggestions on how best to implement an arrow key as a key equivalent when no modifiers are present and a tableView might be present.
(macOS 10.11+)
Is there any real difference between setting an IBAction to handle your touch event vs. using touchesBegan:, touchesMoved:, etc.? What considerations would cause one to be preferred over the other?
If by IBAction you mean attaching to control events like UIControlEventTouchUpInside, there is quite a bit of "magic" attached to control events that would take some work to duplicate with raw touch handling.
Most obviously, if you touch a UIButton, then drag a short distance off the button before releasing, the button still sends its UIControlEventTouchUpInside event. The distance was chosen through usability experiments: how far can someone's finger slip while they still think they're pressing the button?
Accessibility
I suspect that using control events will also make it easier for iOS 6 Guided Access and other accessibility aids to understand your app.
Separating the View from the Model
Using control events means that the View doesn't need to know what effect it has when tapped. This is considered a good thing in the Model-View-Controller paradigm. Ideally your Controller will receive the event, and update the Model to suit.
Conclusion
If you can implement your interaction with control events, it's better to do so.
Sometimes your control needs complex interaction, though. If you're implementing something like finger painting with multi-touch, you're going to need to know exactly where and when touches happen. In that case, implement your interaction with touchesBegan, touchesMoved, touchesEnded and touchesCancelled.
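To make the contrast concrete, here's a minimal sketch of both styles; PaintView, buttonTapped:, and the logging are placeholders of mine:

#import <UIKit/UIKit.h>

// Raw touches, for something like finger painting:
@interface PaintView : UIView
@end

@implementation PaintView
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        CGPoint p = [touch locationInView:self];
        NSLog(@"stroke point: %@", NSStringFromCGPoint(p));
    }
}
@end

// Control events, for a plain tap; UIKit supplies the touch-slop and
// cancellation behavior described above:
@interface MyViewController : UIViewController
@end

@implementation MyViewController
- (void)viewDidLoad {
    [super viewDidLoad];
    UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
    button.frame = CGRectMake(20.0, 20.0, 120.0, 44.0);
    [button addTarget:self
               action:@selector(buttonTapped:)
     forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:button];
}

- (IBAction)buttonTapped:(id)sender {
    NSLog(@"tapped");
}
@end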
I have a transparent NSWindow that follows the user everywhere on screen (the NSWindow stays in front of every app, no matter what, even fullscreen apps).
In that NSWindow I have a mouseDown event that shows a popup. Let's say I'm in Safari in fullscreen mode with my window in front of it: I click on Safari, then click again on my window, and nothing happens; the mouseDown doesn't occur. I have to click once more for the mouseDown event to be triggered.
How can I force my NSWindow to always be active, so I don't have to click it twice to trigger the mouseDown after clicking on a background app?
Thank you!
I'm not sure if this is exactly what you want (it's not quite a window-wide setting), but, from the documentation:
By default, a mouse-down event in a window that isn’t the key window simply brings the window forward and makes it key; the event isn’t sent to the NSView object over which the mouse click occurs. The NSView can claim an initial mouse-down event, however, by overriding acceptsFirstMouse: to return YES.
The argument of this method is the mouse-down event that occurred in the non-key window, which the view object can examine to determine whether it wants to receive the mouse event and potentially become first responder. You want the default behavior of this method in, for example, a control that affects the selected object in a window.
However, in certain cases it’s appropriate to override this behavior, such as for controls that should receive mouseDown: messages even when the window is inactive. Examples of controls that support this click-through behavior are the title-bar buttons of a window.
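In code, that override is just a couple of lines in your view subclass; a minimal sketch (ClickThroughView is a placeholder name of mine):

@interface ClickThroughView : NSView
@end

@implementation ClickThroughView

- (BOOL)acceptsFirstMouse:(NSEvent *)theEvent {
    // Claim the first click even when our window isn't key, so a
    // single click both activates the window and reaches mouseDown:.
    return YES;
}

- (void)mouseDown:(NSEvent *)theEvent {
    NSLog(@"got the mouse down on the first click");
}

@end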
Or you could try fiddling with
- (void)sendEvent:(NSEvent *)theEvent
and see if you can handle events in a custom way.
If you add a borderless NSButton instance to your window's view and set your image as the button's image (and as its alternate image, to make the pressed state look nicer), it will work out of the box: just connect the button's action method to your app delegate (or whichever object should process the click). A click on the image (i.e. the button) will then trigger the button's action method, no matter which window is active.
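Something along these lines, if you'd rather create the button in code; the image names, target, and action are placeholders of mine:

NSButton *button = [[NSButton alloc] initWithFrame:NSMakeRect(0.0, 0.0, 64.0, 64.0)];
[button setBordered:NO];
[button setButtonType:NSMomentaryChangeButton];  // shows alternateImage while pressed
[button setImage:[NSImage imageNamed:@"myImage"]];
[button setAlternateImage:[NSImage imageNamed:@"myImagePressed"]];
[button setTarget:appDelegate];
[button setAction:@selector(imageClicked:)];
[[window contentView] addSubview:button];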
This worked for me, hope it helps. It will keep your window on top of all applications:
[self.window makeKeyAndOrderFront:nil];
[self.window setLevel:NSStatusWindowLevel];
I think what you really should do is use an NSPanel (a floating palette -- a special kind of NSWindow) that will do exactly what you want in a way that's consistent with the OS rather than trying to fight intended behavior.
Here's the NSPanel documentation:
https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/ApplicationKit/Classes/nspanel_Class/Reference/Reference.html
And here's some helpful and pithy information:
http://cocoadev.com/wiki/NSPanel
By default, an NSPanel will disappear when the application is inactive, but you can turn this off (see setHidesOnDeactivate:).
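For example, a quick sketch of configuring such a panel; self.panel is an assumed NSPanel outlet of mine:

[self.panel setFloatingPanel:YES];           // stays above normal windows
[self.panel setHidesOnDeactivate:NO];        // don't vanish when the app is inactive
[self.panel setBecomesKeyOnlyIfNeeded:YES];  // clicks don't needlessly steal key status
[self.panel orderFront:nil];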
I apologize for not laying it out more fully ... pressed for time.
Edit:
Note that you can probably get your window to behave as desired simply:
"The NSView can claim an initial mouse-down event, however, by overriding acceptsFirstMouse: to return YES."
https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/EventOverview/HandlingMouseEvents/HandlingMouseEvents.html
You'll need to do this in any NSView subclass that should skip the "activation click".
I would like to simulate a mouse click on a Cocoa application without actually clicking the mouse, and *not* have to figure out which view should respond to the click, given the current mouse location.
I would like the Cocoa framework to handle figuring out which view should respond, so I don't think a method call on an NSView object is what I'm looking for. That is, I think I need a method call that will end up invoking the right view's mouseDown: for me.
I currently have this working by clicking the mouse at a particular global location, using CGEventCreateMouseEvent and CGEventPost. However, this technique actually clicks the mouse. So this works, but I'm not completely happy with the behavior. For example, if I hold down a key on the keyboard while the CGEventPost is called, that key is wrapped into the event. Also, if I move another process's window over the window that I'd like to simulate the click, then the CGEventPost method will click the mouse in that window. That is, it's acting globally, across processes. I'd like a technique that works on a single window. Something on the NSWindow object maybe?
I read that "Mouse events are dispatched by an NSWindow object to the NSView object over which the event occurred" in the Cocoa documentation.
OK. So I'd like to know the method that is called to do the dispatching. Call this method on the window, and then let the framework figure out which NSView to call, given the current mouse location.
Any help would be greatly appreciated. I'm just starting to learn the Cocoa framework, so I apologize if any of the terminology/verbage here isn't quite right.
It's hard to know exactly how much fidelity you're looking for with what happens for an actual click. For example, do you want the click to activate the app? Do you want the click to bring a window to the top? To make it key or main? If the location is in the title bar, do you want it to potentially close, minimize, zoom, or move the window?
As John Caswell noted, if you pass an appropriately-constructed NSEvent to -[NSApplication sendEvent:] that will closely simulate the processing of a real event. In most cases, NSApplication will forward the event to the event's window and its -[NSWindow sendEvent:] method. If you want to avoid any chance of NSApplication doing something else, you could dispatch directly to the window's -sendEvent: method. But that may defeat some desirable behavior, depending on exactly what you desire.
What happens if the clicked window's or view's response is to run an internal event-tracking loop? It's going to be synchronous; that is, the code that calls -sendEvent: is not going to get control back until after that loop has completed and it might not complete if you aren't able to deliver subsequent events. In fact, such a loop is going to look for subsequent events via -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:], so if your synthetic events are not in the queue, they won't be seen. Therefore, an even better simulation of the handling of real events would probably require that you post events (mouse-down, mouse drags, mouse-up) to the queue using -[NSApplication postEvent:atStart:].
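To illustrate, here's a rough sketch of queueing a synthetic click that way; the coordinates, event number, and theWindow are stand-ins of mine:

NSPoint where = NSMakePoint(100.0, 200.0);   // in window coordinates
NSTimeInterval now = [[NSProcessInfo processInfo] systemUptime];

NSEvent *down = [NSEvent mouseEventWithType:NSLeftMouseDown
                                   location:where
                              modifierFlags:0
                                  timestamp:now
                               windowNumber:[theWindow windowNumber]
                                    context:nil
                                eventNumber:0
                                 clickCount:1
                                   pressure:1.0];
NSEvent *up = [NSEvent mouseEventWithType:NSLeftMouseUp
                                 location:where
                            modifierFlags:0
                                timestamp:now
                             windowNumber:[theWindow windowNumber]
                                  context:nil
                              eventNumber:0
                               clickCount:1
                                 pressure:1.0];

// Queue both halves of the click so any tracking loop can dequeue them.
[NSApp postEvent:down atStart:NO];
[NSApp postEvent:up atStart:NO];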
I think your first task is to really think deeply about what you're trying to accomplish, all the potential pitfalls and corner cases, and decide how you want to handle those.
With respect to the CGEvent... stuff, you can post an event to a specific process using CGEventPostToPSN() and that won't click on other app's windows, even if they are in front of the target window. However, it may still click on a different window within the target app.
OK. So I'd like to know the method that is called to do the dispatching. Call this method on the window, and then let the framework figure out which NSView to call, given the current mouse location.
NSView *target = [[theWindow contentView] hitTest:thePoint];
I'm not entirely clear on your problem so I don't know if all you want to do is then call mouseDown: on the target. But if you did, that would be almost exactly the same thing that happens for a real mouse click.
This is the message used in delivering live clicks. It walks the view hierarchy, automatically dealing with overlap, hidden views, etc., and letting each step in the chain of views interfere if it wants. If a view wants to prevent its child views from getting clicks, it does that by overriding hitTest: to swallow them, which means it'll affect your code the exact same way it affects a real click. If the click would be delivered, this method always tells you where it would be delivered.
However, it doesn't necessarily handle all the reasons a click might not be delivered (acceptsFirstMouse, modal dialogs, etc.). Also, you have to know the window, and the point (in the appropriate coordinate system), but it sounds like that's what you're starting with.
You can simulate a mouse click by calling mouseDown: directly, like this:
[self mouseDown: nil];
And to get the mouse location on screen:
-(void)mouseDown:(NSEvent *)theEvent {
    NSPoint mouseLocation = [NSEvent mouseLocation];
    NSLog(@"x: %f", mouseLocation.x);
    NSLog(@"y: %f", mouseLocation.y);
}
I had a class project that consisted in programming a Swype-like keyboard. I had to do it in Java, and you can have a look at it (with the code) here. This summer, I'd like to port it to ObjC/Cocoa and then improve it. I intend to use NSButtons for the keyboard keys, like the "Gradient Button" offered by Interface Builder.
So, I looked into how to handle mouse events (I need mouse pressed, entered, exited, and released). For some objects it looks like you have to use a delegate, but for NSButton it looks like methods such as -mouseDown: and related live on the object itself.
My question is: how do I override these methods on Interface Builder objects? I tried creating a subclass of NSButton and setting my button's class to this subclass, but without results. Maybe overriding the methods is not the right way to do it at all; I'm open to any suggestion, even if it isn't event-handling related. If it's relevant, I'm running OS X 10.6 with Xcode 4.
Thanks for your time!
A lot will depend on why you need all of the various events. NSButton is a control, and as such works differently than a standard NSView.
If you mostly need to figure out when the button is pressed, you can do this by assigning an action in IB. This is done by creating a void method in your controller class of the form:
- (IBAction) myMouseAction:(id)sender
and then having it do what you need based on receiving the click. Then, in IB, you can hook up this action to the button by control-dragging from the button to your controller class (likely File's Owner) and selecting your new method when prompted.
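If you'd rather make the connection in code instead of IB, the equivalent is (myButton being your outlet):

[myButton setTarget:self];
[myButton setAction:@selector(myMouseAction:)];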
If you need fine-grained control, you should consider creating your own NSView subclass and handling the mouse events yourself, as trying to override the built-in controls is a pretty complicated matter. OS X controls were architected for extreme performance, but the design is a bit anachronistic now, and subclassing them is generally more work than it's worth.
One other thing: the mouseEntered:, mouseMoved:, and mouseExited: events are for handling mouse movement while the mouse button is up, and they require a tracking area to be registered on the view.
You'll want to pay attention to mouseDown:, mouseUp:, and mouseDragged: in order to handle events while the mouse button is being held down.
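If you do go the NSView-subclass route, here's a minimal sketch covering the four events the question asks about; KeyView, the logging, and the manual retain/release (appropriate for OS X 10.6) are my assumptions:

@interface KeyView : NSView {
    NSTrackingArea *trackingArea;
}
@end

@implementation KeyView

- (void)updateTrackingAreas {
    [super updateTrackingAreas];
    if (trackingArea) {
        [self removeTrackingArea:trackingArea];
        [trackingArea release];
    }
    // Register for enter/exit events while the button is up.
    trackingArea =
        [[NSTrackingArea alloc] initWithRect:[self bounds]
                                     options:(NSTrackingMouseEnteredAndExited |
                                              NSTrackingActiveInKeyWindow)
                                       owner:self
                                    userInfo:nil];
    [self addTrackingArea:trackingArea];
}

- (void)mouseEntered:(NSEvent *)theEvent { NSLog(@"entered"); }
- (void)mouseExited:(NSEvent *)theEvent  { NSLog(@"exited"); }
- (void)mouseDown:(NSEvent *)theEvent    { NSLog(@"pressed"); }
- (void)mouseUp:(NSEvent *)theEvent      { NSLog(@"released"); }

@end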