Is there any real difference between setting an IBAction to handle your touch event vs. using touchesBegan:, touchesMoved:, etc.? What considerations would cause one to be preferred over the other?
If by IBAction you mean attaching to control events like UIControlEventTouchUpInside, there is quite a bit of "magic" attached to control events that would take some work to duplicate with raw touch handling.
Most obviously, if you touch a UIButton, then drag a short distance off the button before releasing, the button still sends its UIControlEventTouchUpInside event. The distance was chosen through usability experiments: how far can someone's finger slip while they still think they're pressing the button?
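For example, the same "touch up inside" behavior can be attached in code instead of through Interface Builder. A minimal sketch in a UIViewController subclass, assuming a hypothetical myButton outlet and buttonTapped: method (names not from the question):

- (void)viewDidLoad {
    [super viewDidLoad];
    // Equivalent to connecting an IBAction for "Touch Up Inside" in Interface Builder.
    [self.myButton addTarget:self
                      action:@selector(buttonTapped:)
            forControlEvents:UIControlEventTouchUpInside];
}

- (IBAction)buttonTapped:(UIButton *)sender {
    // By the time this is called, UIButton has already applied the
    // slip-off-the-button tolerance described above.
    NSLog(@"Button tapped");
}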
Accessibility
I suspect that using control events will also make it easier for iOS 6 Guided Access and other accessibility aids to understand your app.
Separating the View from the Model
Using control events means that the View doesn't need to know what effect it has when tapped. This is considered a good thing in the Model-View-Controller paradigm. Ideally your Controller will receive the event, and update the Model to suit.
Conclusion
If you can implement your interaction with control events, it's better to do so.
Sometimes your control needs complex interaction, though. If you're implementing something like finger painting with multi-touch, you're going to need to know exactly where and when touches happen. In that case, implement your interaction with touchesBegan, touchesMoved, touchesEnded and touchesCancelled.
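A bare-bones sketch of that lower-level approach, in a hypothetical UIView subclass (the class name and logging are placeholders; a real finger-painting view would build up stroke geometry and redraw):

@interface CanvasView : UIView
@end

@implementation CanvasView

- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.multipleTouchEnabled = YES;  // receive more than one finger at a time
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        NSLog(@"began at %@", NSStringFromCGPoint([touch locationInView:self]));
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        CGPoint from = [touch previousLocationInView:self];
        CGPoint to   = [touch locationInView:self];
        NSLog(@"moved %@ -> %@", NSStringFromCGPoint(from), NSStringFromCGPoint(to));
        // A painting view would append a segment here and call -setNeedsDisplay.
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Finish the stroke for each touch that lifted.
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // Treat cancellation (incoming call, gesture takeover, etc.) like an abrupt end.
    [self touchesEnded:touches withEvent:event];
}

@end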
Related
I would like to simulate a mouse click on a Cocoa application without actually clicking the mouse, and not have to figure out which view should respond to the click, given the current mouse location.
I would like the Cocoa framework to handle figuring out which view should respond, so I don't think that a method call on an NSView object is what I'm looking for. That is, I think I need a method call that will end up calling this method.
I currently have this working by clicking the mouse at a particular global location, using CGEventCreateMouseEvent and CGEventPost. However, this technique actually clicks the mouse. So this works, but I'm not completely happy with the behavior. For example, if I hold down a key on the keyboard while CGEventPost is called, that key is wrapped into the event. Also, if I move another process's window over the window in which I'd like to simulate the click, then CGEventPost will click the mouse in that other window. That is, it's acting globally, across processes. I'd like a technique that works on a single window. Something on the NSWindow object, maybe?
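For context, the technique described above usually looks something like this (a sketch; the click point and the choice of left button are assumptions):

#import <ApplicationServices/ApplicationServices.h>

// Post a synthetic left click at a point in global (screen) coordinates.
// Because this goes through the system event stream, it behaves like a real
// click: current modifier keys get folded in, and whichever window is
// frontmost at that point receives it.
static void ClickAtGlobalPoint(CGPoint point) {
    CGEventRef down = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseDown,
                                              point, kCGMouseButtonLeft);
    CGEventRef up   = CGEventCreateMouseEvent(NULL, kCGEventLeftMouseUp,
                                              point, kCGMouseButtonLeft);
    CGEventPost(kCGHIDEventTap, down);
    CGEventPost(kCGHIDEventTap, up);
    CFRelease(down);
    CFRelease(up);
}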
I read that "Mouse events are dispatched by an NSWindow object to the NSView object over which the event occurred" in the Cocoa documentation.
OK. So I'd like to know the method that is called to do the dispatching. Call this method on the window, and then let the framework figure out which NSView to call, given the current mouse location.
Any help would be greatly appreciated. I'm just starting to learn the Cocoa framework, so I apologize if any of the terminology/verbiage here isn't quite right.
It's hard to know exactly how much fidelity you're looking for with what happens for an actual click. For example, do you want the click to activate the app? Do you want the click to bring a window to the top? To make it key or main? If the location is in the title bar, do you want it to potentially close, minimize, zoom, or move the window?
As John Caswell noted, if you pass an appropriately-constructed NSEvent to -[NSApplication sendEvent:] that will closely simulate the processing of a real event. In most cases, NSApplication will forward the event to the event's window and its -[NSWindow sendEvent:] method. If you want to avoid any chance of NSApplication doing something else, you could dispatch directly to the window's -sendEvent: method. But that may defeat some desirable behavior, depending on exactly what you desire.
What happens if the clicked window's or view's response is to run an internal event-tracking loop? It's going to be synchronous; that is, the code that calls -sendEvent: is not going to get control back until after that loop has completed and it might not complete if you aren't able to deliver subsequent events. In fact, such a loop is going to look for subsequent events via -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:], so if your synthetic events are not in the queue, they won't be seen. Therefore, an even better simulation of the handling of real events would probably require that you post events (mouse-down, mouse drags, mouse-up) to the queue using -[NSApplication postEvent:atStart:].
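A rough sketch of that queue-based approach, synthesizing a mouse-down/mouse-up pair for a specific window (the click location, timestamps, and event numbers here are assumptions):

#import <Cocoa/Cocoa.h>

// Build a click at `pointInWindow` (window base coordinates) and push it
// through the normal Cocoa event queue, so -[NSApplication sendEvent:] and
// -[NSWindow sendEvent:] dispatch it much like a real click.
static void PostSyntheticClick(NSWindow *window, NSPoint pointInWindow) {
    NSInteger windowNumber = [window windowNumber];
    NSTimeInterval now = [[NSProcessInfo processInfo] systemUptime];

    NSEvent *down = [NSEvent mouseEventWithType:NSLeftMouseDown
                                       location:pointInWindow
                                  modifierFlags:0
                                      timestamp:now
                                   windowNumber:windowNumber
                                        context:nil
                                    eventNumber:0
                                     clickCount:1
                                       pressure:1.0];
    NSEvent *up = [NSEvent mouseEventWithType:NSLeftMouseUp
                                     location:pointInWindow
                                modifierFlags:0
                                    timestamp:now + 0.05
                                 windowNumber:windowNumber
                                      context:nil
                                  eventNumber:0
                                   clickCount:1
                                     pressure:0.0];

    // Queue both events; a control's internal tracking loop started by the
    // mouse-down can then find the mouse-up via
    // -nextEventMatchingMask:untilDate:inMode:dequeue:.
    [NSApp postEvent:down atStart:NO];
    [NSApp postEvent:up atStart:NO];
}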
I think your first task is to really think deeply about what you're trying to accomplish, all the potential pitfalls and corner cases, and decide how you want to handle those.
With respect to the CGEvent... stuff, you can post an event to a specific process using CGEventPostToPSN(), and that won't click on other apps' windows, even if they are in front of the target window. However, it may still click on a different window within the target app.
OK. So I'd like to know the method that is called to do the dispatching. Call this method on the window, and then let the framework figure out which NSView to call, given the current mouse location.
NSView *target = [[theWindow contentView] hitTest:thePoint];
I'm not entirely clear on your problem so I don't know if all you want to do is then call mouseDown: on the target. But if you did, that would be almost exactly the same thing that happens for a real mouse click.
This is the message used in delivering live clicks. It walks the view hierarchy, automatically dealing with overlap, hidden views, etc., and letting each step in the chain of views interfere if it wants. If a view wants to prevent its child views from getting clicks, it does that by overriding hitTest:, which means it'll affect your code the exact same way it affects a real click. If the click would be delivered, this method always tells you where it would be delivered.
However, it doesn't necessarily handle all the reasons a click might not be delivered (acceptsFirstMouse, modal dialogs, etc.). Also, you have to know the window, and the point (in the appropriate coordinate system), but it sounds like that's what you're starting with.
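Putting those pieces together, a sketch of resolving the target view and handing it a synthesized mouse-down directly (the coordinate conversion and event details are assumptions, and this bypasses NSApplication/NSWindow entirely):

static void DeliverClickToHitView(NSWindow *theWindow, NSPoint pointInWindow) {
    NSView *contentView = [theWindow contentView];

    // -hitTest: expects the point in the receiver's superview's coordinate
    // system, so convert from window base coordinates first.
    NSPoint pointInFrameView = [[contentView superview] convertPoint:pointInWindow
                                                            fromView:nil];
    NSView *target = [contentView hitTest:pointInFrameView];
    if (target == nil) {
        return;  // the click would not land on any view
    }

    NSEvent *down = [NSEvent mouseEventWithType:NSLeftMouseDown
                                       location:pointInWindow
                                  modifierFlags:0
                                      timestamp:[[NSProcessInfo processInfo] systemUptime]
                                   windowNumber:[theWindow windowNumber]
                                        context:nil
                                    eventNumber:0
                                     clickCount:1
                                       pressure:1.0];
    [target mouseDown:down];  // roughly what NSWindow does for a real click
}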
You can simulate a mouse click by calling mouseDown: directly, like this:
[self mouseDown: nil];
And to get mouse location in screen:
- (void)mouseDown:(NSEvent *)theEvent {
    NSPoint mouseLocation = [NSEvent mouseLocation];
    NSLog(@"x: %f", mouseLocation.x);
    NSLog(@"y: %f", mouseLocation.y);
}
I had a class project that consisted of programming a Swype-like keyboard. I had to do it in Java, and you can have a look at it (with the code) here. This summer, I'd like to port it to ObjC/Cocoa and then improve it. I intend to use NSButtons for the keyboard keys, like the "Gradient Button" offered by Interface Builder.
So, I looked into how to handle mouse events (I need mouse pressed, entered, exited, and released). For some objects it looks like you have to use a delegate, but for NSButton it looks like methods such as -mouseDown: and related ones are on the object itself.
My question is: how do I override these methods on Interface Builder objects? I tried creating a subclass of NSButton and setting my button's class to this subclass, but without results. Maybe trying to override the methods is not the right way to do it at all; I'm open to any suggestion, even if it is not event-handling related. In case it is relevant, I'm running OS X 10.6 with Xcode 4.
Thanks for your time!
A lot will depend on why you need all of the various events. NSButton is a control, and as such works differently than a standard NSView.
If you mostly need to figure out when the button is pressed, you can do this by assigning an action in IB. This is done by creating a void method in your controller class of the form:
- (IBAction) myMouseAction:(id)sender
and then having it do what you need based on receiving the click. Then in IB, you can hook up this action to the button by control-clicking on the button and dragging to your controller class (likely the owner) and selecting your new method when prompted.
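A minimal sketch of that action method, plus the equivalent programmatic wiring (the keyPressed: name and keyButton outlet are placeholders, not from the question):

// In the controller class (e.g. the nib's File's Owner).
- (IBAction)keyPressed:(NSButton *)sender {
    // `sender` is the button that was clicked; its title identifies the key.
    NSLog(@"Key pressed: %@", [sender title]);
}

// The same connection made in code instead of by control-dragging in IB.
- (void)awakeFromNib {
    [self.keyButton setTarget:self];
    [self.keyButton setAction:@selector(keyPressed:)];
}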
If you need fine-grained control, you should consider creating your own NSView subclass and handling the mouse events yourself; trying to override the built-in controls is a fairly complicated matter. OS X controls were architected for extreme performance, but they're a bit anachronistic now, and it's generally not worth the work to create your own control.
One other thing is that the mouseEntered:, mouseMoved: and mouseExited: events are for handling mouse movement with the mouse button up.
You are going to want to pay attention to mouseDown:, mouseUp:, and mouseDragged: in order to handle events while the mouse button is being held down.
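If you do go the custom NSView route, a minimal sketch of handling all five events might look like this (the class name is a placeholder, and the tracking-area options shown are one reasonable choice; mouseEntered: and mouseExited: are only delivered if the view registers a tracking area):

@interface KeyView : NSView
@end

@implementation KeyView

- (void)updateTrackingAreas {
    [super updateTrackingAreas];
    for (NSTrackingArea *area in [[self trackingAreas] copy]) {
        [self removeTrackingArea:area];
    }
    NSTrackingArea *area =
        [[NSTrackingArea alloc] initWithRect:[self bounds]
                                     options:(NSTrackingMouseEnteredAndExited |
                                              NSTrackingActiveInKeyWindow)
                                       owner:self
                                    userInfo:nil];
    [self addTrackingArea:area];
}

- (void)mouseEntered:(NSEvent *)event { NSLog(@"entered"); }
- (void)mouseExited:(NSEvent *)event  { NSLog(@"exited"); }

- (void)mouseDown:(NSEvent *)event {
    NSPoint p = [self convertPoint:[event locationInWindow] fromView:nil];
    NSLog(@"pressed at %@", NSStringFromPoint(p));
}

- (void)mouseDragged:(NSEvent *)event {
    // Called repeatedly while the button is held down and the mouse moves.
}

- (void)mouseUp:(NSEvent *)event {
    NSLog(@"released");
}

@end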
It seems like NSSlider in Cocoa does not provide a delegate to receive an event like UISlider's Value Changed.
How can I get the value of an NSSlider continuously and display it in an NSTextField, for example?
You need to research Cocoa's Target/Action mechanism. This is a basic Cocoa concept you'll need to understand. The slider (and any other control) can be given a target (some controller object) and an action (the method to call against that controller object).
The action is fired when the user stops dragging by default. Check the slider's Continuous property in Interface Builder to cause it to trigger the action as you're sliding it.
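A minimal sketch of that wiring in code, updating a text field as the slider moves (the slider and valueField outlets and the method name are placeholders; the same connections can be made in Interface Builder instead):

- (void)awakeFromNib {
    [self.slider setContinuous:YES];                  // fire while dragging
    [self.slider setTarget:self];
    [self.slider setAction:@selector(sliderChanged:)];
}

- (IBAction)sliderChanged:(NSSlider *)sender {
    // Called repeatedly during the drag because the slider is continuous.
    [self.valueField setDoubleValue:[sender doubleValue]];
}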
One advantage of using a timer to coalesce updates (rather than saving on every action message) is that it also covers adjusting the slider with the keyboard instead of the mouse. If the user has "Full Keyboard Access" turned on in System Preferences, they can use the Tab key to give the slider focus. They can then hold down an arrow key so that autorepeat kicks in, whereupon you have a situation similar to dragging with the mouse: the target/action is firing repeatedly, and you want to wait for a moment of calm before saving to the database.
You do need to be careful not to discard your NSTimer (and its pending save) prematurely. For example, if the user quits the app during those couple of seconds, you probably want to "flush" the slider value to the database before terminating the process.
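A sketch of that debounce-and-flush pattern, extending the sliderChanged: method sketched above (the two-second delay, the saveTimer property, and saveToDatabase are assumptions for illustration):

// Restart a short timer every time the slider fires; the save only happens
// once the slider has been quiet for the full interval.
- (IBAction)sliderChanged:(NSSlider *)sender {
    [self.valueField setDoubleValue:[sender doubleValue]];

    [self.saveTimer invalidate];
    self.saveTimer = [NSTimer scheduledTimerWithTimeInterval:2.0
                                                      target:self
                                                    selector:@selector(saveSliderValue:)
                                                    userInfo:nil
                                                     repeats:NO];
}

- (void)saveSliderValue:(NSTimer *)timer {
    self.saveTimer = nil;
    [self saveToDatabase];  // hypothetical persistence method
}

// Flush any pending value if the app quits before the timer fires
// (assuming this controller is also the application delegate).
- (void)applicationWillTerminate:(NSNotification *)notification {
    if (self.saveTimer != nil) {
        [self.saveTimer invalidate];
        self.saveTimer = nil;
        [self saveToDatabase];
    }
}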
Programmatic solution, based on Joshua Nozzi's answer:
Swift
slider.isContinuous = true
Objective-C
slider.continuous = YES;
I'm trying to override the default behavior in a UITableView (which is in fact a UIScrollView subclass). Basically, my table takes up a third of the screen, and I'd like to be able to drag items from the table to the rest of the screen — both by holding and then dragging, and also by dragging perpendicular to the table. I was able to implement the first technique with a bit of effort using the default UIScrollView touchesShouldBegin/touchesShouldCancel and touchesBegan/Moved/Ended-Cancelled, but the second technique is giving me some serious trouble.
My problem is this: I'd like to be able to detect a drag, but I also want to be able to scroll when not dragging. In order to do this, I have to perform my dragging detection up to and including the point when touchesShouldCancel is called. (This is because touchesShouldCancel is the branching point in which the UIScrollView decides whether to continue passing on touches to its subviews or to scroll.) Unfortunately, UIScrollView's cancellation radius is pretty tiny, and if the user touches a cell and then moves their finger really quickly, only touchesBegan is called. (If the user moves slowly, we usually get a few touchesMoved before touchesShouldCancel is called.) As a result, I have no way of calculating the touch direction, since I only have the one point from touchesBegan.
If I could query a touch at any given instant rather than having to rely on the touch callbacks, I could fix this pretty easily, but as far as I know I can't do that. The problem could also be fixed if I could make the scroll view cancel (and subsequently call touchesShouldCancel) at my discretion, or at least delay the call to touchesShouldCancel, but I can't do that either.
Instead, I've been trying to capture a couple of touchesBegan/Moved calls (2 or 3 at most) in a separate overlay view over the UITableView and then forwarding my touches to the table. That way, my table is guaranteed to already know the dragging direction when touchesShouldCancel is called. You can read about variations on this method here:
http://theexciter.com/articles/touches-and-uiscrollview-inside-a-uitableview.html
http://forums.macrumors.com/showthread.php?t=640508
(Yes, they do things a bit differently, but I think the crux is forwarding touches to the UITableView after pre-processing is done.)
Unfortunately, this doesn't seem to work. Calling my table view with touchesBegan/Moved/Ended-Cancelled doesn't move the table, nor does forwarding them to the table's hitTest view (which by my testing is either the table itself or a table cell). What's more, I checked what the cells' nextResponder is, and it turns out to be the table, so that wouldn't work either. By my understanding, this is because UIScrollView, at some point in the near past, switched over to using gesture recognizers to perform its vital dragging/scrolling detection, and to my knowledge, you can't forward touches as you would normally when gesture recognizers are involved.
Here's another thing: even though gesture recognizers were officially released in 3.2, they're still around in 3.1.3, though you can't use the API. I think UIScrollView is already using them in 3.1.3.
Whew! So here are my questions:
The nextResponder method described in the two links above seems pretty recent. Am I doing something wrong, or has the implementation of UIScrollView really fundamentally changed since then?
Is there any way to forward touches to a class with UIGestureRecognizers, ensuring that the recognizers have a chance to handle the touches?
I can solve the problem by adding my own UIGestureRecognizer that detects the dragging angle to my table view, and then making sure that every gesture recognizer added before that in table.gestureRecognizers depends on mine finishing. (There are 3 default UIScrollView gesture recognizers, I think. A few are private API classes, but all are UIGestureRecognizer subclasses, obviously.) I'm not handling any of the private gesture recognizers by name, but I'm still manipulating them and also using my knowledge of UIScrollView's internals, which aren't documented by Apple. Could my app get rejected for this? (A sketch of this dependency setup appears below.)
What do I do for 3.1.3? The UIScrollView is apparently already using gesture recognizers, but I can't actually access them because the API is only available in 3.2.
Thank you!
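For reference, the dependency setup described in question 3 might look roughly like this (a sketch only: DragDirectionGestureRecognizer is a hypothetical custom recognizer, tableView is the table in question, and the gestureRecognizers property requires iOS 3.2 or later):

// A hypothetical UIGestureRecognizer subclass that fails quickly unless the
// touch moves roughly perpendicular to the table's scroll direction.
DragDirectionGestureRecognizer *dragRecognizer =
    [[DragDirectionGestureRecognizer alloc] initWithTarget:self
                                                    action:@selector(handleDrag:)];
[tableView addGestureRecognizer:dragRecognizer];

// Make every recognizer the scroll view installed wait until ours has failed
// before it can begin, so a perpendicular drag is never treated as a scroll.
for (UIGestureRecognizer *existing in tableView.gestureRecognizers) {
    if (existing != dragRecognizer) {
        [existing requireGestureRecognizerToFail:dragRecognizer];
    }
}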
Okay, I finally figured out an answer to my problem. Two answers, actually.
Convoluted solution: subclass UIWindow and override sendEvent: to store the last touch's location. (Overriding sendEvent: is one of the examples given in the Event Handling Guide.) Then the scroll view can query the window for the last touch location when touchesShouldCancel is called.
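A sketch of that window subclass (assuming whatever code needs the location can reach the window and ask for it when touchesShouldCancel fires):

// UIWindow subclass that remembers where the most recent touch was, so other
// code can query it outside the normal touch callbacks.
@interface TouchRecordingWindow : UIWindow
@property (nonatomic, assign) CGPoint lastTouchLocation;
@end

@implementation TouchRecordingWindow
@synthesize lastTouchLocation;

- (void)sendEvent:(UIEvent *)event {
    for (UITouch *touch in [event allTouches]) {
        // Window coordinates; convert with -locationInView: where needed.
        self.lastTouchLocation = [touch locationInView:self];
    }
    [super sendEvent:event];  // keep normal event delivery intact
}

@end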
Easier solution: shortly after, I noticed that Facebook's Three20 library was storing UITouches without retaining them. I always thought that you shouldn't keep UITouch objects around beyond local scope, but Apple's docs only explicitly prohibit retention. ("A UITouch object is persistent throughout a multi-touch sequence. You should never retain an UITouch object when handling an event. If you need to keep information about a touch from one phase to another, you should copy that information from the UITouch object.") Therefore, it might be legal to simply store the initial UITouch in the table and query its new position when touchesShouldCancel is called.
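A sketch of that second approach in a UITableView subclass (the horizontal-versus-vertical test and the non-retained ivar are assumptions based on the reasoning above, with manual reference counting assumed):

@interface DraggableTableView : UITableView {
    UITouch *_trackedTouch;   // assigned, not retained
    CGPoint  _startPoint;
}
@end

@implementation DraggableTableView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if (_trackedTouch == nil) {
        _trackedTouch = [touches anyObject];
        _startPoint = [_trackedTouch locationInView:self];
    }
    [super touchesBegan:touches withEvent:event];
}

- (BOOL)touchesShouldCancelInContentView:(UIView *)view {
    CGPoint now = [_trackedTouch locationInView:self];
    CGFloat dx = fabs(now.x - _startPoint.x);
    CGFloat dy = fabs(now.y - _startPoint.y);

    // Mostly horizontal movement: treat it as a drag out of the table and keep
    // delivering touches to the cell instead of starting a scroll.
    if (dx > dy) {
        return NO;
    }
    return YES;  // otherwise let the table scroll as usual
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if ([touches containsObject:_trackedTouch]) _trackedTouch = nil;
    [super touchesEnded:touches withEvent:event];
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    if ([touches containsObject:_trackedTouch]) _trackedTouch = nil;
    [super touchesCancelled:touches withEvent:event];
}

@end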
Unfortunately, in the worst case scenario, both of these techniques only give me 2 sample points, which isn't a very accurate measurement of direction. It would be much better if I could simply delay the table's touch processing or call touchesShouldCancel manually, but as far as I can tell it's either very hacky or outright impossible/illegal to do that.
Specifically, how much time can be expected to elapse between the user touching the screen and something like touchesBegan: being called? (Or something lower level, if such a thing is available.) Sub-millisecond? Multiple milliseconds? Tens?
I'm pretty sure touchesBegan is called very quickly (meaning, with whatever minimal delay exists in the event path). I've noticed in my code that I get a single touchesBegan for a two-fingered touch with both touches already in the list. I assume that I don't touch both fingers down together very precisely but from what I've seen the touchesBegan event is delivered within a few milliseconds. I suspect Apple holds the touches very briefly before passing them along in order to batch them for more efficient handling, possibly also using that delay to filter out accidental touches (sort of like de-bouncing a mechanical switch).
I use a touchesBegan in my view to freeze my scrolling animations and my perception is that the scrolling stops immediately when I touch the screen.
The real delays come from the gesture recognizers. They often have to wait to see if you've moved far enough to signify a pan or if you've held long enough to be holding or if you've released without dragging to signify a tap. Those delays can be substantial of course, though they're still only a fraction of a second in my experience.