How does UIGestureRecognizer work? - cocoa-touch

How does UIGestureRecognizer work internally? Is it possible to emulate it in iOS < 3.2?

If you want a detailed explanation of how they work, it is worth watching this video from last year's WWDC.

See the video Deepak mentions for details, but yes, it is something you can build yourself if you want to.
Be sure to ask yourself a couple of questions first, though: do you want to recreate the entire recognizer "framework", or just be able to recognize, say, a swipe? If the latter, there are tons of examples on the web from pre-3.2 days of detecting swipes using the normal touch event handlers.
If you really want to recreate the framework, you can, and it's actually kind of an interesting exercise. The UIKit objects have some hooks into the event pipeline at earlier stages, but you can get a similar result by tracking the touches yourself and building a pipeline of recognizer objects. If you read the docs on UIGestureRecognizer, you'll see that the state management they use is laid out pretty clearly. You could copy that, and then build your own custom MyPanGestureRecognizer, MySwipeGestureRecognizer, etc., deriving from a MyGestureRecognizer base. You would have some UIView subclass (MyGestureView) that handles all the touches and runs them through its list of MyGestureRecognizers, using the state machine implied in the docs.
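For a sense of the shape this takes, here is a minimal sketch; every name (MyGestureRecognizer, MyGestureState, the thresholds) is hypothetical, and only the states mirror the documented UIGestureRecognizer state machine:

    // Hypothetical recognizer base mirroring the documented states.
    typedef enum {
        MyGestureStatePossible,   // still gathering evidence from touches
        MyGestureStateRecognized, // a discrete gesture (e.g. a swipe) matched
        MyGestureStateFailed      // the touches can no longer match
    } MyGestureState;

    @interface MyGestureRecognizer : NSObject {
        MyGestureState state;
    }
    @property (assign) MyGestureState state;
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
    @end

    @implementation MyGestureRecognizer
    @synthesize state;
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {}
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {}
    @end

    // One concrete recognizer: a rough horizontal-swipe detector.
    @interface MySwipeGestureRecognizer : MyGestureRecognizer {
        CGPoint startPoint;
    }
    @end

    @implementation MySwipeGestureRecognizer
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        startPoint = [[touches anyObject] locationInView:nil];
    }
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
        CGPoint p = [[touches anyObject] locationInView:nil];
        if (fabs(p.y - startPoint.y) > 20.0)       // drifted vertically
            self.state = MyGestureStateFailed;
        else if (fabs(p.x - startPoint.x) > 60.0)  // swept far enough
            self.state = MyGestureStateRecognized;
    }
    @end

The MyGestureView would forward its touchesBegan:/touchesMoved:/touchesEnded: to each recognizer in its list, act on the first one that reaches MyGestureStateRecognized, and reset them all to MyGestureStatePossible when the touch sequence ends.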

Related

Global events for Show desktop, show notification center, etc. in cocoa

For my program, I need to be able to discriminate between users performing certain actions with trackpad gestures and with the corresponding hotkeys. Typically, I need to know when a user shows the desktop, and whether they did it with the associated hotkey or the associated gesture. The same goes for switching spaces, etc.
Basically, I need this for showing Notification Center, application windows, the desktop, Dashboard, etc. Being able to handle hot corners would be a plus.
So far, I was hoping to investigate global event monitors using NSAnyEventMask and reverse engineer a bit to figure out the type of the "Mission Control open" event, but this was not a success. In fact, NSAnyEventMask does not seem to work at all, as my handler is never called (while it is with other masks such as key-down or mouse-moved).
I also had a look at the accessibility features, hoping I could add a relevant AXObserver notification, but did not find anything either. I guess this is not surprising, since the accessibility API describes basic graphical components such as menus, windows, etc., so virtual spaces and Notification Center are not covered by it.
Finally, a CGEvent tap does not seem to handle these events: when I use the function keys to show the desktop, the only events handled by my CGEventTap are the corresponding key-down and key-up events.
I suspect a few possible outcomes:
(1) I have been amazing at trying, but unfortunately handling these events is not possible at all... I seriously doubt this: first, I am far from being an amazing programmer, especially in Cocoa, and second, Apple has shown me that lots of amazing events can be accessed programmatically, and I believe in the power of their APIs.
(2) I tried the right methods but failed because of side factors. This is likely.
(3) Other methods could help me handle these events globally and programmatically (private APIs?).
Thanks a lot for your help!
Just saw this, but it is caused by an error in Apple's implementation of NSAnyEventMask. The docs describe NSAnyEventMask as 0xffffffffU, yet the implementation of NSAnyEventMask is NSUIntegerMax, which is 0xffffffffffffffffU. This is possibly due to the transition from 32-bit to 64-bit machines, which changed NSUInteger from unsigned int to unsigned long. Replacing NSAnyEventMask with 0xffffffffU fixes the problem. I've already reported this as a bug to Apple in the hope that they will fix it.
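For illustration, here is a minimal sketch of that workaround (the handler body is arbitrary; addGlobalMonitorForEventsMatchingMask:handler: is the standard 10.6+ global-monitor API):

    // Pass the documented 32-bit value 0xffffffffU directly instead of
    // the NSUIntegerMax-valued NSAnyEventMask constant.
    id monitor = [NSEvent addGlobalMonitorForEventsMatchingMask:0xffffffffU
                                                        handler:^(NSEvent *event) {
        NSLog(@"global event type: %lu", (unsigned long)[event type]);
    }];
    // Later, when finished observing: [NSEvent removeMonitor:monitor];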

Window list ordered by recently used

I'm trying to create a window switching application. Is there any way of getting a list of the windows of other applications, ordered by recently used?
Start with the Accessibility framework. Many of the hooks for screen readers are also useful here. In particular, look at the UIElementInspector sample and the NSAccessibility protocol.
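As a rough sketch of where that leads (these are real HIServices calls, but the surrounding code is illustrative, and your process must be trusted for accessibility):

    #import <ApplicationServices/ApplicationServices.h>

    // Ask another application (by pid) for its windows via the
    // Accessibility API.
    static void LogWindowsOfApp(pid_t pid) {
        AXUIElementRef app = AXUIElementCreateApplication(pid);
        CFArrayRef windows = NULL;
        if (AXUIElementCopyAttributeValue(app, kAXWindowsAttribute,
                                          (CFTypeRef *)&windows) == kAXErrorSuccess
            && windows != NULL) {
            NSLog(@"pid %d has %ld accessibility windows",
                  pid, (long)CFArrayGetCount(windows));
            CFRelease(windows);
        }
        CFRelease(app);
    }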
There's also Quartz Window Services, which can easily give you a list of all the windows on screen. Unfortunately, it doesn't tie into concepts like window focus (just window level), and I don't know of a way to get notifications from it when levels change. You might do something like tap into the Quartz Event framework to capture Cmd-Tab and the like, but that's complex and fragile. There is unfortunately no good way to convert a CGWindowID into an AXUIElementRef (the post is for 10.5, but I don't know of anything added in 10.6 to improve this). But hopefully you can do everything you need through the Accessibility framework.
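A hedged sketch of the Quartz Window Services call (the option flags are a common choice, not the only one; note the order reflects stacking, not recency of use):

    CFArrayRef windowList = CGWindowListCopyWindowInfo(
        kCGWindowListOptionOnScreenOnly | kCGWindowListExcludeDesktopElements,
        kCGNullWindowID);
    for (NSDictionary *info in (NSArray *)windowList) {
        NSLog(@"%@: %@",
              [info objectForKey:(NSString *)kCGWindowOwnerName],
              [info objectForKey:(NSString *)kCGWindowName]);
    }
    CFRelease(windowList);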
You would want to use
[[NSWorkspace sharedWorkspace] runningApplications]
to get a list of all the running applications, and watch NSWorkspace's application-activation notifications
to know when the user switches to a new application, so you can keep track of which one was used most recently.
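A hedged sketch of that bookkeeping (the notification and key are real AppKit; the MRU list itself is assumed to be an NSMutableArray ivar named mruApps):

    // In your controller's init or awakeFromNib:
    [[[NSWorkspace sharedWorkspace] notificationCenter]
        addObserver:self
           selector:@selector(appActivated:)
               name:NSWorkspaceDidActivateApplicationNotification
             object:nil];

    // Move each newly activated app to the head of the MRU list:
    - (void)appActivated:(NSNotification *)note {
        NSRunningApplication *app =
            [[note userInfo] objectForKey:NSWorkspaceApplicationKey];
        [mruApps removeObject:app];
        [mruApps insertObject:app atIndex:0];
    }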

Sample Mac Firefox plugins?

I'm trying to re-write an old image-viewing plugin for the Mac. The old version uses QuickDraw (I said it was old) and resources (really, really old), so it doesn't work in Firefox 3.6 (which is why I'm re-writing it).
I know some Objective-C, so I figure I'm going to re-write this using new-fangled Mac routines and nibs, etc. However, I don't know how to start. I've got the BasicPlugin example that comes with the Mozilla source, so I know how to create a plugin with entry points, etc. However, I don't know how to create the nib, or how to interface Obj-C with the entry points, etc.
Does anyone know of a more advanced sample for the Mac than BasicPlugin.bundle? (Preferably simple enough that I can just look at it and understand it...)
Thanks.
Sadly, I don't really know of any good "intermediate" example. However, integrating Obj-C isn't that difficult, so here is a short overview of what needs to be done.
You can use Obj-C and C/C++ sources in the same project; it's just advisable to keep them separated to some extent. For example, you can keep the source files with the entry points and other NPAPI interfacing as plain C or C++, and forward calls into the plugin from there.
Opaque pointers help to keep a clean separation; see e.g. here.
The main changes to your plugin involve switching to different drawing and event models. These have to be negotiated in NPP_New(); here is an example for the drawing model. When using Cocoa, and to support 64-bit environments, you need to use the Cocoa event model.
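A sketch of that negotiation (the symbols are real NPAPI; error handling is trimmed and the rest of NPP_New is elided):

    NPError NPP_New(NPMIMEType type, NPP instance, uint16_t mode,
                    int16_t argc, char *argn[], char *argv[],
                    NPSavedData *saved)
    {
        // Negotiate Core Graphics drawing...
        NPBool hasCG = FALSE;
        NPN_GetValue(instance, NPNVsupportsCoreGraphicsBool, &hasCG);
        if (!hasCG)
            return NPERR_INCOMPATIBLE_VERSION_ERROR;
        NPN_SetValue(instance, NPPVpluginDrawingModel,
                     (void *)NPDrawingModelCoreGraphics);

        // ...and the Cocoa event model (required for 64-bit).
        NPBool hasCocoa = FALSE;
        NPN_GetValue(instance, NPNVsupportsCocoaBool, &hasCocoa);
        if (!hasCocoa)
            return NPERR_INCOMPATIBLE_VERSION_ERROR;
        NPN_SetValue(instance, NPPVpluginEventModel,
                     (void *)NPEventModelCocoa);

        return NPERR_NO_ERROR;
    }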
To draw UI elements, you should be able to create an NSGraphicsContext from the CGContextRef and then draw an NSView into that context. See also the details provided in this post and its follow-ups.
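Roughly like this (graphicsContextWithGraphicsPort:flipped: is the real AppKit bridge; cgContext, myView and the flipped flag are assumptions that depend on your plugin):

    // Inside your NPP_HandleEvent handling of the draw event, given the
    // CGContextRef supplied by the browser:
    NSGraphicsContext *ctx =
        [NSGraphicsContext graphicsContextWithGraphicsPort:cgContext
                                                   flipped:YES];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:ctx];
    [myView displayRectIgnoringOpacity:[myView bounds] inContext:ctx];
    [NSGraphicsContext restoreGraphicsState];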

Using an NSTreeController and NSOutlineView with Drag and Drop

I have found a tutorial here on how to implement drag and drop in an outline view. The only problem is that I don't know where to put the code from the tutorial. I would appreciate it greatly if you could tell me where the code should go in an Xcode project to make it work. Thanks!
You might want to check out this tutorial as well (there is also a part two, which covers unordered trees).
In particular, the linked tutorial contains an Xcode project that should get you started. Check out DragController.m to see where to put the code you referenced with your link.
Apple has released sample code explaining how to do it: http://developer.apple.com/library/mac/#samplecode/DragNDropOutlineView/Introduction/Intro.html
I found this much better than all the other samples I've found on the internet.
They're delegate/data source methods, so you put them in the outline view's delegate and data source. Usually this is your controller object, but it's up to you to hook up the connections in IB or programmatically. I'd actually suggest learning how data source and delegate methods work before using bindings or Core Data, since bindings aren't meant to replace knowledge of lower-level code (and you're going to run into a lot of problems with bindings until you have a solid understanding of the basics).
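For orientation, here's a hedged sketch of the methods in question (the pasteboard type name is made up, and you would register it with registerForDraggedTypes: first):

    // NSOutlineView data source methods for drag and drop.
    - (BOOL)outlineView:(NSOutlineView *)outlineView
             writeItems:(NSArray *)items
           toPasteboard:(NSPasteboard *)pboard
    {
        [pboard declareTypes:[NSArray arrayWithObject:@"MyRowsPboardType"]
                       owner:self];
        // Serialize `items` onto the pasteboard as your model requires.
        return YES;
    }

    - (NSDragOperation)outlineView:(NSOutlineView *)outlineView
                      validateDrop:(id <NSDraggingInfo>)info
                      proposedItem:(id)item
                proposedChildIndex:(NSInteger)index
    {
        return NSDragOperationMove;
    }

    - (BOOL)outlineView:(NSOutlineView *)outlineView
             acceptDrop:(id <NSDraggingInfo>)info
                   item:(id)item
             childIndex:(NSInteger)index
    {
        // Read the pasteboard and update your model / tree controller here.
        return YES;
    }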
Also, keep in mind that NSTreeController has improved a bit since 10.5; from what I've heard, you should be able to get the real observed object without using private methods anymore.

How do I create Cocoa interfaces without Interface Builder?

I would prefer to create my interfaces programmatically. It seems as if all the docs on Apple Developer assume you're using Interface Builder. Is it possible to create these interfaces programmatically, and if so, where do I start learning how to do it?
I thought the relevant document, if one exists, would be in this section: http://developer.apple.com/referencelibrary/Cocoa/idxUserExperience-date.html
I like the question, and I'd also like to know of resources for going IB-less. Usefulness (the "why") is limited only by imagination. Off the top of my head, here are some possible reasons to program UIs explicitly:
Implementing a better Interface Builder.
Programming dynamic UIs, i.e., ones whose structure is not knowable statically (at compile/Xcode time).
Implementing the Cocoa back-end of a cross-platform library or language for UIs.
There is a series of blog posts on working without a nib, and a recent description by Michael Mucha on cocoa-dev.
I would prefer to create my interfaces programmatically.
Why? Interface Builder is easier and faster. You can't write a typo by drag and drop, and you don't get those oh-so-handy Aqua guides when you're typing rectangles by hand.
Don't fight it. Interface Builder is your friend. Let it help you.
If you insist on wasting your own time and energy by writing your UI in code:
Not document-based (generally library-based, like Mail, iTunes, iPhoto): Create a subclass of NSObject, instantiate it, make it the application's delegate, and in the delegate's applicationDidFinishLaunching: method create a window, populate it with views, and order it front (see the sketch after these two cases).
Document-based (like TextEdit, Preview, QuickTime Player): In the makeWindowControllers method of your NSDocument subclass, create your windows (and populate them with views) and create window controllers for them, making sure to send yourself addWindowController: for each window controller.
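For the non-document-based case, a minimal sketch (all names are mine; the style-mask constants are the pre-10.12 spellings, and memory management is manual retain/release):

    #import <Cocoa/Cocoa.h>

    @interface AppDelegate : NSObject <NSApplicationDelegate> {
        NSWindow *window;
    }
    @end

    @implementation AppDelegate
    - (void)applicationDidFinishLaunching:(NSNotification *)note {
        // Create the window entirely in code, no nib involved.
        window = [[NSWindow alloc]
            initWithContentRect:NSMakeRect(200.0, 200.0, 480.0, 320.0)
                      styleMask:(NSTitledWindowMask | NSClosableWindowMask |
                                 NSResizableWindowMask)
                        backing:NSBackingStoreBuffered
                          defer:NO];
        [window setTitle:@"Built without a nib"];

        // Populate it with views...
        NSButton *button = [[[NSButton alloc]
            initWithFrame:NSMakeRect(190.0, 144.0, 100.0, 32.0)] autorelease];
        [button setBezelStyle:NSRoundedBezelStyle];
        [button setTitle:@"Click"];
        [[window contentView] addSubview:button];

        // ...and order it front.
        [window makeKeyAndOrderFront:nil];
    }
    @end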
As a completely blind developer, I can say that IB is not compatible with VoiceOver (the built-in screen reader on OS X).
This means that without robust documentation on using Cocoa without IB, I cannot develop apps for OS X / iPhone in Cocoa, which means I (ironically) cannot easily develop apps that are accessible to the blind (and everyone else) on OS X / iOS.
My current solution, which I would prefer not to use, is Java + SWT. Of course, this works for OS X, not so much for iOS.
In fact, IB becomes useless when you start writing your own UI classes. Say you create your own button that uses a skin system based on a plist, or a dynamic toolbar that loads and unloads items based on the user's selection.
IB doesn't accept custom UI elements, so more complex UIs can't use it. And yes, you will want to do more complex things than what UIKit gives you.
Though this is quite a bit old...
I have tried many times to do everything programmatically only. It is hard, but possible.
Update:
I posted another question on this specific issue: View-based NSOutlineView without NIB?, and now I believe everything can be done programmatically, but it's incredibly hard without consulting Apple engineers, due to the lack of information and examples.
The argument below might be off-topic, but I'd like to note why I strongly prefer the programmatic way:
A static layout tool cannot handle anything dynamic.
Reproducing the same UI state across multiple NIBs is hard. Everything is implicit or hidden; you need to visit all the panels to find the parameters. This kind of work is very mistake-prone.
Managing consistent state is hard, because reproducing the same look is hard.
Automation is impossible. You cannot make an auto-generated input form.
Parameter indirection, such as a variable element size chosen by the user, is not possible.
Aiming at a small point is a lot harder than hitting finger-sized keys at a fixed location; funny that this is a serious usability issue for developers!
IB sometimes screws up: the file still compiles and works, but when I open the source it looks broken and further editing becomes impossible. (You may not have experienced this yet, but if a XIB file grows complex, it will happen.)
It's image-based serialization. The concept is good, but the problem is that it is image-based only. IB doesn't keep source code that could be replayed for a clean boot, and a clean boot is very important for guaranteeing a specific running state. Also, we cannot fix bugs at the source-code level; they just pile up indefinitely. This is the core reason we cannot reproduce an equal (not merely similar-looking) UI state in IB.
Of course, these issues can be solved by post-processing the NIB's UI, but if we have to configure everything again, there's no reason to use IB in the first place.
With textual code, it's easy to reproduce the same state: just copy the code. It's also easy to inspect and fix the wrong part, because we have full control. In IB, we have no control over the hard-core details.
IB can't be the ultimate solution. It's like Photoshop, but even Photoshop offers a text-based scripting facility. A GUI is a running program, not a static image or graphic; the IB approach is the wrong one even for visual editing of a GUI. If you're one of the Apple folks reading this, I beg you to remove the whole dependency on IB completely, ASAP.