I have found a tutorial here on how to implement drag and drop in an outline view. The only problem is that I don't know where to put the code from the tutorial. I would appreciate it greatly if you could tell me where the code should go in an Xcode project to make it work. Thanks!
You might want to check out this tutorial as well (there is also a part two which details unordered trees).
In particular, the linked tutorial contains an Xcode project that should get you started. Check out DragController.m to see where the code you referenced goes.
Apple has released sample code explaining how to do it: http://developer.apple.com/library/mac/#samplecode/DragNDropOutlineView/Introduction/Intro.html
I found this much better than all the other samples I've found on the internet.
They're delegate/data source methods, so you put them into the outline view's delegate and data source. Usually this is your controller object, but it's up to you to hook up the connections in IB or programmatically. I'd actually suggest learning how data source and delegate methods work before using bindings or Core Data, since bindings aren't meant to replace knowledge of lower-level code (and you're going to run into a lot of problems with bindings until you have a solid understanding of the basics).
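For a concrete picture, here's a minimal sketch of those data source methods on a controller. The pasteboard type, the `outlineView` outlet, and the model handling are placeholders (not taken from the tutorial), and it assumes your model objects conform to NSCoding:

```objc
static NSString * const MyDragType = @"MyDragType"; // hypothetical custom type

// In the controller that is the outline view's data source:
- (void)awakeFromNib {
    // The outline view only sends drop messages for types you register.
    [outlineView registerForDraggedTypes:[NSArray arrayWithObject:MyDragType]];
}

// Called when a drag starts: write the dragged items to the pasteboard.
- (BOOL)outlineView:(NSOutlineView *)ov writeItems:(NSArray *)items
       toPasteboard:(NSPasteboard *)pboard {
    [pboard declareTypes:[NSArray arrayWithObject:MyDragType] owner:self];
    [pboard setData:[NSKeyedArchiver archivedDataWithRootObject:items]
            forType:MyDragType];
    return YES;
}

// Called while the drag hovers: decide whether this drop target is valid.
- (NSDragOperation)outlineView:(NSOutlineView *)ov
                  validateDrop:(id <NSDraggingInfo>)info
                  proposedItem:(id)item
            proposedChildIndex:(NSInteger)index {
    return NSDragOperationMove;
}

// Called on drop: update the model, then reload the view.
- (BOOL)outlineView:(NSOutlineView *)ov acceptDrop:(id <NSDraggingInfo>)info
               item:(id)item childIndex:(NSInteger)index {
    NSData *data = [[info draggingPasteboard] dataForType:MyDragType];
    NSArray *dropped = [NSKeyedUnarchiver unarchiveObjectWithData:data];
    // ...insert `dropped` under `item` at `index` in your model here...
    [ov reloadData];
    return YES;
}
```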
Also, keep in mind NSTreeController has improved a bit since 10.5; from what I've heard, you should be able to get the real observed object without using private methods anymore.
I'm an Objective-C newbie. Most of my experience is in Java. Also, I've never really used Xcode before and so I'm pretty new at that as well.
I'm trying to create a simple, single-view Quartz OS X app (not iOS) to display agent-modeling simulations. The graphics are pretty simple; just colored squares and grids. I have been looking at Quartz tutorials and I can see how I could accomplish this (as far as drawing things are concerned). What I can't find is an example that tells me how to tie it all together. What do I put in AppDelegate? Do I need a WindowController? How do I link that up with AppDelegate? I got as far as creating a Quartz Composer View in Interface Builder for my app, but I have no idea where to go from there.
As I mentioned, I've looked at numerous tutorials, but I can't find anything that explains how to link everything together.
You should visit this web page before you do anything else. It will show you how a Cocoa application is structured and where the appropriate entry points are to place your code.
While the entire article merits reading, visit the section "Entry and Exit Points," which best addresses your particular questions.
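If plain Quartz 2D drawing is enough for colored squares and grids, a custom NSView may be simpler than a Quartz Composer view. Here is a minimal sketch of how the pieces can connect; GridView, the checkerboard coloring, and the window outlet are my own assumptions, not from the article:

```objc
#import <Cocoa/Cocoa.h>

// A custom view that draws the simulation with plain Quartz calls.
@interface GridView : NSView
@end

@implementation GridView
- (void)drawRect:(NSRect)dirtyRect {
    CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
    CGFloat cell = 20.0;
    for (NSUInteger row = 0; row < 10; row++) {
        for (NSUInteger col = 0; col < 10; col++) {
            // In a real app the color would come from your simulation state;
            // a checkerboard stands in for it here.
            if ((row + col) % 2 == 0)
                CGContextSetRGBFillColor(ctx, 0.2, 0.6, 0.9, 1.0);
            else
                CGContextSetRGBFillColor(ctx, 0.9, 0.9, 0.9, 1.0);
            CGContextFillRect(ctx, CGRectMake(col * cell, row * cell, cell, cell));
        }
    }
}
@end

// The app delegate installs the view once the app is up.
@interface AppDelegate : NSObject <NSApplicationDelegate> {
    IBOutlet NSWindow *window;  // connected in MainMenu.xib
}
@end

@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    GridView *grid = [[GridView alloc] initWithFrame:[[window contentView] bounds]];
    [grid setAutoresizingMask:(NSViewWidthSizable | NSViewHeightSizable)];
    [window setContentView:grid];
    [grid release];
}
@end
```

Call [gridView setNeedsDisplay:YES] whenever your simulation advances a step, and drawRect: will redraw the grid from the new state.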
How does UIGestureRecognizer work internally? Is it possible to emulate it in iOS < 3.2?
If you want a detailed explanation on how they work, it is worth watching this video from last year's WWDC.
See the video Deepak mentions for details, but yes, it is something you can build yourself if you want to.
Be sure to ask yourself a couple of questions first, though: do you want to recreate the entire recognizer "framework", or just be able to recognize, say, a swipe? If the latter, there should be tons of examples on the web from the pre-3.2 days of detecting swipes using the normal touch event handlers.
If you really want to recreate the framework, you can, and it's actually kind of an interesting exercise. The UIKit object does have some hooks into the event pipeline at earlier stages, but you can get a similar result by tracking the touches and building a pipeline of recognizer objects. If you read the docs on UIGestureRecognizer, you'll see that the state management they use is pretty clearly laid out. You could copy that, and then just build your own custom MyPanGestureRecognizer, MySwipeGestureRecognizer, etc., that derive from a MyGestureRecognizer base. You should have some UIView subclass (MyGestureView) that handles all the touches and runs through its list of MyGestureRecognizers, using the state machine that's implied in the docs.
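To make that shape concrete, here's a bare-bones sketch; every name in it is hypothetical, and the states are modeled loosely on the documented UIGestureRecognizer state machine:

```objc
#import <UIKit/UIKit.h>

// Hypothetical states, loosely mirroring UIGestureRecognizer's documented machine.
typedef enum {
    MyGestureStatePossible,   // still collecting touches
    MyGestureStateRecognized, // gesture matched; fire the action
    MyGestureStateFailed      // touches ruled this gesture out
} MyGestureState;

@interface MyGestureRecognizer : NSObject {
    MyGestureState state;
    id target;
    SEL action;
}
@property (nonatomic, assign) MyGestureState state;
- (id)initWithTarget:(id)t action:(SEL)a;
- (void)reset;
- (void)recognize;
// The owning view forwards its touch events into these; subclasses override.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
@end

@implementation MyGestureRecognizer
@synthesize state;
- (id)initWithTarget:(id)t action:(SEL)a {
    if ((self = [super init])) { target = t; action = a; state = MyGestureStatePossible; }
    return self;
}
- (void)reset { state = MyGestureStatePossible; }
- (void)recognize {
    state = MyGestureStateRecognized;
    [target performSelector:action withObject:self];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {}
@end

// One concrete recognizer: a rightward swipe.
@interface MySwipeGestureRecognizer : MyGestureRecognizer {
    CGPoint startPoint;
}
@end

@implementation MySwipeGestureRecognizer
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    startPoint = [[touches anyObject] locationInView:nil];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if (self.state != MyGestureStatePossible) return;
    CGPoint end = [[touches anyObject] locationInView:nil];
    // Mostly-horizontal movement of 40+ points counts as a swipe here.
    if (end.x - startPoint.x > 40.0 && fabs(end.y - startPoint.y) < 20.0)
        [self recognize];
    else
        self.state = MyGestureStateFailed;
}
@end
```

A MyGestureView (a UIView subclass) would then forward its own touchesBegan:/Moved:/Ended: to every recognizer in its list, skipping the failed ones, and reset them all once the touches end.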
I would like to make a patch-bay-type control. Does anyone know of any source online that I could work from?
Thanks
The closest thing I'm aware of is EFLaceView: http://www.cocoadev.com/index.pl?FlowChartView
Edit: EFLaceView seems to have disappeared, but I have a saved copy: EFLaceView
Edit: A version of EFLaceView is on GitHub, with more recent changes than the link above.
Unless you can find someone online who is sharing exactly the kind of control you're looking for, you have no choice but to build it yourself. For that, you need to understand Control and Cell Programming and Cocoa Drawing, and to create your own custom view.
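As a rough starting point for the custom-view route, here's a sketch that draws a single patch cable between two fixed ports with Cocoa drawing. It's purely illustrative (the names and coordinates are made up); the real work is hit-testing, dragging, and modeling the connections:

```objc
#import <Cocoa/Cocoa.h>

// Illustrative only: draws one patch cable between two hard-coded ports.
@interface PatchBayView : NSView
@end

@implementation PatchBayView
- (void)drawRect:(NSRect)dirtyRect {
    NSPoint from = NSMakePoint(40.0, 40.0);
    NSPoint to = NSMakePoint(200.0, 160.0);

    // Ports as small filled circles.
    [[NSColor darkGrayColor] set];
    [[NSBezierPath bezierPathWithOvalInRect:NSMakeRect(from.x - 5, from.y - 5, 10, 10)] fill];
    [[NSBezierPath bezierPathWithOvalInRect:NSMakeRect(to.x - 5, to.y - 5, 10, 10)] fill];

    // The cable as a sagging Bezier curve.
    NSBezierPath *cable = [NSBezierPath bezierPath];
    [cable moveToPoint:from];
    [cable curveToPoint:to
          controlPoint1:NSMakePoint(from.x + 60.0, from.y - 40.0)
          controlPoint2:NSMakePoint(to.x - 60.0, to.y - 40.0)];
    [cable setLineWidth:3.0];
    [[NSColor orangeColor] set];
    [cable stroke];
}
@end
```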
I'm trying to rewrite an old image-viewing plugin for the Mac. The old version uses QuickDraw (I said it was old) and resources (really, really old), so it doesn't work in Firefox 3.6 (which is why I'm rewriting it).
I know some Objective-C, so I figure I'm going to rewrite this in that, using new-fangled Mac routines and nibs, etc. However, I don't know how to start. I've got the BasicPlugin example that comes with the Mozilla source, so I know how to create a plugin with entry points, etc. However, I don't know how to create the nib or how to interface Objective-C with the entry points.
Does anyone know of a more advanced sample for the Mac than BasicPlugin.bundle? (Preferably simple enough that I can just look at it and understand it...)
Thanks.
Sadly, I don't really know of any good "intermediate" example. However, integrating Obj-C isn't that difficult, so here's a short overview of what needs to be done.
You can use Obj-C and C/C++ sources in the same project; it's just advisable to keep them separated to some extent. For example, you can keep the files with the entry points and the other NPAPI interfacing as plain C or C++, and forward calls into the plugin from there.
Opaque pointers help to keep a clean separation; see e.g. here.
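A sketch of that pattern, with made-up names: the C side sees only a forward-declared struct, while the Objective-C file defines it and does the real work.

```c
/* PluginShim.h -- shared between the C entry points and the Obj-C code. */
typedef struct PluginState PluginState;  /* opaque to the C side */
PluginState *PluginCreate(void);
void PluginDraw(PluginState *state /*, CGContextRef ctx, ... */);
void PluginDestroy(PluginState *state);
```

```objc
/* PluginShim.m -- compiled as Objective-C; defines the opaque struct. */
#import <Cocoa/Cocoa.h>
#import "PluginShim.h"

struct PluginState {
    NSImage *image;   // whatever Obj-C state the plugin needs
};

PluginState *PluginCreate(void) {
    return calloc(1, sizeof(PluginState));
}

void PluginDraw(PluginState *s) {
    // Forward into Cocoa here, e.g. draw s->image into the current context.
}

void PluginDestroy(PluginState *s) {
    [s->image release];
    free(s);
}
```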
The main changes to your plugin involve switching to different drawing and event models. These have to be negotiated in NPP_New(); here is an example for the drawing model. When using Cocoa, and to support 64-bit environments, you need to use the Cocoa event model.
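The negotiation itself looks roughly like this inside NPP_New(), using the standard NPAPI constants (error handling elided):

```c
/* Ask the browser what it supports, then pick the models. */
NPBool supportsCoreGraphics = FALSE;
NPBool supportsCocoaEvents = FALSE;

NPN_GetValue(instance, NPNVsupportsCoreGraphicsBool, &supportsCoreGraphics);
NPN_GetValue(instance, NPNVsupportsCocoaBool, &supportsCocoaEvents);
if (!supportsCoreGraphics || !supportsCocoaEvents)
    return NPERR_INCOMPATIBLE_VERSION_ERROR;

/* Core Graphics drawing + Cocoa events: required for 64-bit plugins. */
NPN_SetValue(instance, NPPVpluginDrawingModel, (void *)NPDrawingModelCoreGraphics);
NPN_SetValue(instance, NPPVpluginEventModel, (void *)NPEventModelCocoa);
```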
To draw UI elements you should be able to use an NSGraphicsContext created from the CGContextRef and then draw an NSView into that context. See also the details provided in this post and its follow-ups.
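Bridging the browser-supplied CGContextRef into Cocoa drawing can look like this (cgContext, pluginWidth, and pluginHeight are placeholders filled in from the draw event):

```objc
// Wrap the browser-supplied CGContextRef so Cocoa drawing lands in it.
NSGraphicsContext *gc = [NSGraphicsContext graphicsContextWithGraphicsPort:cgContext
                                                                   flipped:YES];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:gc];

// Any AppKit drawing now goes into the plugin's context, e.g.:
[[NSColor whiteColor] set];
NSRectFill(NSMakeRect(0, 0, pluginWidth, pluginHeight));

[NSGraphicsContext restoreGraphicsState];
```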
I would prefer to create my interfaces programmatically. It seems as if all the docs on Apple Developer assume you're using Interface Builder. Is it possible to create these interfaces programmatically, and if so, where do I start learning how to do this?
I thought the relevant document for this, if it exists, would be in this section: http://developer.apple.com/referencelibrary/Cocoa/idxUserExperience-date.html
I like the question, and I'd also like to know of resources for going IB-less. Usefulness (the "why") is limited only by imagination. Off the top of my head, here are some possible reasons to program UIs explicitly:
Implementing a better Interface Builder.
Programming dynamic UIs, i.e., ones whose structure is not knowable statically (at compile/xcode time).
Implementing the Cocoa back-end of a cross-platform library or language for UIs.
There is a series of blog posts on working without a nib and a recent description by Michael Mucha on cocoa-dev.
I would prefer to create my interfaces programmatically.
Why? Interface Builder is easier and faster. You can't write a typo by drag and drop, and you don't get those oh-so-handy Aqua guides when you're typing rectangles by hand.
Don't fight it. Interface Builder is your friend. Let it help you.
If you insist on wasting your own time and energy by writing your UI in code:
Not document-based (generally library-based, like Mail, iTunes, iPhoto): Create a subclass of NSObject, instantiate it, and make it the application's delegate; then, in the delegate's applicationDidFinishLaunching: method, create a window, populate it with views, and order it front (sketched below).
Document-based (like TextEdit, Preview, QuickTime Player): In the makeWindowControllers method in your subclass of NSDocument, create your windows (and populate them with views) and create window controllers for them, making sure to send yourself addWindowController: for each window controller.
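For the first (non-document-based) case, a minimal sketch might look like this; it assumes you're willing to skip nibs entirely, including the main one, and the window title and button are just placeholders:

```objc
#import <Cocoa/Cocoa.h>

@interface AppDelegate : NSObject <NSApplicationDelegate> {
    NSWindow *window;
}
@end

@implementation AppDelegate
- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    window = [[NSWindow alloc]
        initWithContentRect:NSMakeRect(200.0, 200.0, 480.0, 320.0)
                  styleMask:(NSTitledWindowMask | NSClosableWindowMask |
                             NSResizableWindowMask)
                    backing:NSBackingStoreBuffered
                      defer:NO];
    [window setTitle:@"Programmatic UI"];

    // Populate the window with views; a single button as a stand-in.
    NSButton *button = [[NSButton alloc] initWithFrame:NSMakeRect(20.0, 20.0, 120.0, 32.0)];
    [button setTitle:@"Click me"];
    [button setBezelStyle:NSRoundedBezelStyle];
    [[window contentView] addSubview:button];
    [button release];

    [window makeKeyAndOrderFront:nil];
}
@end

int main(int argc, char *argv[]) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSApplication *app = [NSApplication sharedApplication];
    [app setDelegate:[[AppDelegate alloc] init]];
    [app run];
    [pool drain];
    return 0;
}
```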
As a completely blind developer I can say that IB is not compatible with VoiceOver (the built-in screen-reader on OS X).
This means that without access to robust documentation on using Cocoa without IB I cannot develop apps for OS X / iPhone in Cocoa, which means I (ironically) cannot easily develop apps that are accessible to the blind (and all others) on OS X / iOS.
My current solution, which I would prefer not to use, is Java + SWT; of course, this works for OS X, not so much for iOS.
In fact, IB becomes nearly useless when you start to write your own UI classes. Say you create your own button that uses a skin system based on a plist, or a dynamic toolbar that loads and unloads items based on the user's selection.
IB doesn't accept custom UI elements, so more complex UIs can't use it. And yes, you will want to do more complex things than UIKit gives you.
Though this is quite a bit old...
I have tried many times to do everything programmatically. It is hard, but possible.
Update:
I posted another question for this specific issue: View-based NSOutlineView without NIB?, and now I believe everything can be done programmatically, but it's incredibly hard without consulting Apple engineers, due to the lack of information and examples.
The argument below might be off-topic, but I'd like to note why I strongly prefer the programmatic way:
A static layout tool cannot handle anything dynamic.
Reproducing the same UI state across multiple NIBs is hard. Everything is implicit or hidden; you need to visit all the inspector panels to find the parameters. This kind of work is very mistake-prone.
Managing consistent state is hard, because reproducing the same look is hard.
Automation is impossible; you cannot auto-generate an input form (see the sketch after this list).
Parameter indirection, such as an element size chosen by the user at runtime, is not possible.
Aiming at a small point is a lot harder than hitting finger-sized keys at a fixed location; funny that this is a serious usability issue for developers!
IB sometimes breaks: the file still compiles and works, but when I open the source it looks broken and further editing becomes impossible. (You may not have experienced this yet, but it tends to happen once a XIB file gets complex.)
It's image-based serialization. The concept is good, but the problem is that it's image-based only: IB doesn't keep source code that could replay a clean boot of the UI, and a clean boot is very important for guaranteeing a specific running state. Also, we cannot fix bugs at the source-code level, so they just pile up indefinitely. This is the core reason we cannot reproduce an equal (not just similar-looking) UI state in IB.
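To illustrate the automation point above, here's a rough sketch of an auto-generated input form, done entirely in code, which a static nib cannot express (contentView stands for whatever view you are populating; the field names are just examples):

```objc
// Build a label + text field pair for each field name, entirely in code.
NSArray *fields = [NSArray arrayWithObjects:@"Name", @"Email", @"Phone", nil];
CGFloat y = 20.0;
for (NSString *name in fields) {
    NSTextField *label = [[NSTextField alloc] initWithFrame:NSMakeRect(20, y, 80, 22)];
    [label setStringValue:name];
    [label setEditable:NO];
    [label setBordered:NO];
    [label setDrawsBackground:NO];

    NSTextField *input = [[NSTextField alloc] initWithFrame:NSMakeRect(110, y, 200, 22)];

    [contentView addSubview:label];
    [contentView addSubview:input];
    [label release];
    [input release];
    y += 30.0;  // stack the rows upward
}
```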
Of course, these issues can be worked around by post-processing the NIB-built UI, but if we have to configure everything again in code, there's no reason to use IB in the first place.
With textual code, it's easy to reproduce the same state: just copy the code. It's also easy to inspect and fix the wrong part, because we have full control. But in IB, we have no control over the low-level details.
IB can't be the ultimate solution. It's like Photoshop, but even Photoshop offers a text-based scripting facility. A GUI is a running program, not a static image or graphic. The IB approach is wrong even for the visual editing of GUIs. If you're one of the Apple folks reading this, I beg you to remove the hard dependency on IB ASAP.