I've been considering an app that implements a drag-and-drop sort of idiom, dragging from maybe a side pane or a drawer. What I can't wrap my head around is how to keep references to the objects I drop. I mean, it would be easy if I could just drop the object and leave it alone, but I want more manipulation after the fact.
My brain just cannot wrap around the concept of creating objects out of thin air to place on the 'canvas', or of having preset objects already on the canvas (which I imagine would be limited, cumbersome, and awkward) that would then just be activated and manipulated, since the references to them would be created before the fact. (My apologies for the loose term 'reference'; I mean something like selecting the object and having its unique properties recognized or displayed.)
There must be something I'm missing. So, I wonder how one might implement drag and drop with an interface for manipulating the dropped object after the fact. Maybe sample code, or a link to a git or svn repo? (Something like how MIT's Scratch or Xcode's Interface Builder works.)
For clarity's sake: I know how to fiddle with drag and drop thanks to DragKit, but not how to edit 'properties' on the object dropped onto the 'canvas'. And I would like there to be a near-infinite number of objects that can be dropped on the canvas, yet a fixed set of items in the drawer/side view.
If I'm understanding your question correctly, you want to be able to drag objects onto a canvas and then manipulate their properties individually. For instance, you'd drag square views onto the screen and then increase their size or change their color.
To do something like that, I would have an NSMutableArray or an NSMutableSet hold all of the on-canvas objects, creating each object on demand as it is dropped. Then, when any interaction comes, you could dynamically attach gesture recognizers (if the objects are UIViews or a subclass) and, in the gesture recognizer's target, grab the touched object from the recognizer.view property.
Alternatively, you would have to check which object on the canvas you are currently manipulating, by iterating through the array and seeing which object equals the one you are touching.
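A rough sketch of the gesture-recognizer approach, assuming the canvas objects are plain UIViews; canvasObjects, canvasView, and the method names are all illustrative:
- (void)dropObjectAtPoint:(CGPoint)point {
    // Create the object "out of thin air" at drop time...
    UIView *dropped = [[UIView alloc] initWithFrame:CGRectMake(point.x, point.y, 44, 44)];
    // ...and attach a recognizer so it can be selected later.
    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(objectTapped:)];
    [dropped addGestureRecognizer:tap];
    [tap release];
    [canvasObjects addObject:dropped]; // keep a reference for later manipulation
    [canvasView addSubview:dropped];
    [dropped release];
}

- (void)objectTapped:(UITapGestureRecognizer *)recognizer {
    UIView *selected = recognizer.view; // the dropped object the user touched
    // ...display or edit its properties here...
}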
Is there anything you are trying to do that is not working? Have you written any code in an attempt to do this?
I have always found it hard to decide whether I should create a single object and pass it to every object that needs it, or create a new instance for every object that needs one.
Some cases are obvious, like entity objects which are read-only after allocation and instantiation. You can pass the same object anywhere and not worry about one object modifying it and ruining things for the other objects holding a reference to it.
My problem mainly lies with objects that present themselves to the user: objects like CCSprite in cocos2d, NSMenuItem in Cocoa (AppKit), and other objects with a visual representation.
Examples:
In Cocoa, I know that I have to create an NSMenu for each NSPopUpButton, so that the selection for one button does not affect the rest. But what about the NSMenuItems contained within? Can I create a single set and pass it to all the menus? What are your grounds; how did you come to such a conclusion?
Another example:
In cocos2d, and in almost all GUI-based applications, you can pass a button two images/sprites/etc., one for the normal state and one for the selected (highlighted, pressed, clicked) state. Can I pass the same image/sprite/etc. for both, making them point to the same object?
I hope this is not an implementation-dependent issue (that it ultimately depends on the implementation), since I face it in a lot of my programming projects: in Cocoa, cocos2d, Java, etc.
I suggest creating new instances unless doing so causes a performance problem. Sharing an NSMenuItem instance among many NSMenus makes it more difficult to maintain control over that instance, which increases the risk of errors.
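For example (names are illustrative), giving each popup its own menu and items is only a few lines. AppKit effectively forces this anyway: an NSMenuItem can only belong to one menu at a time.
// Illustrative: build a fresh NSMenu and fresh items for each popup,
// rather than sharing one set of NSMenuItems across buttons.
NSMenu *menu = [[[NSMenu alloc] initWithTitle:@"Choices"] autorelease];
for (NSString *title in titles) {
    [menu addItemWithTitle:title action:@selector(choiceSelected:) keyEquivalent:@""];
}
[popUpButton setMenu:menu];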
I want to add a button that, when pressed, will lock two sliders together so that their values are always the same.
I have a solution for this using code, but I'm wondering if there is a way to do this with interface builder alone.
I am worried that, with the code-based solution, one slider may lag behind the other under high CPU utilization.
No, there is no way to do this with Interface Builder alone.
Actually, everything becomes code in the end. As far as I understand, Interface Builder was built to improve development time, not necessarily to improve performance. I found this interesting quote on Apple's site about nibs:
Xcode works in conjunction with these frameworks to help you connect the controls of your user interface to the objects in your project that respond to those controls.
Taking that into account, everything will become code (at some level). About nib files:
At runtime, these descriptions are used to recreate the objects and their configuration inside your application. When you load a nib file at runtime, you get an exact replica of the objects that were in your Xcode document. The nib-loading code instantiates the objects, configures them, and reestablishes any inter-object connections that you created in your nib file.
If you really wanted to avoid such behavior, probably the best you could do is create the widget from scratch, but that would be a totally different question.
Just curious, why wouldn't you want to use code?
Locking the two sliders together in IB is easy, and I've never seen lag. Having that lock depend on the press of a button is another story; that would have to be done in code, but it would not be too complicated. Assuming you have outlets connected in IB and declared in the controller:
- (IBAction)lockSliders:(id)sender {
    // Send actions continuously while dragging, and sync the values once.
    [slider1 setContinuous:YES];
    [slider2 setContinuous:YES];
    [slider1 takeDoubleValueFrom:slider2]; // or takeIntegerValueFrom:/takeFloatValueFrom:
    // Point each slider's action at the other so every change is mirrored.
    [slider1 setTarget:slider2];
    [slider1 setAction:@selector(takeDoubleValueFrom:)];
    [slider2 setTarget:slider1];
    [slider2 setAction:@selector(takeDoubleValueFrom:)];
}
I would like to build something similar to the code-completion feature in Xcode 4. (The visual style and behavior, not the data structure type work required for code completion).
As the user is typing, a pop-up window presents other word choices that can be selected.
The feature in action: (screenshot of Xcode's code-completion popup)
I'm not exactly sure where to start. I am mainly concerned with the visual appearance of the window and how I should populate the list with a given set of words. Later I will get into making the window follow the cursor around the screen, etc.
I am mainly looking for an overview of how to display such data in a "window", and how to customize the appearance of the thing so it looks like a nice little informational popup rather than a full-on OS X window.
Just add a subview to your current view that happens to be a table view. Programmatically make it visible on an event (such as mouseDown), and adjust its position based on where you want it. You will need to implement the proper delegate/datasource methods, but it should be pretty straightforward. You will also need a source for the words you want to use in your autocomplete; put them in an array or something for your tableview datasource to go through.
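For instance, the datasource side might look like this, assuming a cell-based table and an NSArray named completionWords held by your controller:
// Assumed: completionWords is an NSArray of NSStrings owned by the controller.
- (NSInteger)numberOfRowsInTableView:(NSTableView *)tableView {
    return [completionWords count];
}

- (id)tableView:(NSTableView *)tableView objectValueForTableColumn:(NSTableColumn *)tableColumn row:(NSInteger)row {
    return [completionWords objectAtIndex:row];
}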
Like I said, it's not terribly hard, provided you are comfortable using table views and adding views to your existing view. If this doesn't explain enough, leave a comment and I can flesh this answer out more.
Add a (completions) subview to your view and set its hidden property to YES. Create a separate AutoComplete object that has the subview as a property and fills it with potential completions. Your controller can respond to key-typed events and hand the last word (the substring from the first preceding blank to the end of the text) to the AutoComplete on each event. The basic logic in AutoComplete could be something like:
Given an AutoComplete with a list of known words "dog, spaghetti, minute, horse, spare, speed"
When asked to complete a fragment "sp"
Then the following words should be offered as potential completions: "spaghetti, spare, speed"
This implies that you need to instantiate it with a list of words (this could be done in the init of your controller) and create a method "-(NSArray*)completeFragment:(NSString*)fragment;". You could drop this into an OCUnit test case and iterate on your implementation of AutoComplete until you get it right. You would then write a method that takes the completions from AutoComplete and lists them in the subview. Even better, you might create currentWord and potentialCompletions properties in AutoComplete that get updated as you send it "newFragment:(NSString*)fragment;" messages. Throw that into OCUnit and work it out, then use that potentialCompletions property to drive updates of the subview (which is probably best modeled as a custom table view).
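A minimal sketch of that completeFragment: method, assuming case-insensitive prefix matching is what you want:
@implementation AutoComplete
// knownWords is an NSArray of NSStrings handed in at init time; see the spec above.
- (NSArray *)completeFragment:(NSString *)fragment {
    NSPredicate *prefixMatch = [NSPredicate predicateWithFormat:@"SELF BEGINSWITH[cd] %@", fragment];
    return [knownWords filteredArrayUsingPredicate:prefixMatch];
}
@end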
My app requires an interface with many buttons, text fields, and matrices, and they need to change from time to time. Right now I handle this by having all the elements in IB already and hiding/showing/moving them when needed. What would others recommend? Should I keep doing that? Should I use an NSTabView? An NSView? Should I create the elements programmatically? If so, what about an element that is already created and that I need again without changes? It would be a waste to release it and create it again.
Any help would be greatly appreciated.
In my opinion, it's better to create interfaces programmatically if you have to animate views around a lot. If it's just a matter of hiding/unhiding them, IB works great; but if you need to re-lay out views or create unknown numbers of views dynamically, it's not worth trying to make it all work with nib files.
As for general advice:
- Create subclasses (of UIView or UIControl or one of their subclasses) for every kind of element you're going to use. It's tempting to piece together composite views from your UIViewController, but you'll really be much better off creating real classes.
- Study the standard Cocoa view classes, and try to create similar APIs in your own controls and views.
- Put as much data (sub-element positioning, etc.) into a plist, so that you can easily change it in one centralized place instead of having to dig around in the code.
- If you are often creating several dozen short-lived views, it's worth keeping them in a pool and reusing them; see the sketch below. But if it's just a few labels being added and removed intermittently, I wouldn't worry too much about it. As usual: don't optimize too early.
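On that last pooling point, here's a minimal sketch, assuming the pooled views are UILabels and that labelPool is an NSMutableArray owned by the view controller; all names are illustrative:
- (UILabel *)dequeueLabel {
    // Reuse a pooled label if one is available...
    UILabel *label = [[[labelPool lastObject] retain] autorelease];
    if (label) {
        [labelPool removeLastObject];
        return label;
    }
    // ...otherwise create a fresh one.
    return [[[UILabel alloc] initWithFrame:CGRectZero] autorelease];
}

- (void)recycleLabel:(UILabel *)label {
    [label removeFromSuperview];
    [labelPool addObject:label]; // keep it around for reuse instead of releasing
}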
Your current approach sounds fine. If you're showing/hiding them but otherwise they remain unchanged, why go through the trouble of creating them with code, when your XIB keeps a "freeze-dried" copy of exactly what you need already?
As long as you're keeping them within logical groups, you can just move/swap/show/hide the group's container (like NSBox or an NSView). If you have a LOT of logical groups, which aren't always shown every session, you can separate them out into their own XIBs and only load them when they're needed, to save launch time and memory.
If you use NSViewController, it's even better, because you can make a clean break for each logical group: load the panel as the view controller's view, and the view controller keeps the outlets/actions and has a one-to-one relationship with its xib.
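For example, a hedged sketch of loading one logical group on demand (the nib name InspectorPanel and the containerView outlet are assumptions for illustration):
// Each logical group lives in its own xib and is loaded only when first needed.
NSViewController *inspectorController = [[NSViewController alloc] initWithNibName:@"InspectorPanel" bundle:nil];
[containerView addSubview:[inspectorController view]]; // -view loads the xib lazily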
I'm trying to re-implement an old Reversi board game I wrote with a bit more of a snazzy UI. I've looked at Jens Alfke's GeekGameBoard code for inspiration, and CALayers looks like the way to go for implementing the UI.
However, there is no clean separation of model and view in the GeekGameBoard code; the model is the view, which makes it hard to, for example, make a copy of the game state in order to perform game-tree search for the AI player. At the same time, I can't seem to come up with an alternative structure that separates model and view without a constant battle to keep two parallel grids (one for the model, one for the view) in sync, which of course has its own problems.
How do I best implement the relationship between an AI-search-friendly model structure and a display-friendly view? Any suggestions or experiences would be appreciated. I'm dreading (half expecting) an answer along the lines of "there is no good answer: deal with it as best you can", but I'm prepared to be surprised!
Thanks for the answer Peter. I'm not entirely sure I understand it fully, however. I can see how this works if you just have an initial set of pieces that are moved around, and even removed, but what happens when a person puts a new piece down? Would it work like this:
1. User clicks in the view.
2. The click is translated to a board location, and the controller is notified.
3. The controller creates a new Board with the successor state (if appropriate, i.e. it was a legal move).
4. The view picks up the new board via its bindings, tears down the existing view/layer hierarchy, and rebuilds it for the current state.
Does that sound right?
PS: Sorry for failing to specify whether it was for the iPhone or Mac. I'm most interested in something that works for the iPhone, but if I can get it to work nicely on the Mac first I'm sure I can adapt the solution to work on the iPhone myself. (Or post a new question!)
In theory, it should be the same as for an NSView-based UI: Add a model property (or properties), expose it (or them) as bindings, then bind the view (layer) to the model through a controller.
For example, you might have a Board class with Pieces on it (each Piece having a reference to the Player who owns it), with all of those being model classes. Your controller would own a Board, and your view/layer would be able to display a Board, possibly with a subview/sublayer for each Piece.
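Concretely, the model classes might be sketched like this (Player is assumed to be another simple model class; everything here is illustrative):
@class Player;

// Pure model objects: no views or layers, so they're cheap to copy
// for game-tree search.
@interface Piece : NSObject
@property (nonatomic, retain) Player *owner;
@property (nonatomic, assign) NSInteger row;
@property (nonatomic, assign) NSInteger column;
@end

@interface Board : NSObject
@property (nonatomic, retain) NSArray *pieces; // the Piece objects in play
@end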
You'd bind your board view/layer to the controller's board property, and in your view/layer's setter for that property, create a subview/sublayer for each piece, and bind it to any properties of the Piece that it will need. (Don't forget to unbind and remove all the subviews/sublayers when replacing the main view/layer's Board.)
When you want to move or modify a Piece, you'd do so using its own properties; these will translate into property accesses on the view/layer. Presumably, you'll have your layer's properties set up to animate changes (so that, for example, changing a Piece's position will cause its layer to move accordingly).
The same goes for the Board. You might let the user change one or both tile colors; you'll bind your color well(s) through your game controller to its Board object, and with the view/layer bound to the same property of the same Board, it'll pick up the change automatically.
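In code, that binding might be established like this (assuming the controller exposes a board property and Board has a KVO-compliant tileColor property):
// Bind the color well through the game controller to the board's tile color;
// a view/layer bound to the same key path picks up changes automatically.
[tileColorWell bind:NSValueBinding
           toObject:gameController
        withKeyPath:@"board.tileColor"
            options:nil];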
Disclaimers: I've never used Core Animation for anything, and if you're asking about Cocoa Touch instead of Cocoa, the above solution won't work, since it depends on Cocoa Bindings.
I have an iPhone application where almost all of the interface is constructed using Core Animation CALayers, and I use a very similar pattern to what Peter describes. He's correct in that you want to treat your CALayers as if they were NSViews / UIViews and manage their logic through controllers and data via model objects.
In my case, I create a hierarchy of controller objects which also function as model objects (I may refactor to split out the model components). Each of the controller objects manages a CALayer, so there ends up being a parallel CALayer display hierarchy to the model-controller one. For my application, I need to perform calculations for equations constructed using this hierarchy, so I use the controllers to provide calculated values from the bottom of the tree up. The controllers also handle user editing events, such as the insertion of new suboperations or deletion of operation trees.
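In sketch form, each controller owns its layer and mirrors structural changes into it (the class and method names here are hypothetical):
@interface OperationController : NSObject {
    CALayer *layer;                 // the layer this controller manages
    NSMutableArray *subcontrollers; // mirrors the layer's sublayer hierarchy
}
- (CALayer *)layer;
- (void)addSuboperation:(OperationController *)child;
@end

@implementation OperationController
- (CALayer *)layer {
    return layer;
}

- (void)addSuboperation:(OperationController *)child {
    [subcontrollers addObject:child];
    [layer addSublayer:[child layer]]; // keep both hierarchies in step
}
@end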
I've created a layer-hosting view class that allows the CALayer tree to respond to touch or mouse events (the source of which is now available within the Core Plot project). For your boardgame example, the CALayer pieces could take in the touch events, and have their controllers manage the back-end logic (determine a legal move, etc.). You should just be able to move pieces around and maintain the same controllers without tearing everything down on every move.