Hooking up Chipmunk bodies to UIKit components?

I'm trying to get to grips with using Chipmunk (not the Obj-C version) with UIKit components on iOS, and I'm still struggling immensely.
I'm trying to establish how, in the ChipmunkColorMatch example in the documentation, the UIButton instances are actually hooked up to any of the physics calculations. I see that the UIButtons are created inside the Ball class, and some of their properties (type, image, etc.) are set, but I'm not understanding where the cpBody or cpShape (whichever it is) is actually attached to that UIButton. I assume it needs to be, or else none of the physics will be reflected in the UI.
I've looked at the SimpleObjectiveChipmunk tutorial on the website too, but because it uses libraries unavailable to me (the Obj-C libraries), I can't establish how it works there either. Again, I see a UIButton being created and positioned on-screen, but I don't see how the cpBody (or in that case, ChipmunkBody) is linked to the button in any way.
Could anyone shed some light on how this works? Effectively what I'm going to need are some UIButton instances which can be flicked around, but I've not even got as far as working out how to create forces yet, since I can't get the bodies hooked up to the buttons.
Much obliged, thanks in advance.
EDIT: I should also point out that I am not using, and do not want to use, cocos2d in this project at all. I've seen tutorials using it, but that's a third layer of confusion to add in. Thanks!

Assuming this source is the project you're asking about, it looks like the magic happens in Ball's sync method -- it creates a CGAffineTransform representing the translation and rotation determined by the physics engine, and applies that to the button.
In turn, that method is called by the view controller's draw: method, which is timed to occur on every frame using CADisplayLink, and updates the physics engine before telling each Ball to sync.
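In rough outline, the moving parts look something like this. This is a sketch, not the project's exact source: the ivar names are invented, and the C function names follow the Chipmunk 6 API (newer versions use cpBodyGetPosition instead of cpBodyGetPos).

```objc
// In Ball: nothing "attaches" the cpBody to the UIButton. Each frame,
// the body's simulated state is read out and written into the button.
- (void)sync {
    cpVect pos = cpBodyGetPos(_body);       // position from the simulation
    _button.transform = CGAffineTransformMakeRotation(cpBodyGetAngle(_body));
    _button.center = CGPointMake(pos.x, pos.y);   // move the button to match
}

// In the view controller: a CADisplayLink fires once per screen refresh.
- (void)viewDidLoad {
    [super viewDidLoad];
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(draw:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)draw:(CADisplayLink *)link {
    cpSpaceStep(_space, 1.0/60.0);            // advance the physics simulation
    for (Ball *ball in _balls) [ball sync];   // push new state into the buttons
}
```

So the "hookup" is just this per-frame copy from the physics world into the button's geometry; forces you apply to the cpBody will show up in the UI automatically.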

NSSegmentedCell Subclass and Custom Geometry/Layout Impossible?

A Tale of Two Subclasses
By Ben Stock
Prologue
I'm in the process of making a really nice-looking set of controls which automatically change their appearance depending on the type of window they're used in (e.g. if you drop a button in a normal window, it looks like any other standard Aqua button; if you drop it on an NSPanel with a window mask of NSHUDWindowMask, however, it'll automatically switch its style to look good on a HUD background). So far, I've subclassed NSButton, NSTextField, NSSlider, and NSSearchField. Last night I started on NSTabView, only to be slammed down by its lack of customizability. It's a real pain in the ass, but I'm a developer, so I'm used to finding my own way. The first thing I thought to do was add an instance of NSSegmentedControl in place of the private tabs used by NSTabView. So far, so good: I've got the buttons selectable, they automatically update when new NSTabViewItems are added, and they work just like the real thing.
And the Pain Begins …
Finally, I start to style my segments, and … WTF have I gotten myself into‽ I should've just gone into acting or something. Objective-C development is slowly taking years off my life. No matter what I do, the "tracking areas" used by NSSegmentedCell don't seem to be updating when my segment widths change. So when my widths change, my artwork does, too; however, the actual tracking areas don't update (even when I override -updateTrackingAreas). It's really hard to explain, so I decided to draw my segment rectangles behind and in front of the ones drawn by super in -drawSegment:inFrame:withView:. Here's a screenshot with my art drawn on top of the underlying tracking areas:
And here's super's implementation above my segment rects:
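For reference, the diagnostic override looks roughly like this (-myRectForSegment: is my own geometry helper, not AppKit API; it returns where I think the segment should be, so I can compare that against the frame super actually uses):

```objc
// In my NSSegmentedCell subclass: fill my idea of the segment rect
// semi-transparently, then let super draw its version for comparison.
- (void)drawSegment:(NSInteger)segment inFrame:(NSRect)frame withView:(NSView *)controlView {
    [[NSColor colorWithCalibratedRed:1.0 green:0.0 blue:0.0 alpha:0.3] set];
    NSRectFillUsingOperation([self myRectForSegment:segment], NSCompositeSourceOver);
    [super drawSegment:segment inFrame:frame withView:controlView]; // super's idea of the segment
}
```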
I've tried overriding everything I can think of. Here are a few of the methods I've overridden (and un-overridden):
-cellSize (NSSegmentedCell)
-cellSizeForBounds: (NSSegmentedCell)
-sizeToFit (NSSegmentedControl)
-intrinsicContentSize (NSSegmentedControl)
-setWidth:forSegment: (NSSegmentedControl/Cell)
-startTrackingAt:inView: (NSSegmentedCell)
-continueTracking:at:inView: (NSSegmentedCell)
-stopTracking:at:inView:mouseIsUp: (NSSegmentedCell)
At this point, some of those methods in the above list are still using my overrides and some aren't. I've mixed and matched, deleted, simplified, rewritten, and refactored, and no matter what I do, the underlying rectangles don't change. I love Apple as much as the next guy, but their view of customization needs to change. I can't stand not being able to understand what's going on in the implementation of all these stupid controls. Not to mention the fact that I still can't fully wrap my head around Auto Layout (which is about the most un-"auto" thing I've ever dealt with), but that's a post for another day. Anyway, if anybody could help a brotha out, I'd be super grateful. Sorry for ranting, and thanks for reading!
P.S. None of these things are finished, so please don't be too hard on a few pixel imperfections. ;-)

Using Opacity-generated layers in Quartz 2D instead of drawing to the view's layer

Firstly, be gentle with me. I'm new to Objective-C and iOS programming.
I've just purchased a copy of Opacity and I'm playing around with layers. My only grade-A success so far has been designing and successfully generating my own calculator, so I haven't got that far!
Specifically, I want to know how to use Opacity CALayer subclasses to 'replace' the backing layer on a UIView. Adding sublayers to the existing view layer is obvious enough, and I've managed to get that working, but how do I use an Opacity CALayer subclass to provide the initial content of the view's backing layer? I've tried copying the drawInContext: code from an Opacity subclass into drawRect: on the UIView subclass that hosts the proxy view for the controller's 'view' property, but (for some reason I can't fathom) that didn't work.
I assume that the view's original backing layer is item '0' in the layer array? My ultimate goal is to have just my layers behind the view without having to 'ignore' the original one.
If there's a chance that Dr Brad Larson might read this…
I've watched your Quartz 2D and animation videos over and over, and I've read all of the copies of your course notes that I can lay my hands on, but all the examples I've found start with an image already in the hosting view, onto which more layers are added, which isn't really what I'm after. I've also read the Big Nerd Ranch Guide and got all of Conway and Hillegass's example code too, but (I'll be darned) they also start with an image already in the view!
If any one can help me out - just point me at the relevant documentation, please don't bother writing huge tranches of code here - I'd be seriously grateful.
VV.
PS: I'm deliberately NOT using IB yet, as I want to grip the underlying mechanics of Cocoa Touch first, and I won't use dot notation on principle :-). (See my profile!)
The trick is to override the view's class method 'layerClass' to be the base layer sub-class that's needed before nailing other layers into the hierarchy. This builds the view's base (implicit) layer when the view is instantiated and then I can slap my other layers down afterwards.
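In sketch form (the layer class name here stands in for whatever CALayer subclass Opacity actually generated):

```objc
// With +layerClass overridden, the view's implicit backing layer is
// created as the custom subclass from the moment the view exists.
@interface CalculatorView : UIView
@end

@implementation CalculatorView
+ (Class)layerClass {
    return [OpacityExportedLayer class];   // hypothetical Opacity-generated class
}
@end
```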
This is a fun game. I'll get the mind-set soon!
This technique differs markedly from the same requirement with NSView, which has a 'setLayer:' instance method that can be used to change the implicit layer AFTER instantiation, an expensive procedure not offered on a lightweight object like a UIView.
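For contrast, the AppKit version of that after-the-fact swap looks like this (again assuming the Opacity-exported class name; the order matters, layer first, then layer backing):

```objc
// Layer-hosting NSView: assign the custom layer, then turn on layer backing.
[someNSView setLayer:[OpacityExportedLayer layer]];
[someNSView setWantsLayer:YES];
```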

Making classes work together in obj-C

I'm writing a program for iPhone that will first let the user take a photo, then will dynamically retrieve the colour at the point where the user taps on the image, and draw a rectangle of that colour. I have two relevant classes for this: AppViewController and AppView. The former contains all the UI elements and IBActions; the latter contains the position of the last tap, the touch-handling methods, and drawRect: (plus a static method to get colour data at given coords of an image).
What I wanted to do was put the touch handling (triggering a redraw from touchesMoved/touchesEnded) and drawRect: in the AppViewController. That doesn't work, since that class inherits from UIViewController, not UIView. What's the correct way to do this?
Another way to phrase it: how do I continuously change something (well, continuously for as long as the user is swiping across the screen) from a class that doesn't receive the touch-detection methods?
(This probably doesn't explain it well. Please ask clarifying questions).
I think the delegate pattern might be helpful to you in this situation. You could call your delegate's shouldUpdateRectangle selector in touchesMoved/Ended.
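A sketch of that idea; the protocol and method names here are invented for illustration. AppView detects the touches, and its delegate (typically the AppViewController) reacts:

```objc
// The view reports taps/swipes to a delegate instead of handling the logic itself.
@protocol AppViewDelegate
- (void)appView:(id)appView shouldUpdateRectangleAtPoint:(CGPoint)point;
@end

@interface AppView : UIView {
    id<AppViewDelegate> delegate;
}
@property (nonatomic, assign) id<AppViewDelegate> delegate;
@end

@implementation AppView
@synthesize delegate;

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    [delegate appView:self shouldUpdateRectangleAtPoint:p];
    [self setNeedsDisplay];   // schedules drawRect: rather than calling it directly
}
@end
```

The controller adopts AppViewDelegate, fetches the colour at the reported point, and updates whatever state the view's drawRect: reads.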
There is not really a correct way to move the view's behaviors into a view controller, since that is not the way the classes are meant to be used. You should probably look at what's driving you to try to subvert the framework's design this way, because that is likely going to be easier to fix.
It's not uncommon, though, for a view to call out to a supporting class for help in this stuff. You could certainly have your view's drawRect: call methods in the view controller, though I would be careful about mixing their concerns too much, because it could get hard to figure out who's responsible for what.

Fitting Cocoa Animation into MVC/OOP patterns

MVC/OOP design patterns say you don't set a property, per se; you ask an object to set its property. Similarly, in Cocoa you don't tell an object when to draw itself. Your object's code details HOW it will draw itself, and we trust the frameworks to decide when (for the most part) it should draw.
But when it comes to animation in Cocoa (specifically Cocoa Touch), it seems that we must now take control of when the object draws itself from within the object's view controller. I can't send a message to a UIView subclass asking it to change some value and then leave it alone, knowing it will slowly (duration = X) animate itself to a new position, alpha, rotation, etc. depending on the property changes. Or can I?
Basically, I'm looking for a way to set the property and then walk away. Instead, it seems, I need to wrap the code that calls the object asking it to change its property with an animation block of some sort "[UIView beginAnimations:nil context:NULL]; ... [UIView commitAnimations];"
I'm ending up with lots and lots of animation blocks in my view controllers and none in my view objects. I guess I'm just looking for someone to verify that this is how things are done and that I'm not overlooking something. I haven't gotten much farther than the UIView animations within Cocoa Touch, so maybe that's my problem and it's time to dig deeper?!
You are correct that UIView does not animate its property changes by default the way CALayer does, but I don't think this indicates a break in MVC. It is appropriate for a Controller to instruct a View in how it should transform. That is the role of a Controller class as surely as it is appropriate for the Controller to know the correct frame for the View and even manage layout. I agree that it's a little weird that you call -beginAnimations:context: on the UIView class rather than on an instance, but in practice it does actually work much better that way since you may want to animate many views together.
That said, if you had a UIView subclass that managed the layout of its subviews, there would be nothing wrong with allowing that UIView to manage the animation rather than relying on a UIViewController to do it. So this is something that could go either place, but in practice it generally goes in the Controller as you've discovered.
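If you want the "set it and walk away" feel the question asks for, one option is to wrap the animation block in the view's own API so the controller makes a single call. A sketch; the category and method name are invented:

```objc
// A UIView category that animates its own property change.
@implementation UIView (AnimatedSetters)
- (void)setCenterAnimated:(CGPoint)newCenter {
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:0.5];
    self.center = newCenter;       // animated by the surrounding block
    [UIView commitAnimations];
}
@end
```

The controller then just calls [someView setCenterAnimated:p] and moves on, and the animation blocks live with the view rather than piling up in the controller.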
I am using "MVC" here in the typical Cocoa sense. You're correct that this might not be appropriate in a SmallTalk program, but then SmallTalk Controllers have a much more limited role (management of user input events). Cocoa significantly expands the role of Controllers in MVC and I think it's an improvement, even if it means there are now some functions that could go in either the Controller or the View (and this is one of them).

How to keep model & controller separate from a CALayer based UI?

I'm trying to re-implement an old Reversi board game I wrote with a bit more of a snazzy UI. I've looked at Jens Alfke's GeekGameBoard code for inspiration, and CALayers look like the way to go for implementing the UI.
However, there is no clean separation of model and view in the GeekGameBoard code; the model is the view, which makes it hard to, for example, make a copy of the game state in order to perform game-tree search for the AI player. On the other hand, I can't seem to come up with an alternative structure that separates model and view without a constant battle to keep two parallel grids (one for the model, one for the view) in sync. This, of course, has its own problems.
How do I best implement the relationship between an AI-search-friendly model structure and a display-friendly view? Any suggestions / experiences would be appreciated. I'm dreading / half expecting an answer along the lines of "there is no good answer: deal with it as best you can", but I'm prepared to be surprised!
Thanks for the answer Peter. I'm not entirely sure I understand it fully, however. I can see how this works if you just have an initial set of pieces that are moved around, and even removed, but what happens when a person puts a new piece down? Would it work like this:
1. User clicks in the view.
2. The click is translated to a board location and the controller is notified.
3. The controller creates a new Board with the successor state (if appropriate, i.e. it was a legal move).
4. The view picks up the new board via its bindings, tears down the existing view/layer hierarchy, and replaces it with the current state.
Does that sound right?
PS: Sorry for failing to specify whether it was for the iPhone or Mac. I'm most interested in something that works for the iPhone, but if I can get it to work nicely on the Mac first I'm sure I can adapt the solution to work on the iPhone myself. (Or post a new question!)
In theory, it should be the same as for an NSView-based UI: Add a model property (or properties), expose it (or them) as bindings, then bind the view (layer) to the model through a controller.
For example, you might have a Board class with Pieces on it (each Piece having a reference to the Player who owns it), with all of those being model classes. Your controller would own a Board, and your view/layer would be able to display a Board, possibly with a subview/sublayer for each Piece.
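A sketch of the model half of that arrangement, with class and property names taken from the terminology above (Player is assumed to be defined elsewhere):

```objc
// Pure model classes: no view or layer code anywhere in here.
@class Player;

@interface Piece : NSObject
@property (nonatomic, retain) Player *owner;     // who owns this piece
@property (nonatomic, assign) NSPoint position;  // board coordinates
@end

@interface Board : NSObject
@property (nonatomic, copy) NSArray *pieces;     // of Piece
@end
```

Because these are plain KVO-compliant properties, they can be exposed through bindings, and copying a Board for game-tree search never touches the view.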
You'd bind your board view/layer to the controller's board property, and in your view/layer's setter for that property, create a subview/sublayer for each piece, and bind it to any properties of the Piece that it will need. (Don't forget to unbind and remove all the subviews/sublayers when replacing the main view/layer's Board.)
When you want to move or modify a Piece, you'd do so using its own properties; these will translate to property accesses on the view/layer. Presumably, you'll have your layer's properties set up to animate changes (so that, for example, changing a Piece's position will cause its layer to move accordingly).
The same goes for the Board. You might let the user change one or both tile colors; you'll bind your color well(s) through your game controller to its Board object, and with the view/layer bound to the same property of the same Board, it'll pick up the change automatically.
Disclaimers: I've never used Core Animation for anything, and if you're asking about Cocoa Touch instead of Cocoa, the above solution won't work, since it depends on Cocoa Bindings.
I have an iPhone application where almost all of the interface is constructed using Core Animation CALayers, and I use a very similar pattern to what Peter describes. He's correct in that you want to treat your CALayers as if they were NSViews / UIViews and manage their logic through controllers and data via model objects.
In my case, I create a hierarchy of controller objects which also function as model objects (I may refactor to split out the model components). Each of the controller objects manages a CALayer, so there ends up being a parallel CALayer display hierarchy to the model-controller one. For my application, I need to perform calculations for equations constructed using this hierarchy, so I use the controllers to provide calculated values from the bottom of the tree up. The controllers also handle user editing events, such as the insertion of new suboperations or deletion of operation trees.
I've created a layer-hosting view class that allows the CALayer tree to respond to touch or mouse events (the source of which is now available within the Core Plot project). For your board game example, the CALayer pieces could take in the touch events and have their controllers manage the back-end logic (determine a legal move, etc.). You should be able to just move pieces around and keep the same controllers, without tearing everything down on every move.
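A rough sketch of the touch-routing idea (the real Core Plot implementation does considerably more). The point from the view is already in the backing layer's own coordinate space, so it gets converted into each sublayer before hit-testing:

```objc
// A UIView whose sublayers act as the interactive pieces.
@interface LayerHostingView : UIView
@end

@implementation LayerHostingView
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    for (CALayer *sublayer in self.layer.sublayers) {
        if ([sublayer containsPoint:[self.layer convertPoint:p toLayer:sublayer]]) {
            // Hand the event to whichever controller manages this sublayer,
            // e.g. for move validation in the board-game case.
        }
    }
}
@end
```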