Fitting Cocoa Animation into MVC/OOP patterns - cocoa-touch

MVC/OOP design patterns say you don't set a property, per se; you ask an object to set its property. Similarly, in Cocoa you don't tell an object when to draw itself. Your object's code details HOW it will draw itself, so we trust the frameworks to decide when (for the most part) it should draw.
But when it comes to animation in Cocoa (specifically Cocoa Touch), it seems that we now must take control of when the object draws itself from within the object's view controller. I can't send a message to a UIView subclass asking it to change some value and then leave it alone, knowing it will slowly (duration = X) animate itself to a new position, alpha, rotation, etc., depending on the property changes. Or can I?
Basically, I'm looking for a way to set the property and then walk away. Instead, it seems, I need to wrap the code that calls the object asking it to change its property with an animation block of some sort "[UIView beginAnimations:nil context:NULL]; ... [UIView commitAnimations];"
I'm ending up with lots and lots of animation blocks in my view controllers and none in my view objects... I guess I'm just looking for someone to verify that this is how things are done and that I'm not overlooking something. I haven't gotten much farther than the UIView animations within Cocoa Touch, so maybe that's my problem and it's time to dig deeper?

You are correct that UIView does not animate its property changes by default the way CALayer does, but I don't think this indicates a break in MVC. It is appropriate for a Controller to instruct a View in how it should transform. That is the role of a Controller class as surely as it is appropriate for the Controller to know the correct frame for the View and even manage layout. I agree that it's a little weird that you call -beginAnimations:context: on the UIView class rather than on an instance, but in practice it does actually work much better that way since you may want to animate many views together.
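For concreteness, the controller-side pattern under discussion looks something like this minimal sketch (slideViewAside and movableView are hypothetical names; any animatable UIView property works):

    // In the view controller: a sketch of the classic begin/commit pattern.
    - (void)slideViewAside
    {
        [UIView beginAnimations:@"slideAside" context:NULL];
        [UIView setAnimationDuration:0.5];           // the "duration = X" from the question
        self.movableView.center = CGPointMake(200.0, 100.0);
        self.movableView.alpha  = 0.5;
        [UIView commitAnimations];                   // property changes above now animate
    }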
That said, if you had a UIView subclass that managed the layout of its subviews, there would be nothing wrong with allowing that UIView to manage the animation rather than relying on a UIViewController to do it. So this is something that could go either place, but in practice it generally goes in the Controller as you've discovered.
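As a sketch of that alternative (DrawerView, its property, and the sizes are all made up for illustration), the view's own setter can hide the animation block, so a controller really can just set the property and walk away:

    // Hypothetical UIView subclass that animates its own geometry change.
    @interface DrawerView : UIView
    @property (nonatomic, assign) BOOL expanded;
    @end

    @implementation DrawerView
    // Custom setter; the compiler still synthesizes the _expanded ivar and getter.
    - (void)setExpanded:(BOOL)expanded
    {
        _expanded = expanded;
        [UIView beginAnimations:@"drawer" context:NULL];
        [UIView setAnimationDuration:0.25];
        CGRect frame = self.frame;
        frame.size.height = expanded ? 300.0 : 44.0;
        self.frame = frame;                      // animates from here on
        [UIView commitAnimations];
    }
    @end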
I am using "MVC" here in the typical Cocoa sense. You're correct that this might not be appropriate in a SmallTalk program, but then SmallTalk Controllers have a much more limited role (management of user input events). Cocoa significantly expands the role of Controllers in MVC and I think it's an improvement, even if it means there are now some functions that could go in either the Controller or the View (and this is one of them).

Related

Is it better to lay out subviews in the layout/layoutSubviews method?

This question is for both cocoa and cocoa touch. But I'll write an example just for cocoa.
As I understand it, I can set needsLayout to YES multiple times in a cycle and -layout will be called just once. But are there any other benefits of laying out subviews in the -layout method?
Explanation / example: At the moment I'm laying out my subviews in a custom view controller (which has a default NSView) every time I call a custom redraw method. And I call the redraw method only when the user changes some properties, so at that point I really do want to re-lay out the subviews.
There are plenty of external circumstances not under your direct control that might cause the system to want to lay out your views. For example, device rotation or incoming calls on iOS, or window resizing on OS X. If you have your layout logic in the standard places, then your code accommodates these without any additional effort, and in the places your internal state changes, you can request such a layout explicitly.
To turn your question around: is there a significant benefit to not doing your layout in the standard way? Do you believe that this will be a performance issue? Have you measured it to see whether it is actually a performance issue?
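To make the standard way concrete, here's a minimal iOS-flavoured sketch (on the Mac you'd override -layout instead; BadgeView and countLabel are hypothetical names):

    // Hypothetical custom view: geometry lives in -layoutSubviews, so rotation,
    // resizing, and explicit invalidation all funnel through one code path.
    @interface BadgeView : UIView
    @property (nonatomic, strong) UILabel *countLabel;  // created in the initializer
    @end

    @implementation BadgeView
    - (void)layoutSubviews
    {
        [super layoutSubviews];
        // One place computes frames, whatever triggered the layout pass.
        self.countLabel.frame = CGRectInset(self.bounds, 4.0, 4.0);
    }
    @end

    // Elsewhere, when a property changes that affects layout:
    // [badgeView setNeedsLayout];  // coalesced; -layoutSubviews runs once per cycle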

Best practices when using drawRect

I have recently started creating my own controls and I seem to have a bit of trouble understanding how I should use drawRect.
Basically I have 3 Questions.
Is it a good idea to have conditional drawRect's? i.e. different drawing code based on properties or instance variables.
What is the best method for animating changes to the drawRect's drawing? For example, a fuel gauge control with animated fill and un-fill.
And, finally, the examples I have seen for animating with drawRect tend to use timers; is that really a good method in practice? It seems like heavier apps would have issues with that method.
I guess a 4th would be, is there, perhaps, a better place to do this kind of stuff?
Is it a good idea to have conditional drawRect's? i.e. different drawing code based on properties or instance variables.
Sure, why not? If your drawRect: method becomes unwieldy, you could split it into multiple methods that you then call from drawRect: depending on the properties of your view. E.g. you could have methods like drawBackground, drawTitle, etc.
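A minimal sketch of that split (showsTitle and the draw… helper names are hypothetical):

    // drawRect: stays readable by dispatching to helpers based on view state.
    - (void)drawRect:(CGRect)rect
    {
        [self drawBackground];
        if (self.showsTitle) {        // hypothetical BOOL property on the view
            [self drawTitle];
        }
    }

    - (void)drawBackground
    {
        [[UIColor darkGrayColor] setFill];
        UIRectFill(self.bounds);
    }

    - (void)drawTitle
    {
        // ... text drawing for the title goes here ...
    }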
What is the best method for animating changes to the drawRect's drawing? For example, a fuel gauge control with animated fill and un-fill.
That depends. For very small views, you could call setNeedsDisplay from a timer, but for larger views, you'll often run into performance issues with this approach.
Animating changes is often better done by compositing your view out of multiple subviews or layers that can be animated with Core Animation (or the simplified UIView animation methods).
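For instance, a hedged sketch of the fuel gauge built that way (fillLayer is a hypothetical standalone CALayer added as a sublayer in the view's initializer):

    #import <QuartzCore/QuartzCore.h>

    // Inside the gauge view: the fill is a plain sublayer, so changing its
    // frame animates implicitly (standalone layers animate position/bounds
    // changes by default) with no timers and no repeated drawRect: calls.
    - (void)setFuelLevel:(CGFloat)level   // 0.0 .. 1.0
    {
        CGRect fill = self.bounds;
        fill.size.width *= level;
        [CATransaction begin];
        [CATransaction setAnimationDuration:0.4];
        self.fillLayer.frame = fill;
        [CATransaction commit];
    }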

Registering all view controllers for NSNotifications

I have a custom graphic that is to be displayed to a user when an event occurs. The graphic needs to be displayed on whichever viewController is currently being displayed to the user.
The way I have programmed it so far is by adding to ALL view controllers:
1) an import of the .h file for the custom graphic class
2) an observer for the NSNotification event that is raised
3) the method which actually draws the graphic.
This doesn't feel like a very efficient way of doing things, and I was wondering if anyone has a better approach?
To me it sounds like you've done it in a fairly sane way. The only other way I can think of is to add the graphic to the window, which would then overlay the current view controller, so you'd only need one object listening for the notification. You could use the app delegate, for instance. But then you would have to handle rotation of the screen yourself when adding the graphic over the top.
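A rough sketch of that approach, assuming a window property on the app delegate (CustomGraphicView, the notification name, and the handler are hypothetical):

    // In the app delegate: the single observer for the whole app.
    - (void)eventOccurred:(NSNotification *)note
    {
        UIView *graphic = [[CustomGraphicView alloc]
                              initWithFrame:CGRectMake(0.0, 0.0, 100.0, 100.0)];
        [self.window addSubview:graphic];  // overlays whatever controller is showing
        // Caveat from above: the window doesn't auto-rotate its direct subviews,
        // so you'd need to apply a transform for the current orientation.
    }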
What you are doing is correct. The only thing you can improve is to move the graphics-drawing part into the custom graphic class (if you are not already doing so). Just give the graphics class a UIViewController member variable, set it to the currently displayed view controller after you receive the notification, and the class will then draw itself based on the view controller you set.
The reason it doesn't feel efficient is that you're duplicating a lot of code. That's more work at the outset, and it creates a maintenance headache. You should be taking advantage of the inheritance that's built into object-oriented languages, including Objective-C.
If you want all your view controllers to share some behavior, then implement that behavior in a common superclass. Derive all your other view controllers from that superclass, and they'll all automatically get the desired behavior. Your superclass's initializer can take care of registering the view controller for the notification(s) that you care about, and -dealloc can unregister it. This way, you don't have to clutter up each view controller with the same repeated code, and if you want to change the code you only have to do it in one place.
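A minimal sketch of such a superclass (the notification name and handler are hypothetical):

    @interface BaseViewController : UIViewController
    @end

    @implementation BaseViewController
    - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
    {
        self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
        if (self) {
            // Register once here; every derived controller inherits this.
            [[NSNotificationCenter defaultCenter] addObserver:self
                                                     selector:@selector(eventOccurred:)
                                                         name:@"EventDidOccurNotification"
                                                       object:nil];
        }
        return self;
    }

    - (void)dealloc
    {
        [[NSNotificationCenter defaultCenter] removeObserver:self];
        [super dealloc];   // omit this line under ARC
    }

    - (void)eventOccurred:(NSNotification *)note
    {
        // Present the custom graphic here; subclasses get this for free.
    }
    @end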

Making classes work together in obj-C

I'm writing a program for iPhone that will first let the user take a photo, then will dynamically retrieve the colour at the spot where the user taps on the image, and draw a rectangle of that colour. I have two relevant classes for this: AppViewController and AppView. The former contains all the UI elements and IBActions; the latter holds the position of the last tap, the touch-handling methods, and the drawRect (and a static method to get colour data at given coords of an image).
What I wanted to do is to put the touch-handling (calling drawRect in touchesMoved/Ended) and the drawRect in the AppViewController. That doesn't work, since that class doesn't inherit from UIView, but from UIViewController. What's the correct way to do that?
Another way to phrase that: How to constantly change something (well, constantly as long as the user is swiping across the screen) in a class that doesn't support touch-detection methods?
(This probably doesn't explain it well. Please ask clarifying questions).
I think the delegate pattern might be helpful to you in this situation. You could call your delegate's shouldUpdateRectangle selector in touchesMoved/Ended.
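A rough sketch of that wiring (the protocol and method names are hypothetical):

    // The view reports touches; the controller decides what to do with them.
    @protocol AppViewDelegate <NSObject>
    - (void)appView:(UIView *)view shouldUpdateRectangleForPoint:(CGPoint)point;
    @end

    @interface AppView : UIView
    @property (nonatomic, assign) id<AppViewDelegate> delegate;  // weak under ARC
    @end

    @implementation AppView
    - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
    {
        CGPoint p = [[touches anyObject] locationInView:self];
        [self.delegate appView:self shouldUpdateRectangleForPoint:p];
        [self setNeedsDisplay];  // let the framework call drawRect:, never call it directly
    }
    @end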
There is not really a correct way to move the view's behaviors into a view controller, since that is not the way the classes are meant to be used. You should probably look at what's driving you to try to subvert the framework's design this way, because that is likely going to be easier to fix.
It's not uncommon, though, for a view to call out to a supporting class for help in this stuff. You could certainly have your view's drawRect: call methods in the view controller, though I would be careful about mixing their concerns too much, because it could get hard to figure out who's responsible for what.

How to keep model & controller separate from a CALayer based UI?

I'm trying to re-implement an old Reversi board game I wrote with a bit more of a snazzy UI. I've looked at Jens Alfke's GeekGameBoard code for inspiration, and CALayers looks like the way to go for implementing the UI.
However, there is no clean separation of model and view in the GeekGameBoard code; the model is the view, which makes it hard to, for example, make a copy of the game state in order to perform game-tree search for the AI player. Yet I don't seem to be able to come up with an alternative structure that allows a separation of model and view and doesn't involve a constant battle to keep two parallel grids (one for the model, one for the view) in sync. This, of course, has its own problems.
How do I best implement the relationship between an AI-search-friendly model structure and a display-friendly view? Any suggestions / experiences would be appreciated. I'm dreading / half expecting an answer along the lines of "there is no good answer: deal with it as best you can", but I'm prepared to be surprised!
Thanks for the answer Peter. I'm not entirely sure I understand it fully, however. I can see how this works if you just have an initial set of pieces that are moved around, and even removed, but what happens when a person puts a new piece down? Would it work like this:
User clicks in the view.
View click is translated to a board location and controller is notified.
Controller creates a new Board with the successor state (if appropriate, i.e. it was a legal move).
The view picks up the new board via its bindings, tears down the existing view/layer hierarchy and replaces it with the current state.
Does that sound right?
PS: Sorry for failing to specify whether it was for the iPhone or Mac. I'm most interested in something that works for the iPhone, but if I can get it to work nicely on the Mac first I'm sure I can adapt the solution to work on the iPhone myself. (Or post a new question!)
In theory, it should be the same as for an NSView-based UI: Add a model property (or properties), expose it (or them) as bindings, then bind the view (layer) to the model through a controller.
For example, you might have a Board class with Pieces on it (each Piece having a reference to the Player who owns it), with all of those being model classes. Your controller would own a Board, and your view/layer would be able to display a Board, possibly with a subview/sublayer for each Piece.
You'd bind your board view/layer to the controller's board property, and in your view/layer's setter for that property, create a subview/sublayer for each piece, and bind it to any properties of the Piece that it will need. (Don't forget to unbind and remove all the subviews/sublayers when replacing the main view/layer's Board.)
When you want to move or modify a Piece, you'd do so using its own properties; these will translate to property accesses on the view/layer. Presumably, you'll have your layer's properties set up to animate changes (so that, for example, changing a Piece's position will cause its layer to move accordingly).
The same goes for the Board. You might let the user change one or both tile colors; you'll bind your color well(s) through your game controller to its Board object, and with the view/layer bound to the same property of the same Board, it'll pick up the change automatically.
Disclaimers: I've never used Core Animation for anything, and if you're asking about Cocoa Touch instead of Cocoa, the above solution won't work, since it depends on Cocoa Bindings.
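To make the model side concrete: I'm not sure generic bindings reach CALayer out of the box, so here's a conservative sketch of the same idea using plain KVO (Mac-flavoured fragments; Piece, Board, Player, and pieceLayer are all hypothetical):

    // Model classes: plain KVO-compliant objects, no view code anywhere.
    @interface Piece : NSObject
    @property (nonatomic, assign) NSPoint boardPosition;
    @property (nonatomic, strong) Player *owner;    // Player: hypothetical model class
    @end

    @interface Board : NSObject
    @property (nonatomic, copy) NSArray *pieces;    // of Piece
    @end

    // View/layer side: observe the model piece...
    [piece addObserver:self forKeyPath:@"boardPosition"
               options:NSKeyValueObservingOptionNew context:NULL];

    // ...and let implicit layer animation move its layer when the model changes.
    - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                            change:(NSDictionary *)change context:(void *)context
    {
        NSPoint newPosition = [[change objectForKey:NSKeyValueChangeNewKey] pointValue];
        pieceLayer.position = NSPointToCGPoint(newPosition);
    }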
I have an iPhone application where almost all of the interface is constructed using Core Animation CALayers, and I use a very similar pattern to what Peter describes. He's correct in that you want to treat your CALayers as if they were NSViews / UIViews and manage their logic through controllers and data via model objects.
In my case, I create a hierarchy of controller objects which also function as model objects (I may refactor to split out the model components). Each of the controller objects manages a CALayer, so there ends up being a parallel CALayer display hierarchy to the model-controller one. For my application, I need to perform calculations for equations constructed using this hierarchy, so I use the controllers to provide calculated values from the bottom of the tree up. The controllers also handle user editing events, such as the insertion of new suboperations or deletion of operation trees.
I've created a layer-hosting view class that allows the CALayer tree to respond to touch or mouse events (the source of which is now available within the Core Plot project). For your boardgame example, the CALayer pieces could take in the touch events, and have their controllers manage the back-end logic (determine a legal move, etc.). You should just be able to move pieces around and maintain the same controllers without tearing everything down on every move.
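A hedged sketch of the touch routing (a simplification of the idea, not the Core Plot source; PieceLayer and its handler method are hypothetical):

    // In the layer-hosting view: hit-test the layer tree and forward the
    // touch to whichever piece layer was hit.
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
    {
        // -hitTest: expects a point in the superlayer's coordinate space,
        // which for a view's root layer matches the superview's coordinates.
        CGPoint p = [[touches anyObject] locationInView:self.superview];
        CALayer *hit = [self.layer hitTest:p];
        if ([hit isKindOfClass:[PieceLayer class]]) {       // hypothetical CALayer subclass
            [(PieceLayer *)hit handleTouchAtPoint:p];       // back-end logic lives behind this
        }
    }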