Today I decided to try using CALayers to show a rectangular box overlaying an NSView. The layer contains some text and is turned on and off whenever the variable text needs to be shown. The reason I wanted to use CALayer for this was the nifty rendering and animation you can easily do with CALayer. I implemented my layer and it worked like a charm. However, after using my GUI and clicking several times on various buttons, turning the layer on and off, it seemed that the hierarchy of what I thought was my layer view was skewed. I think focus must have switched to some other NSView, which was in turn turned off. I basically got very confused about which layer I was handling at a given time, and I lost control of the view hierarchy.
My question is: should I use subviews of NSView, or CALayers, to show something that may be toggled on and off many times in an application? It seems to me that it is easy to lose track of which layer you are working on. Is there a way to identify the current layer by name so you can reuse it, or is it best to create layers, delete them, and then re-create them the next time you need them?
Thanks for your time. Cheers, Trond
FWIW,
I have often turned CALayers "on and off" very quickly, with no problems at all. So you can do that if you want to!
Deleting them and recreating them quickly on the fly does not seem to cause any problems either. (It's not a big resource user, and I've never seen any other problems doing that.) So definitely do that if you want to!
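For illustration, a minimal sketch of both approaches; overlayLayer and makeOverlayLayer are hypothetical names, and hostView is assumed to be a layer-backed view:

// Toggle an existing layer on/off (cheap; the layer sticks around).
overlayLayer.hidden = !overlayLayer.hidden;

// Or throw it away and rebuild it the next time it's needed.
[overlayLayer removeFromSuperlayer];
overlayLayer = nil;
// ... later ...
overlayLayer = [self makeOverlayLayer];   // hypothetical factory method
[hostView.layer addSublayer:overlayLayer];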
I actually don't understand what you mean about "naming" layers - of course, as ivars they have a name! You have to name all your CALayers.
Here's a typical bit of production code (from a .h file):
CALayer *rearLayer;
CALayer *hugeBasket; // holds everything for ez-on/off
CALayer *theActualSkyline; // nb, same name as similar UIView
CALayer *someTrees; // minor stuff
CALayer *someBushes; // overs
// for the stupid help basket..
CALayer *LLDRear;
CALayer *LLDArrowLeft;
CALayer *LLDArrowRight;
CALayer *LLDPointlessUpArrow;
CALayer *LLDYetAnotherStupidShadow;
// etc etc..
And so on and on. I don't really see how you can "not" name them, you know!
Finally, don't forget layers are much "better" than NSViews, because NSViews have a shoddy/buggy relationship between overlapping siblings: they essentially don't work. Read about that here:
port an iOS (iPhone) app to mac?
Hope it helps!
PS - these may also help with CALayers...
Exactly what should happen in a CALayer's display/drawRect methods?
What's the difference and compatibility of CGLayer and CALayer?
I'm using CGRects for hitboxes, and my collisions seem to be a bit off. I want to quickly see where my hitboxes actually are.
I tried a bunch of different approaches but most of them seem to be outdated, or just didn't work for me.
I tried this already and a bunch of similar approaches.
What is the simplest way to show the borders of a CGRect?
With cocos2d 2.0, in ccConfig.h there is a CC_SPRITE_DEBUG_DRAW symbol. If you set that to 1, the box will be drawn during the visit cycle.
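For reference, that flag lives in ccConfig.h and enabling it is a one-line change (value 1 draws each sprite's bounding box, as described above):

// in ccConfig.h (cocos2d 2.0)
#define CC_SPRITE_DEBUG_DRAW 1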
If CC_SPRITE_DEBUG_DRAW, as YvesLeBorg suggested, doesn't suit you, you can override the draw method in your layer or nodes and draw there using the helper functions from CCDrawingPrimitives.h. Don't forget to call [super draw].
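A rough sketch of that override, assuming the cocos2d 2.x C helpers from CCDrawingPrimitives.h and a hypothetical CGRect ivar called hitbox:

#import "CCDrawingPrimitives.h"

- (void)draw
{
    [super draw];   // keep the node's normal drawing

    // Outline the hitbox in red (coordinates are node-local).
    ccDrawColor4B(255, 0, 0, 255);
    ccDrawRect(hitbox.origin,
               ccp(hitbox.origin.x + hitbox.size.width,
                   hitbox.origin.y + hitbox.size.height));
}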
I'm trying to get to grips with using Chipmunk (not the Obj-C version) with UIKit components on iOS, and still struggling immensely.
I'm trying to establish how, in the ChipmunkColorMatch example in the documentation, the UIButton instances are actually hooked up to any of the physics calculations. I see that the UIButtons are created inside the Ball class, and some of their properties are set (type, image, etc.), but I'm not understanding where the cpBody or cpShape, or whichever it is, is actually attached to that UIButton. I assume it needs to be, else none of the physics will be reflected in the UI.
I've looked in the SimpleObjectiveChipmunk tutorial on the website too, but due to the fact that it uses libraries unavailable to me (the Obj-C libraries), I can't establish how it works there, either. Again, I see a UIButton being created and positioned on-screen, but I don't see how the cpBody (or in that case, ChipmunkBody) is linked to the button in any way.
Could anyone shed some light on how this works? Effectively what I'm going to need are some UIButton instances which can be flicked around, but I've not even got as far as working out how to create forces yet, since I can't get the bodies hooked up to the buttons.
Much obliged, thanks in advance.
EDIT: Should also point out that I am not, and do not want to use cocos2d in this project at all. I've seen tutorials using that, but that's a third layer of confusion to add in. Thanks!
Assuming this source is the project you're asking about, it looks like the magic happens in Ball's sync method -- it creates a CGAffineTransform representing the translation and rotation determined by the physics engine, and applies that to the button.
In turn, that method is called by the view controller's draw: method, which is timed to occur on every frame using CADisplayLink, and updates the physics engine before telling each Ball to sync.
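A hedged sketch of what that sync step typically looks like with the plain C Chipmunk 6.x API; the _body and _button ivars are placeholders, any coordinate flip between Chipmunk and UIKit is ignored, and the translation is applied via the button's center rather than folded into a single combined transform:

- (void)sync
{
    // Read the simulated position and rotation back from the physics body...
    cpVect pos = cpBodyGetPos(_body);
    cpFloat angle = cpBodyGetAngle(_body);

    // ...and push them onto the UIButton so the UI mirrors the simulation.
    _button.center = CGPointMake(pos.x, pos.y);
    _button.transform = CGAffineTransformMakeRotation(angle);
}

The CADisplayLink is typically created with displayLinkWithTarget:selector: and added to the run loop, so this runs once per frame right after the physics step.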
I have recently started creating my own controls and I seem to have a bit of trouble understanding how I should use drawRect.
Basically I have 3 Questions.
Is it a good idea to have conditional drawRect code? i.e. different drawing code based on properties or instance variables.
What is the best method for animating changes to the drawRect's drawing? For example, a fuel gauge control with animated fill and un-fill.
And, finally, the examples I have seen for animating with drawRect tend to use timers; is that really a good method in practice? It seems like heavier apps would have issues with that method.
I guess a 4th would be: is there, perhaps, a better place to do this kind of stuff?
Is it a good idea to have conditional drawRect code? i.e. different drawing code based on properties or instance variables.
Sure, why not? If your drawRect: method becomes unwieldy, you could split it into multiple methods that you then call from drawRect: depending on the properties of your view. E.g. you could have methods like drawBackground, drawTitle, etc.
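For example, a hedged sketch of that split (the helper names and the showsBorder property are just illustrative):

- (void)drawRect:(CGRect)rect
{
    [self drawBackground];

    if (self.title.length > 0) {
        [self drawTitle];
    }
    if (self.showsBorder) {   // hypothetical BOOL property
        [self drawBorder];
    }
}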
What is the best method for animating changes to the drawRect's drawing? For example, a fuel gauge control with animated fill and un-fill.
That depends. For very small views, you could call setNeedsDisplay from a timer, but for larger views, you'll often run into performance issues with this approach.
Animating changes is often better done by compositing your view out of multiple subviews or layers that can be animated with Core Animation (or the simplified UIView animation methods).
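As a rough sketch of that compositing approach for the fuel gauge example (fillView is an assumed subview whose frame represents the current fill):

- (void)setFuelLevel:(CGFloat)level animated:(BOOL)animated
{
    CGRect fillFrame = self.bounds;
    fillFrame.size.width = CGRectGetWidth(self.bounds) * level;

    void (^update)(void) = ^{ self.fillView.frame = fillFrame; };
    if (animated) {
        // Let Core Animation interpolate the frame change instead of redrawing.
        [UIView animateWithDuration:0.3 animations:update];
    } else {
        update();
    }
}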
Firstly, be gentle with me. I'm new to Objective-C and iOS programming.
I've just purchased a copy of Opacity and I'm playing around with layers. My only grade-A success so far has been designing and successfully generating my own calculator - so I haven't got that far!
Specifically, I want to know how to use Opacity CALayer subclasses to 'replace' the backing layer on a UIView. Adding sublayers to the existing view layer is obvious enough - and I've managed to get that working - but how do I use an Opacity CALayer subclass to provide the initial content of the view's backing layer? I've tried copying the drawInContext code from an Opacity subclass into drawRect on my UIView subclass that hosts the proxy view for the controller's 'view' property but (for some reason I can't fathom) that didn't work.
I assume that the view's original backing layer is item '0' in the layer array? My ultimate goal is to have just my layers behind the view without having to 'ignore' the original one.
If there's a chance that Dr Brad Larson might read this...
I've watched your Quartz 2D and animation videos over and over and I've read all of the copies of your course notes that I can lay hands on, but all the examples I've found start with an image already in the hosting view - onto which more layers are added, which isn't really what I'm after. I've also read the Big Nerd Ranch Guide and got all of Conway and Hillegass's example code too but - I'll be darned - they also start with an image already in the view!
If any one can help me out - just point me at the relevant documentation, please don't bother writing huge tranches of code here - I'd be seriously grateful.
VV.
PS: I'm deliberately NOT using IB yet as I want to get to grips with the underlying mechanics of Cocoa Touch first, and I won't use dot notation on principle :-). (See my profile!)
The trick is to override the view's class method 'layerClass' to return the base layer subclass that's needed, before nailing other layers into the hierarchy. This builds the view's base (implicit) layer when the view is instantiated, and then I can slap my other layers down afterwards.
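In code, that override is just the following, where MyOpacityLayer stands in for whatever Opacity-generated CALayer subclass is being used:

// In the UIView subclass: make the implicit backing layer an instance
// of the custom CALayer subclass instead of a plain CALayer.
+ (Class)layerClass
{
    return [MyOpacityLayer class];
}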
This is a fun game. I'll get the mind-set soon!
This technique differs markedly from the same requirement using NSViews, which have a 'setLayer' instance method that can be used to change the implicit layer AFTER instantiation: an expensive procedure not offered on a lightweight object like a UIView.
MVC/OOP design patterns say you don't set a property, per se; you ask an object to set its property. Similarly, in Cocoa you don't tell an object when to draw itself. Your object's code has detailed HOW it will draw itself, so we trust the frameworks to decide when (for the most part) it should draw.
But, when it comes to animation in Cocoa (specifically Cocoa-Touch) it seems that we now must take control of when the object draws itself from within the objects view controller. I can't send a message to a UIView subclass asking it to change some value and then leave it alone knowing it will slowly (duration = X) animate itself to a new position, alpha, rotation, etc. depending on the property changes. Or can I?
Basically, I'm looking for a way to set the property and then walk away. Instead, it seems, I need to wrap the code that calls the object asking it to change its property with an animation block of some sort "[UIView beginAnimations:nil context:NULL]; ... [UIView commitAnimations];"
I'm ending up with lots and lots of animation blocks in my view controllers and none in my view objects...I guess I'm just looking for someone to verify that this is how things are done and I'm not overlooking something. I haven't gotten much farther than the UIView animations within Cocoa-Touch, so maybe that's my problem and it's time to dig deeper?!?
You are correct that UIView does not animate its property changes by default the way CALayer does, but I don't think this indicates a break in MVC. It is appropriate for a Controller to instruct a View in how it should transform. That is the role of a Controller class as surely as it is appropriate for the Controller to know the correct frame for the View and even manage layout. I agree that it's a little weird that you call -beginAnimations:context: on the UIView class rather than on an instance, but in practice it does actually work much better that way since you may want to animate many views together.
That said, if you had a UIView subclass that managed the layout of its subviews, there would be nothing wrong with allowing that UIView to manage the animation rather than relying on a UIViewController to do it. So this is something that could go either place, but in practice it generally goes in the Controller as you've discovered.
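As a hedged sketch of that view-managed variant (the gaugeLevel property and needleView subview are illustrative), a setter inside the UIView subclass can own its animation, so callers really can just set the property and walk away:

- (void)setGaugeLevel:(CGFloat)gaugeLevel
{
    _gaugeLevel = gaugeLevel;
    [UIView animateWithDuration:0.5 animations:^{
        // Illustrative mapping from fuel level to needle rotation.
        self.needleView.transform = CGAffineTransformMakeRotation(M_PI * gaugeLevel);
    }];
}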
I am using "MVC" here in the typical Cocoa sense. You're correct that this might not be appropriate in a SmallTalk program, but then SmallTalk Controllers have a much more limited role (management of user input events). Cocoa significantly expands the role of Controllers in MVC and I think it's an improvement, even if it means there are now some functions that could go in either the Controller or the View (and this is one of them).