Is this a good OO design?

I'm building an API for myself to do 2D skeletal animation.
I have a Bone class
and a Skeleton class.
The Skeleton creates a root bone and then subsequent bones are added via the Skeleton's add method by providing the parent bone.
What I now want to do is add animation and frames.
What I was thinking of is a class that can load and interpolate animations. It would load an animation and then, on each frame, take in a Skeleton and modify it accordingly.
Is this a good design? Should an animation take in a Skeleton, or should a Skeleton take in an animation and apply it onto itself?

It's best to create an Animation that makes use of a Skeleton rather than the opposite. This is because, logically speaking, a Skeleton does not require an Animation to exist, but an Animation strongly requires a Skeleton.
So you can couple those elements in the Animation itself. Do not put too much logic in your objects; try to put it only where necessary.
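A rough sketch of that direction, with hypothetical Bone, Skeleton, and Keyframe shapes (they are stand-ins for whatever your API actually exposes, not taken from it):

#include <cstddef>
#include <map>
#include <utility>
#include <vector>

struct Bone {
    int   id;
    float x, y;      // 2D position
    float angle;     // rotation
};

struct Skeleton {
    std::vector<Bone> bones;
};

// One keyframe: a target pose per bone id.
struct Pose { float x, y, angle; };
using Keyframe = std::map<int, Pose>;

class Animation {
public:
    explicit Animation(std::vector<Keyframe> frames) : frames_(std::move(frames)) {}

    // The Animation knows about the Skeleton, not the other way around.
    void apply(Skeleton& skeleton, std::size_t frameIndex) const {
        if (frames_.empty()) return;
        const Keyframe& frame = frames_[frameIndex % frames_.size()];
        for (Bone& bone : skeleton.bones) {
            auto it = frame.find(bone.id);
            if (it != frame.end()) {
                bone.x = it->second.x;
                bone.y = it->second.y;
                bone.angle = it->second.angle;
            }
        }
    }

private:
    std::vector<Keyframe> frames_;
};

The Skeleton stays a plain data structure; everything that knows about frames and interpolation lives in the Animation.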

Presumably every bone has a 2D position/angle, and an animation is a collection of frames where each frame is a collection of bone identifiers and positions/angles?
Then you might consider something like
// Requires: using System.Linq; for FirstOrDefault
public class Skeleton
{
    public List<Bone> Bones { get; set; }

    public void Animate(Animation animation)
    {
        foreach (Bone bone in Bones)
        {
            // Look up this bone's displacement in the animation and apply it.
            bone.Displace(animation.Displacements.FirstOrDefault(o => o.BoneId == bone.BoneId));
        }
    }
}

I would create an Animation class containing a std::vector<Skeleton> data member that you can use to manipulate individual Skeleton objects on each frame or interpolate across multiple Skeleton objects in the vector from keyframes. Then when you "play" the animation, you merely have to iterate over the vector, calling out each Skeleton object, and pass that to some other function or class that will display the results on-screen (or do whatever else the Skeleton can be useful for, such as warping a mesh, etc.)
Having an animation object will make it much easier to manipulate the frames of the animation, allowing you to remove/replace frames, etc. Otherwise, if you try to pile all of this functionality into a Skeleton object, you're going to find there's a lot of baggage when trying to manipulate individual aspects of the Skeleton separately from the animation sequence (e.g. suppose you need to change the hierarchy of the Skeleton for a segment of frames; that would be very easy if there is a Skeleton on each frame, but not if you have a monolithic Skeleton object).
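A bare-bones sketch of that layout, with the Skeleton contents and the presentation step left as placeholders:

#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

struct Skeleton { /* bones, hierarchy, ... */ };   // placeholder for your real class

class Animation {
public:
    void addFrame(Skeleton frame)   { frames_.push_back(std::move(frame)); }
    void removeFrame(std::size_t i) { frames_.erase(frames_.begin() + i); }

    // "Playing" is just walking the vector and handing each per-frame Skeleton
    // to whatever consumes it (a renderer, a mesh warper, ...).
    void play(const std::function<void(const Skeleton&)>& present) const {
        for (const Skeleton& frame : frames_) {
            present(frame);
        }
    }

private:
    std::vector<Skeleton> frames_;  // one Skeleton per frame (or per keyframe)
};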

How to actually get bone and animation data with assimp?

Hello, I'm trying to import animation and bone data from an FBX file.
I have actually tried several different models in different formats, but no matter what I do, it always says
scene->mNumAnimations = 0;
scene->mNumBones = 0;
What could be happening?
What assimp uses to animate the models is a bone chain based on the meshes. We have a bone hierarchy which stores the vertices affected by the bone animation. The bones themselves are described by an offset matrix, which is just the inverse of the global transform of the current mesh with respect to the root node of your animation hierarchy. This comes from the X file format, and I want to add a better way to describe bone animations.
The pre-transform post-processing step (aiProcess_PreTransformVertices) performs the transformation directly during the import, so your bones will be gone.
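For reference, a minimal import that keeps the bone and animation data might look roughly like this (the file name and flag choices are only an example; the essential part is not requesting aiProcess_PreTransformVertices):

#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>
#include <cstdio>

int main() {
    Assimp::Importer importer;

    // Do NOT pass aiProcess_PreTransformVertices here, or the bone
    // transforms are baked into the vertices and the bones disappear.
    const aiScene* scene = importer.ReadFile(
        "model.fbx",
        aiProcess_Triangulate | aiProcess_LimitBoneWeights);

    if (!scene) {
        std::printf("Import failed: %s\n", importer.GetErrorString());
        return 1;
    }

    std::printf("animations: %u\n", scene->mNumAnimations);
    for (unsigned int m = 0; m < scene->mNumMeshes; ++m) {
        std::printf("mesh %u bones: %u\n", m, scene->mMeshes[m]->mNumBones);
    }
    return 0;
}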

Design pattern for child calling method in parent

I am currently working on my biggest project and I am having trouble figuring out how to structure my code. I'm looking for some guidance.
I have two objects, a Tile and a Container. Each Tile has a 2D coordinate, and all tiles are children of the Container. The Container has methods to return the tile at a location, switch tiles, add tiles, and remove tiles.
Now, when you click on a tile it disappears; that was easy because it was self-contained. The problem came when I created different types of tiles that inherit from the base Tile. Each type of tile does a different action when you click on it: some destroy surrounding tiles, some switch with other tiles, and others add new tiles. For simplicity we will call these three subclasses Tile-destroy, Tile-swap, and Tile-add.
My problem is, when I click on these tiles, how can they act on other tiles in the Container? Should I just call functions in the parent class, or is there a better way to do this? I am having trouble #including the Tile in the Container as well as the other way around. I feel like it's not a proper pattern.
I have it set up so that when a click takes place, the Container handles it, checks the type of tile that was clicked, and acts from there with a large else-if statement. However, this makes it very difficult to add new tile types. Ideally, all the information about what happens when you click on a tile would be contained within each tile subclass.
Any ideas?
I can suggest the simplest design:
Your Container will be a game controller.
Each tile has a Parent property which refers to the Container.
When you click on a tile, it sends a Command to the Container (for example, DestroyTile(x, y) or AddTile(x, y)).
The Container handles these commands and destroys, adds, or swaps tiles.
If you want a really good, more decoupled design, you can also create handlers for each operation type (DestroyTileHandler, AddTileHandler). On the different commands, the Container just passes the command to the appropriate handler. You also need to pass a context object (like a Field with tiles) to the handler. This allows you to add and modify operations without even changing the Container code.
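A bare-bones C++ sketch of the command idea (all names here are made up for illustration, not taken from your code):

#include <memory>

class Container;  // forward declaration breaks the include cycle

// A command knows how to act on the container; tiles only create commands.
class Command {
public:
    virtual ~Command() = default;
    virtual void execute(Container& container) = 0;
};

class Tile {
public:
    virtual ~Tile() = default;
    Tile(int x, int y) : x_(x), y_(y) {}
    // Each subclass decides what clicking it means, without touching
    // other tiles directly.
    virtual std::unique_ptr<Command> onClick() = 0;
protected:
    int x_, y_;
};

class Container {
public:
    void handleClick(Tile& tile) {
        if (auto cmd = tile.onClick()) {
            cmd->execute(*this);   // or hand the command to a dedicated handler
        }
    }
    void destroyTile(int x, int y) { /* ... */ }
    void addTile(int x, int y)     { /* ... */ }
};

class DestroyCommand : public Command {
public:
    DestroyCommand(int x, int y) : x_(x), y_(y) {}
    void execute(Container& container) override { container.destroyTile(x_, y_); }
private:
    int x_, y_;
};

class DestroyTile : public Tile {
public:
    using Tile::Tile;
    std::unique_ptr<Command> onClick() override {
        // Destroy the tile above this one, for example.
        return std::make_unique<DestroyCommand>(x_, y_ + 1);
    }
};

Adding a new tile type then means adding a new Tile subclass and, if needed, a new Command; the Container's click handling never changes.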
See related patterns: Command, Observer
Feel free to ask questions and good luck!

Simple Drawing App Design -- Hillegass Book, Ch. 18

I am working through Aaron Hillegass' Cocoa Programming for Mac OS X and am doing the challenge for Chapter 18. Basically, the challenge is to write an app that can draw ovals using your mouse, and then additionally, add saving/loading and undo support. I'm trying to think of a good class design for this app that follows MVC. Here's what I had in mind:
Have a NSView-subclass that represents an oval (say JBOval) that I can use to easily draw an oval.
Have a main view (JBDrawingView) that holds JBOvals and draws them.
The thing is that I wasn't sure how to add archiving. Should I archive each JBOval? I think this would work, but archiving an NSView doesn't seem very efficient. Any ideas on a better class design?
Thanks.
Have a NSView-subclass that represents an oval (say JBOval) that I can use to easily draw an oval.
That doesn't sound very MVC. “JBOval” sounds like a model class to me.
Have a main view (JBDrawingView) that holds JBOvals and draws them.
I do like this part.
My suggestion is to have each model object (JBOval, etc.) able to create a Bézier path representing itself. The JBDrawingView (and you should come up with a better name for that, as all views draw by definition) should ask each model object for its Bézier path, fill settings, and stroke settings, and draw the object accordingly.
This keeps the knowledge of how to draw (the path, line width, colors, etc.) in the various shape classes where they belong, while also keeping the actual drawing code in the view layer where it belongs.
The answer to where to put archiving code should be intuitively obvious from this point.
Having a whole NSView for each oval seems rather heavyweight to me. I would descend them from NSObject instead and just have them draw to the current view.
They could also know how to archive themselves, although at that point you'd probably want to think about pulling them out of the view and thinking of them more as part of your model.
Your JBOval views would each be responsible for drawing themselves (basically drawing an oval path and filling it, within their bounds), but JBDrawingView would be responsible for mousing and dragging (and thereby sizing and positioning the JBOvals, which would be its subviews). The drawingView would do no drawing itself.
As far as archiving goes, you could have a model class to represent each oval (storing its bounding rectangle, or whatever other dimensions you choose to represent each oval with). You could archive and unarchive these models to recreate your views.
Finally, I use the JB prefix too, so … :P at you.

What way to use the CGContext to draw is suitable?

I know that I can't call CGContext to draw directly; I need to put the drawing logic in drawInContext: and trigger drawing by calling setNeedsDisplay. So I designed a command to execute, but it causes some problems, like this:
Why I can't draw in a loop? (Using UIView in iPhone)
I think CGContext is very different from my previous programming experience (I have used the HTML5 canvas, which lets me add more detail after I draw, and Java Swing does too).
Actually, I want to know what the suitable way to implement this kind of thing is, in the mind of an Apple programmer. Thanks.
There are three approaches to what you're asking. You can draw everything in drawRect:, you can manage multiple layers, or you can draw in an image. Each has advantages, but first you need to think correctly about the problem so that you don't destroy performance.
Drawing happens constantly. Every time anything changes, there may be quite a lot of drawing that has to be done. Not the whole screen usually, but still a lot of drawing. Since drawRect: and drawInContext: can be called many times, they need to be efficient. That means that you don't want to do a lot of expensive calculations, and you don't want to do a lot of useless drawing. "Useless" means "won't actually be displayed because it's off screen or obscured by other drawing."
So in the usual case, you put your actual drawing code in drawRect:, but you do all your calculations elsewhere, generally when your data changes. For example, you read your files, figure out your coordinates, create CGPaths, etc. whenever your data changes (which should be much less frequent than drawing). You save all the results into ivars, and then in drawRect: you just draw the final result. So in your loop example, you would probably have an NSArray of images in your view object, and in drawRect: you would draw them all in order.
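To make that concrete, here is a rough Core Graphics sketch of the precompute-then-replay split; the function names and the use of oval paths are just an illustration, and you would call the drawing function from your drawRect: implementation:

#include <CoreGraphics/CoreGraphics.h>
#include <vector>

// Rebuilt only when the underlying data changes -- NOT on every draw.
static std::vector<CGPathRef> gPaths;

void rebuildPaths(const std::vector<CGRect>& ovalRects) {
    for (CGPathRef p : gPaths) CGPathRelease(p);
    gPaths.clear();
    for (const CGRect& r : ovalRects) {
        CGMutablePathRef p = CGPathCreateMutable();
        CGPathAddEllipseInRect(p, NULL, r);
        gPaths.push_back(p);
    }
}

// Called from the view's draw method -- cheap, no recomputation.
void drawPaths(CGContextRef ctx) {
    CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 1.0);
    for (CGPathRef p : gPaths) {
        CGContextAddPath(ctx, p);
        CGContextFillPath(ctx);
    }
}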
Another approach is to create a separate layer for each image, set the image as the content, and then attach the layer to the view. You're done at that point. There is no more drawing code you need to write. Quartz handles layers very efficiently, so this can be a very good solution to a wide variety of problems.
Finally, you can composite everything into an image, and then stick that image in an image view, or draw the image directly in the view, or attach the image to a layer. This is a good solution if you have very complicated drawing (particularly using CGPath). This can be expensive if you're constantly changing things, since you have to create a new image context, draw the old image into the new context, draw on top of it, and then create a new image from the context. But it's good for a complicated drawing that doesn't change often.
But you're correct, CGContext is not like a canvas. It needs to be redrawn every draw cycle. You can do that yourself, or you can use another view object (like UIImageView) to do it for you. But it has to be done one way or another.

Using Core Animation/CALayer for simple layered painting

I would like to create a custom NSView that takes a layered approach to painting. I imagine the majority of the layers would be the same width and height as the backing view.
Is it appropriate to use the Core Animation classes like CALayer for this task, even though I don't expect to need much animation? Is there a more appropriate approach?
To clarify, the view is not meant to be like a canvas in a Photoshop-like application. It's more of a data display that should allow for user interaction (selecting, moving, scrolling, etc.)
If it's display and layout you're after, I'd say that a CALayer-based architecture is a good choice. For the open source Core Plot framework, we construct all of our graphs and plot elements out of CALayers, and organize them in a regular hierarchy. CALayers are lightweight and use almost identical APIs between Mac and iPhone. They can even be made to respond to touch or mouse events.
For another example of a CALayer-based user interface, my iPhone application's entire equation entry interface is composed of CALayers, including the menu that slides up from below. Performance is slightly better than that of my previous UIView-based implementation, but the same code also works within my preliminary desktop version of the application.
For a drawing program, I would imagine it would be important to hold a buffer of the bitmap data. The only issue with using a CALayer is that the contents property is a CGImageRef. To turn that back into a graphics context for doing further drawing can be a bit of a pain. You'd have to initialize a new context, draw the bitmap data into it, then do whatever drawing operations you wanted to do, and finally turn that back into a CGImageRef. You probably wouldn't be able to avoid doing a number of pretty large memory allocations, which is virtually guaranteed to slow your program way down.
I would consider holding an off-screen buffer for each layer. Take a look at the Quartz CGLayerRef object. I think it probably does what you want to do: it's an off-screen buffer that holds things you might want to draw repeatedly. You can also quickly get a CGContextRef whenever you need it so you can do additional drawing. And you can always use that CGContextRef with NSGraphicsContext if you want to use Cocoa drawing methods.
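A small sketch of the CGLayerRef approach (the names are placeholders; destCtx would be whatever context you are currently drawing into):

#include <CoreGraphics/CoreGraphics.h>

// Create an off-screen layer once, sized like the view, and keep it around.
CGLayerRef makeScratchLayer(CGContextRef destCtx, CGSize size) {
    CGLayerRef layer = CGLayerCreateWithContext(destCtx, size, NULL);

    // Draw the persistent content into the layer's own context.
    CGContextRef layerCtx = CGLayerGetContext(layer);
    CGContextSetRGBFillColor(layerCtx, 1.0, 0.0, 0.0, 1.0);
    CGContextFillEllipseInRect(layerCtx, CGRectMake(10, 10, 80, 40));

    return layer;  // caller releases with CGLayerRelease when done
}

// Each draw cycle, blitting the layer is cheap compared to redrawing its contents.
void drawCachedLayer(CGContextRef destCtx, CGLayerRef layer) {
    CGContextDrawLayerAtPoint(destCtx, CGPointMake(0, 0), layer);
}

You can keep drawing into the layer's context between frames, so it behaves much more like the persistent canvas you are used to.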