Cocoa 2D graphics: Quartz, Core Image or Core Animation? - objective-c

I have been reading documentation for several hours now about drawing two-dimensional graphics in an Objective-C Cocoa application. There appear to be several different technologies, each specific to certain tasks. My understanding is that the following technologies do the following things. Please correct me if I'm wrong.
Quartz 2D: The primary library for drawing shapes, text, and images to the screen.
Core Graphics: this is the name of the framework that contains Quartz. This can be used as a synonym for Quartz.
QuartzGL: A GPU acceleration mode for Quartz that is not enabled by default and not necessarily faster for drawing things on the screen.
OpenGL: The lowest-level library; it talks directly to the graphics card at the cost of more lines of code. More suited for 3D graphics.
Core Image: A library for displaying images and text, but not so much for drawing shape primitives.
Core Animation: A library for automatically animating objects. Apparently not suited for moving large numbers of objects, nor for continuous animation of objects.
QuickTime: A library that apparently also does images and text in addition to video, but probably not good for drawing primitive shapes.
What I would like to do is create a browser for some specific type of data. The view would not be very complicated and would consist of drawing rectangles at specific locations. However, the user should be able to move around by dragging the view to the left or the right, and this movement should be fluid. Here is an example that is very close to what I'm trying to make:
http://jbrowse.org/ucsc/hg19/
What drawing technology would you recommend I start coding with?

You want Quartz. Unless you're graphing MASSIVE amounts of data, any Mac (I'm assuming Mac, not iOS) should handle it easily. It is easy, efficient, and will probably get you where you need to go. For the dragging movement, you'll probably manage that with Core Animation layers.
Note: Everything in the end is handled by AppKit (Mac) or UIKit (iOS) and, eventually, Core Animation. If you're doing graphics, you will encounter Core Animation at some point, as it manages everything displayed.
Note: If you are graphing that much data, you can use OpenGL, but even then you shouldn't need it until you start displaying many millions of vertices or complex visualisations.
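In case a concrete starting point helps, here is a minimal sketch of the Quartz side: an NSView subclass whose drawRect: fills rectangles at specific locations. DataTrackView and trackRects are made-up names standing in for your own view and data model.

    // Minimal sketch; DataTrackView and trackRects are hypothetical names.
    #import <Cocoa/Cocoa.h>

    @interface DataTrackView : NSView
    @property (nonatomic, strong) NSArray *trackRects;   // NSValue-wrapped NSRects
    @end

    @implementation DataTrackView

    - (void)drawRect:(NSRect)dirtyRect
    {
        // Quartz (Core Graphics) drawing through the current AppKit context.
        CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];

        CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0);
        CGContextFillRect(ctx, NSRectToCGRect(dirtyRect));

        CGContextSetRGBFillColor(ctx, 0.2, 0.4, 0.8, 1.0);
        for (NSValue *value in self.trackRects) {
            NSRect r = [value rectValue];
            if (NSIntersectsRect(r, dirtyRect)) {         // only draw what is visible
                CGContextFillRect(ctx, NSRectToCGRect(r));
            }
        }
    }

    @end

For the fluid left/right dragging, the simplest route is to make a view like this very wide and embed it in an NSScrollView; alternatively, back it with a layer and move that layer in mouseDragged:, which is where the Core Animation layers mentioned above come in.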

Related

Would using cocos2d be easier for a drag and match game?

I am pretty new to making games, but I am pretty familiar with programming iOS. I am creating a shape-matching game: there would be an array of different shapes, and the user would drag each shape to the correct corresponding shape. If they get it right it would stay, and if they get it wrong it would shoot back. Now my question is: would that be easier using cocos2d or any game engine, or would it be just as easy not using one, just using a touch event?
Since the game you are describing is not graphically intense, I would recommend using UIKit. A couple of reasons why I would use UIKit over cocos2d:
Interface Builder / Storyboards are awesome. You can lay out your screens and game elements on screen. (I know tools exist to do this using cocos, like CocosBuilder, but IMO they just don't compare to working directly in Xcode.)
UIKit animations couldn't be easier and you can do some pretty powerful things with minimal code (a quick sketch follows this list).
You have direct access to elements such as UITableView, UICollectionView, UIScrollView, etc. There are cocos nodes that mimic these, but they don't match up in terms of response and behavior.
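To illustrate how little code the UIKit route needs for the drag-and-snap-back behaviour in the question, here is a rough sketch using a pan gesture recognizer. shapeView, originalCenter and isDroppedOnMatchingTarget: are hypothetical properties/methods standing in for the game's own pieces and matching rules.

    // Sketch only: one draggable shape; snap back if it lands on the wrong target.
    - (void)viewDidLoad
    {
        [super viewDidLoad];
        UIPanGestureRecognizer *pan =
            [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
        [self.shapeView addGestureRecognizer:pan];
        self.originalCenter = self.shapeView.center;   // remember where to snap back to
    }

    - (void)handlePan:(UIPanGestureRecognizer *)pan
    {
        UIView *shape = pan.view;
        CGPoint translation = [pan translationInView:self.view];
        shape.center = CGPointMake(shape.center.x + translation.x,
                                   shape.center.y + translation.y);
        [pan setTranslation:CGPointZero inView:self.view];

        if (pan.state == UIGestureRecognizerStateEnded &&
            ![self isDroppedOnMatchingTarget:shape]) {
            // Wrong match: animate the shape back to its starting position.
            [UIView animateWithDuration:0.3 animations:^{
                shape.center = self.originalCenter;
            }];
        }
    }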
For more graphically intense games I would still use cocos2d hands down. Some scenarios when you would use it:
You have a large number of sprites with a large number of animations (OpenGL is fast)
You want to use OpenGL-based effects like particles, lighting, etc.
You need a physics engine
You want to work off a prebuilt game engine (there are tons such as levelsvg, kobold2d, line starter kit, etc)
Hope this helps you.

Objective-C, Methods for animating gui

I've created many types of interfaces using the Cocoa API — some of them using documented basic animation techniques and others simply by experimenting (such as placing an animated .gif inside an NSImage class) — which had somewhat catastrophic consequences. The question I have is what is the correct or the most effective way to create an animated and dynamic GUI so that it runs optimally and properly?
The closest example I can think of that would use a similar type of animation would be something one might see done in flash on any number of interactive websites or interfaces. I'm sure flash can be used in a Cocoa app, although if there is a way to achieve a similar result without re-inventing the wheel, or having to use 3rd party SDKs, I would love to get some input. Keep in mind I'm not just thinking of animation for games, iOS, etc. — I'm most interested in an animated GUI for Mac OS X, and making it 'flow' as one might interact in it.
If you wish to add many graphics animations, then go for an OpenGL ES-based Xcode project for iOS. That helps you reduce performance problems. You can render each of the frames in the GIF as a 2D texture.
I would recommend that you take a look at Core Animation. It is Apple's framework for hardware-accelerated animations on both OS X and iOS. It's built for making animated GUIs.
You can animate property changes for things like position, opacity, color, transforms, etc., and also animate gradients with CAGradientLayer, animate non-rectangular shapes using CAShapeLayer, and a lot of other things.
A good resource to get you started is the Core Animation Programming Guide.
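As a taste of what that looks like in code, here is a small sketch that adds a CALayer to a layer-backed NSView and animates its position with a CABasicAnimation; hostView is assumed to already have wantsLayer set to YES.

    #import <QuartzCore/QuartzCore.h>

    // Sketch: slide a coloured layer across a layer-backed view.
    - (void)addAndAnimateBoxInView:(NSView *)hostView
    {
        CALayer *boxLayer = [CALayer layer];
        boxLayer.frame = CGRectMake(30, 30, 100, 60);
        CGColorRef blue = CGColorCreateGenericRGB(0.2, 0.4, 0.9, 1.0);
        boxLayer.backgroundColor = blue;
        CGColorRelease(blue);
        [hostView.layer addSublayer:boxLayer];

        // Explicit animation of the position property; the model value is updated
        // as well so the layer does not snap back when the animation finishes.
        CGPoint end = CGPointMake(300, 60);
        CABasicAnimation *move = [CABasicAnimation animationWithKeyPath:@"position"];
        move.fromValue = [NSValue valueWithPoint:NSPointFromCGPoint(boxLayer.position)];
        move.toValue   = [NSValue valueWithPoint:NSPointFromCGPoint(end)];
        move.duration  = 0.5;
        boxLayer.position = end;
        [boxLayer addAnimation:move forKey:@"move"];
    }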

Why are OpenGL ES and cocos2D faster than Cocoa Touch / iOS frameworks itself?

I wonder: if cocos2D is built on top of iOS's frameworks, won't it be slightly slower than using the Cocoa framework directly? (Is cocos2D on top of OpenGL ES, which in turn is on top of the Cocoa Touch / iOS frameworks, including Core Animation and Quartz?)
However, I heard that OpenGL ES is actually faster than using Core Graphics, Core Animation, and Quartz.
So is OpenGL ES the fastest, cocos2D the second, and Core Animation the slowest? Does someone know why using OpenGL ES is faster than using Cocoa framework directly?
cocos2D is built on top of OpenGL. When creating a sprite in cocos2D, you are actually creating a 3D model and applying a texture to it. The 3D model is just a flat square and the camera is always looking straight at it which is why it all appears flat and 2D. But this is why you can do things like scaling and rotating sprites easily - all you are really doing is rotating the 2D square (well, two triangles really) or moving them closer or further away from the camera. But Cocos2D handles all that for you.
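For concreteness, this is roughly what that looks like from the cocos2d side (assuming the cocos2d-iphone 2.x API and a texture file name of your own); the position, rotation and scale below are just transforms applied to that textured quad:

    // Sketch, assuming cocos2d-iphone 2.x: a sprite is a textured quad (two triangles).
    CCSprite *ship = [CCSprite spriteWithFile:@"ship.png"];   // texture mapped onto a flat quad
    ship.position = ccp(240, 160);    // move the quad within the scene
    ship.rotation = 45.0f;            // rotate the quad
    ship.scale    = 2.0f;             // "closer to the camera": just a bigger quad on screen
    [self addChild:ship];             // self is assumed to be a CCLayer / CCNode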
OpenGL is designed from the start to pump out 3D graphics very, very quickly, so it is built to handle shoving points and triangles around. This is then accelerated by the 3D rendering hardware, which it can use specifically for this. As this is all it does, it can be heavily optimised for doing all the maths on the points that build up the objects and for mapping textures onto those objects. It doesn't have to worry about handling touches or other system things that Cocoa does.
Cocoa Touch doesn't use OpenGL. It may use some hardware acceleration, but it isn't designed for that; it's designed for creating 2D buttons and the like. What it does, it does well, but it has lots of layers to pass through to do what it needs to do, which makes it less efficient than something designed purely for graphics (OpenGL).
OpenGL is the fastest
cocos2D is slightly slower, but only because there are some wrappers to make your life easier. If you were to do the same thing yourself in OpenGL, you might get it faster, but at the cost of flexibility.
Core Animation is the slowest.
But they all have their uses and are excellent in their individual niche areas.

Vector art on iOS

We've now got 4 resolutions to support and my app needs at least 6 full-screen background images to be pretty. I don't want to break the bank on megabytes of images.
I see guides online about loading PDFs as images and about custom SVG libraries, but no discussion of the practicalities.
Here's the question: considering rendering speed and file size, what is the best way to use vector images in iOS? And in addition, are there any practical caching or other considerations one should make in real-world app development?
Something to consider for simple graphics, such as the type of thing used for backgrounds, etc., is just to render them at runtime using CG.
For example, in one of our apps, instead of including the typical repeating background tile image in all the required resolutions, we instead draw it once into a CGPatternRef, then convert it to a UIColor, at which point things become simple.
We still use graphic files for complex things, but for anything that's simple in nature, we just render it at runtime and cache the result, so we get resolution independence without gobs of image files. It's also made maintenance quite a bit easier.
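One way to approximate that approach (names here are illustrative, and this version renders the tile into a UIImage and a pattern UIColor rather than working with CGPatternRef directly) is to draw the tile once at the device's scale:

    // Sketch: render a background tile at runtime instead of shipping PNGs for
    // every resolution. drawTileInContext:size: is a hypothetical drawing routine.
    - (UIColor *)backgroundPatternColor
    {
        CGSize tileSize = CGSizeMake(32, 32);
        // A scale of 0 uses the main screen's scale, so the tile comes out
        // Retina-sharp without any extra image files.
        UIGraphicsBeginImageContextWithOptions(tileSize, NO, 0);
        [self drawTileInContext:UIGraphicsGetCurrentContext() size:tileSize];
        UIImage *tile = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        return [UIColor colorWithPatternImage:tile];
    }

The resulting color can then simply be assigned to a view's backgroundColor, and UIKit tiles it for you.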

Using Core Animation/CALayer for simple layered painting

I would like to create a custom NSView that takes a layered approach to painting. I imagine the majority of the layers would be the same width and height as the backing view.
Is it appropriate to use the Core Animation classes like CALayer for this task, even though I don't expect to need much animation? Is there a more appropriate approach?
To clarify, the view is not meant to be like a canvas in a Photoshop-like application. It is more of a data display that should allow for user interaction (selecting, moving, scrolling, etc.).
If it's display and layout you're after, I'd say that a CALayer-based architecture is a good choice. For the open source Core Plot framework, we construct all of our graphs and plot elements out of CALayers, and organize them in a regular hierarchy. CALayers are lightweight and use almost identical APIs between Mac and iPhone. They can even be made to respond to touch or mouse events.
For another example of a CALayer-based user interface, my iPhone application's entire equation entry interface is composed of CALayers, including the menu that slides up from below. Performance is slightly better than that of my previous UIView-based implementation, but the same code also works within my preliminary desktop version of the application.
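As a rough sketch of the layer-hosting setup such an architecture starts from (the single itemLayer here stands in for one of your display elements, and self is assumed to implement the CALayer delegate drawing method):

    #import <QuartzCore/QuartzCore.h>

    // Sketch: a layer-hosting NSView with one sublayer per display element.
    - (void)setUpLayersInView:(NSView *)hostView
    {
        CALayer *rootLayer = [CALayer layer];
        [hostView setLayer:rootLayer];     // for a layer-hosting view, set the layer first...
        [hostView setWantsLayer:YES];      // ...then turn on wantsLayer

        CALayer *itemLayer = [CALayer layer];
        itemLayer.frame = CGRectMake(20, 20, 120, 40);
        itemLayer.delegate = self;         // drawLayer:inContext: draws this element with Quartz
        [itemLayer setNeedsDisplay];
        [rootLayer addSublayer:itemLayer];
    }

    // Called by Core Animation to fill itemLayer's backing store.
    - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
    {
        CGContextSetRGBFillColor(ctx, 0.85, 0.85, 0.85, 1.0);
        CGContextFillRect(ctx, layer.bounds);
    }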
For a drawing program, I would imagine it would be important to hold a buffer of the bitmap data. The only issue with using a CALayer is that the contents property is a CGImageRef. To turn that back into a graphics context for doing further drawing can be a bit of a pain. You'd have to initialize a new context, draw the bitmap data into it, then do whatever drawing operations you wanted to do, and finally turn that back into a CGImageRef. You probably wouldn't be able to avoid doing a number of pretty large memory allocations, which is virtually guaranteed to slow your program way down.
I would consider holding an off-screen buffer for each layer. Take a look at the Quartz CGLayerRef object. I think it probably does what you want to do: it's an off-screen buffer that holds things you might want to draw repeatedly. You can also quickly get a CGContextRef whenever you need it so you can do additional drawing. And you can always use that CGContextRef with NSGraphicsContext if you want to use Cocoa drawing methods.
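A small sketch of that CGLayerRef pattern, assuming the view keeps a CGLayerRef ivar named _cachedLayer between draws:

    // Sketch: cache expensive drawing in a CGLayerRef and stamp it in drawRect:.
    - (void)drawRect:(NSRect)dirtyRect
    {
        CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];

        if (_cachedLayer == NULL) {
            // Create the off-screen buffer once, matched to the destination context.
            _cachedLayer = CGLayerCreateWithContext(ctx, NSSizeToCGSize(self.bounds.size), NULL);

            CGContextRef layerCtx = CGLayerGetContext(_cachedLayer);
            CGContextSetRGBFillColor(layerCtx, 0.2, 0.4, 0.8, 1.0);
            CGContextFillRect(layerCtx, CGRectMake(10, 10, 100, 60));   // the expensive drawing goes here
        }

        // Stamping the cached layer back is cheap compared to redrawing its contents.
        CGContextDrawLayerAtPoint(ctx, CGPointZero, _cachedLayer);
    }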