CCAnimate with sprite file names?

CCAnimate requires CCSpriteFrames, which in turn require a CCTexture2D.
Is it not possible to simply use CCAnimate by providing my file names, like anim1.png, anim2.png, anim3.png...?

Not directly.
If you are on a 0.99.* version, you can load the files into UIImages, create CCTexture2Ds using the initWithImage: method, and then create CCSpriteFrames from those.
If you are on version 1.0.0 or later, you can load the textures from files using the CCTextureCache singleton, then create CCSpriteFrames.
However, the whole point behind this API is that you can place all the frames of your animation into one image file, load it as a single texture, and then carve out the individual frames using the rect property/argument. This should also improve performance, since the graphics chip only has to load one texture and then perform a cheap clipping operation instead of loading multiple textures.
EDIT: Cocos2D has supported loading a CCSpriteFrame directly since version 1.1.
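For illustration, here is a minimal sketch of the CCTextureCache approach (cocos2d 1.0-style API), assuming files anim1.png through anim3.png exist in the bundle and that sprite is an existing CCSprite; both are assumptions, not part of the question:

NSMutableArray *frames = [NSMutableArray array];
for (int i = 1; i <= 3; i++) {
    // Load (or fetch from cache) each file as a texture, then wrap it
    // in a CCSpriteFrame covering the whole texture.
    NSString *file = [NSString stringWithFormat:@"anim%d.png", i];
    CCTexture2D *texture = [[CCTextureCache sharedTextureCache] addImage:file];
    CGRect rect = CGRectMake(0, 0, texture.contentSize.width, texture.contentSize.height);
    [frames addObject:[CCSpriteFrame frameWithTexture:texture rect:rect]];
}
CCAnimation *animation = [CCAnimation animationWithFrames:frames delay:0.1f];
[sprite runAction:[CCAnimate actionWithAnimation:animation]];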

Related

How to delete textures in OpenGL 2.x?

I followed this tutorial to create a basic game using OpenGL, and after profiling it I discovered that even after a sprite is removed, its textures are not released, creating a memory leak. I worked around the problem by creating a cache in the Sprite class, but for future reference I would like to know how to delete the texture itself. It is loaded with GLKTextureLoader.
// GLKTextureLoader only creates the texture; deleting it is up to you.
// The GLKTextureInfo name property is the OpenGL texture name.
GLuint index = self.textureInfo.name;
glDeleteTextures(1, &index);
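If the texture's lifetime should match the sprite's, one option is to delete it when the sprite is deallocated; a minimal sketch, assuming the sprite keeps its GLKTextureInfo in a textureInfo property and that a GL context is current on this thread:

- (void)dealloc {
    // Free the GL texture together with the owning sprite.
    GLuint name = self.textureInfo.name;
    if (name != 0) {
        glDeleteTextures(1, &name);
    }
}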

Recommended approach to load proprietary binary image file into NSImage?

I have a bunch of image files in a proprietary binary format that I want to load into NSImages. The format is not a simple bitmap, but rather a kind of RLE representation mixed with transparency and miscellaneous additional information.
In order to display one of these images in a Cocoa app, I need a way to parse the image file byte by byte and "calculate" a bitmap from it which I will then put into an NSImage.
What is a good approach for doing this in Objective-C/Cocoa?
The tasks of interpreting image data are handled by the image's representation object(s). To use a proprietary format, you have a few options: (a) create a custom representation class, (b) use NSCustomImageRep with a custom delegate, or (c) use a custom object to translate your image to a supported format, such as a raw bitmap.
If you choose to create a custom representation class, you will create a subclass of NSImageRep as described in Creating New Image Representation Classes. This basically requires that your class register itself and be able to draw the image data. In addition to this, you can override methods to return information about the image, and you will be able to instantiate your images using the normal NSImage methods. This method requires the most work.
Using NSCustomImageRep requires less work than creating a custom representation class. Your delegate object only needs to be able to draw the image at a fixed location. However, you cannot return other information about the image, and you will need to create the NSCustomImageRep object manually before creating the NSImage.
Translating the image into a different format is also simpler than creating a custom representation. It could be as simple as creating a blank NSImage of the proper size and drawing into it. Creating the image is still more complicated since you need to call your translation method, and this will affect efficiency (both future drawing time and memory usage) since you are changing formats, which could be good or bad. You will also lose any association between the image object and its source.
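As a rough illustration of option (b), here is a minimal sketch; MyRLEDecoder, its imageSize property, and drawImageRep: are hypothetical names, not part of AppKit:

// A delegate object that knows how to parse and draw the proprietary data.
@interface MyRLEDecoder : NSObject
@property (nonatomic, readonly) NSSize imageSize;
- (void)drawImageRep:(NSCustomImageRep *)rep; // decode the bytes, draw into the current context
@end

// Build the NSImage around the custom representation.
MyRLEDecoder *decoder = [[MyRLEDecoder alloc] init]; // would parse your file here
NSCustomImageRep *rep =
    [[NSCustomImageRep alloc] initWithDrawSelector:@selector(drawImageRep:)
                                          delegate:decoder];
[rep setSize:decoder.imageSize];

NSImage *image = [[NSImage alloc] initWithSize:rep.size];
[image addRepresentation:rep];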

How to create JPEG image on iOS from scratch

I'm trying to create an Objective-C class for my iPad application that can convert a PowerPoint file to a JPEG file.
To do this I have to dig into the pptx format to see how the file is structured, and then create an image from scratch in which I can say: this element goes there, this one here, this text there.
I have no idea how to do this. Is the best approach an existing iOS framework, or an additional library?
To me, the fastest way to visualize elements is OpenGL ES: you can use the mobile GPU for rendering, and there is CIImage for managing images.
Take a look at Quartz 2D, the drawing engine used as the main workhorse for 2D graphics on iOS. It gives you all the primitives for drawing shapes, fills, text and other objects you need to render the presentation.
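For instance, a minimal Quartz 2D / UIKit sketch that renders placeholder content into a bitmap context and encodes it as JPEG; the size, text, and positions are assumptions:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(1024, 768), YES, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// White slide background.
CGContextSetFillColorWithColor(ctx, [UIColor whiteColor].CGColor);
CGContextFillRect(ctx, CGRectMake(0, 0, 1024, 768));

// Place a piece of text where the pptx layout says it should go.
[@"Slide title" drawAtPoint:CGPointMake(40, 40)
             withAttributes:@{NSFontAttributeName: [UIFont boldSystemFontOfSize:36]}];

// Snapshot the context and encode it as JPEG data.
UIImage *slide = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *jpegData = UIImageJPEGRepresentation(slide, 0.8);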

About fast texture swapping in iOS OpenGL ES

I'm working on a buffer for loading very large (screen-sized) pictures onto a single surface.
The idea is to animate a lot of pictures (more than the video memory can store) frame by frame.
I have written code for such a buffer, but I have a big problem with the bitmap loading time.
My code works like this:
1. I load an array of local bitmap file paths.
2. I (think I) preload my bitmap data in memory: a thread stores a CGImageRef for each picture (40 for the moment) in an NSArray.
3. A second thread checks whether another NSArray is empty; if it is, I bind my CGImageRefs to video memory by creating textures (using a sharegroup for this). That array stores the names of 20 textures and is used directly by OpenGL to draw the surface; this array is my "buffer".
4. When I play my animation, I delete old textures from my "buffer", and the thread from step 3 loads a new texture.
It works, but it is really slow, and after a few seconds the animation stutters.
Can you help me optimize my code?
Depending on the device and iOS version, glTexImage is just slow.
With iOS 4, performance was improved so that you can expect decent speed on 2nd-generation devices too; and by decent I mean one or two texture uploads per frame...
Anyway:
Use glTexSubImage and reuse already-created texture IDs.
Also, when using glTex(Sub)Image, try to use a texture ID that wasn't used for rendering in that frame. I mean: add some kind of texture-ID double-buffering, as sketched below.
I assume you do all your GL work on the same thread; if not, change that.
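A minimal sketch of both suggestions combined, assuming a fixed frame size (width, height) and a pixels pointer holding the decoded RGBA bitmap; all the names here are illustrative:

static GLuint tex[2];   // double-buffered texture IDs
static int current = 0; // index of the texture drawn this frame

// One-time setup: create both textures and allocate storage once.
glGenTextures(2, tex);
for (int i = 0; i < 2; i++) {
    glBindTexture(GL_TEXTURE_2D, tex[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL); // allocate, no data yet
}

// Per frame: upload into the texture that was NOT rendered last frame,
// then swap so the freshly filled one is drawn.
int upload = 1 - current;
glBindTexture(GL_TEXTURE_2D, tex[upload]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
current = upload; // draw with tex[current] from now on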

Using Core Animation/CALayer for simple layered painting

I would like to create a custom NSView that takes a layered approach to painting. I imagine the majority of the layers would be the same width and height as the backing view.
Is it appropriate to use the Core Animation classes like CALayer for this task, even though I don't expect to need much animation? Is there a more appropriate approach?
To clarify, the view is not meant to be like a canvas in a Photoshop-like application. It is more of a data display that should allow for user interaction (selecting, moving, scrolling, etc.).
If it's display and layout you're after, I'd say that a CALayer-based architecture is a good choice. For the open source Core Plot framework, we construct all of our graphs and plot elements out of CALayers, and organize them in a regular hierarchy. CALayers are lightweight and use almost identical APIs between Mac and iPhone. They can even be made to respond to touch or mouse events.
For another example of a CALayer-based user interface, my iPhone application's entire equation entry interface is composed of CALayers, including the menu that slides up from below. Performance is slightly better than that of my previous UIView-based implementation, but the same code also works within my preliminary desktop version of the application.
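As a rough illustration, a minimal sketch of a layer-hosting NSView whose content is split into sublayers; the layer names are made up:

// Inside an NSView subclass: set the layer first, then wantsLayer,
// to get a layer-hosting (rather than layer-backed) view.
CALayer *rootLayer = [CALayer layer];
[self setLayer:rootLayer];
[self setWantsLayer:YES];

// One sublayer per logical drawing layer, each filling the view.
CALayer *dataLayer = [CALayer layer];
dataLayer.frame = NSRectToCGRect(self.bounds);
dataLayer.delegate = self; // drawn via drawLayer:inContext:
[rootLayer addSublayer:dataLayer];
[dataLayer setNeedsDisplay];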
For a drawing program, I would imagine it would be important to hold a buffer of the bitmap data. The only issue with using a CALayer is that the contents property is a CGImageRef. To turn that back into a graphics context for doing further drawing can be a bit of a pain. You'd have to initialize a new context, draw the bitmap data into it, then do whatever drawing operations you wanted to do, and finally turn that back into a CGImageRef. You probably wouldn't be able to avoid doing a number of pretty large memory allocations, which is virtually guaranteed to slow your program way down.
I would consider holding an off-screen buffer for each layer. Take a look at the Quartz CGLayerRef object. I think it probably does what you want to do: it's an off-screen buffer that holds things you might want to draw repeatedly. You can also quickly get a CGContextRef whenever you need it so you can do additional drawing. And you can always use that CGContextRef with NSGraphicsContext if you want to use Cocoa drawing methods.
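A minimal sketch of that idea, assuming viewContext is the CGContextRef of the destination view (for example, obtained from the current NSGraphicsContext); the sizes and drawing are placeholders:

// Create the off-screen buffer once, matched to the destination context.
CGLayerRef layer = CGLayerCreateWithContext(viewContext, CGSizeMake(400.0, 300.0), NULL);
CGContextRef layerCtx = CGLayerGetContext(layer);

// Draw into the buffer as often as needed; its contents persist.
CGContextSetRGBFillColor(layerCtx, 0.2, 0.4, 0.8, 1.0);
CGContextFillEllipseInRect(layerCtx, CGRectMake(20, 20, 100, 100));

// Composite the buffered drawing into the view.
CGContextDrawLayerAtPoint(viewContext, CGPointMake(0, 0), layer);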