I understand that I can't adapt Apple's GLPaint example to Retina due to a bug mentioned here: Problems displaying full-screen CAEAGLLayer on Retina iPad
Does anyone know of a good starting point for creating a basic OpenGL painter with brushes that will work on Retina?
Or, alternatively, a way to create an OpenGL painter without CAEAGLLayer?
I think the starting point can still be GLPaint; you just need to set the value of kEAGLDrawablePropertyRetainedBacking to NO and change the way you draw in your GL view.
GLPaint only renders to the GL buffer the strokes you draw by touching the screen, relying on kEAGLDrawablePropertyRetainedBacking to keep the full buffer content retained between frames. An alternative is to redraw the full content of the buffer at each step. This requires keeping track of all the strokes that were drawn and, in a sense, "replaying" them.
I suspect that in any serious painting app you would not rely on kEAGLDrawablePropertyRetainedBacking to retain the buffer content, due both to performance and to the need to manage your own data structure representing the painting (for anything like storing or sending it), and would therefore implement your own solution, as sketched below.
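A minimal Swift sketch of that "replay" idea, under the assumption that strokes are stored as point lists (all type and function names here are hypothetical, not from GLPaint; `renderLine` stands in for whatever point-sprite rendering call the app actually uses):

```swift
import CoreGraphics

// Hypothetical model: each stroke is just the points the finger passed through.
struct Stroke {
    var points: [CGPoint]
    var width: CGFloat
}

final class PaintingModel {
    private(set) var strokes: [Stroke] = []
    private var current: Stroke?

    func beginStroke(at p: CGPoint, width: CGFloat) {
        current = Stroke(points: [p], width: width)
    }
    func continueStroke(to p: CGPoint) {
        current?.points.append(p)
    }
    func endStroke() {
        if let s = current { strokes.append(s) }
        current = nil
    }

    // Called once per frame, after clearing the GL buffer:
    // replays every stroke segment through the app's real draw call.
    func replay(renderLine: (CGPoint, CGPoint, CGFloat) -> Void) {
        let all = strokes + (current.map { [$0] } ?? [])
        for stroke in all {
            for i in 1..<max(stroke.points.count, 1) {
                renderLine(stroke.points[i - 1], stroke.points[i], stroke.width)
            }
        }
    }
}
```

A nice side effect is that this stroke list is exactly the data structure you would want anyway for saving, undoing, or sending the painting.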
I'm developing an iPad application for 2D drawing.
I need a UIView frame size of 4000x4000, but if I set a frame that size the application crashes after a memory warning.
Right now I'm using a 1600x1000 frame, and the user can add new objects (rectangles) to it. The user can also translate the frame along the x and y axes using a pan gesture in order to see or add new objects.
Do you have any suggestions? How can I tackle this problem?
Thanks
Well, I would suggest what video games have used for a long time: a tiled LOD (level-of-detail) mechanism, where specific tiles are rendered at increasing resolution only as you zoom in toward them, while zoomed-out views render at lower resolution.
If the drawing is based on shapes (rectangles, points, lines, or anything else that can be represented by simple vector data), there is no reason to create a UIView for the entire size of the drawing. You just redraw the currently visible portion from the stored vector data as the user pans across the drawing; there is no persistent bitmapped representation of the drawing at all.
If the drawing uses bitmap data (i.e., a Photoshop type of app), then you'll likely need a mechanism that caches off-screen data to secondary storage and loads it back as the user pans across it. In either case, the UIView only needs to be as big as the physical screen.
Sorry, I don't have any iOS code examples for this; take it as a high-level abstraction and work from there.
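Still, as a rough Swift illustration of the vector-redraw idea (the rectangle-only model and the `panOffset` property are assumptions for this sketch, not from the question):

```swift
import UIKit

// The view is only screen-sized; the 4000x4000 "document" exists only as
// vector data. Only shapes intersecting the visible region are drawn.
final class VectorCanvasView: UIView {
    var shapes: [CGRect] = []          // the stored vector data (rectangles)
    var panOffset: CGPoint = .zero     // updated by the pan gesture handler

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        let visible = bounds.offsetBy(dx: panOffset.x, dy: panOffset.y)
        for shape in shapes where shape.intersects(visible) {
            // Translate from document coordinates to view coordinates.
            ctx.stroke(shape.offsetBy(dx: -panOffset.x, dy: -panOffset.y))
        }
    }
}
```

The pan gesture handler just updates `panOffset` and calls `setNeedsDisplay()`; no bitmap the size of the document ever exists.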
Sounds like you want to be using UIScrollView.
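For the 4000x4000 case above, a minimal sketch of that suggestion (the content view here is a placeholder for whatever the drawing view ends up being):

```swift
import UIKit

final class CanvasViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let scrollView = UIScrollView(frame: view.bounds)
        scrollView.autoresizingMask = [.flexibleWidth, .flexibleHeight]

        // The content is logically 4000x4000, but only the screen-sized
        // visible region is composited at any one time.
        scrollView.contentSize = CGSize(width: 4000, height: 4000)

        let content = UIView(frame: CGRect(x: 0, y: 0, width: 4000, height: 4000))
        scrollView.addSubview(content)
        view.addSubview(scrollView)
    }
}
```

Note that the content view itself still needs to draw lazily (vector redraw or tiling, as described above) rather than holding one giant bitmap, or the original memory problem comes right back.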
I'm making a game using UIView.
I use a large (8192x8192) UIView as the map area (the game is birds-eye view), with a UIImageView stretched across it displaying a grass texture.
This uses heaps of memory, doesn't run on older devices, and nearly crashes Xcode whenever I try to edit it...
Is there an alternative way to create an 8192x8192 map without it being this laggy?
If it's possible to tile your graphics, something involving CATiledLayer would probably be a good fit. CATiledLayer lets you provide only the images needed to display the currently visible area of the view (just like the Maps app does).
Here is some example code for displaying a large PDF.
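A hedged sketch of the CATiledLayer setup (the tile size and detail levels are arbitrary choices for illustration):

```swift
import UIKit

// A view backed by CATiledLayer: UIKit asks for tiles on demand, so only
// the visible portion of a huge map is ever rendered.
final class TiledMapView: UIView {
    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        if let tiled = layer as? CATiledLayer {
            tiled.tileSize = CGSize(width: 256, height: 256)
            tiled.levelsOfDetail = 4          // zoomed-out levels
            tiled.levelsOfDetailBias = 2      // zoomed-in levels
        }
    }
    required init?(coder: NSCoder) { fatalError("init(coder:) is not implemented") }

    // Called once per tile (on background threads), for that tile's rect only.
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.stroke(rect.insetBy(dx: 1, dy: 1))   // placeholder tile content
    }
}
```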
I'm trying to build a weather application for the iPad, but it seems I need some help with animation. Say I'm animating a radar: the radar source files are 10 GIF/JPEG pictures at 900x700 pixels. I've tried the UIImage animation technique from the tutorial here:
http://www.icodeblog.com/2009/07/24/iphone-programming-tutorial-animating-a-game-sprite/
but it seems that loading 10 images that big is too much for the iPad to handle, and it's crashing due to memory warnings. I'm researching other animation techniques but can't seem to find anything that will do this efficiently.
I've looked at others like Core Animation with sprites and Cocos2D with sprites. Can someone point me in the right direction for the best way to animate these big images? (Keep in mind that the images are dynamic and change often, so the sprites will have to be recreated on a server and fetched by the iPad before animating.) Thanks
OpenGL only creates textures with dimensions that are powers of 2. In the case of your images, that means 1024x1024, which at 32 bits per pixel is 4 MB of memory per image. Still, that shouldn't be a problem with the iPad.
First, investigate using Xcode's profiling tools to ensure the images aren't being repeatedly loaded into memory at each loop of the animation (likely by way of new objects that aren't sharing cached textures). That could solve your problem from the start.
Second, I recommend using Cocos2D, if only for its easy handling of textures and caching. Toss the images into a CCAnimation, wrap it in a CCAnimate action inside a CCRepeatForever, and run that on your sprite. When you're done, hit CCTextureCache to release the unused textures.
Third, lower your animation frame rate to 30 or less (if only for this animation). It may be the iPad, but you're making a weather app, not a video game.
Finally, reduce the size of your images. Justify it all you want, but a large radar animation will not sell your app. And even if a website already plays that animation beautifully, remember that a desktop has vastly more memory and power than any mobile device.
Try breaking the animation image into smaller parts and animating those instead, treating each component as a sprite. Better yet, use primarily code (Core Graphics) and draw your radar "by hand" instead of using images as if they were animated GIFs.
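As a hedged illustration of that last suggestion, a hand-drawn Core Graphics radar view might look something like this (the sweep model is an assumption; a CADisplayLink or timer elsewhere would advance `sweepAngle` and call `setNeedsDisplay()`):

```swift
import UIKit

// Renders a radar sweep with Core Graphics instead of animating large images.
final class RadarView: UIView {
    var sweepAngle: CGFloat = 0   // advanced externally each tick

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        let center = CGPoint(x: bounds.midX, y: bounds.midY)
        let radius = min(bounds.width, bounds.height) / 2

        // Range rings.
        ctx.setStrokeColor(UIColor.green.withAlphaComponent(0.4).cgColor)
        for i in 1...4 {
            let r = radius * CGFloat(i) / 4
            ctx.strokeEllipse(in: CGRect(x: center.x - r, y: center.y - r,
                                         width: 2 * r, height: 2 * r))
        }

        // The rotating sweep line.
        ctx.setStrokeColor(UIColor.green.cgColor)
        ctx.move(to: center)
        ctx.addLine(to: CGPoint(x: center.x + radius * cos(sweepAngle),
                                y: center.y + radius * sin(sweepAngle)))
        ctx.strokePath()
    }
}
```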
I'm working on an iPad app that has a few thousand particles the user can manipulate with touches. To produce interesting designs, I want drawing from one frame to persist into the next, creating a sort of "trails" effect. At the moment I do this by skipping glClear() each frame while trails are turned on, so each frame's drawing accumulates on top of the previous frame's. This works fine in the iPad Simulator, but when I run it on an actual device and turn trails on, the particle trails flicker as if something strange is happening with the buffers.
Is there a better way to produce trails, and why does this graphics problem only show up on the device?
Thanks!
glClear() is called between frames so that you can begin drawing the next one on a clean slate. With double- or triple-buffered rendering, the back buffer you get after presenting a frame is not guaranteed to contain the previous frame's contents (which is why this appears to work in the Simulator but flickers on the device), so you really do need to clear and fully redraw between frames. It's not good practice to keep accumulating into the buffer, as you can start producing artifacts, as you are noticing.
To produce the trailing effect, you would probably want to use additional particles: keep track of each particle's position or velocity over time, and draw extra particles along the trail, as in the sketch below.
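A small sketch of that idea, with the GL draw call abstracted behind a closure since the question doesn't show its rendering code (`drawPoint` and all other names here are hypothetical):

```swift
import Foundation

// Each particle remembers a short history of past positions; every frame
// we clear as usual and redraw the history with fading alpha.
struct TrailParticle {
    var position: (x: Float, y: Float)
    var velocity: (x: Float, y: Float)
    var history: [(x: Float, y: Float)] = []   // oldest first
    let maxTrailLength = 16

    mutating func step(dt: Float) {
        history.append(position)
        if history.count > maxTrailLength { history.removeFirst() }
        position.x += velocity.x * dt
        position.y += velocity.y * dt
    }
}

// `drawPoint` stands in for the app's real GL draw call.
func render(_ particles: [TrailParticle],
            drawPoint: (_ x: Float, _ y: Float, _ alpha: Float) -> Void) {
    // The real glClear(GL_COLOR_BUFFER_BIT) happens here, every frame.
    for p in particles {
        for (i, pos) in p.history.enumerated() {
            // Older trail points fade out toward zero alpha.
            let alpha = Float(i + 1) / Float(p.history.count)
            drawPoint(pos.x, pos.y, alpha * 0.5)
        }
        drawPoint(p.position.x, p.position.y, 1.0)
    }
}
```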
I would like to create a custom NSView that takes a layered approach to painting. I imagine the majority of the layers would be the same width and height as the backing view.
Is it appropriate to use the Core Animation classes like CALayer for this task, even though I don't expect to need much animation? Is there a more appropriate approach?
To clarify, the view is not meant to be like the canvas in a Photoshop-style application. It's more of a data display that should allow for user interaction (selecting, moving, scrolling, etc.).
If it's display and layout you're after, I'd say that a CALayer-based architecture is a good choice. For the open source Core Plot framework, we construct all of our graphs and plot elements out of CALayers, and organize them in a regular hierarchy. CALayers are lightweight and use almost identical APIs between Mac and iPhone. They can even be made to respond to touch or mouse events.
For another example of a CALayer-based user interface, my iPhone application's entire equation entry interface is composed of CALayers, including the menu that slides up from below. Performance is slightly better than that of my previous UIView-based implementation, but the same code also works within my preliminary desktop version of the application.
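A rough sketch of what the layer-hosted setup can look like on the Mac side (the class and sublayer names are illustrative, not from Core Plot):

```swift
import AppKit
import QuartzCore

// A layer-hosted NSView built from CALayer sublayers, in the spirit of the
// hierarchy described above.
final class DataDisplayView: NSView {
    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        layer = CALayer()      // assign the root layer first...
        wantsLayer = true      // ...then opt in to layer hosting
        layer?.backgroundColor = NSColor.windowBackgroundColor.cgColor

        // Each display element gets its own lightweight sublayer.
        let plotArea = CALayer()
        plotArea.frame = bounds.insetBy(dx: 40, dy: 40)
        plotArea.borderWidth = 1
        layer?.addSublayer(plotArea)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) is not implemented") }

    // Routing mouse events to layers, as mentioned above.
    override func mouseDown(with event: NSEvent) {
        let p = convert(event.locationInWindow, from: nil)
        if let hit = layer?.hitTest(p) {
            Swift.print("clicked layer:", hit)
        }
    }
}
```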
For a drawing program, I would imagine it would be important to hold a buffer of the bitmap data. The only issue with using a CALayer is that its contents property is a CGImageRef. Turning that back into a graphics context for further drawing can be a bit of a pain: you'd have to initialize a new context, draw the bitmap data into it, do whatever drawing operations you want, and finally turn the result back into a CGImageRef. You probably wouldn't be able to avoid a number of fairly large memory allocations, which is virtually guaranteed to slow your program way down.
I would consider holding an off-screen buffer for each layer. Take a look at the Quartz CGLayerRef type; I think it does what you want: it's an off-screen buffer that holds content you might want to draw repeatedly. You can also quickly get a CGContextRef whenever you need one for additional drawing, and you can use that CGContextRef with NSGraphicsContext if you want to use Cocoa drawing methods.
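A hedged sketch of that approach (the view and property names are illustrative): the off-screen CGLayer is created once against the view's context and then merely composited on subsequent draws.

```swift
import AppKit

final class PaintView: NSView {
    private var strokeLayer: CGLayer?

    override func draw(_ dirtyRect: NSRect) {
        guard let ctx = NSGraphicsContext.current?.cgContext else { return }

        if strokeLayer == nil,
           let newLayer = CGLayer(ctx, size: bounds.size, auxiliaryInfo: nil),
           let layerCtx = newLayer.context {
            // Render into the off-screen buffer once; the results are
            // retained inside the CGLayer until we draw into it again.
            layerCtx.setStrokeColor(NSColor.systemBlue.cgColor)
            layerCtx.stroke(CGRect(x: 20, y: 20, width: 100, height: 60))
            strokeLayer = newLayer
        }

        // Compositing the cached buffer is all the per-frame work needed.
        if let layer = strokeLayer {
            ctx.draw(layer, at: .zero)
        }
    }
}
```

When the user paints more, you would grab `strokeLayer?.context`, draw into it, and call `setNeedsDisplay(_:)`; only the cheap composite happens on every redraw.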