I'm working on a buffer for loading very large pictures (screen-sized) onto a single surface.
The idea is to animate a lot of pictures (more than video memory can store) frame by frame.
I have written code for the buffer, but I have a big problem with bitmap loading time.
My code works like this:
1. I load an array of local bitmap file paths.
2. I (think I) preload my bitmap data into memory: a thread stores a CGImageRef in an NSArray for each of my pictures (40 for the moment).
3. In a second thread, the code checks another NSArray to see whether it is empty; if it is, I bind my CGImageRefs to video memory by creating textures (using a sharegroup for this).
4. That second array stores 20 texture names and is used directly by OpenGL to draw the surface; this array is my "buffer".
5. When I play my animation, I delete old textures from my "buffer" and the thread from step 3 loads a new texture.
It works, but it's really slow, and after a few seconds the animation lags.
Can you help me optimise my code?
Depending on device and iOS version, glTexImage is just slow.
With iOS 4 performance was improved so that you can expect decent speed on 2nd-gen devices too, and by decent I mean one or two texture uploads per frame...
Anyway:
Use glTexSubImage and reuse already-created texture IDs.
Also, when using glTex(Sub)Image, try to use a texture ID that wasn't used for rendering in that frame. I mean: add some kind of texture-ID double buffering.
I assume you do all your GL work in the same thread; if not, change that.
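A minimal sketch of the texture-ID double buffering idea, assuming two pre-allocated RGBA textures of identical size; all names here are illustrative, not taken from your code:

// Allocate both textures once; per-frame updates go through glTexSubImage2D only.
static GLuint texIDs[2];
static int writeIndex = 0;

void createTexturePair(GLsizei width, GLsizei height) {
    glGenTextures(2, texIDs);
    for (int i = 0; i < 2; i++) {
        glBindTexture(GL_TEXTURE_2D, texIDs[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL); // allocate storage only, no data yet
    }
}

// Upload into the texture that was NOT rendered last frame, then flip the index.
GLuint uploadNextFrame(const GLvoid *pixels, GLsizei width, GLsizei height) {
    glBindTexture(GL_TEXTURE_2D, texIDs[writeIndex]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    GLuint ready = texIDs[writeIndex];
    writeIndex = 1 - writeIndex;
    return ready; // draw with this one; the next upload targets the other
}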
TL;DR: From within my MTKView's delegate drawInMTKView: method, part of my rendering pass involves adding an MPSImageBilinearScale performance shader and zero or more MTLBlitCommandEncoder requests for generateMipmapsForTexture. Is that a smart thing to do from within drawInMTKView:, which happens on the main thread? Do either of them block the main thread while running or are they only being encoded and then executed later and entirely on the GPU?
Longer Version:
I'm playing around with Metal within the context of an imaging application. I use Core Image to load an image and apply filters. The output image is displayed as a 2D plane in a Metal view with a single texture. This works, but to improve performance I wanted to experiment with Core Image's ability to render out smaller tiles at a time. Each tile is rendered into its own IOSurface.
On each render pass, I check if there are any tiles that have been recently rendered. For each rendered tile (which is now an IOSurface), I create a Metal texture from a CVMetalTextureCache that is backed by the surface.
I then use a scaling MPS to copy from the tile-texture into the "master" texture. If a tile was copied over, then I issue a blit command to generate the mipmaps on the master texture.
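For reference, a hedged sketch of how that pass might be encoded; the key point is that these calls only record commands, which the GPU executes after commit. Names like self.commandQueue, tileTexture and masterTexture are placeholders for the objects described above:

- (void)drawInMTKView:(MTKView *)view {
    // Encoding happens on the calling thread; execution happens later on the GPU.
    id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];

    // Copy/scale the freshly rendered tile into the master texture.
    MPSImageBilinearScale *scaler =
        [[MPSImageBilinearScale alloc] initWithDevice:view.device];
    [scaler encodeToCommandBuffer:commandBuffer
                    sourceTexture:tileTexture
               destinationTexture:masterTexture];

    // Regenerate mipmaps for the master texture, entirely on the GPU.
    id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
    [blit generateMipmapsForTexture:masterTexture];
    [blit endEncoding];

    // ... encode the usual render pass that samples masterTexture ...

    [commandBuffer presentDrawable:view.currentDrawable];
    [commandBuffer commit]; // returns immediately; the work runs asynchronously
}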
What I'm seeing is that if my master texture is quite large, then generating the mipmaps can take "a bit of time". The same is true if I have a lot of tiles. It appears this is blocking the main thread, because my FPS drops significantly. (The MTKView is running at the standard 60fps.)
If I play around with tile sizes, then I can improve performance in some areas but decrease it in others. For example, increasing the tile size that Core Image renders creates fewer tiles, and thus fewer mipmap-generation and blit calls, but at the cost of Core Image taking longer to render a region.
If I decrease the size of my "master" texture, then mipmap generation goes faster, since only the dirty textures are updated, but there appears to be a lower bound on how small I should make the master texture, because if I make it too small, then I need to pass a large number of textures to the fragment shader. (And it looks like that limit might be 128?)
What's not entirely clear to me is how much of this I can move off the main thread while still using MTKView. If part of the rendering pass is going to block the main thread, then I'd prefer to move it to a background thread so that UI elements (like sliders and checkboxes) remain fully responsive.
Or maybe this isn't the right strategy in the first place? Is there a better way to display really large images in Metal other than tiling? (i.e.: Images larger than Metal's texture size limit of 16384?)
I'm developing an iPad application for 2D drawing.
I need a UIView frame of size 4000x4000, but if I set a frame that size the application crashes after a memory warning.
Right now I'm using a 1600x1000 frame, and the user can add new objects (rectangles) to it. The user can also translate the frame along the x and y axes with a pan gesture, in order to see or add new objects.
Do you have any suggestions? How can I tackle this problem?
Thanks
Well, I would suggest what video games have used for a long time: a tiled LOD mechanism, where tiles are rendered at increasing resolution only as you zoom in toward them, and at a lower resolution when zoomed out.
If the drawing is based on shapes (rectangles, points, lines, or anything that can be represented by simple vector data) there is no reason to create a UIView the size of the entire drawing. You just redraw the currently visible region from the stored vector data as the user pans across the drawing. There is no persistent bitmapped representation of the drawing.
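A rough sketch of that approach, assuming the shapes are stored as CGRects; all the names here are invented:

@interface DrawingView : UIView
@property (nonatomic, strong) NSMutableArray *shapes;  // NSValue-wrapped CGRects
@property (nonatomic, assign) CGPoint panOffset;       // updated by the pan gesture
@end

@implementation DrawingView
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    for (NSValue *value in self.shapes) {
        // Shift each shape by the current pan, then draw only what is visible.
        CGRect shape = CGRectOffset([value CGRectValue],
                                    -self.panOffset.x, -self.panOffset.y);
        if (CGRectIntersectsRect(shape, rect)) {
            CGContextStrokeRect(ctx, shape);
        }
    }
}
@end

When the pan gesture fires, update panOffset and call setNeedsDisplay; the view itself never needs to grow beyond the screen.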
If using bitmap data for drawing (i.e. a Photoshop type of app) then you'll likely need to use a mechanism that caches off-screen data into secondary storage and loads it back onto the screen as the user pans across it. In either case, the UIView only needs to be as big as the physical screen size.
Take this as a high-level abstraction and work from there.
Sounds like you want to be using UIScrollView.
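A minimal sketch of that (contentView being whatever view draws your shapes); note that the scroll view only handles panning, so contentView still needs to draw lazily to stay within memory:

UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
scrollView.contentSize = CGSizeMake(4000, 4000); // the virtual canvas size
[scrollView addSubview:contentView];
[self.view addSubview:scrollView];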
I have many plists in my game, one for each level.
I am using this to unload the previous frame cache:
for (int i = 1; i < stage; i++)
    [[CCSpriteFrameCache sharedSpriteFrameCache] removeSpriteFramesFromFile:[NSString stringWithFormat:@"candys%i.plist", i]];
but after a while it seems the game becomes a little slower.
I am also loading images in real time, like this:
sprite = [CCSprite spriteWithSpriteFrameName:[NSString stringWithFormat:@"candy%i.png", 1]];
where candy1 is a sprite in a spritesheet in the cache, and it is being loaded
in REAL time, meaning many times a second.
Can this be bad? Does it strain memory to get the sprite from the spritesheet many times a second? Do I have to pre-define it?
Many thanks.
You don't want to load/unload individual sprite frames. A sprite frame references a texture. Usually this will be a texture atlas which many different sprite frames use. The sprite frame itself is maybe 16 Bytes of data. The texture may be up to 16 Megabytes.
So unless you remove the entire texture atlas and all the associated sprite frames, all you'll be getting is reduced performance because you're frequently deallocating and loading sprite frames. If you do that multiple times per second you're wasting a lot of time just to load/unload sprite frames.
Rule of thumb: load your entire scene up front, keep everything in memory until scene ends. Only if the entire scene doesn't fit into memory at once should you consider unloading/reloading of objects and data.
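A hedged sketch of that rule using cocos2d's caches (the cache methods are real cocos2d 1.x API; the scene structure and file names are borrowed from the question):

- (void)onEnter {
    [super onEnter];
    // Load the whole atlas (texture + sprite frames) once, before gameplay starts.
    [[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"candys1.plist"];
}

- (CCSprite *)spawnCandy {
    // Cheap: this is only a lookup into already-loaded frames, no disk I/O.
    return [CCSprite spriteWithSpriteFrameName:@"candy1.png"];
}

- (void)onExit {
    // Unload the entire atlas only when the scene is done with it.
    [[CCSpriteFrameCache sharedSpriteFrameCache] removeSpriteFramesFromFile:@"candys1.plist"];
    [[CCTextureCache sharedTextureCache] removeUnusedTextures];
    [super onExit];
}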
I used to animate my CCSprites by iterating through 30 image files (rather big ones), changing the CCSprite's texture to each image file in turn.
Someone told me that was not efficient and that I should use spritesheets instead. But can I ask why exactly this is not efficient?
There are two parts to this:
Memory. OpenGL ES requires textures to have power-of-two widths and heights, e.g. 64x128, 256x1024, 512x512, etc. If an image doesn't comply, Cocos2D will automatically resize it to fit those dimensions by padding it with extra transparent space. With each successive image loaded in, you waste more and more space. A sprite sheet already has all the images tightly packed, which reduces wastage.
Speed. Related to the above, it takes time to load an image and resize it. By paying the 'load' cost only once, you speed the entire process up.
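For illustration, here's roughly what the sprite-sheet version looks like in cocos2d 1.x (the file and frame names are made up); the atlas texture is loaded once and every frame is just a cache lookup:

// Load the atlas (texture + frames) a single time.
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"animation.plist"];

NSMutableArray *frames = [NSMutableArray arrayWithCapacity:30];
for (int i = 1; i <= 30; i++) {
    [frames addObject:[[CCSpriteFrameCache sharedSpriteFrameCache]
        spriteFrameByName:[NSString stringWithFormat:@"frame%d.png", i]]];
}
// One texture for the whole animation, no per-frame file loading or resizing.
CCAnimation *animation = [CCAnimation animationWithFrames:frames delay:1.0f/30.0f];
[sprite runAction:[CCAnimate actionWithAnimation:animation]];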
In my OpenGLES 2 application (on an SGX535 on Android 2.3, not that it matters), I've got a large texture that I need to make frequent small updates to. I set this up as a pair of FBOs, where I render updates to the back buffer, then render the entire back buffer as a texture to the front buffer to "swap" them. The front buffer is then used elsewhere in the scene as a texture.
The updates are sometimes solid color sub-rectangles, but most of the time, the updates are raw image data, in the same format as the texture, e.g., new image data is coming in as RGB565, and the framebuffer objects are backed by RGB565 textures.
Using glTexSubImage2D() is slow, as you might expect, particularly on a deferred renderer like the SGX. Not only that, using glTexSubImage2D on the back FBO eventually causes the app to crash somewhere in the SGX driver.
I tried creating new texture objects for each sub-rectangle, calling glTexImage2D to initialize them, then render them to the back buffer as textured quads. I preserved the texture objects for two FBO buffer swaps before deleting them, but apparently that wasn't long enough, because when the texture IDs were re-used, they retained the dimensions of the old texture.
Instead, I'm currently taking the entire buffer of raw image data and converting it to an array of structs of vertices and colors, like this:
struct rawPoint {
    GLfloat  x;  // point position
    GLfloat  y;
    GLclampf r;  // color, expanded from RGB565
    GLclampf g;
    GLclampf b;
};
I can then render this array to the back buffer using GL_POINTS. For a buffer of RGB565 data, this means allocating a buffer literally 10x bigger than the original data, but it's actually faster than using glTexSubImage2D()!
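For reference, submitting that array looks roughly like this (positionLoc, colorLoc, points, and pointCount are assumed to come from the surrounding program):

// Client-side arrays pointing into the rawPoint structs, interleaved by stride.
glVertexAttribPointer(positionLoc, 2, GL_FLOAT, GL_FALSE,
                      sizeof(struct rawPoint), &points[0].x);
glVertexAttribPointer(colorLoc, 3, GL_FLOAT, GL_FALSE,
                      sizeof(struct rawPoint), &points[0].r);
glEnableVertexAttribArray(positionLoc);
glEnableVertexAttribArray(colorLoc);
glDrawArrays(GL_POINTS, 0, pointCount);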
I can't keep the vertices or the colors in their native unsigned short format, because OpenGL ES 2 only takes floats in vertex attributes and shader uniforms. I have to submit every pixel as a separate set of coordinates, because I don't have geometry shaders. Finally, I can't use the EGL_KHR_gl_texture_2D_image extension, since my platform doesn't support it!
There must be a better way to do this! I'm burning tons of CPU cycles just to convert image data into a wasteful floating point color format just so the GPU can convert it back into the format it started with.
Would I be better off using EGL Pbuffers? I'm not excited by that prospect, since it requires context switching, and I'm not even sure it would let me write directly to the image buffer.
I'm kind of new to graphics, so take this with a big grain of salt.
Create a native buffer the size of your texture.
Use the native buffer to create an EGL image:
// 'buffer' is the native buffer from the previous step; 'attr' is an attribute list.
EGLImageKHR image = eglCreateImageKHR(eglGetCurrentDisplay(),
                                      eglGetCurrentContext(),
                                      EGL_GL_TEXTURE_2D_KHR,
                                      buffer,
                                      attr);
I know this uses EGL_GL_TEXTURE_2D_KHR. Are you sure your platform doesn't support this? I am developing on a platform that uses SGX535 as well, and mine seems to support it.
After that, bind the texture as usual. You can memcpy into your native buffer to update sub-rectangles very quickly, I believe.
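If it helps, the bind step via the GL_OES_EGL_image extension looks roughly like this ('image' being the EGLImageKHR created above):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// The EGL image becomes the texture's storage; subsequent writes to the
// native buffer are visible to the texture without any glTexSubImage2D call.
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);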
I realize I'm answering a month old question, but if you need to see some more code or something, let me know.