Can I save memory by reducing image size? - objective-c

I have a lot of images used as backgrounds and other decorations.
If I reduce their resolution and size, will the memory used by the whole application go down, or will I see no advantage?
Thank you.

If the images are reduced in size they will use less memory once loaded. One thing you could do is use sprite sheets instead of separate images, though using sprites in Objective-C is a bit more awkward than it is on the web.
You could also try to replace images with custom drawing code in UIView subclasses.
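For instance, here is a minimal sketch of that idea, assuming a hypothetical GradientBackgroundView that replaces a full-screen background image with a drawn gradient (the class name and colours are made up, not from the question):

    #import <UIKit/UIKit.h>

    // Hypothetical view that draws its background instead of loading an image.
    @interface GradientBackgroundView : UIView
    @end

    @implementation GradientBackgroundView
    - (void)drawRect:(CGRect)rect
    {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGFloat colors[8] = { 0.2, 0.4, 0.8, 1.0,    // top colour (RGBA)
                              0.0, 0.1, 0.3, 1.0 };  // bottom colour (RGBA)
        CGGradientRef gradient = CGGradientCreateWithColorComponents(space, colors, NULL, 2);
        CGContextDrawLinearGradient(ctx, gradient,
                                    CGPointMake(0, 0),
                                    CGPointMake(0, self.bounds.size.height),
                                    0);
        CGGradientRelease(gradient);
        CGColorSpaceRelease(space);
    }
    @end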

Yes, if you reduce image resolution, there will be less data to keep in memory and total memory usage will go down. But remember the basic rule of optimization and measure first, so that you know for sure that reducing image size will really make a difference given the rest of your application. (As an example, if you have serious leaks in your application, there’s not much to be gained by resampling images.)
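If you do decide to resample, a minimal sketch of the idea (UIGraphicsBeginImageContextWithOptions is the standard UIKit call; the helper name is invented) might look like this:

    // Hypothetical helper: redraw a UIImage at a smaller size so the decoded
    // bitmap kept in memory shrinks with the square of the scale factor.
    UIImage *ScaledImage(UIImage *original, CGFloat scale)
    {
        CGSize newSize = CGSizeMake(original.size.width * scale,
                                    original.size.height * scale);
        UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
        [original drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
        UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return scaled;
    }

As a rough yardstick, a 1024x1024 image decoded at 4 bytes per pixel occupies about 4MB; at half the dimensions it is about 1MB.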

Related

Non power of two textures and memory consumption optimization

I read somewhere that the XNA framework upscales a texture to the nearest power-of-two size and then sends that to VRAM, which, provided that's how it really works, might not be efficient when loading many small (in my case 150×150) textures, which essentially waste memory with unused texture data resulting from upscaling.
So is there some automatic optimization, or should I make my own implementation of it, like loading all textures, figuring out where the "upscaled" space is big enough to hold some other texture and place it there, remembering sprite positions, thus using one texture instead of two (or more)?
It isn't always convenient to do this manually for each texture (placing many small sprites in a single texture), because the result is harder to work with later (essentially it becomes less human-oriented), and not every sprite will be needed in every level of a game, so the sprites might be better grouped in different combinations; ideally it should be done automatically.
There are tools available to create what are known as "sprite sheets" or "texture atlases". This XNA sample does this for you as part of a content pipeline extension.
Note that the padding of textures only happens on devices that do not support non-power-of-two textures. Windows Phone, for example. Modern GPUs won't waste the RAM. However this is still a useful optimisation to allow you to merge batches of sprites (see this answer for details).

Does CGBitmapContextCreate() have a size limit?

I want to make an extremely large bitmap (250,000 pixels on each side, to be eventually written out as BigTIFF). I don't see a memory size or dimensional limit anywhere in the docs, can Core Graphics handle it?
CG is not designed for that kind of workload.
(I'd be surprised if you found any general-purpose graphics framework that is, frankly. If you're pushing images that big, you're going to have to write your own code to get anything done in a reasonable amount of time.)
In my experience, images started to fail once dimensions got over 32767 or so. Not in any organized way, just crashes and hard-to-repro failures; certain parts of the API would work, others wouldn't. Things may be better in 64-bit but I wouldn't count on it.
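If you only need tiles of a more modest size (for example to feed your own BigTIFF writer piece by piece), a rough sketch of allocating one manageable Core Graphics context per tile, staying well below the ~32767 px range mentioned above, could look like this (the tile size and helper name are assumptions):

    #import <CoreGraphics/CoreGraphics.h>

    // Create a bitmap context for one tile of the huge image; 4096 x 4096 at
    // 4 bytes per pixel is about 64 MB, which Core Graphics handles comfortably.
    static CGContextRef CreateTileContext(size_t tileSize)
    {
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, tileSize, tileSize,
                                                 8, tileSize * 4, space,
                                                 kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);
        return ctx;
    }

    // The caller would loop over the 250,000 x 250,000 canvas in tileSize steps,
    // draw each tile into a context like this, and append the pixels to the TIFF.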

Best cache size for iOS apps

I'm currently developing an application that loads lots of images from the internet and saves them locally (I'm using SDURLCache). However, old images have to get removed from the disk again at some point, so I was wondering what the best cache size is.
The advantage of a big cache is obviously that more images get saved which leads to better UX.
The disadvantage is that images need a lot of space and the user will run out of disk space faster. The size I am thinking of is 20MB. That seems quite big to me though, so I'm asking what your opinion is.
The best way to decide on an appropriate cache size is to test. Run the app under Instruments to measure both performance and battery usage. Keep increasing the cache size until you can't discern a difference in performance. That's the largest size you'd need, at least under the test conditions. Once you've established that size, reduce the size until performance is just barely acceptable to determine the smallest acceptable size.
The right size is somewhere between those two sizes, depending on what you think is important. If you can't determine a right size, then either pick a size or add a slider to the app's settings to let the user decide. (I'd avoid making it user-adjustable if you can -- users shouldn't have to think about such things.)
Considering that the smallest iDevices have 8GB of storage, I don't think a 20MB cache is too big, especially if it significantly improves the performance of the app. Also, keep in mind the huge advantage a network cache can have for battery life, since network usage is very expensive in battery time.
Determining the ideal size, however, is hard without some more information. How often is the same picture accessed? How large is each picture (i.e. how many pictures can 20MB hold)? How often will images need to be removed from the cache to add new ones?
If you are constantly changing the images in the cache, it could actually have an adverse effect on the battery life due to the increased disk usage.
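For reference, installing SDURLCache with a 20MB disk cache looks roughly like this (assuming its initializer mirrors NSURLCache's initWithMemoryCapacity:diskCapacity:diskPath:; the capacities here are placeholders to be tuned against your own measurements):

    // Install a shared cache: 1MB in memory, 20MB on disk (sizes are assumptions).
    SDURLCache *urlCache = [[SDURLCache alloc] initWithMemoryCapacity:1024*1024
                                                          diskCapacity:1024*1024*20
                                                              diskPath:[SDURLCache defaultCachePath]];
    [NSURLCache setSharedURLCache:urlCache];
    [urlCache release];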

Is there any way to load a bunch of images from Resources?

Hey guys, I'm running into a little problem here. I am getting a Level 1 warning in my debug console.
So I think the best solution is to change the way I load all these images from Resources, to stop the app from crashing like it currently does.
So, what's the best way to do it?
Thanks!
There are several strategies to reduce memory usage if you're working with lots of images. The warning you're getting doesn't necessarily mean that main memory is running out; you could also be running low on video RAM.
Reduce the size of the images before adding them to your project, by scaling them down and/or compressing the image data.
Load only the images you need at a particular time - avoid trying to keep all images in memory.
Load images using [UIImage imageWithContentsOfFile:] rather than [UIImage imageNamed:] (the latter caches images for the lifetime of the app, the former is more "lazy") - see the sketch after this list.
Be aggressive in releasing images - don't wait for autorelease to kick in, but send [image release] yourself as soon as you're done with an image.
Simplify other aspects of your code that use video ram (e.g. remove layer transparency, remove views that aren't currently visible, etc.)
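A small sketch of the imageWithContentsOfFile: point above (the file name is just an example):

    // Loaded from the bundle on demand; not retained by UIKit's internal cache,
    // so the memory comes back as soon as the image is released.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"background" ofType:@"png"];
    UIImage *background = [UIImage imageWithContentsOfFile:path];

    // By contrast, [UIImage imageNamed:@"background.png"] keeps the decoded image
    // cached for the lifetime of the app, which adds up quickly with many images.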
Your question needs more details I think to get a clear answer.
Anyway, the best solution not to get a memory warning while loading "a bunch of images"...
is...
Not loading these images! Only load required images "on demand".
For example, if you have images in a table view, make cells load their images only when they are visible.
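A rough sketch of that idea (the imagePaths array and the cell layout are assumptions, not part of the original question):

    - (UITableViewCell *)tableView:(UITableView *)tableView
             cellForRowAtIndexPath:(NSIndexPath *)indexPath
    {
        static NSString *CellID = @"ImageCell";
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellID];
        if (cell == nil) {
            cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                           reuseIdentifier:CellID] autorelease];
        }
        // The image is loaded only when its row scrolls into view, and the previous
        // image is dropped when the cell is reused.
        NSString *path = [self.imagePaths objectAtIndex:indexPath.row];
        cell.imageView.image = [UIImage imageWithContentsOfFile:path];
        return cell;
    }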

Planning a 2D tile engine - Performance concerns

As the title says, I'm fleshing out a design for a 2D platformer engine. It's still in the design stage, but I'm worried that I'll be running into issues with the renderer, and I want to avoid them if they will be a concern.
I'm using SDL for my base library, and the game will be set up to use a single large array of Uint16 to hold the tiles. These index into a second array of "tile definitions" that are used by all parts of the engine, from collision handling to the graphics routine, which is my biggest concern.
The graphics engine is designed to run at a 640x480 resolution, with 32x32 tiles. There are 21x16 tiles drawn per layer per frame (to handle the extra tile that shows up when scrolling), and there are up to four layers that can be drawn. Layers are simply separate tile arrays, but the tile definition array is common to all four layers.
What I'm worried about is that I want to be able to take advantage of transparencies and animated tiles with this engine, and as I'm not too familiar with designs I'm worried that my current solution is going to be too inefficient to work well.
My target FPS is a flat 60 frames per second, and with all four layers being drawn, I'm looking at 21x16x4x60 = 80,640 separate 32x32px tiles needing to be drawn every second, plus however many odd-sized blits are needed for sprites, and this seems just a little excessive. So, is there a better way to approach rendering the tilemap setup I have? I'm looking towards possibilities of using hardware acceleration to draw the tilemaps, if it will help to improve performance much. I also want to hopefully be able to run this game well on slightly older computers as well.
If I'm looking for too much, then I don't think that reducing the engine's capabilities is out of the question.
I think the thing that will be an issue is the sheer number of draw calls, rather than the total "fill rate" of all the pixels you are drawing. Remember - that is over 80,000 calls per second that you must make. I think your biggest improvement will be to batch these together somehow.
One strategy to reduce the fill rate of the tiles and layers would be to composite static areas together. For example, if you know an area doesn't need updating, it can be cached. A lot depends on whether the layers are scrolled independently (parallax style).
Also, have a look on Google for "dirty rectangles" and see if any schemes may fit your needs.
Personally, I would just try it and see. This probably won't affect your overall game design, and if you have good separation between logic and presentation, you can optimise the tile drawing til the cows come home.
Make sure to use alpha transparency only on tiles that actually use alpha, and skip drawing blank tiles. Make sure the tile surface color depth matches the screen color depth when possible (not really an option for tiles with an alpha channel), and store tiles in video memory, so sdl will use hardware acceleration when it can. Color key transparency will be faster than having a full alpha channel, for simple tiles where partial transparency or blending antialiased edges with the background aren't necessary.
On a 500MHz system you'll get about 6.8 CPU cycles per pixel per layer, or 27 per screen pixel, which (I believe) isn't going to be enough if you have full alpha channels on every tile of every layer, but should be fine if you take shortcuts like those mentioned where possible.
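A sketch of those shortcuts with the SDL 1.2 surface API (written as plain C, which Objective-C compiles directly; the helper name and the magenta colour key are assumptions):

    /* Convert a tile to the screen's pixel format and use a colour key instead
       of a full alpha channel, so SDL can take its fastest blit path. */
    SDL_Surface *LoadTile(const char *path)
    {
        SDL_Surface *raw = SDL_LoadBMP(path);   /* or IMG_Load() from SDL_image */
        if (!raw) return NULL;

        /* Magenta (255,0,255) marks transparent pixels; RLE acceleration helps
           runs of transparent and opaque pixels blit faster. */
        SDL_SetColorKey(raw, SDL_SRCCOLORKEY | SDL_RLEACCEL,
                        SDL_MapRGB(raw->format, 255, 0, 255));

        /* Match the display's colour depth so no per-blit conversion is needed. */
        SDL_Surface *tile = SDL_DisplayFormat(raw);
        SDL_FreeSurface(raw);
        return tile;
    }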
I agree with Kombuwa. If this is just a simple tile-based 2D game, you really ought to lower the standards a bit as this is not Crysis. 30FPS is very smooth (research Command & Conquer 3 which is limited to 30FPS). Even still, I had written a remote desktop viewer that ran at 14FPS (1900 x 1200) using GDI+ and it was still pretty smooth. I think that for your 2D game you'll probably be okay, especially using SDL.
Can you just buffer each complete layer into a surface the size of its view plus one extra tile on each of the four edges (if you scroll in both directions), then, when scrolling, reuse that buffer by shifting it over and drawing only the newly exposed column or row?
This would reduce a lot of needless redrawing.
Additionally, if you want 60fps, you can look up ways to create frame-skip methods for slower systems, skipping every other or every third draw phase.
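A minimal sketch of that buffering idea in SDL (plain C; the 672x512 buffer size and the function signature are assumptions based on the 640x480 screen and 32x32 tiles described above):

    /* Each layer is pre-rendered into a buffer one tile wider and taller than the
       screen (672 x 512). Scrolling just moves the source rectangle; only the
       newly exposed row or column of tiles has to be redrawn into the buffer. */
    void BlitLayer(SDL_Surface *layerBuffer, SDL_Surface *screen,
                   int scrollX, int scrollY)
    {
        SDL_Rect src;
        src.x = scrollX % 32;   /* sub-tile offset within the buffer */
        src.y = scrollY % 32;
        src.w = 640;
        src.h = 480;
        SDL_BlitSurface(layerBuffer, &src, screen, NULL);
    }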
I think you will be pleasantly surprised by how many of these tiles you can draw a second. Modern graphics hardware can fill a 1600x1200 framebuffer numerous times per frame at 60 fps, so your 640x480 framebuffer will be no problem. Try it and see what you get.
You should definitely take advantage of hardware acceleration. This will give you 1000x performance for very little effort on your part.
If you do find you need to optimise, then the simplest way is to only redraw the areas of the screen that have changed since the last frame. Sounds like you would need to know about any animating tiles, and any tiles that have changed state each frame. Depending on the game, this can be anywhere from no benefit at all, to a massive saving - it really depends on how much of the screen changes each frame.
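If you go that route, a rough dirty-rectangle sketch with SDL 1.2 might look like this (the helper names and the worst-case count are assumptions derived from the 21x16x4 tile figures above):

    #define MAX_DIRTY 1344                 /* 21 x 16 x 4 tiles, worst case */

    static SDL_Rect dirty[MAX_DIRTY];
    static int dirtyCount = 0;

    /* Called whenever a tile animates or changes state this frame. */
    void MarkTileDirty(int col, int row)
    {
        if (dirtyCount < MAX_DIRTY) {
            SDL_Rect r = { col * 32, row * 32, 32, 32 };
            dirty[dirtyCount++] = r;
        }
    }

    /* After redrawing only the tiles covered by the dirty rects... */
    void PresentFrame(SDL_Surface *screen)
    {
        SDL_UpdateRects(screen, dirtyCount, dirty);   /* flush just those regions */
        dirtyCount = 0;
    }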
You might consider merging neighbouring tiles with the same texture into a larger polygon with texture tiling (sort of a build process).
What about decreasing the frame rate to 30fps? I think it will be good enough for a 2D game.