recommended limit for memory management in Cocos2d? - objective-c

Is there a recommended limit for image sizes in Cocos2d, beyond which they are too big and take too much memory? Are there any rules, in dimensions or in KB, to avoid slowing the game down? (For the background image, or the graphics of my characters, even if I use a batch node?)
Thanks for your answers.

First of all, memory usage has very, very, very little to do with performance. You can fill up the entire memory with textures and the game won't care; it's when you render them that you'll see a difference. What matters then is how much of the screen area you're filling with textures, and how heavily they're overlaid, batched, rotated, scaled, shaded and alpha-blended. Those are the main factors in texture rendering performance; memory usage plays a very insignificant role.
You may be interested in the cocos2d sprite-batch performance test I did and the general cocos2d performance analysis. Both come with test projects.
As for the maximum texture sizes have a look at the table from my Learn Cocos2D book:
Note that iPhone and iPhone 3G devices have a 24 MB texture memory limit. 3rd generation (iPhone 3GS) and newer devices don't have that limit anymore. Also keep in mind that while a device may have 256 MB of memory installed, significantly less memory will be available for use by apps.
For example, on the iPad (1st gen) it is recommended not to use more than 100 MB of memory, with a maximum available memory peaking at around 125 MB and memory warning starting to show as early as around 80-90 MB memory usage.
With iOS 5.1 Apple also increased the maximum texture size of the iPad 2. The safest and most commonly usable texture size is 2048x2048 for Retina textures, and 1024x1024 for standard resolution textures.
Not in the table are iPod touch devices because they're practically identical to the iPhone models of the same generation, but not as easily identifiable. For example the iPod touch 3rd generation includes devices with 8, 16 and 32GB of flash memory, but the 8GB model is actually 2nd generation hardware.

The maximum dimensions of images and textures depend on the device you are supporting. Older devices supported smaller textures, I think 2048x2048 at most. I don't think such a limitation exists on current devices.
For large images you definitely want to use batch nodes; they have been shown to give the largest performance gains when dealing with large images. In general it's a good idea to use them wherever possible.
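As a point of reference, here is a minimal sketch of a batch node setup (cocos2d 1.x/2.x APIs assumed, file names are placeholders); every sprite added to the batch node shares one texture atlas and is drawn in a single draw call:

    // Inside a CCLayer (or other CCNode) subclass.
    - (void)addCharacters
    {
        // Register the sprite frames packed into one texture atlas.
        [[CCSpriteFrameCache sharedSpriteFrameCache]
            addSpriteFramesWithFile:@"characters.plist"];

        CCSpriteBatchNode *batch =
            [CCSpriteBatchNode batchNodeWithFile:@"characters.png"];
        [self addChild:batch];

        for (int i = 0; i < 10; i++) {
            CCSprite *enemy = [CCSprite spriteWithSpriteFrameName:@"enemy.png"];
            enemy.position = ccp(50 + i * 40, 100);
            [batch addChild:enemy];   // rendered as part of the batch
        }
    }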
As for how much you can load, it really depends on the device. The new iPad has 1 GB of memory and is designed to have much more memory available for large images. A first-gen iPad has a quarter of that amount, and in my experience an app starts to crash when it reaches around 100 MB of memory used (as confirmed using Instruments).
The trick is to use only as much memory as you need for the app's current operation, then release it when you move to a new scene or a new set of images/sprites/textures. You could, for example, have very large tiled textures where only the tiles nearest the viewport are loaded into memory. You could technically have an infinitely sized view that stretches forever if you remove from memory those parts of the view that are not visible on screen.
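A rough sketch of that idea, assuming cocos2d 1.x/2.x APIs; visibleRect, tileRectForIndex:, tileCount and the tile file names are hypothetical helpers used purely for illustration:

    // Keep only the tiles near the viewport alive.
    static const int kTileTagBase = 1000;

    - (void)updateTiles
    {
        CGRect visible = [self visibleRect];   // hypothetical helper

        for (int i = 0; i < self.tileCount; i++) {
            CCSprite *tile = (CCSprite *)[self getChildByTag:kTileTagBase + i];
            BOOL needed = CGRectIntersectsRect([self tileRectForIndex:i], visible);

            if (needed && tile == nil) {
                // Load a tile only when it approaches the viewport.
                NSString *file = [NSString stringWithFormat:@"tile_%d.png", i];
                tile = [CCSprite spriteWithFile:file];
                tile.tag = kTileTagBase + i;
                [self addChild:tile];
            } else if (!needed && tile != nil) {
                // Drop offscreen tiles so their textures can be purged.
                [self removeChild:tile cleanup:YES];
            }
        }
        [[CCTextureCache sharedTextureCache] removeUnusedTextures];
    }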
And of course when dealing with lots of resources, make sure your app delegate responds appropriately to its memory warnings.
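In a cocos2d app that typically boils down to something like this in the app delegate (cocos2d 1.x/2.x assumed):

    // UIApplicationDelegate callback, called by iOS under memory pressure.
    - (void)applicationDidReceiveMemoryWarning:(UIApplication *)application
    {
        // Drops cached textures and other reusable data that are not
        // currently in use by any node.
        [[CCDirector sharedDirector] purgeCachedData];
    }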

In my experience, a 1024x1024 batch node texture takes around 4 MB of memory (1024 x 1024 pixels x 4 bytes per pixel), and that is texture memory alone. An application has a maximum limit of about 24 MB of texture memory, so the game slows down as you approach that 24 MB and crashes once you exceed it. To avoid slowdowns I used at most 4 batch nodes at a time, i.e. 16 MB, and left the remaining 8 MB for variables and other data. Before using more batch nodes I would clean up memory and remove the unused ones. I don't know the memory limit of the 4S, but this is what I learned on the iPhone 4.
With this approach my game ran smoothly.

Related

How to properly assign huge heap space for JVM

I'm trying to work around an issue which has been bugging me for a while. In a nutshell: on what basis should one assign the max heap space for a resource-hogging application, and is there a downside to it being too large?
I have an application used to visualize huge medical data sets, which can eat up several gigabytes of memory if several imaging volumes are opened side by side. Caching the data to be viewed is essential for a fluent workflow. The software runs on Windows workstations and is started by a bootloader, which assigns the heap size and launches the main application. The actual memory needed by the main application is directly proportional to the data being viewed and cannot be determined by the bootloader, because that would require reading the data, which would ultimately consume too much time.
So, to ensure that the JVM has enough memory during launch, we set -Xmx as large as we dare, currently based on the maximum physical memory of the workstation. However, is there any downside to this? I've read (in a post from 2008) that it is possible for native processes to hog excess heap space, which can lead to memory errors at runtime. Should I maybe also check the free virtual memory or paging file size before assigning heap space? How would you deal with this situation?
Oh, and this is my first post to these forums. Nice to meet you all and be gentle! :)
Update:
Thanks for all the answers. I'm not sure if I put it into words well, but my problem arose from the fact that I have zero knowledge of the hardware this software will run on, but would nevertheless like to assign as much heap space to it as possible.
I settled on assigning a heap of 70% of physical memory if there is a sufficient amount of virtual memory available, and less otherwise.
You can have heap sizes of around 28 GB with little impact on performance, especially if you have large objects (lots of small objects can increase GC pause times).
Heap sizes of 100 GB are possible but have downsides, mostly because they can produce long pause times. If you use Azul Zing, it can handle much larger heap sizes significantly more gracefully.
The main limitation is the amount of physical memory. If your heap exceeds that, your application and your computer will slow to a crawl or become unusable.
A standard way around these issues in mapping software (which has to be able to map the whole world, for example) is to break your images into tiles, so you only load and display the portions of the image that are on screen. If you need to be able to zoom in and out you might need to store the data at two to four levels of scale. Using this approach you can view a map of the whole world on your phone.
Best not to set the JVM max memory to more than 60-70% of workstation memory, and in some cases even lower, for two main reasons. First, what the JVM consumes on the physical machine can be 20% or more greater than the heap, due to GC mechanics. Second, the representation of a particular data entity in the JVM heap may not be the only physical copy of that entity in the machine's RAM, as the OS keeps caches and buffers around the various IO devices from which it grabs these objects.

UIImage Memory Problems

In my app an API returns the URLs of images, which I want to display in the app. This is all well and good, except I started to notice that when I am given, and load, very high-res images, my app's memory usage spikes to 200+ MB, often causing it to crash, which is unacceptable.
In one particular example, I am given an image with dimensions of 8100x5400 pixels. When the app loaded this image it crashed.
At first I thought the problem was a memory leak I had created, but after doing some research it seems to be an unavoidable issue related to the size of the image: since the image is 43,740,000 pixels and each pixel uses 4 bytes, the decoded image will consume at least 174,960,000 bytes, or roughly 175 MB.
The problem is I cannot control the size of the images sent by the API; they may be any resolution, possibly even larger. Obviously a plain UIImage will not work for my purposes.
Is there any other way I can display an image with a potentially massive resolution without causing app-crashing memory usage?
Instead of downloading the image as data into memory, which will crash your app, download it as data to disk, which will not.
You can then use the Image I/O framework to load a smaller version of the image which won't take up so much memory.
(Note that you should never attempt to display an image larger than the actual display size that you need, since that's a massive waste of memory. So, even if you can't help downloading the large image, you can at least load and display a version that is the actual much smaller size you need.)
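A hedged sketch of that approach with Image I/O; the file URL and target size are placeholders:

    #import <UIKit/UIKit.h>
    #import <ImageIO/ImageIO.h>

    // Decode a downscaled version of a potentially huge image from disk
    // instead of decompressing the full-resolution bitmap into memory.
    static UIImage *DownscaledImage(NSURL *fileURL, CGFloat maxPixelSize)
    {
        CGImageSourceRef source =
            CGImageSourceCreateWithURL((__bridge CFURLRef)fileURL, NULL);
        if (source == NULL) return nil;

        NSDictionary *options = @{
            (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
            (id)kCGImageSourceCreateThumbnailWithTransform   : @YES,
            (id)kCGImageSourceThumbnailMaxPixelSize          : @(maxPixelSize)
        };
        CGImageRef thumb = CGImageSourceCreateThumbnailAtIndex(
            source, 0, (__bridge CFDictionaryRef)options);
        CFRelease(source);
        if (thumb == NULL) return nil;

        UIImage *image = [UIImage imageWithCGImage:thumb];
        CGImageRelease(thumb);
        return image;
    }

Called with, say, the longest side of the screen as maxPixelSize, this keeps memory proportional to what is actually displayed rather than to the original image resolution.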

Order Core-Image to process images only on GPU

I found some information on the internet saying that Core Image processes images on the CPU if either dimension is bigger than 2048 pixels. This seems to be true, because applying a CIFilter to a 3200x2000 image is very slow, while doing the same on a 2000x2000 image is much faster. Is it possible to tell Core Image to always process images on the GPU? Or was the information I found incorrect?
Processing on the GPU is not always faster, because your image data first has to be loaded to the GPU memory, processed, and then transferred back.
You can use kCIContextUseSoftwareRenderer to force software rendering (on the CPU) but there is no constant to force rendering on a GPU, I'm afraid. Also, software rendering does not work in the Simulator.
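For completeness, this is roughly how the renderer choice is expressed when the context is created; note that you can only force the CPU, there is no flag that forces the GPU:

    #import <CoreImage/CoreImage.h>

    // Default: Core Image chooses the renderer (GPU where possible, but it
    // may still fall back to the CPU for images above the size limit).
    CIContext *defaultContext = [CIContext contextWithOptions:nil];

    // Explicitly force the software (CPU) renderer.
    CIContext *cpuContext = [CIContext contextWithOptions:
        @{ kCIContextUseSoftwareRenderer : @YES }];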
The maximum size depends on the device you're working on. For the iPhone 3GS/4 and iPad 1 it's 2048x2048; for later iPhones and iPads it's 4096x4096. On OS X it depends on your graphics card and/or OS version (2, 4, 8, or 16K squared).
One possible way around the limit is to split your image into tiles that are below the limit and process each tile separately, then stitch the pieces back together afterwards.

OpenGL power of two texture performance [duplicate]

I am creating an OpenGL video player using FFmpeg, and none of my videos have power-of-two dimensions (they are standard video resolutions). It runs at a fine frame rate on my NVIDIA card, but I've found that it won't run on older ATI cards because they don't support non-power-of-two textures.
I will only be using this on an NVIDIA card, so I don't care too much about the ATI problem, but I was wondering how much of a performance boost I'd get if the textures were power-of-two. Is it worth padding them out?
Also, if it is worth it, how do I go about padding them out to the nearest larger power of two?
Writing a video player, you should update your texture content using glTexSubImage2D(). This function allows you to supply arbitrarily sized images that will be placed somewhere within the target texture. So you can initialize the texture first with a call to glTexImage2D() with the data pointer set to NULL, then fill in the data.
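A sketch of that approach for a hypothetical 1280x720 video, using plain OpenGL calls (frameData stands in for the decoded RGBA frame produced by FFmpeg):

    // One-time setup: allocate a power-of-two texture big enough to hold a
    // frame, without uploading any pixels yet (data pointer is NULL).
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 2048, 1024,            // next powers of two >= 1280 and 720
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // Per frame: copy the decoded 1280x720 frame into the top-left corner.
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    0, 0, 1280, 720,
                    GL_RGBA, GL_UNSIGNED_BYTE, frameData);

When drawing, use texture coordinates up to 1280/2048 horizontally and 720/1024 vertically so the unused padding on the right and bottom is never sampled.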
The performance gain of pure power of 2 textures strongly depends on the hardware used, but in extreme cases it may be up to 300%.

MFMailComposeViewController uses too much memory

When I try to send images as attachments (total size ~4 MB) using MFMailComposeViewController, Activity Monitor shows that about 100 MB (give or take 2 MB) of memory is in use. After sending or cancelling, only about 20 MB is freed. What happened to the remaining ~80 MB if the compose view with the images has been deallocated?
Thanks! :)
An image's file size and the amount of memory it consumes when displayed are two completely different things.
Images such as JPEGs and PNGs are compressed. When they are drawn to the screen they are uncompressed.
A quick rule of thumb to figure out how much memory an image will consume when displayed is
memory consumed = (width * height) * 4
For example, an image that is 2 KB on disk but 62 x 52 pixels will actually consume 62 x 52 x 4 = 12,896 bytes, or about 13 KB, when displayed. I imagine an image that is 4 MB on disk will consume a lot more than 4 MB.
The problem is that MFMailComposeViewController displays the images in its compose view when you add them as attachments, and as a result they are decompressed, consuming memory. So your 4 MB of images actually consume a lot more memory than you think they do.
Perhaps try sending only one image at a time. You also need to make sure you're releasing the images and the MFMailComposeViewController when you're done with them; otherwise that would surely be the source of the leak.
Also be aware of how you initially load your images. UIImage's imageNamed: method caches images, and cached images are only purged in low-memory situations, so they can hang around for a while if you're not hitting the limits.
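If that caching is a concern, one alternative (assuming the image ships in the app bundle) is to load it without the cache:

    // Cached: stays in UIImage's internal cache until a low-memory purge.
    UIImage *cached = [UIImage imageNamed:@"photo.jpg"];

    // Not cached: released as soon as nothing retains it.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"photo"
                                                      ofType:@"jpg"];
    UIImage *uncached = [UIImage imageWithContentsOfFile:path];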
Finally, you've noted that you're seeing the memory consumption in Instruments, but have you verified that it is actually a problem? Are you experiencing crashes due to low memory when testing your app while it is not connected to Instruments or the debugger?
Nobody is perfect, and that goes for Apple too. There have been documented cases in the past where Apple's frameworks had memory leaks (UIImage's caching leaked in iOS 2.x), but I wouldn't be so quick to blame the frameworks when you notice a spike in memory consumption. If the Leaks instrument is not showing any leaks and the static analyzer isn't showing any issues, the likeliest scenario is that it's simply memory consumption and not a leak.
It's important to remember that iOS devices don't have gigabytes of RAM like computers do. You need to be conservative with the memory you use. If that means not sending XX MB of images at the same time, then that's the way it has to be.