Force Core Image to process images only on GPU - objective-c

I found some information online claiming that Core Image processes images on the CPU if either dimension (width or height) exceeds 2048 pixels. That seems to be true, because applying a CIFilter to a 3200x2000 image is very slow, while the same filter on a 2000x2000 image is much faster. Is it possible to tell Core Image to always process images on the GPU? Or was the information I found incorrect?

Processing on the GPU is not always faster, because your image data first has to be uploaded to GPU memory, processed, and then transferred back.
You can use kCIContextUseSoftwareRenderer to force software rendering (on the CPU), but I'm afraid there is no constant to force rendering on the GPU. Also, software rendering does not work in the Simulator.

The maximum size depends on the device you're working on. For the iPhone 3GS/4 and iPad 1, it's 2048x2048. For later iPhones/iPads, it's 4096x4096. On OS X, it depends on your graphics card and/or OS version (2, 4, 8, or 16K²).
One possible way around the limit is to tile your image into pieces below the limit and process each tile separately. You'd then have to stitch the pieces back together afterwards.
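A minimal sketch of the tiling arithmetic (the helper names are illustrative, not a Core Image API): split the image into a grid of tiles that each fit under the hardware limit.

```c
#include <assert.h>

/* Illustrative helper: how many tiles are needed along one axis so
   that each tile stays within the maximum texture size. */
static int tiles_needed(int image_dim, int max_dim) {
    return (image_dim + max_dim - 1) / max_dim;  /* ceiling division */
}

/* Extent of tile `i` along an axis (the last tile may be smaller). */
static int tile_extent(int i, int image_dim, int max_dim) {
    int remaining = image_dim - i * max_dim;
    return remaining < max_dim ? remaining : max_dim;
}
```

For example, a 3200x2000 image with a 2048 limit would need a 2x1 grid: one 2048-wide tile and one 1152-wide tile, each 2000 pixels tall.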

Related

Advanced GPU control needed for BrowserWindows

I would like to set how much GPU RAM is used, so as to prevent the app from running out of memory on Windows. Does anyone have any expert advice?
I am using Electron to build an automatic player for Windows. This player plays a mix of videos (H.264-encoded MP4), HTML and JPEG content based on a schedule (sort of like a presentation).
I tested the app on several windows devices, the results vary greatly!
All devices are tiny computers by Asus. In general I noticed 2 distinct differences:
On devices that have no hardware acceleration, the Chromium GPU process uses about 30 MB of shared RAM, and this number never changes, regardless of the content played. The CPU carries all the load here, meaning it decodes the MP4s (H.264) in software instead of hardware.
On devices with hardware acceleration the CPU load is of course lower, but the RAM used by the Chromium GPU process varies greatly. While displaying JPEG or HTML it sits at about 0.5 GB; when MP4s kick in, it easily goes up to 2 GB and more.
On the stronger devices with hardware acceleration this is not a big issue: they have 8 GB of shared memory or more and don't crash. However, some of the other devices have only 4 GB of shared memory and can run out of memory quite easily.
The result of this lack of memory is either the app crashes completely (message with memory overflow is displayed) or the app just hangs (keeps running but doesn't do anything anymore, usually just displays a white screen).
I know that I can pass certain flags to browserwindow using app.commandLine.appendSwitch.
These are a few of the flags that I tried and the effect they had; I found a list of them here:
--force-gpu-mem-available-mb=600 ==> no effect whatsoever, process behaves as before and still surpasses 2GB of RAM.
--disable-gpu ==> This one obviously worked, but it is undesirable because it disables hardware acceleration completely
--disable-gpu-memory-buffer-compositor-resources ==> no change
--disable-gpu-memory-buffer-video-frames ==> no change
--disable-gpu-rasterization ==> no change
--disable-gpu-sandbox ==> no change
Why do some of these command line switches have no effect on the GPU behaviour? All devices have an onboard GPU and shared RAM. I know the command line switches are being applied on startup, because when I check the processes in the Windows Task Manager (using its command line column) I can see the switches have been passed to the processes. So the switches are loaded but still appear to be ignored.

UIImage Memory Problems

In my app I am returned, from an API, the URLs of images which I want to display. This is all well and good, except I started to notice that when I am given, and load, very high-res images, my app's memory usage spikes by 200+ MB, often causing it to crash, which is unacceptable.
In one particular example, I am given an image that is of the dimensions 8100*5400 pixels. When the app loaded this image it crashed.
I first thought the problem was a memory leak I had created, but after doing some research it seems to be an unavoidable issue related to the size of the image: since the image is 43,740,000 pixels and each pixel uses 4 bytes, the memory usage of the decoded image will be at least 174,960,000 bytes, or about 175 MB.
The problem is I cannot control the size of the images sent by the API; they may be any resolution, possibly even larger. Obviously a plain UIImage will not work for my purposes.
Is there any other way I can display an image with a potentially massive resolution without causing app-crashing memory usage?
Instead of downloading the image as data into memory, which will crash your app, download it as data to disk, which will not.
You can then use the Image I/O framework to load a smaller version of the image which won't take up so much memory.
(Note that you should never attempt to display an image larger than the actual display size that you need, since that's a massive waste of memory. So, even if you can't help downloading the large image, you can at least load and display a version that is the actual much smaller size you need.)
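The downsampling arithmetic can be sketched like this (the view size and screen scale below are illustrative assumptions; the actual loading would pass the computed value as kCGImageSourceThumbnailMaxPixelSize to CGImageSourceCreateThumbnailAtIndex):

```c
#include <assert.h>

/* Memory cost of a decoded image at 4 bytes per pixel (RGBA). */
static long image_bytes(long w, long h) {
    return w * h * 4;
}

/* Sketch: the kCGImageSourceThumbnailMaxPixelSize for a target view.
   Image I/O caps the *longer* dimension at this value. */
static long max_pixel_size(long view_w_pts, long view_h_pts, long scale) {
    long w = view_w_pts * scale, h = view_h_pts * scale;
    return w > h ? w : h;
}
```

The 8100x5400 image from the question decodes to about 175 MB, while a version capped to fill a (hypothetical) 375x667-point screen at 3x (2001 pixels on the long side) decodes to roughly 11 MB.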

Check Device Type For GPUImage

GPUImage requires, for the iPhone 4 and below, images smaller than 2048 pixels per side. The 4S and above can handle much larger. How can I check which device my app is currently running on? I haven't found anything in UIDevice that does what I'm looking for. Any suggestions/workarounds?
For this, you don't need to check device type, you simply need to read the maximum texture size supported by the device. Luckily, there is a built-in method within GPUImage that does this for you:
GLint maxTextureSize = [GPUImageContext maximumTextureSizeForThisDevice];
The above will give you the maximum texture size for the device you're running this on. That will determine the largest image size that GPUImage can work with on that device, and should be future-proof against whatever iOS devices come next.
This method works by caching the results of this OpenGL ES query:
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
if you're curious.
I should also note that you can provide images larger than the max texture size to the framework, but they get scaled down to the largest size supported by the GPU before processing. At some point, I may complete my plan for tiling subsections of these images in processing so that larger images can be supported natively. That's a ways off, though.
This is among the best device-detection libraries I've come across: https://github.com/erica/uidevice-extension
EDIT: Although the readme seems to suggest that the more up-to-date versions are in her "Cookbook" sources. Perhaps this one is more current.
Here is a useful class that I have used several times in the past that is very simple and easy to implement,
https://gist.github.com/Jaybles/1323251

recommended limit for memory management in Cocos2d?

Is there a recommended limit for image sizes in Cocos2d, beyond which they are too big and take too much memory? Are there rules, in dimensions or in KB, for avoiding slowdowns (for the background image, or my characters' graphics, even if I use a batch node)?
Thanks for your answer
First of all, memory usage has very, very little to do with performance. You can fill up the entire memory with textures; the game won't care. It's when you render them that there's a difference, and then it only matters how much of the screen area you're filling with textures and how heavily they're overlaid, batched, rotated, scaled, shaded and alpha-blended. Those are the main factors in texture rendering performance; memory usage plays a very insignificant role.
You may be interested in the cocos2d sprite-batch performance test I did and the general cocos2d performance analysis. Both come with test projects.
As for the maximum texture sizes, have a look at the table from my Learn Cocos2D book:
Note that iPhone and iPhone 3G devices have a 24 MB texture memory limit. 3rd generation (iPhone 3GS) and newer devices don't have that limit anymore. Also keep in mind that while a device may have 256 MB of memory installed, significantly less memory will be available for use by apps.
For example, on the iPad (1st gen) it is recommended not to use more than 100 MB of memory, with a maximum available memory peaking at around 125 MB and memory warning starting to show as early as around 80-90 MB memory usage.
With iOS 5.1 Apple also increased the maximum texture size of the iPad 2. The safest and most commonly usable texture size is 2048x2048 for Retina textures, and 1024x1024 for standard resolution textures.
Not in the table are iPod touch devices because they're practically identical to the iPhone models of the same generation, but not as easily identifiable. For example the iPod touch 3rd generation includes devices with 8, 16 and 32GB of flash memory, but the 8GB model is actually 2nd generation hardware.
The dimensional size of images and textures depends on the devices you are supporting. Older devices supported only small textures, I think 2048x2048 in size. I don't think such a limitation exists on current devices.
For large images, you definitely want to use batch nodes, as they have been shown to give the largest performance gains when dealing with large images. It is a good idea to use them as much as possible in general.
As for how much you can load, it really depends on the device. The new iPad has 1 GB of memory and is designed to have much more memory available for large images. A first-gen iPad has a quarter of that, and in my experience an app starts to crash when it reaches around 100 MB of memory used (as confirmed using Instruments).
The trick is to use only as much memory as you need for the current app's operation, then release it when you move to a new scene or new set of images/sprites/textures. You could for example have very large tiled textures where only the tiles nearest the viewport are loaded into memory. You could technically have an infinite sized view that stretches forever if you remove from memory those parts of the view that are not visible onscreen.
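As a sketch of that viewport-tiling idea (names and sizes are illustrative), the set of tile columns that must stay resident can be computed from the viewport alone:

```c
#include <assert.h>

/* Illustrative: indices of the first and last tile columns that
   intersect a viewport, so only those need to stay in memory. */
static void visible_tile_range(int viewport_x, int viewport_w, int tile_w,
                               int *first, int *last) {
    *first = viewport_x / tile_w;
    *last  = (viewport_x + viewport_w - 1) / tile_w;
}
```

For example, a 1024-pixel-wide viewport at x = 1500 over 1024-pixel tiles touches only columns 1 and 2; every other column's texture can be released.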
And of course when dealing with lots of resources, make sure your app delegate responds appropriately to its memory warnings.
In my experience, a 1024x1024 batch node texture takes around 4 MB of texture memory, and (on the iPhone 4, at least) an application has a maximum limit of about 24 MB of texture memory. The game slows down as you approach that 24 MB and crashes beyond it. To avoid slowdowns I used at most 4 batch nodes at one time, i.e. 16 MB, leaving the remaining 8 MB for variables and other data. Before adding more batch nodes I would clean up and remove unused ones. I don't know the memory limit on the 4S, but this is what I learned on the iPhone 4.
With this approach my games ran smoothly.
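The arithmetic behind the 4 MB figure (assuming an uncompressed RGBA8888 texture, i.e. 4 bytes per pixel):

```c
#include <assert.h>

/* Bytes of texture memory for an uncompressed texture. */
static long texture_bytes(long w, long h, long bytes_per_pixel) {
    return w * h * bytes_per_pixel;
}
```

texture_bytes(1024, 1024, 4) gives 4,194,304 bytes, i.e. 4 MB, so four such textures indeed occupy 16 MB. A 16-bit format such as RGBA4444 would halve that cost.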

OpenGL power of two texture performance [duplicate]

I am creating an OpenGL video player using FFmpeg, and none of my videos are power-of-two sized (as they are normal video resolutions). It runs at a fine FPS with my NVIDIA card, but I've found that it won't run on older ATI cards because they don't support non-power-of-two textures.
I will only be using this on an NVIDIA card, so I don't care too much about the ATI problem, but I was wondering how much of a performance boost I'd get if the textures were power-of-two. Is it worth padding them out?
Also, if it is worth it, how do I go about padding them out to the nearest larger power of two?
Writing a video player, you should update your texture content using glTexSubImage2D(). This function allows you to supply arbitrarily sized images, which will be placed somewhere within the target texture. So you can first initialize the texture with a call to glTexImage2D() with the data pointer being NULL, then fill in the data.
The performance gain of pure power of 2 textures strongly depends on the hardware used, but in extreme cases it may be up to 300%.
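Rounding a dimension up to the next larger power of two is straightforward; a minimal sketch:

```c
#include <assert.h>

/* Round v up to the next power of two (e.g. 720 -> 1024, 1280 -> 2048).
   Values that are already powers of two are returned unchanged. */
static unsigned next_pow2(unsigned v) {
    unsigned p = 1;
    while (p < v) p <<= 1;
    return p;
}
```

You would allocate the texture at next_pow2(width) x next_pow2(height) via glTexImage2D() with a NULL data pointer, upload each decoded frame into one corner with glTexSubImage2D(), and scale your texture coordinates by width/padded_width and height/padded_height so the padding is never sampled.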