Objective-C: Load part of an image file

I searched the Core Graphics API but did not find any way to load only a subset of the pixels in a given image.
I need to load a really big image into OpenGL at runtime (the requirement is that I can't resize it at compile time). The texture is too big (> GL_MAX_TEXTURE_SIZE), so I subdivide it into smaller images so that OpenGL doesn't complain.
Right now this is what I do to load the big image:
// Read the whole file into memory and decode it into a UIImage
NSData *texData = [[NSData alloc] initWithContentsOfFile:textureFilePath];
UIImage *srcImage = [[UIImage alloc] initWithData:texData];
Then I use Core Graphics to subdivide the image with CGImageCreateWithImageInRect(), and the pieces are ready to be sent to OpenGL.
The problem is that on an iPod touch the app crashes because it uses too much memory after loading the big image. I would like to load only the pixels of interest without creating a huge memory peak, so that I can release that memory and then load the next chunk I need. Does anyone know if this is possible?
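For illustration, here is a rough sketch of the per-tile step, assuming a 1024-pixel tile size and a per-tile @autoreleasepool so intermediate copies are released early (note that CGImageCreateWithImageInRect still needs the full decoded source image, so this alone does not remove the initial memory peak):
CGImageRef fullImage = srcImage.CGImage;
size_t tileSize = 1024; // must stay <= GL_MAX_TEXTURE_SIZE
for (size_t y = 0; y < CGImageGetHeight(fullImage); y += tileSize) {
    for (size_t x = 0; x < CGImageGetWidth(fullImage); x += tileSize) {
        @autoreleasepool {
            CGRect tileRect = CGRectMake(x, y, tileSize, tileSize);
            CGImageRef tile = CGImageCreateWithImageInRect(fullImage, tileRect);
            // ... copy the tile's pixels and upload them with glTexImage2D here ...
            CGImageRelease(tile);
        }
    }
}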

Related

Objective-C: improve CIImage filter speed

I wrote the following code to apply a Sepia filter to an image:
- (void)applySepiaFilter {
    // Set previous image (push a copy onto the undo stack)
    NSData *buffer = [NSKeyedArchiver archivedDataWithRootObject:self.mainImage.image];
    [_images push:[NSKeyedUnarchiver unarchiveObjectWithData:buffer]];
    UIImage *u = self.mainImage.image;
    CIImage *image = [[CIImage alloc] initWithCGImage:u.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                                  keysAndValues:kCIInputImageKey, image,
                                                @"inputIntensity", @0.8, nil];
    CIImage *outputImage = [filter outputImage];
    self.mainImage.image = [self imageFromCIImage:outputImage];
}
- (UIImage *)imageFromCIImage:(CIImage *)ciImage {
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}
When I run this code it seems to lag for 1-2 seconds. I heard that Core Image is faster than Core Graphics, but I am unimpressed with the rendering time. I was wondering whether this would be faster to process in Core Graphics or even OpenCV (which is being used elsewhere in the project)? If not, is there any way I can optimize this code to run faster?
I can almost guarantee it will be slower in Core Graphics than using Core Image, depending on the size of the image. If the image is small, Core Graphics may be fine, but if you are doing a lot of processing, it will be much slower than rendering using the GPU.
Core Image is very fast; however, you have to be very conscious of what is going on. Most of the performance hit with Core Image is due to setting up the context and copying images to/from Core Image. In addition to just copying bytes, Core Image may be converting between image formats as well.
Your code is doing the following every time:
Creating a CIContext. (slow)
Taking bytes from a CGImage and creating a CIImage.
Copying image data to GPU (slow).
Processing Sepia filter (fast).
Copying result image back to CGImage. (slow)
This is not a recipe for peak performance. Bytes from CGImage will typically live in CPU memory, but Core Image wants to use the GPU for its processing.
An excellent reference for performance considerations is provided in the Getting the Best Performance documentation for Core Image:
Don’t create a CIContext object every time you render.
Contexts store a lot of state information; it’s more efficient to reuse them.
Evaluate whether your app needs color management. Don’t use it unless you need it. See Does Your App Need Color Management?.
Avoid Core Animation animations while rendering CIImage objects with a GPU context.
If you need to use both simultaneously, you can set up both to use the CPU.
Make sure images don’t exceed CPU and GPU limits. (iOS)
Use smaller images when possible.
Performance scales with the number of output pixels. You can have Core Image render into a smaller view, texture, or framebuffer. Allow Core Animation to upscale to display size.
Use Core Graphics or Image I/O functions to crop or downsample, such as the functions CGImageCreateWithImageInRect or CGImageSourceCreateThumbnailAtIndex.
The UIImageView class works best with static images.
If your app needs to get the best performance, use lower-level APIs.
Avoid unnecessary texture transfers between the CPU and GPU.
Render to a rectangle that is the same size as the source image before applying a contents scale factor.
Consider using simpler filters that can produce results similar to algorithmic filters.
For example, CIColorCube can produce output similar to CISepiaTone, and do so more efficiently.
Take advantage of the support for YUV images in iOS 6.0 and later.
If you demand real-time processing performance, you will want to use an OpenGL view that Core Image can render its output to, and read your image bytes directly into the GPU instead of pulling them from a CGImage. Using a GLKView and overriding drawRect: is a fairly simple way to get a view that Core Image can render directly to. Keeping data on the GPU is the best way to get peak performance out of Core Image.
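One way to wire that up, using the GLKViewDelegate callback rather than a drawRect: override, is sketched below (the EAGLContext, the view, and the ciContext/filteredImage instance variables are assumptions, not taken from the question's code):
// One-time setup, e.g. in viewDidLoad: a GL-backed Core Image context that renders into the GLKView.
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
GLKView *glkView = [[GLKView alloc] initWithFrame:self.view.bounds context:eaglContext];
glkView.delegate = self;
ciContext = [CIContext contextWithEAGLContext:eaglContext];

// Draw callback: render the filtered CIImage straight into the view's framebuffer.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    CGRect destination = CGRectMake(0, 0, view.drawableWidth, view.drawableHeight);
    [ciContext drawImage:filteredImage inRect:destination fromRect:[filteredImage extent]];
}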
Try to reuse as much as possible. Keep a CIContext around for subsequent renders (like the doc says). If you end up using an OpenGL view, these are also things you may want to re-use as much as possible.
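For example, here is a minimal sketch of the sepia code with the context and filter kept in instance variables instead of being rebuilt on every call (the _sepiaContext and _sepiaFilter ivar names are hypothetical):
- (UIImage *)sepiaImageFromImage:(UIImage *)source {
    if (!_sepiaContext) {
        // Created once and reused; setting up the context is the expensive part.
        _sepiaContext = [CIContext contextWithOptions:nil];
        _sepiaFilter = [CIFilter filterWithName:@"CISepiaTone"];
        [_sepiaFilter setValue:@0.8 forKey:@"inputIntensity"];
    }
    CIImage *input = [[CIImage alloc] initWithCGImage:source.CGImage];
    [_sepiaFilter setValue:input forKey:kCIInputImageKey];
    CIImage *output = [_sepiaFilter outputImage];
    CGImageRef cgImage = [_sepiaContext createCGImage:output fromRect:[output extent]];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}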
You may also be able to get better performance by using software rendering: [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer: @YES}]. Software rendering avoids the copies to and from the GPU, but it has performance limitations in the actual render, since a CPU render is usually slower than a GPU render.
So, you can choose your level of difficulty to get maximum performance. The best performance can be more challenging, but a few tweaks may get you to "acceptable" performance for your use case.

Fast animation with big images

I have a task to make a simple animation for the iPad 2, like the one here: http://www.subaru.jp/legacy/b4/index2.html
The user can simply slide left and right and the object visually rotates around its vertical axis. I think the simplest way to do this is to use a UIImage or a CCSprite from cocos2d, set up an array of images, and change images depending on touches. The images are planned to be 1024x768 (full screen), with at least 15-20 images per second for a smoother animation. The question is: is it possible to do this really smoothly this way? What is the real limit of the iPad 2 for such a thing? And if it's beyond the bounds, how can I achieve this behavior another way?
Ok, let's run the math:
15 1024x768 images per second. If you use a 4096x4096 texture atlas, you can fit about 20 such frames (4 across, 5 down) into a single atlas. That covers roughly one second.
That means you need to load another texture atlas every second, and at most you can keep 2-3 such atlases in memory (each 4096x4096 RGBA atlas uses 64 MB of memory).
Really the only way to make this feasible is to use .PVR.CCZ texture atlases to improve load times and reduce memory usage. You'd still have to load/unload texture atlases frequently (within a few seconds), so you should test how fast loading a 4k .PVR.CCZ texture is and whether that impacts the animation.
If that's too slow (which I suspect it will be) you'll have to use 1024x1024 .pvr.ccz textures (single frames) and keep caching 4 or more of them ahead of time using the CCTextureCache async methods (and uncache the texture you're currently replacing) so that the loading of new textures occurs in the background and doesn't affect animation speed.
Since this is about rotation, you'd have to ensure that at least one, better two, frames in either direction are in the cache. Since rotation can happen at various speeds, the user might still experience delays regardless.
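A rough sketch of that cache-ahead idea, assuming the frame file paths live in an array on the controller (the framePaths property, the ±2 lookahead, and the helper for picking stale frames are hypothetical):
- (void)preloadFramesAroundIndex:(NSInteger)index {
    CCTextureCache *cache = [CCTextureCache sharedTextureCache];
    for (NSInteger offset = -2; offset <= 2; offset++) {
        NSInteger frame = index + offset;
        if (frame < 0 || frame >= (NSInteger)[self.framePaths count]) continue;
        // Loads the texture on a background thread; the selector fires once it is ready.
        [cache addImageAsync:[self.framePaths objectAtIndex:frame]
                      target:self
                    selector:@selector(frameTextureCached:)];
    }
    // Evict textures that are now far from the current frame so memory stays bounded.
    for (NSString *stalePath in [self framePathsFarFromIndex:index]) { // hypothetical helper
        [cache removeTextureForKey:stalePath];
    }
}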
You should further reduce color bit depth of the textures as much as possible until it affects image quality too much.
If you apply every trick in the book, I'm sure it's doable. But it's not going to be as simple as "play animation" and be done with it. If that's what you wanted to know.
I've done something like this before, but using a JS library inside a UIWebView control. The library is called UIZE; check this example. I used it with around 100 images at 1024 × 655 and it was very smooth.
Download the library from the site and organize the folders as follows:
rotation3d
  example
    3d-rotation-viewer.html
  images
    (the image files)
  js
    (the library files)
In your Objective-C class, use the following code to load the HTML page in the UIWebView:
NSString *path = [[NSBundle mainBundle] pathForResource:@"3d-rotation-viewer"
                                                 ofType:@"html"
                                            inDirectory:@"rotation3d/example"];
NSURL *url = [NSURL fileURLWithPath:path];
NSString *theAbsoluteURLString = [url absoluteString];
NSString *queryString = @"?param1=something"; // parameters to pass to your HTML page
NSString *absoluteURLwithQueryString = [theAbsoluteURLString stringByAppendingString:queryString];
NSURL *finalURL = [NSURL URLWithString:absoluteURLwithQueryString];
NSURLRequest *request = [NSURLRequest requestWithURL:finalURL
                                         cachePolicy:NSURLRequestReloadIgnoringCacheData
                                     timeoutInterval:10.0];
[webView loadRequest:request];

Animating retina images

I'm trying to animate some images. The images work well on non-retina iPads, but their retina counterparts are slow and the animations will not cycle through at the specified rate. The code I'm using is below, with the method called every 1/25th of a second. This approach appears to perform better than UIView animations.
if (counter < 285) {
    NSString *file = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"Animation HD1.2 png sequence/file_HD1.2_%d", counter] ofType:@"png"];
    @autoreleasepool {
        UIImage *someImage = [UIImage imageWithContentsOfFile:file];
        falling.image = someImage;
    }
    counter++;
} else {
    NSLog(@"Timer invalidated");
    [timer invalidate];
    timer = nil;
    counter = 1;
}
}
I realise there are a lot of images, but the performance is the same for animations with fewer frames. Like I said, the non-retina animations work well. Each image above is about 90 KB. Am I doing something wrong, or is this simply a limitation of the iPad? To be honest, I find it hard to believe that it couldn't handle something like this when it can handle the likes of complex 3D games, so I imagine I'm doing something wrong. Any help would be appreciated.
EDIT 1:
From the answers below, I have edited my code but to no avail. Executing the code below results in the device crashing.
in viewDidLoad
NSString *fileName;
myArray = [[NSMutableArray alloc] init];
for (int i = 1; i < 285; i++) {
    fileName = [NSString stringWithFormat:@"Animation HD1.2 png sequence/HD1.2_%d.png", i];
    [myArray addObject:[UIImage imageNamed:fileName]];
    NSLog(@"Loaded image: %d", i);
}
falling.userInteractionEnabled = NO;
falling.animationImages = myArray;
falling.animationDuration = 11.3;
falling.animationRepeatCount = 1;
falling.contentMode = UIViewContentModeCenter;
the animation method
-(void) triggerAnimation {
[falling startAnimating];
}
First of all, animation performance on the retina iPad is notoriously choppy. That said, there are a few things you could do to make sure you're getting the best performance for your animation (in no particular order).
Preloading the images - As some others have mentioned, your animation speed suffers when you have to wait for the reading of your image before you draw it. If you use UIImageView's animation properties this preloading will be taken care of automatically.
Using the right image type - Despite the advantage in file size, using JPEGs instead of PNGs will slow your animation down significantly. PNGs are less compressed and are easier for the system to decompress. Also, Apple has significantly optimized the iOS system for reading and drawing PNG images.
Reducing Blending - If at all possible, try and remove any transparency from your animation images. Make sure there is no alpha channel in your images even if it seems completely opaque. You can verify by opening the image in Preview and opening the inspector. By reducing or removing these transparent pixels, you eliminate extra rendering passes the system has to do when displaying the image. This can make a significant difference.
Using a GPU-backed animation - Your current method of using a timer to animate the image is not recommended for optimal performance. By not using UIViewAnimation or CAAnimation you are forcing the CPU to do most of the animation work. Many of the animation techniques of Core Animation and UIViewAnimation are optimized and backed by OpenGL, which uses the GPU to process images and animate. Graphics processing is what the GPU is made for, and by utilizing it you will maximize your animation performance (see the sketch after this list).
Avoiding pixel misalignment - Make sure your animation images are at the right size on screen when displaying them. If you are stretching your image while animating or using an incorrect frame, the system has to do more work to process each frame. Also, using whole numbers for any frame or point values will prevent anti-aliasing when the system tries to position an image on a fractional pixel.
Be wary of shadows and rounded corners - CALayer has lots of easy ways to create shadows and rounded corners, but if you are moving these layers in animations, the system will often redraw the layer in each frame of the animation. This is the case when specifying a shadow using the shadowOffset property (using UILabel's shadow properties will not render every frame). Also, borders and using masksToBounds and clipsToBounds will be more performance intensive than just using an image editor to crop the actual asset.
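As a rough illustration of the GPU-backed approach above, here is a sketch that lets Core Animation drive the frame swapping instead of an NSTimer (the frame names and the 11.3 s duration come from the question, but preloading every CGImage like this still costs a lot of memory, so treat it only as a starting point):
NSMutableArray *frames = [NSMutableArray array];
for (int i = 1; i < 285; i++) {
    UIImage *img = [UIImage imageNamed:[NSString stringWithFormat:@"HD1.2_%d.png", i]];
    if (img) [frames addObject:(__bridge id)img.CGImage];
}
CAKeyframeAnimation *anim = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
anim.values = frames;                        // one CGImage per frame
anim.duration = 11.3;
anim.calculationMode = kCAAnimationDiscrete; // jump between frames, no cross-fade
anim.repeatCount = 1;
[falling.layer addAnimation:anim forKey:@"frameAnimation"];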
There are a few things to notice here:
If "falling" is UIImageView, make sure it's content mode says something like "center" and not some sort of scaling (make sure your images fit it, of course).
Other than that, as @FogleBird said, test whether your device has enough memory to preload all the images; if not, try at least to preload the data by creating NSData objects from the image files.
Your use of @autoreleasepool is not very useful: you end up creating an autorelease pool that does a single thing, removing a reference to an already retained object, so there is no memory gain, only a performance loss.
If anything, you should have wrapped the file name formatter code, and considering this method is called by an NSTimer, it is already wrapped in an autorelease pool.
Just wanted to point out: when you are creating the NSString with the image name, what is the "Animation HD1.2 png sequence/HD1.2_%d.png"?
It looks like you are trying to put a path there; try just the image name, e.g. "HD1.2_%d.png".

Objective-C, fastest way to show a sequence of images in UIImageView [duplicate]

Possible Duplicate:
How to efficiently show many Images? (iPhone programming)
I have hundreds of images, which are frame images of one animation (24 images per second). Each image size is 1024x690.
My problem is, I need to make smooth animation iterating each image frame in UIImageView.
I know I can use animationImages of UIImageView. But it crashes because of memory problems.
Also, I can use imageView.image = [UIImage imageNamed:@""], which would cache each image so that the next animation repeat is smooth. But caching a lot of images crashes the app.
Now I use imageView.image = [UIImage imageWithContentsOfFile:@""], which does not crash the app, but doesn't make the animation as smooth.
Maybe there is a better way to make a smooth animation out of frame images?
Maybe I need to do some preparation in order to achieve a better result. I need your advice. Thank you!
You could try caching, say, 10 images at a time in memory (you may have to play around with the correct limit; I doubt it's 10). Every time you change the image of the imageView you could do something like this:
// remove the image that is currently displayed from the cache
[images removeObjectAtIndex:0];
// set the image to the next image in the cache
imageView.image = [images objectAtIndex:0];
// add a new image to the end of the FIFO
[images addObject:[UIImage imageNamed:@"10thImage.png"]];
You can find your answer here: How to efficiently show many Images? (iPhone programming)
To summarize what is said in that link, you get better performance when showing many images if you use low-level APIs like Core Animation and OpenGL, as opposed to UIKit.
You could create a buffer of several images using an array. This buffer can be loaded/populated using imageWithContentsOfFile from a background thread (asynchronous dispatch onto a concurrent GCD queue, for example).
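A small sketch of that idea, assuming the frames are named frame_1.png, frame_2.png, ..., that nextIndex is whatever frame counter you keep, and that self.frameBuffer is the mutable array the animation reads from (all of these names are made up for the example):
dispatch_queue_t loadQueue = dispatch_queue_create("com.example.frameloader", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(loadQueue, ^{
    // Decode the next frame off the main thread so the animation doesn't stall.
    NSString *path = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"frame_%d", nextIndex]
                                                     ofType:@"png"];
    UIImage *frame = [UIImage imageWithContentsOfFile:path];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Touch the shared buffer (and anything UIKit) only on the main thread.
        if (frame) [self.frameBuffer addObject:frame];
    });
});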

About fast texture swapping on iOS OpenGL ES

I'm working on a buffer to load very large (screen-size) pictures onto a single surface.
The idea is to animate a lot of pictures (more than the video memory can store) frame by frame.
I have written code for the buffer, but I have a big problem with the bitmap loading time.
My code works like this:
I load an array of local bitmap file paths.
I (think I) preload my bitmap data into memory: I use a thread to store a CGImageRef for each of my pictures (40 for the moment) in an NSArray.
In a second thread, the code checks another NSArray to determine whether it is empty or not; if it is empty, I bind my CGImageRefs to video memory by creating textures (using a sharegroup for this).
This array stores 20 texture names, and it is used directly by OpenGL to draw the surface. This array is my "buffer".
When I play my animation, I delete old textures from my "buffer" and my thread (at point 3) loads a new texture.
It works, but it is really slow, and after a few seconds the animation starts to lag.
Can you help me optimize my code?
Depending on device and iOS version glTexImage is just slow.
With iOS 4 performance was improved so that you can expect decent speed on 2nd gen devices too, and with decent I mean one or two texture uploads per frame...
Anyway:
Use glTexSubImage and reuse already created texture-IDs.
Also, when using glTex(Sub)Image, try to use a texture ID that wasn't used for rendering in that frame. I mean: add some kind of texture-ID double buffering (see the sketch below).
I assume you do all your GL work on the same thread; if not, change that.
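A minimal sketch of that double-buffered upload, assuming screen-sized RGBA frames (TEX_WIDTH, TEX_HEIGHT, and newFramePixels are placeholders, not names from the question):
static GLuint texIDs[2];
static int uploadIndex = 0;

// One-time setup: allocate storage for both textures so later uploads can reuse them.
glGenTextures(2, texIDs);
for (int i = 0; i < 2; i++) {
    glBindTexture(GL_TEXTURE_2D, texIDs[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, TEX_WIDTH, TEX_HEIGHT, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
}

// Per frame: upload the next bitmap into the texture that was NOT drawn last frame,
// draw with the other one, then swap roles.
glBindTexture(GL_TEXTURE_2D, texIDs[uploadIndex]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, TEX_WIDTH, TEX_HEIGHT,
                GL_RGBA, GL_UNSIGNED_BYTE, newFramePixels);
uploadIndex = 1 - uploadIndex;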