I want to scale images to 400x400 (I am creating thumbnails). I am using the Scriptable Image Processing System (SIPS) in a Cocoa application, but its efficiency is poor: SIPS uses 70-90% CPU and takes about 20 seconds to convert 300 images. Should I use the CIImage class (CIImage is the type required to use the GPU-optimized Core Image filters) or the NSImage class? Can anyone suggest a better method?
A very simple and fast way to generate thumbnails on OS X is to use QLThumbnailImageCreate.
It's just one line of code so you can easily try out how it compares to SIPS & Core Image.
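For example, a minimal sketch (assuming a file NSURL named imageURL and QuickLook.framework linked; these names are illustrative, not from the original post) might look like this:

#import <QuickLook/QuickLook.h>   // link QuickLook.framework

// imageURL is a hypothetical NSURL pointing at the source image on disk.
CGImageRef thumbnail = QLThumbnailImageCreate(kCFAllocatorDefault,
                                              (__bridge CFURLRef)imageURL,
                                              CGSizeMake(400.0, 400.0),
                                              NULL);
if (thumbnail != NULL) {
    NSImage *thumbImage = [[NSImage alloc] initWithCGImage:thumbnail
                                                      size:NSMakeSize(400.0, 400.0)];
    CGImageRelease(thumbnail);
    // ...use thumbImage...
}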
I tried thumbnail generation with NSImage, CIImage, and sips. All of them use about the same CPU (70-90%), but sips is the fastest.
I wrote the following code to apply a Sepia filter to an image:
- (void)applySepiaFilter {
    // Save the previous image so it can be restored later
    NSData *buffer = [NSKeyedArchiver archivedDataWithRootObject:self.mainImage.image];
    [_images push:[NSKeyedUnarchiver unarchiveObjectWithData:buffer]];

    UIImage *u = self.mainImage.image;
    CIImage *image = [[CIImage alloc] initWithCGImage:u.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"
                                  keysAndValues:kCIInputImageKey, image,
                                                @"inputIntensity", @0.8, nil];
    CIImage *outputImage = [filter outputImage];
    self.mainImage.image = [self imageFromCIImage:outputImage];
}

- (UIImage *)imageFromCIImage:(CIImage *)ciImage {
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}
When I run this code it seems to lag for 1-2 seconds. I heard that Core Image is faster than Core Graphics, but I am unimpressed with the rendering time. I was wondering if this would be faster in Core Graphics, or even in OpenCV (which is being used elsewhere in the project)? If not, is there any way I can optimize this code to run faster?
I can almost guarantee it will be slower in Core Graphics than in Core Image, depending on the size of the image. If the image is small, Core Graphics may be fine, but if you are doing a lot of processing it will be much slower than rendering on the GPU.
Core Image is very fast; however, you have to be very conscious of what is going on. Most of the performance hit with Core Image comes from setting up the context and copying images to and from Core Image. In addition to just copying bytes, Core Image may be converting between image formats as well.
Your code is doing the following every time:
Creating a CIContext (slow).
Taking the bytes from a CGImage and creating a CIImage.
Copying the image data to the GPU (slow).
Processing the sepia filter (fast).
Copying the result image back to a CGImage (slow).
This is not a recipe for peak performance. The bytes from a CGImage typically live in CPU memory, but Core Image wants to use the GPU for its processing.
An excellent set of performance considerations is provided in the Getting the Best Performance documentation for Core Image:
Don’t create a CIContext object every time you render.
Contexts store a lot of state information; it’s more efficient to reuse them.
Evaluate whether your app needs color management. Don’t use it unless you need it. See Does Your App Need Color Management?.
Avoid Core Animation animations while rendering CIImage objects with a GPU context.
If you need to use both simultaneously, you can set up both to use the CPU.
Make sure images don’t exceed CPU and GPU limits. (iOS)
Use smaller images when possible.
Performance scales with the number of output pixels. You can have Core Image render into a smaller view, texture, or framebuffer. Allow Core Animation to upscale to display size.
Use Core Graphics or Image I/O functions to crop or downsample, such as CGImageCreateWithImageInRect or CGImageSourceCreateThumbnailAtIndex (see the Image I/O sketch after this list).
The UIImageView class works best with static images.
If your app needs to get the best performance, use lower-level APIs.
Avoid unnecessary texture transfers between the CPU and GPU.
Render to a rectangle that is the same size as the source image before applying a contents scale factor.
Consider using simpler filters that can produce results similar to algorithmic filters.
For example, CIColorCube can produce output similar to CISepiaTone, and do so more efficiently.
Take advantage of the support for YUV images in iOS 6.0 and later.
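As a hedged example of the crop/downsample tip above (imageURL and the 400-pixel limit are assumptions for illustration), Image I/O can build the smaller image for you before Core Image ever sees it:

#import <ImageIO/ImageIO.h>   // link ImageIO.framework

// imageURL is a hypothetical NSURL for the full-size source image.
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
if (source != NULL) {
    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways: @YES,  // always build a thumbnail
        (id)kCGImageSourceCreateThumbnailWithTransform: @YES,    // respect EXIF orientation
        (id)kCGImageSourceThumbnailMaxPixelSize: @400            // longest side, in pixels
    };
    CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    if (thumbnail != NULL) {
        UIImage *thumb = [UIImage imageWithCGImage:thumbnail];
        // ...hand thumb to the filter or the view...
        CGImageRelease(thumbnail);
    }
    CFRelease(source);
}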
If you demand real-time processing performance, you will want to use an OpenGL view that Core Image can render its output to, and read your image bytes directly into the GPU instead of pulling them from a CGImage. Using a GLKView and overriding drawRect: is a fairly simple way to get a view that Core Image can render to directly. Keeping data on the GPU is the best way to get peak performance out of Core Image.
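This is only a rough sketch of that approach (the class name and image property are hypothetical); it creates the CIContext once from the view's EAGLContext and draws the filtered CIImage straight into the view:

#import <GLKit/GLKit.h>
#import <CoreImage/CoreImage.h>

@interface FilterView : GLKView   // hypothetical GLKView subclass
@property (nonatomic, strong) CIImage *imageToRender;
@property (nonatomic, strong) CIContext *ciContext;
@end

@implementation FilterView

- (instancetype)initWithFrame:(CGRect)frame {
    EAGLContext *glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    self = [super initWithFrame:frame context:glContext];
    if (self) {
        // Create the Core Image context once, backed by the GL context, and reuse it.
        _ciContext = [CIContext contextWithEAGLContext:glContext];
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    if (!self.imageToRender) return;
    // drawableWidth/drawableHeight are in pixels, which is what Core Image expects.
    CGRect destination = CGRectMake(0, 0, self.drawableWidth, self.drawableHeight);
    [self.ciContext drawImage:self.imageToRender
                       inRect:destination
                     fromRect:self.imageToRender.extent];
}

@end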
Try to reuse as much as possible. Keep a CIContext around for subsequent renders (like the doc says). If you end up using an OpenGL view, these are also things you may want to re-use as much as possible.
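For example, a minimal sketch of keeping one context around (the property name is illustrative, not from the original code):

// Declared in the class extension:
@property (nonatomic, strong) CIContext *ciContext;

// Lazily create the context once and reuse it for every filter pass.
- (CIContext *)ciContext {
    if (!_ciContext) {
        _ciContext = [CIContext contextWithOptions:nil];
    }
    return _ciContext;
}

- (UIImage *)imageFromCIImage:(CIImage *)ciImage {
    CGImageRef cgImage = [self.ciContext createCGImage:ciImage fromRect:[ciImage extent]];
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}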
You may also be able to get better performance by using software rendering, created with [CIContext contextWithOptions:@{kCIContextUseSoftwareRenderer: @(YES)}]. Software rendering avoids the copies to and from the GPU, but the render itself will usually be slower on the CPU than on the GPU.
So, you can choose your level of difficulty to get maximum performance. The best performance can be more challenging, but a few tweaks may get you to "acceptable" performance for your use case.
I wrote code that loops through a video feed of a computer screen and recognizes certain PNG images by looping through pixels. I get 60 fps with 250% CPU usage (1280x800 video feed). The code is a blend of Objective-C and C++.
I'm trying to find a faster alternative. Can Core Image detect instances of an image within another image and give me the pixel location? If not, is OpenCV fast enough to do that kind of processing at 60fps?
If Core Image and OpenCV aren't the correct tools, is there another tool that would be better suited?
(I haven't found any documentation showing that Core Image can do what I need; I am trying to get an OpenCV demo working to benchmark.)
I load some images from the web in my app and draw them using UIImage drawImage. I want to keep using the same images for the retina display but smooth them with interpolation. How can we accomplish this?
I suppose I'm fine with either saving (in memory) a double-resolution version ahead of time or scaling at render time; it depends on how much scaling at render time affects performance.
Core Graphics does this automatically for you, there is absolutely no need to store upscaled bitmaps in your app, this would just be a waste of storage space. You can influence the interpolation quality of a graphics context a little bit with the CGContextSetInterpolationQuality function.
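As a rough illustration (assuming a UIView subclass whose image property holds the downloaded, non-retina UIImage; both names are placeholders), you could raise the interpolation quality in drawRect: before drawing:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Ask Core Graphics for higher-quality interpolation when it scales the bitmap.
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    [self.image drawInRect:self.bounds];   // self.image is an assumed property holding the downloaded UIImage
}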
I'm building an image gallery type of application and I want to know what the best image size and format (.png or .jpg) is to store in the app.
I also want to know the best and most efficient way to store and load images natively.
PNGs are optimized for the OS when they are added to your app bundle. To improve performance of your app you want to:
Make sure the images have no alpha channel in them; otherwise the OS will try to blend them.
Resize your images to be the size they will end up in the UI. This will reduce the read bandwidth for the drawing, saving you memory and boosting performance.
Images in the app bundle can be read into UIImageViews:
UIImageView *newImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"myimage.png"]];
This will cache the image for you, which is good for lots of small images that are reused a lot. For larger, infrequently used images you want to use +imageWithContentsOfFile: instead of +imageNamed:, for example:
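(The file names below are placeholders.)

// +imageNamed: caches — good for small, frequently reused images in the bundle.
UIImage *icon = [UIImage imageNamed:@"myimage.png"];

// +imageWithContentsOfFile: does not cache — better for large, rarely used images.
NSString *path = [[NSBundle mainBundle] pathForResource:@"largePhoto" ofType:@"png"];
UIImage *photo = [UIImage imageWithContentsOfFile:path];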
As far as I know, Xcode optimizes PNGs when it builds the app bundle, and iOS performs better with them.
You may want to check out this article: http://bjango.com/articles/pngcompression/
I need to take some images from the iPhone/iPad photo library from within my app, store them in a Core Data entity, and display them as small thumbnail images (48x48 pixels) in a UITableViewCell and at about 80x80 pixels in a detail UIView. I've followed the Recipes sample app, where they use UIImageJPEGRepresentation(value, 0.1) to convert to NSData and store the bytes inside Core Data, and it doesn't end up taking much space, which is good. But when I retrieve the data using UIImage *uiImage = [[UIImage alloc] initWithData:value]; and display it as a thumbnail image with "Aspect Fit", it looks terrible and grainy. I tried changing the image-quality parameter of the JPEG compression, but even setting it to 0.9 doesn't help.
Is that normal? Is there a better way to compress the image that doesn't cause so much graininess? Since I just want to show a small thumbnail, and then a slightly bigger one, I feel Core Data would be great for storing this, since it should (theoretically) also support iCloud. But if it's going to look terrible, then I'll have to reconsider.
Two things: are you resizing the image to the right size, and have you tried UIImagePNGRepresentation()? That should compress it without losing quality.
If UIImagePNGRepresentation (which is lossless) is giving you bad images, then the problem is in your image-resizing code. Core Data is just giving you back what you put in, so if you get bad images out, it's because you put bad images in.
Is one of your iPhone/iPad devices Retina and the other isn't? If so, perhaps the problem is that you don't really want 48x48 pixel images; you want 48x48 point images (which means you'll need 2x assets, 96x96 pixels, for Retina-quality display).
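A hedged sketch of that resizing step (originalImage is the assumed full-size photo-library image): drawing into a 48x48 point context with a scale of 0.0 picks up the device's screen scale, so the bitmap comes out 96x96 pixels on Retina devices.

// originalImage is the full-size UIImage picked from the photo library (assumed).
CGSize targetSize = CGSizeMake(48.0, 48.0);                   // points, not pixels
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);  // 0.0 = use the main screen's scale
[originalImage drawInRect:CGRectMake(0.0, 0.0, targetSize.width, targetSize.height)];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Store the resized thumbnail, not the original, in Core Data.
NSData *thumbnailData = UIImagePNGRepresentation(thumbnail);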