What I'm doing is merging 2 images into a single image.
Here is the code.
UIGraphicsBeginImageContext(CGSizeMake(1024, 768));
[image1 drawInRect:CGRectMake(0, 0, 512, 768)];
[image2 drawInRect:CGRectMake(512, 0, 512, 768)];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But drawInRect: seems too slow.
Is there a faster way to merge images?
Option 0 - Make sure you draw only when you need to draw, and only what you need to draw.
Option 1
If you know the destination size (perhaps self.frame.size?), you can create one image from the two source images (flatten) at the destination size and avoid interpolation at draw time (see the sketch after this list). Doing so could:
Reduce the memory you need
Reduce the number of images you must draw
Avoid interpolation at draw time (high CPU cost if you want it to look good)
Look better - the composite can use high-quality interpolation.
Of course, this only makes sense when the composite varies at a frequency lower than it must be drawn.
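A rough sketch of that flattening step, assuming the view's bounds as the destination size (image1, image2, and the sizes are placeholders):

// Sketch: flatten image1 and image2 once, at the size they will actually be drawn at.
CGSize destSize = self.bounds.size; // whatever your real destination size is
UIGraphicsBeginImageContextWithOptions(destSize, YES, 0); // opaque, at screen scale
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
[image1 drawInRect:CGRectMake(0, 0, destSize.width / 2, destSize.height)];
[image2 drawInRect:CGRectMake(destSize.width / 2, 0, destSize.width / 2, destSize.height)];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// From now on, draw (or display) 'flattened' at 1:1, with no per-frame interpolation.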
Option 2
Even if you want two images and you know their sizes will not change, just resize them to the size they must be drawn at (but monitor your memory usage if you are enlarging them).
Option 3
If that's not an option, you could alter the CGContext's state and reduce the interpolation quality. If you're used to how similar CALayer transformations look, you would probably be satisfied with low quality or no interpolation.
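For example, something like this sketch inside -drawRect: (the rects are just the ones from the question):

// Sketch: trade visual quality for speed by lowering the context's interpolation quality.
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(ctx, kCGInterpolationLow); // or kCGInterpolationNone
[image1 drawInRect:CGRectMake(0, 0, 512, 768)];
[image2 drawInRect:CGRectMake(512, 0, 512, 768)];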
One thing you could do is not implement -drawRect: at all, but instead have two CALayers with your two images as their contents, then put one in front of the other. Not sure that would be faster, but I think it's likely (since CA could then handle the drawing asynchronously, on the GPU).
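Something along these lines (the frames and the hosting view are placeholders):

// Sketch: let Core Animation composite the two images instead of drawing them yourself.
CALayer *leftLayer = [CALayer layer];
leftLayer.frame = CGRectMake(0, 0, 512, 768);
leftLayer.contents = (id)image1.CGImage;

CALayer *rightLayer = [CALayer layer];
rightLayer.frame = CGRectMake(512, 0, 512, 768);
rightLayer.contents = (id)image2.CGImage;

[self.view.layer addSublayer:leftLayer];
[self.view.layer addSublayer:rightLayer];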
I am trying to save a view together with its subviews, but the saved image is a little bit blurry (especially the labels' text).
I tried all the solutions given on Stack Overflow, with no luck.
Can anyone help me with this?
I am using the code below:
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
And I'm getting blurred text; the picture quality is also low.
You could try a higher-resolution image. It should be fine if you scale a high-resolution image down, but scaling a low-resolution image up to a larger size will generally blur the image contents, as it stretches everything.
The preferred approach is [UIView snapshotViewAfterScreenUpdates:]. You should only use drawViewHierarchyInRect:afterScreenUpdates: if you plan to apply additional effects.
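For example, if all you need is to show the snapshot on screen (rather than obtain a UIImage), a sketch like this avoids the bitmap round trip entirely ('view' and its superview are placeholders):

// Sketch: a live snapshot view, suitable when you just need to display the result.
UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];
snapshot.frame = view.frame; // position it wherever you need
[view.superview addSubview:snapshot];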
That said, there are several likely causes, depending on how you're manipulating or saving the image. For example, saving text in JPEG format will cause blurriness. Rotating or scaling the image without great care can make the text blurry. Drawing the image incorrectly (for instance, failing to pixel-align it) can make the text blurry. If your process involves multiple steps, simplify the problem and validate the quality at each step. To discuss it further on Stack Overflow, you need to provide details on how you're manipulating and displaying the image, not just how you generate it.
Text is extremely susceptible to artifacts. If you must take pictures of it (something you generally should avoid if at all possible), you should make sure to manipulate it as little as possible. It is always better to manipulate the text before it's drawn rather than after.
I have two instances of a CALayer subclass.
The only difference between them is this line:
[self setTransform:CATransform3DMakeScale(2, 2, 2)];
What else do I need so that the large layer looks good at 2x scale?
PS (to avoid any confusion): the layers also include a few control buttons, shadows, and rounded corners to mimic the look of windows in a windowing system, but they are not NSWindow instances.
The short answer is, don't use transforms. Transforms scale the layer by magnifying it, without re-rendering.
You could get a very similar effect by using a CAShapeLayer and animating changes to its path. That would give you sharp rendering, however, because path animation does re-render the pixels.
I say "similar" effect because CAShapeLayers use a single lineWidth property for the whole layer. You can animate the line width between values, and use fractional values, but you'll have to do some fine-tuning to get the line thickness to animate up and down in proportion to the size of the shape. Another consideration is that the graphics system uses anti-aliasing to draw fractional-width paths, so when the line width is not an integer value the lines will look slightly soft. You could turn off antialiasing, but then they would look really jaggy.
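Here is a minimal sketch of that idea, using UIKit types for illustration (sizes, corner radii, and colors are placeholders; adapt for AppKit if these are Mac layers):

// Sketch: animate the shape layer's path instead of applying a scale transform.
CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.frame = CGRectMake(0, 0, 400, 300); // big enough for both path sizes
shapeLayer.fillColor = [UIColor whiteColor].CGColor;
shapeLayer.strokeColor = [UIColor blackColor].CGColor;
shapeLayer.lineWidth = 1.0;

UIBezierPath *smallPath = [UIBezierPath bezierPathWithRoundedRect:CGRectMake(0, 0, 200, 150) cornerRadius:8];
shapeLayer.path = smallPath.CGPath;

// "Scale up" by animating to a larger path; each frame is re-rendered sharply.
UIBezierPath *bigPath = [UIBezierPath bezierPathWithRoundedRect:CGRectMake(0, 0, 400, 300) cornerRadius:16];
CABasicAnimation *grow = [CABasicAnimation animationWithKeyPath:@"path"];
grow.fromValue = (__bridge id)smallPath.CGPath;
grow.toValue = (__bridge id)bigPath.CGPath;
grow.duration = 0.25;
[shapeLayer addAnimation:grow forKey:@"grow"];
shapeLayer.path = bigPath.CGPath; // update the model value so the change sticks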
I am using UIImageView to display thumbnails of images that can then be selected to be viewed at full size. The UIImageView has its content mode set to aspect fit.
The images are usually scaled down from around 500px x 500px to 100px x 100px. On the retina iPad they display really well, while on the iPad 2 they are badly aliased until the size gets closer to the native image size.
Examples (screenshots omitted): the original image, the retina iPad rendering at 100px x 100px, and the iPad 2 rendering at 100px x 100px.
The difference between iPad 2 and new iPad might just be the screen resolution or could be that the GPU is better equipped to scale images. Either way, the iPad 2 rendering is very poor.
I have tried first reducing the image size by creating a new context, setting the interpolation quality to high and drawing the image into the context. In this case, the image looks fine on both iPads.
Before I continue down the image copy/resize avenue, I wanted to check there wasn't something simpler I was missing. I appreciate that UIImage isn't there to be scaled, but I was under the impression that UIImageView was there to handle scaling; at the moment, though, it doesn't seem to be doing a good job of scaling down. What (if anything) am I missing?
Update: the drop shadow on the rendered/resized images is added in code. Disabling it made no difference to the quality of the scaling.
Another approach I've tried that does seem to improve things is to set the minificationFilter:
[imageView.layer setMinificationFilter:kCAFilterTrilinear];
The quality is certainly improved and I haven't noticed a performance hit.
Applying a small minification filter bias can help out with this if you don't want to resample the image yourself:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.minificationFilterBias = 0.1
The left image has no filtering applied to it. The right image has a 0.1 filter bias.
Note that no explicit rasterization is required.
Playing around with very small values, you can usually come up with a value that smooths out the scaling artifacts just enough, and it's a lot easier than resizing the bitmap yourself. Certainly, you lose detail as the bias increases, so values even less than 0.1 are probably sufficient, though it all depends on the size of the image view's frame that's displaying the image.
Just realize that trilinear filtering effectively enables mipmapping on the layer, which basically means it generates extra copies of the bitmap at progressively smaller scales. It's a very common technique used in rendering to increase render speed and also reduce scaling aliasing. The tradeoff is that it requires more memory, though the memory usage for successive downsampled bitmaps reduces exponentially.
Another potential advantage to this technique, though I have not tried it myself, is that you can animate minificationFilterBias. So if you're going to be scaling an image view down quite a lot as part of an animation, consider also animating the filter bias from 0.0 to whatever small value you've determined is appropriate for the scaled down size.
Finally, as others have noted, if your source image is very large, this technique isn't appropriate if overused, because Core Animation will always keep the original bitmap around. In most cases it's better to resize the image and then discard the source image instead of using mipmapping, but for one-offs or cases where your image views are going to be deallocated quickly enough, this is fine.
If you just put a large image in a small image view, it will look really bad.
The solution is to properly resize the image. I'll add an example function that does the trick:
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    // Create a bitmap context at the destination size (scale 0 = screen scale).
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Use high-quality interpolation for the downscale.
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

    // CGContextDrawImage uses a flipped coordinate system relative to UIKit,
    // so flip the context vertically before drawing the CGImage.
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    // Draw the source image into the destination rect (this does the actual resize).
    CGContextDrawImage(context, newRect, imageRef);

    // Pull the resized bitmap back out as a UIImage.
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();

    return newImage;
}
This function might take some time, so you might want to save the result to a cache file.
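If it helps, here is a rough sketch of the cache-file idea (the file name and the 'original' image are just placeholders):

// Sketch: write the resized image to the Caches directory so it's only generated once.
NSString *cachesDir = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES).firstObject;
NSString *cachePath = [cachesDir stringByAppendingPathComponent:@"thumb-100x100.png"]; // hypothetical name
UIImage *thumb = [UIImage imageWithContentsOfFile:cachePath];
if (thumb == nil) {
    thumb = [self resizeImage:original newSize:CGSizeMake(100, 100)];
    [UIImagePNGRepresentation(thumb) writeToFile:cachePath atomically:YES];
}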
If you're not afraid of wasting memory and know what you're doing for a particular case, this works beautifully.
myView.layer.shouldRasterize = YES;
myView.layer.rasterizationScale = 2;
The resulting quality is much better than setMinificationFilter.
I am using images that are 256x256 and scaling them to something like 48 px. Obviously a saner solution here would be to downscale the images to the exact destination size.
The following helped me:
imageView.layer.minificationFilter = kCAFilterTrilinear
imageView.layer.shouldRasterize = true
imageView.layer.rasterizationScale = UIScreen.mainScreen().scale
Keep an eye on performance if used in scroll lists.
I am working on an iOS App that visualizes data as a line-graph. The graph is drawn as a CGPath in a fullscreen custom UIView and contains at most 320 data-points. The data is frequently updated and the graph needs to be redrawn accordingly – a refresh rate of 10/sec would be nice.
So far so easy. It seems however, that my approach takes a lot of CPU time. Refreshing the graph with 320 segments at 10 times per second results in 45% CPU load for the process on an iPhone 4S.
Maybe I underestimate the graphics-work under the hood, but to me the CPU load seems a lot for that task.
Below is my drawRect: implementation, which gets called each time a new set of data is ready. N holds the number of points, and points is a CGPoint* vector with the coordinates to draw.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // set attributes
    CGContextSetStrokeColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextSetLineWidth(context, 1.f);

    // create path
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddLines(path, NULL, points, N+1);

    // stroke path
    CGContextAddPath(context, path);
    CGContextStrokePath(context);

    // clean up
    CGPathRelease(path);
}
I tried rendering the path into an offscreen CGContext first before adding it to the current layer, as suggested here, but without any positive result. I also fiddled with an approach drawing to the CALayer directly, but that too made no difference.
Any suggestions on how to improve performance for this task? Or is the rendering simply more work for the CPU than I realize? Would OpenGL make any sense/difference?
Thanks /Andi
Update: I also tried using UIBezierPath instead of CGPath. This post here gives a nice explanation why that didn't help. Tweaking CGContextSetMiterLimit et al. also didn't bring great relief.
Update #2: I eventually switched to OpenGL. It was a steep and frustrating learning curve, but the performance boost is just incredible. However, CoreGraphics' anti-aliasing algorithms do a nicer job than what can be achieved with 4x-multisampling in OpenGL.
"This post here gives a nice explanation why that didn't help."
It also explains why your drawRect: method is slow.
You're creating a CGPath object every time you draw. You don't need to do that; you only need to create a new CGPath object every time you modify the set of points. Move the creation of the CGPath to a new method that you call only when the set of points changes, and keep the CGPath object around between calls to that method. Have drawRect: simply retrieve it.
You already found that rendering is the most expensive thing you're doing, which is good: You can't make rendering faster, can you? Indeed, drawRect: should ideally do nothing but rendering, so your goal should be to drive the time spent rendering as close as possible to 100%—which means moving everything else, as much as possible, out of drawing code.
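Sketched out, that restructuring might look something like this (the method and ivar names are made up for illustration):

// Sketch: build the path only when the data changes; drawRect: just strokes it.
// _graphPath is an ivar of type CGPathRef.
- (void)setPoints:(CGPoint *)newPoints count:(size_t)count {
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddLines(path, NULL, newPoints, count);
    CGPathRelease(_graphPath); // release the previous path, if any
    _graphPath = path;         // keep the new one around between draws
    [self setNeedsDisplay];    // redraw with the new data
}

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextSetLineWidth(context, 1.f);
    CGContextAddPath(context, _graphPath); // just render the path that already exists
    CGContextStrokePath(context);
}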
Depending on how you build your path, it may be that drawing 300 separate paths is faster than one path with 300 points. The reason is that the drawing algorithm often has to figure out overlapping lines and how to make the intersections look 'perfect', when perhaps you only want the lines to opaquely overlap each other. Many overlap and intersection algorithms are roughly O(N^2) in complexity, so the speed of drawing scales with the square of the number of points in one path.
It depends on the exact options (some of them default) that you use. You need to try it.
tl;dr: You can set the drawsAsynchronously property of the underlying CALayer, and your CoreGraphics calls will use the GPU for rendering.
There is a way to control the rendering policy in CoreGraphics. By default, all CG calls are done via CPU rendering, which is fine for smaller operations, but is hugely inefficient for larger render jobs.
In that case, simply setting the drawsAsynchronously property of the underlying CALayer switches the Core Graphics rendering engine to a GPU-based (Metal) renderer and vastly improves performance. This is true on both macOS and iOS.
I ran a few performance comparisons (involving several different CG calls, including CGContextDrawRadialGradient, CGContextStrokePath, and CoreText rendering using CTFrameDraw), and for larger render targets there was a massive performance increase of over 10x.
As can be expected, as the render target shrinks the GPU advantage fades until at some point (generally for render target smaller than 100x100 or so pixels), the CPU actually achieves a higher framerate than the GPU. YMMV and of course this will depend on CPU/GPU architectures and such.
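For reference, opting in is just a property set on the view's backing layer (myView here is a placeholder):

// Sketch: ask Core Animation to defer the layer's Core Graphics drawing.
myView.layer.drawsAsynchronously = YES;
// Subsequent drawing in -drawRect: is queued and executed asynchronously.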
Have you tried using UIBezierPath instead? UIBezierPath uses CGPath under the hood, but it'd be interesting to see if performance differs for some subtle reason. From Apple's documentation:
For creating paths in iOS, it is recommended that you use UIBezierPath instead of CGPath functions unless you need some of the capabilities that only Core Graphics provides, such as adding ellipses to paths. For more on creating and rendering paths in UIKit, see "Drawing Shapes Using Bezier Paths."
I would also try setting different properties on the CGContext, in particular different line join styles using CGContextSetLineJoin(), to see if that makes any difference.
Have you profiled your code using the Time Profiler instrument in Instruments? That's probably the best way to find where the performance bottleneck is actually occurring, even when the bottleneck is somewhere inside the system frameworks.
I am no expert on this, but my first suspicion would be that updating 'points' is taking the time rather than the rendering itself. In that case, you could simply stop updating the points, repeatedly render the same path, and see if it takes nearly the same CPU time. If not, you can improve performance by focusing on the updating algorithm.
If it IS truly a rendering problem, I think OpenGL should certainly improve performance because, in theory, it will render all 320 lines at the same time.
This does exactly what it needs to, except that it takes about 400 milliseconds, which is 350 milliseconds too much:
- (void)updateCompositeImage { // blends together the background and the sprites
    UIGraphicsBeginImageContext(CGSizeMake(480, 320));

    [bgImageView.image drawInRect:CGRectMake(0, 0, 480, 320)];
    for (int i = 0; i < numSprites; i++) {
        [spriteImage[spriteType[i]] drawInRect:spriteRect[i]
                                     blendMode:kCGBlendModeScreen
                                         alpha:spriteAlpha[i]];
    }

    compositeImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
The images are fairly small, and there are only three of them (the for loop only iterates twice)
Is there any way of doing this faster? While still being able to use kCGBlendModeScreen and alpha?
You can:
get the UIImages' CGImages
draw them into a CGBitmapContext
produce an image from that
Using Core Graphics by itself may be faster. The other bonus is that you can perform the rendering on a background thread (see the sketch below). Also consider how you can optimize that loop, and profile using Instruments.
Other considerations:
Can you reduce the interpolation quality?
Are the source images resized in any way? (It can help if you resize them.)
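A rough sketch of what that could look like (bgImage, spriteImage, spriteRect, and compositeImageView stand in for your own objects, and coordinates are in Core Graphics' bottom-left-origin space):

// Sketch: composite with a CGBitmapContext, which is safe to do on a background queue.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, 480, 320, 8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(ctx, CGRectMake(0, 0, 480, 320), bgImage.CGImage);

    CGContextSetBlendMode(ctx, kCGBlendModeScreen);
    CGContextSetAlpha(ctx, 0.8f); // per-sprite alpha would be set here
    CGContextDrawImage(ctx, spriteRect, spriteImage.CGImage);

    CGImageRef composited = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);

    dispatch_async(dispatch_get_main_queue(), ^{
        compositeImageView.image = [UIImage imageWithCGImage:composited];
        CGImageRelease(composited);
    });
});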
drawInRect: is slow. Period. Even with small images it's grossly inefficient.
If you are doing a lot of repeated drawing, have a look at CGLayer, which is designed to facilitate repeated rendering of the same bits.
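A minimal sketch of the CGLayer idea (ctx, the sprite image, and the rects are placeholders; this assumes you're inside your compositing/drawing code):

// Sketch: render a sprite once into a CGLayer, then stamp it cheaply many times.
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGLayerRef spriteLayer = CGLayerCreateWithContext(ctx, CGSizeMake(64, 64), NULL);
CGContextRef layerCtx = CGLayerGetContext(spriteLayer);
CGContextDrawImage(layerCtx, CGRectMake(0, 0, 64, 64), spriteImage.CGImage); // drawn once

CGContextSetBlendMode(ctx, kCGBlendModeScreen); // blend mode still applies when stamping
for (int i = 0; i < numSprites; i++) {
    CGContextDrawLayerInRect(ctx, spriteRect[i], spriteLayer); // cheap repeated draws
}
CGLayerRelease(spriteLayer);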