Crop image from a certain portion of the screen in iPhone programmatically - Objective-C

NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
CGSize contextSize = CGSizeMake(320, 400);
// Begin an image context of the desired size and render the view's layer into it
UIGraphicsBeginImageContext(contextSize);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setSaveImage:savedImg];
[pool drain];
I use this code to extract some part of the image from the main screen.
In UIGraphicsBeginImageContext I can only pass a size. Is there any way to use a CGRect, or some other way, to extract an image from a specific portion of the screen, i.e. (x, y, 320, 400) or something like that?

Hope this helps:
// Create new image context (retina safe)
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
// Create a rect for the full image, offset so the region you want starts at the context origin
CGRect rect = CGRectMake(-x, -y, existingImage.size.width, existingImage.size.height);
// Draw the image into the rect; anything that falls outside the context is cropped away
[existingImage drawInRect:rect];
// Saving the image, ending image context
UIImage * newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
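If you prefer to keep the question's approach of rendering the whole view first, an alternative sketch is to crop the rendered bitmap afterwards with CGImageCreateWithImageInRect; here fullImage is assumed to be the screenshot produced as in the question, and the crop rect is only an illustrative example:
// Crop rect in points; convert to pixels using the image's scale before cropping the CGImage
CGRect cropRect = CGRectMake(0, 40, 320, 400);
CGFloat imageScale = fullImage.scale;
CGRect pixelRect = CGRectMake(cropRect.origin.x * imageScale,
                              cropRect.origin.y * imageScale,
                              cropRect.size.width * imageScale,
                              cropRect.size.height * imageScale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(fullImage.CGImage, pixelRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                             scale:imageScale
                                       orientation:fullImage.imageOrientation];
CGImageRelease(croppedRef);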

This question is really a duplicate of several other questions including this: How to crop the UIImage?, but since it took me a while to find a solution, I will cross post again.
In my quest for a solution that I could more easily understand (and one written in Swift), I arrived at this:
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import UIKit
import AVFoundation

class Image {
    class func crop(image: UIImage, crop source: CGRect, aspect: CGSize, outputExtent: CGSize) -> UIImage {
        let sourceRect = AVMakeRectWithAspectRatioInsideRect(aspect, source)
        let targetRect = AVMakeRectWithAspectRatioInsideRect(aspect, CGRect(origin: CGPointZero, size: outputExtent))
        let opaque = true, deviceScale: CGFloat = 0.0 // use scale of device's main screen
        UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)
        // Scale factor that maps the source crop rect onto the output rect
        let scale = max(
            targetRect.size.width / sourceRect.size.width,
            targetRect.size.height / sourceRect.size.height)
        // Offset the drawing so the crop region lands at the origin, scaled to the output size
        let drawRect = CGRect(
            x: -sourceRect.origin.x * scale,
            y: -sourceRect.origin.y * scale,
            width: image.size.width * scale,
            height: image.size.height * scale)
        image.drawInRect(drawRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage
    }
}
There are a couple of things that I found confusing: the separate concerns of cropping and resizing. Cropping is handled by the origin of the rect that you pass to drawInRect, and scaling is handled by its size. In my case, I needed to relate the size of the cropping rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and this needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think you can use this code to handle cropping / zooming by explicitly passing the scale factor above as the scale parameter of the image context. By default, UIKit applies a multiplier based on the screen resolution.
Finally, it should be noted that this UIKit approach is higher level than the Core Graphics / Quartz and Core Image approaches, and it seems to handle image orientation issues. It is also worth mentioning that it is pretty fast, second only to ImageIO, according to this post: http://nshipster.com/image-resizing/
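For reference, here is a rough Objective-C sketch of the same crop-and-scale idea (assuming sourceRect is the crop region in the source image and targetSize is the desired output size with the same aspect ratio; this is an illustration, not part of the original answer):
UIGraphicsBeginImageContextWithOptions(targetSize, YES, 0.0);
// scale factor = output / input
CGFloat scale = targetSize.width / sourceRect.size.width;
// Offset the drawing so the crop region lands at the context origin,
// and scale the whole image so that region fills the context
CGRect drawRect = CGRectMake(-sourceRect.origin.x * scale,
                             -sourceRect.origin.y * scale,
                             image.size.width * scale,
                             image.size.height * scale);
[image drawInRect:drawRect];
UIImage *croppedAndScaled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();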

Related

coordinate computation of the image thumbnail

This is a code snippet for creating a thumbnail-sized image (from an original large image) and placing it appropriately on top of a table view cell. As I was studying the code I got stuck at the part where the thumbnail is given a position by setting its abscissa and ordinate. In the method -(void)setThumbnailDataFromImage:(UIImage *)image they set the dimensions and coordinates for the project thumbnail:
-(void)setThumbnailDataFromImage:(UIImage *)image{
CGSize origImageSize= [image size];
// the rectangle of the thumbnail
CGRect newRect= CGRectMake(0, 0, 40, 40);
// figure out a scaling ratio to make sure we maintain the same aspect ratio
float ratio= MAX(newRect.size.width/origImageSize.width, newRect.size.height/origImageSize.height);
// Create a transparent bitmap context with a scaling factor equal to that of the screen
UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
// create a path that is a rounded rectangle
UIBezierPath *path= [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
// make all the subsequent drawing to clip to this rounded rectangle
[path addClip];
// center the image in the thumbnail rectangle
CGRect projectRect;
projectRect.size.width=ratio * origImageSize.width;
projectRect.size.height= ratio * origImageSize.height;
projectRect.origin.x= (newRect.size.width- projectRect.size.width)/2;
projectRect.origin.y= (newRect.size.height- projectRect.size.height)/2;
// draw the image on it
[image drawInRect:projectRect];
// get the image from the image context, keep it as our thumbnail
UIImage *smallImage= UIGraphicsGetImageFromCurrentImageContext();
[self setThumbnail:smallImage];
// get the PNG representation of the image and set it as our archivable data
NSData *data= UIImagePNGRepresentation(smallImage);
[self setThumbnailData:data];
// Cleanup image context resources, we're done
UIGraphicsEndImageContext();
}
I understand the width and height computation, where we multiply origImageSize by the scaling factor/ratio.
But then the following is used to give the thumbnail a position:
projectRect.origin.x= (newRect.size.width- projectRect.size.width)/2;
projectRect.origin.y= (newRect.size.height- projectRect.size.height)/2;
This I fail to understand; I cannot wrap my head around it.
Is this part of the centering process? I mean, are we using a mathematical relation here to position the thumbnail, or is it some arbitrary calculation that could have been anything? Am I missing some fundamental idea behind these two lines of code?
Those two lines are standard code for centering something, although they aren’t quite written in the most general way. You normally want to use:
projectRect.origin.x = newRect.origin.x + newRect.size.width / 2.0 - projectRect.size.width / 2.0;
projectRect.origin.y = newRect.origin.y + newRect.size.height / 2.0 - projectRect.size.height / 2.0;
In your case the author knows the origin is 0,0, so they omitted the first term in each line.
To center one rectangle inside another, you want their centers to line up along each axis. So you take, say, half the container's width (the center of the outer rectangle) and subtract half the inner rectangle's width (which takes you to the left edge of the inner rectangle); that gives you where the inner rectangle's left edge (i.e. its x origin) should be when it is correctly centered.
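The same idea can be wrapped in a small helper, if you find that clearer (a sketch, not part of the original code):
// Center a rectangle of innerSize inside outerRect by lining up their midpoints
static CGRect centeredRect(CGSize innerSize, CGRect outerRect) {
    return CGRectMake(CGRectGetMidX(outerRect) - innerSize.width / 2.0,
                      CGRectGetMidY(outerRect) - innerSize.height / 2.0,
                      innerSize.width,
                      innerSize.height);
}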

Using cornerRadius on a UIImageView in a UITableViewCell

I'm using a UIImageView for each of my UITableViewCells, as thumbnails. My code uses SDWebImage to asynchronously grab those images from my backend, load them in, and then cache them. This is working fine.
My UIImageView is a 50x50 square, created in Interface Builder. Its background is an opaque white (the same as my UITableViewCell background color, for performance). However, I'd like to use smooth corners for better aesthetics. If I do this:
UIImageView *itemImageView = (UIImageView *)[cell viewWithTag:100];
itemImageView.layer.cornerRadius = 2.5f;
itemImageView.layer.masksToBounds = NO;
itemImageView.clipsToBounds = YES;
My table view drops around 5-6 frames immediately, capping at about 56 FPS during fast scrolling (which is okay), but when I drag and pull the refresh control, it lags a bit and drops to around 40 FPS. If I remove the cornerRadius line, all is fine and there is no lag. This was tested on an iPod touch 5G using Instruments.
Is there any other way I could have a rounded UIImageView for my cells and not suffer a performance hit? I'd already optimized my cellForRowAtIndexPath and I get 56-59 FPS while fast scrolling with no cornerRadius.
Yes, that's because cornerRadius combined with clipsToBounds requires offscreen rendering; I suggest you read these answers from one of my questions, where I also quote two WWDC sessions that you should watch. The best thing you can do is grab the image right after it is downloaded and dispatch a method on another thread that rounds it. It is preferable to work on the image rather than on the image view.
// Get your image somehow
UIImage *image = [UIImage imageNamed:@"image.jpg"];
// Begin a new image context that will produce the image with rounded corners
// (here with the size of a UIImageView)
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 1.0);
// Add a clip before drawing anything, in the shape of a rounded rect
[[UIBezierPath bezierPathWithRoundedRect:imageView.bounds
cornerRadius:10.0] addClip];
// Draw your image
[image drawInRect:imageView.bounds];
// Get the image, here setting the UIImageView image
imageView.image = UIGraphicsGetImageFromCurrentImageContext();
// We're done drawing; end the image context
UIGraphicsEndImageContext();
Method grabbed here
You can also subclass UITableViewCell and override its drawRect: method; a sketch of that idea follows.
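A minimal sketch of that drawRect: idea, using a small custom view that would be added to the cell's contentView (class and property names here are illustrative, not from the original answer):
// Draws its image with a rounded-rect clip in drawRect:, so no layer mask
// or cornerRadius is needed on a live layer.
@interface RoundedThumbView : UIView
@property (nonatomic, strong) UIImage *image;
@end

@implementation RoundedThumbView
- (void)drawRect:(CGRect)rect {
    [[UIBezierPath bezierPathWithRoundedRect:self.bounds cornerRadius:2.5] addClip];
    [self.image drawInRect:self.bounds];
}
@end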
A dirty but very effective trick is to draw a mask in Photoshop with a transparent interior and a surround that matches the cell's background color, and then add it as a second, non-opaque image view with a clear background color on top of the image view that holds the photo.
There is another good solution for this; I have used it a few times in my projects. If you want rounded corners or a similar effect, you can simply place a cover image in front of your main image.
For example, to make rounded corners you need a square cover image with a circle cut out of its center.
Using this method you will get 60 fps while scrolling a UITableView or UICollectionView, because it requires no offscreen rendering to customize the UIImageViews (avatars and the like).
Non-blocking solution
As a follow-up to Andrea's response, here is a function that runs his code in the background.
+ (void)roundedImage:(UIImage *)image
completion:(void (^)(UIImage *image))completion {
dispatch_async( dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
// Begin a new image context that will produce the image with rounded corners
// (here with the size of the original image)
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
CGRect rect = CGRectMake(0, 0, image.size.width,image.size.height);
// Add a clip before drawing anything, in the shape of a rounded rect
[[UIBezierPath bezierPathWithRoundedRect:rect
cornerRadius:image.size.width/2] addClip];
// Draw your image
[image drawInRect:rect];
// Get the image, here setting the UIImageView image
UIImage *roundedImage = UIGraphicsGetImageFromCurrentImageContext();
// We're done drawing; end the image context
UIGraphicsEndImageContext();
dispatch_async( dispatch_get_main_queue(), ^{
if (completion) {
completion(roundedImage);
}
});
});
}
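It might be called from wherever the download completes, for example (ImageHelper and itemImageView are illustrative names, not from the answer above):
// Round the freshly downloaded image off the main thread; the completion
// block is invoked on the main queue, so it is safe to touch UIKit there.
[ImageHelper roundedImage:downloadedImage completion:^(UIImage *rounded) {
    self.itemImageView.image = rounded;
}];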
Swift 3 version of thedeveloper3124's answer
func roundedImage(image: UIImage, completion: @escaping ((UIImage?) -> Void)) {
DispatchQueue.global().async {
// Begin a new image context that will produce the image with rounded corners
// (here with the size of the original image)
UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
let rect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
// Add a clip before drawing anything, in the shape of a rounded rect
UIBezierPath(roundedRect: rect, cornerRadius: image.size.width/2).addClip()
// Draw your image
image.draw(in: rect)
// Get the image from the context
guard let roundedImage = UIGraphicsGetImageFromCurrentImageContext() else {
print("UIGraphicsGetImageFromCurrentImageContext failed")
UIGraphicsEndImageContext() // balance the Begin call before bailing out
completion(nil)
return
}
// We're done drawing; end the image context
UIGraphicsEndImageContext()
DispatchQueue.main.async {
completion(roundedImage)
}
}
}

Issue with "renderincontext" with opengl views

I have a problem with OpenGL views. I have two OpenGL views; the second view is added as a subview of the main view. The two OpenGL views are drawn in two different OpenGL contexts. I need to capture the screen with the two OpenGL views.
The issue is that if I try to render one CAEAGLLayer in a context as below:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 1*(self.frame.size.width*0.5), 1*(self.frame.size.height*0.5));
CGContextScaleCTM(context, 3, 3);
CGContextTranslateCTM(context, abcd, abcd);
CAEAGLLayer *eaglLayer = (CAEAGLLayer*) self.myOwnView.layer;
[eaglLayer renderInContext:context];
it does not work. If I inspect the context (by outputting it as an image), the contents of the OpenGL layer are missing, but I do find the toolbar and the 2D images attached to the view in the output image. I am not sure what the problem is. Please help.
I had a similar problem and found a much more elegant solution. Basically, you subclass CAEAGLLayer, and add your own implementation of renderInContext that simply asks the OpenGL view to render the contents using glReadPixels. The beauty is that now you can call renderInContext on any layer in the hierarchy, and the result is a fully composed, perfect looking screenshot that includes your OpenGL views in it!
Our renderInContext in the subclassed CAEAGLLayer is:
- (void)renderInContext:(CGContextRef)ctx
{
[super renderInContext: ctx];
[self.delegate renderInContext: ctx];
}
Then, in the OpenGL view we replace layerClass so that it returns our subclass instead of the plain vanilla CAEAGLLayer:
+ (Class)layerClass
{
return [MyCAEAGLLayer class];
}
We add a method in the view to actually render the contents of the view into the context. Note that this code MUST run after your GL view has been rendered, but before you call presentRenderbuffer so that the render buffer will contain your frame. Otherwise the resulting image will most likely be empty (you may see different behavior between the device and the simulator on this particular issue).
- (void) renderInContext: (CGContextRef) context
{
GLint backingWidth, backingHeight;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
CGFloat scale = self.contentScaleFactor;
NSInteger widthInPoints, heightInPoints;
widthInPoints = width / scale;
heightInPoints = height / scale;
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
}
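To make the ordering constraint above concrete, here is a rough sketch of where the capture has to happen in a typical render pass (drawFrame and _context are assumed names):
[self drawFrame];                               // issue the GL commands for this frame
// The renderbuffer now holds the finished frame, so a glReadPixels-based
// capture (such as the renderInContext: above) is valid at this point.
[_context presentRenderbuffer:GL_RENDERBUFFER]; // after presenting, the buffer's contents
                                                // may be invalidated unless retained backing is on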
Finally, in order to grab a screenshot you use renderInContext in the usual fashion. Of course the beauty is that you don't need to grab the OpenGL view directly; you can grab one of the superviews of the OpenGL view and get a composed screenshot that includes the OpenGL view along with anything else next to it or on top of it:
UIGraphicsBeginImageContextWithOptions(superviewToGrab.bounds.size, YES, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
[superviewToGrab.layer renderInContext: context]; // This recursively calls renderInContext on all the sublayers, including your OpenGL layer(s)
CGImageRef screenShot = UIGraphicsGetImageFromCurrentImageContext().CGImage;
UIGraphicsEndImageContext();
This question has already been settled, but I wanted to note that Idoogy's answer is actually dangerous and a poor choice for most use cases.
Rather than subclass CAEAGLLayer and create a new delegate object, you can use the existing delegate methods which accomplish exactly the same thing. For example:
- (void) drawLayer:(CALayer *) layer inContext:(CGContextRef)ctx;
is a great method to implement in your GL-based views. You can implement it in much the same way he suggests, using glReadPixels: just make sure to set the retained-backing property on your view to YES, so that you can call the above method at any time without having to worry about the buffer having been invalidated by presentation for display.
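For reference, retained backing is enabled where the EAGL layer's drawable properties are configured, typically before the renderbuffer storage is allocated; a minimal sketch:
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.drawableProperties = @{
    kEAGLDrawablePropertyRetainedBacking : @YES,  // keep contents after presentRenderbuffer:
    kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8
};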
Subclassing CAEAGLLayer messes with the existing UIView / CALayer delegate relationship: in most cases, setting the delegate object on your custom layer will result in your UIView being excluded from the view hierarchy. Thus, code like:
customLayerView = [[CustomLayerView alloc] initWithFrame:someFrame];
[someSuperview addSubview:customLayerView];
will result in a weird, one-way superview-subview relationship, since the delegate methods that UIView relies on won't be implemented. (Your superview will still have the sublayer from your custom view, though).
So, instead of subclassing CAEAGLLayer, just implement some of the delegate methods. Apple lays it out for you here: https://developer.apple.com/library/ios/documentation/QuartzCore/Reference/CALayerDelegate_protocol/Reference/Reference.html#//apple_ref/doc/uid/TP40012871
All the best,
Sam
I think http://developer.apple.com/library/ios/#qa/qa1704/_index.html provides what you want.

Divide UIImage into two parts along a UIBezierPath

How can I divide this UIImage into two parts along the black line? The upper contour is defined by a UIBezierPath.
I need to get two resulting UIImages, so is this possible?
The following set of routines creates versions of a UIImage with either only the content inside a path, or only the content outside that path.
Both make use of the compositeImage method, which uses CGBlendMode. CGBlendMode is very powerful for masking anything you can draw against anything else you can draw. Calling compositeImage: with other blend modes can have interesting (if not always useful) effects. See the CGContext Reference for all the modes.
The clipping method I described in my comment to your OP does work and is probably faster, but only if you have UIBezierPaths defining all the regions you want to clip.
- (UIImage*) compositeImage:(UIImage*) sourceImage onPath:(UIBezierPath*) path usingBlendMode:(CGBlendMode) blend;
{
// Create a new image of the same size as the source.
UIGraphicsBeginImageContext([sourceImage size]);
// First draw an opaque path...
[path fill];
// ...then composite with the image.
[sourceImage drawAtPoint:CGPointZero blendMode:blend alpha:1.0];
// With drawing complete, store the composited image for later use.
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
// Graphics contexts must be ended manually.
UIGraphicsEndImageContext();
return maskedImage;
}
- (UIImage*) maskImage:(UIImage*) sourceImage toAreaInsidePath:(UIBezierPath*) maskPath;
{
return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceIn];
}
- (UIImage*) maskImage:(UIImage*) sourceImage toAreaOutsidePath:(UIBezierPath*) maskPath;
{
return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceOut];
}
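For the original question, the two halves can then be produced like this (dividingPath is assumed to be a closed UIBezierPath enclosing the upper region):
UIImage *upperPart = [self maskImage:sourceImage toAreaInsidePath:dividingPath];
UIImage *lowerPart = [self maskImage:sourceImage toAreaOutsidePath:dividingPath];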
I tested clipping, and in a few different tests it was 25% slower than masking to achieve the same result as the [maskImage: toAreaInsidePath:] method in my other answer. For completeness I include it here, but please don't use it without a good reason.
- (UIImage*) clipImage:(UIImage*) sourceImage toPath:(UIBezierPath*) path;
{
// Create a new image of the same size as the source.
UIGraphicsBeginImageContext([sourceImage size]);
// Clipping means drawing only happens within the path.
[path addClip];
// Draw the image to the context.
[sourceImage drawAtPoint:CGPointZero];
// With drawing complete, store the composited image for later use.
UIImage *clippedImage = UIGraphicsGetImageFromCurrentImageContext();
// Graphics contexts must be ended manually.
UIGraphicsEndImageContext();
return clippedImage;
}
This can be done, but it requires a little coordinate geometry. Let's consider the case of the upper image. First, determine the bottommost end point of the UIBezierPath and use UIGraphicsBeginImageContext to get the top part of the image above the line.
Now, assuming that your line is straight, move pixel by pixel along the line, drawing vertical strokes of clearColor (the loop below handles the top portion; proceed along similar lines for the bottom portion):
for (int currentPixel_x = 0; currentPixel_x < your_ui_image_top.size.width; currentPixel_x++) {
UIGraphicsBeginImageContext(your_ui_image_top.size);
[your_ui_image_top drawInRect:CGRectMake(0, 0, your_ui_image_top.size.width, your_ui_image_top.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 1.0);
CGContextSetBlendMode(UIGraphicsGetCurrentContext(),kCGBlendModeClear);
CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(),[UIColor clearColor].CGColor);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, m*currentPixel_x + c);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, your_ui_image_top.size.height);
CGContextStrokePath(UIGraphicsGetCurrentContext());
your_ui_image_top = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Your UIBezierPath will have to be converted to a straight line of the form y = m*x + c. The x in this equation is currentPixel_x above. Iterate through the width of the image, increasing currentPixel_x by 1 each time. next_y_point_on_your_line is calculated as:
next_y_point_on_your_line = m*currentPixel_x + c
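If the dividing line is known by its two endpoints, m and c can be derived as follows (a sketch; it assumes the line is not vertical, and (x0, y0), (x1, y1) are illustrative names):
CGFloat m = (y1 - y0) / (x1 - x0);  // slope
CGFloat c = y0 - m * x0;            // y-intercept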
Each vertical stroke will be 1 pixel wide, and its height will depend on how you traverse the line. After enough iterations, everything below the line will have been cleared from the image.
There are multiple ways to draw the clear strokes, and this is just one way of going about it. You can also draw clear strokes parallel to the given path if that gives better results.
Another way is to set the alpha of the pixels below the line to 0.

Creating thumbnail for an image grid

I'm building an app like Apple's Photos app on the iPad. I have large full-screen images and I show them using a scroll view to manage zooming and paging. The main problem happens when I try to create a grid of thumbnails of the images. I create them as UIImageViews overlapped on UIButtons. All works great, but when I run the app on the iPad it requires a lot of memory; I suppose this depends on the rescaling of the image. Is there a way to create a UIImageView with a small image, rescaled from the larger one, without using so much memory?
You can use UIGraphics to create a thumbnail. Here's the code to do it (length, sideFull, clippedRect, widthGreaterThanHeight, mainImage and mainImageView are assumed to be defined elsewhere):
UIGraphicsBeginImageContext(CGSizeMake(length, length));
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextClipToRect( currentContext, clippedRect);
CGFloat scaleFactor = length/sideFull;
if (widthGreaterThanHeight) {
//a landscape image – make context shift the original image to the left when drawn into the context
CGContextTranslateCTM(currentContext, -((mainImage.size.width - sideFull) / 2) * scaleFactor, 0);
}
else {
//a portrait image – make context shift the original image upwards when drawn into the context
CGContextTranslateCTM(currentContext, 0, -((mainImage.size.height - sideFull) / 2) * scaleFactor);
}
//this will automatically scale any CGImage down/up to the required thumbnail side (length) when the CGImage gets drawn into the context on the next line of code
CGContextScaleCTM(currentContext, scaleFactor, scaleFactor);
[mainImageView.layer renderInContext:currentContext];
UIImage* thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
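If you don't need the clipping and centering details above, a simpler self-contained variant of the same idea (scale the large image down once and keep only the small bitmap around for the grid) could look like this; it is a sketch, not the code from the answer:
- (UIImage *)thumbnailFromImage:(UIImage *)image side:(CGFloat)side
{
    // Begin a small, screen-scaled context; only this small bitmap stays in memory
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(side, side), YES, 0.0);
    // Aspect-fill scale, then center the scaled image in the square
    CGFloat ratio = MAX(side / image.size.width, side / image.size.height);
    CGSize scaled = CGSizeMake(image.size.width * ratio, image.size.height * ratio);
    CGRect drawRect = CGRectMake((side - scaled.width) / 2.0,
                                 (side - scaled.height) / 2.0,
                                 scaled.width, scaled.height);
    [image drawInRect:drawRect];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}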