I am making a PDF in an iPad app. I can create the PDF, but now I want to add a picture with a rounded-corner border. For example, to achieve the border effect I want on a simple view item, I use the following code:
self.SaveButtonProp.layer.cornerRadius=8.0f;
self.SaveButtonProp.layer.masksToBounds=YES;
self.SaveButtonProp.layer.borderColor=[[UIColor blackColor]CGColor];
self.SaveButtonProp.layer.borderWidth= 1.0f;
For the PDF, I am using the following code to add the picture with the border:
CGContextRef currentContext = UIGraphicsGetCurrentContext();
UIImage *demoImage = [UIImage imageWithData:Image];
UIColor *borderColor = [UIColor blackColor];
CGRect rectFrame = CGRectMake(20, 125, 200, 200);
[demoImage drawInRect:rectFrame];
CGContextSetStrokeColorWithColor(currentContext, borderColor.CGColor);
CGContextSetLineWidth(currentContext, 2);
CGContextStrokeRect(currentContext, rectFrame);
How do I round the corners?
Thanks
While drawing you can set clipping masks. For example, it's relatively easy to create a Bezier path with the shape of a rounded rectangle and apply that as clipping mask to your graphics context. Everything subsequently drawn will be clipped.
If you want to remove the clipping mask later (for example, because you have an image with rounded corners but follow it with other elements), you'll have to save the graphics state first, then apply your clipping mask, and restore the graphics state when you're done with your rounded corners.
You can see actual code that comes pretty close to what I think you need here:
UIImage with rounded corners
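For instance, here is a minimal sketch of that approach, reusing `currentContext`, `Image`, and `rectFrame` from the question (the 8.0 corner radius is only an assumption, chosen to match the button example):

CGContextRef currentContext = UIGraphicsGetCurrentContext();
UIImage *demoImage = [UIImage imageWithData:Image];
CGRect rectFrame = CGRectMake(20, 125, 200, 200);
UIBezierPath *roundedPath = [UIBezierPath bezierPathWithRoundedRect:rectFrame cornerRadius:8.0];

// Save the graphics state, clip to the rounded rect, draw the image, then restore.
CGContextSaveGState(currentContext);
[roundedPath addClip];
[demoImage drawInRect:rectFrame];
CGContextRestoreGState(currentContext);

// Stroke the rounded border on top of the image, instead of CGContextStrokeRect.
[[UIColor blackColor] setStroke];
roundedPath.lineWidth = 2.0;
[roundedPath stroke];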
You can use a method like this to render any UIView/UIImageView into PDF NSData:
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
NSData *data = [self makePDFfromView:imageView];
Method:
- (NSData *)makePDFfromView:(UIView *)view
{
    NSMutableData *pdfData = [NSMutableData data];
    UIGraphicsBeginPDFContextToData(pdfData, view.bounds, nil);
    UIGraphicsBeginPDFPage();
    CGContextRef pdfContext = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:pdfContext];
    UIGraphicsEndPDFContext();
    return pdfData;
}
Maybe you can change or use this code to help you with your problem.
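If it helps, once you have the NSData back you could, for example, write it straight to disk; the file name here is just an assumption for illustration:

NSString *pdfPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"output.pdf"];
[data writeToFile:pdfPath atomically:YES];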
Related
The following code draws some text over an image:
UIGraphicsBeginImageContext(image.size);
[image drawAtPoint:CGPointZero];
NSString *stamp = @"Internal Use Only";
[stamp drawAtPoint:CGPointMake(10, 10) withFont:[UIFont systemFontOfSize:32]];
UIImage *stampedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If the code is executed again with a different string, both strings are drawn on top of each other and become illegible. How can I make drawAtPoint: replace the previous string instead of drawing over it?
You have to make a copy of the original image every time you want to draw text on it. In other words, you must keep the original, unstamped version of your image around.
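A minimal sketch of that idea, assuming a hypothetical `self.originalImage` property that holds the untouched image (the property and method names are mine, not from the question):

- (UIImage *)imageByStampingText:(NSString *)stamp
{
    // Always start from the pristine original, never from a previously stamped result.
    UIGraphicsBeginImageContext(self.originalImage.size);
    [self.originalImage drawAtPoint:CGPointZero];
    [stamp drawAtPoint:CGPointMake(10, 10) withFont:[UIFont systemFontOfSize:32]];
    UIImage *stampedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return stampedImage;
}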
I was able to make this work by rendering a UITextView. These know how to draw themselves via their layer property and can display multiple lines (added bonus!):
// Assumes an image context is already current and that `textview` is an existing UITextView.
CGContextRef context = UIGraphicsGetCurrentContext();
NSString *stamp = @"Internal Use Only";
CGRect frame = CGRectMake(0, 0, 120, 80);
textview.frame = frame;
textview.text = stamp;
textview.font = [UIFont systemFontOfSize:32];
[textview.layer renderInContext:context];
I have a sequence of off-screen NSViews in a Cocoa application, which are used to compose a PDF for printing. The views are not in an NSWindow, or visible in any way.
I'd like to be able to generate thumbnail images of that view, exactly as the PDF would look, but scaled down to fit a certain pixel size (constrained to a width or height). This needs to be as fast as possible, so I'd like to avoid rendering to PDF, then converting to raster and scaling - I'd like to go direct to the raster.
At the moment I'm doing:
NSBitmapImageRep *bitmapImageRep = [pageView bitmapImageRepForCachingDisplayInRect:pageView.bounds];
[pageView cacheDisplayInRect:pageView.bounds toBitmapImageRep:bitmapImageRep];
NSImage *image = [[NSImage alloc] initWithSize:bitmapImageRep.size];
[image addRepresentation:bitmapImageRep];
This approach is working well, but I can't work out how to apply a scaling to the NSView before rendering the bitmapImageRep. I want to avoid using scaleUnitSquareToSize, because as I understand it, that only changes the bounds, not the frame of the NSView.
Any suggestions on the best way of doing this?
This is what I ended up doing, which works perfectly. We draw directly into an NSBitmapImageRep, but scale the context explicitly using CGContextScaleCTM beforehand. graphicsContext.graphicsPort gives you the handle on the CGContextRef for the NSGraphicsContext.
NSView *pageView = [self viewForPageIndex:pageIndex];
float scale = width / pageView.bounds.size.width;
float height = scale * pageView.bounds.size.height;
NSRect targetRect = NSMakeRect(0.0, 0.0, width, height);
NSBitmapImageRep *bitmapRep;
bitmapRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
                                                    pixelsWide:targetRect.size.width
                                                    pixelsHigh:targetRect.size.height
                                                 bitsPerSample:8
                                               samplesPerPixel:4
                                                      hasAlpha:YES
                                                      isPlanar:NO
                                                colorSpaceName:NSCalibratedRGBColorSpace
                                                  bitmapFormat:0
                                                   bytesPerRow:(4 * targetRect.size.width)
                                                  bitsPerPixel:32];
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *graphicsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapRep];
[NSGraphicsContext setCurrentContext:graphicsContext];
CGContextScaleCTM(graphicsContext.graphicsPort, scale, scale);
[pageView displayRectIgnoringOpacity:pageView.bounds inContext:graphicsContext];
[NSGraphicsContext restoreGraphicsState];
NSImage *image = [[NSImage alloc] initWithSize:bitmapRep.size];
[image addRepresentation:bitmapRep];
return image;
How about using scaleUnitSquareToSize: and then passing in a smaller rect in to bitmapImageRepForCachingDisplayInRect: and cacheDisplayInRect:toBitmapImageRep:?
So, if you downscale it by a factor of 2, you'd pass in a rect with half the width and height of the bounds.
Is there an image-cropping API for Objective-C that crops images dynamically in an Xcode project? Please share any tricks or techniques for how I could crop camera images on the iPhone.
You can use the simple code below to crop an image. You have to pass in the image and a CGRect, which is the cropping area. Here, I crop the image so that I get the center part of the original image, and the returned image is square.
// Returns largest possible centered cropped image.
- (UIImage *)centerCropImage:(UIImage *)image
{
    // Use smallest side length as crop square length
    CGFloat squareLength = MIN(image.size.width, image.size.height);
    // Center the crop area
    CGRect clippedRect = CGRectMake((image.size.width - squareLength) / 2, (image.size.height - squareLength) / 2, squareLength, squareLength);
    // Crop logic
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], clippedRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
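For example, a hypothetical call site (the image name is just a placeholder):

UIImage *photo = [UIImage imageNamed:@"photo.jpg"];
UIImage *squareThumbnail = [self centerCropImage:photo];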
All these solutions seem quite complicated, and many of them actually degrade the quality of the image.
You can do this much more simply using UIImageView's out-of-the-box properties.
Objective-C
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
[self.imageView setClipsToBounds:YES];
[self.imageView setImage:img];
EDIT - Swift version
let imageView = UIImageView(image: image)
imageView.contentMode = .scaleAspectFill
imageView.clipsToBounds = true
This will crop your image based on the dimensions you've set for your UIImageView (I've called mine imageView here).
It's that simple and works much better than the other solutions.
You can use the Core Graphics framework to crop images dynamically.
Here is an example: part of the code for a dynamic image crop. I hope it will be helpful for you.
// Note: NXMaskDrawContext appears to be a collection (e.g. an NSMutableArray) of
// NSValue-wrapped CGPoints, judging by how it is used below.
- (void)drawMaskLineSegmentTo:(CGPoint)ptTo withMaskWidth:(CGFloat)maskWidth inContext:(NXMaskDrawContext)context{
    if (context == nil)
        return;
    if (context.count <= 0){
        [context addObject:[NSValue valueWithCGPoint:ptTo]];
        return;
    }
    CGPoint ptFrom = [context.lastObject CGPointValue];

    UIGraphicsBeginImageContext(self.maskImage.size);
    [self.maskImage drawInRect:CGRectMake(0, 0, self.maskImage.size.width, self.maskImage.size.height)];
    CGContextRef graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetRGBStrokeColor(graphicsContext, 1, 1, 1, 1);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIGraphicsBeginImageContext(self.displayableMaskImage.size);
    [self.displayableMaskImage drawInRect:CGRectMake(0, 0, self.displayableMaskImage.size.width, self.displayableMaskImage.size.height)];
    graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetStrokeColorWithColor(graphicsContext, self.displayableMaskColor.CGColor);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.displayableMaskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [context addObject:[NSValue valueWithCGPoint:ptTo]];
}
Xcode 5, iOS 7, and 4-inch screen example: here is an open-source SimpleImageCropEditor (project zip and source code example). You can load the image crop editor as a modal view controller and reuse it. Look at the code, and please leave constructive comments on whether this example code answers the question "Image Cropping API for iOS".
The example Objective-C source demonstrates the use of UIImagePickerController, @protocol, UIActionSheet, UIScrollView, UINavigationController, MFMailComposeViewController, and UIGestureRecognizer.
I have a UITableViewCell, and I want to use its image property to fill it with an image. When the cell is first in a grouped UITableView, I want it to keep the standard rounded corners.
Unfortunately, the image fills in the rounded corners as well. Is there any way to retain them without using a transparent image?
You can round the corners of any view programmatically by using its layer property. If you play about with the cornerRadius property of the layer you should be able to achieve the results you want.
#import <QuartzCore/QuartzCore.h>
UIImage *myImage = [UIImage imageNamed:@"image.png"];
UIImageView *imgView = [[UIImageView alloc] initWithImage:myImage];
imgView.layer.cornerRadius = 10.0;
imgView.layer.masksToBounds = YES; // required so the image content is actually clipped to the rounded corners
If you just want to round some of the corners, you should look at the UIBezierPath API and use the path to mask your image. This isn't tested but it should point you in the right direction:
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:cell.bounds
                                           byRoundingCorners:UIRectCornerTopLeft | UIRectCornerTopRight
                                                 cornerRadii:CGSizeMake(10.0, 10.0)];
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = imageView.bounds;   // the mask is laid out in the masked layer's own coordinate space
maskLayer.path = path.CGPath;         // CAShapeLayer needs a CGPath, not a UIBezierPath
imageView.layer.mask = maskLayer;
I just made my non-transparent images have rounded corners themselves. Take a screenshot to get the dimensions of the rounded corner, then use that as the basis for your image.
I'm trying to overlay a custom semi-transparent image over a base image. The overlay image is stretchable and created like this:
[[UIImage imageNamed:@"overlay.png"] stretchableImageWithLeftCapWidth:5.0 topCapHeight:5.0]
Then I pass that off to a method that overlays it onto the background image for a button:
- (void)overlayImage:(UIImage *)overlay forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // clear context
    CGContextClearRect(context, frame);
    // draw images
    [baseImage drawInRect:frame];
    [overlay drawInRect:frame];// blendMode:kCGBlendModeNormal alpha:1.0];
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
The resulting overlaidImage looks mostly correct, it is the correct size, the alpha is blended correctly, etc. however it has vertical artifacts/noise.
(Example image showing the artifacts: http://acaciatreesoftware.com/img/UIImage-artifacts.png)
I tried clearing the context first and then turning off PNG compression, which reduces the artifacts somewhat (and eliminates them on non-stretched images, I think).
Does anyone know a method for drawing stretchable UIImages with out this sort of artifacting happening?
So the answer is: Don't do this. Instead you can paint your overlay procedurally. Like so:
- (void)overlayWithColor:(UIColor *)overlayColor forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // draw background image
    [baseImage drawInRect:frame];
    // overlay color
    CGContextSetFillColorWithColor(context, [overlayColor CGColor]);
    CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
    CGContextFillRect(context, frame);
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
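A hypothetical call site, assuming the method lives in a UIButton subclass or category (which the use of backgroundImageForState: suggests, but the original doesn't show; `myButton` is a placeholder):

[myButton overlayWithColor:[[UIColor blackColor] colorWithAlphaComponent:0.3]
                  forState:UIControlStateNormal];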
Are you being too miserly with your original image, forcing it to stretch rather than shrink? I've found the best results come from images that fit the same aspect ratio and are reduced in size. That might not solve your problem, though.