The following code draws some text over an image:
UIGraphicsBeginImageContext(image.size);
[image drawAtPoint:CGPointZero];
NSString *stamp = @"Internal Use Only";
[stamp drawAtPoint:CGPointMake(10, 10) withFont:[UIFont systemFontOfSize:32]];
UIImage *stampedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If the code is executed again with a different string, both strings end up combined and illegible. How can I make it so each new string replaces the previous one instead of being drawn on top of it?
You have to make a copy of the original image every time you want to draw text on it. In other words, you must keep the original version of your image.
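A minimal sketch of that idea, assuming a hypothetical originalImage variable that always holds the untouched image:
// Always start the stamping from the untouched original, never from an already-stamped copy.
UIGraphicsBeginImageContext(originalImage.size);
[originalImage drawAtPoint:CGPointZero];
NSString *stamp = @"Internal Use Only";
[stamp drawAtPoint:CGPointMake(10, 10) withFont:[UIFont systemFontOfSize:32]];
UIImage *stampedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();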
I was able to make this work by rendering a UITextView. Text views know how to draw themselves via their layer property and can display multiple lines (an added bonus):
// A graphics context must already be current (e.g. the image context from the question).
CGContextRef context = UIGraphicsGetCurrentContext();
NSString *stamp = @"Internal Use Only";
UITextView *textview = [[UITextView alloc] init]; // declared here so the snippet compiles
CGRect frame = CGRectMake(0, 0, 120, 80);
textview.frame = frame;
textview.text = stamp;
textview.font = [UIFont systemFontOfSize:32];
[textview.layer renderInContext:context];
Related
I am making a PDF in an iPad app. I can create the PDF, but now I want to add a picture with a rounded-corner border. For example, to achieve the effect I want on the border of a simple view item, I use the following code:
self.SaveButtonProp.layer.cornerRadius=8.0f;
self.SaveButtonProp.layer.masksToBounds=YES;
self.SaveButtonProp.layer.borderColor=[[UIColor blackColor]CGColor];
self.SaveButtonProp.layer.borderWidth= 1.0f;
For the PDF, I am using the following code to add the picture with the border:
CGContextRef currentContext = UIGraphicsGetCurrentContext();
UIImage * demoImage = [UIImage imageWithData : Image];
UIColor *borderColor = [UIColor blackColor];
CGRect rectFrame = CGRectMake(20, 125, 200, 200);
[demoImage drawInRect:rectFrame];
CGContextSetStrokeColorWithColor(currentContext, borderColor.CGColor);
CGContextSetLineWidth(currentContext, 2);
CGContextStrokeRect(currentContext, rectFrame);
How do I round the corners?
Thanks
While drawing, you can set clipping masks. For example, it's relatively easy to create a Bezier path in the shape of a rounded rectangle and apply it as a clipping mask to your graphics context; everything drawn afterwards will be clipped.
If you want to remove the clipping mask later (for example, because you draw an image with rounded corners and then follow it with other elements), save the graphics state first, then apply your clipping mask, and restore the graphics state when you're done with the rounded corners.
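A minimal sketch of that approach, reusing currentContext, demoImage, rectFrame and borderColor from the question's snippet:
// Clip to a rounded rect, draw the image, then stroke the rounded border.
CGContextSaveGState(currentContext); // so the clip can be removed afterwards
UIBezierPath *roundedPath = [UIBezierPath bezierPathWithRoundedRect:rectFrame cornerRadius:8.0f];
[roundedPath addClip]; // everything drawn next is clipped to the rounded rect
[demoImage drawInRect:rectFrame];
CGContextRestoreGState(currentContext); // clipping mask removed
[borderColor setStroke];
roundedPath.lineWidth = 2.0f;
[roundedPath stroke]; // the border itself is drawn unclipped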
You can see actual code that comes pretty close to what I think you need here:
UIImage with rounded corners
You can use a method like this to render any UIView/UIImageView into PDF NSData:
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
NSData *data = [self makePDFfromView:imageView];
Method:
- (NSData *)makePDFfromView:(UIView *)view
{
NSMutableData *pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, view.bounds, nil);
UIGraphicsBeginPDFPage();
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
[view.layer renderInContext:pdfContext];
UIGraphicsEndPDFContext();
return pdfData;
}
Maybe you can adapt this code to your problem.
Generate PDF from UIView by rendering loses quality iOS
I have a custom UIView called TTT_WantsToBeRazorSharpView.
This view does nothing but draw some text with:
NSString *txtPleaseHelp = NSLocalizedString(@"Hello, am I blurry again?", @"");
CGContextShowTextAtPoint(ctx, 10, 50, [txtPleaseHelp cStringUsingEncoding:NSMacOSRomanStringEncoding], [txtPleaseHelp length]);
The view is drawn three times in a UIViewController: once in Interface Builder and twice in code with these lines (compare with the image below):
TTT_WantsToBeRazorSharpView *customViewSharp = [[TTT_WantsToBeRazorSharpView alloc] initWithFrame:CGRectMake(10,250, 300, 80)];
[self.view addSubview:customViewSharp];
TTT_WantsToBeRazorSharpView *customViewBlurryButIKnowWhy = [[TTT_WantsToBeRazorSharpView alloc] initWithFrame:CGRectMake(361.5, 251.5, 301.5, 80.5)];
[self.view addSubview:customViewBlurryButIKnowWhy];
The first view created in code is razor sharp, while the second isn't; that is expected, because its rect has fractional values (361.5, 251.5, 301.5, 80.5).
See this picture:
But my problem is that if I render the view into a PDF document, it is blurry!
I don't know why; see here:
and the PDF file itself Test.pdf
https://raw.github.com/florianbachmann/GeneratePDFfromUIViewButDontWantToLooseQuality/master/Test.pdf
and the lines to render the view into the pdf:
// this is blurry, but why? it can't be the fractional coordinates
CGRect customFrame1 = CGRectMake(10,50, 300, 80);
TTT_WantsToBeRazorSharpView *customViewSharp = [[TTT_WantsToBeRazorSharpView alloc] initWithFrame:customFrame1];
CGContextSaveGState(context);
CGContextSetTextMatrix(context, CGAffineTransformIdentity);
CGContextTranslateCTM(context, (int)customFrame1.origin.x, (int)customFrame1.origin.y);
[customViewSharp.layer renderInContext:context];
CGContextRestoreGState(context);
So why is the renderInContext:context text blurry?
I appreciate all your hints and help. I even made a GitHub project (with source) for the brave among you: https://github.com/florianbachmann/GeneratePDFfromUIViewButDontWantToLooseQuality
Try this:
- (void)initValues {
UIColor *gray = [UIColor colorWithRed:0.0f green:0.0f blue:0.0f alpha:0.3f];
CALayer *l = [self layer];
l.masksToBounds = YES;
l.cornerRadius = 10.0f;
l.borderWidth = 3.0f;
self.backgroundColor = gray;
l.backgroundColor = gray.CGColor;
l.borderColor = gray.CGColor;
}
- (void)drawRect:(CGRect)rect {
NSLog(@"TTT_WantsToBeRazorSharpView drawRect[%3.2f,%3.2f][%3.2f,%3.2f]",rect.origin.x,rect.origin.y,rect.size.width,rect.size.height);
// get the graphic context
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetShouldSmoothFonts(ctx,YES);
CGColorRef colorWhite = CGColorRetain([UIColor redColor].CGColor);
CGContextSetFillColorWithColor(ctx, colorWhite);
CGContextSelectFont(ctx, "Helvetica-Bold", 24, kCGEncodingMacRoman);
CGAffineTransform tranformer = CGAffineTransformMakeScale(1.0, -1.0);
CGContextSetTextMatrix(ctx, tranformer);
//why is this sucker blurry?
NSString *txtPleaseHelp = NSLocalizedString(@"Hello, am I blurry again?", @"");
CGContextShowTextAtPoint(ctx, 10, 50, [txtPleaseHelp cStringUsingEncoding:NSMacOSRomanStringEncoding], [txtPleaseHelp length]);
CGColorRelease(colorWhite);
}
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
if (!layer.shouldRasterize && !CGRectIsEmpty(UIGraphicsGetPDFContextBounds())) {
[self drawRect:self.bounds];
}
else {
[super drawLayer:layer inContext:ctx];
}
}
I had to change the text color to red to make it visible: because the text is drawn first and the transparent grey background is placed over it, white text would disappear. With the drawLayer:inContext: override added, everything that does not need to be rasterized is written to the PDF as vectors, which also makes the text selectable.
The text in the view is rendered into the PDF as a bitmap, not as vector graphics, and this causes the blurry appearance. My guess is that the iOS rendering engine draws the text on an intermediate bitmap (for color compositing purposes, I think) and then renders that bitmap into the target context.
I have used the following command to fix blurry text:
self.frame = CGRectIntegral(self.frame);
The reason for the blurry text was that the alignment was not pixel-perfect, i.e. there were fractional values in the point coordinates, so the text appeared blurry due to excessive antialiasing.
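Applied to the PDF snippet from the question (context and customViewSharp come from that snippet), the same idea looks roughly like this sketch:
// Pixel-align the view's frame before rendering its layer into the PDF context.
customViewSharp.frame = CGRectIntegral(customViewSharp.frame);
CGContextSaveGState(context);
CGContextTranslateCTM(context, CGRectGetMinX(customViewSharp.frame), CGRectGetMinY(customViewSharp.frame));
[customViewSharp.layer renderInContext:context];
CGContextRestoreGState(context);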
Something really weird is going on: I'm adding a caption to an image, and everything works fine in the Simulator, but on the device itself the font is really small...
I have a problem when using this code:
-(UIImage*) drawTextOnPic:(NSString*) bottomText
inImage:(UIImage*) image
{
UIFont *font = [UIFont boldSystemFontOfSize:48];
UIGraphicsBeginImageContext(image.size);
[image drawInRect:CGRectMake(0,0,image.size.width,image.size.height)];
CGRect rect = CGRectMake(10.0f, 10.0f, image.size.width - 20.0f, image.size.height);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetShadow(context, CGSizeMake(2.5f, 2.5f), 5.0f);
[[UIColor whiteColor] set];
[bottomText drawInRect:CGRectIntegral(rect) withFont:font lineBreakMode:UILineBreakModeClip alignment:UITextAlignmentCenter];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Does anyone know why it only works as expected in the Simulator? By the way, I'm using the iOS 6 SDK; could that be the cause?
Is it something to do with the Retina display? How do I fix it?
When you call UIGraphicsBeginImageContext, you should use the extended variant that takes the screen scale, like this:
UIGraphicsBeginImageContextWithOptions(image.size, YES, [[UIScreen mainScreen] scale]);
That will create the image context at the scale of your screen. Note that the second argument, here YES, is whether or not you need the image to be opaque. If there's transparency in your image, set this to NO.
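For reference, here is roughly how the method from the question looks with that single change applied (a sketch; everything else is unchanged):
- (UIImage *)drawTextOnPic:(NSString *)bottomText inImage:(UIImage *)image
{
    UIFont *font = [UIFont boldSystemFontOfSize:48];
    // Create the context at screen scale so the output has full Retina resolution.
    UIGraphicsBeginImageContextWithOptions(image.size, YES, [[UIScreen mainScreen] scale]);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    CGRect rect = CGRectMake(10.0f, 10.0f, image.size.width - 20.0f, image.size.height);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetShadow(context, CGSizeMake(2.5f, 2.5f), 5.0f);
    [[UIColor whiteColor] set];
    [bottomText drawInRect:CGRectIntegral(rect) withFont:font lineBreakMode:UILineBreakModeClip alignment:UITextAlignmentCenter];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}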
I chose a default font size (in my case 48) and then scaled it according to the image size relative to a reference size (in my case 640x480): I computed sqrtf(area(image)/area(ref)) and multiplied the font size by that result.
That's the only way it worked for me.
Hope that helps others...
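In code, that heuristic looks roughly like this (a sketch; 48 and 640x480 are just the base values mentioned above, and image is the UIImage being stamped, as in the question's method):
// Scale the base font size by the square root of the area ratio.
CGFloat baseFontSize = 48.0f;
CGSize refSize = CGSizeMake(640.0f, 480.0f);
CGFloat areaRatio = (image.size.width * image.size.height) / (refSize.width * refSize.height);
CGFloat fontSize = baseFontSize * sqrtf(areaRatio);
UIFont *font = [UIFont boldSystemFontOfSize:fontSize];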
// Drawing text on image
+ (UIImage*) drawText:(NSString*)text withColor:(UIColor *) textColor inImage:(UIImage*)image atPoint:(CGPoint)point {
UIFont *font = [UIFont boldSystemFontOfSize:12];
UIGraphicsBeginImageContext(image.size);
[image drawInRect:CGRectMake(0,0,image.size.width,image.size.height)];
CGRect rect = CGRectMake(point.x, point.y, image.size.width, image.size.height);
[textColor set];
[text drawInRect:CGRectIntegral(rect) withFont:font];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
I want to create a gradient fill color for my text. Currently I am doing it by setting the UILabel's text color like this:
UIImage *image = [UIImage imageNamed:@"GradientFillImage.png"];
myLabel.textColor = [UIColor colorWithPatternImage:image];
Where GradientFillImage.png is a simple image file with a linear gradient painted on it.
This works fine until I want to resize the font. Since the image file is of constant dimensions and does not resize when I resize the font, the gradient fill for the font gets messed up.
How do I create a custom size pattern image and apply it as a fill pattern for text?
I've just finished a UIColor class extension that makes this a one-liner plus a block:
https://github.com/bigkm/UIColor-BlockPattern
CGRect rect = CGRectMake(0.0,0.0,10.0,10.0);
[UIColor colorPatternWithSize:rect.size andDrawingBlock:[[^(CGContextRef c) {
UIImage *image = [UIImage imageNamed:@"FontGradientPink.png"];
CGContextDrawImage(c, rect, [image CGImage]); // draw into the block's context parameter
} copy] autorelease]];
OK, I figured it out. Basically, we can override drawTextInRect: in a UILabel subclass and use our own pattern to color the fill. The advantage of doing this is that we can resize the image to fit the pattern frame.
First we create a CGPattern object and define a callback that draws the pattern, passing the label's rect to the callback through its info parameter. We then use the pattern drawn by the callback as the fill color of the text:
- (void)drawTextInRect:(CGRect)rect
{
//set gradient as a pattern fill
CGRect info[1] = {rect};
static const CGPatternCallbacks callbacks = {0, &drawImagePattern, NULL};
CGAffineTransform transform = CGAffineTransformMakeScale(1.0, -1.0);
CGPatternRef pattern = CGPatternCreate((void *) info, rect, transform, 10.0, rect.size.height, kCGPatternTilingConstantSpacing, true, &callbacks);
CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(NULL);
CGFloat alpha = 1.0;
CGColorRef patternColorRef = CGColorCreateWithPattern(patternSpace, pattern, &alpha);
CGColorSpaceRelease(patternSpace);
CGPatternRelease(pattern);
self.textColor = [UIColor colorWithCGColor:patternColorRef];
self.shadowOffset = CGSizeZero;
[super drawTextInRect:rect];
}
The callback draws the image into the context. The image is resized as per the frame size that is passed into the callback.
void drawImagePattern(void *info, CGContextRef context)
{
UIImage *image = [UIImage imageNamed:#"FontGradientPink.png"];
CGImageRef imageRef = [image CGImage];
CGRect *rect = info;
CGContextDrawImage(context, rect[0], imageRef);
}
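As a usage sketch, assuming the drawTextInRect: override above lives in a hypothetical UILabel subclass called GradientLabel:
// Hypothetical subclass; drawTextInRect: above goes in its implementation.
@interface GradientLabel : UILabel
@end

GradientLabel *label = [[GradientLabel alloc] initWithFrame:CGRectMake(0, 0, 200, 60)];
label.font = [UIFont boldSystemFontOfSize:44];
label.text = @"Gradient";
// The callback redraws the image into the label's rect, so the gradient
// follows the label whenever the font or frame changes.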
I'm trying to overlay a custom semi-transparent image over a base image. The overlay image is stretchable and created like this:
[[UIImage imageNamed:@"overlay.png"] stretchableImageWithLeftCapWidth:5.0 topCapHeight:5.0]
Then I pass that off to a method that overlays it onto the background image for a button:
- (void)overlayImage:(UIImage *)overlay forState:(UIControlState)state {
UIImage *baseImage = [self backgroundImageForState:state];
CGRect frame = CGRectZero;
frame.size = baseImage.size;
// create a new image context
UIGraphicsBeginImageContext(baseImage.size);
// get context
CGContextRef context = UIGraphicsGetCurrentContext();
// clear context
CGContextClearRect(context, frame);
// draw images
[baseImage drawInRect:frame];
[overlay drawInRect:frame];// blendMode:kCGBlendModeNormal alpha:1.0];
// get UIImage
UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
// clean up context
UIGraphicsEndImageContext();
[self setBackgroundImage:overlaidImage forState:state];
}
The resulting overlaidImage looks mostly correct: it is the right size, the alpha is blended correctly, and so on. However, it has vertical artifacts/noise.
(example at http://acaciatreesoftware.com/img/UIImage-artifacts.png)
I tried clearing the context first and then turning off PNG compression, which reduces the artifacting somewhat (completely on non-stretched images, I think).
Does anyone know a way to draw stretchable UIImages without this sort of artifacting?
So the answer is: don't do this. Instead, you can paint your overlay procedurally, like so:
- (void)overlayWithColor:(UIColor *)overlayColor forState:(UIControlState)state {
UIImage *baseImage = [self backgroundImageForState:state];
CGRect frame = CGRectZero;
frame.size = baseImage.size;
// create a new image context
UIGraphicsBeginImageContext(baseImage.size);
// get context
CGContextRef context = UIGraphicsGetCurrentContext();
// draw background image
[baseImage drawInRect:frame];
// overlay color
CGContextSetFillColorWithColor(context, [overlayColor CGColor]);
CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
CGContextFillRect(context, frame);
// get UIImage
UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
// clean up context
UIGraphicsEndImageContext();
[self setBackgroundImage:overlaidImage forState:state];
}
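For example, assuming the method above lives on a UIButton subclass or category (it calls backgroundImageForState: and setBackgroundImage:forState: on self), a hypothetical call site could look like this:
// Tint the normal-state background with 30% black; myButton is a placeholder outlet.
[self.myButton overlayWithColor:[UIColor colorWithWhite:0.0f alpha:0.3f]
                       forState:UIControlStateNormal];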
Are you being too miserly with your original image and forcing it to stretch rather than shrink? I've gotten the best results from images that fit the same aspect ratio and were reduced in size. That might not solve your problem, though.