NSBackgroundColorAttributeName-like attribute in NSAttributedString on iOS? - objective-c

I was planning on using NSAttributedString to highlight portions of strings with the matching query of a user's search. However, I can't find an iOS equivalent of NSBackgroundColorAttributeName—there's no kCTBackgroundColorAttributeName. Does such a thing exist, similar to the way NSForegroundColorAttributeName becomes kCTForegroundColorAttributeName?

No, such an attribute doesn't exist in Core Text; you'll have to draw your own rectangles underneath the text to simulate it.
Basically, you'll have to figure out which rectangle(s) to fill for a given range in the string. If you do your layout with a CTFramesetter that produces a CTFrame, you need to get its lines and their origins using CTFrameGetLines and CTFrameGetLineOrigins.
Then iterate over the lines and use CTLineGetStringRange to find out which lines are part of the range you want to highlight. To get the rectangles to fill, use CTLineGetTypographicBounds (for the height) and CTLineGetOffsetForStringIndex (for the horizontal offset and width).
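For illustration, here's a minimal sketch of that loop. It assumes frame is the CTFrame you laid out, highlight is the NSRange to mark, and context is the flipped graphics context you will later pass to CTFrameDraw; all names are illustrative.
CFArrayRef lines = CTFrameGetLines(frame);
CFIndex lineCount = CFArrayGetCount(lines);
CGPoint origins[lineCount];
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), origins);
for (CFIndex i = 0; i < lineCount; i++) {
    CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, i);
    CFRange lineRange = CTLineGetStringRange(line);
    // Intersect this line's string range with the range to highlight.
    NSRange intersection = NSIntersectionRange(NSMakeRange(lineRange.location, lineRange.length), highlight);
    if (intersection.length == 0) continue;
    CGFloat ascent, descent, leading;
    CTLineGetTypographicBounds(line, &ascent, &descent, &leading);
    CGFloat startX = CTLineGetOffsetForStringIndex(line, intersection.location, NULL);
    CGFloat endX = CTLineGetOffsetForStringIndex(line, NSMaxRange(intersection), NULL);
    // The line origin is in the frame's flipped coordinate space.
    CGRect highlightRect = CGRectMake(origins[i].x + startX, origins[i].y - descent, endX - startX, ascent + descent);
    CGContextSetFillColorWithColor(context, [UIColor yellowColor].CGColor);
    CGContextFillRect(context, highlightRect);
}
Draw these rectangles before CTFrameDraw so the text ends up on top of the highlight.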

NSBackgroundColorAttributeName is available in iOS 6, and you can use it as follows:
[_attributedText addAttribute:NSBackgroundColorAttributeName value:[UIColor yellowColor] range:textRange];
[_attributedText drawInRect:rect];
drawInRect: supports NSBackgroundColorAttributeName and all the other NS*AttributeName attributes available in iOS 6.
CTFrameDraw() has no support for a background text color.
Code:
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    // drawInRect: expects UIKit's default (top-left origin) coordinate
    // system, so no flip of the CTM is needed for this call.
    [_attributedText drawInRect:rect];

    // If you draw with Core Text instead, flip the context first:
    // CGContextSetTextMatrix(context, CGAffineTransformIdentity);
    // CGContextTranslateCTM(context, 0, self.bounds.size.height);
    // CGContextScaleCTM(context, 1.0, -1.0);
    // CTFrameDraw(_frame, context);

    CGContextRestoreGState(context);
}

Related

NSObliquenessAttributeName ignored by CTFrameDraw()

Something strange is going on. I was working with NSAttributedString for some formatting, including slants and skews, and NSObliquenessAttributeName did the trick. But then I wanted to expand into Core Text to take control of the frame the text is actually rendered in. Even before figuring it all out, I noticed that NSObliquenessAttributeName is not being rendered. All my other attributes are still rendered, so I'm a bit confused.
- (void)drawSlanted
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    [[UIColor blackColor] setFill];
    NSAttributedString *text = [[NSAttributedString alloc] initWithString:@"This isn't slanted... but is stroked"
                                                               attributes:@{NSObliquenessAttributeName: @10.0,
                                                                            NSStrokeWidthAttributeName: @2.0}];
    // Flip coordinates for Core Text
    CGContextSetTextMatrix(context, CGAffineTransformIdentity);
    CGContextTranslateCTM(context, 0.0, CGRectGetHeight(self.bounds));
    CGContextScaleCTM(context, 1.0, -1.0);
    CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)text);
    CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, text.length), [UIBezierPath bezierPathWithRect:self.bounds].CGPath, NULL);
    CTFrameDraw(frame, context);
    CFRelease(frame);
    CFRelease(framesetter);
    CGContextRestoreGState(context);
}
In some sense, NSAttributedString supports arbitrary attributes. That is, you can put any attribute key-value pair you like in an attributes dictionary and NSAttributedString will dutifully store it for you. That includes attributes you make up.
However, NSAttributedString will not make use of attributes it doesn't understand to format or lay out the string. It only understands Cocoa's predefined attributes.
The same is true of Core Text. It only understands certain attributes. Unfortunately, the set of attributes that Core Text understands is not the same as the set that Cocoa and NSAttributedString understand. The set that Core Text understands is documented in the Core Text String Attributes Reference. It doesn't include an obliqueness attribute.
I'm not sure, but I think you need to use a font created with a transformation matrix to get oblique glyphs. (Of course, you should prefer proper italics unless you have a reason not to.)
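If you want to try that route, here is a minimal sketch: synthesize a slanted face by creating the CTFont with a shear matrix and attach it via kCTFontAttributeName, which Core Text does honor. The font name and slant angle are illustrative assumptions.
#import <CoreText/CoreText.h>

CGFloat slantDegrees = 10.0; // illustrative angle
CGAffineTransform slantMatrix = CGAffineTransformMake(1, 0, tan(slantDegrees * M_PI / 180.0), 1, 0, 0); // shear x by y
CTFontRef obliqueFont = CTFontCreateWithName(CFSTR("Helvetica"), 18.0, &slantMatrix);
NSAttributedString *slanted = [[NSAttributedString alloc] initWithString:@"This is slanted"
                                                              attributes:@{(__bridge id)kCTFontAttributeName: (__bridge id)obliqueFont}];
CFRelease(obliqueFont); // the attributes dictionary retains the font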

kCGTextStroke's Fill and Stroke aren't positioned correctly

So I'm using the code below to apply a stroke (and fill) to text in a UILabel, and it's coming out like the image below. The stroke is heavier on one side than the other (look at the top of the letters compared to the bottom, and the right compared to the left; the period at the end makes it very noticeable, too, like a googly eye). I've not got any shadowing turned on at all, so I don't think it's the shadow interfering with the stroke.
What could be causing this?
- (void) drawTextInRect: (CGRect) rect
{
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetTextDrawingMode(c, kCGTextFillStroke);
    CGContextSaveGState(c);
    CGContextSetRGBFillColor(c, 1.0, 0.0, 0.0, 1.0);
    CGContextSetRGBStrokeColor(c, 0.0, 1.0, 0.0, 1.0);
    [super drawTextInRect: rect];
    CGContextRestoreGState(c);
}
EDIT: So, for kicks, I took a look at the label with only the fill, and only the stroke. Turning off the stroke creates a perfectly normal-looking piece of text, as if I'd just coloured it in Interface Builder. Turning off the fill, however, shows only the stroke, which doesn't look heavier on any side than any other. This leads me to believe that the issue is where the fill is positioned in relation to the stroke, and that neither the fill nor the stroke themselves are at fault. Any other thoughts on this? How can I get the fill directly centred in the stroke?
You should probably just use kCGTextFillStroke as the drawing mode and only draw once (with separate stroke and fill colors set).
CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0); // any color you want (this is red)
CGContextSetRGBStrokeColor(context, 0.0, 1.0, 0.0, 1.0); // any color you want (this is green)
CGContextSetTextDrawingMode(context, kCGTextFillStroke);
[self.text drawInRect:rect withFont:self.font];
Alternatively, you could just stroke afterwards. Strokes are usually drawn from the center, which means that half of the width is inwards and half is outwards. That means that if you fill after you stroke, some of the stroke is going to get covered up by the fill.
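A sketch of that two-pass alternative inside the drawTextInRect: override: fill first, then stroke on top, so the fill can't cover the inner half of the stroke.
- (void)drawTextInRect:(CGRect)rect
{
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSaveGState(c);
    CGContextSetRGBFillColor(c, 1.0, 0.0, 0.0, 1.0);
    CGContextSetTextDrawingMode(c, kCGTextFill);
    [super drawTextInRect:rect]; // fill pass
    CGContextSetRGBStrokeColor(c, 0.0, 1.0, 0.0, 1.0);
    CGContextSetTextDrawingMode(c, kCGTextStroke);
    [super drawTextInRect:rect]; // stroke pass on top
    CGContextRestoreGState(c);
}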
A possibility is that the overridden method translates the CTM using CGContextTranslateCTM or similar functions. The CTM is part of the state of a context and specifies a transform for all following draw calls.
You should try to save the context before the call to the overridden method and restore it afterwards:
CGContextSaveGState(c);
[super drawTextInRect: rect];
CGContextRestoreGState(c);

Adding a tint to an image

I'm creating an app which uses UIImagePickerController to present a camera to the user with a custom overlay which includes one of two grids/patterns over the camera "view" itself.
The grids themselves are .png files in a UIImageView which is added to the overlay; they're quite complex, so I would really like to steer away from drawing the grid in code, even though that would present a nice, clean, simple answer to my question.
I would like to be able to offer the grids in a variety of colours. The obvious solution is create more .png images in different colours, but for each colour there would have to be four separate images (regular and retina for each of the grids) so that would quickly add up to a lot of assets.
The solution which, I think, would be ideal, would be for me to just create the grids in white/gray and then apply a tint to it to colour it appropriately.
Is that possible? Or do I need to seek an alternative solution?
With thanks to Ananth for pointing me to iPhone - How do you color an image?
I've added this method to my code as suggested in the question, with the modification in willc2's answer:
-(UIImage *)colorizeImage:(UIImage *)baseImage color:(UIColor *)theColor {
    UIGraphicsBeginImageContext(baseImage.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect area = CGRectMake(0, 0, baseImage.size.width, baseImage.size.height);
    // Flip the context so the CGImage draws right side up.
    CGContextScaleCTM(ctx, 1, -1);
    CGContextTranslateCTM(ctx, 0, -area.size.height);
    // Fill the image's opaque region with the tint color...
    CGContextSaveGState(ctx);
    CGContextClipToMask(ctx, area, baseImage.CGImage);
    [theColor set];
    CGContextFillRect(ctx, area);
    CGContextRestoreGState(ctx);
    // ...then multiply the original image over the tint to keep its shading.
    CGContextSetBlendMode(ctx, kCGBlendModeMultiply);
    CGContextDrawImage(ctx, area, baseImage.CGImage);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
...and I'm getting exactly what I'm after.
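For example (the image name is hypothetical):
UIImage *redGrid = [self colorizeImage:[UIImage imageNamed:@"grid-regular"] color:[UIColor redColor]];
One caveat: UIGraphicsBeginImageContext creates a scale-1.0 context, so on Retina displays the result will be blurry; UIGraphicsBeginImageContextWithOptions(baseImage.size, NO, baseImage.scale) preserves the source scale.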

How does CGContextFillPath in Objective C work?

I have the following code, which is supposed to draw a stroked and filled rectangle, but the fill won't show up.
[[UIColor greenColor] setStroke];
[[UIColor brownColor] setFill];
CGContextBeginPath(context);
CGContextMoveToPoint(context, right, bottom);
CGContextAddLineToPoint(context, right, top);
CGContextAddLineToPoint(context, left, top);
CGContextAddLineToPoint(context, left, bottom);
CGContextAddLineToPoint(context, right, bottom);
CGContextStrokePath(context);
CGContextFillPath(context);
The stroke works and I get a nice green rectangle with no fill (or a white fill). This is within a UIView for iOS. Seems very simple and it's driving me nuts!
The right way to do this is to set the drawing mode to include both a fill and a stroke.
CGPathDrawingMode mode = kCGPathFillStroke;
CGContextClosePath( context ); //ensure path is closed, not necessary if you know it is
CGContextDrawPath( context, mode );
You can use CGContextFillPath() to simply do a fill; it's basically CGContextDrawPath() with kCGPathFill as the CGPathDrawingMode (filling implicitly closes the path). You might as well always use CGContextDrawPath() and pass your own parameters to get the type of fill/stroke that you want. You can also put holes in filled paths by using one of the even-odd drawing modes.
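For instance, a quick sketch of the even-odd idea: two nested rectangles drawn with an even-odd mode produce a filled ring with a hole in the middle.
CGContextAddRect(context, CGRectMake(20, 20, 200, 200)); // outer rectangle
CGContextAddRect(context, CGRectMake(60, 60, 120, 120)); // inner rectangle becomes the hole
CGContextDrawPath(context, kCGPathEOFillStroke); // even-odd fill skips the overlapping region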
From the docs on CGContextStrokePath:
"Quartz uses the line width and stroke color of the graphics state to paint the path. As a side effect when you call this function, Quartz clears the current path."
So by the time CGContextFillPath runs, the stroke has already cleared the path and there is nothing left to fill. Probably you should call CGContextFillPath before you call CGContextStrokePath.
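Putting the two answers together, a sketch of the question's drawing code fixed so fill and stroke happen in a single pass (variable names as in the question):
[[UIColor greenColor] setStroke];
[[UIColor brownColor] setFill];
CGContextBeginPath(context);
CGContextMoveToPoint(context, right, bottom);
CGContextAddLineToPoint(context, right, top);
CGContextAddLineToPoint(context, left, top);
CGContextAddLineToPoint(context, left, bottom);
CGContextClosePath(context); // close back to the start instead of drawing the last edge by hand
CGContextDrawPath(context, kCGPathFillStroke); // fill and stroke together; the path is cleared only once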

Divide UIImage into two parts along a UIBezierPath

How can I divide this UIImage into two parts along the black line? The upper contour is defined by a UIBezierPath.
I need to get the two resulting UIImages. So is it possible?
The following set of routines create versions of a UIImage with either only the content inside a path, or only content outside that path.
Both make use of the compositeImage method, which uses CGBlendMode. CGBlendMode is very powerful for masking anything you can draw against anything else you can draw. Calling compositeImage: with other blend modes can have interesting (if not always useful) effects. See the CGContext Reference for all the modes.
The clipping method I described in my comment to your OP does work and is probably faster, but only if you have UIBezierPaths defining all the regions you want to clip.
- (UIImage*) compositeImage:(UIImage*) sourceImage onPath:(UIBezierPath*) path usingBlendMode:(CGBlendMode) blend
{
    // Create a new image of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);
    // First draw an opaque path...
    [path fill];
    // ...then composite with the image.
    [sourceImage drawAtPoint:CGPointZero blendMode:blend alpha:1.0];
    // With drawing complete, store the composited image for later use.
    UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();
    return maskedImage;
}
- (UIImage*) maskImage:(UIImage*) sourceImage toAreaInsidePath:(UIBezierPath*) maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceIn];
}
- (UIImage*) maskImage:(UIImage*) sourceImage toAreaOutsidePath:(UIBezierPath*) maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceOut];
}
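Hypothetical usage, producing the two pieces the question asks for (sourceImage and path as in the question):
UIImage *upperPart = [self maskImage:sourceImage toAreaInsidePath:path];
UIImage *lowerPart = [self maskImage:sourceImage toAreaOutsidePath:path];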
I tested clipping, and in a few different tests it was 25% slower than masking to achieve the same result as the maskImage:toAreaInsidePath: method in my other answer. For completeness I include it here, but please don't use it without a good reason.
- (UIImage*) clipImage:(UIImage*) sourceImage toPath:(UIBezierPath*) path
{
    // Create a new image of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);
    // Clipping means drawing only happens within the path.
    [path addClip];
    // Draw the image to the context.
    [sourceImage drawAtPoint:CGPointZero];
    // With drawing complete, store the composited image for later use.
    UIImage *clippedImage = UIGraphicsGetImageFromCurrentImageContext();
    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();
    return clippedImage;
}
This can be done, but it requires some trigonometry. Let's consider the case of the upper image. First, determine the bottommost end point of the UIBezierPath and use UIGraphicsBeginImageContext to get the top part of the image above the line.
Now, assuming that your line is straight, move pixel by pixel along the line, drawing vertical strokes of clearColor (the loop below handles the top portion; proceed along similar lines for the bottom portion):
for (int currentPixel_x = 0; currentPixel_x < your_ui_image_top.size.width; currentPixel_x++) {
    UIGraphicsBeginImageContext(your_ui_image_top.size);
    [your_ui_image_top drawInRect:CGRectMake(0, 0, your_ui_image_top.size.width, your_ui_image_top.size.height)];
    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 1.0);
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeClear);
    CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(), [UIColor clearColor].CGColor);
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, m*currentPixel_x + c);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, your_ui_image_top.size.height);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
    your_ui_image_top = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Your UIBezierPath will have to be converted to a straight line of the form y = m*x + c. The x in this equation is currentPixel_x above. Iterate through the width of the image, increasing currentPixel_x by 1 each time. next_y_point_on_your_line is calculated as:
next_y_point_on_your_line = m*currentPixel_x + c
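For reference, a small sketch of deriving m and c from two endpoints of the line; the endpoint values here are made up:
CGPoint p1 = CGPointMake(0.0, 40.0); // where the line enters the image (assumed)
CGPoint p2 = CGPointMake(320.0, 90.0); // where the line exits the image (assumed)
CGFloat m = (p2.y - p1.y) / (p2.x - p1.x); // slope; undefined for a vertical line
CGFloat c = p1.y - m * p1.x; // intercept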
Each vertical stroke will be 1 pixel wide, and its height will depend on how you traverse through them. After some iterations, everything below the line will have been cleared from the top image.
There are multiple ways to draw the clear strokes, and this is just one way of going about it. You can also use clear strokes that are parallel to the given path if that gives better results.
Another way is to set the alpha of the pixels below the line to 0.