NSObliquenessAttributeName ignored by CTFrameDraw() in drawRect:

Something strange is going on. I was working with NSAttributedString for some formatting, including slants and skews, and NSObliquenessAttributeName did the trick. But then I wanted to move to Core Text to take control of the frame the text is actually rendered in. Even before figuring it all out, I noticed that my NSObliquenessAttributeName is no longer being rendered. All my other attributes are still rendered, so I'm a bit confused.
- (void)drawSlanted
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    [[UIColor blackColor] setFill];

    NSAttributedString *text =
        [[NSAttributedString alloc] initWithString:@"This isn't slanted... but is stroked"
                                        attributes:@{NSObliquenessAttributeName: @10.0,
                                                     NSStrokeWidthAttributeName: @2.0}];

    // Flip coordinates so Core Text's bottom-left origin matches UIKit's.
    CGContextSetTextMatrix(context, CGAffineTransformIdentity);
    CGContextTranslateCTM(context, 0.0, CGRectGetHeight(self.bounds));
    CGContextScaleCTM(context, 1.0, -1.0);

    CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)text);
    CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, text.length),
                                                [UIBezierPath bezierPathWithRect:self.bounds].CGPath, NULL);

    CTFrameDraw(frame, context);

    CFRelease(frame);
    CFRelease(framesetter); // don't leak the framesetter
    CGContextRestoreGState(context);
}

In some sense, NSAttributedString supports arbitrary attributes. That is, you can put any attribute key-value pair you like in an attributes dictionary and NSAttributedString will dutifully store it for you. That includes attributes you make up.
However, NSAttributedString will not make use of attributes it doesn't understand to format or lay out the string. It only understands Cocoa's predefined attributes.
The same is true of Core Text. It only understands certain attributes. Unfortunately, the set of attributes that Core Text understands is not the same as the set that Cocoa and NSAttributedString understand. The set that Core Text understands is documented in the Core Text String Attributes Reference. It doesn't include an obliqueness attribute.
I'm not sure, but I think you need to use a font created with a transformation matrix to get oblique glyphs. (Of course, you should prefer proper italics unless you have a reason not to.)
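For what it's worth, here is a minimal sketch of that idea (the Helvetica name, 18-point size, and 0.25 skew factor are arbitrary choices for illustration): create a CTFont with a skewing matrix and use it as the kCTFontAttributeName value.

// Create a font whose transform matrix skews glyphs, faking an oblique face.
CGAffineTransform slant = CGAffineTransformMake(1.0, 0.0, 0.25, 1.0, 0.0, 0.0);
CTFontRef obliqueFont = CTFontCreateWithName(CFSTR("Helvetica"), 18.0, &slant);

NSAttributedString *slanted =
    [[NSAttributedString alloc] initWithString:@"This is slanted"
                                    attributes:@{(__bridge id)kCTFontAttributeName:
                                                     (__bridge id)obliqueFont}];
CFRelease(obliqueFont);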

Mavericks Style Tagging

I'm quite new to Cocoa and I'm trying to find out how to create something similar to the new tagging UI in Mavericks.
I assume I'll have to subclass NSTokenFieldCell to get the coloured dots or an icon on the tags. But how does the popup list work?
Thanks for your help!
Sadly, you'll have to roll your own. Almost all of the drawing taking place in NSTokenFieldCell is private, so adding any kind of ornamental elements would have to be done by you. If I remember correctly, NSTokenFieldCell uses an NSTokenTextView instead of the window's standard field editor. I'm not sure what's different about it, but I think it's mostly to deal with the specialized nature of "tokenizing" attributed strings. I think they just use NSAttachmentCell objects for the graphical tokens, and when the cell receives a -mouseDown: event, they show the menu.
The menu part would actually be pretty easy because you can add images to menu items like so:
NSMenuItem *redItem = [[NSMenuItem alloc] initWithTitle:@"Red"
                                                 action:@selector(chooseColorMenuItem:)
                                          keyEquivalent:@""];

// You could add an image from your app's Resources folder:
NSImage *redSwatchImage = [NSImage imageNamed:@"red-menu-item-swatch"];

// ----- or -----

// You could dynamically draw a color swatch and use that as its image:
NSImage *redSwatchImage = [NSImage imageWithSize:NSMakeSize(16.0, 16.0)
                                         flipped:NO
                                  drawingHandler:^BOOL(NSRect dstRect) {
    NSRect pathRect = NSInsetRect(dstRect, 0.5, 0.5); // Aligns border to integral values
    NSBezierPath *path = [NSBezierPath bezierPathWithOvalInRect:pathRect];
    NSColor *fillColor = [NSColor redColor];
    NSColor *strokeColor = [fillColor shadowWithLevel:0.5];

    [fillColor setFill];
    [path fill];

    [strokeColor setStroke];
    [path stroke];

    return YES;
}];

redItem.image = redSwatchImage;
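As for showing the menu when a token is clicked, popping it up from -mouseDown: could look something like this minimal sketch (the tokenMenu property is hypothetical; assume it's an NSMenu you build from items like redItem above):

- (void)mouseDown:(NSEvent *)event
{
    NSPoint location = [self convertPoint:[event locationInWindow] fromView:nil];
    // Show the menu at the click location.
    [self.tokenMenu popUpMenuPositioningItem:nil atLocation:location inView:self];
}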
With respect to the token drawing stuff, take my info with a grain of salt, because Apple's documentation on this stuff is pretty lacking, so everything I'm telling you is from personal struggles, cursing, and head-banging. Anyway, I'm sorry I couldn't bring you better news, but I guess it is what it is. Good luck.

Divide UIImage into two parts along a UIBezierPath

How can I divide this UIImage into two parts along the black line? The upper contour is defined by a UIBezierPath.
I need to get the two resulting UIImages. So is it possible?
The following routines create versions of a UIImage with either only the content inside a path, or only the content outside that path.
Both make use of the compositeImage method, which uses CGBlendMode. CGBlendMode is very powerful for masking anything you can draw against anything else you can draw. Calling compositeImage: with other blend modes can have interesting (if not always useful) effects. See the CGContext Reference for all the modes.
The clipping method I described in my comment to your OP does work and is probably faster, but only if you have UIBezierPaths defining all the regions you want to clip.
- (UIImage *)compositeImage:(UIImage *)sourceImage onPath:(UIBezierPath *)path usingBlendMode:(CGBlendMode)blend
{
    // Create a new image context of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);

    // First draw an opaque path (the default black fill is fine; only its alpha matters)...
    [path fill];

    // ...then composite the image against it.
    [sourceImage drawAtPoint:CGPointZero blendMode:blend alpha:1.0];

    // With drawing complete, store the composited image for later use.
    UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();

    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();

    return maskedImage;
}
- (UIImage *)maskImage:(UIImage *)sourceImage toAreaInsidePath:(UIBezierPath *)maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceIn];
}

- (UIImage *)maskImage:(UIImage *)sourceImage toAreaOutsidePath:(UIBezierPath *)maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceOut];
}
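For example, splitting the image from the question might look like this (sourceImage and dividingPath are assumed to exist; dividingPath is the closed path enclosing the upper region):

// The two halves: content inside the path, and everything outside it.
UIImage *upperPart = [self maskImage:sourceImage toAreaInsidePath:dividingPath];
UIImage *lowerPart = [self maskImage:sourceImage toAreaOutsidePath:dividingPath];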
I also tested clipping, and in a few different tests it was 25% slower than masking to achieve the same result as the maskImage:toAreaInsidePath: method above. For completeness I include it here, but please don't use it without a good reason.
- (UIImage *)clipImage:(UIImage *)sourceImage toPath:(UIBezierPath *)path
{
    // Create a new image context of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);

    // Clipping means drawing only happens within the path.
    [path addClip];

    // Draw the image into the context.
    [sourceImage drawAtPoint:CGPointZero];

    // With drawing complete, store the clipped image for later use.
    UIImage *clippedImage = UIGraphicsGetImageFromCurrentImageContext();

    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();

    return clippedImage;
}
This can be done, but it requires some trigonometry. Let's consider the case of the upper image. First, determine the bottommost end point of the UIBezierPath, and use UIGraphicsBeginImageContext to get the part of the image above that line.
Now, assuming that your line is straight, move pixel by pixel along it, drawing vertical strokes of clearColor (this loop handles the top portion; proceed similarly for the bottom portion):
for (int currentPixel_x = 0; currentPixel_x < your_ui_image_top.size.width; currentPixel_x++) {
    UIGraphicsBeginImageContext(your_ui_image_top.size);
    [your_ui_image_top drawInRect:CGRectMake(0, 0, your_ui_image_top.size.width, your_ui_image_top.size.height)];

    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 1.0);
    // kCGBlendModeClear erases to transparent regardless of the stroke color.
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeClear);
    CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(), [UIColor clearColor].CGColor);

    // Erase a 1-pixel vertical strip from the line down to the bottom edge.
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, m * currentPixel_x + c);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, your_ui_image_top.size.height);
    CGContextStrokePath(UIGraphicsGetCurrentContext());

    your_ui_image_top = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Your UIBezierPath will have to be converted to a straight line of the form y = m*x + c. The x in this equation will be currentPixel_x above. Iterate through the width of the image, increasing currentPixel_x by 1 each time. next_y_point_on_your_line will be calculated as:
next_y_point_on_your_line = m*currentPixel_x + c
Each vertical stroke will be 1 pixel wide, and its height will depend on how you traverse the line. After enough iterations, the image will be clipped roughly along the line.
There are multiple ways to draw the clear strokes, and this is just one way of going about it. You could also draw clear strokes parallel to the given path if that gives better results.
Another way is to set the alpha of the pixels below the line to 0, as sketched below.
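A rough sketch of that pixel-level approach, under the same straight-line assumption y = m*x + c in top-left-origin image coordinates (sourceImage, m, and c are assumed to exist, and the image is treated as RGBA8):

CGImageRef cgImage = sourceImage.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerRow = width * 4;
uint8_t *pixels = calloc(height, bytesPerRow);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

for (size_t x = 0; x < width; x++) {
    double yEdge = m * (double)x + c;
    if (yEdge < 0) yEdge = 0;
    if (yEdge > (double)height) yEdge = height;
    // Row 0 of the buffer is the top of the image, so larger y means "below the line".
    for (size_t y = (size_t)yEdge; y < height; y++) {
        // Premultiplied alpha: zero all four channels to make the pixel transparent.
        memset(pixels + y * bytesPerRow + x * 4, 0, 4);
    }
}

CGImageRef maskedCGImage = CGBitmapContextCreateImage(ctx);
UIImage *topImage = [UIImage imageWithCGImage:maskedCGImage];

CGImageRelease(maskedCGImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixels);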

NSBackgroundColorAttributeName-like attribute in NSAttributedString on iOS?

I was planning on using NSAttributedString to highlight the portions of strings that match a user's search query. However, I can't find an iOS equivalent of NSBackgroundColorAttributeName: there's no kCTBackgroundColorAttributeName. Does such a thing exist, similar to the way NSForegroundColorAttributeName becomes kCTForegroundColorAttributeName?
No, such an attribute doesn't exist in Core Text; you'll have to draw your own rectangles underneath the text to simulate it.
Basically, you'll have to figure out which rectangle(s) to fill for a given range in the string. If you do your layout with a CTFramesetter that produces a CTFrame, you need to get its lines and their origins using CTFrameGetLines and CTFrameGetLineOrigins.
Then iterate over the lines and use CTLineGetStringRange to find out which lines are part of the range you want to highlight. To get the rectangles to fill, use CTLineGetTypographicBounds (for the height) and CTLineGetOffsetForStringIndex (for the horizontal offset and width).
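A minimal sketch of that approach, to be run before CTFrameDraw() (frame, context, and highlightRange are assumed to exist, and the context is already flipped for Core Text; note that the line origins are relative to the frame's path bounds):

CFArrayRef lines = CTFrameGetLines(frame);
CFIndex lineCount = CFArrayGetCount(lines);
CGPoint origins[lineCount];
CTFrameGetLineOrigins(frame, CFRangeMake(0, 0), origins);

for (CFIndex i = 0; i < lineCount; i++) {
    CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, i);
    CFRange lineRange = CTLineGetStringRange(line);
    CFIndex highlightEnd = highlightRange.location + highlightRange.length;

    // Skip lines that don't intersect the range we want to highlight.
    if (lineRange.location >= highlightEnd ||
        lineRange.location + lineRange.length <= highlightRange.location) {
        continue;
    }

    // Clamp the highlight to this line, then convert string indexes to x offsets.
    CFIndex start = MAX(lineRange.location, highlightRange.location);
    CFIndex end = MIN(lineRange.location + lineRange.length, highlightEnd);
    CGFloat startX = CTLineGetOffsetForStringIndex(line, start, NULL);
    CGFloat endX = CTLineGetOffsetForStringIndex(line, end, NULL);

    CGFloat ascent, descent, leading;
    CTLineGetTypographicBounds(line, &ascent, &descent, &leading);

    CGRect highlightRect = CGRectMake(origins[i].x + startX,
                                      origins[i].y - descent,
                                      endX - startX,
                                      ascent + descent);
    CGContextSetFillColorWithColor(context, [UIColor yellowColor].CGColor);
    CGContextFillRect(context, highlightRect);
}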
NSBackgroundColorAttributeName is available in iOS 6 and later, and you can use it in the following way:
[_attributedText addAttribute: NSBackgroundColorAttributeName value:[UIColor yellowColor] range:textRange];
[_attributedText drawInRect:rect];
drawInRect: supports NSBackgroundColorAttributeName and all the other NS*AttributeName attributes available in iOS 6.
For CTFrameDraw() there is no support for background text color.
Code:
- (void)drawRect:(CGRect)rect
{
    // First draw selection / marked text, then draw text.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    // Note: the coordinate flip below is needed for the Core Text path (the
    // commented-out CTFrameDraw); NSAttributedString's drawInRect: assumes
    // UIKit's default top-left-origin coordinates.
    CGContextSetTextMatrix(context, CGAffineTransformIdentity);
    CGContextTranslateCTM(context, 0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);

    [_attributedText drawInRect:rect];

    CGContextRestoreGState(context);
    // CTFrameDraw(_frame, UIGraphicsGetCurrentContext());
}

Odd problem with NSImage -lockFocusFlipped:

I'm using NSImage's -lockFocusFlipped: method to do some drawing into an image. My code looks like this:
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(256, 256)];
[image lockFocusFlipped:YES];
NSShadow *shadow = [[NSShadow alloc] init];
[shadow setShadowColor:[NSColor blackColor]];
[shadow setShadowBlurRadius:6.0];
[shadow setShadowOffset:NSMakeSize(0, 3)];
[shadow set];
NSRect shapeRect = NSMakeRect(0, 0, 256, 100);
[[NSColor redColor] set];
NSRectFill(shapeRect);
[image unlockFocus];
This code works to a certain point. I can confirm that the context is indeed flipped because [[NSGraphicsContext currentContext] isFlipped] returns YES, and also because shapeRect is drawn at the right position (using the top left corner as the origin). That said, the NSShadow does not seem to respect the flipped status of the context. Setting the shadow offset to (0, 3) should move the shadow down when the context is flipped, but it actually moves it up (which is what would happen in a standard non-flipped context).
This problem seems specific to -lockFocusFlipped:, because when I draw using this same code into a CALayer with a flipped coordinate system, the shadow is drawn just fine (respecting the flip). Documentation on -lockFocusFlipped: also seems to be quite vague. This is all it says in the NSImage class documentation:
Prepares the image to receive drawing commands using the specified flipped state.
And I also found this note in the Snow Leopard AppKit Release Notes:
There are cases, for example drawing directly via NSLayoutManager, that require a flipped context. To cover this case, we add
- (void)lockFocusFlipped:(BOOL)flipped;
This doesn't alter the state of the image itself, only the context on which focus is locked. It means that (0,0) is at the top left and positive along the Y-axis is down in the locked context.
None of the docs seem to explain NSShadow's behaviour in this case. And through further testing, it seems NSGradient does not respect the flipped state of the drawing context used by NSImage either.
Any insight is greatly appreciated :-)
From the NSShadow class reference:
Shadows are always drawn in the default user coordinate space, regardless of any transformations applied to that space. This means that rotations, translations and other transformations of the current transformation matrix (the CTM) do not affect the resulting shadow.
And that's what flipping ultimately is: Translate up, scale back the other way.
There's no such statement for NSGradient, so I'd suggest filing a bug about that one.
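In the meantime, a workaround that should do the trick is to compensate manually: since the offset is interpreted in the default (non-flipped) space, negate its Y component whenever the current context is flipped. A minimal sketch:

BOOL flipped = [[NSGraphicsContext currentContext] isFlipped];

NSShadow *shadow = [[NSShadow alloc] init];
[shadow setShadowColor:[NSColor blackColor]];
[shadow setShadowBlurRadius:6.0];
// Negate the vertical offset in flipped contexts so the shadow still falls downward.
[shadow setShadowOffset:NSMakeSize(0, flipped ? -3 : 3)];
[shadow set];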

NSImageRep confusion

I have an NSImage that came from a PDF, so it has one representation, of type NSPDFImageRep. I call [image setDataRetained:YES] to make sure that it remains an NSPDFImageRep. Later, I want to change the page, so I get the rep and set the current page. This is fine.
The problem is that when I draw the image, only the 1st page comes out.
My impression is that when I draw an NSImage, it picks a representation and draws that representation. Now, the image only has one rep, so that's the one being drawn, and that's the PDF rep. So why, when I draw the image, is it not drawing the correct page?
HOWEVER, when I draw the representation itself, I get the correct page.
What am I missing?
NSImage caches the NSImageRep when the image is first displayed. In the case of NSPDFImageRep, the setCacheMode: message has no effect, so the page displayed will always be the first page. See this guide for more information.
You then have two solutions:
1. Draw the representation directly.
2. Send the -recache message to the NSImage to force re-rasterization of the selected page.
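For the second option, a minimal sketch (assuming image is the PDF-backed NSImage from the question):

NSPDFImageRep *pdfRep = (NSPDFImageRep *)[[image representations] objectAtIndex:0];
[pdfRep setCurrentPage:2];  // select the page you want
[image recache];            // discard the cached raster so the new page gets drawn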
An alternative mechanism to draw a PDF is to use the CGPDF* functions. To do this, use CGPDFDocumentCreateWithURL to create a CGPDFDocumentRef object. Then, use CGPDFDocumentGetPage to get a CGPDFPageRef object. You can then use CGContextDrawPDFPage to draw the page into your graphics context.
You may have to apply a transform to ensure that the document ends up sized like you want. Use a CGAffineTransform and CGContextConcatCTM to do this.
Here is some sample code pulled out of one of my projects:
// use your own constants here
NSString *path = #"/path/to/my.pdf";
NSUInteger pageNumber = 14;
CGSize size = [self frame].size;
// if we're drawing into an NSView, then we need to get the current graphics context
CGContextRef context = (CGContextRef)([[NSGraphicsContext currentContext] graphicsPort]);
CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)path, kCFURLPOSIXPathStyle, NO);
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL(url);
CGPDFPageRef page = CGPDFDocumentGetPage(document, pageNumber);
// in my case, I wanted the PDF page to fill in the view
// so we apply a scaling transform to fir the page into the view
double ratio = size.width / CGPDFPageGetBoxRect(page, kCGPDFTrimBox).size.width;
CGAffineTransform transform = CGAffineTransformMakeScale(ratio, ratio);
CGContextConcatCTM(context, transform);
// now we draw the PDF into the context
CGContextDrawPDFPage(context, page);
// don't forget memory management!
CGPDFDocumentRelease(document);