CGContextShowTextAtPoint renders upside down - objective-c

I am trying to draw some text via Quartz onto an NSView via CGContextShowTextAtPoint(). This worked well until I overrode (BOOL)isFlipped to return YES in my NSView subclass in order to position the origin in the upper-left for drawing. The text draws in the expected area but the letters are all inverted. I also tried the (theoretically, at least) equivalent of flipping my CGContext and translating by the context's height.
e.g.:
// drawRect:
CGContextScaleCTM(theContext, 1, -1);
CGContextTranslateCTM(theContext, 0, -dirtyRect.size.height);
This yields the same result.
Many suggestions for similar problems have pointed to modifying the text matrix. I've tried setting the text matrix to the identity matrix, applying an additional inversion to it, and doing both together. All of these have led to even stranger rendering of the text (often just a fragment shows up).
Another suggestion I saw was to simply steer clear of this function in favor of other means of drawing text (e.g. NSString's drawing methods). However, this is being done amongst mostly C++ / C code and I'd like to stay at that level if possible.
Any suggestions are much appreciated and I'd be happy to post more code if needed.
Thanks,
Sam

Basically it's because the coordinate system in Core Graphics on iOS is flipped (0,0 in the top-left), as opposed to the one on the Mac (where 0,0 is in the bottom-left). The solution is to set the text transform matrix like this:
CGContextSetTextMatrix(context, CGAffineTransformMake(1.0,0.0, 0.0, -1.0, 0.0, 0.0));

You need to use the view's bounds rather than the dirtyRect, and perform the translation before the scale:
CGContextTranslateCTM(theContext, 0, NSHeight(self.bounds));
CGContextScaleCTM(theContext, 1, -1);

Turns out the answer was to modify the text matrix. The weird "fragments" showing up instead of the text appeared because the font size (set via CGContextSelectFont()) was too small once the default text matrix was replaced. The initial matrix had, for some reason, a large scale transform, so small text sizes looked fine while the matrix was unmodified; when it was replaced with an inverse scale (1, -1) or an identity matrix, however, they became unreadably small.

Related

How can I rotate and move a UIView at the same time?

I'm trying to make a UIView move and rotate at the same time.
Here's my code:
_viewToDrag.frame = CGRectMake(x, y, size, width);
_viewToDrag.transform = CGAffineTransformMakeRotation(-M_PI_2 * multiplier);
As the multiplier increases (0.0 .. 1.0), the view stretches unpredictably.
This post seems to answer my question:
Rotating and Moving a UIImageView (CocoaTouch)
But, since I'm having trouble following the transform code, can someone explain what this translates to in code:
Change the "center" property instead.
I would have made a comment but my reputation doesn't allow it.
Always consult the documentation. Apple says, in a big box with an exclamation mark graphic next to it, in the section on frame:
Warning: If the transform property is not the identity transform, the
value of this property is undefined and therefore should be ignored.
Underneath center it says:
The center is specified within the coordinate system of its superview
and is measured in points. Setting this property changes the values of
the frame properties accordingly.
So the answer you link to is incorrect. Given its one-sentence nature with no reference to sources, the author probably tried it once on whichever version of iOS he happened to have installed, saw that it appeared to work, and jumped straight to an unsupported conclusion.
The other answer is correct. You need to build the translation into your transform. Likely by throwing in a use of CGAffineTransformTranslate.
EDIT: so, to set a transform with a translation of (1000, 0) and a rotation of -M_PI/2:
_viewToDrag.transform = CGAffineTransformRotate(
CGAffineTransformMakeTranslation(1000.0, 0.0),
-M_PI_2);
The frame has a center property as well as origin and size.
The center is a CGPoint, just like origin, except it marks the center of the frame instead of the upper-left corner.

How to force a CALayer to redraw at a higher resolution?

I have two instances of a CALayer subclass.
The only difference between them is this line:
[self setTransform:CATransform3DMakeScale(2, 2, 2)];
What else do I need so that the large layer looks good at scale 2x ?
PS (to avoid any confusion): the layers also include a few control buttons, shadows, and rounded corners to mimic the look of windows in a windowing system, but they are not NSWindow instances.
The short answer is, don't use transforms. Transforms scale the layer by magnifying it, without re-rendering.
You could get a very similar effect by using a CAShapeLayer and animating changes to the path. That would give you sharp rendering, because path animation does re-render the pixels.
I say "similar" effect because CAShapeLayers use a lineWidth property for the whole layer. You can animate the line width between values, and use fractional values, but you'll have to do some fine-tuning to get the line thickness to animate up and down in proportion to the size of the shape. Another consideration is that the graphics system uses anti-aliasing to draw fractional width paths, so when the line width is not an integer value they will look slightly soft. You could turn off antialiasing, but then they would look really jaggy.

Why is line width in CoreGraphics on retina display rendered half width?

My process looks like this:
define a rectangle I want to draw in, using point dimensions.
define CGFloat scale = [[UIScreen mainScreen] scale]
Multiply the rectangle's size by the scale
Create an image context of the rectangle size using CGBitmapContextCreate
Draw within the image context
call CGBitmapContextCreateImage
call UIImage imageWithCGImage:scale:orientation: with the appropriate scale.
I had thought this always resulted in perfect images on both retina and older screens, but I haven't been paying close attention to line contrast/thickness. Generally the strokes have a high contrast to the fill, so I didn't pay attention until now, with low contrast between a line and fill.
I think perhaps I'm misunderstanding the user space, but I thought it was simply a direct conversion through the scaling, and transforms applied. There are no scaling and transforms applied in my particular case except for the retina screen double scaling.
Trying to render a 2-pixel line rather than 1-pixel is easier to explain: when I call
CGContextSetLineWidth(context, 2), the line is rendered 1 pixel thick on the retina simulator. 1 pixel! But this should be two pixels on a retina display.
CGContextSetLineWidth(context, 2 * scale) produces a line that is two pixels wide on a retina screen, but I'm expecting it to be 4 pixels.
CGContextSetLineWidth(context, 1) produces a 1-pixel-wide line that is partly transparent. I understand about the stroke straddling the path, so I prefer talking in terms of 2-pixel-wide strokes with the paths on pixel boundaries.
I need to understand why the rendered line width is being divided in half.
My fault. I solve 99% of my own bugs on my own just after I post publicly about them.
The drawing code includes CGContextClip after constructing and copying a path. After that, a fill may be applied, gradient or otherwise, then the line drawn, so everything is nice and tidy. I was focusing on the math and specific drawing code, and did not notice the clipping line, but that would effectively halve the stroke width. Normally I catch logic bugs like this immediately, but because it was posted to SO, it's appropriate the answer is here too.

Can someone explain the CALayer contentsRect property's coordinate system to me?

I realize that the contentsRect property of CALayer (documentation here) allows one to define how much of the layer to use for drawing, but I don't think I understand how the coordinate system works.
It seems that when the width/height are smaller, the area used for content is bigger and vice versa. Similarly, negative x,y positions seem to move the content area down and to the right which is the opposite of my intuition.
Can someone explain why this is? I'm sure there is a good reason, but I assume I'm missing some graphics programming background.
the contentsRect property of CALayer (documentation here) allows one to define how much of the layer to use for drawing
No, you're thinking about it incorrectly.
The contentsRect specifies which part of the contents image will be displayed in the layer.
That part is then arranged in the layer according to the contentsGravity property.
If this is kCAGravityResize, the default, this will cause the part to be resized to fit the layer. That would explain the counterintuitive behavior you're seeing -- you make contentsRect smaller, but the layer appears to be the same size, and it appears to "zoom in" on the selected part of the image. You might find it easier to understand if you set contentsGravity to kCAGravityCenter, which won't resize.
Most of the time, you would set the contentsRect to some sub-rect of the identity rect { {0, 0}, {1, 1} }, so you choose to see only part of the contents.
(Think of these as percentages if you like -- if contentsRect has a size of {0.5, 0.5}, you're choosing 50% of the contents.)
If part of the contentsRect goes outside the identity rect, then CA will extend the edge pixels of the contents outwards. This is handy in some cases, but it's not something you'd use on its own -- you'd use it in combination with a mask or with some other layers to achieve some effect.
The contentsRect property is measured in unit coordinates.
Unit coordinates are specified in the range 0 to 1, and are relative values (as opposed to absolute values like points and pixels). In this case, they are relative to the backing image’s dimensions. The default contentsRect is {0, 0, 1, 1}, which means that the entire backing image is visible by default. If we specify a smaller rectangle, the image will be clipped.
It is actually possible to specify a contentsRect with a negative origin or with dimensions larger than {1, 1}. In this case, the outermost pixels of the image will be stretched to fill the remaining area.
You can find more information in Nick Lockwood's book "iOS CoreAnimation Advanced Techniques"

iOS Quartz/CoreGraphics drawing feathered stroke

I am drawing a path into a CGContext following a set of points collected from the user. There seems to be some random input jitter causing some of the line edges to look jagged. I think a slight feather would solve this problem. If I were using OpenGL ES I would simply apply a feather to the sprite I am stroking the path with; however, this project requires me to stay in Quartz/CoreGraphics and I can't seem to find a similar solution.
I have tried drawing 5 lines with each line slightly larger and more transparent to approximate a feather. This produces a bad result and slows performance noticeably.
This is the line drawing code:
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), (int)lastPostionDrawing1.x, (int)lastPostionDrawing1.y);
CGContextAddCurveToPoint(UIGraphicsGetCurrentContext(), ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, lastPostionDrawing2.x, lastPostionDrawing2.y);
[currentPath addCurveToPoint:CGPointMake(lastPostionDrawing2.x - (int)furthestLeft.x + (int)penSize, lastPostionDrawing2.y) controlPoint1:CGPointMake(ctrl1_x, ctrl1_y) controlPoint2:CGPointMake(ctrl2_x, ctrl2_y)];
I'm going to go ahead and assume that your CGContext still has anti-aliasing turned on, but if not, then that's the obvious first thing to try, as @Davyd's comment suggests: CGContextSetShouldAntialias is the function of interest.
Assuming that's not the problem, and the line is being anti-aliased by the context but you still want something 'softer': I can think of a couple of ways to do this that should be faster than stroking 5 times.
First, you can try getting the stroked path (i.e. a path that describes the outline of the stroke of the current path) using CGContextReplacePathWithStrokedPath. You can then fill this path with a gradient (or whatever other fill technique gives the desired results). This will work well for straight lines, but won't be straightforward for curved paths (since the gradient fills the area of the stroked path, and will be either linear or radial).
Another, perhaps less obvious, option might be to abuse CG's shadow drawing for this purpose. The function you want to look up is CGContextSetShadowWithColor. Here's the method:
Save the GState: CGContextSaveGState
Get the bounding box of the original path
Copy the path, translating it away from itself by 2.0 * bbox.width using CGPathCreateCopyByTransformingPath (note: use the X direction only, that way you don't need to worry about flips in the context)
Clip the context to the original bbox using CGContextClipToRect
Set a shadow on the context with CGContextSetShadowWithColor:
Some minimal blur (Start with 0.5 and go from there. The blur parameter is non-linear, and IME it's sort of a guess and check operation)
An offset equal to -2.0 * bbox width, and 0.0 height, scaled to base space. (Note: these offsets are in base space. This will be maddening to figure out, but assuming you're not adding your own scale transforms, the scale factor will either be 1.0 or 2.0, so practically speaking, you'll be setting an offset.width of either -2.0*bbox.width or -4.0*bbox.width)
A color of your choosing.
Stroke the translated-away path.
Pop the GState CGContextRestoreGState
This should leave you with "just" the shadow, which you can hopefully tweak to achieve the results you want.
All that said, CG's shadow drawing performance is, IME, less than completely awesome, and less than completely deterministic. I would expect it to be faster than stroking the path 5 times with 5 different strokes, but not overwhelmingly so.
It'll come down to how much achieving this effect is worth to you.