I have two instances of a CALayer subclass.
The only difference between them is this line:
[self setTransform:CATransform3DMakeScale(2, 2, 2)];
What else do I need so that the large layer looks good at scale 2x ?
PS (to avoid any confusion): the layers also include a few control buttons, shadows, and rounded corners to mimic the look of windows in a windowing system, but they are not NSWindow instances.
The short answer is, don't use transforms. Transforms scale the layer by magnifying it, without re-rendering.
You could get a very similar effect by using a CAShapeLayer and animating changes to the path. That would give you sharp rendering, however, because path animation does re-render the pixels.
I say "similar" effect because CAShapeLayers use a lineWidth property for the whole layer. You can animate the line width between values, and use fractional values, but you'll have to do some fine-tuning to get the line thickness to animate up and down in proportion to the size of the shape. Another consideration is that the graphics system uses anti-aliasing to draw fractional width paths, so when the line width is not an integer value they will look slightly soft. You could turn off antialiasing, but then they would look really jaggy.
Related
I'm writing an app that could make good use of the Apple Watch's fitness tracker design, here:
So far, I've created the basic outline which is just a CAShapeLayer with a CGPath of an ellipse. I use strokeStart and strokeEnd to animate the progress. My problem comes when applying a gradient to the outline. How do I apply a gradient like above to the stroke of a CGPath?
The cleanest way to do this without having to drop down to Core Graphics or GL is to create a layer containing the angle gradient that you want the ring filled with, mask it with a CAShapeLayer containing your circular path (with the appropriate line width and cap settings), then, as you’re currently doing, use the shape layer’s strokeEnd property to set the “fill” percentage. Note that there isn’t a built-in way to create an angle gradient—you can use one of the suggestions in this answer for that.
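A rough sketch of that layer structure; the angle-gradient layer itself is assumed to come from one of the linked suggestions, and the sizes here are placeholders:

#import <QuartzCore/QuartzCore.h>

CALayer *angleGradientLayer = [CALayer layer];           // placeholder: e.g. set .contents
angleGradientLayer.frame = CGRectMake(0, 0, 200, 200);   // to a pre-rendered angle gradient

CAShapeLayer *ringMask = [CAShapeLayer layer];
ringMask.frame = angleGradientLayer.bounds;
CGRect ringRect = CGRectInset(ringMask.bounds, 10, 10);  // inset by half the line width
CGPathRef ringPath = CGPathCreateWithEllipseInRect(ringRect, NULL);
ringMask.path = ringPath;
CGPathRelease(ringPath);
ringMask.fillColor = nil;                                // stroke only
ringMask.strokeColor = [UIColor blackColor].CGColor;     // any opaque color works for a mask
ringMask.lineWidth = 20.0;
ringMask.lineCap = kCALineCapRound;

angleGradientLayer.mask = ringMask;
ringMask.strokeEnd = 0.65;                               // reveals 65% of the gradient ring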
edit: Also, you’ll need a pair of semicircular “cap” images, one at each end of the ring—as the fill percentage gets close to 100%, the region at the top will reveal the discontinuity between the start and end color. In your example image above, you’d need a red semicircle oriented like this ( at the start, and a pink one oriented like this ) with a translation/rotation transform tracking the end.
additional edit: Also also, since the end-cap semicircle will be moving along the gradient, you’ll need it to change color, interpolating from the start color to the end color as the fill amount goes from 0% to 100%. Best way to do that is with a shape layer with a semicircular path, since you can set the fillColor of that without having to redraw image contents.
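A sketch of that moving end cap as a shape layer; the semicircle geometry, ring dimensions, and endpoint colors are assumptions, not the poster's actual values:

CAShapeLayer *endCap = [CAShapeLayer layer];
endCap.bounds = CGRectMake(0, 0, 20, 20);                // match the ring's line width
UIBezierPath *half = [UIBezierPath bezierPathWithArcCenter:CGPointMake(10, 10)
                                                    radius:10
                                                startAngle:-M_PI_2
                                                  endAngle:M_PI_2
                                                 clockwise:YES];
[half closePath];
endCap.path = half.CGPath;

// Interpolate fillColor from the start color to the end color as the fill
// amount goes 0 -> 1; changing fillColor doesn't force a content redraw.
CGFloat progress = 0.65;                                 // example fill amount
endCap.fillColor = [UIColor colorWithRed:1.0
                                   green:0.4 * progress
                                    blue:0.7 * progress
                                   alpha:1.0].CGColor;   // red -> pink (assumed colors)

// Track the stroke's end point around a ring of radius 90 centered at (100, 100).
CGFloat angle = progress * 2 * M_PI - M_PI_2;            // ring starts at 12 o'clock
endCap.position = CGPointMake(100 + 90 * cos(angle), 100 + 90 * sin(angle));
[endCap setAffineTransform:CGAffineTransformMakeRotation(angle)];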
We tried this for an iOS app, but quickly abandoned it because performance bogs down fast.
I think Apple is using images, as they do in the Lister example.
I am developing a Mac app and want to draw a radar-like view, but I can find no method that draws a gradient along an arc. The existing gradient methods only draw in a single direction.
What you want is an angle gradient. I've written a Core Image filter that generates an angle gradient; give it an opaque color (e.g., green) for the start color and the completely-transparent version of that color for the middle and end colors.
(The filter's output is actually infinite in extent and centered at the origin, so you'll need to mask it out to a circle and use an affine transform at one level or another to get it into the right position.)
Extra credit: In the kernel code for that filter (near the start of the .m), there's a line that starts the gradient at straight-up (90°) rather than straight-right. You could change the code, both of the filter and of the kernel, to make this a parameter (like inputStartColor et al) that you could vary over time, using a CABasicAnimation or something similar.
You'll need to use a combination of axial and radial gradients and probably also some clipping paths. You can find all of this documented here.
You can also use colorWithPatternImage: to stroke any lines you're drawing.
My process looks like this (sketched in code after the list):
Define a rectangle I want to draw in, using point dimensions.
Define CGFloat scale = [[UIScreen mainScreen] scale].
Multiply the rectangle's size by the scale.
Create an image context of that size using CGBitmapContextCreate.
Draw within the image context.
Call CGBitmapContextCreateImage.
Call UIImage imageWithCGImage:scale:orientation: with the appropriate scale.
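A sketch of those steps; the pixel format, the CTM scaling (so the drawing code can keep working in points), and the drawing itself are assumptions, not the poster's exact code:

CGRect rect = CGRectMake(0, 0, 100, 50);                  // point dimensions
CGFloat scale = [[UIScreen mainScreen] scale];

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,
                                         (size_t)(rect.size.width * scale),
                                         (size_t)(rect.size.height * scale),
                                         8, 0, colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Scale user space so the drawing code below can keep working in points.
CGContextScaleCTM(ctx, scale, scale);

CGContextSetLineWidth(ctx, 2.0);  // 2 points = 4 pixels at 2x, since user space is scaled
// ... path construction and stroking here ...

CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *image = [UIImage imageWithCGImage:cgImage
                                     scale:scale
                               orientation:UIImageOrientationUp];
CGImageRelease(cgImage);
CGContextRelease(ctx);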
I had thought this always resulted in perfect images on both retina and older screens, but I haven't been paying close attention to line contrast/thickness. Generally the strokes have had high contrast against the fill, so I didn't pay attention until now, with low contrast between a line and its fill.
I think perhaps I'm misunderstanding user space, but I thought it was simply a direct conversion through whatever scaling and transforms are applied. In my particular case there are no scalings or transforms applied except for the retina screen's double scaling.
Trying to render a 2-pixel line rather than 1-pixel is easier to explain: when I call
CGContextSetLineWidth(context, 2), the line is rendered 1 pixel thick in the retina simulator. 1 pixel! On a retina display this should be two pixels.
CGContextSetLineWidth(context, 2 * scale) produces a line that is two pixels wide on a retina screen, but I'm expecting four pixels.
CGContextSetLineWidth(context, 1) produces a 1-pixel-wide line that is partly transparent. I understand about the stroke straddling the path, so I prefer talking in terms of 2-pixel-wide strokes with the paths on pixel boundaries.
I need to understand why the rendered line width is being divided in half.
My fault. I solve 99% of my own bugs on my own just after I post publicly about them.
The drawing code includes CGContextClip after constructing and copying a path. After that, a fill may be applied, gradient or otherwise, then the line is drawn, so everything is nice and tidy. I was focusing on the math and the specific drawing code and did not notice the clipping line, but that clip effectively halves the stroke width: the half of the stroke that straddles outside the path gets clipped away. Normally I catch logic bugs like this immediately, but since it was posted to SO, it's appropriate the answer is here too.
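A minimal illustration of the bug (not the original code); ctx and path are assumed to exist:

CGContextSaveGState(ctx);
CGContextAddPath(ctx, path);
CGContextClip(ctx);                        // clip to the shape for the fill
// ... fill / gradient drawing here ...
CGContextRestoreGState(ctx);               // restore BEFORE stroking, or the outer
                                           // half of the stroke is clipped away

CGContextAddPath(ctx, path);               // the clip consumed the path, so add it again
CGContextSetLineWidth(ctx, 2.0);
CGContextStrokePath(ctx);                  // full-width stroke, centered on the path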
I've got my layer hosted workspace working so that using CATiledLayers for hundreds of images works nicely when the workspace is zoomed out substantially. All the images use lower resolution representations, and my application is much more responsive when panning and zooming large numbers of images.
However, within my application I also provide the user the ability to resize layers with a resize handle. Before I converted image layers to use CATiledLayers I was doing layer resizes by manipulating the bounds of the image layer according to the resize delta (mouse drag), and it worked well. But now with CATiledLayers in place, this is causing CATiledLayers to get confused when I mix resizing of layers through bounds manipulation and zooming/unzooming the workspace through scale transforms.
Specifically, if I resize a CATiledLayer to half the width/height size (1/4 the area), the image inside it will suddenly scale to a further 1/2 the resized frame leaving 3/4 of the frame empty. This seems to be exactly when the inner CATiledLayer logic gets invoked to provide a lower resolution image representation. It works fine if I don't touch the resize handler and just zoom/unzoom the workspace.
Is there a way to make zooming/resizing play nice together with CATiledLayers, or am I going to have to convert my layer resize logic to use scale transforms instead of bounds manipulations?
I ended up solving this by converting my layer resize logic to use scale transforms: I overrode the setBounds: method of my custom image layer class to scale the CATiledLayer it hosts and reposition it accordingly. It is also important to set the CATiledLayer's autoresizingMask to kCALayerNotSizable, since we are handling resizes manually in setBounds:.
Note: be sure to call the superclass's implementation of setBounds:.
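A sketch of that override; the class and property names are placeholders, and the repositioning assumes the tiled layer stays centered:

@interface ImageLayer : CALayer
@property (nonatomic, strong) CATiledLayer *tiledLayer; // set up elsewhere, with
                                                        // autoresizingMask = kCALayerNotSizable
@end

@implementation ImageLayer

- (void)setBounds:(CGRect)bounds {
    CGRect oldBounds = self.bounds;
    [super setBounds:bounds];               // per the note above, always call super

    if (CGRectIsEmpty(oldBounds)) return;   // nothing to scale from yet

    // Scale the tiled layer to fill the new bounds instead of changing its
    // bounds, so its level-of-detail machinery isn't triggered by the resize.
    CGFloat sx = bounds.size.width / oldBounds.size.width;
    CGFloat sy = bounds.size.height / oldBounds.size.height;
    self.tiledLayer.transform = CATransform3DScale(self.tiledLayer.transform, sx, sy, 1);
    self.tiledLayer.position = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
}

@end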
Hello, I am having a hard time making this UI element look the way I want (see screenshot). Notice the image on the right: the line width and darkness look inconsistent compared to the image on the left (which happens to be a screen grab from Safari), where the border width is more consistent. How does Apple make its lines so perfect?
I'm using a CALayer and the Core Graphics API to draw the image on the right. Is it possible to draw such perfect lines with the standard apis?
The problem with drawing a 1-pixel path is that Quartz draws paths on an exact point grid, starting from {0,0}. This means that if you stroke a vertical path starting at {10,10} with a 1-point width, half of that line will render in the pixel to the left of the coordinate and half in the pixel to the right, causing a blurring effect.
You should therefore shift your drawing by {0.5,0.5} if you want lines to draw on exact pixels.
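A sketch of that half-pixel shift for a 1-point vertical hairline; the context and coordinates are assumed:

// Shift the path half a point so the 1-point stroke straddles
// x = 10.0 ... 11.0 and fills exactly one pixel column.
CGContextSetLineWidth(ctx, 1.0);
CGContextMoveToPoint(ctx, 10.5, 10.0);
CGContextAddLineToPoint(ctx, 10.5, 90.0);
CGContextStrokePath(ctx);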
You can definitely draw what you want with Quartz.
Apple uses images for the tab elements.