Why is line width in CoreGraphics on retina display rendered half width? - core-graphics

My process looks like this (a rough code sketch follows the list):
Define a rectangle I want to draw in, using point dimensions.
Define CGFloat scale = [[UIScreen mainScreen] scale].
Multiply the rectangle's size by the scale.
Create an image context of that scaled size using CGBitmapContextCreate.
Draw within the image context.
Call CGBitmapContextCreateImage.
Call UIImage imageWithCGImage:scale:orientation: with the appropriate scale.
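Here is a minimal sketch of that flow, assuming a purely illustrative 100 x 50 point rectangle and an RGBA bitmap; the actual drawing is elided:
CGRect rect = CGRectMake(0, 0, 100, 50);                 // point dimensions (illustrative)
CGFloat scale = [[UIScreen mainScreen] scale];           // 1.0 or 2.0
size_t pixelWidth = (size_t)(rect.size.width * scale);
size_t pixelHeight = (size_t)(rect.size.height * scale);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, pixelWidth, pixelHeight, 8, 0,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// ... draw within the image context; with no CTM scaling applied,
// coordinates and line widths here are in pixels of the bitmap ...

CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage
                                     scale:scale
                               orientation:UIImageOrientationUp];
CGImageRelease(cgImage);
CGContextRelease(context);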
I had thought this always resulted in pixel-perfect images on both Retina and older screens, but I hadn't been paying close attention to line contrast or thickness. Generally my strokes have high contrast against the fill, so I didn't notice until now, when a line and fill with low contrast made it obvious.
Perhaps I'm misunderstanding user space, but I thought it was simply a direct conversion through whatever scaling and transforms are applied. In my particular case there is no scaling or transform other than the 2x Retina scaling.
Trying to render a 2-pixel line rather than a 1-pixel one makes this easier to explain. When I call
CGContextSetLineWidth(context, 2), the line is rendered 1 pixel thick in the Retina simulator. 1 pixel! It should be two pixels on a Retina display.
CGContextSetLineWidth(context, 2 * scale) produces a line that is two pixels wide on a Retina screen, but I'm expecting four pixels.
CGContextSetLineWidth(context, 1) produces a 1-pixel-wide line that is partly transparent. I understand that the stroke straddles the path, which is why I prefer to talk in terms of 2-pixel-wide strokes with the path on a pixel boundary.
I need to understand why the rendered line width is being divided in half.

My fault. I solve 99% of my own bugs just after posting about them publicly.
The drawing code calls CGContextClip after constructing and copying a path. After that, a fill may be applied, gradient or otherwise, and then the line is drawn, so everything is nice and tidy. I was focusing on the math and the specific drawing code and didn't notice the clipping line, but clipping to the path effectively halves the stroke width: the half of the stroke that falls outside the path is clipped away. Normally I catch logic bugs like this immediately, but because it was posted to SO, it's appropriate that the answer is here too.
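For illustration, here is a hedged sketch of the pattern (the path variable and widths are placeholders), followed by one way to keep the full stroke by dropping the clip before stroking:
// The pattern that halves the stroke:
CGContextAddPath(context, path);
CGContextClip(context);            // clip to the path (consumes the path)
// ... fill / gradient here ...
CGContextAddPath(context, path);
CGContextSetLineWidth(context, 2);
CGContextStrokePath(context);      // the outer half of the stroke is clipped away

// One fix: restore the graphics state so the clip no longer applies when stroking.
CGContextSaveGState(context);
CGContextAddPath(context, path);
CGContextClip(context);
// ... fill / gradient here ...
CGContextRestoreGState(context);   // clip removed
CGContextAddPath(context, path);
CGContextSetLineWidth(context, 2);
CGContextStrokePath(context);      // full 2-unit stroke, straddling the path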

Related

Parallax effect jitter in libGDX

Basically I'm using this ParallaxCamera class to create the effect in my game, but upon movement, the layers "wiggle". This is especially noticeable when the camera is moving slowly.
I have fixed the timestep and use interpolated smoothing. I use scaled up pixel art. Camera centered on player, updated after movement. Disabling the effect makes moving the camera smooth.
What I guess the issues might be:
the layers move at different paces, which means they move at different times
rounding to display pixels makes the layers assume slightly different positions each frame when moving the camera
Thanks for any help.
For low-resolution pixel art, this is the strategy I've used. I draw to a small FrameBuffer at 1:1 resolution and then draw that to the screen. That should take care of jittering.
If your Stage is also at the same resolution, then you have to use a bit of a hack to get input processed properly. The one I've used is to use a StretchViewport, but I manually calculate the world width and height so the world isn't actually stretched, so I'm basically doing the same calculation that ExtendViewport does behind the scenes. You also have to round the width and height to integers. You should do this in resize and apply the width and height using viewport.setWorldWidth() and setWorldHeight(). In this case it doesn't matter what world size you give to the constructor, since it will be changed in update().
When you call update on the viewport in resize, you need to do it within the context of the FrameBuffer you are drawing to. Otherwise it will mess up the screen's frame buffer dimensions.
public void resize(int width, int height) {
    // Keep the world height fixed and derive the world width from the screen's
    // aspect ratio, rounded to a whole number of world units.
    int worldWidth = Math.round((float) WORLD_HEIGHT / (float) height * (float) width);
    viewport.setWorldWidth(worldWidth);
    viewport.setWorldHeight(WORLD_HEIGHT);
    // Update the viewport while the FrameBuffer is bound so it doesn't clobber
    // the screen's framebuffer dimensions.
    frameBuffer.begin();
    viewport.update(width, height, true); // the actual screen dimensions
    frameBuffer.end();
}
You can look up examples of using FrameBuffer in LibGDX. You should do all your game drawing in between frameBuffer.begin() and end(), and then draw the frameBuffer's color buffer to the screen like this:
stage.act();
frameBuffer.begin();
// Draw the game at the FrameBuffer's low resolution
stage.draw();
frameBuffer.end();
// Draw the FrameBuffer's color texture to the full screen with an identity
// projection; the negative height flips the texture right side up.
spriteBatch.setProjectionMatrix(spriteBatch.getProjectionMatrix().idt());
spriteBatch.begin();
spriteBatch.draw(frameBuffer.getColorBufferTexture(), -1, 1, 2, -2);
spriteBatch.end();
In my case, I do a more complicated calculation of world width and world height such that they are a whole number factor of the actual screen dimensions. This prevents the big pixels from being different sizes on the screen, which might look bad. Alternatively, you can change the filtering of the FrameBuffer's texture to linear and use an upscaling shader when drawing it.

How to force a CALayer to redraw at a higher resolution?

I have two instances of a CALayer subclass.
The only difference between them is this line:
[self setTransform:CATransform3DMakeScale(2, 2, 2)];
What else do I need so that the large layer looks good at 2x scale?
PS: (to avoid any confusion) The layers also include a few control buttons, shadows, and rounded corners to mimic the look of windows in a windowing system, but they are not NSWindow instances.
The short answer is, don't use transforms. Transforms scale the layer by magnifying it, without re-rendering.
You could get a very similar effect by using a CAShapeLayer and animating changes to the path. That would give you sharp rendering, however, because path animation does re-render the pixels.
I say "similar" effect because CAShapeLayers use a lineWidth property for the whole layer. You can animate the line width between values, and use fractional values, but you'll have to do some fine-tuning to get the line thickness to animate up and down in proportion to the size of the shape. Another consideration is that the graphics system uses anti-aliasing to draw fractional width paths, so when the line width is not an integer value they will look slightly soft. You could turn off antialiasing, but then they would look really jaggy.

OpenGL ES blend func so color always shows against background

I am using OpenGL ES 1.1 to draw lines in my iPad app. I want to make sure that the drawn lines are always visible on the screen regardless of the background colors, and without allowing the user to choose a color. Is there a blend function that will create this effect? So the color of the line drawn will change based on the colors already drawn beneath it and therefore always be visible.
Sadly the final blending of fragments into the framebuffer is still fixed function. Furthermore glLogicOp isn't implemented in ES so you can't do something cheap like XOR drawing.
I think the net effect is that:
you want the output colour to be a custom function of the colour already in the frame buffer;
but the frame buffer can't be read in a shader (it'd break the pipeline and lead towards concurrency issues).
You're therefore going to have to implement a ping pong pipeline.
You have two off-screen buffers. One represents what you output last frame, the other represents what you output the frame before that.
To generate a new frame you render using the one that represents the frame before as an input. Because it's an input you can sample it wherever you want and make whatever calculations you like on it. You render to the other buffer that you have (ie, the even older one) because you no longer care about its contents.
Then you copy all that to the screen and swap the two over, meaning that what you just drew is still in a texture to refer to as what you drew last frame. What you just referred to becomes your next drawing target because it's something you conveniently already have lying around.
So you'll be immediately interested in rendering to a texture. You'll also need to decide what function you want to use to pick a suitable 'different' colour to the existing background. Maybe just inverting it will do?
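A very rough sketch of that structure, in ES 2.0 terms (FBO, texture, and shader setup are omitted, and renderSceneUsing() / drawFullScreenQuad() are hypothetical helpers standing in for your own drawing code):
GLuint fbo[2];         // two off-screen framebuffers
GLuint tex[2];         // their colour attachments
int src = 0, dst = 1;  // "previous frame" and "current target"

void drawFrame(void) {
    // Render the new frame into the destination buffer, sampling the previous
    // frame's texture in the shader to pick a contrasting line colour.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
    renderSceneUsing(tex[src]);           // hypothetical: your scene drawing

    // Then present the result by drawing it into the on-screen framebuffer
    // (on iOS, bind your view's framebuffer here rather than 0).
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    drawFullScreenQuad(tex[dst]);         // hypothetical: textured quad blit

    // Swap roles: what was just drawn becomes "last frame" next time around.
    int tmp = src; src = dst; dst = tmp;
}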
I think this could work:
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
Draw your lines with a white color, and then the result will be rendered as
[1,1,1,1] * ( 1 - [DstR, DstG, DstB, DstA]) + ([DstR, DstG, DstB, DstA] * 0)
This should render a black pixel where the background is white, a white pixel where the background is black, a yellow pixel where the background is blue, etc.
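A minimal ES 1.1 sketch of that (lineVertices and lineVertexCount are assumed to come from your own line data):
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);               // stroke in white
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, lineVertices);   // hypothetical vertex data
glDrawArrays(GL_LINES, 0, lineVertexCount);      // hypothetical vertex count
glDisableClientState(GL_VERTEX_ARRAY);
glDisable(GL_BLEND);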

XCode Coordinates for iPad Retina Displays

I just noticed an interesting thing while attempting to update my app for the new iPad Retina display: every coordinate in Interface Builder is still based on the original 1024x768 resolution.
What I mean is that if I have a 2048x1536 image and want it to fit the entire screen, I need to set its size to 1024x768, not 2048x1536.
I am just curious: is this intentional? Can I switch the coordinate system in Interface Builder to be specific to Retina? It is a little annoying, since some of my graphics are not exactly 2x their originals in width or height, and I can't seem to set half-point coordinates such as 1.5; it can only be 1 or 2 inside Interface Builder.
Should I just do my interface design in code at this point and forget interface builder? Keep my graphics exactly 2x in both directions? Or just live with it?
The interface on iOS is based on points, not pixels. The images HAVE to be 2x the size of the originals.
Points Versus Pixels

In iOS there is a distinction between the coordinates you specify in your drawing code and the pixels of the underlying device. When using native drawing technologies such as Quartz, UIKit, and Core Animation, you specify coordinate values using a logical coordinate space, which measures distances in points. This logical coordinate system is decoupled from the device coordinate space used by the system frameworks to manage the pixels on the screen. The system automatically maps points in the logical coordinate space to pixels in the device coordinate space, but this mapping is not always one-to-one. This behavior leads to an important fact that you should always remember:

One point does not necessarily correspond to one pixel on the screen.

The purpose of using points (and the logical coordinate system) is to provide a consistent size of output that is device independent. The actual size of a point is irrelevant. The goal of points is to provide a relatively consistent scale that you can use in your code to specify the size and position of views and rendered content. How points are actually mapped to pixels is a detail that is handled by the system frameworks. For example, on a device with a high-resolution screen, a line that is one point wide may actually result in a line that is two pixels wide on the screen. The result is that if you draw the same content on two similar devices, with only one of them having a high-resolution screen, the content appears to be about the same size on both devices.

In your own drawing code, you use points most of the time, but there are times when you might need to know how points are mapped to pixels. For example, on a high-resolution screen, you might want to use the extra pixels to provide extra detail in your content, or you might simply want to adjust the position or size of content in subtle ways.

In iOS 4 and later, the UIScreen, UIView, UIImage, and CALayer classes expose a scale factor that tells you the relationship between points and pixels for that particular object. Before iOS 4, this scale factor was assumed to be 1.0, but in iOS 4 and later it may be either 1.0 or 2.0, depending on the resolution of the underlying device. In the future, other scale factors may also be possible.
From http://developer.apple.com/library/ios/#documentation/2DDrawing/Conceptual/DrawingPrintingiOS/GraphicsDrawingOverview/GraphicsDrawingOverview.html
This is intentional on Apple's part, to make your code relatively independent of the actual screen resolution when positioning controls and text. However, as you've noted, it can make displaying graphics at max resolution for the device a bit more complicated.
For iPhone, the screen is always 480 x 320 points. For iPad, it's 1024 x 768. If your graphics are properly scaled for the device, the impact is not difficult to deal with in code. I'm not a graphic designer, and it's proven a bit challenging to me to have to provide multiple sets of icons, launch images, etc. to account for hi-res.
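For instance, a quick way to see both spaces at runtime (the values in the comments reflect a Retina iPad and are illustrative):
CGRect bounds = [[UIScreen mainScreen] bounds];   // e.g. 1024 x 768 points on iPad
CGFloat scale = [[UIScreen mainScreen] scale];    // 2.0 on a Retina iPad
CGSize pixelSize = CGSizeMake(bounds.size.width * scale,
                              bounds.size.height * scale);  // e.g. 2048 x 1536 pixels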
Apple has naming standards for some image types that minimize the impact on your code:
https://developer.apple.com/library/ios/#DOCUMENTATION/UserExperience/Conceptual/MobileHIG/IconsImages/IconsImages.html
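As a small hedged example of that convention (file names are illustrative):
// Loads "MyIcon.png" on a non-Retina device and "MyIcon@2x.png" on Retina.
UIImage *icon = [UIImage imageNamed:@"MyIcon"];
// icon.size is reported in points; icon.scale is 2.0 when the @2x file was used.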
That doesn't help you when you're dealing with custom graphics inline, however.

How To Draw More Precise Lines using Core Graphics and CALayer

Hello, I am having a hard time making this UI element look the way I want (see screenshot). Notice how the line width and darkness in the image on the right look inconsistent compared to the image on the left (which happens to be a screen grab from Safari), where the border width is more consistent. How does Apple make their lines so perfect?
I'm using a CALayer and the Core Graphics API to draw the image on the right. Is it possible to draw such perfect lines with the standard apis?
The problem with drawing a 1-pixel path is that Quartz draws paths on an exact point grid, starting from {0,0}. This means that if you stroke a vertical path starting at {10,10} with a 1-point width, half of that line will render in the pixel to the left of the coordinate and half in the pixel to the right, causing a blurring effect.
You should therefore shift your drawing by {0.5,0.5} if you want lines to draw on exact pixels.
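A minimal sketch of the difference, assuming a 1x (non-Retina) context inside drawRect: so one point maps to one pixel; the coordinates are illustrative:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, 1.0);
CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor);

// Blurry: the 1-unit stroke straddles the pixel boundary at x = 10.
CGContextMoveToPoint(context, 10.0, 0.0);
CGContextAddLineToPoint(context, 10.0, 100.0);
CGContextStrokePath(context);

// Crisp: shifting by 0.5 centres the stroke on a single pixel column.
CGContextMoveToPoint(context, 20.5, 0.0);
CGContextAddLineToPoint(context, 20.5, 100.0);
CGContextStrokePath(context);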
You can definitely draw what you want with Quartz.
Apple uses images for the tab elements.