How To Draw More Precise Lines using Core Graphics and CALayer - objective-c

Hello, I am having a hard time making this UI element look the way I want (see screenshot). Notice how, in the image on the right, the line width and darkness look inconsistent compared to the image on the left (which happens to be a screen grab from Safari), where the border width is consistent. How does Apple make their lines so perfect?
I'm using a CALayer and the Core Graphics API to draw the image on the right. Is it possible to draw such perfect lines with the standard APIs?

The problem with drawing a 1-pixel path is that Quartz centers strokes on the path, and paths sit on an exact point grid starting from {0,0}. This means that if you stroke a vertical path starting at {10,10} with a 1-point width, half of that line will render in the pixel column to the left of the coordinate and half in the column to the right, causing a blurred, grey-looking line.
You should therefore shift your drawing by {0.5,0.5} if you want lines to draw on exact pixels.
You can definitely draw what you want with Quartz.
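For example, a minimal sketch inside drawRect: (the colors and coordinates are arbitrary):

// The 0.5 offsets center the 1-point stroke on a pixel, so it fills
// one pixel column instead of straddling two.
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetStrokeColorWithColor(context, [UIColor blackColor].CGColor);
CGContextSetLineWidth(context, 1.0);
CGContextMoveToPoint(context, 10.5, 10.5);
CGContextAddLineToPoint(context, 10.5, 100.5);
CGContextStrokePath(context);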

Apple uses images for the tab elements.

Related

OpenGL ES blend func so color always shows against background

I am using OpenGL ES 1.1 to draw lines in my iPad app. I want to make sure that the drawn lines are always visible on the screen, regardless of the background colors and without allowing the user to choose a line color. Is there a blend function that creates this effect, so that the color of the drawn line changes based on the colors already beneath it and is therefore always visible?
Sadly, the final blending of fragments into the framebuffer is still fixed-function. Furthermore, glLogicOp isn't implemented in ES, so you can't do something cheap like XOR drawing.
I think the net effect is that:
you want the output colour to be a custom function of the colour already in the frame buffer;
but the frame buffer can't be read in a shader (it'd break the pipeline and lead towards concurrency issues).
You're therefore going to have to implement a ping pong pipeline.
You have two off-screen buffers. One represents what you output last frame, the other represents what you output the frame before that.
To generate a new frame, you render using the buffer that represents the previous frame as an input. Because it's an input, you can sample it wherever you want and make whatever calculations you like on it. You render to the other buffer that you have (i.e., the even older one) because you no longer care about its contents.
Then you copy all that to the screen and swap the two over, meaning that what you just drew is still in a texture to refer to as what you drew last frame. What you just referred to becomes your next drawing target because it's something you conveniently already have lying around.
So you'll be immediately interested in rendering to a texture. You'll also need to decide what function you want to use to pick a suitable 'different' colour to the existing background. Maybe just inverting it will do?
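Here's a rough sketch of the two-buffer setup using the GL_OES_framebuffer_object extension that ES 1.1 exposes on iOS ('width' and 'height' are assumed to match your drawable; error checking omitted):

GLuint fbo[2], tex[2];
glGenFramebuffersOES(2, fbo);
glGenTextures(2, tex);
for (int i = 0; i < 2; i++) {
    glBindTexture(GL_TEXTURE_2D, tex[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    // Attach the texture as the colour buffer of its framebuffer.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo[i]);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, tex[i], 0);
}
// Each frame: bind fbo[dst], sample tex[src] while drawing, then swap.
int src = 0, dst = 1;
// ... render to fbo[dst] using tex[src] as input, copy to screen ...
int tmp = src; src = dst; dst = tmp;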
I think this could work:
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
Draw your lines with a white color, and then the result will be rendered as
[1,1,1,1] * ( 1 - [DstR, DstG, DstB, DstA]) + ([DstR, DstG, DstB, DstA] * 0)
This should render a black pixel where the background is white, a white pixel where the background is black, a yellow pixel where the background is blue, etc.
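Putting it together, a minimal ES 1.1 sketch (the endpoint values are placeholders):

glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);        // draw in white...
GLfloat line[] = { x0, y0, x1, y1 };      // ...so the destination is inverted
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, line);
glDrawArrays(GL_LINES, 0, 2);

One caveat: on a mid-grey background the inverse is also mid-grey, so a line can still vanish there.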

Why is line width in CoreGraphics on retina display rendered half width?

My process looks like this:
Define a rectangle I want to draw in, using point dimensions.
Define CGFloat scale = [[UIScreen mainScreen] scale].
Multiply the rectangle's size by the scale.
Create an image context of the scaled size using CGBitmapContextCreate.
Draw within the image context.
Call CGBitmapContextCreateImage.
Call UIImage imageWithCGImage:scale:orientation: with the appropriate scale.
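In code, my process looks roughly like this (simplified; the rect and bitmap parameters here are placeholder values):

CGFloat scale = [[UIScreen mainScreen] scale];
CGRect rect = CGRectMake(0, 0, 100, 100);          // point dimensions
size_t pixelWide = (size_t)(rect.size.width * scale);
size_t pixelHigh = (size_t)(rect.size.height * scale);

CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, pixelWide, pixelHigh, 8, 0,
                                             space,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);
// Scale user space so point-based drawing covers the larger pixel grid.
CGContextScaleCTM(context, scale, scale);

// ... drawing code here ...

CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage
                                     scale:scale
                               orientation:UIImageOrientationUp];
CGImageRelease(cgImage);
CGContextRelease(context);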
I had thought this always resulted in perfect images on both retina and older screens, but I haven't been paying close attention to line contrast/thickness. Generally my strokes have high contrast against the fill, so I didn't pay attention until now, when the contrast between a line and its fill is low.
Perhaps I'm misunderstanding user space, but I thought it was simply a direct conversion through whatever scaling and transforms are applied, and in my particular case there are no scales or transforms except the retina doubling.
Trying to render a 2-pixel line rather than a 1-pixel one is easier to explain: when I call
CGContextSetLineWidth(context, 2), the line is rendered 1 pixel thick on the retina simulator. 1 pixel! It should be two pixels on a retina display.
CGContextSetLineWidth(context, 2 * scale) produces a line that is two pixels wide on a retina screen, but I'm expecting it to be 4 pixels.
CGContextSetLineWidth(context, 1) produces a 1-pixel-wide line that is partly transparent. I understand about the stroke straddling the path, so I prefer talking in terms of 2-pixel-wide strokes with paths on pixel boundaries.
I need to understand why the rendered line width is being divided in half.
My fault. I solve 99% of my own bugs on my own, just after I post publicly about them.
The drawing code includes a CGContextClip after constructing and copying a path; after that, a fill (gradient or otherwise) may be applied, then the line is drawn, so everything stays nice and tidy. I was focusing on the math and the specific drawing code and did not notice the clipping line, but since the stroke straddles the path, clipping to the path effectively halves the stroke width. Normally I catch logic bugs like this immediately, but because the question was posted to SO, it's appropriate the answer is here too.
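For anyone who hits the same thing, the shape of the bug and the fix look roughly like this (a sketch; 'path' and 'context' are whatever you already have):

CGContextAddPath(context, path);
CGContextSaveGState(context);
CGContextClip(context);                  // clip consumes the current path
// ... fill or gradient inside the clip ...
CGContextRestoreGState(context);         // restore FIRST, otherwise the
                                         // outer half of the stroke is
                                         // clipped away and the line
                                         // renders at half width
CGContextAddPath(context, path);         // the path must be re-added
CGContextSetLineWidth(context, 2);
CGContextStrokePath(context);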

Create an irregular shaped frame

I've created a canvas within which I display an image that is clipped when it goes over the edges. I can do this fine with a square frame; however, the frame I want to use is the one below. Is there any way I can clip the image inside the frame without having to add a non-transparent square border around the image, i.e. just using the black line that I've already drawn? (on iPad)
You'll need to use Core Graphics and Quartz to handle this sort of clipping/graphics manipulation.
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html#//apple_ref/doc/uid/TP30001066
If you're using UIBezierPath, you may be able to achieve the clipping you're after using the following process:
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_paths/dq_paths.html#//apple_ref/doc/uid/TP30001066-CH211-TPXREF101
Convert your UIBezierPath to a CGPath
Get your image into a CGContext
Add your CGPath to the context via CGContextAddPath
Clip your context using CGContextClip
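A minimal sketch of those four steps (assuming 'maskPath' is your UIBezierPath and 'image' is the UIImage being clipped):

UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextAddPath(context, maskPath.CGPath);   // UIBezierPath -> CGPath
CGContextClip(context);                       // later drawing is clipped
[image drawAtPoint:CGPointZero];
UIImage *clipped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();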
Alternatively, if you don't want to be messing with paths (and depending on whether this technique is suitable for your situation, your description of the issue makes it hard to tell), it might be worth using image masking to achieve the effect you're after. See the first link and look under "Bitmap Images and Image Masks".

App graphic making (transparent and no extra spaces)

I am a coder but not a graphic maker. I can decently produce graphics that meet the quality standards visually although I cannot produce graphics that will technically "work." This is what I mean:
I am using CGRectIntersectsRect for colliding images. My image has SOME extra space, which I have made completely transparent in Adobe Photoshop. Even though this extra transparent space is not visible, it is still PART of the image, so when the other image touches it, CGRectIntersectsRect reports a collision and my code executes even though it looks like nothing was hit. I only want my code to execute when the other image hits the actual COLORED part of the image. Here are two things that could help me through that; each is followed by a question.
Learn how to make NO EXTRA SPACE on an image in photoshop. How could I do this, tutorials?
CGRectIntersectsRect only called when touching a color part of an image. A way to do this?
Thank you guys!
Regarding your question #1, it depends. All images are rectangular, all of them. So, if your sprite is rectangular, you can crop the image in Photoshop down to just that rectangular area. But if you want to handle, say, a circular ball, there is no such thing as "removing the extra space": your circle will always be stored in a rectangular image, with transparent space in the corners.
Learn how to make NO EXTRA SPACE on an image in photoshop. How could I do this, tutorials?
You can manually select an area using the Rectangular Marquee Tool and Image > Crop or automatically trim the image based on an edge pixel color using Image > Trim.
CGRectIntersectsRect only called when touching a color part of an image. A way to do this?
You can use pixel-perfect collisions or create better bounding shapes for your game objects. For example, instead of using pixel-perfect collision for a spaceship like this one, you could use a triangle for the wings, a rectangle for the body, and a triangle for the head.
Pixel-perfect collision
One way you could implement it would be to:
Have a blank image in memory.
Draw the visible pixels from one image in blue (#0000ff).
Draw the visible pixels from the other image in red (#ff0000).
If there are any purple pixels in the result (#ff00ff), there's an intersection.
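Here's a sketch of that idea, using two alpha-only bitmaps rather than literal red/blue tinting; the test is equivalent (a pixel covered by both images means a hit). All the names and frames below are assumptions:

static BOOL ImagesIntersect(UIImage *imageA, CGRect frameA,
                            UIImage *imageB, CGRect frameB) {
    CGRect overlap = CGRectIntersection(frameA, frameB);
    if (CGRectIsEmpty(overlap)) return NO;

    size_t w = (size_t)ceil(overlap.size.width);
    size_t h = (size_t)ceil(overlap.size.height);
    uint8_t *alphaA = calloc(w * h, 1);
    uint8_t *alphaB = calloc(w * h, 1);

    // Alpha-only contexts: one byte of coverage per pixel.
    CGContextRef ctxA = CGBitmapContextCreate(alphaA, w, h, 8, w, NULL,
                                              (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextRef ctxB = CGBitmapContextCreate(alphaB, w, h, 8, w, NULL,
                                              (CGBitmapInfo)kCGImageAlphaOnly);
    // Shift each context so the shared overlap rect lands at the origin.
    CGContextTranslateCTM(ctxA, -overlap.origin.x, -overlap.origin.y);
    CGContextTranslateCTM(ctxB, -overlap.origin.x, -overlap.origin.y);
    CGContextDrawImage(ctxA, frameA, imageA.CGImage);
    CGContextDrawImage(ctxB, frameB, imageB.CGImage);

    BOOL hit = NO;
    for (size_t i = 0; i < w * h && !hit; i++)
        hit = (alphaA[i] > 0 && alphaB[i] > 0);   // covered by both?

    CGContextRelease(ctxA);
    CGContextRelease(ctxB);
    free(alphaA);
    free(alphaB);
    return hit;
}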
Alternative collision detection solution
If your game is physics-based, then you can use a physics engine like Box2D. You can use circles, rectangles, and polygons to represent all of your game objects and it'll give you accurate results without unnecessary overhead.
For collision detection for non-rectangular shapes, you should look into one of the many game and/or physics libraries available for iOS. Cocos2d coupled with Box2d or chipmunk are popular choices.
If you want to do it yourself, you'll need to start with something like a custom CGPath tracing the actual shape of each object, then use a function like CGPathContainsPoint (that's from memory, it may be wrong). But it is not a simple job. Angry Birds uses Box2D, AFAIK.
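CGPathContainsPoint is indeed real. A minimal sketch (the outline coordinates and 'touchPoint' are placeholders):

CGMutablePathRef outline = CGPathCreateMutable();
CGPathMoveToPoint(outline, NULL, 0, 40);      // trace the actual shape...
CGPathAddLineToPoint(outline, NULL, 50, 0);
CGPathAddLineToPoint(outline, NULL, 100, 40);
CGPathCloseSubpath(outline);
// ...then hit-test a point against it.
BOOL inside = CGPathContainsPoint(outline, NULL, touchPoint, NO);
CGPathRelease(outline);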

regarding rotating the map in iOS

I am facing this problem while trying to rotate the map in my iPhone app.
The view gets clipped while the rotation happens. I want to avoid the clipping. Any tips?
Here's the code:
viewToRotate.layer.transform = CATransform3DMakeRotation(0.8, 0., 0., 1.);
Do you need your map rotated in 3D? If not (which is what I think), just use CGAffineTransformMakeRotation (be careful: it takes the angle in radians).
Also, if you don't want your map to be clipped, you need to make your map bigger, like in the image below (open in new tab to see it bigger)
http://img593.imageshack.us/img593/4498/calculatemapboundswhenr.png
First, you need to calculate the diagonal of the rectangle (your visible map) as instructed in the image above; half of it is what I call the "radius", because it is the radius of the smallest circle that encloses your rectangle.
Second, using the radius, you need to calculate the (smallest) square that will allow your map to be seen without clipping. This square will be used to set the bounds of your map (caution: NOT the frame; Apple specifies that, when a view has a rotation transform, you should not use frame, just bounds and/or center).
Make sure this square is centered on the center of your visible map rectangle (i.e. the square should have X pixels above AND below the small rectangle, and Y pixels left AND right of it).
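In code, the sizing works out to roughly this (a sketch; 'viewToRotate' is from the question, the rest is assumed):

CGSize visible = viewToRotate.superview.bounds.size;
// Half the diagonal is the radius of the enclosing circle, so the
// smallest safe square has sides equal to the full diagonal.
CGFloat diagonal = hypot(visible.width, visible.height);
viewToRotate.bounds = CGRectMake(0, 0, diagonal, diagonal);
viewToRotate.center = CGPointMake(visible.width / 2.0, visible.height / 2.0);
viewToRotate.transform = CGAffineTransformMakeRotation(0.8);  // radians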
Hope it helps!
Did you ever figure out the solution?
The only way I could do it was to make the MapView in Interface Builder much bigger than the actual size of the screen area it's supposed to cover, then center the MapView so that its center was in the center of the narrower viewable area.
Rotation seemed to work similarly to how it works in the built-in Maps app.
My guess is that you have to do this so that the image tiles streaming in from Google cover a wide enough area to "fill in the blanks", so to speak, even when they're not all visible.
If you apply a little math, you could probably programmatically size and position the MapView so that you avoid clipping but don't request more tiles than absolutely necessary.