I have a custom status item view where I draw a string using NSString's drawAtPoint:withAttributes:. When comparing it to the system clock with the same text, it seems that my text is missing subpixel smoothing and looks sort of grainy. I found advice suggesting that the drawing point be shifted from (x, y) to (x+0.5, y+0.5), but it did not help. The default view with setTitle: produces the same result.
That's how it looks:
It seems the system clock has a light gray border below the text, but I could not imitate it by drawing the string a second time in light gray.
I don't see any "font smoothing" in the system's rendering, or in any menu titles on my machine. If it was turned on, you'd see red and blue tinted pixels at the edges of the characters, instead of just gray. The difference is quite obvious when zoomed in.
You may want to experiment with turning subpixel positioning and quantization on or off, using
CGContextSetShouldSubpixelPositionFonts, CGContextSetShouldSubpixelQuantizeFonts, etc.
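For example, a quick way to experiment (just a sketch, assuming you are inside your view's drawRect: and reaching the CGContext via graphicsPort):

CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
// Toggle these and compare the results side by side with the system clock.
CGContextSetShouldSubpixelPositionFonts(ctx, false);
CGContextSetShouldSubpixelQuantizeFonts(ctx, true);
// ...then draw your string as before.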
Otherwise, the main difference really is that faint white shadow in the system's rendering. Try setting the context's shadow to an offset of {0,1} (or maybe {0,-1} if your context is flipped?), blur of 0 or 1, and a color of 100% white at 30% alpha -- that looks pretty close to me.
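A rough sketch of that shadow setup (myTitle, titlePoint, and titleAttributes are placeholders for whatever you already pass to drawAtPoint:withAttributes:):

NSShadow *textShadow = [[NSShadow alloc] init];
[textShadow setShadowOffset:NSMakeSize(0, 1)];   // try (0, -1) if your context is flipped
[textShadow setShadowBlurRadius:1.0];
[textShadow setShadowColor:[[NSColor whiteColor] colorWithAlphaComponent:0.3]];
[textShadow set];
[myTitle drawAtPoint:titlePoint withAttributes:titleAttributes];  // your existing call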
Try:
CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSetShouldSmoothFonts(ctx, true);
I was searching for that for a long time, too. I got it from a WWDC video (I think it was "Best Practices for Cocoa Animation").
I guess you use
UIGraphicsBeginImageContext(myFrameSize);
for context initialization. On Retina displays this leads to blurred rendering of fonts. Use
UIGraphicsBeginImageContextWithOptions(myFrameSize, NO, 0.0f);
instead to fix it.
Also, if you use a layer, you can set yourself as the layer's delegate and do your drawing in:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    NSGraphicsContext *nsGraphicsContext;
    nsGraphicsContext = [NSGraphicsContext graphicsContextWithGraphicsPort:ctx
                                                                    flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:nsGraphicsContext];
    // drawing here...
    [NSGraphicsContext restoreGraphicsState];
}
It will also work.
- (void)drawRect:(CGRect)rect
{
    float sliceSize = rect.size.width / imagesShownAtOnce;

    // Apply our clipping region and fill it with black
    [clippingRegion addClip];
    [clippingRegion fill];

    // Draw the 3 images (+1 for in-between), with our scroll amount.
    CGPoint loc;
    for (int i = 0; i < imagesShownAtOnce + 1; i++) {
        loc = CGPointMake(rect.origin.x + (i * sliceSize) - imageScroll, rect.origin.y);
        [[buttonImages objectAtIndex:i] drawAtPoint:loc];
    }

    // Draw the text region background
    [[UIColor blackColor] setFill];
    [textRegion fillWithBlendMode:kCGBlendModeNormal alpha:0.4f];

    // Draw the actual text.
    CGRect textRectangle = CGRectMake(rect.origin.x + 16,
                                      rect.origin.y + rect.size.height * 4 / 5.6,
                                      rect.size.width / 1.5,
                                      rect.size.height / 3);
    [[UIColor whiteColor] setFill];
    [buttonText drawInRect:textRectangle
                  withFont:[UIFont fontWithName:@"Avenir-HeavyOblique" size:22]];
}
clippingRegion and textRegion are UIBezierPaths that give me the rounded rectangles I want (the first is a clipping region, the second is an overlay for my text).
The middle section draws three images and lets them scroll along. I'm updating the scroll every two refreshes from a CADisplayLink, which invalidates the drawn region by calling [self setNeedsDisplay] and also increments my imageScroll variable.
Now that the background information is out of the way, here is my issue:
It runs, and even runs smoothly, but it uses an enormous amount of CPU time (80%+)! How do I push this work off to the GPU on the phone instead? Someone told me about CALayers, but I've never dealt with them before.
Draw each component of your drawing once into something (a view or a layer) and let it hold the cached drawing. Then you just move or transform each component, and, exactly as you say, it's all done by the GPU.
You could do this with individual views or with individual layers, but that doesn't really matter (a view is a layer, under the hood). The point is that there is no need to be constantly redrawing from scratch when all you really want is to move the same persistent pieces around.
Learning about CALayer would be a good idea, as it is in fact the basis of all drawing on iOS. What could be more important to know about than that?
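A rough sketch of that idea (names like sliceImage and scrollDelta are placeholders, not from your code, and QuartzCore needs to be imported): draw each slice into a layer once, then only move the layers from your CADisplayLink callback.

// One layer per image slice; Core Animation caches its contents.
CALayer *slice = [CALayer layer];
slice.contents = (id)sliceImage.CGImage;
slice.frame = CGRectMake(0, 0, sliceSize, self.bounds.size.height);  // position each slice at its own x offset
[self.layer addSublayer:slice];

// Later, from the display link: move the layer instead of redrawing.
[CATransaction begin];
[CATransaction setDisableActions:YES];   // suppress the implicit animation
slice.position = CGPointMake(slice.position.x - scrollDelta, slice.position.y);
[CATransaction commit];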
In a project I'm working on, I have 3 images: top, middle, and bottom. Top and bottom are fixed height, and middle should be repeated in between the two. (The window size will be changing.) They all are tinted with a color from the user preferences, then need to have their alpha set using a value from the preferences.
I can do pretty much everything. The part I get stuck at is drawing the middle. I decided using [NSColor colorWithPatternImage:] would be the easiest approach. There's a lot of code that makes the actual images and colors, so just assume they exist and are not nil.
int problem; // just to help explain things
float alpha;
NSImage *middleTinted;

NSRect drawRect = [self bounds];
drawRect.size.height = self.bounds.size.height - topTinted.size.height - bottomTinted.size.height;
drawRect.origin.y = topTinted.size.height;

NSColor *colorOne = [NSColor colorWithPatternImage:middleTinted];
NSColor *colorTwo = [colorOne colorWithAlphaComponent:alpha];

if (problem == 1)
{
    [colorOne set];
}
else if (problem == 2)
{
    [colorTwo set];
}

[NSBezierPath fillRect:drawRect];
Assuming problem == 1, it draws the correct image, in the correct location and with the correct size, but no alpha. (Obviously, since I didn't specify one.)
When problem == 2, I'd expect it to do the same thing, but have the correct alpha value. Instead of this, I get a black box.
Is there a solution that will repeat the image with the correct alpha? I figure I could just draw the image manually in a loop, but I'd prefer a more reasonable solution if one exists.
I expect the problem is that pattern colors don't support -colorWithAlphaComponent:.
AppKit has a function called NSDrawThreePartImage that does the work of drawing end caps and a tiled image in between. It also has an alphaFraction parameter that should meet your needs.
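A sketch of how that call might look for your case (check the docs for which cap ends up at the top given your view's flippedness; alpha is the value from your preferences):

NSDrawThreePartImage([self bounds],
                     topTinted,             // one end cap
                     middleTinted,          // tiled fill in between
                     bottomTinted,          // the other end cap
                     YES,                   // vertical
                     NSCompositeSourceOver,
                     alpha,                 // alphaFraction
                     NO);                   // flipped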
If that doesn't work for you, then you might get the pattern color approach to work by re-rendering your middleTinted image into a new NSImage, using the desired alpha value. (See NSImage's draw... methods.)
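That alternative might look something like this (a sketch: it bakes the alpha into a copy of middleTinted and then uses the copy as the pattern):

NSImage *fadedMiddle = [[NSImage alloc] initWithSize:[middleTinted size]];
[fadedMiddle lockFocus];
[middleTinted drawAtPoint:NSZeroPoint
                 fromRect:NSZeroRect          // the whole image
                operation:NSCompositeSourceOver
                 fraction:alpha];             // alpha from the preferences
[fadedMiddle unlockFocus];

[[NSColor colorWithPatternImage:fadedMiddle] set];
[NSBezierPath fillRect:drawRect];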
I'm using NSImage's -lockFocusFlipped: method to do some drawing into an image. My code looks like this:
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(256, 256)];
[image lockFocusFlipped:YES];
NSShadow *shadow = [[NSShadow alloc] init];
[shadow setShadowColor:[NSColor blackColor]];
[shadow setShadowBlurRadius:6.0];
[shadow setShadowOffset:NSMakeSize(0, 3)];
[shadow set];
NSRect shapeRect = NSMakeRect(0, 0, 256, 100);
[[NSColor redColor] set];
NSRectFill(shapeRect);
[image unlockFocus];
This code works to a certain point. I can confirm that the context is indeed flipped because [[NSGraphicsContext currentContext] isFlipped] returns YES, and also because shapeRect is drawn at the right position (using the top left corner as the origin). That said, the NSShadow does not seem to respect the flipped status of the context. Setting the shadow offset to (0, 3) should move the shadow down when the context is flipped, but it actually moves it up (which is what would happen in a standard non-flipped context).
This problem seems specific to -lockFocusFlipped, because when I'm drawing using this same code into a CALayer with a flipped coordinate system, the shadow is drawn just fine (respecting the flip). Documentation on -lockFocusFlipped also seems to be quite vague. This is all it says in the NSImage class documentation:
Prepares the image to receive drawing commands using the specified flipped state.
And I also found this note in the Snow Leopard AppKit Release Notes:
There are cases, for example drawing directly via NSLayoutManager, that require a flipped context. To cover this case, we add
- (void)lockFocusFlipped:(BOOL)flipped;
This doesn't alter the state of the image itself, only the context on which focus is locked. It means that (0,0) is at the top left and positive along the Y-axis is down in the locked context.
None of the docs seem to explain NSShadow's behaviour in this case. And through further testing, it appears NSGradient does not respect the flipped state of the drawing context used by NSImage either.
Any insight is greatly appreciated :-)
From the NSShadow class reference:
Shadows are always drawn in the default user coordinate space, regardless of any transformations applied to that space. This means that rotations, translations and other transformations of the current transformation matrix (the CTM) do not affect the resulting shadow.
And that's what flipping ultimately is: a translation up and a scale back the other way (a -1 scale in y).
There's no such statement for NSGradient, so I'd suggest filing a bug about that one.
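As a practical workaround for the shadow, you can compensate for the flip yourself; something along these lines (a sketch):

NSShadow *shadow = [[NSShadow alloc] init];
[shadow setShadowColor:[NSColor blackColor]];
[shadow setShadowBlurRadius:6.0];
// The offset is interpreted in the default (non-flipped) space,
// so negate the Y component when the current context is flipped.
BOOL flipped = [[NSGraphicsContext currentContext] isFlipped];
[shadow setShadowOffset:NSMakeSize(0, flipped ? -3 : 3)];
[shadow set];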
I'm working on making an iPhone App where there are two ImageViews and when you touch the top one, wherever you tapped, the bottom one shows instead.
Basically what I want to do is cut an ellipse/rounded rect out of an image. To do this I was thinking of either clipping the image, or changing the alpha of the pixels in the rect to zero. I am new to Quartz 2D programming so I am not sure how to do this.
Assuming I have:
UIImageView *topImage;
UIImageView *bottomImage;
How do I delete a CGRect/ellipse/rounded rect from these images?
This is kind of like those lottery tickets that you have to scratch off to reveal if you won.
I would generally try to make a mask from a path (here containing a rounded rectangle), then mask the image with it, as demonstrated in the Apple docs. One of the benefits of this is that for hit testing all you need to do is call CGPathContainsPoint with the point that was touched (i.e. it will test whether the touch was in the visible area of the image).
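One way to realize that (a sketch using even-odd clipping rather than an explicit image mask; holeRect and touchPoint are placeholders): redraw the top image with a rounded-rect hole cut out of it, so the bottom image behind it shows through, and reuse the same path for hit testing.

UIBezierPath *holePath = [UIBezierPath bezierPathWithRoundedRect:holeRect cornerRadius:8.0];

UIGraphicsBeginImageContextWithOptions(topImage.image.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Clip to "everything except the hole" using the even-odd rule.
CGContextAddRect(ctx, CGRectMake(0, 0, topImage.image.size.width, topImage.image.size.height));
CGContextAddPath(ctx, holePath.CGPath);
CGContextEOClip(ctx);

[topImage.image drawAtPoint:CGPointZero];
topImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Hit testing: was the touch inside the cut-out (revealed) area?
BOOL inHole = CGPathContainsPoint(holePath.CGPath, NULL, touchPoint, false);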
I tried this code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect frame = CGRectMake(100, 100, 100, 100);
CGPathRef roundedRectPath = [self newPathForRoundedRect:frame radius:5];
CGContextAddPath(ctx, roundedRectPath);
CGContextClip (ctx);
CGPathRelease(roundedRectPath);
(Together with the rounded rect path function you sent)
This is on a white view, and beneath the view there is a gray window, so I thought this would just show gray instead of white inside the CGRect frame, but it didn't do anything...
I have a window with an subclass of NSView in it. Inside the view, I put an NSImage.
I want to be able to rotate the image by 90 degrees, keeping the (new) upper left corner of the image in the upper left corner of the view. Of course, I will have to rotate the image, and then translate it to put the origin back into place.
In Carbon, I found CGContextRotateCTM, which does what I want. However, I can't find the right call in Objective-C. setFrameCenterRotation doesn't seem to do anything, and with setFrameRotation, I can't seem to figure out where the origin is, so that I can appropriately translate.
It seems to move. When I resize the window, it puts the image (or part of it; I seem to have a strange clipping issue as well) in a different place, and when I scroll, it jumps to a different (and not always the same) location.
Does this make sense to anyone?
thanks
I rotate text on the screen for an app I work on and the Cocoa (I assume you mean Cocoa and not ObjC in your question) way of doing this is to use NSAffineTransform.
Here's a snippet that should get you started
double rotateDeg = 90;
NSAffineTransform *rotate = [[NSAffineTransform alloc] init];
NSGraphicsContext *context = [NSGraphicsContext currentContext];
[context saveGraphicsState];
[rotate rotateByDegrees:rotateDeg];
[rotate concat];
/* Your drawing code ([NSImage drawAtPoint:...]) for the image goes here.
   Also, if you need to lock focus when drawing, do it here. */
[rotate release];
[context restoreGraphicsState];
The mathematics on the rotation can get a little tricky here because what the above does is to rotate the coordinate system that you are drawing into. My rotation of 90 degrees is a counter-clockwise rotation.
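For the specific goal in the question (rotate 90 degrees and keep the image's new upper-left corner in the view's upper-left corner), a sketch along these lines should work, assuming a non-flipped view and an image instance variable named image:

- (void)drawRect:(NSRect)dirtyRect
{
    NSSize imageSize = [image size];

    NSAffineTransform *transform = [NSAffineTransform transform];
    // NSAffineTransform applies the most recently added operation first,
    // so this rotates the drawing 90 degrees counter-clockwise and then
    // shifts it so the rotated image's top-left corner lands at the
    // view's top-left corner.
    [transform translateXBy:imageSize.height
                        yBy:NSHeight([self bounds]) - imageSize.width];
    [transform rotateByDegrees:90.0];

    [NSGraphicsContext saveGraphicsState];
    [transform concat];
    [image drawAtPoint:NSZeroPoint
              fromRect:NSZeroRect
             operation:NSCompositeSourceOver
              fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];
}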