In a project I'm working on, I have 3 images: top, middle, and bottom. Top and bottom are fixed height, and middle should be repeated in between the two. (The window size will be changing.) They are all tinted with a color from the user preferences, then need to have their alpha set using a value from the preferences.
I can do pretty much everything. The part I get stuck at is drawing the middle. I decided using +[NSColor colorWithPatternImage:] would be the easiest thing to use. There's a lot of code that makes the actual images and colors, so just assume they exist and are not nil.
int problem; // just to help explain things
float alpha;
NSImage *middleTinted;

// The middle strip spans the space between the two fixed-height caps.
NSRect drawRect = [self bounds];
drawRect.size.height = self.bounds.size.height - topTinted.size.height - bottomTinted.size.height;
drawRect.origin.y = topTinted.size.height;

NSColor *colorOne = [NSColor colorWithPatternImage:middleTinted];
NSColor *colorTwo = [colorOne colorWithAlphaComponent:alpha];

if (problem == 1)
{
    [colorOne set];
}
else if (problem == 2)
{
    [colorTwo set];
}

[NSBezierPath fillRect:drawRect];
Assuming problem == 1, it draws the correct image, in the correct location and with the correct size, but no alpha. (Obviously, since I didn't specify one.)
When problem == 2, I'd expect it to do the same thing, but have the correct alpha value. Instead of this, I get a black box.
Is there a solution that will repeat the image with the correct alpha? I figure I could just draw the image manually in a loop, but I'd prefer a more reasonable solution if one exists.
I expect the problem is that pattern colors don't support -colorWithAlphaComponent:.
NSCell.h declares a function called NSDrawThreePartImage() that does the work of drawing end caps and a tiled image in between. It also has an alphaFraction parameter that should meet your needs.
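As a rough sketch, reusing the image and alpha names from your code, a call might look like the following; whether the bottom image is the "start" cap in the vertical case is an assumption worth checking against the docs:
// Hypothetical call inside -drawRect:, using the question's
// topTinted/middleTinted/bottomTinted images and `alpha` value.
NSDrawThreePartImage([self bounds],
                     bottomTinted,          // start cap (assumed: bottom)
                     middleTinted,          // tiled center fill
                     topTinted,             // end cap (assumed: top)
                     YES,                   // vertical stack
                     NSCompositeSourceOver, // normal compositing
                     alpha,                 // alphaFraction from preferences
                     [self isFlipped]);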
If that doesn't work for you, then you might get the pattern color approach to work by re-rendering your middleTinted image into a new NSImage, using the desired alpha value. (See NSImage's draw... methods.)
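A minimal sketch of that second approach, assuming middleTinted and drawRect from the question's code and that pre-ARC memory management is handled elsewhere:
// Re-render middleTinted into a new image, baking in the alpha.
NSImage *fadedMiddle = [[NSImage alloc] initWithSize:[middleTinted size]];
[fadedMiddle lockFocus];
[middleTinted drawAtPoint:NSZeroPoint
                 fromRect:NSZeroRect            // whole image
                operation:NSCompositeSourceOver
                 fraction:alpha];               // alpha from preferences
[fadedMiddle unlockFocus];

// Now the pattern color no longer needs -colorWithAlphaComponent:.
[[NSColor colorWithPatternImage:fadedMiddle] set];
[NSBezierPath fillRect:drawRect];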
I'm new to this, and it's hard for me to even ask my question right because I don't know the right terminology. I've done some Objective-C coding, so I'm a little beyond beginner except when it comes to working with UIs.
I would like to know the best practices to accomplish this - i.e. the right way.
I have a window with some buttons at the top of it. Below that is a region that will have an image or webview. This will be of variable size so to make it look nice I'd like to have the area behind it have a nice tiled pattern.
I've experimented with a few things that work but everything feels a bit hackish. Is there a control that automatically provides a tiled background and lets me put other controls inside of it? For that matter, is there any kind of control that allows putting other controls inside of it? (I'm used to this in GTK but it doesn't appear to be common in Cocoa)
Also, considering that the image can change sizes based on the buttons above, should I be using Core Animation and its layers (I've read about them but not used them)?
One fairly simple way to do this is to use a custom NSView subclass for the background view. In its -drawRect: method, write code to take the image and draw it repeatedly to fill the bounds of the view. The algorithm to do this is pretty simple. Start at the top left (or any corner really), draw the image, then increment the x position by the width of the image, and draw again. When the x position exceeds the maximum x coordinate of the view, increment y by the height of the image and draw the next row, and so on until you've filled the whole thing. This should do the trick:
@interface TiledBackgroundView : NSView
@end

@implementation TiledBackgroundView

- (void)drawRect:(NSRect)dirtyRect
{
    NSRect bounds = [self bounds];
    NSImage *image = ...;  // the tile image, obtained elsewhere
    NSSize imageSize = [image size];

    // Start at max Y (top) so that resizing the window looks to be anchored at the top left.
    for ( float y = NSHeight(bounds) - imageSize.height; y >= -imageSize.height; y -= imageSize.height ) {
        for ( float x = NSMinX(bounds); x < NSWidth(bounds); x += imageSize.width ) {
            NSRect tileRect = NSMakeRect(x, y, imageSize.width, imageSize.height);
            if ( NSIntersectsRect(tileRect, dirtyRect) ) {
                // Only draw the part of this tile that overlaps the dirty rect.
                NSRect destRect = NSIntersectionRect(tileRect, dirtyRect);
                [image drawInRect:destRect
                         fromRect:NSOffsetRect(destRect, -x, -y)
                        operation:NSCompositeSourceOver
                         fraction:1.0];
            }
        }
    }
}

@end
No control automatically tiles a background for you.
Remember that NSViews (usually subclasses) do all the drawing - so, for instance, that gray area would be a subclass of NSView and you could put the images inside of it.
To actually draw the tiled image (by the NSView subclass), Madsen's method is usable, but not the most convenient. The easiest way is something along the lines of:
NSColor *patternColor = [NSColor colorWithPatternImage:[NSImage imageNamed:@"imageName"]];
[patternColor setFill];
NSRectFill(rectToDraw);
which you should put in the -drawRect: method of your custom view class. It creates an NSColor which represents a tiled image. Note that it can also be a subclass of a scroll/clip view, etc.
I am not too familiar with Core Animation, but it is useful for manipulating views, and might be worth looking at for the view that draws the image (and that view only).
I have a custom status item view where I draw a string using NSString's drawAtPoint:withAttributes:. Compared to the system clock with the same text, my text seems to be missing subpixel smoothing and looks sort of grainy. I found advice to shift the drawing point from (x,y) to (x+0.5,y+0.5), but it did not help. A default view and setTitle: produce the same result.
That's how it looks (screenshot omitted).
The system clock seems to have a light gray border below the text, but I could not imitate it by drawing the string a second time in light gray.
I don't see any "font smoothing" in the system's rendering, or in any menu titles on my machine. If it was turned on, you'd see red and blue tinted pixels at the edges of the characters, instead of just gray. The difference is quite obvious when zoomed in.
You may want to experiment with turning subpixel positioning and quantization on or off, using
CGContextSetShouldSubpixelPositionFonts, CGContextSetShouldSubpixelQuantizeFonts, etc.
Otherwise, the main difference really is that faint white shadow in the system's rendering. Try setting the context's shadow to an offset of {0,1} (or maybe {0,-1} if your context is flipped?), blur of 0 or 1, and a color of 100% white at 30% alpha -- that looks pretty close to me.
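Putting both suggestions together, a hedged sketch, assuming the drawing happens in the view's -drawRect: and with string/point/attributes standing in for your actual values:
CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];

// Experiment with subpixel positioning/quantization.
CGContextSetShouldSubpixelPositionFonts(ctx, true);
CGContextSetShouldSubpixelQuantizeFonts(ctx, false);

// Faint white shadow just below the glyphs, similar to the system
// clock's engraved look. Use an offset of {0,-1} if the context is flipped.
CGColorRef faintWhite = CGColorCreateGenericGray(1.0, 0.3);
CGContextSetShadowWithColor(ctx, CGSizeMake(0, 1), 1.0, faintWhite);
CGColorRelease(faintWhite);

[string drawAtPoint:point withAttributes:attributes];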
Try:
CGContextRef ctx = [NSGraphicsContext currentContext].graphicsPort;
CGContextSetShouldSmoothFonts(ctx, true);
I was searching for that for a long time, too. I got it from a WWDC video (I think it was "Best Practices for Cocoa Animation").
I guess you use
UIGraphicsBeginImageContext(<#CGSize size#>);
for context initialization. On Retina displays this leads to blurred rendering of fonts. Use
UIGraphicsBeginImageContextWithOptions(myFrameSize, NO, 0.0f);
instead to fix it.
Also, it seems that if you use a layer, you can set yourself as the layer delegate and do your drawing in:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    NSGraphicsContext *nsGraphicsContext =
        [NSGraphicsContext graphicsContextWithGraphicsPort:ctx flipped:NO];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:nsGraphicsContext];

    // drawing here...

    [NSGraphicsContext restoreGraphicsState];
}
It will also work.
Part of my iPad app allows users to draw paths to connect different parts of the screen. They all have the same color (white) and line width. Each path is represented as a UIBezierPath. Besides their locations, they look identical. Since users are only editing one path at a time, I want to make it so that they can visually see which path they are editing.
Is there a way to animate the path so that the user has a visual cue about which path they are editing? I'm thinking that maybe the current path could glow or have moving dotted lines. I don't want to change the base color, since I use many colors in the other parts of the application (pretty much all major colors except white).
I haven't done this in an animated way, but I make the path currently being drawn dashed, and then solid once the drawing ends. I subclassed NSBezierPath and added a selected property. The setSelected method looks like this:
- (void)setSelected:(BOOL)yes_no {
    selected = yes_no;
    if (yes_no == YES) {
        // Dashed, highlighted stroke while the path is selected.
        CGFloat dashArray[2];
        dashArray[0] = 5;
        dashArray[1] = 2;
        [self setLineDash:dashArray count:2 phase:0];
        self.pathColor = [self.unselectedColor highlightWithLevel:.5];
    } else {
        // Back to a solid stroke in the normal color.
        [self setLineDash:NULL count:0 phase:0];
        self.pathColor = self.unselectedColor;
    }
}
I set the property to YES in the mouseDragged: method, and then to NO in mouseUp:.
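For context, the calling side might look roughly like this, with currentPath being a hypothetical ivar tracking the path being edited:
- (void)mouseDragged:(NSEvent *)event
{
    [currentPath setSelected:YES];   // dashed + highlighted while editing
    [self setNeedsDisplay:YES];
}

- (void)mouseUp:(NSEvent *)event
{
    [currentPath setSelected:NO];    // solid again once editing ends
    [self setNeedsDisplay:YES];
}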
How can I divide this UIImage into two parts along the black line? The upper contour is defined by a UIBezierPath.
I need to get the two resulting UIImages. Is this possible?
The following routines create versions of a UIImage with either only the content inside a path, or only the content outside that path.
Both make use of the compositeImage method, which uses CGBlendMode. CGBlendMode is very powerful for masking anything you can draw against anything else you can draw. Calling compositeImage: with other blend modes can have interesting (if not always useful) effects. See the CGContext Reference for all the modes.
The clipping method I described in my comment to your OP does work and is probably faster, but only if you have UIBezierPaths defining all the regions you want to clip.
- (UIImage *)compositeImage:(UIImage *)sourceImage onPath:(UIBezierPath *)path usingBlendMode:(CGBlendMode)blend
{
    // Create a new image context of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);

    // First draw an opaque path...
    [path fill];

    // ...then composite with the image.
    [sourceImage drawAtPoint:CGPointZero blendMode:blend alpha:1.0];

    // With drawing complete, store the composited image for later use.
    UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();

    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();

    return maskedImage;
}

- (UIImage *)maskImage:(UIImage *)sourceImage toAreaInsidePath:(UIBezierPath *)maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceIn];
}

- (UIImage *)maskImage:(UIImage *)sourceImage toAreaOutsidePath:(UIBezierPath *)maskPath
{
    return [self compositeImage:sourceImage onPath:maskPath usingBlendMode:kCGBlendModeSourceOut];
}
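Usage could then be as simple as the following, where dividingPath is assumed to be the closed UIBezierPath enclosing the upper region:
UIImage *upperPart = [self maskImage:sourceImage toAreaInsidePath:dividingPath];
UIImage *lowerPart = [self maskImage:sourceImage toAreaOutsidePath:dividingPath];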
I tested clipping, and in a few different tests it was 25% slower than masking to achieve the same result as the maskImage:toAreaInsidePath: method in my other answer. For completeness I include it here, but please don't use it without a good reason.
- (UIImage *)clipImage:(UIImage *)sourceImage toPath:(UIBezierPath *)path
{
    // Create a new image context of the same size as the source.
    UIGraphicsBeginImageContext([sourceImage size]);

    // Clipping means drawing only happens within the path.
    [path addClip];

    // Draw the image to the context.
    [sourceImage drawAtPoint:CGPointZero];

    // With drawing complete, store the clipped image for later use.
    UIImage *clippedImage = UIGraphicsGetImageFromCurrentImageContext();

    // Graphics contexts must be ended manually.
    UIGraphicsEndImageContext();

    return clippedImage;
}
This can be done, but it requires some geometry. Let's consider the case of the upper image. First, determine the bottommost end point of the UIBezierPath and use UIGraphicsBeginImageContext to get the part of the image above the line. This will look as follows:
Now, assuming that your line is straight, move pixel by pixel along the line, drawing vertical strokes of clearColor (the loop below handles the top portion; proceed along similar lines for the bottom portion):
for (int currentPixel_x = 0; currentPixel_x < your_ui_image_top.size.width; currentPixel_x++) {
    UIGraphicsBeginImageContext(your_ui_image_top.size);
    [your_ui_image_top drawInRect:CGRectMake(0, 0, your_ui_image_top.size.width, your_ui_image_top.size.height)];
    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 1.0);
    // The clear blend mode erases whatever the stroke covers.
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeClear);
    CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(), [UIColor clearColor].CGColor);
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    // Stroke from the line (y = m*x + c) down to the bottom of the image.
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, m * currentPixel_x + c);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPixel_x, your_ui_image_top.size.height);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
    your_ui_image_top = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Your UIBezierPath will have to be converted to a straight line of the form y = m*x + c. The x in this equation is currentPixel_x above. Iterate through the width of the image, increasing currentPixel_x by 1 each time. next_y_point_on_your_line will be calculated as:
next_y_point_on_your_line = m*currentPixel_x + c
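Deriving m and c is straightforward if you know two points on the line; the endpoint values below are purely illustrative:
// Hypothetical endpoints of the straight dividing line:
CGPoint p1 = CGPointMake(0.0, 80.0);
CGPoint p2 = CGPointMake(320.0, 140.0);
CGFloat m = (p2.y - p1.y) / (p2.x - p1.x);  // slope (assumes a non-vertical line)
CGFloat c = p1.y - m * p1.x;                // y-intercept, giving y = m*x + c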
Each vertical stroke will be 1 pixel wide and its height will depend on how you traverse through them. After some iterations, your image will look roughly (please excuse my poor photo-editing skills!) like:
There are multiple ways of how you draw the clear strokes and this is just one way of going about it. You can also have clear strokes that are parallel to the given path if it gives better results.
Another way is to set the alpha of the pixels below the line to 0.
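A hedged sketch of that per-pixel variant, assuming a premultiplied RGBA bitmap, m/c expressed in top-left (UIKit) coordinates, and sourceImage standing in for your image; the buffer's row order is worth verifying before relying on this:
CGImageRef cg = sourceImage.CGImage;
size_t w = CGImageGetWidth(cg), h = CGImageGetHeight(cg);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
uint8_t *pixels = calloc(w * h * 4, 1);
CGContextRef bmp = CGBitmapContextCreate(pixels, w, h, 8, w * 4, space,
                                         kCGImageAlphaPremultipliedLast);
CGContextDrawImage(bmp, CGRectMake(0, 0, w, h), cg);

for (size_t x = 0; x < w; x++) {
    double yLine = m * (double)x + c;                    // line in top-left coords
    size_t firstRow = (size_t)MAX(0.0, MIN((double)h, yLine));
    for (size_t y = firstRow; y < h; y++) {              // rows below the line
        uint8_t *p = pixels + (y * w + x) * 4;
        p[0] = p[1] = p[2] = p[3] = 0;                   // premultiplied: clear all channels
    }
}

CGImageRef cutCG = CGBitmapContextCreateImage(bmp);
UIImage *upperOnly = [UIImage imageWithCGImage:cutCG];
CGImageRelease(cutCG);
CGContextRelease(bmp);
CGColorSpaceRelease(space);
free(pixels);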
I'm working on making an iPhone App where there are two ImageViews and when you touch the top one, wherever you tapped, the bottom one shows instead.
Basically what I want to do is cut an ellipse/rounded rect out of an image. To do this I was thinking of either clipping the image or changing the alpha of the pixels in the rect to zero. I am new to Quartz 2D programming, so I am not sure how to do this.
Assuming I have:
UIImageView *topImage;
UIImageView *bottomImage;
How do I delete a CGRect/ellipse/rounded rect from these images?
This is kind of like those lottery tickets that you have to scratch off to reveal if you won.
I would generally try to make a mask from a path (here containing a rounded rectangle), then mask the image with it, as demonstrated in the Apple docs. One of the benefits of this is that for hit testing, all you need to do is call CGPathContainsPoint with the point that was touched (i.e., it tests whether the touch was in the visible area of the image).
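The hit test itself could be as small as this, with maskPath and touchPoint standing in for your own variables:
// maskPath: CGPath of the visible region; touchPoint: from the touch handler.
if (CGPathContainsPoint(maskPath, NULL, touchPoint, false)) {
    // The touch landed inside the visible part of the image.
}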
I tried this code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect frame = CGRectMake(100, 100, 100, 100);
CGPathRef roundedRectPath = [self newPathForRoundedRect:frame radius:5];
CGContextAddPath(ctx, roundedRectPath);
CGContextClip (ctx);
CGPathRelease(roundedRectPath);
(Together with the rounded-rect path function you sent.)
This is on a white view, and beneath the view there is a gray window, so I thought this would just show gray instead of white in the CGRect frame, but it didn't do anything...