Adding an image border - objective-c

OK, this is what I'm trying to do:
Get an NSImage containing, let's say, a photo (1000+ x 1000+ dimensions).
Get another NSImage containing just a transparent background and a simple black border (500x500).
"Combine" the two images, so that the resulting image is the photo with a border.
This is what I've achieved so far:
NSImage *resultImage = [[[self drop] image] copy];
[resultImage lockFocus];
NSRect newRect = NSMakeRect(0, 0, [[[self drop] image] size].width, [[[self drop] image] size].height);
[[[self drop2] image] drawInRect:newRect
                        fromRect:NSZeroRect
                       operation:NSCompositeSourceOver
                        fraction:1.0];
[resultImage unlockFocus];
[[self drop] setImage:resultImage];
Where [self drop] is an ImageWell containing the photo, and [self drop2] an ImageWell containing the border.
The thing is that it IS working. However, the resulting image is - quite obviously - showing a somewhat "stretched" border.
How could I resolve that? Given that the original photo can be of ANY dimensions, how could I make it use a border (of some fixed dimensions) and still avoid stretching?

How about doing the border directly with CALayer, e.g.:
#import <QuartzCore/QuartzCore.h>
// NSViews are not layer-backed by default, so opt in before touching the layer:
imageView.wantsLayer = YES;
CALayer *layer = imageView.layer;
layer.borderColor = [[NSColor blackColor] CGColor];
layer.borderWidth = 10;

I would do this differently: size the image as desired first, then add the border. You could do this with a simple view with a black background, or with a suitable border image (if you want customizable borders, like frames), sized so that the resulting border thickness stays constant. Then you can generate a new image from that view if you need to.
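If you do want to stay with the compositing approach, here is a minimal sketch under some assumptions: the helper name is hypothetical, and the border is drawn as a stroked rectangle of a fixed thickness rather than taken from a pre-made 500x500 border image, so nothing gets stretched regardless of the photo's size.

// Hypothetical helper: returns a copy of `photo` with a solid black border
// of constant thickness drawn on top, whatever the photo's dimensions.
- (NSImage *)imageByAddingBorderToImage:(NSImage *)photo borderWidth:(CGFloat)borderWidth
{
    NSSize size = [photo size];
    NSImage *result = [[NSImage alloc] initWithSize:size];

    [result lockFocus];
    // Draw the photo 1:1, with no scaling.
    [photo drawInRect:NSMakeRect(0, 0, size.width, size.height)
             fromRect:NSZeroRect
            operation:NSCompositeSourceOver
             fraction:1.0];
    // Stroke a frame just inside the edges; the thickness never depends on
    // the photo's dimensions, so the border never looks stretched.
    [[NSColor blackColor] set];
    NSBezierPath *border = [NSBezierPath bezierPathWithRect:
                               NSInsetRect(NSMakeRect(0, 0, size.width, size.height),
                                           borderWidth / 2.0, borderWidth / 2.0)];
    [border setLineWidth:borderWidth];
    [border stroke];
    [result unlockFocus];

    return result;
}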

Related

NSTextView not drawing text onto NSImage

I'm attempting to draw some text onto an NSImage, but I've run into some issues.
Originally I was just drawing an attributed string onto the NSImage, but if the string was too long it would run off the image and I couldn't find a way to wrap the text to a newline.
To solve this I figured that I could just make an NSTextView, place the text in there, and then draw it onto the NSImage.
Unfortunately, when I attempt to draw the NSTextView to the NSImage, the text does not appear. The NSTextView's background color does show up though.
When I set a breakpoint before I lock focus on the NSImage, and preview the NSTextView, the text view has text. After I draw the text view onto the NSImage, it looks like the NSTextView is drawn, just without the text.
If there is a better way to throw text onto an NSImage that has the ability to have multiple lines, please let me know how.
Here's the code I've written for reference:
NSTextView *textToDraw = [[NSTextView alloc] initWithFrame:NSMakeRect(0, 0, input.size.width - 16, 243)];
textToDraw.backgroundColor = [NSColor blueColor];
[textToDraw setAlignment:NSTextAlignmentCenter];
[textToDraw setEditable:YES];
// textOnImage is a regular NSString
[textToDraw insertText:textOnImage replacementRange:NSMakeRange(0, textOnImage.length)];
[textToDraw setTextColor:[NSColor blackColor]];
[textToDraw setFont:font];
[textToDraw setEditable:NO];
// input is an NSImage
[input lockFocus];
[textToDraw drawRect:NSMakeRect(8, input.size.height - textToDraw.frame.size.height - 8, textToDraw.frame.size.width, textToDraw.frame.size.height)];
[input unlockFocus];
Instead of drawing the text view onto the image, you can draw the string itself into a bounding rect. If the string is too long it won't run off the image; it will automatically wrap onto a new line.
Here is the code, where inputImage is the image you want to draw the text on:
NSImage *newImage = [[NSImage alloc] initWithSize:inputImage.size];
[newImage lockFocus];
[inputImage drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
[textStr drawInRect:boundedRect withAttributes:attrsDictionary];
[newImage unlockFocus];
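The textStr, boundedRect and attrsDictionary variables above are placeholders from the answer. A minimal sketch of how they might be set up (the font, inset and height values here are just example assumptions):

NSString *textStr = @"Some caption that may be long enough to wrap onto several lines";
// Keep an 8-point margin on each side and draw in the top portion of the image.
NSRect boundedRect = NSMakeRect(8, inputImage.size.height - 243 - 8,
                                inputImage.size.width - 16, 243);
NSMutableParagraphStyle *style = [[NSMutableParagraphStyle alloc] init];
style.alignment = NSTextAlignmentCenter;
style.lineBreakMode = NSLineBreakByWordWrapping;   // wrap instead of running off the image
NSDictionary *attrsDictionary = @{ NSFontAttributeName: [NSFont systemFontOfSize:24],
                                   NSForegroundColorAttributeName: [NSColor blackColor],
                                   NSParagraphStyleAttributeName: style };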

Drawing text above the divider in an NSSplitView, the top view will occasionally draw over it

Here's a .swf (pardon the bad website and swf; that was the only way I could capture what was happening):
http://screencast.com/t/rzJ3b5ihSj
What appears to be happening, is that my divider is occasionally drawn first, and then the top NSView in the NSSplitView draws over it. But it seems inconsistent, because sometimes the divider draws on top.
Here is my -drawDividerInRect method, overridden from NSSplitView
- (void)drawDividerInRect:(NSRect)aRect
{
    [[NSColor colorWithRed:10.0/255.0 green:10.0/255.0 blue:10.0/255.0 alpha:0.0] set];
    NSRectFill(aRect);
    id topView = [[self subviews] objectAtIndex:0];
    NSRect topViewFrameRect = [topView frame];
    NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                   [NSFont fontWithName:@"Helvetica" size:26], NSFontAttributeName,
                                   [NSColor whiteColor], NSForegroundColorAttributeName, nil];
    NSAttributedString *currentText = [[NSAttributedString alloc] initWithString:@"Tool Properties" attributes:attributes];
    NSSize stringSize = [currentText size];
    CGFloat xOffset = ([topView frame].size.width - stringSize.width) / 2;
    NSRect textRect = NSMakeRect(topViewFrameRect.origin.x + xOffset, topViewFrameRect.size.height + 50, stringSize.width, stringSize.height);
    [currentText drawInRect:textRect];
}
How can I make it so my divider & text within it draws on top all the time?
So actually the divider, and the text I wanted to draw, were all being drawn on the NSSplitView itself. The two views on either side of the divider are subviews, so there's no way to draw in front of them. What I ended up doing was to add some padding at the bottom of the top view so that the divider text is always exposed.

Resize UIImageView in UITableViewCell

I have a 16x16 pixel image that I want to display in a UIImageView. So far, no problem; however, 16x16 is a bit small, so I want to resize the image view to 32x32 and thus also scale the image up.
But I can't get it to work: it always shows the image at 16x16, no matter what I try. I googled a lot and found many snippets here on Stack Overflow, but it still doesn't work.
Here is my code so far:
[[cell.imageView layer] setMagnificationFilter:kCAFilterNearest];
[cell.imageView setAutoresizingMask:UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight];
[cell.imageView setClipsToBounds:NO];
[cell.imageView setFrame:CGRectMake(0, 0, 32, 32)];
[cell.imageView setBounds:CGRectMake(0, 0, 32, 32)];
[cell.imageView setImage:image];
I don't want to create a new 32x32 pixel image because I already have some memory problems on older devices and creating two images instead of having just one looks like a very bad approach to me (the images can be perfectly scaled and it doesn't matter if they lose quality).
I have successfully made it work using CGAffineTransformMakeScale!
cell.imageView.image = cellImage;
//self.rowWidth is the desired Width
//self.rowHeight is the desired height
CGFloat widthScale = self.rowWidth / cellImage.size.width;
CGFloat heightScale = self.rowHeight / cellImage.size.height;
//this line will do it!
cell.imageView.transform = CGAffineTransformMakeScale(widthScale, heightScale);
I think you need to set the contentMode:
cell.imageView.contentMode = UIViewContentModeScaleAspectFit;
In context:
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"slashdot" ofType:@"png"]];
imageView = [[UIImageView alloc] initWithImage:image];
[imageView setBackgroundColor:[UIColor greenColor]];
[imageView setFrame:CGRectMake(x,y,32,32)];
imageView.contentMode = UIViewContentModeScaleAspectFit;
[self.view addSubview:imageView];
Note: I've set a background colour so you can debug the on-screen boundaries of the UIImageView. Also x and y are arbitrary integer coordinates.
Using CGAffineTransformMakeScale as @ahmed said is valid, and it doesn't seem like a duct-tape solution at all! For instance, suppose you put a large image into a UITableViewCell, say an image twice as large as what fits in the cell. If you scale by 0.9 you don't see any result; only scaling by less than 0.5 has a visible effect (because 0.5 * 2.0 = 1.0, which is the size of the cell). So it seems that inside the API, Apple is doing exactly that.
You need to override the layoutSubviews method in your UITableViewCell subclass. By default, the cell resizes the image view based on the cell's height.
- (void)layoutSubviews
{
    [super layoutSubviews];
    self.imageView.frame = CGRectMake(self.imageView.frame.origin.x,
                                      self.imageView.frame.origin.y,
                                      MY_ICON_SIZE,
                                      MY_ICON_SIZE);
}
You'll probably want to recalculate the origin as well so it's vertically centered.
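A hedged sketch of that recentering, still inside the same layoutSubviews override and using the same MY_ICON_SIZE constant, might look like this:

- (void)layoutSubviews
{
    [super layoutSubviews];
    // Keep the default x origin, but center the fixed-size icon vertically in the cell.
    CGFloat iconY = (self.contentView.bounds.size.height - MY_ICON_SIZE) / 2.0;
    self.imageView.frame = CGRectMake(self.imageView.frame.origin.x,
                                      iconY,
                                      MY_ICON_SIZE,
                                      MY_ICON_SIZE);
}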

Using NSImage operation to make a crop effect

I have an NSView that displays an image, and I'd like to make this view act like a cropping-image effect. I have three rectangles (imageRect, secRect and intersectRect): imageRect is the rect that shows the image, secRect is a rect that just darkens the whole imageRect, and intersectRect acts like a viewing rect. What I want to do is make a "hole" in secRect so you can see directly into imageRect (without the darkening). Here's my drawRect: method:
- (void)drawRect:(NSRect)rect {
    // Drawing code here.
    NSImage *image = [NSImage imageNamed:@"Lonely_Tree_by_sican.jpg"];
    NSRect imageRect = [self bounds];
    [image compositeToPoint:NSZeroPoint operation:NSCompositeSourceOver];
    if (NSIntersectsRect([myDrawRect currentRect], [self bounds])) {
        //get the intersectionRect
        intersectionRect = NSIntersectionRect([myDrawRect currentRect], imageRect);
        //draw the imageRect
        [image compositeToPoint:imageRect.origin operation:NSCompositeSourceOver];
        //draw the secRect and fill it with black and alpha 0.5
        NSRect secRect = NSMakeRect(imageRect.origin.x, imageRect.origin.y, imageRect.size.width, imageRect.size.height);
        [[NSColor colorWithCalibratedRed:0.0 green:0.0 blue:0.0 alpha:0.5] set];
        [NSBezierPath fillRect:secRect];
        //have no idea for the intersectRect
        /*[image compositeToPoint:intersectionRect.origin
                         fromRect:secLayer
                        operation:NSCompositeXOR
                         fraction:1.0];*/
    }
    //draw the rectangle
    [myDrawRect beginDrawing];
}
I have my own class (myDrawRect) to draw a rectangle based on mouse click on [self bounds], so just ignore the beginDrawing command.
Any help would be fine, thanks. Hebbian.
You're doing far more work than you need to, and you're using deprecated methods (the compositeToPoint:operation: and compositeToPoint:fromRect:operation:fraction: methods) to do it.
All you need to do is send the image a single drawInRect:fromRect:operation:fraction: message. The fromRect: parameter is the rectangle you want to crop to; if you don't want to scale the cropped section, then the destination rect (the drawInRect: parameter) should have the same size.
About the only extra work you may need to do is if the image may be bigger than the view and you want to only draw the section that's within the view's bounds: When that happens, you'll need to inset the crop rectangle by the difference in size between the crop rectangle and the view bounds.
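A minimal sketch of that single call for the view's drawRect:, assuming the selection rect from myDrawRect is expressed in the same coordinates the image is drawn in (which only holds here because the image is drawn 1:1 at the view's origin):

// cropRect: the part of the image you want to keep visible.
NSRect cropRect = NSIntersectionRect([myDrawRect currentRect], [self bounds]);
// Destination has the same size as the source rect, so the crop isn't scaled.
NSRect destRect = NSMakeRect(0, 0, cropRect.size.width, cropRect.size.height);
[image drawInRect:destRect
         fromRect:cropRect
        operation:NSCompositeSourceOver
         fraction:1.0];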

NSBitmapImageRep and multi-page TIFFs

I've got a program that can open TIFF documents and display them. I'm using setFlipped:YES.
If I'm just dealing with single page image files, I can do
[image setFlipped: YES];
and that, in addition to the view being flipped, seems to draw the image correctly.
However, for some reason, setting flipped on the image doesn't seem to affect the flippedness of the individual representations.
This is relevant because the pages of a multi-page TIFF appear as different "representations" of the same image. So, if I just draw the IMAGE, it's flipped, but if I draw a specific representation, it isn't flipped. I also can't figure out how to choose which representation is the default one that gets drawn when you draw the NSImage.
thanks.
You shouldn't use the -setFlipped: method to control how the image is drawn. You should use a transform based on the flipped-ness of the context you are drawing into. Something like this (a category on NSImage):
@implementation NSImage (FlippedDrawing)

- (void)drawAdjustedInRect:(NSRect)dstRect fromRect:(NSRect)srcRect operation:(NSCompositingOperation)op fraction:(CGFloat)delta
{
    NSGraphicsContext *context = [NSGraphicsContext currentContext];
    BOOL contextIsFlipped = [context isFlipped];
    if (contextIsFlipped)
    {
        NSAffineTransform *transform;
        [context saveGraphicsState];
        // Flip the coordinate system back.
        transform = [NSAffineTransform transform];
        [transform translateXBy:0 yBy:NSMaxY(dstRect)];
        [transform scaleXBy:1 yBy:-1];
        [transform concat];
        // The transform above places the y-origin right where the image should be drawn.
        dstRect.origin.y = 0.0;
    }
    [self drawInRect:dstRect fromRect:srcRect operation:op fraction:delta];
    if (contextIsFlipped)
    {
        [context restoreGraphicsState];
    }
}

- (void)drawAdjustedAtPoint:(NSPoint)point
{
    [self drawAdjustedAtPoint:point fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
}

- (void)drawAdjustedInRect:(NSRect)rect
{
    [self drawAdjustedInRect:rect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
}

- (void)drawAdjustedAtPoint:(NSPoint)aPoint fromRect:(NSRect)srcRect operation:(NSCompositingOperation)op fraction:(CGFloat)delta
{
    NSSize size = [self size];
    [self drawAdjustedInRect:NSMakeRect(aPoint.x, aPoint.y, size.width, size.height) fromRect:srcRect operation:op fraction:delta];
}

@end
I believe that the answer is that Yes, different pages are separate representations, and the correct way to deal with them is to turn them into images with:
NSImage *im = [[NSImage alloc] initWithData:[representation TIFFRepresentation]];
[im setFlipped:YES];
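As a hedged sketch of that idea (assuming each page of the loaded TIFF shows up as an NSBitmapImageRep in the image's representations array; tiffPath is a placeholder), you could split a multi-page TIFF into per-page images like this:

// tiffPath: path to your multi-page TIFF file (placeholder).
NSImage *multiPageImage = [[NSImage alloc] initWithContentsOfFile:tiffPath];
NSMutableArray *pages = [NSMutableArray array];
for (NSImageRep *rep in [multiPageImage representations]) {
    if (![rep isKindOfClass:[NSBitmapImageRep class]]) {
        continue;
    }
    // Re-wrap each page's bitmap in its own NSImage so it can be flipped and drawn independently.
    NSImage *page = [[NSImage alloc] initWithData:[(NSBitmapImageRep *)rep TIFFRepresentation]];
    [page setFlipped:YES];
    [pages addObject:page];
}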