I've got a program that can open TIFF documents and display them. I'm using setFlipped:YES.
If I'm just dealing with single page image files, I can do
[image setFlipped: YES];
and that, in addition to the view being flipped, seems to draw the image correctly.
However, for some reason, setting flipped on the image doesn't seem to affect the flippedness of its individual representations.
This is relevant because the pages of a multi-page TIFF appear as different "representations" of the same image. So if I just draw the image, it's flipped, but if I draw a specific representation, it isn't. I also can't figure out how to choose which representation is the default one that gets drawn when you draw the NSImage.
thanks.
You shouldn't use the -setFlipped: method to control how the image is drawn. You should use a transform based on the flipped-ness of the context you are drawing into. Something like this (a category on NSImage):
@implementation NSImage (FlippedDrawing)

- (void)drawAdjustedInRect:(NSRect)dstRect fromRect:(NSRect)srcRect operation:(NSCompositingOperation)op fraction:(CGFloat)delta
{
    NSGraphicsContext* context = [NSGraphicsContext currentContext];
    BOOL contextIsFlipped = [context isFlipped];
    if (contextIsFlipped)
    {
        NSAffineTransform* transform;
        [context saveGraphicsState];
        // Flip the coordinate system back.
        transform = [NSAffineTransform transform];
        [transform translateXBy:0 yBy:NSMaxY(dstRect)];
        [transform scaleXBy:1 yBy:-1];
        [transform concat];
        // The transform above places the y-origin right where the image should be drawn.
        dstRect.origin.y = 0.0;
    }
    [self drawInRect:dstRect fromRect:srcRect operation:op fraction:delta];
    if (contextIsFlipped)
    {
        [context restoreGraphicsState];
    }
}

- (void)drawAdjustedAtPoint:(NSPoint)point
{
    [self drawAdjustedAtPoint:point fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
}

- (void)drawAdjustedInRect:(NSRect)rect
{
    [self drawAdjustedInRect:rect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
}

- (void)drawAdjustedAtPoint:(NSPoint)aPoint fromRect:(NSRect)srcRect operation:(NSCompositingOperation)op fraction:(CGFloat)delta
{
    NSSize size = [self size];
    [self drawAdjustedInRect:NSMakeRect(aPoint.x, aPoint.y, size.width, size.height) fromRect:srcRect operation:op fraction:delta];
}

@end
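A flipped view's drawRect: can then draw an image right-side-up like this (a minimal sketch, assuming an NSImage ivar named image):

- (void)drawRect:(NSRect)dirtyRect
{
    // drawAdjustedInRect: compensates for the flipped context automatically.
    // 'image' is a hypothetical ivar holding the NSImage to draw.
    [image drawAdjustedInRect:self.bounds];
}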
I believe the answer is that yes, different pages are separate representations, and the correct way to deal with them is to turn them into images with:
NSImage *im = [[NSImage alloc] initWithData:[representation TIFFRepresentation]];
[im setFlipped:YES];
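Putting it together, extracting every page of a multi-page TIFF as its own flipped NSImage might look like this (a sketch; multiPageImage is assumed to be the NSImage loaded from the TIFF):

NSMutableArray *pages = [NSMutableArray array];
for (NSImageRep *rep in [multiPageImage representations])
{
    if ([rep isKindOfClass:[NSBitmapImageRep class]])
    {
        // Wrap each page's representation in its own NSImage.
        NSImage *page = [[NSImage alloc] initWithData:
            [(NSBitmapImageRep *)rep TIFFRepresentation]];
        [page setFlipped:YES];
        [pages addObject:page];
    }
}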
I have a custom NSView 'MyView' that displays an NSImage that is expensive to create. Ideally, this rendering should happen on a background thread, and MyView should update itself when rendering is done.
To achieve this, I followed the suggestion in WWDC 2013, Session 215 (around 4:00).
It works like this: when drawRect is called and the image hasn't been created yet, rendering is triggered on a background queue. There, the image is created, stored in an instance variable, and setNeedsDisplay is called (on the main thread) to mark the view as dirty. That triggers drawRect a second time, where the image is now present and can be drawn:
- (void)drawRect:(NSRect)dirtyRect
{
    // Do we have an image?
    if( self.image )
    {
        // Yes, we can draw the image (and invalidate it right away for demo purposes)
        [self.image drawInRect:self.bounds];
        self.image = nil;
    }
    else
    {
        // No, we have to async render the image first and mark the view as dirty afterwards
        CGSize imageSize = self.bounds.size;
        dispatch_async( dispatch_get_global_queue( QOS_CLASS_USER_INTERACTIVE, 0 ), ^
        {
            self.image = [self _renderImageWithSize:imageSize];
            dispatch_async( dispatch_get_main_queue(), ^
            {
                [self setNeedsDisplayInRect:dirtyRect];
            });
        });
    }
}
- (NSImage *)_renderImageWithSize:(NSSize)size
{
    // Simulate expensive image rendering (just for demo purposes)
    NSBitmapImageRep * bitmapRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
                                                                           pixelsWide:size.width
                                                                           pixelsHigh:size.height
                                                                        bitsPerSample:8
                                                                      samplesPerPixel:4
                                                                             hasAlpha:YES
                                                                             isPlanar:NO
                                                                       colorSpaceName:NSDeviceRGBColorSpace
                                                                          bytesPerRow:0
                                                                         bitsPerPixel:32];
    NSGraphicsContext * context = [NSGraphicsContext graphicsContextWithBitmapImageRep:bitmapRep];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:context];
    // Draw oval with random hue.
    float hue = ( (float)( labs( random() % 100 )) / 100.0 );
    [[NSColor colorWithHue:hue saturation:0.5 brightness:1.0 alpha:1.0] setFill];
    [[NSBezierPath bezierPathWithOvalInRect:NSMakeRect( 0.0, 0.0, size.width, size.height )] fill];
    [NSGraphicsContext restoreGraphicsState];
    NSImage * image = [[NSImage alloc] init];
    [image addRepresentation:bitmapRep];
    // Simulate super-expensive rendering
    sleep( 1 );
    return image;
}
That code works fine but it creates an annoying flicker. It seems that the view is cleared in the first call to drawRect. It stays cleared until the second drawRect actually draws the rendered image.
Of course, I could just draw the old image, but I would prefer not to draw stale data unnecessarily.
Is there a way to keep the view from clearing in drawRect?
I have the following Objective-C method, meant to resize an NSBitmapImageRep to a designated size.
Currently, when working with an image of size 2048x1536 and trying to resize it to 300x225, this function keeps returning an NSBitmapImageRep of size 600x450.
- (NSBitmapImageRep*) resizeImageRep: (NSBitmapImageRep*) anOriginalImageRep toTargetSize: (NSSize) aTargetSize
{
    NSImage* theTempImageRep = [[[NSImage alloc] initWithSize: aTargetSize] autorelease];
    [theTempImageRep lockFocus];
    [NSGraphicsContext currentContext].imageInterpolation = NSImageInterpolationHigh;
    NSRect theTargetRect = NSMakeRect(0.0, 0.0, aTargetSize.width, aTargetSize.height);
    [anOriginalImageRep drawInRect: theTargetRect];
    NSBitmapImageRep* theResizedImageRep = [[[NSBitmapImageRep alloc] initWithFocusedViewRect: theTargetRect] autorelease];
    [theTempImageRep unlockFocus];
    return theResizedImageRep;
}
Debugging it, I'm finding that theTargetRect is the proper size, but the call to initWithFocusedViewRect: returns a 600x450-pixel bitmap.
I'm at a complete loss as to why this may be happening. Does anyone have any insight?
Your technique won't produce a resized image. For one thing, the method initWithFocusedViewRect: reads bitmap data from the focused window and is used to create screen grabs. It captures actual pixels, so on a 2x Retina display a 300x225-point rect comes back as a 600x450-pixel bitmap, which is most likely the doubling you're seeing.
You should create a new graphics context with a new NSBitmapImageRep or NSImage of the desired size, then draw your image into that context. Something like this:
// theTempImageRep is assumed to be an NSBitmapImageRep created at the target pixel size.
NSGraphicsContext* context = [NSGraphicsContext graphicsContextWithBitmapImageRep:theTempImageRep];
if (context)
{
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:context];
    // Either draw at the original size...
    // [anOriginalImageRep drawAtPoint:NSZeroPoint];
    // ...or scale into the target rect:
    [anOriginalImageRep drawInRect:theTargetRect];
    [NSGraphicsContext restoreGraphicsState];
}
// Now your temp image rep should have the resized original.
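Put together, a corrected version of the original method might look like this (a sketch under the same method names; the bitmap's pixel dimensions are set explicitly, so the result is 300x225 regardless of screen scale):

- (NSBitmapImageRep *)resizeImageRep:(NSBitmapImageRep *)anOriginalImageRep toTargetSize:(NSSize)aTargetSize
{
    // Create the destination bitmap with explicit pixel dimensions.
    NSBitmapImageRep *theResizedImageRep = [[[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:nil
                      pixelsWide:(NSInteger)aTargetSize.width
                      pixelsHigh:(NSInteger)aTargetSize.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSDeviceRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:32] autorelease];
    NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:theResizedImageRep];
    if (context)
    {
        [NSGraphicsContext saveGraphicsState];
        [NSGraphicsContext setCurrentContext:context];
        context.imageInterpolation = NSImageInterpolationHigh;
        // Scale the original into the full target rect.
        [anOriginalImageRep drawInRect:NSMakeRect(0.0, 0.0, aTargetSize.width, aTargetSize.height)];
        [NSGraphicsContext restoreGraphicsState];
    }
    return theResizedImageRep;
}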
I am trying to make a trivial app in which I have a sidebar, and I am trying either to set the background color based on RGB values from the PSD file, or to use a background image as a pattern.
I have made attempts both ways, and nothing works so far. Any help will be deeply appreciated.
-(void) drawRect:(NSRect)dirtyRect {
    CALayer *viewLayer = [CALayer layer];
    [viewLayer setBackgroundColor:CGColorCreateGenericRGB(85.0, 179.0, 217.0, 1.0)]; // RGB plus alpha channel
    [self setWantsLayer:YES]; // view's backing store is using a Core Animation layer
    [self setLayer:viewLayer];
}
This code should show a bluish color, but the result is almost white, not even close to what I want.
The second snippet shows a black background, even though my PNG file is in the Supporting Files folder.
- (void)drawRect:(NSRect)dirtyRect {
    NSGraphicsContext* theContext = [NSGraphicsContext currentContext];
    [theContext saveGraphicsState];
    [[NSGraphicsContext currentContext] setPatternPhase:NSMakePoint(0, [self frame].size.height)];
    [self.customBackgroundColour set];
    NSRectFill([self bounds]);
    [theContext restoreGraphicsState];
}

- (id)initWithFrame:(NSRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        self.customBackgroundColour = [NSColor colorWithPatternImage:
            [NSImage imageNamed:@"buttonBg.png"]];
    }
    return self;
}
Again, any help will be deeply appreciated.
If I recall correctly, CGColorCreateGenericRGB expects components in the range 0.0-1.0, which would explain why it is white. This should fix the white issue:
[viewLayer setBackgroundColor:CGColorCreateGenericRGB(85.0/255.0, 179.0/255.0, 217.0/255.0, 1.0)]; //RGB plus Alpha Channel
Hopefully that helps.
OK, this is what I'm trying to do:
Get an NSImage containing, let's say, a photo (1000+ x 1000+ dimensions).
Get another NSImage containing just a transparent background and a simple black border (500x500).
"Combine" the 2 images, so that the resulting image is the photo with a border.
This is what I've achieved so far:
NSImage* resultImage = [[[self drop] image] copy];
[resultImage lockFocus];
NSRect newRect = NSMakeRect(0, 0, [[[self drop] image] size].width, [[[self drop] image] size].height);
[[[self drop2] image] drawInRect:newRect
                        fromRect:NSZeroRect
                       operation:NSCompositeSourceOver
                        fraction:1.0];
[resultImage unlockFocus];
[[self drop] setImage:resultImage];
Where [self drop] is an ImageWell containing the photo, and [self drop2] an ImageWell containing the border.
The thing is that it IS working. However, the resulting image is - quite obviously - showing a somewhat "stretched" border.
How could I resolve that? Given that the original photo should be of ANY dimensions, how could I make it to use a border (of some fixed dimensions) and still avoid stretching?
How about doing the border directly with CALayer, e.g.:
#import <QuartzCore/QuartzCore.h>
// The view must be layer-backed before its layer can be styled.
imageView.wantsLayer = YES;
CALayer *layer = imageView.layer;
layer.borderColor = [[NSColor blackColor] CGColor];
layer.borderWidth = 10;
I would do this differently. Just size the image as desired, and then add the border. You could do this by having a simple view with a black background, or a suitable image (assuming you want customizable image borders, like frames), sized so the resulting border stays constant. Then you can generate a new image from that view if you need to.
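If you'd rather stay with image compositing, one way to keep the border width constant is to stroke a fixed-width frame on top of the photo instead of stretching the border image (a sketch, reusing the question's [self drop] image well; borderWidth is an assumed value of your choosing):

NSImage *photo = [[self drop] image]; // the photo from the image well
CGFloat borderWidth = 10.0;           // fixed border width in points (assumption)
NSImage *resultImage = [photo copy];
[resultImage lockFocus];
[[NSColor blackColor] set];
// NSFrameRectWithWidth strokes a frame of constant width just inside the rect,
// so the border thickness no longer depends on the photo's dimensions.
NSFrameRectWithWidth(NSMakeRect(0, 0, photo.size.width, photo.size.height), borderWidth);
[resultImage unlockFocus];
[[self drop] setImage:resultImage];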
For part of my application I have a need to create an image of a certain view and all of its subviews.
To do this I'm creating a context that wraps a bitmap with the same size as the view, but I'm unsure how to draw the view hierarchy into it. I can draw a single view just by setting the context and explicitly calling drawRect, but this does not deal with all of the subviews.
I can't see anything in the NSView interface that could help with this so I suspect the solution may lie at a higher level.
I found that writing the drawing code myself was the best way to:
deal with potential transparency issues (some of the other options add a white background to the whole image)
get much better performance
The code below is not perfect, because it does not deal with scaling issues when going from bounds to frames, but it does take the isFlipped state into account, and it worked very well for what I used it for. Note that it only draws the subviews (and the sub-subviews, recursively); getting it to also draw the view itself is easy, just add a [self drawRect:[self bounds]] in the implementation of imageWithSubviews.
- (void)drawSubviews
{
    BOOL flipped = [self isFlipped];
    for ( NSView *subview in [self subviews] ) {
        // Change the coordinate system so that the local coordinates of the subview (bounds)
        // become the coordinates of the superview (frame).
        // The transform assumes bounds and frame have the same size, and bounds origin is (0,0);
        // handling of 'isFlipped' is also probably unreliable.
        NSAffineTransform *transform = [NSAffineTransform transform];
        if ( flipped ) {
            [transform translateXBy:subview.frame.origin.x yBy:NSMaxY(subview.frame)];
            [transform scaleXBy:+1.0 yBy:-1.0];
        } else
            [transform translateXBy:subview.frame.origin.x yBy:subview.frame.origin.y];
        [transform concat];
        // Recursively draw the subview and sub-subviews.
        [subview drawRect:[subview bounds]];
        [subview drawSubviews];
        // Reset the transform to get back a clean graphics context for the rest of the drawing.
        [transform invert];
        [transform concat];
    }
}

- (NSImage *)imageWithSubviews
{
    NSImage *image = [[[NSImage alloc] initWithSize:[self bounds].size] autorelease];
    [image lockFocus];
    // It seems NSImage cannot use flipped coordinates the way NSView does
    // (the method 'setFlipped:' does not seem to help); use an NSAffineTransform instead.
    if ( [self isFlipped] ) {
        NSAffineTransform *transform = [NSAffineTransform transform];
        [transform translateXBy:0 yBy:NSMaxY(self.bounds)];
        [transform scaleXBy:+1.0 yBy:-1.0];
        [transform concat];
    }
    [self drawSubviews];
    [image unlockFocus];
    return image;
}
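Usage is then a one-liner (assuming myView is the view you want to capture):

NSImage *snapshot = [myView imageWithSubviews];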
You can use -[NSView dataWithPDFInsideRect:] to render the entire hierarchy of the view you send it to into a PDF, returned as an NSData object. You can then do whatever you wish with that, including render it into a bitmap.
Are you sure you want a bitmap representation though? After all, that PDF could be (at least in theory) resolution-independent.
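For example (a sketch; myView is assumed to be the view you want to capture):

NSData *pdfData = [myView dataWithPDFInsideRect:[myView bounds]];
// Wrap the PDF data in an NSImage if you want to draw it;
// it can also be written straight to disk as a .pdf file.
NSImage *image = [[NSImage alloc] initWithData:pdfData];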
You can use -[NSBitmapImageRep initWithFocusedViewRect:] after locking focus on a view to have the view render itself (and its subviews) into the given rectangle.
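That approach looks like this (a sketch; again, myView is the view to capture):

[myView lockFocus];
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:[myView bounds]];
[myView unlockFocus];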
What you want to do is available explicitly already. See the section "NSView Drawing Redirection API" in the 10.4 AppKit release notes.
Make an NSBitmapImageRep for caching and clear it:
// cacheBitmapImageRep is assumed to be an NSBitmapImageRep sized to the view,
// e.g. obtained from -[NSView bitmapImageRepForCachingDisplayInRect:].
NSGraphicsContext *bitmapGraphicsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:cacheBitmapImageRep];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:bitmapGraphicsContext];
[[NSColor clearColor] set];
NSRectFill(NSMakeRect(0, 0, [cacheBitmapImageRep size].width, [cacheBitmapImageRep size].height));
[NSGraphicsContext restoreGraphicsState];
Cache to it:
-[NSView cacheDisplayInRect:toBitmapImageRep:]
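For example (assuming view and the cacheBitmapImageRep from above):

[view cacheDisplayInRect:[view bounds] toBitmapImageRep:cacheBitmapImageRep];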
If you want to more generally draw into a specified context handling view recursion and transparency correctly,
-[NSView displayRectIgnoringOpacity:inContext:]
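For example (again assuming view and the bitmapGraphicsContext from above):

[view displayRectIgnoringOpacity:[view bounds] inContext:bitmapGraphicsContext];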