NSAffineTransforms not being used? - objective-c

I have a subclass of NSView, and in that I'm drawing an NSImage. I'm using NSAffineTransform to rotate, translate and scale the image.
Most of it works fine. However, sometimes the transforms just don't seem to get applied.
For example, when I resize the window, the rotate transform doesn't happen.
When I zoom in on the image, it puts the lower left of the image in the correct place, but doesn't zoom it; it does, however, zoom the part of the image that would be to the right of the original-sized image. If I rotate this, it zooms correctly, but translates wrong. (The translation may be a calculation error on my part.)
Here is the code of my drawRect: (sorry for the long code chunk)
- (void)drawRect:(NSRect)rect
{
// Drawing code here.
double rotateDeg = -90* rotation;
NSAffineTransform *afTrans = [[NSAffineTransform alloc] init];
NSGraphicsContext *context = [NSGraphicsContext currentContext];
NSSize sz;
NSRect windowFrame = [[self window] frame];
float deltaX, deltaY;
NSSize superSize = [[self superview] frame].size;
float height, width, sHeight, sWidth;
NSRect imageRect;
if(image)
{
sz = [ image size];
imageRect.size = sz;
imageRect.origin = NSZeroPoint;
imageRect.size.width *= zoom;
imageRect.size.height *= zoom;
height = sz.height * zoom ;
width = sz.width *zoom ;
sHeight = superSize.height;
sWidth = superSize.width;
}
I need to grab the sizes of everything early so that I can use them later when I rotate. I am not sure that I need to protect any of that, but I'm paranoid from years of C...
[context saveGraphicsState];
// rotate
[afTrans rotateByDegrees:rotateDeg];
// translate to account for window size;
deltaX = 0;
deltaY = 0;
// translate to account for rotation
// in 1 and 3, X and Y are reversed because the entire FRAME
// (including axes) is rotated!
switch (rotation)
{
case 0:
// NSLog(@"No rotation ");
break;
case 1:
deltaY -= (sHeight - height);
deltaX -= sHeight ;
break;
case 2:
deltaX -= width;
deltaY -= ( 2*sHeight - height);
// it's rotating around the lower left of the FRAME, so,
// we need to move it up two frame heights, and then down
// the height of the image
break;
case 3:
deltaX += (sHeight - width);
deltaY -= sHeight;
break;
}
Since I'm rotating around the lower left corner, and I want the image to be locked to the upper left corner, I need to move the image around. When I rotate once, the image is in the +- quadrant, so I need to shift it up one view-height, and to the left a view-height minus an image height. etc.
[afTrans translateXBy:deltaX yBy:deltaY];
// for putting image in upper left
// zoom
[afTrans scaleBy: zoom];
printMatrix([afTrans transformStruct]);
NSLog(#"zoom %f", zoom);
[afTrans concat];
if(image)
{
NSRect drawingRect = imageRect;
NSRect frame = imageRect;
frame.size.height = MAX(superSize.height, imageRect.size.height) ;
[self setFrame:frame];
deltaY = superSize.height - imageRect.size.height;
drawingRect.origin.y += deltaY;
This makes the frame the correct size so that the image is in the upper left of the frame.
If the image is bigger than the window, I want the frame to be big enough so scroll bars appear. If it isn't I want the frame to be big enough that it reaches the top of the window.
[image drawInRect:drawingRect
fromRect:imageRect
operation:NSCompositeSourceOver
fraction:1];
if((rotation %2) )
{
float tmp;
tmp = drawingRect.size.width;
drawingRect.size.width = drawingRect.size.height;
drawingRect.size.height = tmp;
}
This code may be entirely historical, now that I look at it... the idea was to swap height and width if I rotated 90 or 270 degrees.
}
else
NSLog(#"no image");
[afTrans release];
[context restoreGraphicsState];
}

Why do you use the superview's size? That's something you should almost never need to worry about. You should make the view work on its own without dependencies on being embedded in any specific view.
Scaling the size of imageRect is probably not the right way to go. Generally when calling -drawInRect:fromRect:operation:fraction: you want the source rect to be the bounds of the image, and scale the destination rect to zoom it.
The problems you're reporting sound like you're not redrawing the entire view after changing the transformation. Are you calling -setNeedsDisplay:YES?
How is this view embedded in the window? Is it inside an NSScrollView? Have you made sure the scroll view resizes along with the window?
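To illustrate the two drawing suggestions above: keep the source rect as the full image and scale only the destination rect, and invalidate the view whenever the zoom (or rotation) changes. This is only a sketch; the setZoom: setter and the zoom and image ivars are assumed to match the names in the question.
- (void)setZoom:(CGFloat)newZoom
{
    zoom = newZoom;
    // Invalidate the whole view so the transform is re-applied on the next draw.
    [self setNeedsDisplay:YES];
}
- (void)drawRect:(NSRect)rect
{
    NSRect sourceRect = NSZeroRect;
    sourceRect.size = [image size];          // always the full image
    NSRect destRect = sourceRect;
    destRect.size.width  *= zoom;            // zoom by scaling the destination only
    destRect.size.height *= zoom;
    [image drawInRect:destRect
             fromRect:sourceRect
            operation:NSCompositeSourceOver
             fraction:1.0];
}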

Related

CATiledLayers on OS X

This has been driving me crazy.. I have a large image, and need to have a view that is both zoomable, and scrollable (ideally it should also be able to rotate, but I've given up on that part). Since the image is very large, I plan on using CATiledLayer, but I simply can't get it to work.
My requirements are:
I need to be able to zoom (on mouse center) and pan
The image should not change its width:height ratio (shouldn't resize, only zoom).
This should run on Mac OS 10.9 (NOT iOS!)
Memory use shouldn't be huge (although up to like 100 MB should be ok).
I have the necessary image both complete in one file, and also tiled into many (even have it for different zoom levels). I prefer using the tiles, as that should be easier on memory, but both options are available.
Most of the examples online refer to iOS, and thus use UIScrollView for the zoom/pan, but I can't manage to reproduce that behaviour with NSScrollView. The only example for Mac OS X I found is this, but its zoom always goes to the lower left corner, not the middle, and when I adapt the code to use PNG files instead of PDF, the memory use climbs to around 400 MB...
This is my best try so far:
@implementation MyView{
CATiledLayer *tiledLayer;
}
-(void)awakeFromNib{
NSLog(#"Es geht los");
tiledLayer = [CATiledLayer layer];
// set up this view & its layer
self.wantsLayer = YES;
self.layer = [CALayer layer];
self.layer.masksToBounds = YES;
self.layer.backgroundColor = CGColorGetConstantColor(kCGColorWhite);
// set up the tiled layer
tiledLayer.delegate = self;
tiledLayer.levelsOfDetail = 4;
tiledLayer.levelsOfDetailBias = 5;
tiledLayer.anchorPoint = CGPointZero;
tiledLayer.bounds = CGRectMake(0.0f, 0.0f, 41*256, 22*256);
tiledLayer.autoresizingMask = kCALayerNotSizable;
tiledLayer.tileSize = CGSizeMake(256, 256);
self.frame = CGRectMake(0.0f, 0.0f, 41*256, 22*256);
self.layer = tiledLayer;
//[self.layer addSublayer:tiledLayer];
[tiledLayer setNeedsDisplay];
}
-(void)drawRect:(NSRect)dirtyRect{
CGContextRef context = [[NSGraphicsContext currentContext] graphicsPort];
CGFloat scale = CGContextGetCTM(context).a;
CGSize tileSize = tiledLayer.tileSize;
tileSize.width /= scale;
tileSize.height /= scale;
// calculate the rows and columns of tiles that intersect the rect we have been asked to draw
int firstCol = floorf(CGRectGetMinX(dirtyRect) / tileSize.width);
int lastCol = floorf((CGRectGetMaxX(dirtyRect)-1) / tileSize.width);
int firstRow = floorf(CGRectGetMinY(dirtyRect) / tileSize.height);
int lastRow = floorf((CGRectGetMaxY(dirtyRect)-1) / tileSize.height);
for (int row = firstRow; row <= lastRow; row++) {
for (int col = firstCol; col <= lastCol; col++) {
NSImage *tile = [self tileForScale:scale row:row col:col];
CGRect tileRect = CGRectMake(tileSize.width * col, tileSize.height * row,
tileSize.width, tileSize.height);
// if the tile would stick outside of our bounds, we need to truncate it so as
// to avoid stretching out the partial tiles at the right and bottom edges
tileRect = CGRectIntersection(self.bounds, tileRect);
[tile drawInRect:tileRect];
}
}
}
-(BOOL)isFlipped{
return YES;
}
But this deforms the image, and doesn't zoom or pan correctly (but at least the tile selection works)...
I can't believe this is so hard, any help would be greatly appreciated. Thanks :)
After a lot of research and many attempts, I finally managed to get this to work using this example. I decided to post it for future reference. Open the ZIP > CoreAnimationLayers > TiledLayers; there's a good example there. That's how CATiledLayer works with OS X, and since the example there doesn't handle zoom very well, I'm leaving my zoom code here:
-(void)magnifyWithEvent:(NSEvent *)event{
[super magnifyWithEvent:event];
if (!isZooming) {
isZooming = YES;
BOOL zoomOut = (event.magnification > 0) ? NO : YES;
if (zoomOut) {
[self zoomOutFromPoint:event.locationInWindow];
} else {
[self zoomInFromPoint:event.locationInWindow];
}
}
}
-(void)zoomInFromPoint:(CGPoint)mouseLocationInWindow{
if(zoomLevel < pow(2, tiledLayer.levelsOfDetailBias)) {
zoomLevel *= 2.0f;
tiledLayer.transform = CATransform3DMakeScale(zoomLevel, zoomLevel, 1.0f);
tiledLayer.position = CGPointMake((tiledLayer.position.x*2) - mouseLocationInWindow.x, (tiledLayer.position.y*2) - mouseLocationInWindow.y);
}
}
-(void)zoomOutFromPoint:(CGPoint)mouseLocationInWindow{
NSInteger power = tiledLayer.levelsOfDetail - tiledLayer.levelsOfDetailBias;
if(zoomLevel > pow(2, -power)) {
zoomLevel *= 0.5f;
tiledLayer.transform = CATransform3DMakeScale(zoomLevel, zoomLevel, 1.0f);
tiledLayer.position = CGPointMake((tiledLayer.position.x + mouseLocationInWindow.x)/2, (tiledLayer.position.y + mouseLocationInWindow.y)/2);
}
}
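One thing the snippet above leaves implicit: zoomLevel needs to start at 1.0, and isZooming presumably gets reset somewhere not shown, otherwise only the first magnify event is ever handled. A minimal sketch of one way to re-arm it, assuming the event's gesture phase is available (10.7+); the rest of the method stays exactly as above:
-(void)magnifyWithEvent:(NSEvent *)event{
    [super magnifyWithEvent:event];
    if (event.phase == NSEventPhaseEnded || event.phase == NSEventPhaseCancelled) {
        isZooming = NO;   // allow the next pinch gesture to zoom again
        return;
    }
    // ... zoom-in / zoom-out dispatch exactly as in the version above ...
}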

Drawing board/grid with Cocoa

I'm writing a small board game for Mac OS X using Cocoa. The actual grid is drawn as follows:
- (void)drawRect:(NSRect)rect
{
for (int x=0; x < GRIDSIZE; x++) {
for (int y=0; y < GRIDSIZE; y++) {
float ix = x*cellWidth;
float iy = y*cellHeight;
NSColor *color = (x % 2 == y % 2) ? boardColors[0] : boardColors[1];
[color set];
NSRect r = NSMakeRect(ix, iy, cellWidth, cellHeight);
NSBezierPath *path = [NSBezierPath bezierPath];
[path appendBezierPathWithRect:r];
[path fill];
[path stroke];
}
}
}
This works great, except that I see some color errors between the tiles. I guess this is due to some antialiasing or similar. See the screenshots below (hopefully you can also see the same problems; it's some black lines where the tiles overlap):
Therefore I have these questions:
Is there any way I can remove these graphical artefacts while still maintaining a resizable/scalable board?
Should I rather use some other graphical library like Core Graphics or OpenGL?
Update:
const int GRIDSIZE = 16;
cellWidth = (frame.size.width / GRIDSIZE);
cellHeight = (frame.size.height / GRIDSIZE);
If you want crisp rectangles you need to align coordinates so that they match the underlying pixels. NSView has a method for this purpose: - (NSRect)backingAlignedRect:(NSRect)aRect options:(NSAlignmentOptions)options. Here's a complete example for drawing the grid:
const NSInteger GRIDSIZE = 16;
- (void)drawRect:(NSRect)dirtyRect {
for (NSUInteger x = 0; x < GRIDSIZE; x++) {
for (NSUInteger y = 0; y < GRIDSIZE; y++) {
NSColor *color = (x % 2 == y % 2) ? [NSColor greenColor] : [NSColor redColor];
[color set];
[NSBezierPath fillRect:[self rectOfCellAtColumn:x row:y]];
}
}
}
- (NSRect)rectOfCellAtColumn:(NSUInteger)column row:(NSUInteger)row {
NSRect frame = [self frame];
CGFloat cellWidth = frame.size.width / GRIDSIZE;
CGFloat cellHeight = frame.size.height / GRIDSIZE;
CGFloat x = column * cellWidth;
CGFloat y = row * cellHeight;
NSRect rect = NSMakeRect(x, y, cellWidth, cellHeight);
NSAlignmentOptions alignOpts = NSAlignMinXNearest | NSAlignMinYNearest |
NSAlignMaxXNearest | NSAlignMaxYNearest ;
return [self backingAlignedRect:rect options:alignOpts];
}
Note that you don't need stroke to draw a game board. To draw pixel-aligned strokes you need to remember that coordinates in Cocoa actually point to the lower left corners of pixels. To get crisp lines you need to offset coordinates by half a pixel from integral coordinates so that they point to the centers of pixels. For example, to draw a crisp border for a grid cell you can do this:
NSRect rect = NSInsetRect([self rectOfCellAtColumn:column row:row], 0.5, 0.5);
[NSBezierPath strokeRect:rect];
First, make sure your stroke color is not black or gray. (You're setting color but is that stroke or fill color? I can never remember.)
Second, what happens if you simply fill with green, then draw red squares over it, or vice-versa?
There are other ways to do what you want, too. You can use the CICheckerboardGenerator to make your background instead.
Alternately, you could also use a CGBitmapContext that you filled by hand.
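For reference, here's a rough Objective-C sketch of the CICheckerboardGenerator idea (a Swift version appears further down). The filter's parameter keys ("inputWidth", "inputColor0", ...) are its documented inputs; the colors below are placeholders for your boardColors, and GRIDSIZE is the constant from the question:
- (void)drawRect:(NSRect)dirtyRect
{
    CIFilter *checker = [CIFilter filterWithName:@"CICheckerboardGenerator"];
    [checker setDefaults];
    [checker setValue:[CIVector vectorWithX:0 Y:0] forKey:@"inputCenter"];
    [checker setValue:@(self.bounds.size.width / GRIDSIZE) forKey:@"inputWidth"];
    [checker setValue:[CIColor colorWithRed:0.0 green:0.6 blue:0.0 alpha:1.0] forKey:@"inputColor0"];
    [checker setValue:[CIColor colorWithRed:0.9 green:0.9 blue:0.9 alpha:1.0] forKey:@"inputColor1"];
    // The generator's output has infinite extent; fromRect: picks out just the part we need.
    CIImage *pattern = [checker valueForKey:@"outputImage"];
    [pattern drawInRect:self.bounds
               fromRect:self.bounds
              operation:NSCompositeSourceOver
               fraction:1.0];
}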
First of all, if you don't actually want your rectangles to have a border, you shouldn't call [path stroke].
Second, creating a bezier path for filling a rectangle is overkill. You can do the same with NSRectFill(r). This function is probably more efficient and, I suspect, less prone to introducing rounding errors into your floats – I assume you realize that your floats must not have a fractional part if you want pixel-precise rectangles. I believe that if the width and height of your view are a multiple of GRIDSIZE and you use NSRectFill, the artifacts should go away.
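As a quick sketch of that suggestion, the body of the inner loop from the question reduces to:
NSColor *color = (x % 2 == y % 2) ? boardColors[0] : boardColors[1];
[color set];
NSRectFill(NSMakeRect(x * cellWidth, y * cellHeight, cellWidth, cellHeight));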
Third, there's the obvious question as to how you want your board drawn if the view's width and height are not a multiple of GRIDSIZE. This is of course not an issue if the size of your view is fixed and a multiple of that constant. If it is not, however, you first have to clarify how you want the possible remainder of the width or height handled. Should there be a border? Should the last cell in the row or column take up the remainder? Or should it rather be distributed equally among the cells of the rows or columns? You might have to accept cells of varying width and/or height. What the best solution for your problem is, depends on your exact requirements.
You might also want to look into other ways of drawing a checkerboard, e.g. using CICheckerboardGenerator or creating a pattern color with an image ([NSColor colorWithPatternImage:yourImage]) and then filling the whole view with it.
There's also the possibility of (temporarily) turning off anti-aliasing. To do that, add the following line to the beginning of your drawing method:
[[NSGraphicsContext currentContext] setShouldAntialias:NO];
My last observation is about your general approach. If your game is going to have more complicated graphics and animations, e.g. animated movement of pieces, you might be better off using OpenGL.
As of iOS 6, you can generate a checkerboard pattern using CICheckerboardGenerator.
You'll want to guard against the force unwraps in here, but here's the basic implementation:
var checkerboardImage: UIImage? {
let filter = CIFilter(name: "CICheckerboardGenerator")!
let width = NSNumber(value: Float(viewSize.width/16))
let center = CIVector(cgPoint: .zero)
let darkColor = CIColor.red
let lightColor = CIColor.green
let sharpness = NSNumber(value: 1.0)
filter.setDefaults()
filter.setValue(width, forKey: "inputWidth")
filter.setValue(center, forKey: "inputCenter")
filter.setValue(darkColor, forKey: "inputColor0")
filter.setValue(lightColor, forKey: "inputColor1")
filter.setValue(sharpness, forKey: "inputSharpness")
let context = CIContext(options: nil)
let cgImage = context.createCGImage(filter.outputImage!, from: viewSize)
let uiImage = UIImage(cgImage: cgImage!, scale: UIScreen.main.scale, orientation: UIImage.Orientation.up)
return uiImage
}
Apple Developer Docs
Your squares overlap. ix + cellWidth is the same coordinate as ix in the next iteration of the loop.
You can fix this by setting the stroke color explicitly to transparent, or by not calling stroke.
[color set];
[[NSColor clearColor] setStroke];
or
[path fill];
// not [path stroke];

How do I avoid interpolation artifacts when drawing NSImage into a different size rect?

My end goal is to fill an arbitrarily sized rectangle with an NSImage. I want to:
Fill the entire rectangle
Preserve the aspect ratio of the image
Show as much as possible of the image while maintaining 1) and 2)
When not all the image can be shown, crop toward the center.
This demonstrates what I'm trying to do. The original image of the boat at the top is drawn into various sized rectangles below.
Okay, so far so good. I added a category to NSImage to do this.
@implementation NSImage (Fill)
/**
* Crops source to best fit the destination
*
* destRect is the rect in which we want to draw the image
* sourceRect is the rect of the image
*/
-(NSRect)scaleAspectFillRect:(NSRect)destRect fromRect:(NSRect)sourceRect
{
NSSize sourceSize = sourceRect.size;
NSSize destSize = destRect.size;
CGFloat sourceAspect = sourceSize.width / sourceSize.height;
CGFloat destAspect = destSize.width / destSize.height;
NSRect cropRect = NSZeroRect;
if (sourceAspect > destAspect) { // source is proportionally wider than dest
cropRect.size.height = sourceSize.height;
cropRect.size.width = cropRect.size.height * destAspect;
cropRect.origin.x = (sourceSize.width - cropRect.size.width) / 2;
} else { // dest is proportionally wider than source (or they are equal)
cropRect.size.width = sourceSize.width;
cropRect.size.height = cropRect.size.width / destAspect;
cropRect.origin.y = (sourceSize.height - cropRect.size.height) / 2;
}
return cropRect;
}
- (void)drawScaledAspectFilledInRect:(NSRect)rect
{
NSRect imageRect = NSMakeRect(0, 0, [self size].width, [self size].height);
NSRect sourceRect = [self scaleAspectFillRect:rect fromRect:imageRect];
[[NSGraphicsContext currentContext]
setImageInterpolation:NSImageInterpolationHigh];
[self drawInRect:rect
fromRect:sourceRect
operation:NSCompositeSourceOver
fraction:1.0 respectFlipped:YES hints:nil];
}
@end
When I want to draw the image into a certain rectangle I call:
[myImage drawScaledAspectFilledInRect:onScreenRect];
Works really well except for one problem. At certain sizes the image looks quite blurry:
My first thought was that I need to draw on integral pixels, so I used NSIntegralRect() before drawing. No luck.
As I thought about it I figured that it's probably a result of the interpolation. To draw from the larger image to the smaller draw rect NSImage has to interpolate. The blurry images are likely just a case where the values don't map very well and we end up with some undesirable artifacts that can't be avoided.
So, the question is this: How do I choose an optimal rect that avoids those artifacts? I can adjust either the draw rect or the crop rect before drawing to avoid this, but I don't know how or when to adjust them.

NSSlider NSSliderCell clipping custom knob

I am creating a custom NSSlider with a custom NSSliderCell. All is working beautifully, other than the knob. When I drag it to the max value, the knob gets clipped; I can only see 50% of the knob image.
When assigning my custom NSSliderCell I am setting the knobThickness to the width of the image I am using as the knob. I assumed (I guess wrongly) that it would take that into account and stop it from clipping?
Any ideas what I am doing wrong? The slider only hits maxValue when the knob is clipped at 50%, so it's not travelling without adding any value.
- (void)drawKnob:(NSRect)knobRect {
NSImage * knob = _knobOff;
knobRectVar = knobRect;
[[self controlView] lockFocus];
[knob
compositeToPoint:
NSMakePoint(knobRect.origin.x+4,knobRect.origin.y+knobRect.size.height+20)
operation:NSCompositeSourceOver];
[[self controlView] unlockFocus];
}
- (void)drawBarInside:(NSRect)rect flipped:(BOOL)flipped {
rect.size.height = 8;
[[self controlView] lockFocus];
NSImage *leftCurve = [NSImage imageNamed:@"customSliderLeft"];
[leftCurve drawInRect:NSMakeRect(5, 25, 8, 8) fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1];
NSRect leftRect = rect;
leftRect.origin.x=13;
leftRect.origin.y=25;
leftRect.size.width = knobRectVar.origin.x + (knobRectVar.size.width/2);
[leftBarImage setSize:leftRect.size];
[leftBarImage drawInRect:leftRect fromRect: NSZeroRect operation: NSCompositeSourceOver fraction:1];
[[self controlView] unlockFocus];
}
NSSlider expects specific knob image sizes for each control size:
NSRegularControlSize: 21x21
NSSmallControlSize: 15x15
NSMiniControlSize: 12x12
Unfortunately, the height of your knob image must not exceed these values, but its width may be longer. If it is, you can compute an x position for the knob like this:
CGFloat newOriginX = knobRect.origin.x *
(_barRect.size.width - (_knobImage.size.width - knobRect.size.width)) / _barRect.size.width;
Where _barRect is a cellFrame of your bar background from:
- (void)drawBarInside:(NSRect)cellFrame flipped:(BOOL)flipped;
I've created a simple solution for the custom NSSlider. Follow this link
https://github.com/Doshipak/LADSlider
You can override [NSSliderCell knobRectFlipped:] in addition to [NSSliderCell drawKnob:].
Here is my solution:
- (void)drawKnob:(NSRect)rect
{
NSImage *drawImage = [self knobImage];
NSRect drawRect = [self knobRectFlipped:[self.controlView isFlipped]];
CGFloat fraction = 1.0;
[drawImage drawInRect:drawRect fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:fraction respectFlipped:YES hints:nil];
}
- (NSRect)knobRectFlipped:(BOOL)flipped
{
NSImage *drawImage = [self knobImage];
NSRect drawRect = [super knobRectFlipped:flipped];
drawRect.size = drawImage.size;
NSRect bounds = self.controlView.bounds;
bounds = NSInsetRect(bounds, ceil(drawRect.size.width / 2), 0);
CGFloat val = MIN(self.maxValue, MAX(self.minValue, self.doubleValue));
val = (val - self.minValue) / (self.maxValue - self.minValue);
CGFloat x = val * NSWidth(bounds) + NSMinX(bounds);
drawRect = NSOffsetRect(drawRect, x - NSMidX(drawRect) + 1, 0);
return drawRect;
}
I know it's been a while, but I ran into this issue myself and found a quick-and-dirty workaround.
I couldn't get around the root cause of this, but it seems that NSSlider expects a square knob image.
The easiest way I found was to set the range of your slider to, for example, 0.0f - 110.0f.
Then, in the valueChanged target method you assigned, check whether the value is > 100.0f and set it back to that value if it is (see the sketch below). I created a background image with some alpha-only pixels on the right side so your background isn't wider than the actual fader range.
Quick and dirty, but it doesn't require a lot of code and works pretty well.
Hope this helps others stumbling upon the same issue.
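A minimal sketch of that clamping step, assuming the slider's action is wired to a method like the hypothetical valueChanged: below and using the example numbers above:
- (IBAction)valueChanged:(NSSlider *)sender
{
    // The slider's range is 0 - 110 so the knob never reaches the clipped edge,
    // but the logical value is never allowed to exceed 100.
    if (sender.doubleValue > 100.0) {
        sender.doubleValue = 100.0;
    }
    // ... use sender.doubleValue as the real value ...
}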
You don’t need to lock and unlock focus on the controlView from inside cell drawing methods. These methods are only called by your controlView’s -drawRect: method, which is called with the view’s focus locked.
Why are you adding 20 points to the Y coordinate the knob image is composited to in -drawKnob?
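To make that concrete, a drawKnob: along these lines needs no focus locking at all. This is only a sketch, reusing the _knobOff ivar from the question rather than reproducing its layout math:
- (void)drawKnob:(NSRect)knobRect
{
    // drawKnob: is called from the control view's own -drawRect:,
    // so focus is already locked; just draw the image.
    [_knobOff drawInRect:knobRect
                fromRect:NSZeroRect
               operation:NSCompositeSourceOver
                fraction:1.0
          respectFlipped:YES
                   hints:nil];
}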

Rotating an NSImage with or without NSAffineTransform

I've got an NSImage being drawn on a subclass of NSView. In a previous question, I was helped to draw it upright in the upper left corner. Now, I want to be able to rotate the image. I have a button that increments a rotation variable %4, and then I multiply this by -90 to find the rotation angle. I then use an NSAffineTransform to rotate the image, and translate it back onto the screen. However, it doesn't seem to be working the way I expect it to. I have two problems.
1) When I rotate, the portion of the image that is in an area that wasn't in the previous frame gets drawn correctly. However, the portion that was previously there remains as the original image. This means that after several rotations, there's a square of the original upright image and then a rectangle below or to the left of the rotated image.
2) When I re-size the window, the image redraws (as it should) in the original upright orientation (as it should not).
Basically, I'm coming to the conclusion that NSAffineTransform doesn't work the way I think it does. Is there some other way to rotate (and translate) the image? thanks
large code chunk: (code between "WORKS" and "end works" is working code to just draw the image. It came from my previous question).
[OLD CODE DELETED, replaced lower with new code]
thanks
EDIT: A little more research finds that instead of using an NSAffineTransform I can rotate the view. This seems to work better. However, I can't get the translation to work quite right. New code below (original code deleted to save space):
- (void)drawRect:(NSRect)rect
{
//WORKS
NSRect frame;
frame.origin = NSZeroPoint;
frame.size = [image size];
// end works
float deltaX, deltaY, height, width;
// if the rotate button was just clicked, we need to rotate by 90 deg, otherwise not
double rotateDeg = justRot ? -90.0 : 0;
justRot = NO;
// rotate
deltaX = 0;
deltaY = 0;
// translate to account for rotation
height = [image size].height;
width = [image size].width;
switch (rotation)
{
case 0:
NSLog(#"No rotation ");
break;
case 1:
deltaX += width;
break;
case 2:
deltaX += width;
deltaY += height;
break;
case 3:
deltaY += height;
break;
}
NSPoint orig;
if (rotation != 0)
{
orig.x = -100;
orig.y = -100;
}
[self rotateByAngle: rotateDeg];
NSLog(#"orig %f %f", orig.x, orig.y);
// WORKS
[self setFrame: frame];
[image drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1];
// end works
[self translateOriginToPoint: orig];
}
OK, for the record, in case anyone else has this question, here's the answer I've come up with:
The frame rotation stays in drawRect:, everything else moves to the rotate method:
-(void)rotate
{
float deltaX, deltaY, height, width;
rotation = (rotation +1) % 4 ;
deltaX = 0;
deltaY = 0;
// translate to account for rotation
height = [image size].height;
width = [image size].width;
switch (rotation)
{
case 0:
NSLog(#"No rotation ");
deltaY -= width;
break;
case 1:
deltaY -= height;
break;
case 2:
deltaX += height-width;
deltaY -= height ;
break;
case 3:
deltaX += height-width;
deltaY -= width;
break;
}
NSPoint orig;
orig.x = deltaX;
orig.y = deltaY;
[self rotateByAngle: 90.0];
[self translateOriginToPoint: orig];
[self setNeedsDisplay:YES];
}
I'm guessing that you'd like to rotate the image about its center. If so, you need to translate the origin of the affine transform to the center of the image, then rotate, then translate back. Now if you have artifacts of the previous position left over, it's probably because you didn't call -[NSView setNeedsDisplayInRect:] with the correct rectangles. Remember that you need to invalidate both the image's previous and new positions.
Update: I just noticed that you're changing the view's frame from within drawRect:. That's too late to change the frame. Instead, draw the image in the correct location without changing the view's frame.
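For reference, the translate-rotate-translate pattern described above looks roughly like this inside drawRect:. It's a sketch only, assuming image and rotation are the same ivars as in the question and that the view's frame is already the right size:
- (void)drawRect:(NSRect)rect
{
    NSSize size = [image size];
    NSAffineTransform *transform = [NSAffineTransform transform];
    // Move the origin to the image's center, rotate, then move back,
    // so the rotation happens about the center instead of the lower left.
    [transform translateXBy:size.width / 2.0 yBy:size.height / 2.0];
    [transform rotateByDegrees:-90.0 * rotation];
    [transform translateXBy:-size.width / 2.0 yBy:-size.height / 2.0];
    [NSGraphicsContext saveGraphicsState];
    [transform concat];
    [image drawAtPoint:NSZeroPoint
              fromRect:NSZeroRect
             operation:NSCompositeSourceOver
              fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];
}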
Have a look at my old Transformed Image sample code at: http://developer.apple.com/mac/library/samplecode/Transformed_Image/index.html