Scaling view depending on x coordinate while being dragged - objective-c

I have been trying to implement a UI feature which I've seen in a few apps which use cards to display information. My current view controller looks like this:
and users are able to drag the card along the x axis to the left and right. Dragging to the right does nothing to the scale of the card (it simply changes position), but when the user drags it to the left I want to slowly decrease its scale depending on its x coordinate (e.g. the scale is smallest when the card is furthest to the left, growing from that point until the original size is reached). If the card is dragged far enough to the left it fades out, but if the user does not drag it far enough it scales back up and moves back into the middle. Code I've tried so far:
- (void)handlePanImage:(UIPanGestureRecognizer *)sender
{
    static CGPoint originalCenter;
    if (sender.state == UIGestureRecognizerStateBegan)
    {
        originalCenter = sender.view.center;
        sender.view.alpha = 0.8;
        [sender.view.superview bringSubviewToFront:sender.view];
    }
    else if (sender.state == UIGestureRecognizerStateChanged)
    {
        CGPoint translation = [sender translationInView:self.view];
        NSLog(@"%f x %f y", translation.x, translation.y);
        // yOfView and i are ivars; i starts at 1.0
        sender.view.center = CGPointMake(originalCenter.x + translation.x, yOfView);
        CGAffineTransform transform = sender.view.transform;
        i -= 0.001;
        transform = CGAffineTransformScale(transform, i, i);
        //transform = CGAffineTransformRotate(transform, self.rotationAngle);
        sender.view.transform = transform;
    }
    else if (sender.state == UIGestureRecognizerStateEnded || sender.state == UIGestureRecognizerStateCancelled || sender.state == UIGestureRecognizerStateFailed)
    {
        if (sender.view.center.x > 0) {
            [UIView animateWithDuration:0.2 animations:^{
                CGRect rect = [sender.view frame];
                rect.origin.x = ([self.view frame].size.width / 2) - _protoypeView.frame.size.width / 2;
                rect.size.height = originalHeight;
                rect.size.width = originalWidth;
                [sender.view setFrame:rect];
                i = 1.0;
            }];
        }
        [UIView animateWithDuration:0.1 animations:^{
            sender.view.alpha = 1.0;
        }];
    }
}
This is very buggy and doesn't work properly. I also tried changing the scale according to the translation:
else if (sender.state == UIGestureRecognizerStateChanged)
{
    CGPoint translation = [sender translationInView:self.view];
    NSLog(@"%f x %f y", translation.x, translation.y);
    sender.view.center = CGPointMake(originalCenter.x + translation.x, yOfView);
    CGAffineTransform transform = sender.view.transform;
    transform = CGAffineTransformScale(transform, translation.x / 100, translation.x / 100);
    //transform = CGAffineTransformRotate(transform, self.rotationAngle);
    sender.view.transform = transform;
}
but the scale either gets too big or too small. Any help would be greatly appreciated :)

In the last piece of code you tried, you're doing a couple of things wrong. The scale transform is cumulative because you never set the pan gesture recognizer's translation back to zero. Also, if you look at the math in your transform, you'll see that a small movement of, say, -1 point would scale the view to 0.01 times its original size. You obviously don't want that. You need to add that small negative number to 1, so that an initial -1 point move scales to 0.99. Setting the panner's translation back to zero means the center calculation must use sender.view.center.x rather than originalCenter.x. You also need an if statement to check whether the center is left or right of its starting position, so you know whether to apply the scaling transform. Something like this:
- (void)handlePan:(UIPanGestureRecognizer *)sender {
    static CGPoint originalCenter; // as in the question's handler
    if (sender.state == UIGestureRecognizerStateBegan) {
        originalCenter = sender.view.center;
        sender.view.alpha = 0.8;
        [sender.view.superview bringSubviewToFront:sender.view];
    } else if (sender.state == UIGestureRecognizerStateChanged) {
        CGPoint translation = [sender translationInView:self.view];
        sender.view.center = CGPointMake(sender.view.center.x + translation.x, sender.view.center.y);
        if (sender.view.center.x < originalCenter.x) {
            CGAffineTransform transform = sender.view.transform;
            transform = CGAffineTransformScale(transform, 1 + translation.x / 100.0, 1 + translation.x / 100.0);
            sender.view.transform = transform;
        }
        [sender setTranslation:CGPointZero inView:self.view];
    }
}
This doesn't take care of animating the view back, or fading out the view, but it should get you most of the way there.
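For completeness, here is a minimal sketch of an ended-state branch that could be added to the handler above; the 100-point dismissal threshold and the animation durations are assumptions, not values from the question:

else if (sender.state == UIGestureRecognizerStateEnded ||
         sender.state == UIGestureRecognizerStateCancelled ||
         sender.state == UIGestureRecognizerStateFailed) {
    if (sender.view.center.x < originalCenter.x - 100.0) {
        // Dragged far enough left: fade the card out.
        [UIView animateWithDuration:0.2 animations:^{
            sender.view.alpha = 0.0;
        }];
    } else {
        // Not far enough: restore scale, position, and alpha.
        [UIView animateWithDuration:0.2 animations:^{
            sender.view.transform = CGAffineTransformIdentity;
            sender.view.center = originalCenter;
            sender.view.alpha = 1.0;
        }];
    }
}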

Related

Restoring CGRect with Transform values from NSDictionary

I need to be able to save the frame of a UIImageView after transformation. In the example below, the original frame is the one the image has when it is added to the superview. The user then has the ability to rotate, scale and pan the image anywhere in the gray area (the superview).
I take these images and save their coordinates to an NSDictionary (which is not the problem). The problem is that if I read the frame after the rotation, the frame is completely off. I need to store the new frame, with its transform, in the dictionary, so that when the user comes back to this view and the images are loaded, the frames and saved transformations are just as they intended.
Panning
CGPoint translation = [gestureRecognizer translationInView:[object superview]];
if (CGRectContainsPoint(self.frame, CGPointMake([object center].x + translation.x, [object center].y + translation.y))) {
    [object setCenter:CGPointMake([object center].x + translation.x, [object center].y + translation.y)];
    [gestureRecognizer setTranslation:CGPointZero inView:[object superview]];
}
Rotating
self.transformRotation = CGAffineTransformRotate([[gestureRecognizer view] transform], [gestureRecognizer rotation]);
[gestureRecognizer view].transform = self.transformRotation;
if ([gestureRecognizer rotation] != 0) {
    self.rotate = [gestureRecognizer rotation];
}
[gestureRecognizer setRotation:0];
Scaling
self.transformScale = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
[gestureRecognizer view].transform = self.transformScale;
if ([gestureRecognizer scale] != 1) {
    self.scale = [gestureRecognizer scale];
}
[gestureRecognizer setScale:1];
Using the center point of the view keeps the image closer to its original location the first time it is loaded after saving. Each time it is saved after that, the position stays the same, because the transform did not change during that session.
- (CGPoint)centerOnCanvas {
    CGPoint originalCenter = self.center;
    return originalCenter;
}
- (CGRect)frameOnCanvas {
    return CGRectMake(self.preTransformedFrame.origin.x,
                      self.preTransformedFrame.origin.y,
                      self.preTransformedFrame.size.width,
                      self.preTransformedFrame.size.height);
}
- (CGRect)preTransformedFrame {
    CGAffineTransform currentTransform = self.transform;
    self.transform = CGAffineTransformIdentity;
    CGRect originalFrame = self.bounds;
    self.transform = currentTransform;
    return originalFrame;
}
UPDATE: Slightly off and a little larger than the original
According to Apple's documentation for UIView's transform property:
When the value of this property is anything other than the identity transform, the value in the frame property is undefined and should be ignored.
That's why you can't get a meaningful frame after changing the transform.
As I understand it, after rotating, scaling or panning you want to save the current state so you can restore it later. In my opinion, all you need to do is save the transform and center of the UIImageView each time they change. You don't need the frame in this case.
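As an aside, if you ever do need the on-screen bounding box of a transformed view, here is a minimal sketch of deriving it from bounds, center and transform instead of reading frame (where view stands for any transformed UIView):

// Bounding box of the transformed view, computed without reading .frame.
// Assumes the default layer anchorPoint (0.5, 0.5), so the box is centered on center.
CGRect box = CGRectApplyAffineTransform(view.bounds, view.transform);
box.origin.x = view.center.x - box.size.width / 2.0;
box.origin.y = view.center.y - box.size.height / 2.0;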
For example, say _transformTarget is your UIImageView; to save its current state you can use the method below. (Instead of saving to an NSDictionary, I use NSUserDefaults; you can change it to an NSDictionary.)
- (void)saveCurrentState {
    [[NSUserDefaults standardUserDefaults] setObject:NSStringFromCGAffineTransform(_transformTarget.transform) forKey:@"_transformTarget.transform"];
    [[NSUserDefaults standardUserDefaults] setObject:NSStringFromCGPoint(_transformTarget.center) forKey:@"_transformTarget.center"];
}
At the end of each gesture-handling method, save the current state by calling saveCurrentState.
- (void)handlePanGesture:(UIPanGestureRecognizer *)gesture {
    CGPoint translation = [gesture translationInView:self.view];
    CGPoint newCenter = CGPointMake(_transformTarget.center.x + translation.x, _transformTarget.center.y + translation.y);
    if (CGRectContainsPoint(self.view.frame, newCenter)) {
        _transformTarget.center = newCenter;
        [gesture setTranslation:CGPointZero inView:self.view];
        [self saveCurrentState]; // Save current state when center is changed
    }
}
- (void)handleRotationGesture:(UIRotationGestureRecognizer *)gesture {
    _transformTarget.transform = CGAffineTransformRotate(_transformTarget.transform, gesture.rotation);
    gesture.rotation = 0;
    [self saveCurrentState]; // Save current state when transform is changed
}
- (void)handlePinchGesture:(UIPinchGestureRecognizer *)gesture {
    _transformTarget.transform = CGAffineTransformScale(_transformTarget.transform, gesture.scale, gesture.scale);
    gesture.scale = 1;
    [self saveCurrentState]; // Save current state when transform is changed
}
Now the information about the UIImageView is saved every time it changes. The next time the user comes back, read the center and transform from your dictionary and set them again.
- (void)restoreFromSavedState {
    NSString *transformString = [[NSUserDefaults standardUserDefaults] objectForKey:@"_transformTarget.transform"];
    CGAffineTransform transform = transformString ? CGAffineTransformFromString(transformString) : CGAffineTransformIdentity;
    NSString *centerString = [[NSUserDefaults standardUserDefaults] objectForKey:@"_transformTarget.center"];
    CGPoint center = centerString ? CGPointFromString(centerString) : self.view.center;
    _transformTarget.center = center;
    _transformTarget.transform = transform;
}
Result
For more detail, you can take a look at my sample repo
https://github.com/trungducc/stackoverflow/tree/restore-view-transform

Rotate UI Elements programmatically after shouldAutorotate

I have a viewController that should not "autorotate" but should instead manually rotate specific GUI elements. The reason is that I use the front camera for taking a picture, and I don't want the UIView that contains my UIImageView to be rotated.
My code looks like this:
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation {
    [self performSelector:@selector(refreshView) withObject:nil afterDelay:1.0];
    return NO; // don't autorotate!
}
and:
- (void)refreshView {
    UIDeviceOrientation actualDeviceOrientation = [[UIDevice currentDevice] orientation];
    float rotation = 0; // UIDeviceOrientationPortrait
    if (actualDeviceOrientation == UIDeviceOrientationPortraitUpsideDown) rotation = 180;
    else if (actualDeviceOrientation == UIDeviceOrientationLandscapeLeft) rotation = 90;
    else if (actualDeviceOrientation == UIDeviceOrientationLandscapeRight) rotation = 270;
    float rotationRadians = rotation * M_PI / 180;
    [UIView animateWithDuration:0.4
                     animations:^(void) {
                         self.labelPrize.center = self.prizeView.center;
                         self.prizeView.transform = CGAffineTransformMakeRotation(rotationRadians);
                     }
                     completion:^(BOOL finished) {}];
}
"labelPrize" is the label with the caption "20 EURO" seen in the screenshots below; "prizeView" is its container. prizeView is the only GUI element with constraints defined, which look like this:
Just for clarification, here's what "labelPrize" looks like:
And finally, here's what the app produces:
This is not what I want to achieve. I'd like "prizeView"/"labelPrize" to be:
- always aligned to the horizon
- always in the exact center of the screen
Also worth mentioning: I'd like to add labels above (header) and a button below ("okay") my "labelPrize" and rotate/position them as well in refreshView().
Thanks for any help!
There are two big problems here. Let's take them one at a time.
(1)
self.labelPrize.center = self.prizeView.center;
Think about it. labelPrize is a subview of prizeView. So you are mixing apples with oranges as far as coordinate systems go: labelPrize.center is measured with respect to prizeView.bounds, but prizeView.center is measured with respect to self.view.bounds. To keep the center of labelPrize at the center of prizeView, position it at the midpoint of prizeView's bounds. (However, you should not have to move it at all because the transform transforms the bounds.)
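If you do need to set it explicitly, that midpoint positioning might look like this (a short sketch using the question's property names):

// Position labelPrize at the midpoint of its superview's bounds,
// i.e. in prizeView's own coordinate system.
self.labelPrize.center = CGPointMake(CGRectGetMidX(self.prizeView.bounds),
                                     CGRectGetMidY(self.prizeView.bounds));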
(2)
Rotation view transforms and auto layout are deadly enemies, as I explain here. That is why rotating the transform of prizeView seems to shift its position as well. My answer there gives you several possible workarounds.
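One such workaround, sketched here under the assumption that you can add a hypothetical plain container view (prizeContainer) that the constraints position, while prizeView itself stays unconstrained:

// Auto layout positions prizeContainer; the rotation is applied only to
// prizeView, which is added in code and has no constraints of its own,
// so the layout engine never fights the transform.
self.prizeView.center = CGPointMake(CGRectGetMidX(self.prizeContainer.bounds),
                                    CGRectGetMidY(self.prizeContainer.bounds));
self.prizeView.transform = CGAffineTransformMakeRotation(rotationRadians);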

UIView hitTest:withEvent: and pointInside:withEvent:

I'm working with the Mixare AR SDK for iOS and I need to solve some bugs that happen; one of them is showing the information of a POI when the POI's view is tapped.
Prelude:
Mixare has an overlay UIView within which MarkerView views are placed. The MarkerView views move around the screen to track the geolocated POIs, and each one has two subviews, a UIImageView and a UILabel.
Issue:
Now, say there are 3 visible POIs on the screen, so there are 3 MarkerViews as overlay subviews. If you touch anywhere in the overlay, an info view associated with a random one of the visible POIs is shown.
Desired:
I want the associated POI's info to be shown only when the user taps a MarkerView.
Let's get to work. I've seen that MarkerView inherits from UIView and implements hitTest:withEvent:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    viewTouched = (MarkerView *)[super hitTest:point withEvent:event];
    return self;
}
I've put a breakpoint in it, and hitTest is called once for each visible MarkerView, but loadedView is always null, so I can't work with it. So I tried to check whether the hit point is inside the MarkerView's frame by implementing pointInside:withEvent: this way:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    NSLog(@"ClassName: %@", [[self class] description]);
    NSLog(@"Point Inside: %f, %f", point.x, point.y);
    NSLog(@"Frame x: %f y: %f width:%f height:%f", self.frame.origin.x, self.frame.origin.y, self.frame.size.width, self.frame.size.height);
    if (CGRectContainsPoint(self.frame, point))
        return YES;
    else
        return NO;
}
But this function always returns NO, even when I touch the MarkerView. When I check the log I see that the X and Y point values sometimes have negative values, and the width and height of the view are very small, 0.00022 or similar, instead of the 100 x 150 I set for the MarkerView frame at initialization.
Here is an extract of my log, in which you can see the class name, the point and the MarkerView frame values:
ClassName: MarkerView
2011-12-29 13:20:32.679 paisromanico[2996:707] Point Inside: 105.224899, 49.049023
2011-12-29 13:20:32.683 paisromanico[2996:707] Frame x: 187.568573 y: 245.735138 width:0.021862 height:0.016427
I'm very lost with this issue so any help will be welcome. Thanks in advance for any help provided and I'm sorry about this brick :(
Edit:
At last I've found that the problem is not in hitTest:withEvent: or pointInside:withEvent:. The problem is the transform applied to the MarkerView for scaling based on distance and rotating the view; if I comment out any code related to this, the Mixare AR SDK works fine. I mean, the info view is shown correctly if you touch a marker, and nothing happens if you touch any other place on the screen.
So, for the moment, I haven't solved the problem, but I applied a patch by removing the transform-related code in the AugmentedViewController.m class's - (void)updateLocations:(NSTimer *)timer method:
- (void)updateLocations:(NSTimer *)timer {
    // update locations!
    if (!ar_coordinateViews || ar_coordinateViews.count == 0) {
        return;
    }
    int index = 0;
    NSMutableArray *radarPointValues = [[NSMutableArray alloc] initWithCapacity:[ar_coordinates count]];
    for (PoiItem *item in ar_coordinates) {
        MarkerView *viewToDraw = [ar_coordinateViews objectAtIndex:index];
        viewToDraw.tag = index;
        if ([self viewportContainsCoordinate:item]) {
            CGPoint loc = [self pointInView:ar_overlayView forCoordinate:item];
            CGFloat scaleFactor = 1.5;
            if (self.scaleViewsBasedOnDistance) {
                scaleFactor = 1.0 - self.minimumScaleFactor * (item.radialDistance / self.maximumScaleDistance);
            }
            float width = viewToDraw.bounds.size.width;   // * scaleFactor;
            float height = viewToDraw.bounds.size.height; // * scaleFactor;
            viewToDraw.frame = CGRectMake(loc.x - width / 2.0, loc.y - height / 2.0, width, height);
            /*
            CATransform3D transform = CATransform3DIdentity;
            // set the scale if it needs it.
            if (self.scaleViewsBasedOnDistance) {
                // scale the perspective transform if we have one.
                transform = CATransform3DScale(transform, scaleFactor, scaleFactor, scaleFactor);
            }
            if (self.rotateViewsBasedOnPerspective) {
                transform.m34 = 1.0 / 300.0;
                double itemAzimuth = item.azimuth;
                double centerAzimuth = self.centerCoordinate.azimuth;
                if (itemAzimuth - centerAzimuth > M_PI) centerAzimuth += 2 * M_PI;
                if (itemAzimuth - centerAzimuth < -M_PI) itemAzimuth += 2 * M_PI;
                double angleDifference = itemAzimuth - centerAzimuth;
                transform = CATransform3DRotate(transform, self.maximumRotationAngle * angleDifference / (VIEWPORT_HEIGHT_RADIANS / 2.0), 0, 1, 0);
            }
            viewToDraw.layer.transform = transform;
            */
            // if we don't have a superview, set it up.
            if (!(viewToDraw.superview)) {
                [ar_overlayView addSubview:viewToDraw];
                [ar_overlayView sendSubviewToBack:viewToDraw];
            }
        } else {
            [viewToDraw removeFromSuperview];
            viewToDraw.transform = CGAffineTransformIdentity;
        }
        [radarPointValues addObject:item];
        index++;
    }
    float radius = [[[NSUserDefaults standardUserDefaults] objectForKey:@"radius"] floatValue];
    if (radius <= 0 || radius > 100) {
        radius = 5.0;
    }
    radarView.pois = radarPointValues;
    radarView.radius = radius;
    [radarView setNeedsDisplay];
    [radarPointValues release];
}
Could any Core Graphics or UI expert give us their point of view on this issue?
You could either hit-test directly, like this:
if ([self pointInside:point withEvent:event]) {
    // do something
}
or, as I would suggest, add the hit test to the superview and do the following in the hit test of the MarkerViews' parent:
if ([markerView pointInside:point withEvent:event]) {
    // extract the tag and show the relevant info
}
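Putting that together, here is a minimal sketch of such a hitTest: override on the overlay (a hypothetical illustration, not Mixare's actual code). Note that pointInside:withEvent: expects a point in the receiver's own coordinate space, which is why the question's version, which tested against self.frame (superview coordinates), always failed; converting the point per marker also keeps each marker's transform out of the math:

// Hypothetical override on the overlay view that owns the MarkerViews.
// Walk subviews topmost-first and convert the point into each marker's
// own coordinate space before testing.
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    for (MarkerView *marker in [self.subviews reverseObjectEnumerator]) {
        CGPoint localPoint = [self convertPoint:point toView:marker];
        if ([marker pointInside:localPoint withEvent:event]) {
            return marker; // let this marker receive the touch
        }
    }
    return [super hitTest:point withEvent:event];
}

Inside MarkerView itself, the corresponding check would be against self.bounds, not self.frame.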
Hope this helps

Same CGAffineTransform different anchor

I have 1 view with 2 subviews, one of them 10 times bigger than the other. I have a gesture recognizer on the big one (which is on top).
I want to be able to scale the big one with the pinch gesture, using an anchor point between the fingers. And I want the little one to undergo that same transform from the same global-position anchor point, but without changing its own anchor point.
I hope I'm explaining myself. Here is the code:
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    // this changes the anchor point of "big" without moving it
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        float scale = sqrt(transform.a * transform.a + transform.c * transform.c);
        // this transforms "big"
        [gestureRecognizer view].transform = transform;
        // anchor point location in the little view
        CGPoint pivote = [gestureRecognizer locationInView:little];
        CGAffineTransform transform_t = CGAffineTransformConcat(CGAffineTransformMakeTranslation(-pivote.x, -pivote.y), transform);
        transform_t = CGAffineTransformConcat(transform_t, CGAffineTransformMakeTranslation(pivote.x, pivote.y));
        little.transform = transform_t;
    }
    [gestureRecognizer setScale:1];
}
But this is not working; the little view keeps jumping around and going crazy.
EDIT: More info.
Ok, this is the diagram:
The red square is the big view, the dark one is the little one. The dotted square is the main view.
The line [self adjustAnchorPointForGestureRecognizer:gestureRecognizer]; changes the big view's anchor point to the center of the pinch gesture. That works.
As I scale the big view, the small view should scale by the same amount and move so that it stays centered in the big view as it is now. That is, it should scale around the same anchor point as the big view.
I would like to keep those transforms on the little view in a CGAffineTransform, if possible.
OK, I finally found it. I don't know if it's the best solution, but it works.
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        float scale = sqrt(transform.a * transform.a + transform.c * transform.c);
        if ((scale > 0.1) && (scale < 20)) {
            [gestureRecognizer view].transform = transform;
            CGPoint anchor = [gestureRecognizer locationInView:little];
            anchor = CGPointMake(anchor.x - little.bounds.size.width / 2, anchor.y - little.bounds.size.height / 2);
            CGAffineTransform affineMatrix = little.transform;
            affineMatrix = CGAffineTransformTranslate(affineMatrix, anchor.x, anchor.y);
            affineMatrix = CGAffineTransformScale(affineMatrix, [gestureRecognizer scale], [gestureRecognizer scale]);
            affineMatrix = CGAffineTransformTranslate(affineMatrix, -anchor.x, -anchor.y);
            little.transform = affineMatrix;
            [eaglView setTransform:little.transform];
            [gestureRecognizer setScale:1];
        }
    }
}
That eaglView line is the real reason why I needed a CGAffineTransform and couldn't just change the anchor point: I'm sending the transform to OpenGL to change the model-view matrix.
Now it works perfectly with all 3 transforms (rotate, scale, translate) at the same time, driven by user input.
EDIT
Just a little note: it seems that when I move the view too fast, the eaglView and the UIView get out of sync. So I don't apply the transforms to the UIViews directly; I apply them using the info coming out of the eaglView, like this:
- (void)twoFingerPinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        float scale = sqrt(transform.a * transform.a + transform.c * transform.c);
        if ((scale > 0.1) && (scale < 20)) {
            //[gestureRecognizer view].transform = transform;
            CGPoint anchor = [gestureRecognizer locationInView:little];
            anchor = CGPointMake(anchor.x - little.bounds.size.width / 2, anchor.y - little.bounds.size.height / 2);
            CGAffineTransform affineMatrix = little.transform;
            affineMatrix = CGAffineTransformTranslate(affineMatrix, anchor.x, anchor.y);
            affineMatrix = CGAffineTransformScale(affineMatrix, [gestureRecognizer scale], [gestureRecognizer scale]);
            affineMatrix = CGAffineTransformTranslate(affineMatrix, -anchor.x, -anchor.y);
            //little.transform = affineMatrix;
            [eaglView setTransform:affineMatrix];
            [gestureRecognizer setScale:1];
            CGAffineTransform viewTransform = CGAffineTransformMakeRotation(eaglView.myRotation * M_PI / 180);
            viewTransform = CGAffineTransformConcat(CGAffineTransformMakeScale(eaglView.myScale, eaglView.myScale), viewTransform);
            viewTransform = CGAffineTransformConcat(viewTransform, CGAffineTransformMakeTranslation(eaglView.myTranslate.x, -eaglView.myTranslate.y));
            little.transform = viewTransform;
            big.transform = viewTransform;
        }
    }
}
To scale the smaller view using the center of the pinch as the anchor point, you'll need to calculate the new position by hand:
CGRect newFrame = little.frame; // returns the frame based on the current transform
newFrame.origin.x = pivot.x + (newFrame.origin.x - pivot.x) * gestureRecognizer.scale;
newFrame.origin.y = pivot.y + (newFrame.origin.y - pivot.y) * gestureRecognizer.scale;
newFrame.size.width = newFrame.size.width * gestureRecognizer.scale;
newFrame.size.height = newFrame.size.height * gestureRecognizer.scale;
Then, update the transform. Personally I would do this based on the view's real position rather than transforming the current transform; I find it easier to think about. So, for example:
little.transform = CGAffineTransformIdentity; // remove the current transform
CGRect orgFrame = little.frame;
CGFloat scale = newFrame.size.width / orgFrame.size.width;
CGAffineTransform t = CGAffineTransformMakeScale(scale, scale);
t = CGAffineTransformConcat(t, CGAffineTransformMakeTranslation(newFrame.origin.x - orgFrame.origin.x, newFrame.origin.y - orgFrame.origin.y));
little.transform = t;
Note that I've just typed in this code off the top of my head to give you an idea. It'll need testing and debugging!
Also, some of that code can be removed if you use the scale value based on the original pinch rather than resetting it each time and then transforming the transforms.
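For illustration, here is a minimal sketch of that cumulative variant, assuming a hypothetical startTransform ivar captured when the gesture begins:

// Cumulative-scale variant: treat the recognizer's scale as absolute
// instead of resetting it to 1 on every change.
if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
    startTransform = little.transform; // assumed ivar
}
little.transform = CGAffineTransformScale(startTransform,
                                          gestureRecognizer.scale,
                                          gestureRecognizer.scale);
// Note: no [gestureRecognizer setScale:1] with this approach.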
Tim

NSAffineTransforms not being used?

I have a subclass of NSView, and in it I'm drawing an NSImage. I'm using NSAffineTransforms to rotate, translate and scale the image.
Most of it works fine. However, sometimes the transforms just don't seem to get applied.
For example, when I resize the window, the rotate transform doesn't happen.
When I zoom in on the image, it puts the lower left of the image in the correct place but doesn't zoom it; yet it does zoom the part of the image that would be to the right of the original-sized image. If I rotate this, it zooms correctly but translates wrong. (The translation may be a calculation error on my part.)
Here is the code of my drawRect: (sorry for the long code chunk)
- (void)drawRect:(NSRect)rect
{
    // Drawing code here.
    double rotateDeg = -90 * rotation;
    NSAffineTransform *afTrans = [[NSAffineTransform alloc] init];
    NSGraphicsContext *context = [NSGraphicsContext currentContext];
    NSSize sz;
    NSRect windowFrame = [[self window] frame];
    float deltaX, deltaY;
    NSSize superSize = [[self superview] frame].size;
    float height, width, sHeight, sWidth;
    NSRect imageRect;
    if (image)
    {
        sz = [image size];
        imageRect.size = sz;
        imageRect.origin = NSZeroPoint;
        imageRect.size.width *= zoom;
        imageRect.size.height *= zoom;
        height = sz.height * zoom;
        width = sz.width * zoom;
        sHeight = superSize.height;
        sWidth = superSize.width;
    }
I need to grab the sizes of everything early so that I can use them later when I rotate. I am not sure that I need to protect any of that, but I'm paranoid from years of C...
    [context saveGraphicsState];
    // rotate
    [afTrans rotateByDegrees:rotateDeg];
    // translate to account for window size;
    deltaX = 0;
    deltaY = 0;
    // translate to account for rotation
    // in cases 1 and 3, X and Y are reversed because the entire FRAME
    // (including axes) is rotated!
    switch (rotation)
    {
        case 0:
            // NSLog(@"No rotation ");
            break;
        case 1:
            deltaY -= (sHeight - height);
            deltaX -= sHeight;
            break;
        case 2:
            deltaX -= width;
            deltaY -= (2 * sHeight - height);
            // it's rotating around the lower left of the FRAME, so
            // we need to move it up two frame heights, and then down
            // the height of the image
            break;
        case 3:
            deltaX += (sHeight - width);
            deltaY -= sHeight;
            break;
    }
Since I'm rotating around the lower left corner, and I want the image to be locked to the upper left corner, I need to move the image around. When I rotate once, the image ends up in the +/- quadrant, so I need to shift it up one view-height, and to the left by a view-height minus an image-height, and so on.
    [afTrans translateXBy:deltaX yBy:deltaY];
    // for putting image in upper left
    // zoom
    [afTrans scaleBy:zoom];
    printMatrix([afTrans transformStruct]);
    NSLog(@"zoom %f", zoom);
    [afTrans concat];
    if (image)
    {
        NSRect drawingRect = imageRect;
        NSRect frame = imageRect;
        frame.size.height = MAX(superSize.height, imageRect.size.height);
        [self setFrame:frame];
        deltaY = superSize.height - imageRect.size.height;
        drawingRect.origin.y += deltaY;
This makes the frame the correct size, so that the image sits in the upper left of the frame.
If the image is bigger than the window, I want the frame big enough that scroll bars appear. If it isn't, I want the frame big enough that it reaches the top of the window.
        [image drawInRect:drawingRect
                 fromRect:imageRect
                operation:NSCompositeSourceOver
                 fraction:1];
        if ((rotation % 2))
        {
            float tmp;
            tmp = drawingRect.size.width;
            drawingRect.size.width = drawingRect.size.height;
            drawingRect.size.height = tmp;
        }
This code may be entirely historical, now that I look at it... the idea was to swap height and width if I rotated 90 or 270 degrees.
    }
    else
        NSLog(@"no image");
    [afTrans release];
    [context restoreGraphicsState];
}
Why do you use the superview's size? That's something you should almost never need to worry about. You should make the view work on its own, without dependencies on being embedded in any specific view.
Scaling the size of imageRect is probably not the right way to go. Generally, when calling -drawInRect:fromRect:operation:fraction: you want the source rect to be the bounds of the image, and you scale the destination rect to zoom it.
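A minimal sketch of that suggestion, reusing the question's image, zoom and deltaY values (an adaptation, not the asker's code):

// The source rect is the full, untouched image; the destination rect carries the zoom.
NSRect srcRect = NSMakeRect(0, 0, sz.width, sz.height);
NSRect dstRect = NSMakeRect(0, deltaY, sz.width * zoom, sz.height * zoom);
[image drawInRect:dstRect
         fromRect:srcRect
        operation:NSCompositeSourceOver
         fraction:1.0];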
The problems you're reporting sound like you're not redrawing the entire view after changing the transformation. Are you calling -setNeedsDisplay:YES?
How is this view embedded in the window? Is it inside an NSScrollView? Have you made sure the scroll view resizes along with the window?