I'm trying to rotate an image view in its superview so that, while rotating, the image view always touches the superview's borders without crossing them, resizing as needed. How can I implement this? The image view should be able to rotate a full 360°.
Here I use calculations based on right-triangle formulas, starting from the image view's initial diagonal angle.
Maybe I should take into account the new bounding frame of the image view after it gets rotated (its x and y coordinates become negative, and its frame size grows after the transform too).
No success so far: my image view shrinks too quickly and too much. So my goal, as I understand it, is to get the proper scale factor for CGAffineTransformScale. Maybe there are other ways to do the same.
// set initial values
_planImageView.layer.affineTransform = CGAffineTransformScale(CGAffineTransformIdentity, 1, 1);
_degrees = 0;
_initialWidth = _planImageView.frame.size.width;
_initialHeight = _planImageView.frame.size.height;
_initialAngle = MathUtils::radiansToDegrees(atan((_initialWidth / 2) / (_initialHeight / 2)));

// rotation routine
- (void)rotatePlanWithDegrees:(double)degrees
{
    double deltaDegrees = degrees - _degrees;
    _initialAngle -= deltaDegrees;
    double newAngle = _initialAngle;
    double newWidth = (_initialWidth / 2) * tan(MathUtils::degreesToRadians(newAngle)) * 2;
    double newHeight = newWidth * (_initialHeight / _initialWidth);
    NSLog(@"DEG %f DELTA %f A %f W %f H %f", degrees, deltaDegrees, newAngle, newWidth, newHeight);

    double currentScale = newWidth / _initialWidth;
    _planImageView.layer.affineTransform = CGAffineTransformScale(CGAffineTransformIdentity, currentScale, currentScale);
    _planImageView.layer.affineTransform = CGAffineTransformRotate(_planImageView.layer.affineTransform, (CGFloat) MathUtils::degreesToRadians(degrees));

    _degrees = degrees;
    self->_planImageView.center = _center;
    // NSLog(@"%@", NSStringFromCGRect(_planImageView.frame));
}
EDIT
I rewrote the routine thanks to the answer, and now it works!
- (void)rotatePlanWithDegrees:(double)degrees
{
    // note: fabs(), not abs() — plain C abs() truncates its argument to an int
    double newWidth =
        _initialWidth * fabs(cos(MathUtils::degreesToRadians(degrees))) +
        _initialHeight * fabs(sin(MathUtils::degreesToRadians(degrees)));
    double newHeight =
        _initialWidth * fabs(sin(MathUtils::degreesToRadians(degrees))) +
        _initialHeight * fabs(cos(MathUtils::degreesToRadians(degrees)));

    CGFloat scale = (CGFloat) MIN(
        self.planImageScrollView.frame.size.width / newWidth,
        self.planImageScrollView.frame.size.height / newHeight);

    CGAffineTransform rotationTransform = CGAffineTransformMakeRotation((CGFloat) MathUtils::degreesToRadians(degrees));
    CGAffineTransform scaleTransform = CGAffineTransformMakeScale(scale, scale);
    _planImageView.layer.affineTransform = CGAffineTransformConcat(rotationTransform, scaleTransform);
    self->_planImageView.center = _center;
}
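For anyone wiring this up: a minimal sketch of driving the routine from a UIRotationGestureRecognizer. The recognizer setup and the _degrees bookkeeping here are my assumptions, not part of the original code:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Hypothetical wiring: attach a rotation gesture to the scroll view.
    UIRotationGestureRecognizer *recognizer =
        [[UIRotationGestureRecognizer alloc] initWithTarget:self
                                                     action:@selector(handleRotation:)];
    [self.planImageScrollView addGestureRecognizer:recognizer];
}

- (void)handleRotation:(UIRotationGestureRecognizer *)recognizer
{
    // recognizer.rotation is in radians, relative to the gesture's start,
    // so add it to the angle accumulated by previous gestures (_degrees).
    double degrees = _degrees + MathUtils::radiansToDegrees(recognizer.rotation);
    [self rotatePlanWithDegrees:degrees];
    if (recognizer.state == UIGestureRecognizerStateEnded) {
        _degrees = degrees;
    }
}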
When you rotate a W x H rectangle by Θ, the bounding box takes the dimensions W' = W |cos Θ| + H |sin Θ|, H' = W |sin Θ| + H |cos Θ|.
If you need to fit that in a W" x H" rectangle, the scaling factor is the smaller of W"/W' and H"/H'.
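As a quick sanity check (my numbers, not from the original answer): rotating a 200 x 100 view by Θ = 45° gives W' = 200 * 0.7071 + 100 * 0.7071 ≈ 212.1, and H' ≈ 212.1 as well; fitting that into a 300 x 300 superview yields scale = min(300/212.1, 300/212.1) ≈ 1.41.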
Related
I'm trying to warp the mouse using NSWindow local coordinates (but I'm starting from local coordinates in px instead of pt, with the y-axis reversed).
-(void)setProperRelativeMouseLocationTo:(NSPoint)loc
{
    CGFloat scale = [[m_window screen] backingScaleFactor];
    NSPoint point = NSMakePoint(loc.x / scale, loc.y / scale);
    point.y = [m_view frame].size.height - point.y;

    NSRect rect = NSZeroRect;
    rect.origin = point;
    rect = [m_window convertRectToScreen:rect];
    point = rect.origin;

    const float screenHeight = [[m_window screen] frame].size.height;
    point.y = screenHeight - point.y;
    warpCursor(point);
}

void warpCursor(NSPoint loc)
{
    CGPoint newCursorPosition = CGPointMake(loc.x, loc.y);
    CGWarpMouseCursorPosition(newCursorPosition);
}
However, the result is unexpected on one of my screens: the x-axis is correct, but the y-axis is off by 280 pt.
This value is not random; it corresponds to the height difference between the two screens I'm using: the left one is 1280 x 800 (pt) and the right one is 1920 x 1080 (pt) (the left one has a backing scale factor of 2, while the right one has a factor of 1).
On the left screen, the mouse is warped exactly where it should be (if I read its local coordinates, they correspond to the ones I asked it to warp to).
Cocoa screen coordinates have their origin at the lower-left of the primary screen. Core Graphics coordinates have their origin at the top-left of the primary screen. Therefore, you have to use the primary screen's height to convert between the two.
You have:
const float screenHeight = [[m_window screen] frame].size.height;
point.y = screenHeight - point.y;
You need:
const float screenHeight = [[NSScreen screens][0] frame].size.height;
point.y = screenHeight - point.y;
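Putting it together, a sketch of the full routine with that fix applied. m_window, m_view, and warpCursor are the names from the question; the only change from the original is the primary-screen lookup:

-(void)setProperRelativeMouseLocationTo:(NSPoint)loc
{
    CGFloat scale = [[m_window screen] backingScaleFactor];
    // px -> pt, then flip to Cocoa's bottom-left view origin
    NSPoint point = NSMakePoint(loc.x / scale, loc.y / scale);
    point.y = [m_view frame].size.height - point.y;
    // window -> Cocoa screen coordinates
    NSRect rect = NSZeroRect;
    rect.origin = point;
    rect = [m_window convertRectToScreen:rect];
    point = rect.origin;
    // Cocoa -> Core Graphics: flip around the PRIMARY screen's height,
    // not the height of the screen the window happens to be on
    const CGFloat screenHeight = [[NSScreen screens][0] frame].size.height;
    point.y = screenHeight - point.y;
    warpCursor(point);
}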
Hi, I'm using this code to scroll my tile map. What would be the best way to implement parallax scrolling? I've figured out a way, but it doesn't work very well. :(
- (void)setViewpointCenter:(CGPoint)position {
    NSInteger x = MAX(position.x, self.size.width / 2);
    NSInteger y = MAX(position.y, self.size.height / 2);
    x = MIN(x, (self.map.mapSize.width * self.map.tileSize.width) - self.size.width / 2);
    y = MIN(y, (self.map.mapSize.height * self.map.tileSize.height) - self.size.height / 2);
    CGPoint actualPosition = CGPointMake(x, y);
    CGPoint centerOfView = CGPointMake(self.size.width / 2, self.size.height / 2);
    CGPoint viewPoint = CGPointSubtract(centerOfView, actualPosition);
    self.map.position = viewPoint;
}
Thank you all for your help!
Scroll your background map at a fixed percentage of the foreground's movement. Typically this is only done in the axis most movement happens in (e.g. horizontal for side-scrollers and vertical for vertical scrollers).
The following pseudo-code assumes a side-scroller where you start at the bottom left of the map; this bottom left is (0,0), and the axes increase as you move up/right:
backgroundPercent = 80
background_x = foreground_x * backgroundPercent / 100
background_y = foreground_y
Using the above, your background map only needs to be 80% as wide as the foreground.
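As a concrete sketch, here is how that could plug into the setViewpointCenter: method from the question. The backgroundMap property is my assumption, CGPointSubtract is the question's own helper, and only the horizontal axis is parallaxed, as described above:

- (void)setViewpointCenter:(CGPoint)position {
    NSInteger x = MAX(position.x, self.size.width / 2);
    NSInteger y = MAX(position.y, self.size.height / 2);
    x = MIN(x, (self.map.mapSize.width * self.map.tileSize.width) - self.size.width / 2);
    y = MIN(y, (self.map.mapSize.height * self.map.tileSize.height) - self.size.height / 2);
    CGPoint centerOfView = CGPointMake(self.size.width / 2, self.size.height / 2);
    CGPoint viewPoint = CGPointSubtract(centerOfView, CGPointMake(x, y));
    self.map.position = viewPoint;
    // Parallax: the background scrolls at 80% of the foreground's horizontal
    // movement, so it only needs to be 80% as wide as the foreground map.
    static const CGFloat kBackgroundPercent = 0.8f;
    self.backgroundMap.position = CGPointMake(viewPoint.x * kBackgroundPercent, viewPoint.y);
}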
There is a simple iPad application. The view hierarchy is:
Window - UIView - UIImageView
The UIImageView has a loaded image that is bigger than the screen size (i.e. 1280 x 2000 against 768 x 1004). I need to zoom in on an arbitrary rectangular part of the image (fit it on the screen) and then center it on the screen. At the moment I do the zooming the following way:
-(IBAction)posIt:(id)sender
{
    // Rectangle size and origin in the image's coordinates.
    // It's received from outside the application.
    CGFloat x = 550.f;
    CGFloat y = 1006.f;
    CGFloat w = 719.f;
    CGFloat h = 850.f;

    CGSize frameSize = CGSizeMake(_imgView.image.size.width, _imgView.image.size.height);
    _imgView.frame = CGRectMakeWithParts(CGPointMake(0, 0), frameSize);

    // Calculate the center manually because of orientation changes.
    CGPoint superCenter = CGPointMake(self.view.bounds.size.width / 2, self.view.bounds.size.height / 2);

    // delta is the scale factor between the rectangle's width and the device screen width.
    CGFloat delta = 768.f / w;

    // Resize the UIImageView so the rectangle fits on the screen.
    frameSize = CGSizeMake(_imgView.image.size.width * delta, _imgView.image.size.height * delta);
    _imgView.frame = CGRectMakeWithParts(CGPointMake(0, 0), frameSize);

    // Here I'm trying to move the rectangle's center to the superview's center.
    _imgView.center = CGPointMake(superCenter.x + (superCenter.x - (w / 2) * delta - x * delta),
                                  superCenter.y + (superCenter.y - (h / 2) * delta - y * delta));
}
The UIImageView's contentMode is UIViewContentModeScaleAspectFit. I believe that to move one point (the UIImageView's center) to another (in order to move the rectangle to the center of the superview) I should use the following formula:
newX = oldX + oldX - rectCenterX;
newY = oldY + oldY - rectCenterY;
Since the UIImageView is scaled by the delta factor, rectCenterX and rectCenterY should be multiplied by the same delta. But I'm wrong: the rectangles' centers jump away from the superview's center.
Is there some way to make these calculations correctly? What am I missing?
Thanks in advance.
I've managed to get correct and stable image centering. Instead of calculating the center of the image, I'm calculating the UIImageView's frame origin coordinates like this:
CGPoint newOrigin = CGPointMake((-zoomRect.origin.x * delta) + superCenter.x - (zoomRect.size.width / 2) * delta,
(-zoomRect.origin.y * delta) + superCenter.y - (zoomRect.size.height / 2) * delta);
It turns out there is some difference between calculating the center and calculating the origin. Anyway, the problem is solved. Thanks, everybody.
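For completeness, a sketch of the whole routine with the frame-origin approach folded in. zoomRect holds the rectangle from the question, CGRectMakeWithParts is the question's own helper, and the 768 pt screen width is taken from the question too:

-(IBAction)posIt:(id)sender
{
    // Rectangle to zoom to, in the image's own coordinates.
    CGRect zoomRect = CGRectMake(550.f, 1006.f, 719.f, 850.f);
    CGPoint superCenter = CGPointMake(self.view.bounds.size.width / 2,
                                      self.view.bounds.size.height / 2);
    // Scale so the rectangle's width fills the screen width.
    CGFloat delta = 768.f / zoomRect.size.width;
    CGSize frameSize = CGSizeMake(_imgView.image.size.width * delta,
                                  _imgView.image.size.height * delta);
    // Place the frame so the rectangle's center lands on superCenter.
    CGPoint newOrigin = CGPointMake(
        superCenter.x - (zoomRect.origin.x + zoomRect.size.width / 2) * delta,
        superCenter.y - (zoomRect.origin.y + zoomRect.size.height / 2) * delta);
    _imgView.frame = CGRectMakeWithParts(newOrigin, frameSize);
}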
I am working on an application that allows the user to select an image (camera or gallery), then draw on that image with their finger. The area that they draw becomes the transparent portion of a mask. A second image is then drawn below the first image.
I've been working on improving performance, especially on the iPad 3, and I seem to have hit a wall. I'm new to iOS and Objective-C.
So my question is: What can I do to improve the drawing performance of my application?
I used this tutorial as a starting point for my code.
First I draw to a cache context:
- (void) drawToCache {
    CGRect dirtyPoint1;
    CGRect dirtyPoint2;
    UIColor *color;
    if (point1.x > -1){
        hasDrawn = YES;
        maskContext = CGLayerGetContext(maskLayer);
        blackBackgroundContext = CGLayerGetContext(blackBackgroundLayer);
        if (!doUndo){
            color = [UIColor whiteColor];
            CGContextSetBlendMode(maskContext, kCGBlendModeColor);
        }
        else{
            color = [UIColor clearColor];
            CGContextSetBlendMode(maskContext, kCGBlendModeClear);
        }
        CGContextSetShouldAntialias(maskContext, YES);
        CGContextSetRGBFillColor(blackBackgroundContext, 0.0, 0.0, 0.0, 1.0);
        CGContextFillRect(blackBackgroundContext, self.bounds);
        CGContextSetStrokeColorWithColor(maskContext, [color CGColor]);
        CGContextSetLineCap(maskContext, kCGLineCapRound);
        CGContextSetLineWidth(maskContext, brushSize);

        // After 4 touches we should have a back anchor point; if not, use the current anchor point.
        double x0 = (point0.x > -1) ? point0.x : point1.x;
        double y0 = (point0.y > -1) ? point0.y : point1.y;
        double x1 = point1.x;
        double y1 = point1.y;
        double x2 = point2.x;
        double y2 = point2.y;
        double x3 = point3.x;
        double y3 = point3.y;

        // Midpoints of the three segments.
        double xc1 = (x0 + x1) / 2.0;
        double yc1 = (y0 + y1) / 2.0;
        double xc2 = (x1 + x2) / 2.0;
        double yc2 = (y1 + y2) / 2.0;
        double xc3 = (x2 + x3) / 2.0;
        double yc3 = (y2 + y3) / 2.0;

        // Segment lengths, used to weight the control points.
        double len1 = sqrt((x1-x0) * (x1-x0) + (y1-y0) * (y1-y0));
        double len2 = sqrt((x2-x1) * (x2-x1) + (y2-y1) * (y2-y1));
        double len3 = sqrt((x3-x2) * (x3-x2) + (y3-y2) * (y3-y2));
        double k1 = len1 / (len1 + len2);
        double k2 = len2 / (len2 + len3);
        double xm1 = xc1 + (xc2 - xc1) * k1;
        double ym1 = yc1 + (yc2 - yc1) * k1;
        double xm2 = xc2 + (xc3 - xc2) * k2;
        double ym2 = yc2 + (yc3 - yc2) * k2;

        // Control points for a smooth cubic segment from point1 to point2.
        double smooth_value = 0.5;
        float ctrl1_x = xm1 + (xc2 - xm1) * smooth_value + x1 - xm1;
        float ctrl1_y = ym1 + (yc2 - ym1) * smooth_value + y1 - ym1;
        float ctrl2_x = xm2 + (xc2 - xm2) * smooth_value + x2 - xm2;
        float ctrl2_y = ym2 + (yc2 - ym2) * smooth_value + y2 - ym2;

        CGContextMoveToPoint(maskContext, point1.x, point1.y);
        CGContextAddCurveToPoint(maskContext, ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, point2.x, point2.y);
        CGContextStrokePath(maskContext);

        dirtyPoint1 = CGRectMake(point1.x-(brushSize/2), point1.y-(brushSize/2), brushSize, brushSize);
        dirtyPoint2 = CGRectMake(point2.x-(brushSize/2), point2.y-(brushSize/2), brushSize, brushSize);
        [self setNeedsDisplayInRect:CGRectUnion(dirtyPoint1, dirtyPoint2)];
    }
}
And my drawRect:
- (void)drawRect:(CGRect)rect {
    CGRect imageRect = CGRectMake(0, 0, 1024, 668);
    CGSize size = CGSizeMake(1024, 668);

    // Draw the user touches to a context; this creates a black-and-white image to convert into a mask.
    UIGraphicsBeginImageContext(size);
    CGContextSetShouldAntialias(UIGraphicsGetCurrentContext(), YES);
    CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), imageRect, blackBackgroundLayer);
    CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), imageRect, maskLayer);
    CGImageRef maskRef = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
    UIGraphicsEndImageContext();

    // Make the mask.
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);

    // Mask the user image.
    CGImageRef masked = CGImageCreateWithMask([self.original CGImage], mask);

    UIGraphicsBeginImageContext(size);
    CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), imageRect, backgroundTextureLayer);
    CGImageRef background = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
    UIGraphicsEndImageContext();

    // Flip the context so everything is right side up.
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, 668);
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);

    // Draw the background on the context.
    CGContextDrawImage(UIGraphicsGetCurrentContext(), imageRect, background);
    // Draw the mask on the context.
    CGContextDrawImage(UIGraphicsGetCurrentContext(), imageRect, masked);

    // Release everything to prevent memory leaks.
    CGImageRelease(background);
    CGImageRelease(maskRef);
    CGImageRelease(mask);
    CGImageRelease(masked);
}
I spent yesterday researching these questions:
How can I improve CGContextFillRect and CGContextDrawImage performance
CGContextDrawImage very slow on iPhone 4
Drawing in CATiledLayer with CoreGraphics CGContextDrawImage
iPhone: How do I add layers to UIView's root layer to display an image with the content property?
iPhone CGContextDrawImage and UIImageJPEGRepresentation drastically slowing down application
Any help is appreciated!
Thanks,
Kevin
I can give you some tips, but you're going to need to experiment (using Instruments) to see where the problems are:
Do not assume you need to draw the whole view: when you need to dirty some area, use setNeedsDisplayInRect:, and then in drawRect: only update the dirty area.
The above means your maskRef is much smaller and easier to construct.
Instead of [self.original CGImage], create a smaller subregion image using CGImageCreateWithImageInRect(), which lets you select a subsection of an image; then use that smaller image in conjunction with the mask.
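For the third tip, a minimal sketch of the cropping. The names are mine, and smallMask is assumed to be a mask already built at dirtyRect size, which the first two tips give you:

// Sketch: draw only the damaged subregion of the masked image.
// sourceImage is the full-size user image; smallMask covers dirtyRect only.
static void DrawMaskedSubregion(CGContextRef ctx, CGImageRef sourceImage,
                                CGImageRef smallMask, CGRect dirtyRect)
{
    // Cut the matching piece out of the full image so that masking and
    // drawing touch only dirtyRect's pixels instead of the whole 1024 x 668.
    CGImageRef piece = CGImageCreateWithImageInRect(sourceImage, dirtyRect);
    CGImageRef maskedPiece = CGImageCreateWithMask(piece, smallMask);
    CGContextDrawImage(ctx, dirtyRect, maskedPiece);
    CGImageRelease(piece);
    CGImageRelease(maskedPiece);
}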
I'm trying to zoom the drawing on the context based on pinching in and out. This works, but it always zooms from the origin (top left). That doesn't give the nice effect you get in other applications and is really annoying. The old zooming method is as follows:
- (IBAction)handlePinchGesture:(UIGestureRecognizer *)sender {
    UIPinchGestureRecognizer *pinch = (UIPinchGestureRecognizer *)sender;
    if ([pinch scale] > zoomLevel + MAX_ZOOM_PER_CALL) {
        [pinch setScale:zoomLevel + MAX_ZOOM_PER_CALL];
    }
    if ([pinch scale] < zoomLevel - MAX_ZOOM_PER_CALL) {
        [pinch setScale:zoomLevel - MAX_ZOOM_PER_CALL];
    }
    float prevZoomLevel = zoomLevel;
    CGFloat factor = [pinch scale];
    zoomLevel = factor;
    if (zoomLevel < MIN_ZOOM_LEVEL) {
        zoomLevel = MIN_ZOOM_LEVEL;
    }
    if (zoomLevel > MAX_ZOOM_LEVEL) {
        zoomLevel = MAX_ZOOM_LEVEL;
    }
    // The next piece of code is added here.
    [self setNeedsDisplay];
}
The actual zooming is done in the drawing methods of the parts that are shown on the context. I was thinking of changing the origin of the screen (that is, the origin relative to the items to be drawn) so it would simulate this. But it is not working at all; this is what I have right now:
float dZoomLevel = zoomLevel - prevZoomLevel;
if (dZoomLevel != 0) {
    CGPoint touchLocation = [pinch locationInView:self];
    CGPoint zoomedTouchLocation = CGPointMake(touchLocation.x * prevZoomLevel, touchLocation.y * prevZoomLevel);
    CGPoint newLocation = CGPointMake(touchLocation.x * zoomLevel, touchLocation.y * zoomLevel);
    float dX = (newLocation.x - zoomedTouchLocation.x);
    float dY = (newLocation.y - zoomedTouchLocation.y);
    _originX += roundf(dX);
    _originY += roundf(dY);
}
The origin is taken from the XML (the smallest x and y of all the elements). It will usually be somewhere between 12000 and 20000 but could in theory be between 0 and 100000.
I'm not sure if I've made myself clear, but if you need any information or don't understand something, just ask and I'll reply ASAP. Thanks in advance!
I found the answer myself. The way one needs to zoom in my case is by zooming from the point of the touch. So when the x and y coordinates get zoomed, do this:
x -= pinchLocation.x;
y -= pinchLocation.y;
x *= zoomLevel;
y *= zoomLevel;
x += pinchLocation.x;
y += pinchLocation.y;
or in short:
x = (x - pinchLocation.x) * zoomLevel + pinchLocation.x;
y = (y - pinchLocation.y) * zoomLevel + pinchLocation.y;
Now x and y are at the correct location!
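To make that reusable, a small helper along these lines (my wrapper, same math):

static CGPoint ZoomedPoint(CGPoint p, CGFloat zoomLevel, CGPoint pinchLocation)
{
    // Scale p around pinchLocation instead of around the origin (top left).
    return CGPointMake((p.x - pinchLocation.x) * zoomLevel + pinchLocation.x,
                       (p.y - pinchLocation.y) * zoomLevel + pinchLocation.y);
}

Each element's coordinates are run through this in the drawing code before stroking.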