I have the following:
CGPoint pos;     // center of the image
CGPoint target;  // a point somewhere in the coordinate system
float rotation;  // the image's current rotation, clockwise, with 0° pointing up, so "right" would be 90°
Now I want the image to rotate around its center point (pos) so that it points directly at the target point.
My idea was: calculate the target's angle relative to the x-axis, subtract the current rotation, and then rotate by the difference.
Two things:
1.) I fail at calculating the angle. (Yes, I know it's all in rad...)
2.) What's best for rotating?
CGAffineTransform? But then I'd need an image view.
Or: save the context, set the origin to the center of the image, rotate the context, draw the image, restore the context? More complicated, but no image view needed...
Draw it on a CALayer, and move the layer around any way you like.
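For the angle itself, atan2 does the work. A minimal sketch, assuming the image sits on a hypothetical imageLayer whose position is pos (a CALayer rotates around its anchorPoint, which is the center by default), using the asker's convention of 0 = up with clockwise positive, and UIKit-style coordinates where y grows downward:

#import <QuartzCore/QuartzCore.h>

static void pointLayerAtTarget(CALayer *imageLayer, CGPoint target)
{
    CGPoint pos = imageLayer.position; // center of the image
    CGFloat dx = target.x - pos.x;
    CGFloat dy = target.y - pos.y;

    // atan2 gives the angle from the +x axis; adding pi/2 shifts it so
    // that 0 means "up" and pointing right comes out as +90 degrees.
    CGFloat angle = atan2(dy, dx) + M_PI_2;

    // Rotating the layer needs no UIImageView and animates implicitly.
    imageLayer.transform = CATransform3DMakeRotation(angle, 0.0, 0.0, 1.0);
}

Since this sets an absolute transform, there is no need to subtract the current rotation yourself; the new rotation simply replaces the old one.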
I'm making an application that generates a 2D area (you can think of it as a drawing), with a camera hovering over it. The size of said drawing isn't known in advance, and could change greatly. After the "drawing" is generated, I want to position the camera so that the whole drawing is in view.
My original idea was to calculate the points at the top, bottom, left, and right extremes of the drawing and have the camera move back, "zooming out", until they are all in sight, but there has to be a better way, right?
Assuming you are working in 2D (and thus an orthographic camera), you can set the camera's orthographicSize, which is half of the view's height in world units:
Camera.main.orthographicSize = height / 2F; // half of the height of the area
Then set the aspect ratio (width / height) if you control it:
Camera.main.aspect = 1F; // for example, a square viewport
The visible width is orthographicSize * aspect * 2, so to guarantee the whole drawing fits regardless of its shape, take the larger of the two constraints: orthographicSize = Mathf.Max(height / 2F, width / (2F * Camera.main.aspect)).
Here is the view hierarchy I have: a layer view that detects user touches, covering an image view that shows the image. The image view's content mode is aspect fit, so the image keeps its ratio. If I touch (100, 240) in the layer view, that is a layer view coordinate, not the image's coordinate. I would like to know how to convert the layer view's coordinate to the image's coordinate. In this example the image size may be 180×180, so the touch should map to something like (60, 90) in the image.
Thanks.
If I'm understanding this question correctly, you want to take a point, which is currently in relation to the layer's coordinate system, and convert it to the image view's coordinate system?
In that case, there are a couple of ways to do this.
Easiest is to use convertPoint:fromView: or convertPoint:toView:
CGPoint imageViewTouchPoint = [imageView convertPoint:touchPoint fromView:layerView];
CGPoint imageViewTouchPoint = [layerView convertPoint:touchPoint toView:imageView];
Either one should work.
EDIT - I realize now that this is only if the UIImageView has the same frame as the UIImage, which you said it might not, due to the UIViewContentModeScaleAspectFit property.
In this case, unless I'm mistaken, the image frame is calculated inside the UIImageView drawRect: method and isn't a property that gets set. This means you'll have to calculate this on your own.
Definitely get the imageViewTouchPoint from one of the methods above (just in case you want to use the same logic on a UIImageView which isn't the full screen size).
You will then need to calculate the scaled image frame. There are a couple of ways to do this. Some people go brute force: determine which side of the image is proportionally longer, scale by that side, then compute the origin by centering the image (subtract the scaled image's side from the image view's side and divide by two).
I like to write as little code as possible if it's unnecessary, even if it means importing a framework. If you import AVFoundation you get a method AVMakeRectWithAspectRatioInsideRect which you can use to actually calculate the scaled rectangle in one line of code.
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds); // bounds, not frame: the rect must be in the image view's own coordinate space
Whichever method you use, you will then simply translate your touched point with the scaled image origin:
CGPoint imageTouchPoint = CGPointMake(imageViewTouchPoint.x - imageRect.origin.x, imageViewTouchPoint.y - imageRect.origin.y);
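If you also need the point in the image's own pixel coordinates (as in the 180×180 example above), undo the aspect-fit scale as well. A minimal sketch, assuming layerView and imageView as above (the helper name is made up):

#import <AVFoundation/AVFoundation.h>

CGPoint imagePointForTouch(CGPoint touchPoint, UIView *layerView, UIImageView *imageView)
{
    // 1. Layer view coordinates -> image view coordinates.
    CGPoint p = [imageView convertPoint:touchPoint fromView:layerView];

    // 2. The rect the aspect-fitted image actually occupies in the view.
    CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(imageView.image.size, imageView.bounds);

    // 3. Translate by the image's origin, then undo the scale factor.
    CGFloat scale = imageRect.size.width / imageView.image.size.width;
    return CGPointMake((p.x - imageRect.origin.x) / scale,
                       (p.y - imageRect.origin.y) / scale);
}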
You have to do the math yourself. Calculate the aspect ratio of your image and compare with the aspect ratio of the image view's bounds.
Look at this question: How to Get Image position in ImageView
After searching more, I got a hack (note that resizedImageWithContentMode:bounds:interpolationQuality: comes from a third-party UIImage resizing category, not from UIKit):
CGSize imageInViewSize = [photo resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:imageView.bounds.size interpolationQuality:kCGInterpolationNone].size;
CGRect overlayRect = CGRectMake((imageView.frame.size.width - imageInViewSize.width) / 2,
(imageView.frame.size.height - imageInViewSize.height) / 2,
imageInViewSize.width,
imageInViewSize.height);
NSLog(#"Frame of Image inside UIImageView: Left:%f Top:%f Width:%f Height:%f \n", overlayRect.origin.x, overlayRect.origin.y, overlayRect.size.width, overlayRect.size.height);
I have just started to learn some cocos2d and this issue has bothered me for quite a while. Basically what I am trying to do is move a sprite in a layer by checking whether the touch landed in the sprite's bounding box, using ccTouchBegan and ccTouchMoved.
Everything worked until I moved the layer, which includes many other sprites and is also larger than the screen size. After I moved the layer, the sprite's bounding box is at a different position than where the sprite image shows. Has anyone experienced a similar issue before?
A sprite's boundingBox is always relative to the sprite's parent's coordinate system. If you move, rotate or scale the parent, the child will still have the same boundingBox. You can convert that to another coordinate system. If the parent has only been moved (not rotated or scaled) you can convert to the world coordinate system just by changing the origin of the boundingBox:
CGRect boundingBox = child.boundingBox;
boundingBox.origin = [child.parent convertToWorldSpace:boundingBox.origin];
NSLog(#"%#", NSStringFromCGRect(boundingBox));
If the parent is scaled, the size of the child's boundingBox changes accordingly. If the parent is rotated it gets quite complicated, because both the scale and the aspect ratio of the child's boundingBox can change. If all you want to do is test whether a touch occurred in the boundingBox, convert the touch location to the child's parent's coordinate system:
CGPoint touchLocation = [child.parent convertToNodeSpace:touchWorldLocation];
Now child.boundingBox and touchLocation are in the same coordinate system.
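Put together, a minimal sketch of that touch test (cocos2d-iphone style; child is hypothetical and stands for the sprite being tested):

- (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event
{
    // Convert the UIKit touch to cocos2d world (GL) coordinates.
    CGPoint touchWorldLocation = [[CCDirector sharedDirector] convertToGL:[touch locationInView:touch.view]];

    // Convert into the sprite's parent's space so it matches boundingBox.
    CGPoint touchLocation = [child.parent convertToNodeSpace:touchWorldLocation];
    return CGRectContainsPoint(child.boundingBox, touchLocation);
}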
If the sprites are in an array, test each one's boundingBox the same way:
CGRect boundingBoxuser = user.boundingBox;
for (CCSprite *spritecoinleft in Arraycoinleft)
{
    CGRect boundingBoxcoinleft = spritecoinleft.boundingBox;
    if (CGRectIntersectsRect(boundingBoxcoinleft, boundingBoxuser))
    {
        CCLOG(@"hi....!!");
    }
}
Is it possible to change the origin of an NSImage? If so, how would I go about doing it? I have coordinates in a regular Cartesian system, some of them with negative values, and I am trying to draw them at the corresponding points in the NSImage; but since the origin is at (0,0), some of them are missing.
EDIT: Say I have a drawing operation that needs to happen at the point (-10,-10) in an image; currently it doesn't show up. Is there a way to fix that?
If it's like in iOS (you may have to adapt the code a little) and if my memory serves, you have to do this, since origin is read-only:
CGRect myFrame = yourImage.frame;
myFrame.origin.x = newX;
myFrame.origin.y = newY;
yourImage.frame = myFrame;
I think you are confusing an NSImage with its container. An NSImage has no bounds or frame, and thus no origin. It does have a size, which may represent the pixel dimensions of its bitmap representation (if it has one) or otherwise its bounding box (if it is a vector image). Drawing in an image at a pixel location of (-10,-10) doesn't really make sense.
An NSImage is displayed in a container ( typically an NSImageView), and the container's bounds.origin will dictate the placement of the image relative to the imageView, but you can't modify pixels beyond the edge of the bitmap plane.
In any case you probably want to be using a subclassed NSView in which you would override the drawRect method for your custom drawing. NSView does have a bounds.origin but this is not relevant to your in-drawing coordinates, but rather to the position of the drawn content as a whole to the view's bounding box. The coordinate system that you will be drawing into will be referenced to your graphics context which will (usually) pin the origin (0,0) to the bottom left corner (OSX) or top left corner (iOS). If you are trying to represent negative points on a Cartesian plane, you will need to apply a translation transform to map your points into this positive coordinate space.
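As an illustration, a minimal sketch of such a translation in a subclassed NSView (the class name and point values are made up; the transform maps the Cartesian origin to the view's center so that negative coordinates land inside the view):

@interface PlotView : NSView
@end

@implementation PlotView

- (void)drawRect:(NSRect)dirtyRect
{
    // Map the Cartesian origin (0,0) to the center of the view.
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform translateXBy:NSMidX(self.bounds) yBy:NSMidY(self.bounds)];
    [transform concat];

    // A point at (-10,-10) now draws 10 points left of and below center.
    NSRect dot = NSMakeRect(-10.0 - 2.0, -10.0 - 2.0, 4.0, 4.0);
    [[NSColor redColor] set];
    [[NSBezierPath bezierPathWithOvalInRect:dot] fill];
}

@end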
I'm trying to explain in a few words, badly, something which Apple explains in great detail in their Quartz 2D Programming Guide.
I have a circle (a view with frame CGRectMake(0.0, 0.0, 100.0, 100.0) and cornerRadius = 100).
I want to move an image around that circle. The image should move around the circle and always point north. In other words, it's a compass, except the heading doesn't rotate but moves around the circle. I also need the animation to move forwards and backwards, just like on a compass.
Has anyone implemented something like this or have any suggestions or ideas?
Instead of trying to work out its coordinates, I'd just have an image that is the same size as the entire compass but completely transparent apart from the pointer, which will sit on the top edge.
Then put a UIImageView with this image over your compass and just rotate the image around its centre point.
That way the pointer will always follow a circle and it's easier to point north because you're just dealing with an angle of rotation instead of coordinates.
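A minimal sketch of that approach (the view names and duration are made up; headingRadians is assumed to already hold the heading angle in radians):

// pointerView: a UIImageView the same size as the compass, transparent
// except for the pointer drawn at its top edge.
pointerView.center = compassView.center; // share the rotation center
[UIView animateWithDuration:0.25 animations:^{
    pointerView.transform = CGAffineTransformMakeRotation(headingRadians);
}];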
For calculating the x,y coords for the circle...
You know the radius of the circle = radius.
You know the angle of rotation = theta (the angle is always in radians; 2π radians = one full turn).
x coord = radius * cos(theta).
y coord = radius * sin(theta).
That should do it.
You may have to convert between degrees and radians.
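For example (hypothetical values; note that these x,y are offsets from the circle's center, so add the center back in):

CGFloat radius = 50.0;      // radius of the circle
CGFloat theta = M_PI / 4.0; // rotation angle in radians
CGPoint center = compassView.center;
CGPoint pointOnCircle = CGPointMake(center.x + radius * cos(theta),
                                    center.y + radius * sin(theta));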