A sprite bounding box shows wrong position after moving the layer - objective-c

I have just started to learn some cocos2d and this issue has bothered me for quite a while. Basically, what I am trying to do is move a sprite in a layer by checking whether the touch landed on the sprite's bounding box, using ccTouchBegan and ccTouchMoved.
Everything worked until I moved the layer, which includes many other sprites and is also larger than the screen size. After I moved the layer, the sprite's bounding box is at a different position than where the sprite image shows. Has anyone experienced a similar issue before?

A sprite's boundingBox is always relative to the sprite's parent's coordinate system. If you move, rotate or scale the parent, the child will still have the same boundingBox. You can convert that to another coordinate system. If the parent has only been moved (not rotated or scaled) you can convert to the world coordinate system just by changing the origin of the boundingBox:
CGRect boundingBox = child.boundingBox;
boundingBox.origin = [child.parent convertToWorldSpace:boundingBox.origin];
NSLog(@"%@", NSStringFromCGRect(boundingBox));
If the parent is scaled, the size of the child's boundingBox changes accordingly. If the parent is rotated it gets quite complicated, because both scale and aspect ratio of the child's boundingBox can change. If all you want to do is test whether a touch occurred in the boundingBox, convert the touch location to the child's parent's coordinate system:
CGPoint touchLocation = [child.parent convertToNodeSpace:touchWorldLocation];
Now child.boundingBox and touchLocation are in the same coordinate system.
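For example, a minimal ccTouchBegan: sketch along those lines (child here stands in for whichever sprite you are hit-testing; this is not code from the question):
- (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event
{
    // Touch location in world (GL) coordinates.
    CGPoint touchWorldLocation = [[CCDirector sharedDirector] convertToGL:[touch locationInView:touch.view]];
    // Bring it into the child's parent's coordinate system so it matches boundingBox.
    CGPoint touchLocation = [child.parent convertToNodeSpace:touchWorldLocation];
    return CGRectContainsPoint(child.boundingBox, touchLocation);
}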

Using the boundingBox with an array of sprites:
CGRect boundingBoxuser = user.boundingBox;
for (CCSprite *spritecoinleft in Arraycoinleft)
{
    CGRect boundingBoxcoinleft = spritecoinleft.boundingBox;
    if (CGRectIntersectsRect(boundingBoxcoinleft, boundingBoxuser))
    {
        CCLOG(@"hi....!!");
    }
}

Related

Get SCNVector3 from CGPoint

I am trying to get a SCNVector3 from a CGPoint. I am using a gesture recognizer to get the location of a touch (as a CGPoint).
The problem is that the touch doesn't always hit something when I hit test because there isn't always an object being touched. (Touch an empty space to move your ship to that empty spot).
Other Stack Overflow questions that I have found use the SCNHitTestResult to get the worldCoordinates, but that doesn't work for me.
Does anyone know how to find this, given that I know the z coordinate? Ships always move with a z position of 1.
I need worldCoordinates to be able to use actions that move an SCNNode to a touch point, which is a CGPoint. Thanks!
So, you want to turn a point in view space into a point in scene space? The catch to that, of course, is that scene space has a third dimension and view space doesn't. You use the SCNView (or other renderer) methods projectPoint and unprojectPoint to convert between scene space, which is 3D, and view space, which is... also 3D? Yep — two dimensions of screen points, and one of normalized depth: the z-coordinate is 0 for points on the near clipping plane and 1 for points on the far clipping plane.
Anyhow, you have a useful constraint in that you're looking to map view-space points onto a specific plane (z=1) in scene space. You have an even more useful constraint if your scene space is oriented so that said plane is orthogonal to the view direction — i.e. the camera is pointing directly in the +z or -z direction.
If you want to map a view-space point to a particular scene-space depth, you need to know what the view-space depth for that plane is. Use projectPoint for that:
SCNVector3 projectedPlaneCenter = [view projectPoint:planeNode.position];
float projectedDepth = projectedPlaneCenter.z;
Now, hold onto that and you can make use of it whenever you need to map a touch location onto that plane:
CGPoint vp = [recognizer locationInView:view];
SCNVector3 vpWithDepth = SCNVector3Make(vp.x, vp.y, projectedDepth);
SCNVector3 scenePoint = [view unprojectPoint:vpWithDepth];
If your scene isn't oriented with the z-axis parallel to the camera, it's a bit harder — you have to work out where your z=1 plane is independently for any view-space point you process. In that case, you might find it easier to add an invisible SCNPlane to your scene and use the hitTest/worldCoordinates method to locate points on that plane.
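As a rough sketch of that last approach (node and variable names such as movementPlaneNode are made up for illustration, and you would still need to make the plane invisible to the user in whatever way suits your scene):
// One-time setup: a large plane at z = 1 for the ships to move on.
SCNPlane *plane = [SCNPlane planeWithWidth:1000.0 height:1000.0];
SCNNode *movementPlaneNode = [SCNNode nodeWithGeometry:plane];
movementPlaneNode.position = SCNVector3Make(0, 0, 1);
[scnView.scene.rootNode addChildNode:movementPlaneNode];

// In the gesture handler: hit-test against the scene and pick out the plane.
CGPoint vp = [recognizer locationInView:scnView];
for (SCNHitTestResult *hit in [scnView hitTest:vp options:nil]) {
    if (hit.node == movementPlaneNode) {
        SCNVector3 scenePoint = hit.worldCoordinates;
        // ... run an action that moves the ship toward scenePoint
        break;
    }
}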

How to find back the real position on the image on iOS?

Here is the setup I have: a layer view that detects the user's touch, and an image view that shows the image. The layer view sits on top of the image view. The image view's content mode is aspect fit, so the image doesn't lose its ratio. A touch at (100, 240) in my layer view is a layer-view coordinate, not the image's coordinate. I would like to know how to convert the layer view's coordinate to an image coordinate. In this example the image size may be 180*180, so the coordinate in the image would be (60, 90).
Thanks.
If I'm understanding this question correctly, you want to take a point, which is currently in relation to the layer's coordinate system, and convert it to the image view's coordinate system?
In that case, there are a couple of ways to do this.
Easiest is to use convertPoint:fromView: or convertPoint:toView:
CGPoint imageViewTouchPoint = [imageView convertPoint:touchPoint fromView:layerView];
CGPoint imageViewTouchPoint = [layerView convertPoint:touchPoint toView:imageView];
Either one should work.
EDIT - I realize now that this is only if the UIImageView has the same frame as the UIImage, which you said it might not, due to the UIViewContentModeScaleAspectFit property.
In this case, unless I'm mistaken, the image frame is calculated inside the UIImageView drawRect: method and isn't a property that gets set. This means you'll have to calculate this on your own.
Definitely get the imageViewTouchPoint from one of the methods above (just in case you want to use the same logic on a UIImageView which isn't the full screen size).
You will then need to calculate the scaled image frame. There are a couple of ways to do this. Some people go brute force and manually calculate it based on which side of the image is longer, then determine which side should be scaled. Then they calculate the origin by centering the image: subtracting the image's side from the image view's side and dividing by two.
I like to write as little code as possible if it's unnecessary, even if it means importing a framework. If you import AVFoundation you get a function, AVMakeRectWithAspectRatioInsideRect, which you can use to calculate the scaled rectangle in one line of code.
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds);
Whichever method you use, you will then simply translate your touched point with the scaled image origin:
CGPoint imageTouchPoint = CGPointMake(imageViewTouchPoint.x - imageRect.origin.x, imageViewTouchPoint.y - imageRect.origin.y);
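Putting the pieces together, a sketch (variable names follow the question and the snippets above; the final multiplication by scale is an extra step that converts from view points into the image's own pixel coordinates, which the 180*180 example in the question seems to want):
#import <AVFoundation/AVFoundation.h>

// Touch point converted into the image view's coordinate space.
CGPoint imageViewTouchPoint = [imageView convertPoint:touchPoint fromView:layerView];

// Rect actually occupied by the aspect-fit image inside the image view.
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds);

// Translate into the displayed image's space, then scale up to image pixels.
CGFloat scale = image.size.width / imageRect.size.width;
CGPoint imageTouchPoint = CGPointMake((imageViewTouchPoint.x - imageRect.origin.x) * scale,
                                      (imageViewTouchPoint.y - imageRect.origin.y) * scale);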
You have to do the math yourself. Calculate the aspect ratio of your image and compare with the aspect ratio of the image view's bounds.
Look at this question: How to Get Image position in ImageView
After searching more, got a hack:
CGSize imageInViewSize = [photo resizedImageWithContentMode:UIViewContentModeScaleAspectFit
                                                     bounds:imageView.bounds.size
                                       interpolationQuality:kCGInterpolationNone].size;
CGRect overlayRect = CGRectMake((imageView.frame.size.width - imageInViewSize.width) / 2,
                                (imageView.frame.size.height - imageInViewSize.height) / 2,
                                imageInViewSize.width,
                                imageInViewSize.height);
NSLog(@"Frame of Image inside UIImageView: Left:%f Top:%f Width:%f Height:%f \n",
      overlayRect.origin.x, overlayRect.origin.y, overlayRect.size.width, overlayRect.size.height);

Change NSImage Origin

Is it possible to change the origin of an NSImage? If so, how would I go about doing this? I have coordinates in a regular Cartesian system, some of them with negative values, and I am trying to draw them at the corresponding points in the NSImage, but since the origin is at (0,0) some of them are missing.
EDIT: Say I have a drawing operation that needs to be done to the image at the point (-10,-10); currently this doesn't show up. Is there a way to fix that?
If it's like in iOS (you may have to adapt the code a little) and if my memory is still good, you have to do this, since origin is read-only:
CGRect myFrame = yourImage.frame;
myFrame.origin.x = newX;
myFrame.origin.y = newY;
yourImage.frame = myFrame;
I think you are confusing an NSImage with its container. An NSImage has no bounds or frame, and thus no origin. It does have a size, which may represent the pixel dimensions of its bitmap representation (if it has one) or otherwise could represent its bounding box (if it is a vector image). Drawing in an image at a pixel location of (-10,-10) doesn't really make sense.
An NSImage is displayed in a container (typically an NSImageView), and the container's bounds.origin will dictate the placement of the image relative to the image view, but you can't modify pixels beyond the edge of the bitmap plane.
In any case you probably want to be using a subclassed NSView in which you would override the drawRect: method for your custom drawing. NSView does have a bounds.origin, but this is not relevant to your in-drawing coordinates; rather, it positions the drawn content as a whole relative to the view's bounding box. The coordinate system you will be drawing into is referenced to your graphics context, which will (usually) pin the origin (0,0) to the bottom-left corner (OS X) or top-left corner (iOS). If you are trying to represent negative points on a Cartesian plane, you will need to apply a translation transform to map your points into this positive coordinate space.
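A minimal sketch of that translation idea in a custom NSView subclass (the offsets are illustrative; pick whatever maps your Cartesian range into the view):
- (void)drawRect:(NSRect)dirtyRect
{
    [super drawRect:dirtyRect];

    // Shift the coordinate system so points like (-10,-10) land inside the view.
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform translateXBy:50.0 yBy:50.0];   // illustrative offsets
    [transform concat];

    // Drawing at "negative" Cartesian points is now visible.
    NSRectFill(NSMakeRect(-10.0, -10.0, 4.0, 4.0));
}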
I'm trying to explain in a few words, badly, something which Apple explains in great detail in their Quartz 2D Programming Guide.

Rotate image to point

I have the following:
CGPoint pos //center of an image
CGPoint target //a point, somewhere in the coordinate system
float rotation //the current rotation of the image to the x-axis, clockwise, so "right" would be 90°
Now I want the image to rotate around its center point (pos) so that it looks directly towards the target point.
My idea was: Calculate the angle corresponding to the x-axis, subtract rotation, and then rotate it.
Two things:
1.) I fail at calculating the angle. (Yes, I know it's all in rad...)
2.) What's best for rotating?
CGAffineTransform? But then I'd need an imageView
Or: save the context, set the origin to the center of the image, rotate the context, draw the image, restore the context? More complicated, but no image view needed...
Draw it on a CALayer, and move the layer around any way you like.
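For the angle part, a sketch (using the pos/target/rotation names from the question; imageLayer is a hypothetical layer holding the image, and the sign plus any constant offset depend on which way your artwork faces at zero rotation and whether the y-axis points up or down):
// Angle of the line from pos to target, measured from the positive x-axis.
CGFloat angle = atan2(target.y - pos.y, target.x - pos.x);

// If the image sits on its own CALayer, rotate that layer around its anchor
// point (the center by default); no image view is needed.
imageLayer.position = pos;
imageLayer.transform = CATransform3DMakeRotation(angle, 0.0, 0.0, 1.0);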

How to recognize the touch of a non regular sprite image?

I have a sprite, and if it is touched the touch should be recognized. I used coordinates to do so: I took the coordinates (min x, min y, max x, max y) of the sprite image. But the sprite image is not a rectangular shape, so even if I touch coordinates outside the sprite but inside its rectangular bounds, the sprite is recognized.
For my application I need the touch to be recognized only on the sprite itself. So I have to take only the sprite's own coordinates, but it is not a regular shape. I am using CCSprite in my program.
So, what can I do so that only the sprite itself is selected? Which classes should I use for this?
Thank You.
You could try one of the following...
Create a bounding box smaller than the absolute extents of the sprite image. Yes, it will be smaller than the sprite. This eliminates the dead-space click detection around the sprite; the trade-off is that parts of your sprite which look selectable won't be.
Use a circular bounding area to detect whether the user has clicked on your sprite. You will still have the dead-space problem from my first suggestion, but the circle may give you better coverage of the sprite, giving you better results on touch detection (a sketch follows below).
This is a standard problem in physics collision detection systems, which often end up using circles or rectangles as their collision bodies. I would go with either a circle or a rectangle smaller than the size of your sprite as your bounding area. Going finer-detailed than that, you could generate bounding-area polygons. This would, however, introduce a whole bunch of new issues and concerns.
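A sketch of the circular check in cocos2d terms (the 0.4 radius factor is a made-up value; tune it to your artwork):
// Touch location in the sprite's parent's space, so it is comparable to sprite.position.
CGPoint touchLocation = [sprite.parent convertTouchToNodeSpace:touch];

// Treat the sprite as a circle somewhat smaller than its bounding box.
CGFloat radius = 0.4f * MIN(sprite.boundingBox.size.width, sprite.boundingBox.size.height);
BOOL touched = ccpDistance(touchLocation, sprite.position) <= radius;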
I am building a Cocos2D game right now, and what I am doing is: first I step through my sprites and see which sprites the touch hit (they overlap in my app).
Then, for each sprite hit, I use [sprite convertTouchToNodeSpace:touch] to get an x,y coordinate inside the sprite, which I can use (although the y-axis is flipped) to reference the CGImage I created the sprite with.
If the pixel at the touch point is 'clear', i.e. alpha 0, then the sprite was not really touched, and I check the next sprite in the z-order to see if it has color where it was touched.
Sometimes I think I should be using a two color mask image to go along with each sprite, not the sprite image. But, I am mr. make it work, then make it fast.
I realise this is not super efficient, but I do not have very many sprites and I do this only for touches.
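A rough sketch of that per-pixel alpha check (PixelIsOpaque is a hypothetical helper; it expects the point in the image's own pixel coordinates, measured from the bottom-left corner as Core Graphics does, so the y-flip mentioned above has to happen before calling it):
// Draw the image so the pixel of interest lands in a 1x1 alpha-only bitmap, then read it.
static BOOL PixelIsOpaque(CGImageRef image, CGPoint pixel)
{
    unsigned char alpha = 0;
    CGContextRef context = CGBitmapContextCreate(&alpha, 1, 1, 8, 1, NULL,
                                                 (CGBitmapInfo)kCGImageAlphaOnly);
    // Offset the drawing so that (pixel.x, pixel.y) of the image ends up at (0,0).
    CGContextDrawImage(context,
                       CGRectMake(-pixel.x, -pixel.y,
                                  CGImageGetWidth(image), CGImageGetHeight(image)),
                       image);
    CGContextRelease(context);
    return alpha > 0;
}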