Centring a CGAffineTransformScale around a given point - objective-c

I'm animating objects falling onto a board from above, and I want to animate the board 'falling back' as the objects fall upon it. Objects can fall at any point on the board, and when the board 'falls back' I am scaling the board to a smaller scale.
When using CGAffineTransformScale, objects scale around their anchor point, the centre of the object. I want to scale the board and then line up the transformed board with the object that has fallen on it, so that the object appears to stay in the same place relative to the board (or, more correctly, so that the board stays in the same place relative to the position of the object).
I spent hours and hours changing the anchor point to the position where the object fell, but this revealed a fundamental misunderstanding on my part of how layer.anchorPoint actually works.
I imagine the solution is to derive a vector from the centre of the board to the given falling object and then somehow adjust the position of the board in the transformation so it stays in the same place. This is where I need help!
As you'd expect in these situations, an animated gif is required.

CALayer's anchorPoint property is the correct property to use for this, with the one minor annoyance that it works in the unit coordinate space, that is, it goes from 0 to 1, not in pixels:
You specify the value for this property using the unit coordinate space. The default value of this property is (0.5, 0.5), which represents the center of the layer’s bounds rectangle. All geometric manipulations to the view occur about the specified point. For example, applying a rotation transform to a layer with the default anchor point causes the layer to rotate around its center. Changing the anchor point to a different location would cause the layer to rotate around that new point.
Because of this, setting an anchor point in pixels would obviously result in some very strange behaviour. You would need to calculate your new anchor point in the unit coordinate space for it to work properly, so, instead of doing something like this:
board.layer.anchorPoint = CGPointMake(ball.x, ball.y);
you would do this:
board.layer.anchorPoint = CGPointMake(ball.x / board.layer.bounds.size.width,
                                      ball.y / board.layer.bounds.size.height);
UPDATE: When you change the anchorPoint property, the view will move. The anchorPoint, which is set relative to the layer in the unit coordinate space, is anchored to the layer's position property, which is set in the superview's coordinate space. So when you change the anchorPoint, the view moves such that the new anchor point lands where the old one was. You will need to compensate for this, as described in this answer.
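As a minimal sketch of that compensation (assuming the layer's transform is still identity, and reusing the ball coordinates from above):
// Change the anchor point without moving the board on screen.
CGPoint newAnchor = CGPointMake(ball.x / board.layer.bounds.size.width,
                                ball.y / board.layer.bounds.size.height);
CGPoint oldAnchor = board.layer.anchorPoint;
// position lives in the superlayer; shift it by the distance the anchor
// point moved, converted from unit space back into points.
CGPoint position = board.layer.position;
position.x += (newAnchor.x - oldAnchor.x) * board.layer.bounds.size.width;
position.y += (newAnchor.y - oldAnchor.y) * board.layer.bounds.size.height;
board.layer.anchorPoint = newAnchor;
board.layer.position = position;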

Related

Get SCNVector3 from CGPoint

I am trying to get an SCNVector3 from a CGPoint. I am using a gesture recognizer to get the location of a touch (as a CGPoint).
The problem is that the touch doesn't always hit something when I hit test because there isn't always an object being touched. (Touch an empty space to move your ship to that empty spot).
Other Stack Overflow questions that I have found use SCNHitTestResult to get the worldCoordinates, but that doesn't work for me.
Does anyone know how to find this, given that I know the z coordinate? Ships always move with a z position of 1.
I need the worldCoordinates to be able to use actions that move an SCNNode to a touch point, which is a CGPoint. Thanks!
So, you want to turn a point in view space into a point in scene space? The catch, of course, is that scene space has a third dimension and view space doesn't. You use the SCNView (or other renderer) methods projectPoint and unprojectPoint to convert between scene space, which is 3D, and view space, which is... also 3D? Yep: two dimensions of screen points, and one of normalized depth. The z-coordinate is 0 for points on the near clipping plane and 1 for points on the far clipping plane.
Anyhow, you have a useful constraint in that you're looking to map view-space points onto a specific plane (z=1) in scene space. You have an even more useful constraint if your scene space is oriented so that said plane is orthogonal to the view direction — i.e. the camera is pointing directly in the +z or -z direction.
If you want to map a view-space point to a particular scene-space depth, you need to know what the view-space depth for that plane is. Use projectPoint for that:
// Project any point known to lie on the target plane; its view-space
// z-component is the normalized depth of that plane.
SCNVector3 projectedPlaneCenter = [view projectPoint:planeNode.position];
float projectedDepth = projectedPlaneCenter.z;
Now, hold onto that and you can make use of it whenever you need to map a touch location onto that plane:
// Build a view-space point at the plane's depth, then unproject it.
CGPoint vp = [recognizer locationInView:view];
SCNVector3 vpWithDepth = SCNVector3Make(vp.x, vp.y, projectedDepth);
SCNVector3 scenePoint = [view unprojectPoint:vpWithDepth];
If your scene isn't oriented with the z-axis parallel to the camera, it's a bit harder — you have to work out where your z=1 plane is independently for any view-space point you process. In that case, you might find it easier to add an invisible SCNPlane to your scene and use the hitTest/worldCoordinates method to locate points on that plane.
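A hedged sketch of that fallback (the plane size and node names here are assumptions): add a large hidden plane at z = 1 and hit test against it, telling the hit test not to skip hidden nodes.
// An invisible plane at z = 1, used purely for hit testing.
SCNPlane *plane = [SCNPlane planeWithWidth:100.0 height:100.0];
SCNNode *planeNode = [SCNNode nodeWithGeometry:plane];
planeNode.position = SCNVector3Make(0, 0, 1);
planeNode.hidden = YES; // never rendered
[view.scene.rootNode addChildNode:planeNode];

// In the gesture handler, include hidden nodes in the hit test.
CGPoint p = [recognizer locationInView:view];
NSArray *hits = [view hitTest:p options:@{ SCNHitTestIgnoreHiddenNodesKey : @NO }];
if (hits.count > 0) {
    SCNHitTestResult *hit = hits.firstObject;
    SCNVector3 scenePoint = hit.worldCoordinates;
    // move the ship to scenePoint...
}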

Change NSImage Origin

Is it possible to change the origin of an NSImage? If so, how would I go about doing this? I have coordinates in a regular Cartesian system, some of them with negative values, and I am trying to draw them at the corresponding points in the NSImage, but since the origin is at (0,0) some of them are missing.
EDIT: Say I have a drawing operation that needs to be done to an image at the point (-10,-10); currently this doesn't show up. Is there a way to fix that?
If it works like iOS (you may have to adapt the code a little) and if my memory serves, you have to do this, since origin is read-only:
CGRect myFrame = yourImage.frame;
myFrame.origin.x = newX;
myFrame.origin.y = newY;
yourImage.frame = myFrame;
I think you are confusing an NSImage with its container. An NSImage has no bounds or frame, and thus no origin. It does have a size, which may represent the pixel dimensions of its bitmap representation (if it has one) or otherwise could represent its bounding box (if it is a vector image). Drawing in an image at a pixel location of (-10,-10) doesn't really make sense.
An NSImage is displayed in a container (typically an NSImageView), and the container's bounds.origin will dictate the placement of the image relative to the image view, but you can't modify pixels beyond the edge of the bitmap plane.
In any case you probably want to be using a subclassed NSView in which you would override the drawRect: method for your custom drawing. NSView does have a bounds.origin, but it is not relevant to your in-drawing coordinates; rather, it positions the drawn content as a whole within the view's bounding box. The coordinate system you will be drawing into is referenced to your graphics context, which will (usually) pin the origin (0,0) to the bottom-left corner (OS X) or top-left corner (iOS). If you are trying to represent negative points on a Cartesian plane, you will need to apply a translation transform to map your points into this positive coordinate space.
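A minimal sketch of that translation, assuming a custom NSView subclass whose content should be centred on the Cartesian origin:
- (void)drawRect:(NSRect)dirtyRect {
    // Move the origin to the centre of the view so negative coordinates
    // such as (-10,-10) fall inside the visible area.
    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform translateXBy:NSMidX(self.bounds) yBy:NSMidY(self.bounds)];
    [transform concat];

    // Draw in Cartesian coordinates; this dot lands below and left of centre.
    NSRect dot = NSMakeRect(-10.0 - 2.0, -10.0 - 2.0, 4.0, 4.0);
    [[NSColor redColor] setFill];
    [[NSBezierPath bezierPathWithOvalInRect:dot] fill];
}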
I'm trying to explain in a few words, badly, something which Apple explains in great detail in their Quartz 2D Programming Guide.

Objective-C - Positioning an object according to two points on it

Let's say I have a rectangle-shaped object. I want to move it along a path. Is it possible to position this object according to not just one point, but two points on it? For example, if point A on the object is at (125,220), I want point B to be at (140,235). This way I want to set the direction of the object.
In Objective-C (and I assume in other languages too), when we say "position of a graphical object" we think of only one point, which is usually the bottom-left corner. But positioning an object according to only that point will just redraw the object with the lower-left corner at another point, and the rest will be determined according to the height and the width of the object, which does not do what I want.
EDIT:
As you can see (and probably as you'd naturally expect), the object will move as a box from one point to another, because there's only one point determining its position. You ask why I need a different thing. Because I have a timer and a curved path. Each time the timer ticks I need my object to be at a different location (the next position in an array of dumped points). So, instead of adding to the X and Y coordinates, I explicitly tell the object to be at a certain place. This way I want to achieve normal movement of my object along the curved path. When the front part of the object moves to some point, I need the rear part to move to a certain point as well.
I finally found a way to do it. I have to rotate the object according to the previous and the next points. So, assume there are points A, B, C, D, E, F, G, H on the path that the object will travel along. If the car is at point D, I calculate the rotation angle as follows:
myObject.rotation = -atan((D.y - C.y) / (D.x - C.x)) / 3.141592 * 180;
As you can see, it's just maths. Fine tuning can be applied to get a better, smoother rotation. Here, for instance, I subtract the Y of the previous position from the Y of the current position, then I do the same for X, and then I take the negative arctangent of their ratio. But you can also do
-atan((E.y - C.y) / (E.x - C.x)) / 3.141592 * 180;
Choosing the right pair of positions whose x and y coordinates you subtract will result in the right, smooth rotation.
I think you can guess that 3.141592 is M_PI;
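As a hedged refinement (myObject.rotation is the asker's own property, and prev/curr stand in for consecutive points such as C and D): atan2 avoids the division by zero on vertical segments and picks the correct quadrant automatically.
// prev and curr are consecutive CGPoint samples on the path.
double degrees = -atan2(curr.y - prev.y, curr.x - prev.x) * 180.0 / M_PI;
myObject.rotation = degrees;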

How to render a 2d side-scroller game

I do not really understand the way I'm supposed to render a side-scroller. How do I know what to render when my character moves? What kind of positioning should I use for the characters?
I hope my question is clear.
The easiest way I've found to do it is to have characterX and characterY variables [integer or float, whatever you want], then have cameraX and cameraY variables. Every object in the scene is drawn at (theObjectX - cameraX, theObjectY - cameraY).
cameraX/cameraY are tweened by a midpoint-like formula so that eventually they reach playerX/playerY: Cx = (Cx*99 + Px)/100.
By doing this, every object moves in the stage's space, and is transformed only on render [saving you from headaches]
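A minimal sketch of that idea (Entity and drawSprite:atX:y: are assumptions, not a real API):
// Ease the camera toward the player, then draw everything camera-relative.
cameraX = (cameraX * 99.0 + playerX) / 100.0;
cameraY = (cameraY * 99.0 + playerY) / 100.0;
for (Entity *entity in entities) {
    [self drawSprite:entity.sprite
                 atX:entity.x - cameraX
                   y:entity.y - cameraY];
}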
Use a matrix to define a camera reference frame.
Use space partitioning to split up your level into screens/windows.
Think of your player sprite as any other entity, like enemies and interactive objects.
Now what you want is the abstraction of a camera. You can define a camera as a 3x3 matrix with this layout:
[rotX_X, rotY_X, 0]
[rotX_Y, rotY_Y, 0]
[transX, transY, 1]
The 2x2 sub-matrix in the top-left corner is a rotation matrix. transX and transY define the translation part, i.e. the origin. You also get scaling for free: simply scale the rotation part with a scalar, and you have yourself a zoom.
For this to work properly with rotation, your sprites need to be polygons/primitives, say like triangles or quads; you can't just apply the matrix to the positions of the sprites when drawing. If you don't need rotation, just transforming the center point will work fine.
If you want the camera to follow the player, use the player's position as the camera origin. That is the translation vector [transX, transY].
So how do you apply the matrix to entity positions and model vertices? You do a vector-matrix multiplication.
v' = vM^-1, where v' is the new vector, v is the old vector, and M^-1 is the matrix inverse. A camera needs to be an inverse transform because it defines a local coordinate system. An analogy: if you are in front of me and I turn left in my reference frame, then from your reference frame I am turning right. This applies to all affine and linear transformations, like scaling, rotation and translation.
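On iOS, CGAffineTransform has exactly this layout (the third column is implicitly [0 0 1]), so a hedged sketch of such a camera, with playerX, playerY, cameraAngle, zoom and worldPoint as assumed inputs, could be:
// Build the camera's local frame: translate to the player, then rotate and zoom.
CGAffineTransform camera = CGAffineTransformMakeTranslation(playerX, playerY);
camera = CGAffineTransformRotate(camera, cameraAngle);
camera = CGAffineTransformScale(camera, zoom, zoom);

// Rendering applies the inverse, mapping world space into view space.
CGAffineTransform worldToView = CGAffineTransformInvert(camera);
CGPoint viewPoint = CGPointApplyAffineTransform(worldPoint, worldToView);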
Split up your level into sub-parts so you can cull objects and scenery which do not need to be rendered. Your viewport is of a certain size/resolution, so only render scenery and entities which intersect with it. Instead of checking each and every entity against the viewport bounds, assign each entity to a certain sub-screen, and test the bounds of the sub-screen against the viewport and camera bounds. If you divide your level into parts which are the same size as your viewport, then the maximum number of screens visible at any particular time is:
2 if your camera only scrolls left and right.
4 if your camera scrolls left, right, up and down.
4 if your camera scrolls in any direction, and additionally can be rotated.
A screen-change is an event you can use to activate entities belonging to that screen. That could be enemies, background animations, doors or whatever you like.
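A minimal sketch of that per-screen culling (Screen, Entity and the draw/activate methods are hypothetical types and names):
// Only walk the entities of sub-screens that intersect the viewport.
for (Screen *screen in screens) {
    if (CGRectIntersectsRect(screen.bounds, viewportRect)) {
        [screen activateIfNeeded]; // fire the screen-change event once
        for (Entity *entity in screen.entities) {
            [self drawEntity:entity];
        }
    }
}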
If this is your first foray into writing a side-scroller, I'd suggest considering an existing game engine (like Construct or GameMaker or XNA, or whatever fits your experience level) so you don't have to worry about what order to render things in and how to make it all work. Mess with that a bit, probably exploring a few of them, to get a feel for how they do things, then venture out on your own once you've gotten used to it.
Not that there's anything wrong with baptism by fire, but it can get pretty overwhelming, in my opinion.

How to generate graphs using integer values on iPhone

I want to show a graph/bar chart on iPhone. How do I do this without custom APIs?
You may want to investigate the Core Plot project [code.google.com]. Core Plot was the subject of this year's scientific coding project at WWDC and is already pretty usable for some cases. From its inception, Core Plot was intended for both OS X and iPhone use. The source distribution (there hasn't been a binary release yet) comes with both OS X and iPhone example applications, and there's info on the project wiki for using it as a library in an iPhone app. Here's an example of its current plotting capabilities.
Write your own. It's not easy; I'm in the process of doing the same thing right now. Here's how I'm doing it:
First, ignore any desire you may have to try using a UIScrollView if you want to allow zooming. It's totally not worth it.
Second, create something like a GraphElement protocol. I have a hierarchy that looks something like this:
GraphElement
    GraphPathElement
        GraphDataElement
            GraphDataSupplierElement
GraphElement contains the basic necessary methods for a graph element, including how to draw, a maximum width (for zooming in), whether a point is within that element (for touches) and the standard touchBegan, touchMoved, and touchEnded functions.
GraphPathElement contains a CGPath, a line color and width, a fill color and a drawing mode. Whenever it's prompted to draw, it simply adds the path to the context, sets the colors and line width, and draws the path with the given drawing mode.
GraphDataElement, as a subclass of GraphPathElement, takes in a set of data in x-y coordinates, a graph type (bar or line), a frame, and a bounds. The frame is the actual size of the created output CGPath. The bounds is the size of the data in input coordinates. Essentially, it lets you scale the data to the screen size.
It creates a graph by first calculating an affine transform to transform the bounds to the frame, then it loops through each point and adds it as data to a path, applying that transform to the point before adding it. How it adds data depends on the type.
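A hedged sketch of that transform step (frame and bounds are the element's CGRect properties described above, and dataPoint one input point; these names are assumptions):
// Map data-space bounds onto the output frame: shift to the bounds origin,
// then scale up to the frame size.
CGAffineTransform t = CGAffineTransformMakeScale(frame.size.width / bounds.size.width,
                                                 frame.size.height / bounds.size.height);
t = CGAffineTransformTranslate(t, -bounds.origin.x, -bounds.origin.y);
CGPoint p = CGPointApplyAffineTransform(dataPoint, t);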
If it's a bar graph, it creates a rectangle of width 0, origin at (x, frame.size.height - y), and height = y. Then it "insets" the rect by -3 pixels horizontally, and adds that to the path.
If it's a line graph, it's much simpler. It just moves to the first point; then, for each subsequent point, it adds a line to that point, adds a circle in a rect around that point, and moves back to that point to go on to the next one.
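Hedged sketches of both cases (path is a CGMutablePathRef, p a transformed point, and points/pointCount the transformed data set; all of these names are assumptions):
// Bar: a zero-width rect at the data point, widened by a negative inset.
CGRect bar = CGRectMake(p.x, frame.size.height - p.y, 0.0, p.y);
bar = CGRectInset(bar, -3.0, 0.0); // negative inset expands horizontally
CGPathAddRect(path, NULL, bar);

// Line: move to the first point, then line-to each subsequent point,
// adding a small circle around each one as a marker.
CGPathMoveToPoint(path, NULL, points[0].x, points[0].y);
for (NSUInteger i = 1; i < pointCount; i++) {
    CGPathAddLineToPoint(path, NULL, points[i].x, points[i].y);
    CGPathAddEllipseInRect(path, NULL,
        CGRectMake(points[i].x - 2.0, points[i].y - 2.0, 4.0, 4.0));
    CGPathMoveToPoint(path, NULL, points[i].x, points[i].y); // continue the line
}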
GraphDataSupplierElement is the interface to my database that actually contains all the data. It determines what kind of graph it should be, formats the data into the required type for GraphDataElement, and passes it on, with the color to use for that particular graph.
For me, the x-axis is time, and is represented as NSTimeIntervals. The GraphDataSupplierElement contains a minDate and maxDate so that a GraphDateElement can draw the x-axis labels as required.
Once all this is done, you need to create the actual graph. You can go about it several ways. One option is to keep all the elements in an NSArray and, whenever drawRect: is called, loop through each element and draw it. Another option is to create a CALayer for each element, and use the GraphPathElement as the CALayer's delegate. Or you could make GraphPathElement extend from CALayer directly. It's up to you on this one. I haven't gotten as far as trying CALayers yet; I'm still stuck in the simple NSArray stage. I may move to CALayers at some point, once I'm satisfied with how everything looks.
So, all in all, the idea is that you create the graph as one or many CGPaths beforehand, and just draw that when you need to draw the graph, rather than trying to actually parse data whenever you get a drawRect: call.
Scaling can be done by keeping the source data in your GraphDataElement and just changing the frame, so that scaling the bounds to the frame creates a CGPath wider than the screen, or whatever your needs are. I basically re-implemented my own pinch-zoom for my Graph UIView subclass, which only scales horizontally, by changing its transform; then, on completion, I get the current frame, reset the transform to identity, set the frame to the saved value, and set the frame of all of the GraphElements to the new frame as well, to make them scale. Then just call [self setNeedsDisplay] to draw.
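A hedged sketch of that end-of-pinch step (elements and setFrame: follow the description above rather than any real API):
- (void)pinchEnded {
    // The frame already reflects the temporary transform applied while pinching.
    CGRect finalFrame = self.frame;
    self.transform = CGAffineTransformIdentity;
    self.frame = finalFrame;

    // Let each element rebuild its CGPath at the new scale.
    for (id element in self.elements) {
        [element setFrame:finalFrame];
    }
    [self setNeedsDisplay];
}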
Anyway, that's a bit ramble-ish, but it's an outline of how I made it happen. If you have more specific questions, feel free to comment.