What are anchor points? - objective-c

I already tried reading this: http://www.qcmat.com/understanding-anchorpoint-in-cocos2d/
But I got lost in the second example. (how does an anchor point of ccp(-1,-1) mean “Place the anchor 1 * myWidth to the left and 1 * myHeight under the sprite”?) Can somebody explain how anchor points work to me?
Thanks!

An anchor point is defined relative to the sprite, in unit coordinates: if the anchor point is (0,0), it is at the bottom-left corner; if (1,1), at the top-right corner; if (0.5, 0.5), at the center.
An anchor point of (-1, -1) therefore lies outside of the sprite, at the coordinates you mention in your question: one full width to the left of the bottom-left corner and one full height below it. Geometrically, it is the mirror image of the top-right corner with respect to the bottom-left corner (trace a diagonal from the top-right corner to the bottom-left corner, extend it beyond the latter point by the same length, and you reach the point symmetric to the top-right corner).
Since scaling and rotation (and other properties) are applied relative to the anchor point, if you set the anchor point to (-1, -1) and rotate the sprite, you will see the sprite describe a circle around that external point (this is a rotation plus a translation).
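To make the arithmetic concrete, here is a minimal Python sketch (plain arithmetic, not cocos2d code; the function name is mine) that converts a unit-space anchor point into pixel offsets measured from the sprite's bottom-left corner:

def anchor_in_pixels(anchor, width, height):
    # Unit coordinates: (0,0) = bottom-left, (1,1) = top-right
    ax, ay = anchor
    return (ax * width, ay * height)

# For a 100x40 sprite:
print(anchor_in_pixels((0.5, 0.5), 100, 40))  # (50.0, 20.0) -- the center
print(anchor_in_pixels((-1, -1), 100, 40))    # (-100, -40) -- one width left, one height below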

See this link for an interactive demo of what anchor points are: http://sibirjak.com/osflash/projects/as3commons-ui/layers/examples/placementdemo/ It should tell you more than a long narrative description could.

Related

How do 'normalized figure coordinates' work?

In matplotlib, I recently came across the term 'normalized figure coordinates', which is apparently a specification of a rectangle by four parameters.
It is evident that a rectangle can be described by four numbers, and I'm guessing these four numbers somehow describe the dimensions as well as the location of the rectangle. However, I haven't managed to find an answer as to which of these parameters specifies which value.
Additionally, I'm not sure whether this is a matplotlib-specific term or one of general meaning, as the matplotlib documentation does not cite or link any sources with respect to this term.
Can anyone shed some light on this issue, please?
There are several functions where normalized figure coordinates are used.
In general, the possibilities are
(left, bottom, width, height) (this is called "bounds" in matplotlib); or
(left, bottom, right, top) (called "extent").
The documentation should make clear which 4-tuple is expected in each case.
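As a quick illustration of the difference, here is a small Python sketch (the function names are mine, not matplotlib's) converting between the two conventions:

def bounds_to_extent(left, bottom, width, height):
    # (left, bottom, width, height) -> (left, bottom, right, top)
    return (left, bottom, left + width, bottom + height)

def extent_to_bounds(left, bottom, right, top):
    # (left, bottom, right, top) -> (left, bottom, width, height)
    return (left, bottom, right - left, top - bottom)

# The two tuples only contain the same numbers when left = bottom = 0:
print(bounds_to_extent(0.0, 0.0, 0.5, 0.5))    # (0.0, 0.0, 0.5, 0.5)
print(bounds_to_extent(0.25, 0.25, 0.5, 0.5))  # (0.25, 0.25, 0.75, 0.75)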
Here you seem to be interested in the rect parameter of GridSpec's tight_layout. From its documentation:
rect : tuple of 4 floats, optional
(left, bottom, right, top) rectangle in normalized figure coordinates that the whole subplots area (including labels) will fit into. Default is (0, 0, 1, 1).
To answer your last question: the term normalization is not matplotlib-specific; you can get a very short introduction from Wikipedia.
As for Matplotlib: you can have different coordinate systems relative to different objects (e.g. the axis, the figure).
Each of these systems is normalized, in the sense that the 4 corners of the chosen reference object will always have the following coordinates:
(0,1) Top left corner
(1,1) Top right corner
(1,0) Bottom right corner
(0,0) Bottom left corner
where the first element of each pair refers to the x-axis and the second element refers to the y-axis.
This makes, among other things, annotation or placements of artist objects easier as you can specify the position of the element you wish to add using any of the available coordinate systems.
All you need to do is select an appropriate coordinate system by passing a transformation object to the transform parameter.
An example:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([5.], [2.], 'o')

# Bottom (y=0) left (x=0) green circle of radius 0.1, expressed in axes coordinates
circle = plt.Circle((0, 0), 0.1, color="g", transform=ax.transAxes)
ax.add_artist(circle)

ax.annotate('I am the top (y=1.0) right (x=1.0) Figure corner',
            xy=(1, 1), xycoords=fig.transFigure,
            xytext=(0.2, 0.2), textcoords='offset points')

plt.text(  # position text relative to data
    5., 2., 'I am the (5,2) data point',  # x, y, text
    ha='center', va='bottom',             # text alignment
    transform=ax.transData                # coordinate system transformation
)

plt.text(  # position text relative to Axes
    1.0, 0.0, 'I am the bottom (y=0.0) right (x=1.0) axis corner',
    ha='right', va='bottom',
    transform=ax.transAxes
)

plt.text(  # position text relative to Figure
    0.0, 1.0, 'I am the top (y=1.0) left (x=0.0) figure corner',
    ha='left', va='top',
    transform=fig.transFigure
)

plt.show()

How to change the anchor point from the top-left corner of a transformation matrix to the bottom-left corner?

Say, I have an image on an HTML page.
I apply an affine transformation to the image using CSS3 matrix function.
It looks like:
img#myimage {
    transform: matrix(a, b, c, d, tx, ty);
    /* use -webkit-transform, -moz-transform etc. */
}
The origin of an HTML page is the top-left corner and the y-axis is inverted.
I'm trying to put the same image in an environment (cocos2d) where the origin is the bottom-left corner and the y-axis is upright.
To get the same result in the other environment, I need to transform the origin somehow and reflect that in the resulting CGAffineTransform.
It would be great if I can get some help with the matrix math that goes here. (I'm not so good with matrices.)
The following formula converts a y position from CSS3 to cocos2d:
y_cocos2d = screenHeight - y_css3 - objectHeight
Explanation:
To give the cocos2d environment the same origin as the CSS3 environment, we only have to add the screen height to the body's y coordinate in cocos2d.
E.g., the screen size is (100,100) and the body is a point object. If you place it at (0,0) in CSS3, it sits at the top-left corner. If we add the screen height to the y coordinate in cocos2d, the object is placed at (0,100), which is the top-left corner for cocos2d as well.
Since the y-axis is inverted, to make the coordinates match we have to subtract the y coordinate given in CSS3 from the screen height. Suppose we place the same point object from the previous example at (0,10) in CSS3; we would place it at (0, 100 - 10) in cocos2d, which is the same position on the screen.
Since our body will NOT always be a point object, we also have to take its size into account. If the body's height is 20 and we place it at (0,10) in CSS3, it is positioned by its top edge and extends downward, because the y-axis is inverted.
Hence we also have to subtract the body's height from the screen height along with the y coordinate, placing it at (0, 100 - 10 - 20) and putting the body at the same place in the cocos2d environment.
I hope I am correct and clear :)
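A small Python sketch of this conversion (the names are illustrative, not a cocos2d API):

def css_to_cocos2d_y(y_css, object_height, screen_height):
    # Convert a top-left-origin CSS3 y coordinate to a
    # bottom-left-origin cocos2d y coordinate.
    return screen_height - y_css - object_height

# The worked example above: screen height 100, body height 20, placed at y=10 in CSS3
print(css_to_cocos2d_y(10, 20, 100))  # 70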

Visualizing the Anchor Point of a UIImageView

Is there an easy way of putting a mark (like a cross, for example) on the anchor point of a UIImageView? I'm trying to line up several rotating images by their anchor points, and being able to see these points would make the job a lot easier.
Many thanks.
You are asking how to visualize the anchor point within a view, but it seems to me that you are asking for it so that you can align the anchor points. I'll try to answer both questions.
Visualizing the anchor point.
Every view on iOS has an underlying layer that has an anchor point. The anchor point is in the unit coordinate space of the layer (x and y go from 0 to 1). This means that you can multiply x by the width and y by the height to get the position of the anchor point inside the layer, in the coordinate space of the view/layer. You can then place a subview/sublayer there to show the location of the anchor point.
In code you could do something like this to display a small black dot where the anchor point is.
// A small round black dot to mark the anchor point
CALayer *anchorPointLayer = [CALayer layer];
anchorPointLayer.backgroundColor = [UIColor blackColor].CGColor;
anchorPointLayer.bounds = CGRectMake(0, 0, 6, 6);
anchorPointLayer.cornerRadius = 3; // half of the width/height makes it a circle

// Scale the anchor point from unit coordinates into the layer's coordinate space
CGPoint anchor = viewWithVisibleAnchorPoint.layer.anchorPoint;
CGSize size = viewWithVisibleAnchorPoint.layer.bounds.size;
anchorPointLayer.position = CGPointMake(anchor.x * size.width,
                                        anchor.y * size.height);
[viewWithVisibleAnchorPoint.layer addSublayer:anchorPointLayer];
You can see the result in the image below for four different rotations.
Aligning layers by their anchor point
That is cool and all, but it's actually easier than that to align anchor points.
The key trick is that the position and the anchorPoint are always the same point, only expressed in two different coordinate spaces. The position is specified in the coordinate space of the superlayer; the anchor point is specified in the unit coordinate space of the layer itself.
The nice thing about this is that views whose position properties are aligned will automatically have their anchor points aligned. Note that the content is drawn relative to the anchor point. Below is an example of a bunch of views that all have the same y component of their position, so they are aligned in y.
There really isn't any special code to do this. Just make sure that the position properties are aligned.
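To see why this works, here is a small Python sketch (plain arithmetic, not Core Animation calls; the function name is mine) showing two layers with different anchor points but the same position: their anchor points coincide on screen while their frames do not.

def frame_origin(position, anchor, size):
    # With an identity transform: frame.origin = position - anchor * size
    return (position[0] - anchor[0] * size[0],
            position[1] - anchor[1] * size[1])

pos = (100.0, 50.0)  # shared position -> the anchor points land at the same screen point
print(frame_origin(pos, (0.5, 0.5), (40, 40)))  # (80.0, 30.0)
print(frame_origin(pos, (0.0, 0.0), (40, 40)))  # (100.0, 50.0)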

What's the difference between layer.position.y and layer.frame.origin.y?

It seems like one points directly to the layer, and the other to the frame, but how are they functionally different? They both determine the position of the same view...
The origin of the frame is the upper left corner of the frame while the position is (as long as you don't change the anchor point) the center of the frame.
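More generally, the two are related through the anchor point. A quick Python sketch of the relationship (plain arithmetic, assuming an identity transform):

def position_y(frame_origin_y, frame_height, anchor_y):
    # position.y = frame.origin.y + anchorPoint.y * frame height
    return frame_origin_y + anchor_y * frame_height

# Default anchor point (0.5): the position is the frame's vertical center
print(position_y(100, 50, 0.5))  # 125.0
# Anchor point at 0: position.y coincides with frame.origin.y
print(position_y(100, 50, 0.0))  # 100.0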

Calculating collision for a moving circle, without overlapping the boundaries

Let's say I have a circle bouncing around inside a rectangular area. At some point this circle will collide with one of the surfaces of the rectangle and reflect back. The usual way I'd do this would be to let the circle overlap that boundary and then reflect the velocity vector. The fact that the circle actually overlaps the boundary isn't usually a problem, nor really noticeable at low velocity. At high velocity it becomes quite clear that the circle is doing something it shouldn't.
What I'd like to do is to programmatically take reflection into account and place the circle at its proper position before displaying it on the screen. This means that I have to calculate the point where it hits the boundary between its current position and its future position, rather than calculating its new position and then checking if it has hit the boundary.
This is a little bit more complicated than the usual circle/rectangle collision problem. I have a vague idea of how I should do it: basically, create a bounding rectangle between the current position and the new position, which brings up a slew of problems of its own (since the rectangle is rotated according to the direction of the circle's velocity). However, I'm thinking that this is a common problem, and that a common solution already exists.
Is there a common solution to this kind of problem? Perhaps some basic theories which I should look into?
Since you just have a circle and a rectangle, it's actually pretty simple. A circle of radius r bouncing around inside a rectangle of dimensions w, h can be treated the same as a point p at the circle's center, inside the rectangle inset by r on each side (its corners are at (r, r) and (w-r, h-r)).
Now position update becomes simple. Given your point at position x, y and a per-frame velocity of dx, dy, the updated position is x+dx, y+dy - except when you cross a boundary. If, say, you end up with x+dx > W (letting W = w-r), then you do the following:
crossover = (x+dx) - W  // this is how far "past" the edge your ball went
x = W - crossover       // so you bring it back the same amount on the correct side
dx = -dx                // and flip the velocity to the opposite direction
And similarly for y. You'll have to set up a similar (reflected) check for the opposite boundaries in each dimension.
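Putting both edges of one dimension together, here is a minimal runnable Python sketch of the per-frame update (variable names follow the answer; the center is constrained to [r, w-r], and a single reflection per step is assumed):

def step_axis(x, dx, lo, hi):
    # Advance one coordinate by dx, reflecting off the inset
    # rectangle's edges lo and hi (at most one bounce per step).
    x += dx
    if x > hi:            # crossed the far edge
        x = hi - (x - hi)
        dx = -dx
    elif x < lo:          # crossed the near edge
        x = lo + (lo - x)
        dx = -dx
    return x, dx

# Circle of radius r in a w x h rectangle: the center lives in [r, w-r] x [r, h-r]
w, h, r = 100.0, 60.0, 5.0
x, y, dx, dy = 90.0, 30.0, 10.0, 0.0
x, dx = step_axis(x, dx, r, w - r)  # 90+10 = 100 crosses 95 -> comes back to 90, dx flips
y, dy = step_axis(y, dy, r, h - r)
print(x, dx)  # 90.0 -10.0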
At each step, you can calculate the projected/expected position of the circle for the next frame.
If this lies outside the rectangle, you can use the distance from the old circle position to the rectangle's edge, together with the amount "past" the edge that the next position lies at (the interpenetration), to linearly interpolate and determine the precise time when the circle "hits" the rectangle edge.
For example, if the circle is 10 pixels away from the rectangle's edge and is predicted to move to 5 pixels beyond it, you know that for 2/3 of the timestep (10/15ths) it moves on its original path; it is then reflected and continues on its new path for the remaining 1/3 of the timestep (5/15ths). By calculating these two parts of the motion and "adding" the translations together, you can find the correct new position.
(Of course, it gets more complicated if you hit near a corner, as there may be several collisions during the timestep, off different edges. And if you have more than one circle moving, things get a lot more complex. But that's where you can start for the case you've asked about)
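Here is a sketch of that interpolation for a single axis in Python (using the numbers from the example above; corners and multiple bounces are not handled):

def reflect_with_toi(x, dx, edge):
    # Advance x by dx; if it crosses `edge`, spend the first fraction
    # of the timestep reaching it, then travel the rest reflected.
    if x + dx <= edge:
        return x + dx, dx
    t_hit = (edge - x) / dx   # fraction of the step before impact
    remaining = 1.0 - t_hit
    dx = -dx                  # reflect the velocity
    return edge + dx * remaining, dx

# 10 units from the edge, moving 15 per step: impact at t = 10/15 = 2/3
print(reflect_with_toi(0.0, 15.0, 10.0))  # approx. (5.0, -15.0) up to float rounding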
Reflection across a rectangular boundary is incredibly simple. Just take the amount that the object passed the boundary and subtract it from the boundary position. If the position without reflecting would be (-0.8,-0.2) for example and the upper left corner is at (0,0), the reflected position would be (0.8,0.2).