How to change the anchor point from the top-left corner of a transformation matrix to the bottom-left corner? - objective-c

Say, I have an image on an HTML page.
I apply an affine transformation to the image using CSS3 matrix function.
It looks like:
img#myimage {
transform: matrix(a, b, c, d, tx, ty);
/* use -webkit-transform, -moz-transform etc. */
}
The origin of an HTML page is the top-left corner and the y-axis is inverted.
I'm trying to put the same image in an environment (cocos2d) where the origin is the bottom-left corner and the y-axis is upright.
To get the same result in the other environment, I need to transform the origin somehow and reflect that in the resulting CGAffineTransform.
It would be great if I could get some help with the matrix math involved here. (I'm not so good with matrices.)

The following formula works for converting a position from CSS3 to cocos2d:
cocos2d y = screen height - CSS3 y - height of object
Explanation:
To give the cocos2d environment the same origin as the CSS3 environment, we would only have to add the screen height to the body's y coordinate in cocos2d.
E.g. the screen size is (100, 100) and the body is a point object. If you place it at (0, 0) in CSS3 it sits at the top-left corner. If we add the screen height to the y coordinate for cocos2d, the object is placed at (0, 100), which is the top-left corner for cocos2d as well.
To make the coordinates match, since the y axis is inverted, we have to subtract the y coordinate given in CSS3 from the screen height for cocos2d. Suppose we place the same point object from the previous example at (0, 10) in CSS3; we would place it at (0, 100 - 10) in cocos2d, which is the same position on the screen.
Since our body will NOT always be a point object, we also have to take care of its anchor point, i.e. its height. Suppose the body's height is 20 and we place it at (0, 10) in CSS3: it is positioned by its top-left corner and extends downwards, because the y axis is inverted.
Hence we also have to subtract the body's total height, along with the y coordinate, from the screen height, which gives (0, 100 - 10 - 20) and puts the body at the same place in the cocos2d environment.
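As a minimal Objective-C sketch of the conversion above (the variable names and example numbers are illustrative; the flip-matrix part is one way to handle the full transform, and real cocos2d nodes additionally apply their own anchorPoint):
// Illustrative values; substitute your own screen and sprite metrics.
CGFloat screenHeight = 100.0f;   // cocos2d scene height
CGFloat objectHeight = 20.0f;    // height of the sprite
CGPoint cssPosition  = CGPointMake(0.0f, 10.0f);   // top-left based CSS3 position

// Position conversion from the formula above: flip the y axis and account for the height.
CGPoint cocosPosition = CGPointMake(cssPosition.x,
                                    screenHeight - cssPosition.y - objectHeight);

// For the affine transform itself, one option is to conjugate the CSS3 matrix with
// a vertical flip F(x, y) = (x, screenHeight - y); F is its own inverse, so the
// converted transform is F applied first, then the CSS3 matrix, then F again.
CGFloat a = 1, b = 0, c = 0, d = 1, tx = 0, ty = 0;   // values from matrix(a, b, c, d, tx, ty)
CGAffineTransform flip  = CGAffineTransformMake(1, 0, 0, -1, 0, screenHeight);
CGAffineTransform css   = CGAffineTransformMake(a, b, c, d, tx, ty);
CGAffineTransform cocos = CGAffineTransformConcat(CGAffineTransformConcat(flip, css), flip);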
I hope I am correct and clear :)

Related

Line Profile Diagonal

When you make a line profile of all x-values or all y-values the extraction from each pixel is clear. But when you take a line profile along a diagonal, how does DM choose which pixels to use in the one dimensional readout?
Not really a scripting question, but I'm rather certain that it uses bi-linear interpolation between the grid points along the drawn line. (And if perpendicular integration is enabled, it integrates those interpolated values as well.) It's the same interpolation you would get from a "rotate image" operation.
In fact, you can think of it as a rotate-image (bi-linearly interpolated) with a 'cut-out' afterwards, potentially summed/projected onto the new X-axis.
Here is an example
Assume we have a 5 x 4 image, which gives the grid as shown below.
I'm drawing the top-left corners to indicate the coordinate-system pixel convention used in DigitalMicrograph, where
(x/y)=(0/0) is the top-left corner of the image
Now extract a LineProfile from (1/1) to (4/3). I have highlighted the pixels for those coordinates.
Note that a line drawn from the corners seems to be shifted by half a pixel from what feels 'natural', but that is a consequence of the top-left-corner convention. I think this is why a LineProfile marker is shown shifted compared to, e.g., LineAnnotations.
In general, this top-left-corner convention makes schematics with 'pixels' seem counter-intuitive. It is easier to think of the image simply as a grid with values at points at the given coordinates than as square pixels.
Now the maths.
The exact profile has a length of sqrt(dX^2 + dY^2) = sqrt(3^2 + 2^2) = sqrt(13) ≈ 3.61 pixels.
As we can only have profiles with an integer number of channels, we actually extract a LineProfile of length = 4, i.e. we round up.
The angle of the profile is given by the arc-tangent of dY over dX, here atan(2/3) ≈ 33.7°.
So to extract the profile, we 'rotate' the grid by that angle - done by bilinear interpolation - and then extract the profile as a grid of size 4 x 1:
This means the 'values' in the profile are from the four points:
Which are each bi-linearly interpolated values from four closest points of the original image:
In case the LineProfile is averaged over a certain width W, you do the same thing but:
extract a 2D grid of size L x W centered symmetrically over the line, i.e. the grid is shifted by (W-1)/2 perpendicular to the profile direction.
sum the values along W
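As a rough illustration of the sampling described above, here is a small C-style sketch (not DM script; the function names and the unit-spacing choice are assumptions of mine, and DigitalMicrograph's actual implementation may differ in details such as end-point handling):
#include <math.h>

/* Bilinearly interpolate a row-major float image at fractional (x, y). */
static float BilinearSample(const float *image, int width, int height,
                            float x, float y) {
    int x0 = (int)floorf(x);
    int y0 = (int)floorf(y);
    int x1 = x0 + 1 < width  ? x0 + 1 : width  - 1;
    int y1 = y0 + 1 < height ? y0 + 1 : height - 1;
    float fx = x - (float)x0;
    float fy = y - (float)y0;
    float top    = image[y0 * width + x0] * (1.0f - fx) + image[y0 * width + x1] * fx;
    float bottom = image[y1 * width + x0] * (1.0f - fx) + image[y1 * width + x1] * fx;
    return top * (1.0f - fy) + bottom * fy;
}

/* Sample `channels` points at unit spacing along the line from (sx, sy) towards
   (ex, ey) - the "rotated 4 x 1 grid" from the example above. */
static void ExtractLineProfile(const float *image, int width, int height,
                               float sx, float sy, float ex, float ey,
                               float *profile, int channels) {
    float dx = ex - sx, dy = ey - sy;
    float len = sqrtf(dx * dx + dy * dy);   /* sqrt(13) ≈ 3.61 in the example */
    float ux = dx / len, uy = dy / len;     /* unit vector along the profile  */
    for (int i = 0; i < channels; i++) {
        profile[i] = BilinearSample(image, width, height,
                                    sx + ux * (float)i, sy + uy * (float)i);
    }
}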

Problem drawing a rectangle in Godot fragment shader

I have a fragment shader that draws some stuff. On top of that I want it to draw a 1-pixel-thick rectangle around the fragment. I have been using the step function, but the problem is that the UV coordinates only range from 0.0 to 1.0. How do I know when the fragment is at a specific pixel? I want to draw on the edges.
c.r = step(0.99, UV.x);
c.r += step(0.99, 1.0-UV.x);
c.r += step(0.99, UV.y);
c.r += step(0.99, 1.0-UV.y);
The code above does draw a rectangle, but the problem is that the thickness is 1% of the total width/height rather than one pixel.
Is there any good description of UV, FRAGCOORD, SCREEN_TEXTURE and SCREEN_UV?
If it is good enough for you to work in screen coordinates (i.e., you want to define position and thickness in terms of screen space) you can use FRAGCOORD. It corresponds to the (x, y) pixel coordinates within the viewport, i.e., with the default viewport of 1024 x 600, the lower left pixel would be (0, 0), and the top right would be (1024, 600).
If you want to map the fragment coordinates back to world space (i.e., you want to define position and thickness in terms of world space), you must follow the work-around mentioned here.

Visualizing the Anchor Point of a UIImageView

Is there an easy way of putting a mark (like a cross, for example) on the anchor point of a UIImageView? I'm trying to line up several rotating images by their anchor point, and being able to see these points would make the job a lot easier.
Many thanks.
You are asking how to visualize the anchor point within a view, but it seems to me that you are really asking for it so that you can align the anchor points. I'll try to answer both questions.
Visualizing the anchor point.
Every view on iOS has an underlying layer that has an anchor point. The anchor point is in the unit coordinate space of the layer (x and y go from 0 to 1). This means that you can multiply x by the width and y by the height to get the position of the anchor point inside the layer, in the coordinate space of the view/layer. You can then place a subview/sublayer there to show the location of the anchor point.
In code you could do something like this to display a small black dot where the anchor point is.
// A small round black dot, 6 x 6 points with a corner radius of 3.
CALayer *anchorPointLayer = [CALayer layer];
anchorPointLayer.backgroundColor = [UIColor blackColor].CGColor;
anchorPointLayer.bounds = CGRectMake(0, 0, 6, 6);
anchorPointLayer.cornerRadius = 3;

// Convert the anchor point from the layer's unit coordinates to its bounds coordinates.
CGPoint anchor = viewWithVisibleAnchorPoint.layer.anchorPoint;
CGSize size = viewWithVisibleAnchorPoint.layer.bounds.size;
anchorPointLayer.position = CGPointMake(anchor.x * size.width,
                                        anchor.y * size.height);
[viewWithVisibleAnchorPoint.layer addSublayer:anchorPointLayer];
You can see the result in the image below for four different rotations.
Aligning layers by their anchor point
That is cool and all, but it's actually easier than that to align anchor points.
The key trick is that the position and the anchorPoint are always the same point, only expressed in two different coordinate spaces. The position is specified in the coordinate space of the super layer. The anchor point is specified in the unit coordinate space of the layer.
The nice thing about this is that views that have their position property aligned will automatically have their anchorPoint aligned. Note that the content is drawn relative to the anchor point. Below is an example of a bunch of views that all have the same y component of their position, thus they are aligned in y.
There really isn't any special code to do this. Just make sure that the position properties are aligned.
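Purely as an illustration of that last point (the image views and the rotation angles below are made up):
// Hypothetical image views: giving their layers the same position.y aligns their
// anchor points, no matter how each view is rotated.
NSArray<UIView *> *views = @[imageView1, imageView2, imageView3];
CGFloat sharedY = 200.0;
[views enumerateObjectsUsingBlock:^(UIView *view, NSUInteger idx, BOOL *stop) {
    view.layer.position = CGPointMake(view.layer.position.x, sharedY);
    view.transform = CGAffineTransformMakeRotation(idx * M_PI_4);   // different rotations
}];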

Quartz scaling sprite vertical range but not horizontal when go to fullscreen mode / increase window size

I have created a Quartz composition for use in a Mac OS program as part of my interface.
I am relying on the fact that, within a composition, sprite movement (a text bullet point in my case) is limited in both the X and Y axes to a minimum of -1 and a maximum of +1.
When I scale up the window / make my window full screen, I find that the horizontal axis (X) remains the same, with -1 being my far left point and +1 being my far right point. However, the vertical axis (Y) changes: in full-screen mode it goes from -0.7 to +0.7.
This scaling is screwing with my calculations. Is there any way to get the application to keep the scale as -1 to +1 for both the horizontal and vertical axes? Or is there a way of determining the upper and lower limits?
Appreciate any help/pointers
Quartz Composer viewer Y limits are usually -0.75 -> 0.75, but it's only a matter of aspect ratio. The X limits are always -1 -> 1; the Y limits depend on them.
You might want to dynamically assign custom width and height variables, capturing the context bounds size. For example:
double myWidth = context.bounds.size.width;
double myHeight = context.bounds.size.height;
Where "context" is your viewer context object.
If you're working directly with the QC viewer: you should use the Rendering Destination Dimensions patch, which will give you the width and the height. Divide the height by 2 to get the upper Y limit, then multiply the result by -1 to get the other side.
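As a minimal sketch of the aspect-ratio math (assuming, as in the snippet above, that "context" exposes the viewer's bounds; with X fixed at -1 to +1, the Y limits follow directly):
// With X spanning -1 .. +1 (2 units across the full width), one pixel is
// 2 / width units, so the Y half-range in units is height / width.
double myWidth  = context.bounds.size.width;    // e.g. 800
double myHeight = context.bounds.size.height;   // e.g. 600
double yMax =  myHeight / myWidth;              // e.g.  0.75
double yMin = -myHeight / myWidth;              // e.g. -0.75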

Convert point coordinates to a new coordinate system

Let's say I have a point with the coordinates (50, 100), where (0, 0) is in the upper left corner of a view.
How can I get the coordinates of the same point if I want the beginning of the coordinate system to be the center of the screen (ie width/2, height/2) ?
Note that I am implementing a custom View and I am drawing inside it, and I just want to convert the coordinates inside that same view. I am basically implementing a graphing calculator and I need my coordinate system to start in the middle of the screen so the graphs look better.
I notice you tagged it as an iOS problem, so use the method Apple has built into UIView:
- (CGPoint)convertPoint:(CGPoint)point fromView:(UIView *)view;
Find the midpoint you will be using; for a 100x100 screen this would be (50, 50). Then take the point you need to convert: subtract the midpoint X value from the point X value, and subtract the point Y value from the midpoint Y value. Notice that you are not doing the same operation on both values - the Y axis is flipped so that positive Y points up from the center.
So if the point is (30,25) the new point would be (-20,25) because 30 - 50 = -20 and 50 - 25 = 25.
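A minimal Objective-C sketch of that arithmetic (the 100 x 100 size and the sample point are the illustrative values from the answer):
// Convert a point from a top-left origin to a centered origin with Y pointing up.
CGSize viewSize  = CGSizeMake(100.0, 100.0);
CGPoint midpoint = CGPointMake(viewSize.width / 2.0, viewSize.height / 2.0);   // (50, 50)
CGPoint p        = CGPointMake(30.0, 25.0);

// X keeps its direction; Y is subtracted the other way round to flip the axis.
CGPoint centered = CGPointMake(p.x - midpoint.x, midpoint.y - p.y);            // (-20, 25)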