CGContext - "modulo" drawing? - objective-c

Imagine I want to draw a custom view in a given rectangle (e.g. 100 x 100 pixels). My custom view's contents might be bigger than 100 x 100. Instead of having some content not drawn, I'd like to draw all content inside the 100 x 100 area. For example, a point that would normally be located at (125, 140) would now be drawn at (25, 40).
Is there any way to do this without having to (majorly) modify the drawing code? Keep in mind that I also draw more complex shapes, like bezier paths.

Perhaps you could scale your drawing space via CGContextScaleCTM(...).
For example:
// Inside drawRect:, scale the context so the full content fits the view
CGContextRef context = UIGraphicsGetCurrentContext();
CGFloat sx = self.frame.size.width / desiredWidth;
CGFloat sy = self.frame.size.height / desiredHeight;
CGContextScaleCTM(context, sx, sy);
EDIT:
As Codo suggests below, you may be looking for CGContextTranslateCTM(...) which will offset your context's coordinate space by some x/y value.
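To make that concrete, here is a minimal sketch, assuming the drawing happens in drawRect: and that drawContentInContext: is a hypothetical stand-in for the existing drawing code:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Shift the origin so that a point at (125, 140) lands at (25, 40)
    CGContextTranslateCTM(context, -100.0, -100.0);
    [self drawContentInContext:context]; // hypothetical helper containing the original drawing code
}
Note that a translation only shifts the coordinate space; for true wrap-around you would need to draw the content more than once, with a different offset each time.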

Related

Problem drawing a rectangle in Godot fragment shader

I have a fragment shader that draws some stuff. On top of that, I want it to draw a 1-pixel-thick rectangle around the fragment. I have been using the step function, but the problem is that the UV coordinates range between 0.0 and 1.0. How do I know when the fragment is at a specific pixel? I want to draw on the edges.
c.r = step(0.99, UV.x);
c.r += step(0.99, 1.0-UV.x);
c.r += step(0.99, UV.y);
c.r += step(0.99, 1.0-UV.y);
The code above does draw a rectangle, but the problem is that the thickness is 1% (0.01) of the total width/height rather than a fixed number of pixels.
Is there any good description of UV, FRAGCOORD, SCREEN_TEXTURE and SCREEN_UV?
If it is good enough for you to work in screen coordinates (i.e., you want to define position and thickness in terms of screen space), you can use FRAGCOORD. It corresponds to the (x, y) pixel coordinates within the viewport; e.g., with the default viewport of 1024 x 600, the lower left pixel would be (0, 0) and the top right would be (1024, 600).
If you want to map the fragment coordinates back to world space (i.e., you want to define position and thickness in terms of world space), you must follow the work-around mentioned here.

Visualizing the Anchor Point of a UIImageView

Is there an easy way of putting a mark (like a cross, for example) on the anchor point of a UIImageView? I'm trying to line up several rotating images by their anchor points, and being able to see these points would make the job a lot easier.
Many thanks.
You are asking how to visualize the anchor point within a view, but it seems to me that you are asking so that you can align the anchor points more easily. I'll try to answer both questions.
Visualizing the anchor point.
Every view on iOS has an underlying layer that has an anchor point. The anchor point is in the unit coordinate space of the layer (x and y go from 0 to 1). This means that you can multiply x by the width and y by the height to get the position of the anchor point inside the layer, in the coordinate space of the view/layer. You can then place a subview/sublayer there to show the location of the anchor point.
In code you could do something like this to display a small black dot where the anchor point is.
// A small black dot (6 x 6 points, rounded into a circle) to mark the anchor point
CALayer *anchorPointLayer = [CALayer layer];
anchorPointLayer.backgroundColor = [UIColor blackColor].CGColor;
anchorPointLayer.bounds = CGRectMake(0, 0, 6, 6);
anchorPointLayer.cornerRadius = 3;
// Convert the anchor point from the unit coordinate space into the layer's bounds
CGPoint anchor = viewWithVisibleAnchorPoint.layer.anchorPoint;
CGSize size = viewWithVisibleAnchorPoint.layer.bounds.size;
anchorPointLayer.position = CGPointMake(anchor.x * size.width,
                                        anchor.y * size.height);
[viewWithVisibleAnchorPoint.layer addSublayer:anchorPointLayer];
You can see the result in the image below for four different rotations.
Aligning layers by their anchor point
That is cool and all, but it's actually easier than that to align anchor points.
The key trick is that the position and the anchorPoint are always the same point, only in two different coordinate spaces. The position is specified in the coordinate space of the super layer. The anchor point is specified in the unit coordinate space of the layer.
The nice thing about this is that views that have their position property aligned will automatically have their anchorPoint aligned. Note that the content is drawn relative to the anchor point. Below is an example of a bunch of views that all have the same y component of their position, thus they are aligned in y.
There really isn't any special code to do this. Just make sure that the position properties are aligned.
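As a minimal sketch (the 'rotatingViews' array and the shared y value are illustrative, not from the question), aligning a group of views in y comes down to giving their layers the same y position:
CGFloat sharedY = 200.0; // hypothetical shared y coordinate
for (UIView *view in rotatingViews) { // 'rotatingViews' is an assumed array of the views to align
    CGPoint position = view.layer.position;
    position.y = sharedY;
    view.layer.position = position;
}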

Applying a scale and translate transformation to UIBezierPath

I have a UIBezierPath and I would like to:
Move it to any coordinate on the UIView
Make it bigger or smaller
I am drawing the UIBezierPath based on a list of predefined coordinates. I implemented this code:
CGAffineTransform move = CGAffineTransformMakeTranslation(0, 0); // a translation of zero, i.e. the identity
CGAffineTransform moveAndScale = CGAffineTransformScale(move, 1.0f, 1.0f); // a scale of 1, also the identity
[shape applyTransform:moveAndScale];
I have also tried scaling and then moving the shape, it seems to make little to no difference.
Using this code:
[shape moveToPoint:CGPointMake(0, 0)];
I start drawing the shape at (0, 0), but this is what happens. I assume this is because a line is being drawn from (0, 0) to the next point in the list.
When I set the move transformation to (0, 0), this is where it draws. Here, moveToPoint: is set to the first coordinate pair in the list. As you can see, it is not at (0, 0).
Finally, increasing the 1.0f moves the shape off the screen completely, no matter where I tell the shape to move.
Can someone help me understand why the shape is not drawing at (0, 0), and why it moves off the screen when I scale it?
(As requested by the OP in a comment above)
I might be wrong on this one, but doesn't this code
CGAffineTransformMakeTranslation(0, 0);
just say that something should be moved 0 pixels along the x-axis and 0 pixels along the y-axis? (reference) It won't actually move anything to the origin (0, 0), as it seems you are trying to do.
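If the goal is to move the path so that it starts at the origin, a small sketch (assuming 'shape' is the UIBezierPath from the question) would translate by the negative of the path's current bounding-box origin:
CGRect box = shape.bounds; // bounding box of the path's current points
CGAffineTransform toOrigin = CGAffineTransformMakeTranslation(-box.origin.x, -box.origin.y);
[shape applyTransform:toOrigin];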
Also, it seems like you have slightly misunderstood how to properly use moveToPoint:. Think of it as a way to move your cursor, but without actually drawing anything. It is just a way to say 'I want to start drawing at this point'. The drawing itself is performed by other methods. If you wanted to, e.g., draw a square with sides of length L, then you could do something like this:
// 'shape' is a UIBezierPath
CGFloat L = 100;
CGPoint origin = CGPointMake(50, 50);
[shape moveToPoint:origin]; // Move the 'cursor' to the starting corner without drawing
[shape addLineToPoint:CGPointMake(origin.x+L, origin.y)]; // Top edge, drawn left to right
[shape addLineToPoint:CGPointMake(origin.x+L, origin.y+L)]; // Right edge, drawn downwards
[shape addLineToPoint:CGPointMake(origin.x, origin.y+L)]; // Bottom edge, drawn right to left
[shape addLineToPoint:origin]; // Left edge, back up to the starting point
Note that this code is not tested at all, but it should give you the idea of how to use moveToPoint: and addLineToPoint:.
You need to be careful of the order you apply the transforms in, and you should think about concatenating the transforms together and applying them in one go.
The order is important as each transform affects all x,y positions in the path. So, the translation is affected by the scale. Reverse the order and the path will be scaled and then moved.
Also, the coordinate system is important, particularly if you are scaling. Ensure you draw around (0, 0), then scale, then translate. This is easiest if you normalise the points. Normalising lat/long values means dividing latitude by 90 and longitude by 180 (this will actually give you a range of -1 to 1). When doing this, you should first scale the path, then translate it to the centre of the view, then apply your desired translation.
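As a small sketch of that order ('shape' is the UIBezierPath from the question; the scale and offset values are illustrative):
CGAffineTransform scale = CGAffineTransformMakeScale(2.0, 2.0);
CGAffineTransform translate = CGAffineTransformMakeTranslation(100.0, 50.0);
// Concatenated so that the scale is applied first, then the translation
CGAffineTransform scaleThenTranslate = CGAffineTransformConcat(scale, translate);
[shape applyTransform:scaleThenTranslate];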

Quartz scaling sprite vertical range but not horizontal when go to fullscreen mode / increase window size

I have created a Quartz composition for use in a Mac OS program as part of my interface.
I am relying on the fact that within a composition, sprite movement (a text bullet point in my case) is limited on both the X and Y axes to a minimum of -1 and a maximum of +1.
When I scale up the window / make my window full screen, I find that the horizontal axis (X) stays the same, with -1 being my far left point and +1 being my far right point. However, the vertical axis (Y) changes: in full screen mode it goes from -0.7 to +0.7.
This scaling is screwing with my calculations. Is there any way to get the application to keep the scale at -1 to +1 for both the horizontal and vertical axes? Or is there a way of determining the upper and lower limits?
Appreciate any help/pointers
Quartz Composer viewer Y limits are usually -0.75 to 0.75, but it's only a matter of aspect ratio. The X limits are always -1 to 1; the Y limits depend on them.
You might want to dynamically assign custom width and height variables by capturing the context bounds size. For example:
double myWidth = context.bounds.size.width;
double myHeight = context.bounds.size.height;
Where "context" is your viewer context object.
If you're working directly with the QC viewer, you should use the Rendering Destination Dimensions patch, which will give you the width and the height. Divide the Height by 2 to get the upper Y limit, then multiply the result by -1 to get the other side.
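Programmatically, the same aspect-ratio reasoning can be written as a small sketch; 'qcView' is an assumed name for the view hosting the composition:
NSSize size = qcView.bounds.size; // 'qcView' is illustrative, e.g. a QCView outlet
// X always spans -1..1, so one coordinate unit is width / 2 pixels.
// The upper Y limit is therefore the height expressed in those units.
double yMax = size.height / size.width; // e.g. 768.0 / 1024.0 = 0.75
double yMin = -yMax;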

Position Subviews Relative to Screen Real Estate

I'd like to display multiple small UIViews as subviews positioned relative to the available screen real estate. This should work across different screen sizes (iPad, iPhone) and portrait/landscape modes.
Each subview to display has two NSNumber objects holding an int ranging from -100 (min) to 100 (max), which needs to be mapped to the correct x and y coordinates for positioning.
What's the best way to translate those values (-100...100) to use them for positioning UIViews on the screen?
How do I position them in a relative rather than an absolute way, so that the code works across screen rotations and screen sizes?
Ok, so if I understand correctly, you want -100 in the x direction to map to the leftmost point on the screen, 100 in x to map to the rightmost point, -100 in the y direction to map to the lowest point on the screen, and 100 in y to map to the highest point (or maybe you want the y inverted from what I have, so that it agrees with the screen coordinate system, in which y gets bigger the lower on the screen you go?).
And we also want to account for rotation.
As far as I understand it, you can ask UIScreen for its height and width:
CGFloat width = [UIScreen mainScreen].bounds.size.width;
CGFloat height = [UIScreen mainScreen].bounds.size.height;
but this does not account for rotation. The only other way I am aware of that is pretty straightforward would be to ask a UIView covering the screen for its width and height (most simply, you could make your view controller's view cover the whole screen).
If you had a UIView that perfectly covered the whole screen (let's call it myView), you could try:
CGFloat width = myView.frame.size.width;
CGFloat height = myView.frame.size.height;
These should adjust for orientation by themselves. (From my experience, it should definitely work if you get the height and width in viewDidAppear:animated: or anything after. Also, the UIView needs to be either the UIViewController's view property or a subview of that view; if not, you'll have to implement didRotateFromInterfaceOrientation: or find some other way to tell your view about any rotations.) Once we have the 'width' and 'height' of the screen, we can convert from your ints to a screen position. Try something like:
- (CGPoint)convertX:(NSNumber *)x andY:(NSNumber *)y
{
    // 'width' and 'height' are the screen dimensions obtained above
    CGFloat pointX = ([x intValue] + 100.0) * width / 200.0;
    CGFloat pointY = (-[y intValue] + 100.0) * height / 200.0; // remove the - sign at the front of the expression for y to grow as you move down the screen
    return CGPointMake(pointX, pointY);
}
to convert from -100 to 100 in x and y to their respective points on the screen.
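For example, with the convention above (boxed literal syntax used for brevity):
CGPoint topLeft = [self convertX:@(-100) andY:@(100)]; // (0, 0), the top-left corner
CGPoint centre = [self convertX:@(0) andY:@(0)]; // (width/2, height/2), the middle of the screen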
If you're working with a range of +/-100, then you may want to use the underlying CALayers to position your views. The nice part about CALayers is that their anchor points are mapped to a device-agnostic grid that ranges from 0.0 to +1.0 on a Cartesian plane.
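As a rough sketch of that idea (containerView, subview, x and y are illustrative names, not from the question), you could map -100...100 onto the 0...1 unit grid and then scale by the container's bounds:
CGFloat unitX = ([x intValue] + 100.0) / 200.0; // -100...100 -> 0...1
CGFloat unitY = ([y intValue] + 100.0) / 200.0;
CGRect bounds = containerView.bounds; // 'containerView' is an assumed container view
subview.layer.position = CGPointMake(unitX * bounds.size.width,
                                     unitY * bounds.size.height);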