I want to check my resolution by drawing a frame that matches the resolution's dimensions and viewing the drawn frame in an OpenGL ES 2.0 emulator (on a PC).
I draw it with a line strip on these coordinates:
-1.0f, 1.0f, 0.0f,
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
-1.0f, 1.0f, 0.0f,
By this function:
glDrawArrays(GL_LINE_STRIP, 0, 5);
and I see only the right and the upper sides; I don't see the left and the lower sides. It seems like they are off the screen (if I draw those sides not from -1.0f but a little inset, i.e. -1.0f + 1.0f / screen_width, then I can see all the sides).
Can you please explain why I don't see all the sides?
The reason for that is how the values are rounded. The values that represent the screen borders do not correspond to a specific pixel but to a position between pixels. In your specific case it would seem you have used an "ortho" projection to define your borders on the interval [-1, 1]. Imagine your view consisting of only 4 pixels, as 2x2: you still have borders defined on the interval [-1, 1], but the actual pixel centres would be at (-.5, -.5), (-.5, .5), (.5, -.5), (.5, .5). When drawing lines representing the border, you will then have only 3 pixels filled (2 lines). Which of them are filled depends on how the floating-point value .5 is rounded to an integer: if .5 rounds to 1 you can expect the upper-left part to be filled, otherwise the bottom and right (note that the terms "left, right, up, down" depend on the internal buffer structure, not on what you see on the display).
So to fill the correct pixels with the line you would need something like this (left part only):
-1 + (2/bufferWidth)*.5
To break it down: -1 is the border value; 2 is the interval width defined in your "ortho" (right - left); bufferWidth represents the number of pixels; (2/bufferWidth) is the pixel width; .5 to get the center of the pixel.
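As a sketch of that adjustment (plain C; insetBorderToPixelCenters is a made-up helper name, and the buffer dimensions are assumed to be known):

#include <GLES2/gl2.h>

// Hypothetical helper: nudge NDC border coordinates (+/-1.0f) half a pixel
// inward so each lands on a pixel centre. Assumes an ortho range of [-1, 1]
// and vertices laid out as x, y, z triples.
static void insetBorderToPixelCenters(GLfloat *verts, int vertexCount,
                                      int bufferWidth, int bufferHeight)
{
    const GLfloat halfPixelX = (2.0f / bufferWidth) * 0.5f;  /* half a pixel in x */
    const GLfloat halfPixelY = (2.0f / bufferHeight) * 0.5f; /* half a pixel in y */
    for (int i = 0; i < vertexCount; ++i) {
        verts[i * 3 + 0] += (verts[i * 3 + 0] < 0.0f) ? halfPixelX : -halfPixelX;
        verts[i * 3 + 1] += (verts[i * 3 + 1] < 0.0f) ? halfPixelY : -halfPixelY;
    }
}

Called on the five vertices above before uploading them, this moves each border line onto a pixel centre.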
Alternatively, to solve this issue in your case it might be enough to simply set the line width to 1.5.
Can a frame's size be different from the bounds' size of a UIView?
Whenever I set either of them, I notice that both change and they are always in sync. Is there an edge case where this is not true?
Yes; for example, a transformed (e.g. rotated) view has a different (and useless) frame size.
The frame is purely a convenience, and you could live entirely without it if you had to; the bounds size and center, together, accurately and always describe the view's position and size.
Yes. Please see below for the simple difference between frame and bounds:
The frame of a view is the rectangle, expressed as a location (x,y)
and size (width,height) relative to the superview it is contained
within.
The bounds of a view is the rectangle, expressed as a location (x,y)
and size (width,height) relative to its own coordinate system.
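As a concrete illustration (a sketch with made-up sizes):

UIView *parent = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 480)];
UIView *child  = [[UIView alloc] initWithFrame:CGRectMake(20, 30, 100, 50)];
[parent addSubview:child];
// frame is relative to the superview:      {20, 30, 100, 50}
// bounds is in the view's own coordinates: {0, 0, 100, 50}
NSLog(@"%@ %@", NSStringFromCGRect(child.frame), NSStringFromCGRect(child.bounds));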
bounds "describes the view’s location and size in its own coordinate system".
frame "defines the origin and dimensions of the view in the coordinate system of its superview".
So the two should differ for any view that uses a different coordinate system than its parent. The key giveaway is:
However, if the transform property contains a non-identity transform,
the value of the frame property is undefined and should not be
modified. In that case, you can reposition the view using the center
property and adjust the size using the bounds property instead.
So that's an example Apple gives you of when frame is defined not to have a predictable relationship to bounds: whenever you've set a non-identity transform.
(source for all quotes was the UIView documentation)
They are different.
Assume I have a label:
label.frame = CGRect(x: 0, y: 0, width: 200, height: 20)
Its current frame & bounds (print(label.frame, label.bounds)) are as follows:
(0.0, 0.0, 200.0, 20.0) (0.0, 0.0, 200.0, 20.0)
Note they are currently the same. It is shown in x-position, y-position, width, height (in that order).
Now I will apply a scale Y of 2 to the label like so:
label.transform = CGAffineTransform(scaleX: 1, y: 2)
Its new frame & bounds are as follows:
(0.0, -10.0, 200.0, 40.0) (0.0, 0.0, 200.0, 20.0)
Notice how its own bounds are still the same, while the frame has changed (height went from 20 to 40, and the y-position has shifted by 10 upwards to compensate for the 20 increase so it will remain centred).
This corresponds to what the other answers and the documentation are saying. Neither property is useless; use each according to your needs.
7 years late to the party but hope this still helps others.
The code below draws the following. One can notice the left side line is thinner than the one on the right.
Another observation: the quad curve is not very sharp.
How can I make it look better?
- (void)drawRect:(CGRect)rect
{
    CGContextRef contextRef = UIGraphicsGetCurrentContext();
    [self drawBatteryEdges:contextRef withFinalBorderRect:rect];
}

- (void)drawBatteryEdges:(CGContextRef)contextRef withFinalBorderRect:(CGRect)batteryRect
{
    CGFloat topOffset = 20.0f;
    CGFloat bottomOffset = 20.0f;
    CGFloat curveOffset = 4.0f;

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, 0, topOffset);
    CGPathAddQuadCurveToPoint(path, NULL, batteryRect.size.width / 2.0, topOffset - curveOffset, batteryRect.size.width, topOffset);
    CGPathAddLineToPoint(path, NULL, batteryRect.size.width, batteryRect.size.height - bottomOffset);
    CGPathAddQuadCurveToPoint(path, NULL,
                              batteryRect.size.width / 2.0, CGPathGetCurrentPoint(path).y + curveOffset,
                              0, CGPathGetCurrentPoint(path).y);
    CGPathCloseSubpath(path);

    CGContextAddPath(contextRef, path);
    CGContextDrawPath(contextRef, kCGPathStroke);
    CGPathRelease(path); // release the CGPath; it is not managed by ARC
}
It draws the following.
Wenderlich, Ray. "Core Graphics Tutorial: Lines, Rectangles, and Gradients." 15 Apr. 2013.
http://www.raywenderlich.com/32283/core-graphics-tutorial-lines-rectangles-and-gradients
Well, it turns out that when Core Graphics strokes a path, it draws the stroke on the middle of the exact edge of the path.
In your case, the edge of the path is the rectangle you wish to fill. So when drawing a 1 pixel line along that edge, half of the line (1/2 pixel) will be on the inside of the rectangle, and the other half of the line (1/2 pixel) will be on the outside of the rectangle.
But of course, since there’s no way to draw 1/2 a pixel, instead Core Graphics uses anti-aliasing to draw in both pixels, but just a lighter shade to give the appearance that it is only a single pixel drawn.
But you don’t want no anti-aliasing, you want just one pixel, darnit!
There are several ways to fix this:
You can use clipping to cut out the undesirable pixels
You can disable antialiasing and also modify the rectangle boundaries to make sure the stroke is where you want
You can modify the path to stroke so it takes the 1/2 pixel effect into consideration
I would suggest drawing your stroke on a half pixel, which would involve doing something like this:
CGRectMake(rect.origin.x + 0.5, rect.origin.y + 0.5, rect.size.width - 1, rect.size.height - 1);
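Applied to the drawRect: above, a minimal sketch of that half-pixel fix (assuming a plain UIView subclass and the default 1-point line width) could look like this:

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Inset by 0.5 so the stroke, which straddles the path, fills exactly
    // one row of pixels instead of two anti-aliased half-covered rows.
    CGRect pixelAligned = CGRectMake(rect.origin.x + 0.5,
                                     rect.origin.y + 0.5,
                                     rect.size.width - 1.0,
                                     rect.size.height - 1.0);
    CGContextSetLineWidth(ctx, 1.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextStrokeRect(ctx, pixelAligned);
}

The same adjustment applies to the battery path: build it against the inset rect rather than the full bounds.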
I want to do a rounded rectangle outline on an NSImage and I figured that using NSBezierPath would be the best way. However, I ran into a problem: instead of drawing a nice curve, I get this:
For reasons I can't understand, NSBezierPath is drawing the rounded part with a darker color than the rest.
Here's the code I'm using (inside a drawRect: call on a custom view):
NSBezierPath* bp = [NSBezierPath bezierPathWithRoundedRect: self.bounds xRadius: 5 yRadius: 5];
[[[NSColor blackColor] colorWithAlphaComponent: 0.5] setStroke];
[bp stroke];
Any ideas?
Edit:
If I inset the path by 0.5 everything draws just fine. But why is it that I get this when I offset the path by 10 pixels (for example)?
If I understand correctly, it should draw a thin line as well...
Many rendering systems are derived from the PostScript drawing model. Core Graphics is one of these derivative systems. (Here are some others: PDF, SVG, the HTML Canvas 2D Context, Cairo.)
All of these systems have the idea of stroking a path with a line of some fixed width. When you stroke the path, the line straddles the path: half of the line's width is on one side of the path, and half of the line's width is on the other side. Here's a diagram that may make this clearer:
Now, what happens when you stroke a path that lies along the boundary of your view? Half of the stroke falls outside of your view's bounds and is clipped away - not drawn. You only see the half of the stroke that falls inside the view's bounds.
When you use a rounded corner, that corner pulls away from the view's boundary, toward its center, so more of the stroke around the corner falls inside the view's boundary. So the stroke appears to get thicker around the rounded corner, like this:
To fix this, you need to inset your path by half the line width, so that the entire stroke falls inside your view's bounds along the entire path. The default line width is 1.0, so:
NSBezierPath* bp = [NSBezierPath bezierPathWithRoundedRect:
                       NSInsetRect(self.bounds, 0.5, 0.5) xRadius:5 yRadius:5];
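More generally, for a stroke of width w you would inset by w / 2. A sketch with a made-up width of 2:

CGFloat lineWidth = 2.0;
// Inset by half the line width so the entire stroke falls inside bounds.
NSBezierPath *bp = [NSBezierPath bezierPathWithRoundedRect:
                       NSInsetRect(self.bounds, lineWidth / 2, lineWidth / 2)
                                                   xRadius:5 yRadius:5];
[bp setLineWidth:lineWidth];
[[[NSColor blackColor] colorWithAlphaComponent:0.5] setStroke];
[bp stroke];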
On iOS, just subtract half the line width from the circle's radius to keep the stroke from being clipped:
UIBezierPath *roundPath = [UIBezierPath bezierPath];
[roundPath addArcWithCenter:CGPointMake(self.frame.size.width / 2, self.frame.size.height / 2)
                     radius:(self.frame.size.width / 2 - 0.5)
                 startAngle:M_PI_2
                   endAngle:M_PI * 3 / 2.f
                  clockwise:YES];
I have a really annoying issue trying to draw into a bitmap CGContext. I have a couple of images to draw into the full size of the bitmap. One can come in at any UIImageOrientation, and I've written the code to rotate that correctly, but I'm struggling with the second part, which is drawing another view at an arbitrary rotation about its centre.
The other view comprises an image that may be drawn outside of its bounds. What I am having a problem with is drawing it at a rotated angle, as though it were a UIView with an affine transform applied. E.g. imagine a UIView at {100, 300} of size {20, 20} and an affine transform rotating it by 45 degrees; it would be rotated about {110, 310}.
What I have tried is this:
- (void)drawOtherViewInContext:(CGContextRef)context atRect:(CGRect)rect withRotation:(CGFloat)rotation contextSize:(CGSize)contextSize {
    CGRect thisFrame = <SOLVED_FEATURE_FRAME_RELATIVE_TO_RECT_SIZE>;
    thisFrame.origin.y = contextSize.height - thisFrame.origin.y - thisFrame.size.height;

    CGRect rotatedRect = CGRectApplyAffineTransform(CGRectMake(0.0f, 0.0f, rect.size.width, rect.size.height),
                                                    CGAffineTransformMakeRotation(-rotation));

    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, rect.origin.x, contextSize.height - rect.origin.y - rect.size.height);
    transform = CGAffineTransformTranslate(transform,
                                           +(rotatedRect.size.width / 2.0f),
                                           +(rotatedRect.size.height / 2.0f));
    transform = CGAffineTransformRotate(transform, -rotation);
    transform = CGAffineTransformTranslate(transform,
                                           -(rect.size.width / 2.0f),
                                           -(rect.size.height / 2.0f));

    CGContextConcatCTM(context, transform);
    CGContextDrawImage(context, thisFrame, theCGImageToDraw);
    CGContextConcatCTM(context, CGAffineTransformInvert(transform));
}
So what I am doing there, I think, is this:
1. Translate to the bottom left of rect, which is where this view is meant to be drawn.
2. Translate by half the rotated size in x and y.
3. Rotate by the required angle.
4. Translate back by half the original size in x and y.
I thought that this would be what I wanted to do because the first step translates the coordinate system to be such that thisFrame is drawn correctly relative to where we're being told to draw (by the rect method parameter). Then it's a pretty normal rotate about the centre of a rectangle.
The problem is that when rotated by say 45 degrees, the image is drawn slightly out of place. It's almost correct, but just not quite. When at 0, 90, 180 or 270 degrees then the position is pretty much spot on, maybe a few pixels out but when at 45, 135, 225, 315 degrees the position is too far up and to the right.
Can anyone see what I'm doing wrong here?
Update:
Silly me, it's bigger because I was passing in the wrong rect! Edited to get rid of references to it being the wrong size. It's still not quite in the right place though.
OK, I have fixed it. The first point was that I was passing in the wrong rect, because I was grabbing the frame from a UIView which had an affine transform applied to it, and as we all know the frame in that case is undefined (more precisely, it's the CGRect that comes from CGRectApplyAffineTransform(bounds, transform)). Anyway, I fixed that one.
Then the main problem, the drawing being offset, was fixed by changing my transform to this:
CGAffineTransform transform = CGAffineTransformIdentity;
// Move the origin to the bottom left of rect (flipping y for the CG coordinate system).
transform = CGAffineTransformTranslate(transform, rect.origin.x, contextSize.height - rect.origin.y - rect.size.height);
// Rotate about the centre of the unrotated rect: translate to the centre...
transform = CGAffineTransformTranslate(transform,
                                       +(rect.size.width / 2.0f),
                                       +(rect.size.height / 2.0f));
// ...rotate...
transform = CGAffineTransformRotate(transform, -rotation);
// ...and translate back.
transform = CGAffineTransformTranslate(transform,
                                       -(rect.size.width / 2.0f),
                                       -(rect.size.height / 2.0f));
That's what I had originally thought I should be doing, but for some reason I changed it to use the rotated CGRect.
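The translate-rotate-translate pattern generalizes to a small helper (a sketch; rotationAroundPoint is a made-up name, not a Core Graphics API):

// Build a transform that rotates by `angle` around an arbitrary point,
// using the same translate-rotate-translate idea as the code above.
static CGAffineTransform rotationAroundPoint(CGPoint p, CGFloat angle)
{
    CGAffineTransform t = CGAffineTransformMakeTranslation(p.x, p.y);
    t = CGAffineTransformRotate(t, angle);
    t = CGAffineTransformTranslate(t, -p.x, -p.y);
    return t;
}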
I have two rects that intersect. They have the same dimensions; the only difference is that one of them is lower down the screen than the other. I know there is a way to get the rect of their intersection, but that's not what I want. I actually want a new rect from the area that lies outside of their intersection.
The top part of the lower view intersects with the bottom part of the top view. The new rect should not have that area. I basically want a rect with the same origin and width as the bottom view, but without the part that intersects with the top rect.
Thanks for the help.
CGRect intersectRect = CGRectIntersection(highestRect, lowestRect);
CGRect theRectYouWant = CGRectMake(0, 0, 0, 0);
if (!CGRectIsNull(intersectRect)) {
    // Keep the part of lowestRect that lies below the overlap.
    theRectYouWant = CGRectMake(lowestRect.origin.x,
                                intersectRect.origin.y + intersectRect.size.height,
                                lowestRect.size.width,
                                lowestRect.size.height - intersectRect.size.height);
}
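For example, with a hypothetical pair of overlapping rects:

CGRect highestRect = CGRectMake(0, 0, 100, 60);
CGRect lowestRect = CGRectMake(0, 40, 100, 60);
// The intersection is {0, 40, 100, 20}, so theRectYouWant comes out as
// {0, 60, 100, 40}: the part of lowestRect below the overlap.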
Have a look at this page for more, Elbimio ;)