iOS Quartz/CoreGraphics drawing feathered stroke - objective-c

I am drawing a path into a CGContext following a set of points collected from the user. There seems to be some random input jitter causing some of the line edges to look jagged. I think a slight feather would solve this problem. If I were using OpenGL ES I would simply apply a feather to the sprite I am stroking the path with; however, this project requires me to stay in Quartz/CoreGraphics and I can't seem to find a similar solution.
I have tried drawing 5 lines with each line slightly larger and more transparent to approximate a feather. This produces a bad result and slows performance noticeably.
This is the line drawing code:
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), (int)lastPostionDrawing1.x, (int)lastPostionDrawing1.y);
CGContextAddCurveToPoint(UIGraphicsGetCurrentContext(), ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, lastPostionDrawing2.x, lastPostionDrawing2.y);
[currentPath addCurveToPoint:CGPointMake(lastPostionDrawing2.x - (int)furthestLeft.x + (int)penSize, lastPostionDrawing2.y) controlPoint1:CGPointMake(ctrl1_x, ctrl1_y) controlPoint2:CGPointMake(ctrl2_x, ctrl2_y)];

I'm going to go ahead and assume that your CGContext still has anti-aliasing turned on, but if not, then that's the obvious first thing to try, as @Davyd's comment suggests: CGContextSetShouldAntialias is the function of interest.
Assuming that's not the problem, and the line is being anti-aliased by the context but you still want something 'softer', I can think of a couple of approaches that should be faster than stroking 5 times.
First, you can try getting the stroked path (i.e. a path that describes the outline of the stroke of the current path) using CGContextReplacePathWithStrokedPath; you can then fill this path with a gradient (or whatever other fill technique gives the desired results). This will work well for straight lines, but won't be straightforward for curved paths, since the gradient fills the area of the stroked path and will be either linear or radial.
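For instance, a minimal sketch of that idea (the function name and the opaque-to-clear gradient choice are mine, not from the docs):

#include <CoreGraphics/CoreGraphics.h>

// Hypothetical helper: fill the outline of a path's stroke with a gradient.
void fillStrokeWithGradient(CGContextRef ctx, CGPathRef path, CGFloat lineWidth)
{
    CGContextSaveGState(ctx);
    CGContextAddPath(ctx, path);
    CGContextSetLineWidth(ctx, lineWidth);
    CGContextReplacePathWithStrokedPath(ctx); // current path now outlines the stroke
    CGContextClip(ctx);                       // clip to that outline...

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGFloat comps[8] = { 0, 0, 0, 1,   0, 0, 0, 0 };  // opaque black -> clear
    CGFloat locs[2]  = { 0.0, 1.0 };
    CGGradientRef grad = CGGradientCreateWithColorComponents(space, comps, locs, 2);

    // ...and fill it, here with a linear gradient across the clip's bounds.
    CGRect box = CGContextGetClipBoundingBox(ctx);
    CGContextDrawLinearGradient(ctx, grad,
                                CGPointMake(CGRectGetMinX(box), CGRectGetMinY(box)),
                                CGPointMake(CGRectGetMaxX(box), CGRectGetMaxY(box)), 0);

    CGGradientRelease(grad);
    CGColorSpaceRelease(space);
    CGContextRestoreGState(ctx);
}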
Another, perhaps less obvious, option might be to abuse CG's shadow drawing for this purpose. The function to look up is CGContextSetShadowWithColor. Here's the method:
Save the GState: CGContextSaveGState
Get the bounding box of the original path
Copy the path, translating it away from itself by 2.0 * bbox.width using CGPathCreateCopyByTransformingPath (note: use the X direction only, that way you don't need to worry about flips in the context)
Clip the context to the original bbox using CGContextClipToRect
Set a shadow on the context with CGContextSetShadowWithColor:
Some minimal blur (start with 0.5 and go from there; the blur parameter is non-linear, and IME tuning it is a guess-and-check operation)
An offset equal to -2.0 * bbox.width and 0.0 height, scaled to base space. (Note: these offsets are in base space. This will be maddening to figure out, but assuming you're not adding your own scale transforms, the scale factor will be either 1.0 or 2.0, so practically speaking you'll be setting an offset.width of either -2.0 * bbox.width or -4.0 * bbox.width.)
A color of your choosing.
Stroke the translated-away path.
Pop the GState CGContextRestoreGState
This should leave you with "just" the shadow, which you can hopefully tweak to achieve the results you want.
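Here is a rough C sketch of those steps; the function name and the caller-supplied scale factor are my assumptions, so treat it as a starting point rather than a drop-in:

#include <CoreGraphics/CoreGraphics.h>

// Sketch of the shadow trick above: draw the path outside a clip rect and
// let only its blurred shadow land inside. `contextScale` is the context's
// backing scale (1.0 or 2.0), since the shadow offset is in base space.
void strokeSoftened(CGContextRef ctx, CGPathRef path, CGFloat lineWidth,
                    CGColorRef shadowColor, CGFloat contextScale)
{
    CGRect bbox = CGPathGetBoundingBox(path);
    CGFloat shift = 2.0 * bbox.size.width;

    CGContextSaveGState(ctx);
    CGContextClipToRect(ctx, bbox);           // only the shadow can appear here

    // Minimal blur; the offset pulls the shadow back over the original position.
    CGContextSetShadowWithColor(ctx, CGSizeMake(-shift * contextScale, 0.0),
                                0.5, shadowColor);

    // Stroke a copy translated away in X; the geometry itself is clipped out.
    CGAffineTransform t = CGAffineTransformMakeTranslation(shift, 0.0);
    CGPathRef moved = CGPathCreateCopyByTransformingPath(path, &t);
    CGContextAddPath(ctx, moved);
    CGContextSetLineWidth(ctx, lineWidth);
    CGContextStrokePath(ctx);

    CGPathRelease(moved);
    CGContextRestoreGState(ctx);
}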
All that said, CG's shadow drawing performance is, IME, less than completely awesome, and less than completely deterministic. I would expect it to be faster than stroking the path 5 times with 5 different strokes, but not overwhelmingly so.
It'll come down to how much achieving this effect is worth to you.

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3d mesh into a projected AABB. In the end, I would like to accomplish something similar to figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating each point's position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity as points approach the camera's plane. I understand it has something to do with the tangent, but frankly, it is too late for my brain to function anymore.
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it in a usable format I would need to project the result into 2d coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader informing where to complete certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to also use a texture. It also could be rewritten to use polygons, but I am trying to keep calculations to a minimum in the shader.
Certain solutions I have tried before have worked, slightly, but this must be robust. The camera interfacing with the 3d object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3d model, with points both in front of and behind the camera.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.

Re-sizing visual image while maintaining image dimensions

I'm working with documents, so maintaining the original image dimensions and the subsequent dpi is important.
The aspect ratio is always maintained, so the automatic fill modes and the like don't seem to have any effect.
Say I have a 300 dpi document and the user wants to clear an inch border around the image. So I need an inch cropped from the image, but the result needs to be the original image dimensions (2550x3300).
I have been able to achieve this effect with...
...&crop=300,300,-300,-300&margin=300,300,300,300
This works, but seems more than a little clunky. I've tried a lot of other combinations but they all seem to enlarge or reduce the image size which is undesirable in my case.
So does someone know a simpler syntax to achieve the desired result, or do I need to re-size the image then calculate and fill with a margin as I'm doing now?
Thanks
It turns out that my example requests the image at its full size, which turns out to be a special case. When I introduce a width or height into the command line, things don't work very well, since the crop size is relative to the original image dimensions while the margin size is relative to the result image.
Thinking about it more, I abandoned the crop approach. What I really needed was a way to introduce a clipping region into the result bitmap, so I built an extension to do just that. It works well, as it doesn't interfere with any of Resizer's layout calculations, and the size of the returned image is whatever the height or width were specified as, which is just what I needed. The Faces plugin has an example of introducing a clipping region.
Karlton
Cropping and re-adding 300px on each edge is best accomplished exactly the way you're doing it:
&crop=300,300,-300,-300&margin=300
What kind of improved syntax would you expect? This isn't a common operation.

UIBezierPath subtraction from another UIBezierPath

I am making an app that allows the user to draw on the screen with his finger in different colors. The drawings are drawn with UIBezierPaths, but I need an eraser. I did have an eraser that was just a path with the background image as its color, but this method causes memory issues. I would like to delete the points from any path that is drawn on when the eraser is selected.
Unfortunately, UIBezierPath doesn't have a subtraction function, so I want to make my own. So if the eraser is selected, it will look at all the points that should be erased, see if any of the existing paths contain those points, then subdivide the path, leaving a blank spot. But it should be able to see how many points in a row to delete, not do it one at a time. In theory it makes sense, but I'm having trouble getting started on the implementation.
Anyone have any guidance to set me on the right 'path'?
Upon first glance, it appears that you could do hit detection on a UIBezierPath by simply using containsPoint:. That works fine if you want to determine whether the point is contained in the fill of a UIBezierPath, but it does not work for determining whether only the stroke of the UIBezierPath intersects the point. Detecting whether or not a given point is in the stroke of a UIBezierPath can be done as described in the "Doing Hit-Detection on a Path" section at the bottom of this page. Actually, the code sample they give could be used either way. The basic idea is that you have to use the Core Graphics method CGContextPathContainsPoint.
Depending on how large the eraser brush is, you will probably want to check several different points on the edge of the brush circle to see if they intersect the curve, and you'll probably have to iterate through your UIBezierPaths until you get a hit. You should be able to optimize the search by using the bounds of the UIBezierPath.
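As a sketch of that idea using the pure-CGPath variant (CGPathCreateCopyByStrokingPath, available since iOS 5) instead of the context-based call, with the bounds quick-reject up front; the helper name is hypothetical:

#include <CoreGraphics/CoreGraphics.h>
#include <stdbool.h>

// Hypothetical helper: does `point` hit the *stroke* (not the fill) of `path`?
bool strokeContainsPoint(CGPathRef path, CGPoint point, CGFloat strokeWidth)
{
    // Quick reject against the stroke-padded bounds before the expensive test.
    CGRect bounds = CGRectInset(CGPathGetBoundingBox(path), -strokeWidth, -strokeWidth);
    if (!CGRectContainsPoint(bounds, point))
        return false;

    // Build the outline of the stroke, then do an ordinary fill containment test.
    CGPathRef stroked = CGPathCreateCopyByStrokingPath(path, NULL, strokeWidth,
                                                       kCGLineCapRound,
                                                       kCGLineJoinRound, 10.0);
    bool hit = CGPathContainsPoint(stroked, NULL, point, false);
    CGPathRelease(stroked);
    return hit;
}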
After you detect that a point intersects a UIBezierPath, you must do the actual split of the path. There appears to be a good outline of the algorithm in this post. The main idea there is to use De Casteljau's algorithm to perform the subdivision of the curve. There are various implementations of the algorithm that you should be able to find with a quick search, including some in C++.
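For reference, here's the standard De Casteljau split of a single cubic segment, written against CGPoint; this is the textbook algorithm, not code from the linked post:

#include <CoreGraphics/CoreGraphics.h>

static CGPoint lerp(CGPoint a, CGPoint b, CGFloat t)
{
    return CGPointMake(a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t);
}

// Split the cubic segment (p0, c1, c2, p1) at parameter t; `left` and
// `right` each receive the 4 control points of one half of the curve.
void splitCubic(CGPoint p0, CGPoint c1, CGPoint c2, CGPoint p1, CGFloat t,
                CGPoint left[4], CGPoint right[4])
{
    CGPoint ab = lerp(p0, c1, t), bc = lerp(c1, c2, t), cd = lerp(c2, p1, t);
    CGPoint abbc = lerp(ab, bc, t), bccd = lerp(bc, cd, t);
    CGPoint mid = lerp(abbc, bccd, t);  // the point on the curve at t
    left[0] = p0;   left[1] = ab;    left[2] = abbc; left[3] = mid;
    right[0] = mid; right[1] = bccd; right[2] = cd;  right[3] = p1;
}

Erasing a span then amounts to splitting at the parameter where the eraser first touches the curve and again where it leaves, and throwing away the piece in between.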

Beveling a path/shape in Core Graphics

I'm trying to bevel paths in core graphics. Has anyone done this already for arbitrary shapes and if so are they willing to share code?
I've included my implementation below. I use three variables to determine the bevel: CGFloat bevelSize, UIColor highlightColor, UIColor shadow. Note that the angle of the light source is always 135 degrees. I haven't finished implementing this yet, but here's essentially what I'm trying to do, split into two parts. Part one, generate focal points:
I find the bisectors for the angles between each adjacent lines in the path.
For arcs, the bisector is the line perpendicular to the line created by the two end points of the arc, originating from the mid-point. This should take care of the majority of situations in which an arc is used. I do not take the bisector of an arc and a line. The arc bisector should work fine in those cases.
I then calculate focal points based on the intersection of each adjacent bisectors.
If a focal point is within the shape it's used, otherwise it's discarded.
The purpose of generating the focal points is to 'shrink' the shape proportionally.
The second part is a little more complicated. I essentially create each side/segment of the bevelled shape. I do this by drawing 'in' (by the bevelSize) each point of the original shape along the radius of the line that extends from the nearest focal point to the point in question. When I have two consecutive 'bevelPoints', I create a UIBezierPath that extends from the bevelPoints to the original points and back to the bevelPoints (note, this includes arcs). This creates a 'side/segment' I can use to fill. On straight sides, I simply fill with either the shadow or highlight color, depending on the angle of the side. For arcs, I determine the radian 'arc'. If that arc contains a transition angle (M_PI_4 or M_PI + M_PI_4), I fill it with a gradient (from shadow to highlight or highlight to shadow, whichever is appropriate). Otherwise I fill it with a solid color.
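To illustrate the 'drawing in' step on a straight corner, here's a hedged sketch of insetting one vertex along its interior angle bisector (the helper names and the degenerate-corner clamp are mine, not from the implementation above):

#include <CoreGraphics/CoreGraphics.h>
#include <math.h>

static CGPoint unitVec(CGPoint v)
{
    CGFloat len = hypot(v.x, v.y);
    return CGPointMake(v.x / len, v.y / len);
}

// Move corner B (with neighbours A and C) inward by `bevel` along the
// interior angle bisector; collinear corners are clamped, not handled.
CGPoint insetCorner(CGPoint A, CGPoint B, CGPoint C, CGFloat bevel)
{
    CGPoint u = unitVec(CGPointMake(A.x - B.x, A.y - B.y));    // toward previous vertex
    CGPoint v = unitVec(CGPointMake(C.x - B.x, C.y - B.y));    // toward next vertex
    CGPoint bis = unitVec(CGPointMake(u.x + v.x, u.y + v.y));  // interior bisector
    CGFloat cosHalf = u.x * bis.x + u.y * bis.y;               // cos of half the corner angle
    CGFloat sinHalf = sqrt(fmax(0.0, 1.0 - cosHalf * cosHalf));
    CGFloat d = bevel / fmax(sinHalf, 1e-6);                   // sharper corner => longer inset
    return CGPointMake(B.x + bis.x * d, B.y + bis.y * d);
}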
Update
I've split out my answer (see below) into a separate blog post. I'm no longer using the implementation details you see above, but I'm keeping it all there for reference. I hope this helps anybody else looking to use Core Graphics.
So I did finally manage to write a routine to bevel arbitrary shapes in core graphics. It ended up being a lot of work, a lot more than I originally anticipated, but it's been a fun project. I've posted an explanation and the code to perform the bevel on my blog. Just note that I didn't create a full class (or set of classes) for this. This code was integrated into a much larger class I use to do all my core graphics drawing. However, all the code you need to bevel most arbitrary shapes is there.
UPDATE
I've rewritten the code as straight c and moved it into a separate file. It no longer depends on any other instance variables so that the beveling function can be called from within any context.
I explain the code and process I used to bevel here: Beveling Shapes In Core Graphics
Here is the code: Github : CGPathBevel.
The code isn't perfect: I'm open to suggestions/corrections/better ways of doing things.

How to get faster rendering of 400+ polygons with SFML

I'm making a basic simulation of moving planets and gravitational pull between them, and displaying the gravity with a big field of green vectors pointing in the direction gravity is pulling them and magnitude of the strength of the pull.
This means I have 400+ lines, which are really rectangles with a rotation, being redrawn each frame, and this is killing my frame-rate. Is there any way to optimize this, other than drawing fewer lines? How do 2d OpenGL games today achieve such high frame-rates even with many complex polygons/colors?
EDIT:
SFML does the actual rendering each frame, but the way I create my lines is by making a rectangle-like sf::Shape. The generation function takes a width and sets point 1 as (0, width), point 2 as (0, -width), point 3 as (LineLength, -width), and point 4 as (LineLength, width). This forms a rectangle which extends along the positive x-axis. Finally, I rotate the rectangle around (0,0) to get it to the right orientation, and set the shape's position to wherever the start of the line is supposed to be.
How do 2d OpenGL games today achieve such high frame-rates even with many complex polygons/colors?
I imagine by not drawing 400+ 4-vertex objects that are each rotated and scaled with a matrix.
If you want to draw a lot of these things, you're going to have to stop relying on SFML's drawing classes. That introduces a lot of overhead. You're going to have to do it the right way: by drawing lines.
If you insist on each line having a separate width, then you can't use GL_LINES. You must instead compute the four positions of the "line" and stick them in a buffer object. Then, you draw them with a single GL_QUADS call. You will need to use proper buffer object streaming techniques to make this work reasonably fast.
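A rough sketch of both pieces (computing the four corners directly and streaming them in one batch), assuming a legacy GL 2.x context to match the GL_QUADS suggestion; the struct and function names are illustrative:

#include <GL/gl.h>
#include <math.h>

typedef struct { float x, y; } Vert;

// Four corners of a line from a to b with half-thickness `halfWidth`,
// computed directly instead of rotating a shape around the origin.
static void lineQuad(Vert a, Vert b, float halfWidth, Vert out[4])
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = sqrtf(dx * dx + dy * dy);
    float nx = -dy / len * halfWidth, ny = dx / len * halfWidth; // scaled unit normal
    out[0] = (Vert){ a.x + nx, a.y + ny };
    out[1] = (Vert){ a.x - nx, a.y - ny };
    out[2] = (Vert){ b.x - nx, b.y - ny };
    out[3] = (Vert){ b.x + nx, b.y + ny };
}

// One VBO upload and one draw call for all the quads, using the standard
// orphan-then-upload streaming idiom to avoid stalling on the GPU.
void drawQuads(GLuint vbo, const Vert *verts, int quadCount)
{
    GLsizeiptr bytes = (GLsizeiptr)(quadCount * 4 * sizeof(Vert));
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, NULL, GL_STREAM_DRAW); // orphan old storage
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, verts);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(Vert), (const GLvoid *)0);
    glDrawArrays(GL_QUADS, 0, quadCount * 4); // one call for every line
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}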
Large batches and VBOs. Also double-check how much time you're spending in your simulation update code.
Quick check: If you have a glBegin() anywhere near your main render loop you are probably Doing It Wrong.
Calculate all your vertex positions, then stream them into the GPU via GL_STREAM_DRAW. If you can tolerate some latency, use two VBOs and double-buffer.