UIBezierPath subtraction from another UIBezierPath - objective-c

I am making an app that allows the user to draw on the screen with a finger in different colors. The drawings are drawn with UIBezierPaths, but I need an eraser. I did have an eraser that was just another path stroked with the background image as its color, but that method causes memory issues. Instead, when the eraser is selected, I would like to delete the points from any path that is drawn over.
Unfortunately UIBezierPath doesn't have a subtraction function, so I want to make my own. If the eraser is selected, it will look at all the points that should be erased, check whether any of the existing paths contain those points, and then subdivide the path, leaving a blank spot. It should also handle a run of consecutive points rather than deleting them one at a time. In theory it makes sense, but I'm having trouble getting started on the implementation.
Anyone have any guidance to set me on the right 'path'?

At first glance, it appears that you could do hit detection on a UIBezierPath simply by using containsPoint:. That works fine if you want to determine whether a point is contained in the fill of a UIBezierPath, but it does not work for determining whether the point intersects only the stroke. Detecting whether a given point is in the stroke of a UIBezierPath can be done as described in the "Doing Hit-Detection on a Path" section at the bottom of this page; in fact, the code sample given there can be used either way. The basic idea is that you have to use the Core Graphics function CGContextPathContainsPoint.
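Here is a minimal sketch of that approach, assuming it is called while a graphics context is current (e.g. inside drawRect: or an image context); the method name strokeOfPath:containsPoint: is made up for illustration:

```objc
// A sketch of stroke hit-testing with CGContextPathContainsPoint.
- (BOOL)strokeOfPath:(UIBezierPath *)path containsPoint:(CGPoint)point
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    // Start from a clean path (the path is not part of the saved state),
    // then install the path and its line width so the stroke test uses them.
    CGContextBeginPath(context);
    CGContextAddPath(context, path.CGPath);
    CGContextSetLineWidth(context, path.lineWidth);

    // kCGPathStroke tests against the stroked outline rather than the fill
    // (pass kCGPathFill instead to get containsPoint:-style behavior).
    BOOL hit = CGContextPathContainsPoint(context, point, kCGPathStroke);

    CGContextRestoreGState(context);
    return hit;
}
```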
Depending on how large the eraser brush is, you will probably want to check several different points on the edge of the brush circle to see if they intersect the curve, and you'll probably have to iterate through your UIBezierPaths until you get a hit. You should be able to optimize the search by using the bounds of the UIBezierPath.
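A sketch of that search might look like the following; self.paths (an array of the drawn UIBezierPaths) and the eight rim samples are assumptions, and strokeOfPath:containsPoint: is the hypothetical helper above:

```objc
// Returns the first path whose stroke is touched by the eraser brush.
- (UIBezierPath *)pathHitByBrushAtCenter:(CGPoint)center radius:(CGFloat)radius
{
    for (UIBezierPath *path in self.paths) {
        // Cheap rejection: grow the path's bounds by the brush radius plus
        // half the stroke width before doing the expensive per-point tests.
        CGFloat pad = radius + path.lineWidth / 2.0;
        CGRect hitBounds = CGRectInset(path.bounds, -pad, -pad);
        if (!CGRectContainsPoint(hitBounds, center)) {
            continue;
        }

        // Test several points around the rim of the brush circle.
        for (NSInteger i = 0; i < 8; i++) {
            CGFloat angle = i * (2.0 * M_PI / 8.0);
            CGPoint rimPoint = CGPointMake(center.x + radius * cos(angle),
                                           center.y + radius * sin(angle));
            if ([self strokeOfPath:path containsPoint:rimPoint]) {
                return path;
            }
        }
    }
    return nil;  // nothing under the brush
}
```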
After you detect that a point intersects a UIBezierPath, you must do the actual split of the path. There appears to be a good outline of the algorithm in this post. The main idea there is to use De Casteljau's algorithm to perform the subdivision of the curve. There are various implementations of the algorithm that you should be able to find with a quick search, including some in C++.
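For a single cubic segment the subdivision itself is short. Here is a plain-C sketch (callable from Objective-C) that splits the control polygon p[0]..p[3] at parameter t; erasing then amounts to splitting at the hit, discarding the piece under the brush, and keeping the rest:

```objc
// Linear interpolation between two points.
static CGPoint LerpPoint(CGPoint a, CGPoint b, CGFloat t)
{
    return CGPointMake(a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t);
}

// One De Casteljau split: divides the cubic with control points p[0]..p[3]
// at parameter t into two cubics that together trace the same curve.
static void SplitCubicBezier(const CGPoint p[4], CGFloat t,
                             CGPoint left[4], CGPoint right[4])
{
    CGPoint p01  = LerpPoint(p[0], p[1], t);
    CGPoint p12  = LerpPoint(p[1], p[2], t);
    CGPoint p23  = LerpPoint(p[2], p[3], t);
    CGPoint p012 = LerpPoint(p01,  p12,  t);
    CGPoint p123 = LerpPoint(p12,  p23,  t);
    CGPoint mid  = LerpPoint(p012, p123, t);  // the point on the curve at t

    left[0]  = p[0]; left[1]  = p01;  left[2]  = p012; left[3]  = mid;
    right[0] = mid;  right[1] = p123; right[2] = p23;  right[3] = p[3];
}
```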

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating each point's position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity. I understand it has something to do with the tangent, but frankly, it is too late for my brain to function anymore.
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it in a usable format I would need to project the result into 2d coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader informing where to complete certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to also use a texture. It also could be rewritten to use polygons, but I am trying to keep calculations to a minimum in the shader.
Certain solutions I have tried before have worked, slightly, but this must be robust. The camera interfacing with the 3d object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3d model with points both in front, and behind.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.

How to make a PhysicsBody based on Alpha Values

Suppose there is a scene as follows:
There is a scene with the same size as the frame of the device. The scene has a red ball, which is able to move throughout the 'world'. This world is defined by black and white areas, where the ball is ONLY able to move in the area that is white. Here is a picture to help explain:
Parts of the black area can be erased, as if the user is drawing with white color over the scene. This means that the area in which the ball can move is constantly changing. Now, how would one go about implementing a physicsBody for the edge between the white and black areas?
I tried redefining the physicsBody every time it changes, but once the shape becomes complex enough, this isn't a viable solution at all. I tried creating a two-dimensional array of invisible 'boxes' that specify whether most of the area within each box is white or black; if the ball touched a box that was black, it would be pushed back. However, this required heavy rendering and far too much iteration over the array. Since my original array contained boxes a little bigger than a pixel, I tried making the boxes bigger to smooth the motion a little, but this eventually caused the ball to sometimes be stopped by white areas, or to appear to be partly inside the black area. This was undesirable, since the user could feel invisible barriers that they seemed to be hitting.
I tried searching for other methods to implement this 'destructible terrain' type of scene, but the solutions I found and tried were for other game engines. To further clarify, I am using Objective-C and Apple's SpriteKit framework, and I am not looking for a detailed class full of code, but rather some pseudo-code or implementation ideas that would lead me to a solution.
Thank you.
If your deployment target is iOS 8, this may be what you're looking for...
+ bodyWithTexture:alphaThreshold:size:
Here's the description from Apple's documentation:
"Creates a physics body from the contents of a texture. Only texels that exceed a certain transparency value are included in the physics body."
where a texel is a texture element. You will need to convert your image to an SKTexture before creating the SKPhysicsBody.
I'm not sure if it will allow for a hole in the middle like your drawing. If not, I suspect you can connect two physics bodies, a left half and a right half, to form the hole.
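A minimal sketch of how that might look each time the terrain changes; the helper name and the 0.5 alpha threshold are assumptions:

```objc
@import SpriteKit;

// Hypothetical helper: rebuild the terrain node's body after an erase.
- (void)rebuildPhysicsForTerrainImage:(UIImage *)terrainImage
                                 node:(SKSpriteNode *)terrainNode
{
    SKTexture *terrainTexture = [SKTexture textureWithImage:terrainImage];
    terrainNode.texture = terrainTexture;

    // Only texels above the alpha threshold become part of the body, so the
    // texture should hold the black (impassable) artwork with the white,
    // walkable areas fully transparent. 0.5 is an arbitrary threshold.
    terrainNode.physicsBody = [SKPhysicsBody bodyWithTexture:terrainTexture
                                              alphaThreshold:0.5
                                                        size:terrainTexture.size];
    terrainNode.physicsBody.dynamic = NO;  // static walls the ball collides with
}
```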

Merge two UIBezierPath points

I want to be able to select some area from this image, and change the color of the selected area.
To do this, I thought of using CALayer and UIBezierPath.
I've cleared the colored areas from the image, then I took each area's points and drew a UIBezierPath beneath the image.
I have 3 CALayers for each area, each CALayer has a UIBezierPath with predefined points.
When the user clicks on a layer, it shows the selected layer without filling the UIBezierPath, just with a border around it; the result looks like this:
I added a UIView over the image with opacity 0.6f and redrew all the CALayers on it. All the layers are hidden in the new UIView.
Every thing is working great, the next step is to merge the selected areas:
I took the points from the first area and added them to the points of the second area, then created a new UIBezierPath with the new points.
My problem is that the result is wrong:
How do I merge UIBezierPaths so that the points are in the correct order?
Is there a better way to accomplish something like this without using UIBezierPath?
From looking at the image above, the path is wrong because the sequence of points is not followed, which pretty much screws up your path. I don't think a Bezier path is the right tool for this in the first place, as you have rectangular, point-to-point connections. So you have more of a polygon than a Bezier path object. However, UIKit seems to bundle all of this into the UIBezierPath object (non-optimal naming, if you ask me).
The tricky thing here is to find out where the two shapes actually touch each other, and to keep adding the points in the same sequence as before, but tear up the vertical lines in the middle and connect the path over to the other structure.
Another alternative could be to use a bitmap and simply union the bitmaps and create a new shape. It largely depends on how your base data is represented and managed. You could also simply keep two shapes and just join them in a meta object to draw them concurrently.
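If drawing them concurrently is enough, here is a sketch of that meta-object idea; areaOnePath and areaTwoPath are assumed to already hold the two selected areas:

```objc
// Keep the two area paths as they are and draw them as one filled layer.
UIBezierPath *combined = [UIBezierPath bezierPath];
[combined appendPath:areaOnePath];
[combined appendPath:areaTwoPath];

CAShapeLayer *mergedLayer = [CAShapeLayer layer];
mergedLayer.path = combined.CGPath;
// With the non-zero fill rule (and both subpaths wound the same way),
// overlapping regions fill as one shape instead of cutting holes.
mergedLayer.fillRule = kCAFillRuleNonZero;
mergedLayer.fillColor = [UIColor colorWithWhite:0.0 alpha:0.6].CGColor;
```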

Beveling a path/shape in Core Graphics

I'm trying to bevel paths in core graphics. Has anyone done this already for arbitrary shapes and if so are they willing to share code?
I've included my implementation below. I use three variables to determine the bevel: CGFloat bevelSize, UIColor highlightColor, UIColor shadow. Note that the angle of the light source is always 135 degrees. I haven't finished implementing this yet, but here's essentially what I'm trying to do, split into two parts. Part one, generate focal points (a small sketch of the bisector step follows the list):
I find the bisectors of the angles between each pair of adjacent lines in the path.
For arcs, the bisector is the line perpendicular to the line created by the two end points of the arc, originating from the mid-point. This should take care of the majority of situations in which an arc is used. I do not take the bisector of an arc and a line. The arc bisector should work fine in those cases.
I then calculate focal points based on the intersections of adjacent bisectors.
If a focal point is within the shape it's used, otherwise it's discarded.
The purpose of generating the focal points is to 'shrink' the shape proportionally.
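As an illustration of the bisector step, here is a small plain-C sketch (the function name is made up, and it assumes the two edges are not collinear, i.e. the summed vector has nonzero length):

```objc
// Unit direction of the interior angle bisector at vertex b, where the
// angle is formed by the segments a->b and b->c.
static CGPoint AngleBisectorAtVertex(CGPoint a, CGPoint b, CGPoint c)
{
    // Unit vectors pointing from the vertex back along each edge.
    CGFloat lenBA = hypot(a.x - b.x, a.y - b.y);
    CGFloat lenBC = hypot(c.x - b.x, c.y - b.y);
    CGPoint ba = CGPointMake((a.x - b.x) / lenBA, (a.y - b.y) / lenBA);
    CGPoint bc = CGPointMake((c.x - b.x) / lenBC, (c.y - b.y) / lenBC);

    // Their sum points along the bisector; normalize it.
    CGFloat sx = ba.x + bc.x, sy = ba.y + bc.y;
    CGFloat len = hypot(sx, sy);
    return CGPointMake(sx / len, sy / len);
}
```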
The second part is a little more complicated. I essentially create each side/segment of the bevelled shape. I do this by drawing 'in' (by the bevelSize) each point of the original shape along the line that extends from the nearest focal point to the point in question. When I have two consecutive 'bevelPoints', I create a UIBezierPath that extends from the bevelPoints to the original points and back to the bevelPoints (note, this includes arcs). This creates a 'side/segment' I can fill. On straight sides, I simply fill with either the shadow or highlight color, depending on the angle of the side. For arcs, I determine the radian 'arc'. If that arc contains a transition angle (M_PI_4 or M_PI + M_PI_4), I fill it with a gradient (from shadow to highlight or highlight to shadow, whichever is appropriate); otherwise I fill it with a solid color.
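As an illustration of that 'draw in' step, a small sketch (the name BevelPoint is made up):

```objc
// Move an outline point toward its nearest focal point by bevelSize,
// staying on the line between the two.
static CGPoint BevelPoint(CGPoint outlinePoint, CGPoint focalPoint,
                          CGFloat bevelSize)
{
    CGFloat dx = focalPoint.x - outlinePoint.x;
    CGFloat dy = focalPoint.y - outlinePoint.y;
    CGFloat len = hypot(dx, dy);
    if (len <= bevelSize) {
        return focalPoint;  // degenerate: shape is thinner than the bevel here
    }
    return CGPointMake(outlinePoint.x + dx / len * bevelSize,
                       outlinePoint.y + dy / len * bevelSize);
}
```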
Update
I've split out my answer (see below) into a separate blog post. I'm no longer using the implementation details you see above, but I'm keeping them all there for reference. I hope this helps anybody else looking to use Core Graphics.
So I did finally manage to write a routine to bevel arbitrary shapes in core graphics. It ended up being a lot of work, a lot more than I originally anticipated, but it's been a fun project. I've posted an explanation and the code to perform the bevel on my blog. Just note that I didn't create a full class (or set of classes) for this. This code was integrated into a much larger class I use to do all my core graphics drawing. However, all the code you need to bevel most arbitrary shapes is there.
UPDATE
I've rewritten the code as straight C and moved it into a separate file. It no longer depends on any instance variables, so the beveling function can be called from within any context.
I explain the code and process I used to bevel here: Beveling Shapes In Core Graphics
Here is the code: Github : CGPathBevel.
The code isn't perfect: I'm open to suggestions/corrections/better ways of doing things.

top down game - checking, drawing enemy's line of sight area with obstacles

Examples of what I'm going to need:
I'm using cocos2d to draw a CCTMXTiledMap; on those tiles I'll have to draw the LOS triangle.
How would I test if the player is within that triangle, taking obstacles into account?
How would I draw the line-of-sight area like in the examples above?
BTW, I wasn't sure if this should have been posted here or on gamedev, don't be mad.
You may wish to look at point-in-polygon algorithms such as the ray casting algorithm described here.
You can break up the triangle to account for obstacles, or just make a more complex polygon. You should be able to find an implementation to suit your needs online.
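For reference, the core ray-casting test is only a few lines. A plain-C sketch (the classic PNPOLY formulation), usable with the triangle's vertices or with a more complex polygon that has obstacles cut in:

```objc
// Cast a ray to the right of the point and count how many polygon edges it
// crosses; an odd count means the point is inside.
static bool PointInPolygon(CGPoint p, const CGPoint *verts, int count)
{
    bool inside = false;
    for (int i = 0, j = count - 1; i < count; j = i++) {
        if (((verts[i].y > p.y) != (verts[j].y > p.y)) &&
            (p.x < (verts[j].x - verts[i].x) * (p.y - verts[i].y) /
                   (verts[j].y - verts[i].y) + verts[i].x)) {
            inside = !inside;
        }
    }
    return inside;
}
```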
You may also want to take a look at this article for some inspiration. You can maintain a tree-like structure: a root triangle (or frustum) that can be used to determine whether a point is in the general line of sight, with the children (triangles) taking obstacles into account. That way you can quickly eliminate points before the more complex checks.
In the image below, the dark blue dots are quickly eliminated from further checking, as they do not fall within the root viewing triangle.