I'd like to generate and draw some bezier paths based on an image in my app. The image will be really simple (black lines on a white background) such as the following:
Is there a method to process the image and create a bezier path based on the black lines? I'm completely lost here.
If you're wondering the reason for needing the image to be a bezier path, I'm going to be comparing the generated bezier path with another bezier path that the user draws (basically a picture-password that the user will have to draw).
If there's a better way to accomplish comparing an image to a bezier path, I'm all ears. Or, if bezier paths aren't the way to go, then let me know. Thanks!
Why not do it the other way around?
Let the user draw the path.
Create an image from the drawn path using Core Graphics (a sketch follows this list).
Compare both images with a framework like OpenCV.
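As a minimal sketch of the second step (the canvas size and the stroke width of 4 points are assumptions), you could render the user's UIBezierPath into a UIImage that matches the black-on-white reference image:

```swift
import UIKit

// Render a user-drawn path into a black-on-white UIImage so it can be
// handed to OpenCV (or any other image-comparison code).
func image(from path: UIBezierPath, canvasSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: canvasSize)
    return renderer.image { _ in
        // White background, black stroke, to match the reference image.
        UIColor.white.setFill()
        UIRectFill(CGRect(origin: .zero, size: canvasSize))
        UIColor.black.setStroke()
        path.lineWidth = 4   // assumed stroke width
        path.stroke()
    }
}
```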
You can't do this directly with Bézier lines: those are curves, while what you have is straight lines. If you wanted to solve for control points anyway, the solutions to this inverse problem are infinite, so you would first have to fix the number of control points the Bézier curve has.
I've been searching around the web for how to do this and I know it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are about detecting separate shapes or template matching.
What I need is a way to detect the contents between 3 circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image to grab what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my contents? If so, what if the image is a bit rotated/distorted on the camera?
I'm using Xamarin.iOS for this, but from what I've already seen I believe I need to go native, so any Objective-C example is welcome too.
EDIT
Imagining that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle of a given aspect ratio, i.e. perform a similarity transform. To define such a transform, three points are more than you need; two suffice. The construction of the affine matrix is a little tricky.
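Here is one way that construction could go, as a minimal Swift/Core Graphics sketch; the point names, and the idea of using two of the detected circle centres as the reference points, are assumptions:

```swift
import CoreGraphics

// Build the similarity transform (rotation + uniform scale + translation)
// that maps two detected reference points onto their known positions in
// the rectified output image.
func similarityTransform(from p0: CGPoint, _ p1: CGPoint,
                         to q0: CGPoint, _ q1: CGPoint) -> CGAffineTransform {
    let dp = CGPoint(x: p1.x - p0.x, y: p1.y - p0.y)
    let dq = CGPoint(x: q1.x - q0.x, y: q1.y - q0.y)
    let d = dp.x * dp.x + dp.y * dp.y          // |p1 - p0|^2

    // Treat the points as complex numbers: q = r * p + t, with r = dq / dp.
    let a = (dq.x * dp.x + dq.y * dp.y) / d    // scale * cos(angle)
    let b = (dq.y * dp.x - dq.x * dp.y) / d    // scale * sin(angle)

    // Translation chosen so that p0 maps exactly onto q0.
    let tx = q0.x - (a * p0.x - b * p0.y)
    let ty = q0.y - (b * p0.x + a * p0.y)

    // CGAffineTransform maps (x, y) -> (a*x + c*y + tx, b*x + d*y + ty).
    return CGAffineTransform(a: a, b: b, c: -b, d: a, tx: tx, ty: ty)
}
```

The same six coefficients, laid out as the 2×3 matrix [a, −b, tx; b, a, ty], are what warpAffine expects (with its default flags) to map source coordinates to destination coordinates; OpenCV's estimateAffinePartial2D can also recover such a matrix from point correspondences.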
I am using a CAShapeLayer to display a path that is updated as the user traces his/her finger on the screen. I'd like to transform this path to a rectangle that just encloses the path. I can compute the rectangle just fine; the tricky part is animating the transformation.
The docs say this about animating the path property of CAShapeLayer:
If the two paths have a different number of control points or segments the results are undefined.
So how do I go about adding more control points to the rectangular CGPath? Or is there a better way to achieve this animation? Thanks. =)
I think you have to manually construct a series of lines in the form of a rectangle. Iterate over the elements in the path, compute where on the bounding rectangle you want that point to transform to, and add that segment to the new rectangular path. You may have to use the same kind of element (e.g. cubic or quadratic Bézier curve that happens to form a straight line) so that the number of control points matches.
After the animation completes, you can reset the path to a pure rectangle if you want.
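As a minimal sketch of that idea (assuming the path and its bounding rectangle share a coordinate space, and using a made-up `project` helper that snaps a point to the nearest edge of the rectangle), you could walk the source path element by element and emit an element of the same kind whose points lie on the rectangle:

```swift
import UIKit

// Snap a point to the nearest point on the rectangle's edge.
func project(_ p: CGPoint, onto rect: CGRect) -> CGPoint {
    // Clamp into the rect first, then push to the closest edge.
    let x = min(max(p.x, rect.minX), rect.maxX)
    let y = min(max(p.y, rect.minY), rect.maxY)
    let dLeft = x - rect.minX, dRight = rect.maxX - x
    let dTop = y - rect.minY, dBottom = rect.maxY - y
    switch min(dLeft, dRight, dTop, dBottom) {
    case dLeft:  return CGPoint(x: rect.minX, y: y)
    case dRight: return CGPoint(x: rect.maxX, y: y)
    case dTop:   return CGPoint(x: x, y: rect.minY)
    default:     return CGPoint(x: x, y: rect.maxY)
    }
}

// Build a "rectangular" path with the same element structure as `source`.
func rectangularPath(matching source: CGPath, in rect: CGRect) -> CGPath {
    let result = CGMutablePath()
    source.applyWithBlock { element in
        let e = element.pointee
        switch e.type {
        case .moveToPoint:
            result.move(to: project(e.points[0], onto: rect))
        case .addLineToPoint:
            result.addLine(to: project(e.points[0], onto: rect))
        case .addQuadCurveToPoint:
            result.addQuadCurve(to: project(e.points[1], onto: rect),
                                control: project(e.points[0], onto: rect))
        case .addCurveToPoint:
            result.addCurve(to: project(e.points[2], onto: rect),
                            control1: project(e.points[0], onto: rect),
                            control2: project(e.points[1], onto: rect))
        case .closeSubpath:
            result.closeSubpath()
        @unknown default:
            break
        }
    }
    return result
}
```

Because the destination path has exactly the same sequence of element types as the source, a CABasicAnimation on the layer's path from one to the other should interpolate cleanly.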
I am creating a heat map as an overlay for an image. The sections of the heat map are not rectangles, which makes it a bit more difficult to overlay a color, because when you overlay a color on the image it also hits the transparent pixels of the image. What is the best way to go about this?
The first image is what I am trying to overlay, and the second is the end result I am looking for...
Use Bézier paths (UIBezierPath in UIKit, or CGPath in Core Graphics).
See also the section of the Quartz 2D Programming Guide about Bézier paths.
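As a minimal sketch of that suggestion (the section outline, the colour and the 0.4 alpha are all assumptions), you could describe each heat-map section with a UIBezierPath and fill it with a translucent colour on a CAShapeLayer, so only that region of the underlying image is tinted:

```swift
import UIKit

// Tint one non-rectangular heat-map section above an image view.
func addHeatOverlay(for sectionOutline: UIBezierPath,
                    color: UIColor,
                    above imageView: UIImageView) {
    let overlay = CAShapeLayer()
    overlay.path = sectionOutline.cgPath
    overlay.fillColor = color.withAlphaComponent(0.4).cgColor  // heat intensity
    overlay.strokeColor = nil                                  // no border
    imageView.layer.addSublayer(overlay)
}
```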
I want to be able to select some area from this image, and change the color of the selected area.
To do this, I thought of using CALayer and UIBezierPath.
I've cleared the colored area from the image, then I took each area point and drew a UIBezierPath beneath the image.
I have 3 CALayers, one for each area; each CALayer has a UIBezierPath with predefined points.
When the user clicks on a layer, it shows the selected layer without filling the UIBezierPath, just a border around the UIBezierPath; the result looks like this:
I added a UIView over the image with Opacity = 0.6f and redrew all the CALayers on it. All the layers are hidden in the new UIView.
Everything is working great; the next step is to merge the selected areas:
I took the points from the first area and added them to the points of the second area, then created a new UIBezierPath with the combined points.
My problem is that the result is wrong:
How do I merge UIBezierPaths with the correct point order?
Is there a better way to accomplish something like this without using UIBezierPath?
From looking at the image above, the path is wrong because the sequence of points is not followed, which pretty much screws up your path. I don't think a Bézier path is the right tool for this in the first place, as you have rectangular, point-to-point connections; what you really have is a polygon rather than a Bézier curve. However, UIKit bundles all of this into a UIBezierPath object (non-optimal naming, if you ask me).
The tricky thing here is to find out where the two shapes actually touch each other, then add the points in the same sequence as before, but tear up the shared vertical lines in the middle and connect the path to the other structure.
Another alternative could be to use a bitmap: simply union the bitmaps and create a new shape from the result. It largely depends on how your base data is represented and managed. You could also simply keep the two shapes and join them in a meta object to draw them concurrently.
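As a minimal sketch of that last option (the variable names are placeholders), UIBezierPath lets you append one path to another and treat the pair as a single drawable object; note that stroking it will still show the shared edge, so a true merged outline would need a real polygon union:

```swift
import UIKit

// Draw two predefined area paths as one combined selection.
func combinedSelection(_ areaA: UIBezierPath, _ areaB: UIBezierPath) -> UIBezierPath {
    let combined = UIBezierPath()
    combined.append(areaA)
    combined.append(areaB)
    combined.usesEvenOddFillRule = false   // nonzero rule: overlap fills as one region
    return combined
}
```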
I've created a canvas within which I display an image that is clipped when it goes over the edges. I can do this fine with a square-shaped frame; however, the frame I want to use is the one below. Is there any way I can clip the image inside the frame without having to add a non-transparent square border around the image, i.e. just using the black line that I've already drawn? (on iPad)
You'll need to use Core Graphics and Quartz to handle this sort of clipping/graphics manipulation.
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html#//apple_ref/doc/uid/TP30001066
If you're using UIBezierPath, you may be able to achieve the clipping you're after using the following process (a short sketch follows the steps below).
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_paths/dq_paths.html#//apple_ref/doc/uid/TP30001066-CH211-TPXREF101
Convert your UIBezierPath to a CGPath
Get your image into a CGContext
Add your CGPath to the context via CGContextAddPath
Clip your context using CGContextClip
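Along those lines, here is a minimal Swift sketch, assuming the frame path is closed and expressed in the image's coordinate space (the function name is made up):

```swift
import UIKit

// Clip a UIImage to the interior of an arbitrary UIBezierPath.
func clip(_ image: UIImage, to framePath: UIBezierPath) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        // Everything drawn after clip() is restricted to the path's interior.
        context.cgContext.addPath(framePath.cgPath)
        context.cgContext.clip()
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}
```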
Alternatively, if you don't want to be messing with paths (and depending on whether this technique is suitable for your situation, your description of the issue makes it hard to tell), it might be worth using image masking to achieve the effect you're after. See the first link and look under "Bitmap Images and Image Masks".