What is the benefit of using CGPath?

I am new to the CGPath concept but have a decent idea about Bézier curves.
I am creating a small freehand drawing program using a view.
In drawRect: I redraw a recorded set of lines from an array,
and while the mouse moves I add a new line to that array and refresh the view.
drawRect: is then called again and draws the recorded set of lines once more.
I was reading about CGPath, and it seems that internally it does something similar to what I am doing:
storing a set of lines and Bézier curves.
So is there any performance improvement if I use CGPath?

Hope this answers your question about CGPath.
You might not want to lose your path so easily, especially if it
depicts a complex scene you want to use over and over again. For that
reason, Quartz provides two data types for creating reusable
paths: CGPathRef and CGMutablePathRef.
Reference: the "Creating a Path" section of the Quartz 2D Programming Guide:
http://developer.apple.com/library/IOS/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_paths/dq_paths.html#//apple_ref/doc/uid/TP30001066-CH211-SW1
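
As a rough sketch of what that looks like in practice (assuming a Mac NSView and manual retain/release; the recordedPath ivar and the 2-point line width are illustrative, not from the question), the path is extended as the mouse moves and then stroked in one call in drawRect:, rather than re-issuing every line segment individually:

    #import <Cocoa/Cocoa.h>

    // Minimal sketch: cache the freehand strokes in a CGMutablePathRef
    // instead of rebuilding the line list on every -drawRect:.
    @interface CanvasView : NSView {
        CGMutablePathRef recordedPath;   // owned by the view
    }
    @end

    @implementation CanvasView

    - (void)mouseDragged:(NSEvent *)event {
        NSPoint p = [self convertPoint:[event locationInWindow] fromView:nil];
        if (recordedPath == NULL) {
            recordedPath = CGPathCreateMutable();
            CGPathMoveToPoint(recordedPath, NULL, p.x, p.y);
        } else {
            CGPathAddLineToPoint(recordedPath, NULL, p.x, p.y);
        }
        [self setNeedsDisplay:YES];
    }

    - (void)drawRect:(NSRect)dirtyRect {
        if (recordedPath == NULL) return;
        CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
        CGContextAddPath(ctx, recordedPath);   // the whole stroke in one call
        CGContextSetLineWidth(ctx, 2.0);
        CGContextStrokePath(ctx);
    }

    - (void)dealloc {
        CGPathRelease(recordedPath);
        [super dealloc];
    }

    @end

Whether this is actually faster than your array of lines depends on how much work your current loop does per segment, but it does hand Quartz a single reusable object that it can retain, hit-test, and transform cheaply.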

Related

UIBezierPath subtraction from another UIBezierPath

I am making an app that allows the user to draw on the screen with his finger in different colors. The drawings are drawn with UIBezierPaths, but I need an eraser. I did have an eraser that was just a path with the background image as the color, but this method causes memory issues. I would like to delete the points from any path that is drawn on when the eraser is selected.
Unfortunately UIBezierPath doesn't have a subtraction function, so I want to make my own. So if the eraser is selected, it will look at all the points that should be erased and see if any of the existing paths contain those points, then subdivide the path, leaving a blank spot. But it should be able to see how many points in a row to delete, rather than doing it one at a time. In theory it makes sense, but I'm having trouble getting started on the implementation.
Anyone have any guidance to set me on the right 'path'?
Upon first glance, it appears that you could do hit detection on a UIBezierPath by simply using containsPoint:. That works fine if you want to determine whether the point is contained in the fill of a UIBezierPath, but it does not work for determining whether only the stroke of the UIBezierPath intersects the point. Detecting whether or not a given point is in the stroke of a UIBezierPath can be done as described in the "Doing Hit-Detection on a Path" section at the bottom of this page. Actually, the code sample they give could be used either way. The basic idea is that you have to use the Core Graphics method CGContextPathContainsPoint.
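
A minimal sketch of that stroke hit test (the throwaway bitmap context is just one way to have a valid CGContext available outside of drawRect:; the function name is illustrative):

    #import <UIKit/UIKit.h>

    // Returns YES if `point` lies on the stroked outline of `path`.
    // Sketch only: a 1x1 bitmap context gives us a context to ask,
    // and the hit area follows the UIBezierPath's own line width.
    static BOOL StrokeOfPathContainsPoint(UIBezierPath *path, CGPoint point)
    {
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, 1, 1, 8, 0, space,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);

        CGContextSetLineWidth(ctx, path.lineWidth);
        CGContextAddPath(ctx, path.CGPath);
        BOOL hit = CGContextPathContainsPoint(ctx, point, kCGPathStroke);

        CGContextRelease(ctx);
        return hit;
    }

You would call something like this for a handful of points around the edge of the eraser brush, for each path whose bounds contain the brush.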
Depending on how large the eraser brush is, you will probably want to check several different points on the edge of the brush circle to see if they intersect the curve, and you'll probably have to iterate through your UIBezierPaths until you get a hit. You should be able to optimize the search by using the bounds of the UIBezierPath.
After you detect that a point intersects a UIBezierPath, you must do the actual split of the path. There appears to be a good outline of the algorithm in this post. The main idea there is to use De Casteljau's algorithm to perform the subdivision of the curve. There are various implementations of the algorithm that you should be able to find with a quick search, including some in C++.
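
If it helps to have the subdivision step spelled out, here is a sketch of De Casteljau's algorithm for one cubic segment (the helper names and the array out-parameters are my own, not from the linked post):

    #import <CoreGraphics/CoreGraphics.h>

    // Linear interpolation between two points.
    static CGPoint CGPointLerp(CGPoint a, CGPoint b, CGFloat t) {
        return CGPointMake(a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t);
    }

    // Split the cubic Bezier (p0, p1, p2, p3) at parameter t into two cubics,
    // writing their control points into left[0..3] and right[0..3].
    static void SplitCubicBezier(CGPoint p0, CGPoint p1, CGPoint p2, CGPoint p3,
                                 CGFloat t, CGPoint left[4], CGPoint right[4])
    {
        CGPoint p01  = CGPointLerp(p0, p1, t);
        CGPoint p12  = CGPointLerp(p1, p2, t);
        CGPoint p23  = CGPointLerp(p2, p3, t);
        CGPoint p012 = CGPointLerp(p01, p12, t);
        CGPoint p123 = CGPointLerp(p12, p23, t);
        CGPoint mid  = CGPointLerp(p012, p123, t);   // the point on the curve at t

        left[0]  = p0;  left[1]  = p01;  left[2]  = p012; left[3]  = mid;
        right[0] = mid; right[1] = p123; right[2] = p23;  right[3] = p3;
    }

Erasing a span then amounts to splitting at the parameter values on either side of the hit and keeping only the outer pieces.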

Simple Drawing App Design -- Hillegass Book, Ch. 18

I am working through Aaron Hillegass' Cocoa Programming for Mac OS X and am doing the challenge for Chapter 18. Basically, the challenge is to write an app that can draw ovals using your mouse, and then additionally, add saving/loading and undo support. I'm trying to think of a good class design for this app that follows MVC. Here's what I had in mind:
Have an NSView subclass that represents an oval (say JBOval) that I can use to easily draw an oval.
Have a main view (JBDrawingView) that holds JBOvals and draws them.
The thing is that I wasn't sure how to add archiving. Should I archive each JBOval? I think this would work, but archiving an NSView doesn't seem very efficient. Any ideas on a better class design?
Thanks.
Have an NSView subclass that represents an oval (say JBOval) that I can use to easily draw an oval.
That doesn't sound very MVC. “JBOval” sounds like a model class to me.
Have a main view (JBDrawingView) that holds JBOvals and draws them.
I do like this part.
My suggestion is to have each model object (JBOval, etc.) able to create a Bézier path representing itself. The JBDrawingView (and you should come up with a better name for that, as all views draw by definition) should ask each model object for its Bézier path, fill settings, and stroke settings, and draw the object accordingly.
This keeps the knowledge of how to draw (the path, line width, colors, etc.) in the various shape classes where they belong, while also keeping the actual drawing code in the view layer where it belongs.
The answer to where to put archiving code should be intuitively obvious from this point.
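A sketch of that division of labor, assuming retain-style properties and an ovals array in the view (the property names are illustrative, and memory management is elided):

    // Model: knows its geometry and how to describe it as a path.
    @interface JBOval : NSObject
    @property (nonatomic, assign) NSRect   rect;
    @property (nonatomic, retain) NSColor *fillColor;
    @property (nonatomic, retain) NSColor *strokeColor;
    - (NSBezierPath *)bezierPath;
    @end

    @implementation JBOval
    - (NSBezierPath *)bezierPath {
        return [NSBezierPath bezierPathWithOvalInRect:self.rect];
    }
    @end

    // View: asks each model for its path and settings, and does the drawing.
    @interface JBDrawingView : NSView {
        NSMutableArray *ovals;   // of JBOval
    }
    @end

    @implementation JBDrawingView
    - (void)drawRect:(NSRect)dirtyRect {
        for (JBOval *oval in ovals) {
            NSBezierPath *path = [oval bezierPath];
            [[oval fillColor] set];
            [path fill];
            [[oval strokeColor] set];
            [path stroke];
        }
    }
    @end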
Having a whole NSView for each oval seems rather heavyweight to me. I would descend them from NSObject instead and just have them draw to the current view.
They could also know how to archive themselves, although at that point you'd probably want to think about pulling them out of the view and thinking of them more as part of your model.
Your JBOval views would each be responsible for drawing themselves (basically drawing an oval path and filling it, within their bounds), but JBDrawingView would be responsible for mousing and dragging (and thereby sizing and positioning the JBOvals, which would be its subviews). The drawingView would do no drawing itself.
As far as archiving goes, you could have a model class to represent each oval (storing its bounding rectangle, or whatever other dimensions you choose to represent each oval with). You could then archive and unarchive these models to recreate your views.
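For the archiving piece, a sketch of such a model class conforming to NSCoding (the class name and key are illustrative):

    // Minimal model for one oval; archives just the data needed to recreate it.
    @interface JBOvalModel : NSObject <NSCoding>
    @property (nonatomic, assign) NSRect rect;
    @end

    @implementation JBOvalModel
    - (void)encodeWithCoder:(NSCoder *)coder {
        [coder encodeRect:self.rect forKey:@"rect"];
    }
    - (id)initWithCoder:(NSCoder *)coder {
        if ((self = [super init])) {
            self.rect = [coder decodeRectForKey:@"rect"];
        }
        return self;
    }
    @end

    // Saving and loading then reduces to archiving the array of models:
    // NSData  *data   = [NSKeyedArchiver archivedDataWithRootObject:ovalModels];
    // NSArray *models = [NSKeyedUnarchiver unarchiveObjectWithData:data];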
Finally, I use the JB prefix too, so … :P at you.

What is the suitable way to draw with CGContext?

I know that I cannot call CGContext to draw directly; I need to put the drawing logic in drawInContext: and trigger the drawing with "setNeedsDisplay". So I designed a command to execute, but it caused some problems, like this:
Why I can't draw in a loop? (Using UIView in iPhone)
I think CGContext is very different from my previous programming experience (I have used the HTML5 canvas and Java Swing, which let me keep adding details after I draw).
Actually, I want to know what the suitable way to implement this kind of thing is, from the point of view of an Apple programmer. Thanks.
There are three approaches to what you're asking. You can draw everything in drawRect:, you can manage multiple layers, or you can draw in an image. Each has advantages, but first you need to think correctly about the problem so that you don't destroy performance.
Drawing happens constantly. Every time anything changes, there may be quite a lot of drawing that has to be done. Not the whole screen usually, but still a lot of drawing. Since drawRect: and drawInContext: can be called many times, they need to be efficient. That means that you don't want to do a lot of expensive calculations, and you don't want to do a lot of useless drawing. "Useless" means "won't actually be displayed because it's off screen or obscured by other drawing."
So in the usual case, you put your actual drawing code in drawRect:, but you do all your calculations elsewhere, generally when your data changes. For example, you read your files, figure out your coordinates, create CGPaths, etc., whenever your data changes (which should be much less frequent than drawing). You save all the results into ivars, and then in drawRect: you just draw the final result. So in your loop example, you would probably have an NSArray of images in your view object, and in drawRect: you would draw them all in order.
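For the loop example specifically, a minimal sketch of that split might look like this (the images ivar and the vertical layout are assumptions; manual retain/release shown):

    #import <UIKit/UIKit.h>

    // Compute (or load) the images when the data changes; drawRect: only replays them.
    @interface TiledImageView : UIView {
        NSArray *images;   // of UIImage, prepared ahead of time
    }
    - (void)setImages:(NSArray *)newImages;
    @end

    @implementation TiledImageView

    - (void)setImages:(NSArray *)newImages {
        [newImages retain];
        [images release];
        images = newImages;
        [self setNeedsDisplay];   // request one redraw; no drawing here
    }

    - (void)drawRect:(CGRect)rect {
        CGFloat y = 0;
        for (UIImage *image in images) {   // just draw the precomputed results in order
            [image drawAtPoint:CGPointMake(0, y)];
            y += image.size.height;
        }
    }

    @end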
Another approach is to create a separate layer for each image, set the image as the content, and then attach the layer to the view. You're done at that point. There is no more drawing code you need to write. Quartz handles layers very efficiently, so this can be a very good solution to a wide variety of problems.
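A sketch of the layer-per-image variant, written as a method on the same kind of view (the QuartzCore import and the vertical layout are again assumptions):

    #import <QuartzCore/QuartzCore.h>

    // No drawRect: needed; each image becomes the contents of its own CALayer
    // and Quartz composites them for us.
    - (void)rebuildLayersWithImages:(NSArray *)images
    {
        CGFloat y = 0;
        for (UIImage *image in images) {
            CALayer *imageLayer = [CALayer layer];
            imageLayer.contents = (id)image.CGImage;
            imageLayer.frame = CGRectMake(0, y, image.size.width, image.size.height);
            [self.layer addSublayer:imageLayer];
            y += image.size.height;
        }
    }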
Finally, you can composite everything into an image, and then stick that image in an image view, or draw the image directly in the view, or attach the image to a layer. This is a good solution if you have very complicated drawing (particularly using CGPath). This can be expensive if you're constantly changing things, since you have to create a new image context, draw the old image into the new context, draw on top of it, and then create a new image from the context. But it's good for a complicated drawing that doesn't change often.
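A hedged sketch of that flattening step (the helper name and the 2-point stroke are illustrative; a scale of 0.0 means "use the screen scale"):

    #import <UIKit/UIKit.h>

    // Draw the existing canvas image plus one new path into an offscreen
    // context and return the flattened result (e.g. for a UIImageView).
    static UIImage *FlattenedCanvas(UIImage *canvasImage, CGPathRef newPath, CGSize size)
    {
        UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
        [canvasImage drawAtPoint:CGPointZero];          // previous, already-flattened drawing
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextAddPath(ctx, newPath);                 // the expensive CGPath work
        CGContextSetLineWidth(ctx, 2.0);
        CGContextStrokePath(ctx);
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }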
But you're correct, CGContext is not like a canvas. It needs to be redrawn every draw cycle. You can do that yourself, or you can use another view object (like UIImageView) to do it for you. But it has to be done one way or another.

Using Core Animation/CALayer for simple layered painting

I would like to create a custom NSView that takes a layered approach to painting. I imagine the majority of the layers would be the same width and height as the backing view.
Is it appropriate to use the Core Animation classes like CALayer for this task, even though I don't expect to need much animation? Is there a more appropriate approach?
To clarify, the view is not meant to be like a canvas in a Photoshop-like application. It's more of a data display that should allow for user interaction (selecting, moving, scrolling, etc.).
If it's display and layout you're after, I'd say that a CALayer-based architecture is a good choice. For the open source Core Plot framework, we construct all of our graphs and plot elements out of CALayers, and organize them in a regular hierarchy. CALayers are lightweight and use almost identical APIs between Mac and iPhone. They can even be made to respond to touch or mouse events.
For another example of a CALayer-based user interface, my iPhone application's entire equation entry interface is composed of CALayers, including the menu that slides up from below. Performance is slightly better than that of my previous UIView-based implementation, but the same code also works within my preliminary desktop version of the application.
For a drawing program, I would imagine it would be important to hold a buffer of the bitmap data. The only issue with using a CALayer is that the contents property is a CGImageRef. To turn that back into a graphics context for doing further drawing can be a bit of a pain. You'd have to initialize a new context, draw the bitmap data into it, then do whatever drawing operations you wanted to do, and finally turn that back into a CGImageRef. You probably wouldn't be able to avoid doing a number of pretty large memory allocations, which is virtually guaranteed to slow your program way down.
I would consider holding an off-screen buffer for each layer. Take a look at the Quartz CGLayerRef object. I think it probably does what you want to do: it's an off-screen buffer that holds things you might want to draw repeatedly. You can also quickly get a CGContextRef whenever you need it so you can do additional drawing. And you can always use that CGContextRef with NSGraphicsContext if you want to use Cocoa drawing methods.
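A sketch of the CGLayerRef idea inside an NSView's drawRect: (the scratchLayer CGLayerRef ivar and the sample ellipse are illustrative):

    // Create the offscreen CGLayer once, draw into it whenever the content
    // changes, and just stamp it into the view on every draw cycle.
    - (void)drawRect:(NSRect)dirtyRect
    {
        CGContextRef viewCtx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];

        if (scratchLayer == NULL) {
            scratchLayer = CGLayerCreateWithContext(viewCtx,
                                                    NSSizeToCGSize(self.bounds.size), NULL);

            CGContextRef layerCtx = CGLayerGetContext(scratchLayer);
            CGContextSetRGBFillColor(layerCtx, 0.2, 0.4, 0.8, 1.0);
            CGContextFillEllipseInRect(layerCtx, CGRectMake(10, 10, 80, 80));
            // ...later drawing can keep accumulating in layerCtx...
        }

        CGContextDrawLayerAtPoint(viewCtx, CGPointZero, scratchLayer);
    }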

How to generate graphs using integer values on the iPhone

I want to show a graph/bar chart on the iPhone. How do I do this without custom APIs?
You may want to investigate the Core Plot project [code.google.com]. Core Plot was the subject of this year's scientific coding project at WWDC and is pretty usable for some cases already. From its inception, Core Plot was intended for both OS X and iPhone uses. The source distribution (there hasn't been a binary release yet) comes with both OS X and iPhone example applications and there's info on the project wiki for using it as a library in an iPhone app. Here's an example of its current plotting capabilities.
Write your own. It's not easy, I'm in the process of doing the same thing right now. Here's how I'm doing it:
First, ignore any desire you may have to try using a UIScrollView if you want to allow zooming. It's totally not worth it.
Second, create something like a GraphElement protocol. I have a hierarchy that looks something like this:
GraphElement
GraphPathElement
GraphDataElement
GraphDataSupplierElement
GraphElement contains the basic necessary methods for a graph element, including how to draw, a maximum width (for zooming in), whether a point is within that element (for touches) and the standard touchBegan, touchMoved, and touchEnded functions.
GraphPathElement contains a CGPath, a line color and width, a fill color and a drawing mode. Whenever it's prompted to draw, it simply adds the path to the context, sets the colors and line width, and draws the path with the given drawing mode.
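As a sketch, that drawing step might look like this (the property names are assumptions based on the description above):

    // GraphPathElement: add the stored path, apply colors and line width,
    // and draw with the element's drawing mode.
    - (void)drawInContext:(CGContextRef)ctx
    {
        CGContextSaveGState(ctx);
        CGContextAddPath(ctx, self.path);
        CGContextSetStrokeColorWithColor(ctx, self.lineColor.CGColor);
        CGContextSetFillColorWithColor(ctx, self.fillColor.CGColor);
        CGContextSetLineWidth(ctx, self.lineWidth);
        CGContextDrawPath(ctx, self.drawingMode);   // e.g. kCGPathFillStroke
        CGContextRestoreGState(ctx);
    }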
GraphDataElement, as a subclass of GraphPathElement, takes in a set of data in x-y coordinates, a graph type (bar or line), a frame, and a bounds. The frame is the actual size of the created output CGPath. The bounds is the size of the data in input coordinates. Essentially, it lets you scale the data to the screen size.
It creates a graph by first calculating an affine transform to transform the bounds to the frame, then it loops through each point and adds it as data to a path, applying that transform to the point before adding it. How it adds data depends on the type.
If it's a bar graph, it creates a rectangle of width 0, origin at (x,frame.size.height-y), and height=y. Then it "insets" the graph by -3 pixels horizontally, and adds that to the path.
If it's a line graph, it's much simpler. It just moves to the first point, then for each other point, it adds a line to that point, adds a circle in a rect around that point, then moves back to that point to go on to the next point.
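A sketch of that path construction, pulled out into a C function for clarity (the function name and the dataPoints/barGraph parameters are illustrative; in the real design this logic would live inside GraphDataElement):

    // Scale the data bounds to the output frame, then build either a bar
    // path or a line-with-markers path. Caller releases with CGPathRelease.
    static CGPathRef CreateGraphPath(const CGPoint *dataPoints, size_t count,
                                     CGRect frame, CGRect dataBounds, BOOL barGraph)
    {
        CGAffineTransform scale =
            CGAffineTransformMakeScale(frame.size.width  / dataBounds.size.width,
                                       frame.size.height / dataBounds.size.height);
        CGMutablePathRef path = CGPathCreateMutable();

        for (size_t i = 0; i < count; i++) {
            CGPoint p = CGPointApplyAffineTransform(dataPoints[i], scale);

            if (barGraph) {
                // Zero-width rect at the data point, grown 3 px to each side.
                CGRect bar = CGRectMake(p.x, frame.size.height - p.y, 0, p.y);
                CGPathAddRect(path, NULL, CGRectInset(bar, -3, 0));
            } else {
                if (i == 0) CGPathMoveToPoint(path, NULL, p.x, p.y);
                else        CGPathAddLineToPoint(path, NULL, p.x, p.y);
                CGPathAddEllipseInRect(path, NULL, CGRectMake(p.x - 2, p.y - 2, 4, 4));
                CGPathMoveToPoint(path, NULL, p.x, p.y);   // continue the line from here
            }
        }
        return path;
    }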
GraphDataSupplierElement is the interface to my database that actually contains all the data. It determines what kind of graph it should be, formats the data into the required type for GraphDataElement, and passes it on, with the color to use for that particular graph.
For me, the x-axis is time, and is represented as NSTimeIntervals. The GraphDataSupplierElement contains a minDate and maxDate so that a GraphDateElement can draw the x-axis labels as required.
Once all this is done, you need to create the actual graph. You can go about it several ways. One option is to keep all the elements in an NSArray and whenever drawRect: is called, loop through each element and draw it. Another option is to create a CALayer for each element, and use the GraphPathElement as the CALayer's delegate. Or you could make GraphPathElement extend from CALayer directly. It's up to you on this one. I haven't gotten as far as trying CALayers yet, I'm still stuck in the simple NSArray stage. I may move to CALayers at some point, once I'm satisfied with how everything looks.
So, all in all, the idea is that you create the graph as one or many CGPaths beforehand, and just draw that when you need to draw the graph, rather than trying to actually parse data whenever you get a drawRect: call.
Scaling can be done by keeping the source data in your GraphDataElement and just changing the frame, so that scaling the bounds to the frame creates a CGPath wider than the screen, or whatever your needs are. I basically re-implemented my own pinch-zoom for my Graph UIView subclass that only scales horizontally, by changing its transform; then, on completion, I get the current frame, reset the transform to identity, set the frame back to the saved value, and set the frame of all of the GraphElements to the new frame as well, to make them scale. Then I just call [self setNeedsDisplay] to draw.
Anyway, that's a bit ramble-ish, but it's an outline of how I made it happen. If you have more specific questions, feel free to comment.