I'm doing a lot of animations in iOS using UIBezierPaths and have been manually tweaking the control points until I get the curve I want. Is there a program I can buy that will let me tweak the path with handles, the way the Photoshop pen tool does, and then report the control points for me? Or is there a way to take a curve drawn in Photoshop, Illustrator, or the like and convert it to a UIBezierPath?
I'm wasting so much time tweaking these animations; it seems like there has to be a better way.
UIBezierPath *runnerPath = [UIBezierPath bezierPath];
[runnerPath moveToPoint:P(1100, 463)];
[runnerPath addCurveToPoint:P(872, 357)
              controlPoint1:P(967, 453)
              controlPoint2:P(1022, 366)];
[runnerPath addCurveToPoint:P(503, 366)
              controlPoint1:P(664, 372)
              controlPoint2:P(699, 480)];
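Here P() is just a small CGPoint-building convenience macro, something like:
// convenience macro for CGPoints (exact definition may differ)
#define P(x, y) CGPointMake((x), (y))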
Answered by Kjuly: PaintCode, Perfect!
Try PaintCode:
Designing an attractive, resolution-independent user interface is hard, especially if you have to program your drawing code. PaintCode is a simple vector drawing app that instantly generates resolution-independent Objective-C and C#/MonoTouch drawing code.
I haven't tried it myself, but I think it'll do what you want.
However, as you can see, the price ($99) is quite high. :$
So what I do in that case is draw several candidate curves in different colors, pick the best one, and base the next pass on it. Of course, it's easier to do this in Photoshop or GIMP. Tedious work...
Related
The question is very simple. I just want to draw a simple circle around some part of an image with the mouse. Not a fancy circle; it doesn't even need to be a complete circle. I just want to circle part of an image to make it stand out.
As simple as this task is, I did not find any solution on Google; it always proposes very complex tasks, like how to draw a circle and the like, which are not at all what I want. The problem is that GIMP is very powerful and therefore non-intuitive for very simple use cases. Any help will be appreciated, as it will free me from making all these changes under Windows on another computer and sending the images back and forth via email.
Quickest:
Make a circle selection with the Ellipse Select tool (you can constrain it to a circle by holding down the Shift key after you start dragging).
Edit > Stroke Selection (preferably use "Line" mode, which also lets you make a dotted line).
That said, there are better tools for annotating images.
I am working on an iOS App that visualizes data as a line-graph. The graph is drawn as a CGPath in a fullscreen custom UIView and contains at most 320 data-points. The data is frequently updated and the graph needs to be redrawn accordingly – a refresh rate of 10/sec would be nice.
So far so easy. It seems however, that my approach takes a lot of CPU time. Refreshing the graph with 320 segments at 10 times per second results in 45% CPU load for the process on an iPhone 4S.
Maybe I underestimate the graphics work under the hood, but to me the CPU load seems like a lot for that task.
Below is my drawRect: implementation, which gets called each time a new set of data is ready. N holds the number of points, and points is a CGPoint* array with the coordinates to draw.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // set attributes
    CGContextSetStrokeColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextSetLineWidth(context, 1.f);

    // create path
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddLines(path, NULL, points, N+1);

    // stroke path
    CGContextAddPath(context, path);
    CGContextStrokePath(context);

    // clean up
    CGPathRelease(path);
}
I tried rendering the path to an offscreen CGContext first before adding it to the current layer, as suggested here, but without any positive result. I also fiddled with an approach drawing to the CALayer directly, but that too made no difference.
Any suggestions on how to improve performance for this task? Or is the rendering simply more work for the CPU than I realize? Would OpenGL make any sense/difference?
Thanks /Andi
Update: I also tried using UIBezierPath instead of CGPath. This post here gives a nice explanation why that didn't help. Tweaking CGContextSetMiterLimit et al. also didn't bring great relief.
Update #2: I eventually switched to OpenGL. It was a steep and frustrating learning curve, but the performance boost is just incredible. However, CoreGraphics' anti-aliasing algorithms do a nicer job than what can be achieved with 4x-multisampling in OpenGL.
This post here gives a nice explanation why that didn't help.
It also explains why your drawRect: method is slow.
You're creating a CGPath object every time you draw. You don't need to do that; you only need to create a new CGPath object every time you modify the set of points. Move the creation of the CGPath to a new method that you call only when the set of points changes, and keep the CGPath object around between calls to that method. Have drawRect: simply retrieve it.
You already found that rendering is the most expensive thing you're doing, which is good: you can't make the rendering itself any faster. Ideally, drawRect: should do nothing but render, so your goal should be to drive the time spent rendering as close as possible to 100%, which means moving everything else, as much as possible, out of the drawing code.
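A rough sketch of that restructuring (assuming a CGPathRef instance variable, here called _cachedPath, which isn't in the original code):
// Rebuild the path only when the data set changes, not on every draw.
- (void)updatePath {
    CGPathRelease(_cachedPath);                 // safe to call with NULL
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddLines(path, NULL, points, N + 1);
    _cachedPath = path;
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextSetLineWidth(context, 1.f);
    CGContextAddPath(context, _cachedPath);     // just replay the cached path
    CGContextStrokePath(context);
}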
Depending on how you make your path, it may be that drawing 300 separate paths is faster than one path with 300 points. The reason for this is that often the drawing algorithm will be looking to figure out overlapping lines and how to make the intersections look 'perfect' - when perhaps you only want the lines to opaquely overlap each other. Many overlap and intersection algorithms are N**2 or so in complexity, so the speed of drawing scales with the square of the number of points in one path.
It depends on the exact options (some of them default) that you use. You need to try it.
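If you want to test that theory, one quick variation (sketched here against the question's points array, which is assumed to hold N + 1 points) is to stroke each segment as its own tiny path:
// Stroke each segment separately so the renderer never has to resolve
// intersections within one large path.
for (size_t i = 0; i < N; i++) {
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, points[i].x, points[i].y);
    CGContextAddLineToPoint(context, points[i + 1].x, points[i + 1].y);
    CGContextStrokePath(context);
}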
tl;dr: You can set the drawsAsynchronously property of the underlying CALayer, and your CoreGraphics calls will use the GPU for rendering.
There is a way to control the rendering policy in CoreGraphics. By default, all CG calls are done via CPU rendering, which is fine for smaller operations, but is hugely inefficient for larger render jobs.
In that case, simply setting the drawsAsynchronously property of the underlying CALayer switches the CoreGraphics rendering engine to a GPU-backed, Metal-based renderer and vastly improves performance. This is true on both macOS and iOS.
I ran a few performance comparisons (involving several different CG calls, including CGContextDrawRadialGradient, CGContextStrokePath, and CoreText rendering using CTFrameDraw), and for larger render targets there was a massive performance increase of over 10x.
As can be expected, as the render target shrinks the GPU advantage fades until at some point (generally for render target smaller than 100x100 or so pixels), the CPU actually achieves a higher framerate than the GPU. YMMV and of course this will depend on CPU/GPU architectures and such.
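Switching it on is a one-liner; for a UIView whose drawRect: does the Core Graphics work, something like:
// Let Core Animation queue the CG commands and rasterize them off the
// main thread (GPU-backed where available).
self.layer.drawsAsynchronously = YES;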
Have you tried using UIBezierPath instead? UIBezierPath uses CGPath under-the-hood, but it'd be interesting to see if performance differs for some subtle reason. From Apple's Documentation:
For creating paths in iOS, it is recommended that you use UIBezierPath instead of CGPath functions unless you need some of the capabilities that only Core Graphics provides, such as adding ellipses to paths. For more on creating and rendering paths in UIKit, see "Drawing Shapes Using Bezier Paths."
I would also try setting different properties on the CGContext, in particular different line join styles using CGContextSetLineJoin(), to see if that makes any difference.
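For example, something along these lines in the drawing code (the specific values are just for experimentation):
// Cheaper joins/caps than the defaults; worth benchmarking.
CGContextSetLineJoin(context, kCGLineJoinBevel);
CGContextSetLineCap(context, kCGLineCapButt);
CGContextSetMiterLimit(context, 1.f);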
Have you profiled your code using the Time Profiler instrument in Instruments? That's probably the best way to find where the performance bottleneck is actually occurring, even when the bottleneck is somewhere inside the system frameworks.
I am no expert on this, but my first suspicion would be that the time is going into updating 'points' rather than into the rendering itself. To check, you could simply stop updating the points, keep rendering the same path repeatedly, and see whether it takes nearly the same CPU time. If not, you can improve performance by focusing on the updating algorithm.
If it IS truly a rendering problem, I think OpenGL should certainly improve performance because, in theory, it renders all 320 lines at the same time.
I am new to the CGPath concept and have a decent idea about Bezier curves.
I am creating a small free-hand drawing program using a view.
In drawRect: I keep drawing a recorded set of lines from an array, and while the mouse moves I add a new line to that array and refresh the view. drawRect: is called again and draws the recorded set of lines again.
I was reading about CGPath, and it says that internally it does something similar to what I am doing: storing a set of lines and Bezier curves.
So is there any performance improvement if I use CGPath?
Hope this answers your question about CGPath:
You might not want to lose your path so easily, especially if it depicts a complex scene you want to use over and over again. For that reason, Quartz provides two data types for creating reusable paths, CGPathRef and CGMutablePathRef.
Reference: the "Creating a Path" section of the Quartz 2D Programming Guide:
http://developer.apple.com/library/IOS/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_paths/dq_paths.html#//apple_ref/doc/uid/TP30001066-CH211-SW1
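A minimal sketch of using a reusable mutable path for the free-hand case (assuming an AppKit NSView, since the question mentions mouse movement, and a CGMutablePathRef ivar I'm calling _strokePath, created once with CGPathCreateMutable()):
// Start a new subpath where the mouse goes down...
- (void)mouseDown:(NSEvent *)event {
    NSPoint p = [self convertPoint:event.locationInWindow fromView:nil];
    CGPathMoveToPoint(_strokePath, NULL, p.x, p.y);
}

// ...append to the same path while dragging...
- (void)mouseDragged:(NSEvent *)event {
    NSPoint p = [self convertPoint:event.locationInWindow fromView:nil];
    CGPathAddLineToPoint(_strokePath, NULL, p.x, p.y);
    [self setNeedsDisplay:YES];
}

// ...and replay the whole path in one stroke instead of looping over
// an array of individual lines.
- (void)drawRect:(NSRect)dirtyRect {
    CGContextRef context = [[NSGraphicsContext currentContext] CGContext];
    CGContextAddPath(context, _strokePath);
    CGContextStrokePath(context);
}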
So I want to have a view (NSView, NSOpenGLView, something CG related?) which basically displays a map. Such as:
http://dump.tanaris4.com/map.png
Obviously that looks horrible, but I did it using an NSView, and it draws SO slowly. Clearly it's not designed for this.
I just need to allow users to click on the individual (x,y) coordinates to make changes, and zoom into a certain area (to see it better).
Should I go the OpenGL route? And if so - any suggestions as to how to get started? (I was able to follow the guide to draw a triangle, so that's good).
I did find this post on zooming in an NSView: How to implement zoom/scale in a Cocoa AppKit-application
My concern is that if I'm drawing over 6000 coordinates and the lines connecting them, this isn't efficient at all.
I don't think using OpenGL would do any good here. The problem does not seem to be the actual painting, but rather the rendering strategy. You would need a scene graph of some kind to dynamically handle level of detail and culling.
Qt packages all of this in a nice class, QGraphicsScene (see http://doc.qt.nokia.com/latest/qgraphicsscene.html for reference, and http://doc.qt.nokia.com/main-snapshot/demos-chip.html for an example).
Some basic concepts you should consider using:
http://en.wikipedia.org/wiki/Scene_graph
http://en.wikipedia.org/wiki/Quadtree
http://en.wikipedia.org/wiki/Level_of_detail
Try using Core Graphics for this; there really is a lot that can be done with it. Watch the video "Practical Drawing for iOS Developers" from WWDC 2011; it should give an overview of what can be done with CG.
I believe even Core Graphics will suffice for what you want to achieve, and it should work under a UIView if you draw your view's rectangle entirely in the drawRect: method of your UIView (you must override this method). Please see the UIView Class Reference. I have a mobile application that logs points on a MapKit map, kind of like Nike+, and it certainly works well for massive amounts of points/line segments. There is no reason why this simple approach cannot work for you as well.
My code is as follows:
[[NSColor whiteColor] set];
// `path' is a bezier path with more than 1000 points in it
[path setLineWidth:2];
[path setLineJoinStyle:NSRoundLineJoinStyle];
[path stroke];
// some other stuff...
Running the Time Profiler in Instruments tells me my app is spending 93.5% of the time in the last line, [path stroke], and Quartz Debug tells me my app is running at less than 10 fps (another view changing position on top of it keeps triggering the update).
I'm looking for ways to improve the performance of stroking the bezier path. Sometimes paths with more than 1000 points draw very quickly at >60 fps, yet in some extreme cases, even with the same number of points (perhaps when the points are too far from each other, or too dense?), the performance becomes really sluggish.
I'm not sure what I can do about this. I think caching the view as a bitmap rep would help, but it can't really help with live resizing.
Edit: commenting out the line [path setLineWidth:2]; certainly helps, but then the path looks really too 'thin'.
You can adjust the flatness of the curve using the setFlatness: method; higher values increase rendering speed at the expense of accuracy. You should use a higher value during live resizes, for example.
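As a sketch of that idea (the flatness value of 10.0 is arbitrary; 0.6 is the documented default):
// Coarser flatness while a live resize is in progress, full accuracy otherwise.
- (void)drawRect:(NSRect)dirtyRect {
    [path setFlatness:([self inLiveResize] ? 10.0 : 0.6)];
    [[NSColor whiteColor] set];
    [path setLineWidth:2];
    [path setLineJoinStyle:NSRoundLineJoinStyle];
    [path stroke];
}

- (void)viewDidEndLiveResize {
    [super viewDidEndLiveResize];
    [self setNeedsDisplay:YES];   // redraw precisely once resizing ends
}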
Back when I asked the Quartz 2D team at Apple about this, they told me that the biggest factor in the performance of stroking a path with a large number of points is how many times the path intersects itself. There's a lot of work that goes into properly anti-aliasing an intersection.
Do you build the path at each draw - does it change from drawing to drawing? It sounds like it does change. There may be caching if you draw the same path over and over, so try creating it and keeping it around until it changes. It may help.
You can also drop down an API level or two; perhaps CALayer objects may do what you want. In other words, do you really have a 1000-point line that needs to be curved to connect the points? You could use a bunch of CALayer objects to draw the line segments.
The math on these processors is fast. You could also write a routine to throw out unneeded points, cutting the number from 1000 to around 200, say, by eliminating points that are close together, etc.
My bet is on the math to throw out points that don't make any visual difference. The flatness thing sounds interesting too; it may be that by going totally flat you are effectively drawing line segments.
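A very rough sketch of that kind of thinning (the array names and the 3-point threshold are purely illustrative; a real implementation might use something like Douglas-Peucker instead):
// Keep a point only if it is at least minDistance away from the last kept point.
CGFloat minDistance = 3.0;
NSUInteger keptCount = 0;
for (NSUInteger i = 0; i < pointCount; i++) {
    if (keptCount == 0 ||
        hypot(allPoints[i].x - keptPoints[keptCount - 1].x,
              allPoints[i].y - keptPoints[keptCount - 1].y) >= minDistance) {
        keptPoints[keptCount++] = allPoints[i];
    }
}
// keptPoints[0 .. keptCount-1] is the thinned set to build the path from.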
What do you mean when you say "another view changing position on top of it is always causing the update"?
Do you mean that you are not redrawing the view at 60 fps? Then that is why you are not seeing 60 fps.
When Instruments tells you your app is spending 93.5% of the time doing something, it doesn't mean 93.5% of all available time; it means 93.5% of the CPU cycles your app consumed. I.e., it could be next to no time at all. That alone isn't enough to determine that you need to optimise. I'm not saying you don't need to, or that stroking massive beziers isn't dog slow. Just that the figure alone doesn't mean much.