My code is as follows:
[[NSColor whiteColor] set];
// `path` is a bezier path with more than 1000 points in it
[path setLineWidth:2];
[path setLineJoinStyle:NSRoundLineJoinStyle];
[path stroke];
// some other stuff...
Running the Time Profiler in Instruments tells me my app is spending 93.5% of its time in the last line, [path stroke], and Quartz Debugger shows the app running at less than 10 fps (another view changing position on top of it constantly forces the redraw).
I'm looking for ways to improve the performance of stroking the bezier path. Sometimes a path with more than 1000 points draws quickly, at over 60 fps; in extreme cases, though, even with the same number of points (perhaps when the points are too far apart, or too dense?) performance becomes really sluggish.
I'm not sure what I can do about this. Caching the view as a bitmap rep helps, but it can't help with live resize.
Edit: commenting out the line [path setLineWidth:2]; certainly helps, but then the path looks too 'thin'.
You can adjust the flatness of the curve using the method setFlatness:, higher values increase the rendering speed at the expense of accuracy. You should use a higher value during live resizes, for example.
Back when I asked the Quartz 2D team at Apple about this, they told me that the biggest factor in the performance of stroking a path with a large number of points, is how many times the path intersects itself. There's a lot of work that goes into properly anti-aliasing an intersection.
Do you build the path at each draw - does it change from drawing to drawing? It sounds like it does. There may be caching if you draw the same path over and over, so try creating it once and keeping it around until it changes. It may help.
You can also drop down an API level or two; CALayer objects may do what you want. In other words: do you really have a 1000-point line that needs curves connecting the points? You could use a bunch of CALayer objects to draw plain line segments.
The math on these processors is fast. You could also write a routine to throw out unneeded points, cutting the number from 1000 down to around 200, say, by eliminating points that are too close together, etc.
My bet is on the math to throw out points that make no visual difference. The flatness idea sounds interesting too - it may be that by going totally flat you end up drawing plain line segments.
What do you mean when you say "another view changing position on top of it is always causing the update"?
Do you mean that you are not redrawing the view at 60 fps? If so, that is why you are not seeing 60 fps.
When Instruments tells you your app is "spending 93.5% of the time" doing something, it doesn't mean 93.5% of all available time; it means 93.5% of the CPU cycles your app consumed, which could be hardly any time at all. That alone isn't enough to conclude you need to optimise. I'm not saying you don't need to, or that stroking massive beziers isn't dog slow - just that the number by itself doesn't mean much.
Related
I'm doing a lot of animations in iOS using UIBezierPaths and have been manually tweaking the control points until I get the curve I want. Is there a program I can buy that will let me tweak the path with handles, like the Photoshop pen tool does, and then give me the control points? Or is there a way to take a curve drawn in Photoshop, Illustrator, or the like and convert it to a UIBezierPath?
I'm wasting so much time tweaking these animations; it seems like there has to be a better way.
// `P` is presumably a shorthand macro for CGPointMake
UIBezierPath *runnerPath = [UIBezierPath bezierPath];
[runnerPath moveToPoint:P(1100, 463)];
[runnerPath addCurveToPoint:P(872, 357)
controlPoint1:P(967, 453)
controlPoint2:P(1022, 366)];
[runnerPath addCurveToPoint:P(503, 366)
controlPoint1:P(664, 372)
controlPoint2:P(699, 480)];
Answered by Kjuly: PaintCode, Perfect!
Try PaintCode:
Designing an attractive, resolution-independent user interface is hard, especially if you have to program your drawing code. PaintCode is a simple vector drawing app that instantly generates resolution-independent Objective-C and C#/MonoTouch drawing code.
I haven't tried it, but I think it will do what you want.
However, as you can see, the price ($99) is quite high.
So what I do in this case is draw multiple candidate curves together in different colors and pick the best one for the next drawing. Of course, it is better to do this in Photoshop or GIMP. Tedious work...
I am working on an iOS App that visualizes data as a line-graph. The graph is drawn as a CGPath in a fullscreen custom UIView and contains at most 320 data-points. The data is frequently updated and the graph needs to be redrawn accordingly – a refresh rate of 10/sec would be nice.
So far so easy. It seems however, that my approach takes a lot of CPU time. Refreshing the graph with 320 segments at 10 times per second results in 45% CPU load for the process on an iPhone 4S.
Maybe I underestimate the graphics-work under the hood, but to me the CPU load seems a lot for that task.
Below is my drawRect: method, which gets called each time a new set of data is ready. N holds the number of points and points is a CGPoint* array with the coordinates to draw.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // set attributes
    CGContextSetStrokeColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextSetLineWidth(context, 1.f);

    // create path
    CGMutablePathRef path = CGPathCreateMutable();
    // note: the count argument is the number of points, so the array must really hold N+1 entries
    CGPathAddLines(path, NULL, points, N + 1);

    // stroke path
    CGContextAddPath(context, path);
    CGContextStrokePath(context);

    // clean up
    CGPathRelease(path);
}
I tried rendering the path into an offscreen CGContext first and then adding that to the current layer, as suggested here, but without any positive result. I also fiddled with an approach drawing to the CALayer directly, but that too made no difference.
Any suggestions for improving performance here? Or is the rendering simply more work for the CPU than I realize? Would OpenGL make any sense/difference?
Thanks /Andi
Update: I also tried using UIBezierPath instead of CGPath. This post here gives a nice explanation why that didn't help. Tweaking CGContextSetMiterLimit et al. also didn't bring great relief.
Update #2: I eventually switched to OpenGL. It was a steep and frustrating learning curve, but the performance boost is just incredible. However, CoreGraphics' anti-aliasing algorithms do a nicer job than what can be achieved with 4x-multisampling in OpenGL.
This post here gives a nice explanation why that didn't help.
It also explains why your drawRect: method is slow.
You're creating a CGPath object every time you draw. You don't need to do that; you only need to create a new CGPath object every time you modify the set of points. Move the creation of the CGPath to a new method that you call only when the set of points changes, and keep the CGPath object around between calls to that method. Have drawRect: simply retrieve it.
You already found that rendering is the most expensive thing you're doing, which is good: You can't make rendering faster, can you? Indeed, drawRect: should ideally do nothing but rendering, so your goal should be to drive the time spent rendering as close as possible to 100%—which means moving everything else, as much as possible, out of drawing code.
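The caching pattern described above, sketched in plain C with a dirty flag (the build_path stand-in and all names are mine; it takes the place of CGPathCreateMutable/CGPathAddLines):

```c
#include <stdlib.h>

typedef struct { double x, y; } CPoint;

typedef struct {
    CPoint *points;     /* current data set */
    int     count;
    void   *cachedPath; /* stands in for a CGPathRef */
    int     dirty;      /* set when points change, cleared on rebuild */
    int     rebuilds;   /* for illustration: how often we actually rebuilt */
} GraphModel;

/* Placeholder for CGPathCreateMutable + CGPathAddLines. */
static void *build_path(const CPoint *pts, int n) {
    (void)pts; (void)n;
    return malloc(1);
}

/* Call whenever new data arrives; cheap, just flags the cache stale. */
static void set_points(GraphModel *g, CPoint *pts, int n) {
    g->points = pts;
    g->count  = n;
    g->dirty  = 1;
}

/* Call from drawRect:; rebuilds only when the data actually changed. */
static void *current_path(GraphModel *g) {
    if (g->dirty || g->cachedPath == NULL) {
        free(g->cachedPath);
        g->cachedPath = build_path(g->points, g->count);
        g->dirty = 0;
        g->rebuilds++;
    }
    return g->cachedPath;
}
```

With this split, repeated redraws between data updates reuse the same path object, so drawRect: spends its time stroking rather than rebuilding.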
Depending on how you build your path, drawing 300 separate paths may be faster than one path with 300 points. The reason is that the drawing algorithm often tries to find overlapping lines and make the intersections look 'perfect', when perhaps you only want the lines to overlap each other opaquely. Many overlap and intersection algorithms are O(N^2) or so in complexity, so drawing time scales with the square of the number of points in one path.
It depends on the exact options (some of them default) that you use. You need to try it.
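To make the quadratic cost concrete: a naive self-intersection check compares every segment against every other, so the work grows with the square of the point count. A small illustration in plain C (a counter only; real stroking additionally has to compute and anti-alias each crossing):

```c
typedef struct { double x, y; } Vec;
typedef struct { Vec a, b; } Seg;

/* Signed area test: which side of line o-p does q fall on? */
static double cross3(Vec o, Vec p, Vec q) {
    return (p.x - o.x) * (q.y - o.y) - (p.y - o.y) * (q.x - o.x);
}

/* True if segments s and t properly cross (shared endpoints ignored). */
static int intersects(Seg s, Seg t) {
    double d1 = cross3(s.a, s.b, t.a);
    double d2 = cross3(s.a, s.b, t.b);
    double d3 = cross3(t.a, t.b, s.a);
    double d4 = cross3(t.a, t.b, s.b);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

/* Count self-intersections of an n-point polyline: the pairwise loop
   below is O(n^2), which is why dense, tangled paths stroke so much
   more slowly than smooth ones with the same number of points. */
static int self_intersections(const Vec *pts, int n) {
    int count = 0;
    for (int i = 0; i + 1 < n; i++)
        for (int j = i + 2; j + 1 < n; j++) { /* skip adjacent segments */
            Seg s = { pts[i], pts[i + 1] };
            Seg t = { pts[j], pts[j + 1] };
            if (intersects(s, t)) count++;
        }
    return count;
}
```

Splitting one tangled path into many short paths sidesteps this pairwise work, at the cost of less polished joins where segments merely overlap.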
tl;dr: You can set the drawsAsynchronously property of the underlying CALayer, and your CoreGraphics calls will use the GPU for rendering.
There is a way to control the rendering policy in CoreGraphics. By default, all CG calls are done via CPU rendering, which is fine for smaller operations, but is hugely inefficient for larger render jobs.
In that case, simply setting the drawsAsynchronously property of the underlying CALayer switches the CoreGraphics rendering engine to a GPU, Metal-based renderer and vastly improves performance. This is true on both macOS and iOS.
I ran a few performance comparisons (involving several different CG calls, including CGContextDrawRadialGradient, CGContextStrokePath, and CoreText rendering using CTFrameDraw), and for larger render targets there was a massive performance increase of over 10x.
As can be expected, as the render target shrinks the GPU advantage fades until at some point (generally for render target smaller than 100x100 or so pixels), the CPU actually achieves a higher framerate than the GPU. YMMV and of course this will depend on CPU/GPU architectures and such.
Have you tried using UIBezierPath instead? UIBezierPath uses CGPath under-the-hood, but it'd be interesting to see if performance differs for some subtle reason. From Apple's Documentation:
For creating paths in iOS, it is recommended that you use UIBezierPath instead of CGPath functions unless you need some of the capabilities that only Core Graphics provides, such as adding ellipses to paths. For more on creating and rendering paths in UIKit, see "Drawing Shapes Using Bezier Paths."
I would also try setting different properties on the CGContext, in particular different line join styles using CGContextSetLineJoin(), to see if that makes any difference.
Have you profiled your code using the Time Profiler instrument in Instruments? That's probably the best way to find where the performance bottleneck is actually occurring, even when the bottleneck is somewhere inside the system frameworks.
I am no expert on this, but my first suspicion is that updating `points` could be taking the time, rather than the rendering itself. To check, you could stop updating the points, repeatedly render the same path, and see whether it takes nearly the same CPU time. If not, you can improve performance by focusing on the updating algorithm.
If it IS truly a rendering problem, I think OpenGL should certainly improve performance, because in principle it renders all 320 lines at the same time.
I am drawing a path into a CGContext following a set of points collected from the user. There seems to be some random input jitter causing some of the line edges to look jagged. I think a slight feather would solve this problem. If I were using OpenGL ES I would simply apply a feather to the sprite I am stroking the path with; however, this project requires me to stay in Quartz/CoreGraphics and I can't seem to find a similar solution.
I have tried drawing 5 lines with each line slightly larger and more transparent to approximate a feather. This produces a bad result and slows performance noticeably.
This is the line drawing code:
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), (int)lastPostionDrawing1.x, (int)lastPostionDrawing1.y);
CGContextAddCurveToPoint(UIGraphicsGetCurrentContext(), ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, lastPostionDrawing2.x, lastPostionDrawing2.y);
[currentPath addCurveToPoint:CGPointMake(lastPostionDrawing2.x - (int)furthestLeft.x + (int)penSize, lastPostionDrawing2.y) controlPoint1:CGPointMake(ctrl1_x, ctrl1_y) controlPoint2:CGPointMake(ctrl2_x, ctrl2_y)];
I'm going to go ahead and assume that your CGContext still has anti-aliasing turned on, but if not, that's the obvious first thing to try, as @Davyd's comment suggests: CGContextSetShouldAntialias is the function of interest.
Assuming that's not the problem, and the line is being anti-aliased by the context but you still want something 'softer', I can think of a couple of approaches that should be faster than stroking 5 times.
First, you can try getting the stroked path (i.e. a path that describes the outline of the stroke of the current path) using CGContextReplacePathWithStrokedPath. You can then fill this path with a gradient (or whatever other fill technique gives the desired results). This works well for straight lines, but won't be straightforward for curved paths, since the gradient fills the area of the stroked path and will be either linear or radial.
Another perhaps less obvious option, might be to abuse CG's shadow drawing for this purpose. The function you want to look up is: CGContextSetShadowWithColor Here's the method:
Save the GState: CGContextSaveGState
Get the bounding box of the original path
Copy the path, translating it away from itself by 2.0 * bbox.width using CGPathCreateCopyByTransformingPath (note: use the X direction only, that way you don't need to worry about flips in the context)
Clip the context to the original bbox using CGContextClipToRect
Set a shadow on the context with CGContextSetShadowWithColor:
Some minimal blur (Start with 0.5 and go from there. The blur parameter is non-linear, and IME it's sort of a guess and check operation)
An offset equal to -2.0 * bbox width, and 0.0 height, scaled to base space. (Note: these offsets are in base space. This will be maddening to figure out, but assuming you're not adding your own scale transforms, the scale factor will either be 1.0 or 2.0, so practically speaking, you'll be setting an offset.width of either -2.0*bbox.width or -4.0*bbox.width)
A color of your choosing.
Stroke the translated-away path.
Pop the GState CGContextRestoreGState
This should leave you with "just" the shadow, which you can hopefully tweak to achieve the results you want.
All that said, CG's shadow drawing performance is, IME, less than completely awesome, and less than completely deterministic. I would expect it to be faster than stroking the path 5 times with 5 different strokes, but not overwhelmingly so.
It'll come down to how much achieving this effect is worth to you.
I'm working on a simple program that has 500 "particles" that have an x and a y coordinate. They move around the screen and respond to touches. As I go past 500 particles the app starts running much slower. Using CPU sampler I discovered that drawing the particles is taking up the most CPU time.
This is the drawing code:
CGContextSetFillColorWithColor(context, [UIColor colorWithRed:red/255 green:green/255 blue:blue/255 alpha:1].CGColor);
CGRect rectangle = CGRectMake(xpos,ypos,9,9);
CGContextAddEllipseInRect(context, rectangle);
CGContextFillPath(context);
red, green, and blue are floats used to change the color of the particles based on their speed, but this isn't the problem.
This is how I was taught to use Quartz, and it works just fine for most drawing, but this code is executed 500+ times per frame and the game starts slowing down. I've run the program with CPU Sampler with the drawing code commented out, and there is hardly any CPU usage despite all the math going on in the background.
Is there a more efficient way to draw circles in iOS?
You can try two different approaches to help speed up performance:
1. Use a prerendered UIImage/CGImage instead of paths (this won't let you change colors/sizes dynamically, but maybe your app only needs a limited range).
2. Use OpenGL with GL_POINTS.
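The prerendering idea, sketched in plain C: rasterize the 9x9 circle's pixel coverage once into an alpha mask, then reuse (blit) that image for every particle instead of filling 500 ellipse paths per frame. The box supersampling below is my rough stand-in for real anti-aliasing, not what Quartz actually does:

```c
#define DIAM 9  /* particle diameter in pixels, as in the question */

/* Fill mask[y][x] with 0-255 coverage of a circle of diameter DIAM,
   supersampling each pixel on a 4x4 grid as cheap anti-aliasing.
   Done once at startup; per-frame drawing then just copies the mask. */
static void render_circle_mask(unsigned char mask[DIAM][DIAM]) {
    double c = DIAM / 2.0, radius = DIAM / 2.0;
    for (int y = 0; y < DIAM; y++)
        for (int x = 0; x < DIAM; x++) {
            int hits = 0;
            for (int sy = 0; sy < 4; sy++)
                for (int sx = 0; sx < 4; sx++) {
                    double px = x + (sx + 0.5) / 4.0;
                    double py = y + (sy + 0.5) / 4.0;
                    double dx = px - c, dy = py - c;
                    if (dx * dx + dy * dy <= radius * radius) hits++;
                }
            mask[y][x] = (unsigned char)(hits * 255 / 16);
        }
}
```

In the Quartz version you would bake this into a CGImage (or UIImage) once and draw it per particle; tinting by speed can then be approximated with a small palette of pretinted images.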
Quartz is generally slower than OpenGL, especially for path-based drawing, from all the research I've done on the iPhone. Refer to the iPhone dev forums and you'll see a general consensus about this.
Making a layer (CALayer) for each particle might actually make sense. In general, doing drawing "yourself" in -drawRect: is the path to slowness on iOS. Avoid it if at all possible.
I'm developing an iPhone Cocos2D game and reading about optimization. Some say to use a sprite sheet whenever possible, others say to use an atlas sprite whenever possible, and others say a plain sprite is fine.
I don't get the "whenever possible": when can each one be used, and when can't it?
Also, what is the best case for each type?
My game will typically use 100 sprites in a grid, with about 5 types of sprites and some other single sprites. What is the best setup for that? Guidelines for deciding in general cases would help too.
Here's what you need to know about spritesheets vs. sprites, generally.
A spritesheet is just a bunch of images put together onto one big image, and then there will be a separate file for image location data (i.e. image 1 starts at coordinate 0,0 with a size of 100,100, image 2 starts at coordinate 100,0, etc).
The advantage here is that loading textures (sprites) is a pretty I/O- and memory-allocation-intensive operation. If you're doing this continually during your game, you may get lags.
The second advantage is memory optimization. If you're using transparent PNGs for your images, there may be a lot of blank pixels, and you can remove those and "pack" your texture sizes way down compared to individual images. Good for both space and memory concerns. (TexturePacker is the tool I use for the latter.)
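The "separate file for image location data" boils down to a table of rectangles. For a sheet laid out in a regular grid like the example above, the lookup can even be computed directly (a hypothetical sketch in C; real sheets from tools like TexturePacker ship irregular packed rects in a plist or JSON instead):

```c
typedef struct { int x, y, w, h; } AtlasRect;

/* Sub-rectangle of sprite `index` in a sheet of uniform w-by-h cells laid
   out row-major, matching the example: image 0 at (0,0), image 1 at (100,0)... */
static AtlasRect atlas_rect(int index, int sheet_width, int w, int h) {
    int per_row = sheet_width / w;
    AtlasRect r = { (index % per_row) * w, (index / per_row) * h, w, h };
    return r;
}
```

Drawing a sprite then means binding the one big texture and sampling only this sub-rectangle, which is what lets the renderer batch many sprites without rebinding textures.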
So, generally, I'd say it's always a good idea to use a sprite sheet, unless you have non-transparent sprites.