I am working on an iOS app that visualizes data as a line graph. The graph is drawn as a CGPath in a fullscreen custom UIView and contains at most 320 data points. The data is frequently updated and the graph needs to be redrawn accordingly; a refresh rate of 10/sec would be nice.
So far so easy. It seems, however, that my approach takes a lot of CPU time. Refreshing the graph with 320 segments at 10 times per second results in 45% CPU load for the process on an iPhone 4S. Maybe I underestimate the graphics work under the hood, but to me the CPU load seems a lot for that task.
Below is my drawRect: implementation, which gets called each time a new set of data is ready. N holds the number of points, and points is a CGPoint* array with the coordinates to draw.
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();

    // set attributes
    CGContextSetStrokeColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextSetLineWidth(context, 1.f);

    // create path from the N points
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddLines(path, NULL, points, N);

    // stroke path
    CGContextAddPath(context, path);
    CGContextStrokePath(context);

    // clean up
    CGPathRelease(path);
}
I tried rendering the path to an offscreen CGContext first before adding it to the current layer, as suggested here, but without any positive result. I also fiddled with an approach that draws to the CALayer directly, but that too made no difference.
Any suggestions on how to improve performance for this task? Or is the rendering simply more work for the CPU than I realize? Would OpenGL make any sense/difference?
Thanks /Andi
Update: I also tried using UIBezierPath instead of CGPath. This post here gives a nice explanation of why that didn't help. Tweaking CGContextSetMiterLimit et al. also didn't bring great relief.
Update #2: I eventually switched to OpenGL. It was a steep and frustrating learning curve, but the performance boost is just incredible. However, CoreGraphics' anti-aliasing algorithms do a nicer job than what can be achieved with 4x-multisampling in OpenGL.
tl;dr: You can set the drawsAsynchronously property of the underlying CALayer, and your CoreGraphics calls will use the GPU for rendering.
There is a way to control the rendering policy in CoreGraphics. By default, all CG calls are done via CPU rendering, which is fine for smaller operations but hugely inefficient for larger render jobs. In that case, simply setting the drawsAsynchronously property of the underlying CALayer switches the CoreGraphics rendering engine to a GPU, Metal-based renderer and vastly improves performance. This is true on both macOS and iOS.
I ran a few performance comparisons (involving several different CG calls, including CGContextDrawRadialGradient, CGContextStrokePath, and CoreText rendering using CTFrameDraw), and for larger render targets there was a massive performance increase of over 10x. As can be expected, the GPU advantage fades as the render target shrinks, until at some point (generally for render targets smaller than about 100x100 pixels) the CPU actually achieves a higher framerate than the GPU. YMMV, and of course this will depend on the CPU/GPU architectures involved.
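For reference, a minimal sketch of what enabling this looks like; the GraphView subclass name is hypothetical, but drawsAsynchronously itself is standard CALayer API (iOS 6+):

#import <UIKit/UIKit.h>

// Hypothetical fullscreen graph view; the only point here is the layer flag.
@interface GraphView : UIView
@end

@implementation GraphView

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        // Queue CoreGraphics commands for background rendering instead of
        // executing them synchronously on the CPU inside drawRect:.
        self.layer.drawsAsynchronously = YES;
    }
    return self;
}

@end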
Depending on how you make your path, it may be that drawing 300 separate paths is faster than one path with 300 points. The reason is that the drawing algorithm often tries to detect overlapping lines and make the intersections look 'perfect', when perhaps you only want the lines to opaquely overlap each other. Many overlap and intersection algorithms are O(N^2) or so in complexity, so the speed of drawing scales with the square of the number of points in one path.
Whether this applies depends on the exact options (some of them default) that you use; you need to try it. A sketch of the comparison follows below.
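A hedged sketch of that experiment, to be run inside drawRect: (context, points, and N are the variables from the question):

// Instead of one path with all N points, stroke each segment as its own
// two-point path, so no join/intersection handling spans the whole data set.
// Profile both variants and keep the faster one.
for (size_t i = 0; i + 1 < N; i++) {
    CGContextMoveToPoint(context, points[i].x, points[i].y);
    CGContextAddLineToPoint(context, points[i + 1].x, points[i + 1].y);
    CGContextStrokePath(context);
}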
It also explains why your drawRect: method is slow.
You're creating a CGPath object every time you draw. You don't need to do that; you only need to create a new CGPath object when the set of points changes. Move the creation of the CGPath to a new method that you call only when the set of points changes, keep the CGPath object around between calls to that method, and have drawRect: simply retrieve it (see the sketch below).
You already found that rendering is the most expensive thing you're doing, which is good: you can't make rendering faster, can you? Indeed, drawRect: should ideally do nothing but render, so your goal should be to drive the time spent rendering as close as possible to 100%, which means moving everything else, as much as possible, out of the drawing code.
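A minimal sketch of that caching, assuming a hypothetical GraphView that owns the data (the setPoints:count: method and the _path ivar are illustrative names, not from the question):

#import <UIKit/UIKit.h>

@interface GraphView : UIView
- (void)setPoints:(const CGPoint *)points count:(size_t)count;
@end

@implementation GraphView {
    CGPathRef _path; // cached path, rebuilt only when the data changes
}

// Call this each time a new set of data is ready.
- (void)setPoints:(const CGPoint *)points count:(size_t)count {
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddLines(path, NULL, points, count);
    CGPathRelease(_path); // CGPathRelease is a no-op for NULL
    _path = path;
    [self setNeedsDisplay];
}

// drawRect: only strokes the stored path; no allocation happens here.
- (void)drawRect:(CGRect)rect {
    if (_path == NULL) {
        return; // nothing to draw yet
    }
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(context, [UIColor lightGrayColor].CGColor);
    CGContextSetLineWidth(context, 1.f);
    CGContextAddPath(context, _path);
    CGContextStrokePath(context);
}

- (void)dealloc {
    CGPathRelease(_path);
}

@end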
I am no expert on this, but what I would doubt first is that it could be the updating of 'points' that takes time, rather than the rendering itself. In that case, you could simply stop updating the points, keep rendering the same path, and see whether it takes nearly the same CPU time. If not, you can improve performance by focusing on the updating algorithm.
If it IS truly a problem of the rendering, I think OpenGL should certainly improve performance, because in theory it will render all 320 lines at the same time.
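One way to run that experiment, sketched as a hypothetical test category on the GraphView above (the method names are illustrative):

@implementation GraphView (FrozenDataTest)

// Redraw at ~10 Hz without feeding in new data. If CPU load stays near the
// original 45%, rendering dominates; if it drops, the point updates are the
// hot spot.
- (void)startFrozenRedrawTest {
    [NSTimer scheduledTimerWithTimeInterval:0.1
                                     target:self
                                   selector:@selector(frozenRedrawTick:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)frozenRedrawTick:(NSTimer *)timer {
    [self setNeedsDisplay]; // re-stroke the same path, points unchanged
}

@end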
Have you tried using UIBezierPath instead? UIBezierPath uses CGPath under the hood, but it'd be interesting to see if performance differs for some subtle reason. (Apple's documentation recommends UIBezierPath for creating paths on iOS unless you need capabilities that only Core Graphics provides.)
I would also try setting different properties on the CGContext, in particular different line join styles using CGContextSetLineJoin(), to see if that makes any difference; a short snippet follows below.
Have you profiled your code with the Time Profiler instrument in Instruments? That's probably the best way to find where the performance bottleneck actually occurs, even when the bottleneck is somewhere inside the system frameworks.
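For instance, a hedged sketch of that experiment; these lines go into drawRect: before the stroke call, and bevel joins plus butt caps are just one combination to try:

// Bevel joins skip the miter computation done for sharp corners;
// butt caps are the simplest line-cap style.
CGContextSetLineJoin(context, kCGLineJoinBevel);
CGContextSetLineCap(context, kCGLineCapButt);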