Shader optimization for retina screen on iOS

Posted 2019-04-08 15:20

I am making a 3D iPhone application that uses many billboards. My framebuffer is twice as large on the Retina screen because I want to increase their quality on the iPhone 4. The problem is that the fragment shaders consume much more time due to the framebuffer size. Is there a way to manage the Retina screen and high-definition textures without increasing shader precision?

1 Answer
闹够了就滚 · 2019-04-08 16:06

If you're rendering with a framebuffer at the full resolution of the Retina display, it will have four times as many pixels to raster over when compared with the same physical area of a non-Retina display. If you are fill-rate limited due to the complexity of your shaders, this will cause each frame to take that much longer to render.

First, you'll want to verify that you are indeed limited by the fragment processing part of the rendering pipeline. Run the OpenGL ES Driver instrument against your application and look at the Tiler and Renderer Utilization statistics. If the Renderer Utilization is near 100%, that indicates that you are limited by your fragment shaders and your overall ability to push pixels. However, if you see your Tiler Utilization percentage up there, that means that you are geometry limited and changes in screen resolution won't affect performance as much as reducing the complexity and size of your vertex data.

Assuming that you are limited by your fragment shaders, there are a few things you can do to significantly improve performance on the iOS GPUs.

In your case, it sounds like texture size might be an issue. The first thing I'd do is use PowerVR Texture Compression (PVRTC) textures instead of standard bitmap sources. PVRTC textures are stored in a compressed format in memory, and can be much smaller than equivalent bitmaps. This might allow for much faster access by increasing cache hits on texture reads.

Make your textures a power of two in size, and enable mipmaps. I've seen mipmaps really help out for larger textures that often get shrunken down to appear on smaller objects. This definitely sounds like the case for your application which might need to support Retina and non-Retina devices.

Avoid dependent texture reads in your fragment shaders like the plague. Anything that performs a calculation to determine a texture coordinate, or any texture read that falls within a branching statement, triggers a dependent texture read, which can be more than an order of magnitude slower to perform on the iOS GPUs. During normal texture reads, the PowerVR GPUs can do a little reading ahead of texture values, but if you cause a dependent texture read you lose that optimization.
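The distinction can be sketched in GLSL ES (uniform and varying names here are illustrative, not from the question):

```glsl
// Slow: the texture coordinate is computed inside the fragment
// shader, so every lookup becomes a dependent texture read.
varying mediump vec2 v_texCoord;
uniform sampler2D u_texture;
uniform mediump vec2 u_offset;

void main()
{
    gl_FragColor = texture2D(u_texture, v_texCoord + u_offset);
}
```

The fix is to do the arithmetic in the vertex shader and hand the finished coordinate across in a varying, which the fragment shader then uses unmodified:

```glsl
// Fast: the offset coordinate is computed per-vertex and
// interpolated, so the GPU can prefetch the texel values.
attribute vec4 a_position;
attribute vec2 a_texCoord;
uniform mediump vec2 u_offset;
varying mediump vec2 v_offsetCoord;

void main()
{
    v_offsetCoord = a_texCoord + u_offset;
    gl_Position = a_position;
}
```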

I could go on about various optimizations (using lowp or mediump precision instead of highp where appropriate, etc.); I've had a little help in this area myself, but these seem like the first things I'd focus on. Finally, you can also try running your shaders through PowerVR's profiling editor, which can give you cycle time estimates for the best- and worst-case performance of these shaders.
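As a quick sketch of the precision-qualifier point (variable names are illustrative): lowp is usually enough for colors and mediump for texture coordinates, so highp can be reserved for values that genuinely need the range.

```glsl
// Default precision for floats in this fragment shader.
precision mediump float;

varying lowp vec4 v_color;       // colors fit comfortably in lowp
varying mediump vec2 v_texCoord; // texcoords rarely need highp
uniform sampler2D u_texture;

void main()
{
    gl_FragColor = v_color * texture2D(u_texture, v_texCoord);
}
```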

The Retina display devices are not even the worst offenders when it comes to fragment shader limitations. Try getting something rendering to the full screen of the iPad 1 to be performant, because it has more pixels than the iPhone 4 / 4S, yet a far slower GPU than the iPad 2/3 or iPhone 4S. If you can get something to run well on the iPad 1, it will be good on everything else out there (even the Retina iPad).
