How to draw an array of pixels directly to the screen

Posted 2019-05-07 20:27

Question:

I want to write pixels directly to the screen (not using vertices and polygons). I have investigated a variety of answers to similar questions, the most notable ones here and here.

I see a couple of ways drawing pixels to the screen might be possible, but they both seem indirect and appear to use unnecessary floating-point operations:

  1. Draw a GL_POINT for each pixel on the screen. I've tried this and it works, but it seems like an inefficient way to draw pixels onto the screen. Why write my data in floating point when it's going to be transformed into an array of pixel data?

  2. Create a 2D quad that spans the entire screen and write a texture to it. Like the first option, this seems to be a roundabout way of putting pixels on the screen. The texture would still have to go through rasterization before being put on the screen. Also, textures must be square, and most screens are not square, so I'd have to handle that problem.

How do I get a matrix of colors, where pixels[0][0] corresponds to the upper-left corner and pixels[1920][1080] corresponds to the bottom-right, onto the screen in the most direct and efficient way possible using OpenGL?

Writing directly to the framebuffer seems like the most promising choice, but I have only seen people using the framebuffer for shading.

Answer 1:

First off: OpenGL is a drawing API designed around a rasterizer system that ingests homogeneous coordinates defining geometric primitives, which get transformed and, well, rasterized. Merely drawing pixels is not what the OpenGL API is concerned with. Also, most GPUs are floating-point processors by nature and can in fact process floating-point data more efficiently than integers.

Why write my data in floating point when it's going to be transformed into an array of pixel data?

Because OpenGL is a rasterizer API, i.e. it takes primitive geometrical data and turns it into pixels. It doesn't deal with pixels as input data, except in the form of image objects (textures).

Also textures must be square, and most screens are not square, so I'd have to handle that problem.

Whoever told you that, or wherever you got it from: they are wrong. OpenGL 1.x had the constraint that textures had to be power-of-two sized in each dimension, but width and height could still differ. Ever since OpenGL 2.0, texture sizes are completely arbitrary.
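For instance, allocating a 1920×1080 texture (neither square nor power-of-two) is perfectly legal on any OpenGL 2.0+ implementation. A minimal sketch, assuming a current GL context; the texture name tex is just an illustrative variable:

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    /* 1920x1080: not square, not a power of two; fine since OpenGL 2.0.
       Passing NULL allocates the storage without uploading any data yet. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1920, 1080, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* No mipmaps needed for a 1:1 screen-sized image. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);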

However, a texture might not be the most efficient way to update single pixels on the screen directly, either. It is, on the other hand, a great idea to first draw your pixels into a CPU-side pixel buffer, which for display is loaded into a texture that then gets drawn onto a full-viewport quad, as sketched below.
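A rough sketch of that pixel-buffer-plus-textured-quad approach, assuming the tex texture created above. For brevity it uses the legacy fixed-function pipeline; a core-profile version would use a small shader and a vertex buffer instead:

    /* CPU-side buffer: one RGBA byte quadruple per screen pixel.
       Note: OpenGL's texture origin is the lower-left corner, so row 0
       ends up at the bottom of the quad unless you flip the data or the
       texture coordinates. */
    static unsigned char pixels[1080][1920][4];

    /* ... fill pixels[y][x] with colors ... */

    /* Upload the whole buffer into the existing texture storage. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1920, 1080,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Draw one quad covering the whole viewport. */
    glEnable(GL_TEXTURE_2D);
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
    glBegin(GL_QUADS);
        glTexCoord2f(0.f, 0.f); glVertex2f(-1.f, -1.f);
        glTexCoord2f(1.f, 0.f); glVertex2f( 1.f, -1.f);
        glTexCoord2f(1.f, 1.f); glVertex2f( 1.f,  1.f);
        glTexCoord2f(0.f, 1.f); glVertex2f(-1.f,  1.f);
    glEnd();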

However, if your goal is direct manipulation of on-screen pixels, without a rasterizer in between, then OpenGL is not the right API for the job. There are other, 2D graphics APIs that allow you to push pixels directly to the screen.

In any case, pushing individual pixels is very inefficient. I strongly recommend operating on a pixel buffer, which is then blitted or drawn as a whole for display. Doing that with OpenGL, by drawing a full-viewport textured quad, is as good and as efficient for this purpose as any other graphics API.
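If you would rather skip drawing a quad entirely, one alternative (a sketch under the same assumptions, reusing the 1920×1080 tex from above, and requiring an OpenGL 3.0+ context) is to attach the texture to a framebuffer object and let glBlitFramebuffer copy the whole buffer to the window in a single call:

    /* Attach the texture to a read framebuffer. */
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    /* Blit it to the default framebuffer (the window, id 0). */
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, 1920, 1080,   /* source rectangle      */
                      0, 0, 1920, 1080,   /* destination rectangle */
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);

This copies the buffer without any geometry at all, though in practice the textured-quad path is just as fast and works on older contexts too.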