Rasterisation Algorithm: finding the “ST” coordinates of point in 2D quad and Inverse Projection

Posted on 2019-10-21 14:59

My goal is to produce an image of a quad using the rasterisation algorithm. So far I have managed to:

  • Create the quad in 3D
  • Project the quad's vertices onto the screen using a perspective divide
  • Convert the resulting coordinates from screen space to raster space, and compute the quad's bounding box in raster space
  • Loop over all pixels contained in this bounding box and find whether the current pixel P lies inside the quad. For this I use a simple test which consists of taking the dot product between an edge AB of the quad and the vector from vertex A to point P. I repeat this for all 4 edges, and if the sign is the same every time, the point lies inside the quad (a sketch of the closely related edge-function test follows this list).

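For reference, here is a minimal sketch of the closely related edge-function variant of this inside test, which uses the 2D cross product of each edge with the vector from the edge's start vertex to the pixel rather than the dot product described above. The function name and signature are only illustrative, and this is not what my code below does:

    #include <cstdint>

    // Edge-function inside test (2D cross product) for a convex quad whose
    // raster-space vertices (vx[j], vy[j]) are given in a consistent winding
    // order; (px, py) is the pixel centre being tested. Depending on the
    // winding, the accepted sign may need to be flipped.
    bool insideQuadEdgeFunction(const float vx[4], const float vy[4],
                                float px, float py)
    {
        for (uint32_t j = 0; j < 4; ++j) {
            uint32_t k = (j + 1) % 4;
            float ex = vx[k] - vx[j], ey = vy[k] - vy[j]; // edge j -> k
            float wx = px - vx[j],    wy = py - vy[j];    // vertex j -> pixel
            if (ex * wy - ey * wx < 0) return false;      // pixel on outer side
        }
        return true;
    }
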
I have implemented this successfully (see the code below). But I am stuck on the remaining bit I would like to play with, which is essentially finding the ST, or texture, coordinates of my quad.

  • I was wondering whether it is possible to find the ST coordinates of the current pixel P of the quad in raster space, and then convert them back into world space? Could someone please point me in the right direction and tell me how to do this?
  • Alternatively, how can I compute the z or depth value of the pixels contained in the quad? I suppose this is related to finding the ST coordinates of the point in the quad and then interpolating the z values of the vertices (see the sketch after this list for what I have in mind)?

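To make the second point more concrete, here is a rough sketch of what I mean by interpolating the vertex z values from the ST coordinates. The function name and vertex ordering are only illustrative assumptions, and whether z or 1/z should be interpolated (perspective correctness) is exactly the part I am unsure about:

    // Bilinearly blend the camera-space z of the 4 vertices using the ST
    // coordinates (s, t) in [0,1] of the sample inside the quad. Vertex
    // order assumed: z[0] at ST (0,0), z[1] at (1,0), z[2] at (1,1),
    // z[3] at (0,1).
    float interpolateDepth(float s, float t, const float z[4])
    {
        return (1 - s) * (1 - t) * z[0] + s * (1 - t) * z[1]
             + s * t * z[2] + (1 - s) * t * z[3];
    }
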
PS: this is not homework. I am doing this to learn about the rasterisation algorithm, and this is precisely the bit I am stuck on and do not understand. I believe that in the GPU rendering pipeline it involves some sort of inverse projection, but I am simply lost at this point. Thanks for your help.

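    // Note: this is a snippet from inside my render loop (hence pt[i] and the
    // continue below); Vec3f, Vec2f, Vec2i, right, up, forward, quads, pt,
    // width and buffer are defined elsewhere, and my image is square, so
    // width is used for both dimensions.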
    Vec3f verts[4]; // vertices of the quad in world space
    Vec2f vraster[4]; // vertices of the quad in raster space
    uint8_t outside = 0; // is the quad in raster space visible at all?
    Vec2i bmin(10e8), bmax(-10e8);
    for (uint32_t j = 0; j < 4; ++j) {
        // transform unit quad to world position by transforming each
        // one of its vertices by a transformation matrix (represented
        // here by 3 unit vectors and a translation value)
        verts[j].x = quads[j].x * right.x + quads[j].y * up.x + quads[j].z * forward.x + pt[i].x;
        verts[j].y = quads[j].x * right.y + quads[j].y * up.y + quads[j].z * forward.y + pt[i].y;
        verts[j].z = quads[j].x * right.z + quads[j].y * up.z + quads[j].z * forward.z + pt[i].z;

        // project the vertices on the image plane (perspective divide)
        verts[j].x /= -verts[j].z;
        verts[j].y /= -verts[j].z;

        // assume the image plane is 1 unit away from the eye
        // and fov = 90 degrees, thus bottom-left and top-right
        // coordinates of the screen are (-1,-1) and (1,1) respectively.
        if (fabs(verts[j].x) > 1 || fabs(verts[j].y) > 1) outside |= (1 << j);

        // convert image plane coordinates to raster
        vraster[j].x = (int32_t)((verts[j].x + 1) * 0.5 * width);
        vraster[j].y = (int32_t)((1 - (verts[j].y + 1) * 0.5) * width);


        // compute box of the quad in raster space
        if (vraster[j].x < bmin.x) bmin.x = (int)std::floor(vraster[j].x);
        if (vraster[j].y < bmin.y) bmin.y = (int)std::floor(vraster[j].y);
        if (vraster[j].x > bmax.x) bmax.x = (int)std::ceil(vraster[j].x);
        if (vraster[j].y > bmax.y) bmax.y = (int)std::ceil(vraster[j].y);
    }

    // cull if all vertices are outside the canvas boundaries
    if (outside == 0x0F) continue;

    // precompute edge of quad
    Vec2f edges[4];
    for (uint32_t j = 0; j < 4; ++j) {
        edges[j] = vraster[(j + 1) % 4] - vraster[j];
    }

    // loop over all pixels contained in box
    for (int32_t y = std::max(0, bmin.y); y <= std::min((int32_t)(width -1), bmax.y); ++y) {
        for (int32_t x = std::max(0, bmin.x); x <= std::min((int32_t)(width -1), bmax.x); ++x) {
            bool inside = true;
            for (uint32_t j = 0; j < 4 && inside; ++j) {
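                // dot product between edge j and the vector from the edge's
                // start vertex to the pixel centre; the pixel is kept only if
                // the sign is positive for all 4 edges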
                Vec2f v = Vec2f(x + 0.5, y + 0.5) - vraster[j];
                float d = edges[j].x * v.x + edges[j].y * v.y;
                inside &= (d > 0);
            }
            // pixel is inside quad, mark in the image
            if (inside) {
                buffer[y * width + x] = 255;
            }
        }
    }