Question:

I have an OpenGL application that outputs stereoscopic 3D video to off-the-shelf TVs over HDMI. It currently relies on the display supporting the pre-1.4a approach of manually selecting the 3D format (side-by-side, top-and-bottom, etc.). However, I now have to support a device that ONLY accepts HDMI 1.4a 3D signals, which as I understand it involves a packet sent to the display that tells it which 3D format the video is in. I'm using an NVIDIA Quadro 4000. Is it possible to output my video (or tell the video card how to output it) so that a standard 3DTV recognizes the correct format automatically, the way a 3D Blu-ray player or other 1.4a-compatible device does, without having to manually select a 3D mode?
Answer 1:
I don't see a direct answer to the question here.
HDMI 1.4a defines metadata that describes the 3D format of the signal: HDMI_Video_Format = 010 indicates a 3D format, and the 3D_Structure field selects the layout (0000 = frame packing, 0110 = top-and-bottom, 1000 = side-by-side half).
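For reference, here is a rough sketch of where those fields sit in the HDMI Vendor-Specific InfoFrame payload, based on my reading of the 1.4a spec. In practice the GPU driver or the HDMI transmitter hardware assembles and sends this packet (and computes its checksum), so the layout and the `build_hdmi_3d_infoframe` helper below are illustrative only, not something you can call from application code:

```cpp
#include <array>
#include <cstdint>

// 3D_Structure values as listed above.
enum class ThreeDStructure : uint8_t {
    FramePacking   = 0x0, // 0000
    TopAndBottom   = 0x6, // 0110
    SideBySideHalf = 0x8, // 1000 (also needs a 3D_Ext_Data byte, not shown here)
};

// Sketch of the Vendor-Specific InfoFrame header and payload bytes that
// carry the 3D signalling. The checksum byte that normally sits between
// the header and the payload is omitted.
std::array<uint8_t, 8> build_hdmi_3d_infoframe(ThreeDStructure structure) {
    std::array<uint8_t, 8> pkt{};

    // InfoFrame header: type 0x81 (Vendor-Specific), version 1,
    // payload length 5 (OUI plus the two signalling bytes below).
    pkt[0] = 0x81;
    pkt[1] = 0x01;
    pkt[2] = 0x05;

    // PB1..PB3: IEEE OUI 0x000C03 (HDMI Licensing, LLC), LSB first.
    pkt[3] = 0x03;
    pkt[4] = 0x0C;
    pkt[5] = 0x00;

    // PB4, bits 7..5: HDMI_Video_Format = 010 -> 3D format present.
    pkt[6] = 0x2 << 5;

    // PB5, bits 7..4: 3D_Structure (frame packing, top-and-bottom, ...).
    pkt[7] = static_cast<uint8_t>(structure) << 4;

    return pkt;
}
```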
But if the driver doesn't expose an API for setting that, you would need to modify the driver code yourself (assuming it is open source or you otherwise have access to it).
Answer 2:
If your drivers allow it, you can create a quad-buffer stereo rendering context. This context has two back buffers and two front buffers, one pair for the left eye and one pair for the right. You render to one back buffer (GL_BACK_LEFT), then the other (GL_BACK_RIGHT), then swap them with the standard swap function.
Creating a quad-buffer stereo (QBS) context requires platform-specific code. On Windows, you need to pick a pixel format that requests quad buffering (the PFD_STEREO flag), as sketched below.
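Here is a minimal sketch of what that looks like with plain WGL; `draw_scene_for_eye` is a hypothetical placeholder for your own per-eye rendering pass:

```cpp
#include <windows.h>
#include <GL/gl.h>

// Request a quad-buffered (stereo) pixel format and create a GL context.
// Returns false if the driver does not actually grant a stereo format.
bool setup_stereo_context(HDC hdc) {
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL |
                     PFD_DOUBLEBUFFER | PFD_STEREO;   // ask for left/right buffer pairs
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    pfd.cDepthBits = 24;

    int format = ChoosePixelFormat(hdc, &pfd);
    if (format == 0 || !SetPixelFormat(hdc, format, &pfd))
        return false;

    // ChoosePixelFormat may silently fall back to a non-stereo format,
    // so verify that the format we actually got has the stereo flag.
    PIXELFORMATDESCRIPTOR actual = {};
    DescribePixelFormat(hdc, format, sizeof(actual), &actual);
    if (!(actual.dwFlags & PFD_STEREO))
        return false;

    HGLRC ctx = wglCreateContext(hdc);
    return ctx != nullptr && wglMakeCurrent(hdc, ctx);
}

void draw_scene_for_eye(bool left);  // hypothetical: your own rendering code

void render_stereo_frame(HDC hdc) {
    glDrawBuffer(GL_BACK_LEFT);                          // left-eye back buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene_for_eye(true);

    glDrawBuffer(GL_BACK_RIGHT);                         // right-eye back buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene_for_eye(false);

    SwapBuffers(hdc);   // presents both eyes; the driver handles the 3D output
}
```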
This is only possible if your drivers allow it. They may not. And if they don't there is nothing you can do.
Answer 3:
If your OpenGL application happens to use a sufficiently simple subset of OpenGL, the following might work:
- Use GLDirect to dynamically convert your OpenGL calls to DirectX.
- Use NVIDIA 3DTV Play to automatically generate the stereo output and package the signal for HDMI 1.4a.