How to map a decoded buffer from ffmpeg into QVideoFrame

Posted 2019-05-26 11:03

I'm trying to put my decoded ffmpeg buffer into a QVideoFrame so I can hand that frame to a QAbstractVideoSurface, and then set that surface on a QMediaPlayer.

Here's the code for the VideoSurface. According to Qt's documentation, I just have to implement two functions: supportedPixelFormats() and present(), which processes each incoming QVideoFrame named frame.

#include <cstdint>   // uint8_t
#include <iostream>  // std::cout in present()
#include <vector>    // std::vector frame buffer

QList<QVideoFrame::PixelFormat> VideoSurface::supportedPixelFormats(QAbstractVideoBuffer::HandleType handleType) const
{
    Q_UNUSED(handleType);

    // Return the pixel formats this surface will accept
    return QList<QVideoFrame::PixelFormat>() << QVideoFrame::Format_YUV420P;
}
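
For completeness, the matching declaration in my header carries the default argument (a sketch; in C++ the default belongs in the declaration, not in the out-of-class definition above):

// videosurface.h (sketch)
QList<QVideoFrame::PixelFormat> supportedPixelFormats(
        QAbstractVideoBuffer::HandleType handleType = QAbstractVideoBuffer::NoHandle) const override;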

bool VideoSurface::present(const QVideoFrame &frame)
{
    std::cout << "VideoSurface processing 1 frame" << std::endl;

    QVideoFrame frametodraw(frame);

    if (!frametodraw.map(QAbstractVideoBuffer::ReadOnly))
    {
        setError(ResourceError);
        return false;
    }

    // Handle the frame and do the processing.
    // std::vector keeps this ~390 KB buffer off the stack.
    const size_t bufferSize = 398304;
    std::vector<uint8_t> frameBuffer(bufferSize);
    this->mediaStream->receiveFrame(frameBuffer.data(), bufferSize);

    // Frame is now in frameBuffer; we must put it into frametodraw, I guess
    // ------------What should I do here?-------------

    frametodraw.unmap();
    return true;
}

Look at the this->mediaStream->receiveFrame(frameBuffer.data(), bufferSize) call. This line decodes a new H.264 frame into frameBuffer in the YUV420P format.

My idea was to map the frame, grab a buffer pointer with frametodraw.bits(), and somehow point it at my own buffer, but I don't think that is the way. I suppose I should instead copy the contents of frameBuffer to that pointer, but bits() alone does not tell me the size of the buffer it points to, so I guess that is not the way either.
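
Roughly, this is what I had in mind (a sketch only, needing <cstring> for std::memcpy, and assuming for a moment that the decoded width and height were known; the values below are hypothetical):

const int width  = 352;                          // hypothetical
const int height = 288;                          // hypothetical
const int totalSize = width * height * 3 / 2;    // YUV420P: Y plane + quarter-size U and V

// Build a writable QVideoFrame sized for the decoded data.
QVideoFrame target(totalSize, QSize(width, height),
                   width,                        // bytes per line of the Y plane
                   QVideoFrame::Format_YUV420P);

if (target.map(QAbstractVideoBuffer::WriteOnly))
{
    // mappedBytes() reports how much room bits() points to, so the copy is bounded.
    std::memcpy(target.bits(), frameBuffer.data(),
                qMin(target.mappedBytes(), totalSize));
    target.unmap();
}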

So... How should I map my buffer into the QVideoFrame called frame?

I also noticed that when I set my VideoSurface instance on my QMediaPlayer, present() is never called, even after player->play(). I think something is wrong. This is important.
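
For reference, this is roughly how I wire things up (a simplified sketch of my setup; the file path is a placeholder):

QMediaPlayer *player = new QMediaPlayer(this);
VideoSurface *surface = new VideoSurface(this);
player->setVideoOutput(surface);   // QMediaPlayer accepts a QAbstractVideoSurface here
player->setMedia(QUrl::fromLocalFile("/path/to/video.mp4"));   // placeholder
player->play();                    // present() is still never called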

I also do not have the dimensions (width and height) of the decoded image inside frameBuffer, only its total byte size. I think this could also be a problem.

I also noticed that QMediaPlayer is not a displayable element... so which widget will display my video? This also seems important to me.

1 Answer

Summer. ? 凉城 · 2019-05-26 11:30

I think you are misunderstanding the role of each class. You are subclassing QAbstractVideoSurface, which is meant to give you access to frames that are already decoded and ready for presentation. Inside present(), you are handed an already decoded QVideoFrame. If you want to display it onscreen, you have to do that rendering yourself in your VideoSurface subclass.

You can set the VideoSurface on the QMediaPlayer, and the media player already handles the decoding of the video and the negotiation of the pixel format. The QVideoFrame you receive in present() already carries the width, height, and pixel format chosen by the media player. The typical use of the media player is to let it load and decode the file and display it onscreen with a video widget.
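
For example, the stock pipeline looks roughly like this (a minimal sketch, assuming Qt 5's QVideoWidget from the multimediawidgets module and a placeholder file path):

#include <QMediaPlayer>
#include <QVideoWidget>
#include <QUrl>

// Let QMediaPlayer do the decoding and QVideoWidget do the displaying.
QMediaPlayer *player = new QMediaPlayer;
QVideoWidget *videoWidget = new QVideoWidget;

player->setVideoOutput(videoWidget);                          // player renders into the widget
player->setMedia(QUrl::fromLocalFile("/path/to/video.mp4"));  // placeholder path
videoWidget->show();
player->play();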

If you need to use your own custom ffmpeg decoder, my advice is to convert the frame from YUV420P to RGB (libswscale?), create your own custom widget that you pass the frame data to, and render it onscreen with QPainter after loading it into a QImage or QPixmap.
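
A rough sketch of that approach (assuming libswscale and libavutil are available; the widget class and function names are made up for illustration):

extern "C" {
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
}
#include <QWidget>
#include <QPainter>
#include <QImage>

// Convert one decoded YUV420P buffer into a QImage via libswscale.
QImage yuv420pToImage(const uint8_t *buffer, int width, int height)
{
    SwsContext *ctx = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
                                     width, height, AV_PIX_FMT_RGB32,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    QImage image(width, height, QImage::Format_RGB32);

    // Describe the planes/strides of the packed YUV420P source buffer.
    uint8_t *srcData[4];
    int srcLinesize[4];
    av_image_fill_arrays(srcData, srcLinesize, buffer,
                         AV_PIX_FMT_YUV420P, width, height, 1);

    // Write the converted pixels straight into the QImage's own storage.
    uint8_t *dstData[4]   = { image.bits(), nullptr, nullptr, nullptr };
    int dstLinesize[4]    = { static_cast<int>(image.bytesPerLine()), 0, 0, 0 };

    sws_scale(ctx, srcData, srcLinesize, 0, height, dstData, dstLinesize);
    sws_freeContext(ctx);
    return image;
}

// Hypothetical widget that repaints whenever a new frame arrives.
class MyVideoWidget : public QWidget
{
public:
    void setFrame(const QImage &img) { m_frame = img; update(); }

protected:
    void paintEvent(QPaintEvent *) override
    {
        QPainter painter(this);
        if (!m_frame.isNull())
            painter.drawImage(rect(), m_frame);
    }

private:
    QImage m_frame;
};

AV_PIX_FMT_RGB32 lines up with QImage::Format_RGB32 (both are native-endian 32-bit ARGB), so the converted frame can be handed to the widget without another copy.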
