I am new to Qt; I only know the basics: creating interfaces and connecting slots. In short, my knowledge is not deep at all.
I need to open a video file and capture all of its frames to get the R, G, B channels, then compute optical flow frame to frame (that part is already done), and finally display the result in a window.
Is it possible to get the video frames with Qt? I have researched a lot but have not found anything conclusive.
You can use QMediaPlayer to achieve this.
- Instantiate the QMediaPlayer.
- Subclass the QAbstractVideoSurface.
- Set your implementation as the output for the media player via QMediaPlayer::setVideoOutput.
- Feed the media player the needed file; if the video was loaded successfully, it will eventually start calling QAbstractVideoSurface::present(const QVideoFrame &frame) on your implementation of QAbstractVideoSurface. Then you can access the channels and everything else from the QVideoFrame and draw the frame on a widget (see the sketch below).
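Here is a minimal Qt 5 sketch of such a surface, assuming a hypothetical class name FrameGrabber and that the player delivers frames in an RGB32-style pixel format (in practice you may need to handle more formats):

#include <QAbstractVideoSurface>
#include <QVideoFrame>
#include <QMediaPlayer>
#include <QImage>
#include <QUrl>

// Hypothetical minimal surface: every decoded frame is delivered to present().
class FrameGrabber : public QAbstractVideoSurface
{
public:
    // Tell the player which pixel formats this surface accepts.
    QList<QVideoFrame::PixelFormat> supportedPixelFormats(
            QAbstractVideoBuffer::HandleType type = QAbstractVideoBuffer::NoHandle) const override
    {
        Q_UNUSED(type);
        return { QVideoFrame::Format_RGB32, QVideoFrame::Format_ARGB32 };
    }

    // Called once per decoded frame.
    bool present(const QVideoFrame &frame) override
    {
        QVideoFrame copy(frame);                        // shallow copy we can map
        if (!copy.map(QAbstractVideoBuffer::ReadOnly))
            return false;

        // Wrap the mapped bytes in a QImage to read the R, G, B channels.
        QImage img(copy.bits(), copy.width(), copy.height(), copy.bytesPerLine(),
                   QVideoFrame::imageFormatFromPixelFormat(copy.pixelFormat()));
        // ... read img.pixelColor(x, y).red() / .green() / .blue() here ...

        copy.unmap();
        return true;
    }
};

// Usage (the file path is a placeholder):
// QMediaPlayer *player = new QMediaPlayer;
// FrameGrabber *grabber = new FrameGrabber;
// player->setVideoOutput(grabber);
// player->setMedia(QUrl::fromLocalFile("video.mp4"));
// player->play();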
I do not know why, but I could not include the necessary Qt headers to process frames (they always seemed to have unresolved dependencies, and some did not exist), so I turned to OpenCV 3.0 and did it this way:
#include <opencv2/opencv.hpp>
#include <cstdlib>

cv::VideoCapture cap(videoFileName);
if (!cap.isOpened()) // check if we succeeded
    return;

while (cap.isOpened())
{
    cv::Mat frame;
    cap >> frame;
    if (frame.empty())          // no more frames: end of the video
        break;

    cv::flip(frame, frame, -1); // flip around both axes...
    cv::flip(frame, frame, 1);  // ...then horizontally, i.e. a net vertical flip

    // get the R, G, B channels (OpenCV stores pixels as B, G, R)
    w = frame.cols;             // w, h are declared elsewhere
    h = frame.rows;
    int size = w * h * sizeof(unsigned char);
    unsigned char *r = (unsigned char *) malloc(size);
    unsigned char *g = (unsigned char *) malloc(size);
    unsigned char *b = (unsigned char *) malloc(size);

    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            // get pixel
            cv::Vec3b color = frame.at<cv::Vec3b>(cv::Point(x, y));
            r[y * w + x] = color[2];
            g[y * w + x] = color[1];
            b[y * w + x] = color[0];
        }
    }

    // ... hand r, g, b to the optical-flow code here ...

    free(r); // release the per-frame buffers so one set is not leaked per frame
    free(g);
    free(b);
}
cap.release();
It has worked perfectly for my purpose, so I did not continue researching.
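For reference, OpenCV can also separate the three planes in a single call with cv::split instead of a per-pixel loop; a rough sketch (splitChannels is just an illustrative helper, not something from my code):

#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helper: split one BGR frame into separate channel planes.
void splitChannels(const cv::Mat &frame, cv::Mat &b, cv::Mat &g, cv::Mat &r)
{
    std::vector<cv::Mat> planes;
    cv::split(frame, planes); // OpenCV stores frames as B, G, R
    b = planes[0];            // each plane is a w x h single-channel cv::Mat
    g = planes[1];
    r = planes[2];
}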
Thanks anyway.