I have an OpenCV program that works like this:
VideoCapture cap(0);
Mat frame;
while (true) {
    cap >> frame;
    myprocess(frame);
}
The problem is that if myprocess takes longer than the camera's I/O interval, the captured frames fall behind and can no longer be kept in sync with real time.
So I think the way to solve this is to make the camera streaming and myprocess run in parallel: one thread does the I/O, another does the CPU computation. When the camera finishes capturing a frame, it sends it to the worker thread for processing.
Is this idea right? Any better strategy to solve this problem?
Demo:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <mutex>
#include <thread>

int main(int argc, char *argv[])
{
    cv::Mat buffer;
    cv::VideoCapture cap;
    std::mutex mutex;

    cap.open(0);
    std::thread product([](cv::Mat& buffer, cv::VideoCapture cap, std::mutex& mutex) {
        while (true) { // keep producing new frames
            cv::Mat tmp;
            cap >> tmp;
            mutex.lock();
            buffer = tmp.clone(); // copy the value
            mutex.unlock();
        }
    }, std::ref(buffer), cap, std::ref(mutex));
    product.detach();

    while (cv::waitKey(20)) { // process in the main thread
        mutex.lock();
        cv::Mat tmp = buffer.clone(); // copy the value
        mutex.unlock();
        if (!tmp.data)
            std::cout << "null" << std::endl;
        else {
            std::cout << "not null" << std::endl;
            cv::imshow("test", tmp);
        }
    }
    return 0;
}
Or use a thread that keeps clearing the camera's internal buffer:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <mutex>
#include <thread>

int main(int argc, char *argv[])
{
    cv::Mat buffer;
    cv::VideoCapture cap;
    std::mutex mutex;

    cap.open(0);
    std::thread product([](cv::Mat& buffer, cv::VideoCapture cap, std::mutex& mutex) {
        while (true) { // keep grabbing to drain the camera's buffer
            cap.grab();
        }
    }, std::ref(buffer), cap, std::ref(mutex));
    product.detach();

    int i = 0;
    while (true) { // process in the main thread
        cv::Mat tmp;
        cap.retrieve(tmp);
        if (!tmp.data)
            std::cout << "null" << i++ << std::endl;
        else {
            cv::imshow("test", tmp);
        }
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
I thought the second demo should work, based on https://docs.opencv.org/3.0-beta/modules/videoio/doc/reading_and_writing_video.html#videocapture-grab, but it doesn't...
The idea of doing the I/O in one thread while another thread does the CPU computation is a classic strategy, and it is the right one here. In addition to the CPU processing latency,
cv2.VideoCapture().read()
is a blocking operation, so your main program may also experience I/O latency and has to stall all processing while it waits for new frames. With separate dedicated threads, your program grabs frames and processes them in parallel instead of doing both sequentially in a single thread. You can push frames from the I/O thread into a queue while the processing thread simply reads and processes the most recent frame in the queue, without any latency issues.
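Here is a minimal sketch of that producer/consumer pattern in C++ (the question's language), assuming a "keep only the newest frame" policy; the FrameQueue class and the stop handling are illustrative, not from any library.

#include <opencv2/opencv.hpp>
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Illustrative single-producer/single-consumer queue that drops stale
// frames, so the consumer always works on the most recent image.
class FrameQueue {
public:
    void push(cv::Mat frame) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(frame));
            while (queue_.size() > 1)   // drop older frames, keep the newest
                queue_.pop();
        }
        cond_.notify_one();
    }
    cv::Mat pop() {                      // blocks until a frame is available
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        cv::Mat frame = std::move(queue_.front());
        queue_.pop();
        return frame;
    }
private:
    std::queue<cv::Mat> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
};

int main() {
    cv::VideoCapture cap(0);
    FrameQueue frames;
    std::atomic<bool> running{true};

    // I/O thread: blocks on the camera and publishes each new frame.
    std::thread producer([&] {
        cv::Mat frame;
        while (running && cap.read(frame))
            frames.push(frame.clone());  // clone so the capture buffer isn't reused under us
    });

    // Main thread: always processes the newest available frame.
    while (true) {
        cv::Mat frame = frames.pop();
        // myprocess(frame);             // heavy CPU work goes here
        cv::imshow("latest frame", frame);
        if (cv::waitKey(1) >= 0) break;
    }
    running = false;
    producer.join();                     // the producer exits after its next read returns
    return 0;
}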
In a multitarget-tracking project I used 2 buffers for the frame (cv::Mat frames[2]) and 2 threads:
One thread captures the next frame and detects objects.
The second thread tracks the detected objects and draws the result on the frame.
I used an index in [0,1] for the buffer swap, protected with a mutex, and two condition variables to signal when each step finishes.
First, CaptureAndDetect works with the frames[capture_ind] buffer while Tracking works with the previous frames[1-capture_ind] buffer. At the next step the buffers are switched: capture_ind = 1 - capture_ind.
You can see this project here: Multitarget-tracker.
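A minimal sketch of this buffer-swap scheme, assuming stand-in functions CaptureAndDetect and Tracking in place of the real project's detection and tracking code:

#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <thread>

// Shared state: two frame buffers plus the swap machinery described above
// (one mutex, two condition variables, a buffer index).
cv::Mat frames[2];
std::mutex mtx;
std::condition_variable capture_done_cv;  // capture/detect step finished
std::condition_variable track_done_cv;    // tracking step finished
bool capture_done = false;
bool track_done = true;   // nothing to track yet, so the first swap is not blocked
int capture_ind = 0;      // buffer currently written by CaptureAndDetect

// Hypothetical stand-ins for the real detection and tracking code.
void CaptureAndDetect(cv::VideoCapture& cap, cv::Mat& frame) { cap >> frame; }
void Tracking(cv::Mat& frame) { if (!frame.empty()) { /* track + draw on frame */ } }

void captureThread(cv::VideoCapture& cap) {
    for (;;) {
        CaptureAndDetect(cap, frames[capture_ind]);          // fill the current buffer
        std::unique_lock<std::mutex> lock(mtx);
        track_done_cv.wait(lock, [] { return track_done; }); // wait until the other buffer is free
        track_done = false;
        capture_ind = 1 - capture_ind;                       // swap: tracker gets the fresh frame
        capture_done = true;
        capture_done_cv.notify_one();
    }
}

void trackingThread() {
    for (;;) {
        std::unique_lock<std::mutex> lock(mtx);
        capture_done_cv.wait(lock, [] { return capture_done; });
        capture_done = false;
        cv::Mat& work = frames[1 - capture_ind];             // buffer just filled by capture
        lock.unlock();
        Tracking(work);                                      // runs while the next frame is captured
        lock.lock();
        track_done = true;
        track_done_cv.notify_one();
    }
}

int main() {
    cv::VideoCapture cap(0);
    std::thread t1(captureThread, std::ref(cap));
    std::thread t2(trackingThread);
    t1.join();  // demo only: the loops above run forever
    t2.join();
    return 0;
}

While the tracker works on one buffer, the capture thread fills the other, so the two stages overlap instead of running back-to-back.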