I have a simple Python script that captures a webcam feed using OpenCV. My webcam can stream 30 FPS, but since my Raspberry Pi isn't powerful enough, I can only read ~20 FPS. When the script runs, one CPU core is maxed at 100% while the rest sit idle, so I am trying to split the reading across as many threads as possible to use my CPU to its full potential and reach 30 FPS.
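(For what it's worth, the ~20 FPS figure comes from timing cap.read() in a plain single-threaded loop, roughly like this simplified snippet:)

import cv2
import time

# rough FPS estimate: time a batch of consecutive reads (simplified, illustrative only)
cap = cv2.VideoCapture(0)
n_frames = 100
start = time.time()
for _ in range(n_frames):
    ret, frame = cap.read()
elapsed = time.time() - start
print("approx FPS:", n_frames / elapsed)
cap.release()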
So is it possible to read webcam frames in parallel?
This is my attempt:
import numpy as np
import cv2
import time
from threading import Thread

CV_CAP_PROP_FPS = 5  # numeric id of the FPS property (old cv2 API constant)

cap = cv2.VideoCapture(0)
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
out = cv2.VideoWriter('output.avi', fourcc, cap.get(CV_CAP_PROP_FPS), (640, 480))

threads = []

class MyThread(Thread):
    def run(self):
        # each thread reads one frame from the shared capture
        ret, frame = cap.read()

if __name__ == '__main__':
    try:
        while cap.isOpened():
            # spawn a new reader thread roughly every 35 ms (~30 FPS)
            thread = MyThread()
            thread.start()
            threads.append(thread)
            time.sleep(0.035)
    except KeyboardInterrupt:
        for thread in threads:
            thread.join()
        cap.release()
        out.release()
When running this script, I get a couple of VIDIOC_QBUF: Invalid argument errors in my terminal (usually 4 of them). The larger the sleep value, the fewer error messages I get; for example, with time.sleep(0.1) I might get 2-3 error messages instead of 4.
This is problematic because the resulting video file, which is generated in the second part of my script (not posted here), comes out corrupted. The error only occurs when reading the webcam feed in parallel; when everything runs sequentially, the video file is fine and I can play it back with no problems at all.
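For reference, the sequential version that works is essentially the standard read-then-write loop; this is just a simplified sketch, since the actual writing code is in the part of the script I haven't posted:

# simplified sketch of the sequential version that produces a good video file
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    out.write(frame)  # in the real script, writing happens in the second (unposted) part

cap.release()
out.release()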
Any insight is greatly appreciated. Cheers!
Edit:
I think it is also important to note that the VIDIOC_QBUF: Invalid argument messages appear right after the first couple of frames are read. For example, starting my script almost instantly triggers those errors, but then it can run fine for an "infinite" amount of time without any more error messages.