I have created a network stream with the following GStreamer commands:
sender:
gst-launch-1.0 -v videotestsrc ! video/x-raw,framerate=20/1 ! videoscale ! videoconvert ! x264enc tune=zerolatency bitrate=500 speed-preset=superfast ! rtph264pay ! udpsink host=X.X.X.X port=5000
receiver:
gst-launch-1.0 -v udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! autovideosink
This works just fine. I now want to consume the stream on the receiver side in a Python script, where I want to do some video processing with OpenCV.
Does anyone know how to convert the described pipeline so that it can be used with OpenCV? A sketch of what I am aiming for is below.
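For context, what I have in mind on the receiver side is roughly the standard OpenCV capture loop, with the GStreamer pipeline passed to cv2.VideoCapture as the source string. This is only a sketch of my intent, assuming OpenCV was built with GStreamer support; the grayscale conversion is just a placeholder for the actual processing:

import cv2

# GStreamer pipeline mirroring the gst-launch receiver, ending in appsink
# so OpenCV can pull the decoded frames (assumes a GStreamer-enabled OpenCV build)
pipeline = ("udpsrc port=5000 "
            "! application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96 "
            "! rtph264depay ! decodebin ! videoconvert ! appsink")

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # placeholder for the real processing
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()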
Thanks!
Edit 1:
I found out that this should work:
cap = cv2.VideoCapture("udpsrc port=5000 ! application/x-rtp,media=video,payload=26,clock-rate=90000,encoding-name=H264,payload=96 ! rtph264depay ! decodebin ! videoconvert ! appsink", cv2.CAP_GSTREAMER)
I get the following error:
OpenCV Error: Assertion failed (size.width>0 && size.height>0) in imshow, file /home/nvidia/build-opencv/opencv/modules/highgui/src/window.cpp, line 331
Traceback (most recent call last):
  File "launchstream_ip.py", line 13, in <module>
    cv2.imshow('frame', frame)
cv2.error: /home/nvidia/build-opencv/opencv/modules/highgui/src/window.cpp:331: error: (-215) size.width>0 && size.height>0 in function imshow
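The assertion indicates that the frame reaching cv2.imshow is empty. To narrow it down, I can run a sanity check like the following (a hypothetical diagnostic, not my actual launchstream_ip.py) to see whether the capture even opens, i.e. whether the pipeline string is accepted by the GStreamer backend at all:

import cv2

pipeline = ("udpsrc port=5000 "
            "! application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96 "
            "! rtph264depay ! decodebin ! videoconvert ! appsink")

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
print("capture opened:", cap.isOpened())  # False -> pipeline string or GStreamer backend problem

ret, frame = cap.read()
print("frame grabbed:", ret)              # False -> no frame arrived, so imshow would get an empty image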