I'm using OpenCV and GStreamer 0.10.
I use the following pipeline to receive MPEG-TS packets over UDP through a custom socket (sockfd) provided by Python, and display the video with xvimagesink. It works perfectly. This is the pipeline definition:
PIPELINE_DEF = "udpsrc do-timestamp=true name=src blocksize=1316 closefd=false buffer-size=5600 ! " \
               "mpegtsdemux ! " \
               "queue ! " \
               "ffdec_h264 max-threads=0 ! " \
               "ffmpegcolorspace ! " \
               "xvimagesink name=video"
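For reference, I assume that to pull frames from Python I would swap the xvimagesink for an appsink, something like this (the caps string and the appsink properties here are my own guesses, not tested):

```python
# Hypothetical variant of my pipeline ending in an appsink instead of
# xvimagesink, so that decoded buffers can be pulled from Python.
# The caps and max-buffers/drop settings are assumptions on my part.
APPSINK_PIPELINE = \
    "udpsrc do-timestamp=true name=src blocksize=1316 closefd=false buffer-size=5600 ! " \
    "mpegtsdemux ! " \
    "queue ! " \
    "ffdec_h264 max-threads=0 ! " \
    "ffmpegcolorspace ! " \
    "appsink name=video caps=video/x-raw-rgb max-buffers=2 drop=true"

print(APPSINK_PIPELINE)
```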
Now I want to grab a single frame from this pipeline and display it with OpenCV. How can I do that? I know how to get the buffer data out of an appsink, but I still don't know how to convert those buffers into frames that OpenCV can use. Thanks for any help :]
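To make the question concrete, this is roughly the conversion I have in mind (a sketch only: `buffer_to_frame` is my own placeholder name, and in the real pipeline the raw bytes would come from `appsink.emit('pull-buffer')` and the width/height from the buffer's caps, not hard-coded):

```python
import numpy as np

def buffer_to_frame(data, width, height, channels=3):
    """Turn the raw bytes of one decoded video buffer into a NumPy
    array with shape (height, width, channels), which is the layout
    OpenCV functions such as cv2.imshow() expect."""
    frame = np.frombuffer(data, dtype=np.uint8)
    return frame.reshape((height, width, channels))

# Dummy data standing in for one pulled buffer: a 4x2 frame of zeros.
raw = bytes(bytearray(4 * 2 * 3))
frame = buffer_to_frame(raw, 4, 2)
print(frame.shape)  # -> (2, 4, 3)
```

Is something along these lines the right approach, or is there a better way to hand appsink buffers to OpenCV?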