I have two media files (say, "file0" and "file1") and I want to merge them into a single one with a "picture-in-picture" effect: the content of "file0" fills the whole window, while the content of "file1" is shown in a smaller box in the top-left corner.
One more point: the content of "file1" should start rendering some time after the base time, at the point marked "X1" on the diagram below.
In other words, taking "videotestsrc" as the video source, the following sample pipeline illustrates the result I need:
gst-launch-1.0 -ev \
videotestsrc pattern="snow" num_buffers=100 ! queue ! videoscale ! capsfilter caps="video/x-raw,width=320,height=240,framerate=15/1" ! videoconvert ! videomixer.sink_0 \
videotestsrc pattern=5 timestamp-offset=3000000000 num_buffers=30 ! queue ! videoscale ! capsfilter caps="video/x-raw,width=120,height=80,framerate=15/1" ! videoconvert ! videomixer.sink_1 \
videomixer name="videomixer" ! autovideosink
(you'll see a snowy 320x240 window with a small green box that appears three seconds after the start and stays for two seconds)
So, the question is: what is the best way to shift the rendering time of the content from "file1"? (either by means of GStreamer plugins, by specifying a parameter in the pipeline, or by making an API call)
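For the API-call variant, here is a minimal sketch of what I have in mind. It assumes the pipeline is already built and that the second source element was given the name "src1" (a hypothetical name for illustration); `timestamp-offset` is specific to `videotestsrc`, while `gst_pad_set_offset()` is the generic per-pad mechanism:

```c
#include <gst/gst.h>

/* Sketch: two ways to delay one branch of an already-built pipeline.
 * "src1" is a hypothetical element name assigned when the pipeline
 * was constructed. Error handling omitted for brevity. */
static void
shift_overlay_branch (GstElement *pipeline)
{
  GstElement *src1 = gst_bin_get_by_name (GST_BIN (pipeline), "src1");
  GstPad *srcpad = gst_element_get_static_pad (src1, "src");

  /* Option A: element-level property (videotestsrc only),
   * same as timestamp-offset=3000000000 in the launch line. */
  g_object_set (src1, "timestamp-offset", (gint64) (3 * GST_SECOND), NULL);

  /* Option B: generic running-time offset applied to any pad,
   * so it would also work for a branch decoding a real file. */
  gst_pad_set_offset (srcpad, 3 * GST_SECOND);

  gst_object_unref (srcpad);
  gst_object_unref (src1);
}
```

I am not sure whether the pad-offset approach interacts correctly with the mixer's synchronization, which is part of what I am asking.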