change recording file programmatically in directshow

Posted 2019-02-20 17:12

Question:

I made a console application, using DirectShow, that records from a live source (a webcam now, a TV capture card later), adds the current date and time as an overlay, and then saves the audio and video as .asf. Now I want the output file to change every 60 minutes without stopping the graph. I must not lose a single second of the live stream. The graph is something like this one:

http://imageshack.us/photo/my-images/543/graphp.jpg/

I took a look at GMFBridge but I have some compilation problems with its examples. I am wondering whether there is a way to split the stream after the overlay filter and the audio source, connect them to another ASF writer (paused), and then switch between the writers every 60 minutes. The paused ASF writer's file name must change (pp.asf, pp2.asf, pp4.asf ...). Something like this:

http://imageshack.us/photo/my-images/546/graph1f.jpg/

with pp1 paused. I have found some people on the internet who say that the ASF writer deletes the current file if the graph does not go into stop mode.

Answer 1:

Well, I have a product (http://www.videophill.com) that does exactly what you described (it's used for broadcast compliance recording purposes), and I found that the only way to do it is this:

  • create a DirectShow graph that will be used only to capture the audio and video
  • then, at the end of the graph, insert SampleGrabber filters, both for audio and video
  • then, use IWMWriter to create and save the wmv file, using samples fetched from the SampleGrabber filters
  • when the time comes, close one IWMWriter and create another one.

That way, you won't lose a single frame when switching the output files.

Of course, there is also the question of queuing and storing the samples (while switching the writers) and properly re-aligning the audio/video timestamps, but from my research that's the only 'normal' way to do it, and I have used it in practice.
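Below is a minimal sketch of that approach for the video path only (audio is analogous on a second writer input): the SampleGrabber callback copies each media sample into the currently active IWMWriter, and a rotation helper swaps writers without touching the capture graph. Profile creation, COM initialization, and most error handling are omitted, and names such as g_writer, g_rebase and RotateWriter are inventions of this example:

#include <atlbase.h>   // CComPtr
#include <dshow.h>
#include <qedit.h>     // ISampleGrabberCB (removed from recent SDKs; may need an older copy)
#include <wmsdk.h>     // IWMWriter, WMCreateWriter
#include <cstring>
#include <mutex>

static CComPtr<IWMWriter> g_writer;      // currently active writer
static std::mutex         g_lock;        // guards writer switches
static REFERENCE_TIME     g_rebase = 0;  // timestamp offset for the current file

class GrabberCB : public ISampleGrabberCB {
public:
    // Minimal IUnknown: the callback object outlives the graph,
    // so no real reference counting is done here.
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv) {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB) {
            *ppv = this;
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }

    STDMETHODIMP SampleCB(double, IMediaSample *pSample) {
        std::lock_guard<std::mutex> lock(g_lock);
        if (!g_writer) return S_OK;

        REFERENCE_TIME tStart = 0, tStop = 0;
        pSample->GetTime(&tStart, &tStop);

        BYTE *pSrc = nullptr;
        pSample->GetPointer(&pSrc);
        const DWORD cb = pSample->GetActualDataLength();

        // Copy the DirectShow sample into a WMF buffer and write it,
        // re-based so that every output file starts at timestamp 0.
        CComPtr<INSSBuffer> buf;
        if (SUCCEEDED(g_writer->AllocateSample(cb, &buf))) {
            BYTE *pDst = nullptr; DWORD cbMax = 0;
            buf->GetBufferAndLength(&pDst, &cbMax);
            memcpy(pDst, pSrc, cb);
            buf->SetLength(cb);
            g_writer->WriteSample(0 /* video input */, tStart - g_rebase, 0, buf);
        }
        return S_OK;
    }
    STDMETHODIMP BufferCB(double, BYTE *, long) { return E_NOTIMPL; }
};

// Called every 60 minutes: open the next file first, then retire the old
// writer, so the capture graph itself keeps running untouched.
HRESULT RotateWriter(IWMProfile *profile, const WCHAR *nextFile, REFERENCE_TIME now)
{
    CComPtr<IWMWriter> next;
    HRESULT hr = WMCreateWriter(NULL, &next);
    if (FAILED(hr)) return hr;
    next->SetProfile(profile);
    next->SetOutputFilename(nextFile);
    hr = next->BeginWriting();
    if (FAILED(hr)) return hr;

    std::lock_guard<std::mutex> lock(g_lock);
    if (g_writer) g_writer->EndWriting();   // finishes the previous file
    g_writer = next;
    g_rebase = now;                         // new file starts at 0
    return S_OK;
}

Because the new writer is fully opened before the old one is closed, incoming samples only ever wait on the short lock during the switch.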



Answer 2:

The solution is to write a custom DirectShow filter, in your case with two input pins: one for the audio stream and the other for the video stream. Inside that filter (it doesn't have to be inside from an architectural point of view; you could, for example, also use callbacks and do the job somewhere else) you would create the ASF files. While switching files, the A/V data would be held in a cache (e.g. a sufficiently large circular buffer). You can also monitor and adjust A/V sync in that filter. For writing ASF files I would recommend the Windows Media Format SDK.
You can also add output pins if you want to pass the A/V data further, e.g. for preview, parallel streaming, etc.
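To illustrate only the caching part, here is a small, hedged sketch; the filter or callback plumbing around it is omitted, and every name in it (CachedSample, SampleCache) is hypothetical:

#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

struct CachedSample {
    bool                 isAudio;    // which input pin it arrived on
    int64_t              time100ns;  // original stream timestamp
    std::vector<uint8_t> data;
};

class SampleCache {
public:
    explicit SampleCache(size_t maxSamples) : max_(maxSamples) {}

    void Push(CachedSample s) {
        std::lock_guard<std::mutex> lock(m_);
        if (q_.size() == max_) q_.pop_front();  // drop oldest if overrun
        q_.push_back(std::move(s));
    }

    // Drain everything into the (new) writer via a caller-supplied callback.
    template <typename WriteFn>
    void Flush(WriteFn write) {
        std::lock_guard<std::mutex> lock(m_);
        for (auto &s : q_) write(s);
        q_.clear();
    }

private:
    std::mutex               m_;
    std::deque<CachedSample> q_;
    size_t                   max_;
};

On a file switch you would Push() incoming samples from both pins until the new ASF file is open, then Flush() them into it in arrival order, adjusting the timestamps as described above.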



Answer 3:

GMFBridge is a viable but complicated solution. A more direct approach, which I have implemented in the past, is to query your ASF Writer for the IWMWriterAdvanced2 interface and set a custom sink. That interface has methods to remove and add sinks on your ASF writer. The sink that is connected automatically will write to the file that you specified. One way to write wherever you want is:

1.) remove all default sinks:

pWriterAdv->RemoveSink(NULL);

2.) register a custom sink:

pWriterAdv->AddSink((IWMWriterSink*)&streamSink);

The custom sink can be a class that implements IWMWriterSink, which requires implementing callback methods that are called e.g. when the ASF header is written (OnHeader(/* [in] */ INSSBuffer *pHeader)) and when a data packet is written (OnDataUnit(/* [in] */ INSSBuffer *pDataUnit)). In your implementation you can then write the data wherever you want; for example, offer additional methods on this class that let you specify the file name to write to.

Note that this solution does not quite get you where you want to be if you need the header information written out in each of the 60-minute files: after the initial header you will only get ASF packet data. A workaround could be to re-write the initial header before any packet data of each file; however, this will produce an unindexed (non-seekable) ASF file.
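For concreteness, here is a hedged sketch of such a sink, including the header re-write workaround just described. IWMWriterSink and its five callbacks are the real Windows Media Format SDK interface, but the rest (the RotateFile helper, returning E_NOTIMPL from AllocateDataUnit on the assumption that the writer then uses its own buffers) is this example's guesswork, not documented SDK behavior:

#include <windows.h>
#include <wmsdk.h>
#include <cstdio>
#include <vector>

class CFileSink : public IWMWriterSink {
    LONG              m_ref = 1;
    FILE             *m_file = nullptr;
    std::vector<BYTE> m_header;   // cached ASF header, re-used for each file

public:
    // --- IUnknown ---
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv) {
        if (riid == IID_IUnknown || riid == IID_IWMWriterSink) {
            *ppv = static_cast<IWMWriterSink *>(this);
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return InterlockedIncrement(&m_ref); }
    STDMETHODIMP_(ULONG) Release() {
        LONG r = InterlockedDecrement(&m_ref);
        if (r == 0) delete this;
        return r;
    }

    // --- IWMWriterSink ---
    STDMETHODIMP OnHeader(INSSBuffer *pHeader) {
        BYTE *p = nullptr; DWORD cb = 0;
        pHeader->GetBufferAndLength(&p, &cb);
        m_header.assign(p, p + cb);   // keep the header for later files
        if (m_file) fwrite(p, 1, cb, m_file);
        return S_OK;
    }
    STDMETHODIMP IsRealTime(BOOL *pfRealTime) {
        *pfRealTime = FALSE;          // plain file sink
        return S_OK;
    }
    STDMETHODIMP AllocateDataUnit(DWORD, INSSBuffer **) {
        return E_NOTIMPL;             // assumption: writer falls back to its own buffers
    }
    STDMETHODIMP OnDataUnit(INSSBuffer *pDataUnit) {
        BYTE *p = nullptr; DWORD cb = 0;
        pDataUnit->GetBufferAndLength(&p, &cb);
        if (m_file) fwrite(p, 1, cb, m_file);
        return S_OK;
    }
    STDMETHODIMP OnEndWriting() {
        if (m_file) { fclose(m_file); m_file = nullptr; }
        return S_OK;
    }

    // Hypothetical helper: close the current file, open the next one, and
    // re-write the cached header first (this yields an unindexed ASF file).
    HRESULT RotateFile(const wchar_t *nextName) {
        if (m_file) fclose(m_file);
        m_file = _wfopen(nextName, L"wb");
        if (!m_file) return E_FAIL;
        if (!m_header.empty())
            fwrite(m_header.data(), 1, m_header.size(), m_file);
        return S_OK;
    }
};

A timer would then call RotateFile(...) every 60 minutes, while pWriterAdv->AddSink(...) from step 2.) routes both the header and the data packets through this class.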