Force gstreamer appsink buffers to only hold 10ms

Posted 2019-08-09 10:06

I have a gstreamer pipeline which drops all of its data into an appsink:

command = g_strdup_printf ("autoaudiosrc ! audio/x-raw-int, signed=true, endianness=1234, depth=%d, width=%d, channels=%d, rate=%d !"
                " appsink name=soundSink max_buffers=2 drop=true ",
                  bitDepthIn, bitDepthIn, channelsIn, sampleRateIn);

Which usually looks something like,

autoaudiosrc ! audio/x-raw-int, signed=true, endianness=1234, depth=16, width=16, channels=1, rate=16000 ! appsink name=soundSink max_buffers=2 drop=true

at runtime.

It captures the audio fine; the problem is that it captures an arbitrary amount of data per buffer instead of a set size or time interval. For instance, the rtp lib that is asking for the data will only ask for 960 bytes (10ms at 48 kHz, 1 channel, 16-bit depth), but the buffers pulled from the appsink are anywhere from 10ms to 26ms in length. It is very important that this pipeline return exactly 10ms per buffer. Is there a way to do this? Here is the code that grabs the data.

void GSTMediaStream::GetAudioInputData(void* data, int max_size, int& written)
{
    if (soundAppSink != NULL)
    {
        GstBuffer* buffer = gst_app_sink_pull_buffer (GST_APP_SINK (soundAppSink));
        if (buffer)
        {
            guint bufSize = MIN (GST_BUFFER_SIZE (buffer), (guint) max_size);
            guint offset = 0;

            std::cout << "buffer time length is " << GST_BUFFER_DURATION (buffer)
                      << "ns, buffer size is " << GST_BUFFER_SIZE (buffer)
                      << " while max size is " << max_size << "\n";

            // If max_size is smaller than the buffer, only grab the last 10ms captured.
            // I am assuming the occasional difference is because the buffers hold more
            // audio frames than the rtp stream wants.
            if (bufSize > 0)
                offset = GST_BUFFER_SIZE (buffer) - bufSize;  // assign, don't redeclare:
                                                              // a second `uint offset` here
                                                              // would shadow the outer one

            memcpy (data, GST_BUFFER_DATA (buffer) + offset, bufSize);
            written = bufSize;
            gst_buffer_unref (buffer);
        }
    }
}

Update: OK, so I've narrowed the problem down to the PulseAudio plugin for gstreamer. The autoaudiosrc is using the pulsesrc plugin for capture, and for whatever reason the pulse server slows down after a few resamplings. I tested with alsasrc, and it seems to handle the sample-rate changes while keeping the 10ms buffers, but the problem is that it will not let me capture the audio in mono, only in stereo.

1 Answer

在下西门庆 · 2019-08-09 10:56

I got rid of the autoaudiosrc and plugged in alsasrc instead. The pulsesrc plugin was causing the erratic blocking behavior on the buffer pull, which was giving me the varying buffer lengths. The only remaining problem was that alsasrc wouldn't capture in mono; I remedied that by adding an audioconvert element to the pipeline. My final pipeline was:

alsasrc ! audioconvert ! audio/x-raw-int, signed=true, endianness=1234, depth=16, width=16, channels=1, rate=16000 ! appsink name=soundSink max_buffers=2 drop=true

This gave me the buffer lengths I needed. However, is this going to cause any significant performance issues, given that this will run on an embedded device?
