What happens when you do a conversion from AV_SAMPLE_FMT_S16P to AV_SAMPLE_FMT_S16? How is the AVFrame structure going to contain the planar and non-planar data?
AV_SAMPLE_FMT_S16P is planar signed 16-bit audio, i.e. 2 bytes per sample, which is the same as AV_SAMPLE_FMT_S16. The only difference is that in AV_SAMPLE_FMT_S16 the samples of each channel are interleaved, i.e. if you have two-channel audio then the samples buffer will look like

c1 c2 c1 c2 c1 c2 ...

where c1 is a sample for channel 1 and c2 is a sample for channel 2, while for one frame of planar audio you will have something like

c1 c1 c1 ... c2 c2 c2 ...
Now, how is this stored in an AVFrame?

For planar audio (AV_SAMPLE_FMT_S16P), data[i] contains the samples of channel i (channel 0 being the first channel). If there are more than 8 channels, the data for the remaining channels is found in the extended_data attribute of the AVFrame.

For interleaved audio (AV_SAMPLE_FMT_S16), data[0] contains the data of all channels in an interleaved manner.