H.264 muxed to MP4 using libavformat not playing back

Published 2019-07-21 03:06

I am trying to mux H.264 data into an MP4 file. Saving this H.264 Annex B data out to an MP4 file appears to work without errors, but the file fails to play back.

I have done a binary comparison of the files, and the problem seems to be somewhere in what is being written to the footer (trailer) of the MP4 file.

I suspect it has something to do with the way the stream is being created, or something similar.

Init:

AVOutputFormat* fmt = av_guess_format( 0, "out.mp4", 0 );
oc = avformat_alloc_context();
oc->oformat = fmt;
strcpy(oc->filename, filename);

In this prototype application I am also creating a PNG file for each I-frame. So when the first I-frame is encountered, I create the video stream and write the AV header, etc.:

void addVideoStream(AVCodecContext* decoder)
{
    videoStream = av_new_stream(oc, 0);
    if (!videoStream)
    {
         cout << "ERROR creating video stream" << endl;
         return;        
    }
    vi = videoStream->index;    
    videoContext = videoStream->codec;      
    videoContext->codec_type = AVMEDIA_TYPE_VIDEO;
    videoContext->codec_id = decoder->codec_id;
    videoContext->bit_rate = 512000;
    videoContext->width = decoder->width;
    videoContext->height = decoder->height;
    videoContext->time_base.den = 25;
    videoContext->time_base.num = 1;    
    videoContext->gop_size = decoder->gop_size;
    videoContext->pix_fmt = decoder->pix_fmt;       

    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        videoContext->flags |= CODEC_FLAG_GLOBAL_HEADER;

    av_dump_format(oc, 0, filename, 1);

    if (!(oc->oformat->flags & AVFMT_NOFILE))
    {
        if (avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0) {
            cout << "Error opening file" << endl;
        }
    }

    avformat_write_header(oc, NULL);
}

Writing out the data packets:

unsigned char* data = block->getData();
unsigned char videoFrameType = data[4];
int dataLen = block->getDataLen();

// store pps
if (videoFrameType == 0x68)
{
    if (ppsFrame != NULL)
    {
        delete[] ppsFrame; ppsFrameLength = 0; ppsFrame = NULL;
    }
    ppsFrameLength = block->getDataLen();
    ppsFrame = new unsigned char[ppsFrameLength];
    memcpy(ppsFrame, block->getData(), ppsFrameLength);
}
else if (videoFrameType == 0x67)
{
    // sps
    if (spsFrame != NULL)
    {
        delete[] spsFrame; spsFrameLength = 0; spsFrame = NULL;
    }
    spsFrameLength = block->getDataLen();
    spsFrame = new unsigned char[spsFrameLength];
    memcpy(spsFrame, block->getData(), spsFrameLength);                 
}                                           

if (videoFrameType == 0x65 || videoFrameType == 0x41)
{
    videoFrameNumber++;
}
if (videoFrameType == 0x65)
{
    decodeIFrame(videoFrameNumber, spsFrame, spsFrameLength, ppsFrame, ppsFrameLength, data, dataLen);
}

if (videoStream != NULL)
{
    AVPacket pkt = { 0 };
    av_init_packet(&pkt);
    pkt.stream_index = vi;
    pkt.flags = 0;                      
    pkt.pts = pkt.dts = 0;                                  

    if (videoFrameType == 0x65)
    {
        // combine the SPS PPS & I frames together
        pkt.flags |= AV_PKT_FLAG_KEY;                                                   
        unsigned char* videoFrame = new unsigned char[spsFrameLength+ppsFrameLength+dataLen];
        memcpy(videoFrame, spsFrame, spsFrameLength);
        memcpy(&videoFrame[spsFrameLength], ppsFrame, ppsFrameLength);
        memcpy(&videoFrame[spsFrameLength+ppsFrameLength], data, dataLen);

        // overwrite the start code (00 00 00 01 with a 32-bit length)
        setLength(videoFrame, spsFrameLength-4);
        setLength(&videoFrame[spsFrameLength], ppsFrameLength-4);
        setLength(&videoFrame[spsFrameLength+ppsFrameLength], dataLen-4);
        pkt.size = dataLen + spsFrameLength + ppsFrameLength;
        pkt.data = videoFrame;
        av_interleaved_write_frame(oc, &pkt);
        delete[] videoFrame; videoFrame = NULL;
    }
    else if (videoFrameType != 0x67 && videoFrameType != 0x68)
    {   
        // Send other frames except pps & sps which are caught and stored                   
        pkt.size = dataLen;
        pkt.data = data;
        setLength(data, dataLen-4);                     
        av_interleaved_write_frame(oc, &pkt);
    }
}
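The setLength() helper is not shown in the post; judging by how it is called, it overwrites the 4-byte Annex B start code at the front of a NAL unit with the unit's length as a big-endian 32-bit value, which is the framing MP4 (AVCC) expects. A minimal sketch of what it presumably looks like:

```cpp
// Hypothetical reconstruction of the setLength() helper used above:
// overwrite the 4-byte Annex B start code (00 00 00 01) at the front of
// `buf` with the NAL unit length, big-endian.  `length` is the payload
// size excluding the 4 prefix bytes, matching the dataLen-4 call sites.
void setLength(unsigned char* buf, int length)
{
    buf[0] = (unsigned char)((length >> 24) & 0xFF);
    buf[1] = (unsigned char)((length >> 16) & 0xFF);
    buf[2] = (unsigned char)((length >> 8) & 0xFF);
    buf[3] = (unsigned char)(length & 0xFF);
}
```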

Finally, to close the file off:

av_write_trailer(oc);
int i = 0;
for (i = 0; i < oc->nb_streams; i++)
{
    av_freep(&oc->streams[i]->codec);
    av_freep(&oc->streams[i]);      
}

if (!(oc->oformat->flags & AVFMT_NOFILE))
{
    avio_close(oc->pb);
}
av_free(oc);

If I take the H.264 data alone and convert it:

ffmpeg -i recording.h264 -vcodec copy recording.mp4

everything but the "footer" of the files is identical.

Output from my program:

readrec recording.tcp out.mp4
**** START **** 03/01/2013 14:26:01.180000
Output #0, mp4, to 'out.mp4':
    Stream #0:0: Video: h264, yuv420p, 352x288, q=2-31, 512 kb/s, 90k tbn, 25 tbc
**** END **** 03/01/2013 14:27:01.102000
Wrote 1499 video frames.

If I try to convert the MP4 file created with my code using ffmpeg:

ffmpeg -i out.mp4 -vcodec copy out2.mp4
ffmpeg version 0.11.1 Copyright (c) 2000-2012 the FFmpeg developers
      built on Mar  7 2013 12:49:22 with suncc 0x5110
      configuration: --extra-cflags=-KPIC -g --disable-mmx
      --disable-protocol=udp --disable-encoder=nellymoser --cc=cc --cxx=CC
libavutil      51. 54.100 / 51. 54.100
libavcodec     54. 23.100 / 54. 23.100
libavformat    54.  6.100 / 54.  6.100
libavdevice    54.  0.100 / 54.  0.100
libavfilter     2. 77.100 /  2. 77.100
libswscale      2.  1.100 /  2.  1.100
libswresample   0. 15.100 /  0. 15.100
[h264 @ 12eaac0] no frame!
    Last message repeated 1 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
    Last message repeated 23 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
    Last message repeated 74 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
    Last message repeated 64 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
    Last message repeated 34 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
    Last message repeated 49 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
    Last message repeated 24 times
[h264 @ 12eaac0] Partitioned H.264 support is incomplete
[h264 @ 12eaac0] no frame!
    Last message repeated 23 times
[h264 @ 12eaac0] sps_id out of range
[h264 @ 12eaac0] no frame!
    Last message repeated 148 times
[h264 @ 12eaac0] sps_id (32) out of range
    Last message repeated 1 times
[h264 @ 12eaac0] no frame!
    Last message repeated 33 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
    Last message repeated 128 times
[h264 @ 12eaac0] sps_id (32) out of range
    Last message repeated 1 times
[h264 @ 12eaac0] no frame!
    Last message repeated 3 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
    Last message repeated 3 times
[h264 @ 12eaac0] slice type too large (0) at 0 0
[h264 @ 12eaac0] decode_slice_header error
[h264 @ 12eaac0] no frame!
    Last message repeated 309 times
[h264 @ 12eaac0] sps_id (32) out of range
    Last message repeated 1 times
[h264 @ 12eaac0] no frame!
    Last message repeated 192 times
[h264 @ 12eaac0] Partitioned H.264 support is incomplete
[h264 @ 12eaac0] no frame!
    Last message repeated 73 times
[h264 @ 12eaac0] sps_id (32) out of range
    Last message repeated 1 times
[h264 @ 12eaac0] no frame!
    Last message repeated 99 times
[h264 @ 12eaac0] sps_id (32) out of range
    Last message repeated 1 times
[h264 @ 12eaac0] no frame!
    Last message repeated 197 times
[mov,mp4,m4a,3gp,3g2,mj2 @ 12e3100] decoding for stream 0 failed
[mov,mp4,m4a,3gp,3g2,mj2 @ 12e3100] Could not find codec parameters
(Video: h264 (avc1 / 0x31637661), 393539 kb/s)
out.mp4: could not find codec parameters

I really don't know where the problem is, except that it must be something to do with the way the streams are being set up. I've looked at bits of code from where other people are doing something similar, and tried to use this advice in setting up the streams, but to no avail!


The final code that gave me an H.264/AAC muxed (synchronised) file follows. First some background information. The data comes from an IP camera. The data is presented via a third-party API as video/audio packets. The video packets are presented as the RTP payload data (no header), and consist of NAL units that are reconstructed and converted to H.264 video in Annex B format. AAC audio is presented as raw AAC and is converted to ADTS format to enable playback. These packets have been put into a bitstream format that allows the transmission of a timestamp (64-bit milliseconds since 1 Jan 1970) along with a few other things.
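As an aside, the frame-type bytes the code compares data[4] against (0x67, 0x68, 0x65, 0x41) are complete NAL header bytes; the NAL unit type itself is only the low 5 bits of the byte following the start code. A small sketch illustrating this (the helper name is mine, not from the original code):

```cpp
// The byte right after the 4-byte Annex B start code is the full NAL header;
// the NAL unit type is its low 5 bits:
//   0x67 -> type 7 (SPS), 0x68 -> type 8 (PPS),
//   0x65 -> type 5 (IDR slice), 0x41 -> type 1 (non-IDR slice).
int nalUnitType(const unsigned char* annexbNal)
{
    return annexbNal[4] & 0x1F;  // byte 4 follows the 00 00 00 01 prefix
}
```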

This is more or less a prototype and is not clean in any respect. It probably leaks badly. But I hope it helps anyone else trying to achieve something similar to what I am.

Globals:

AVFormatContext* oc = NULL;
AVCodecContext* videoContext = NULL;
AVStream* videoStream = NULL;
AVCodecContext* audioContext = NULL;
AVStream* audioStream = NULL;
AVCodec* videoCodec = NULL;
AVCodec* audioCodec = NULL;
int vi = 0;  // Video stream
int ai = 1;  // Audio stream

uint64_t firstVideoTimeStamp = 0;
uint64_t firstAudioTimeStamp = 0;
int audioStartOffset = 0;

char* filename = NULL;

Boolean first = TRUE;

int videoFrameNumber = 0;
int audioFrameNumber = 0;

Main:

int main(int argc, char* argv[])
{
    if (argc != 3)
    {   
        cout << argv[0] << " <stream playback file> <output mp4 file>" << endl;
        return 0;
    }
    char* input_stream_file = argv[1];
    filename = argv[2];

    av_register_all();    

    fstream inFile;
    inFile.open(input_stream_file, ios::in);

    // Used to store the latest pps & sps frames
    unsigned char* ppsFrame = NULL;
    int ppsFrameLength = 0;
    unsigned char* spsFrame = NULL;
    int spsFrameLength = 0;

    // Setup MP4 output file
    AVOutputFormat* fmt = av_guess_format( 0, filename, 0 );
    oc = avformat_alloc_context();
    oc->oformat = fmt;
    strcpy(oc->filename, filename);

    // Setup the bitstream filter for AAC in adts format.  Could probably also achieve
    // this by stripping the first 7 bytes!
    AVBitStreamFilterContext* bsfc = av_bitstream_filter_init("aac_adtstoasc");
    if (!bsfc)
    {       
        cout << "Error creating adtstoasc filter" << endl;
        return -1;
    }

    while (inFile.good())
    {
        TcpAVDataBlock* block = new TcpAVDataBlock();
        block->readStruct(inFile);
        DateTime dt = block->getTimestampAsDateTime();
        switch (block->getPacketType())
        {
            case TCP_PACKET_H264:
            {       
                if (firstVideoTimeStamp == 0)
                    firstVideoTimeStamp = block->getTimeStamp();
                unsigned char* data = block->getData();
                unsigned char videoFrameType = data[4];
                int dataLen = block->getDataLen();

                // pps
                if (videoFrameType == 0x68)
                {
                    if (ppsFrame != NULL)
                    {
                        delete[] ppsFrame; ppsFrameLength = 0;
                        ppsFrame = NULL;
                    }
                    ppsFrameLength = block->getDataLen();
                    ppsFrame = new unsigned char[ppsFrameLength];
                    memcpy(ppsFrame, block->getData(), ppsFrameLength);
                }
                else if (videoFrameType == 0x67)
                {
                    // sps
                    if (spsFrame != NULL)
                    {
                        delete[] spsFrame; spsFrameLength = 0;
                        spsFrame = NULL;
                    }
                    spsFrameLength = block->getDataLen();
                    spsFrame = new unsigned char[spsFrameLength];
                    memcpy(spsFrame, block->getData(), spsFrameLength);                   
                }                                           

                if (videoFrameType == 0x65 || videoFrameType == 0x41)
                {
                    videoFrameNumber++;
                }
                // Extract a thumbnail for each I-Frame
                if (videoFrameType == 0x65)
                {
                    decodeIFrame(videoFrameNumber, spsFrame, spsFrameLength, ppsFrame, ppsFrameLength, data, dataLen);
                }
                if (videoStream != NULL)
                {
                    AVPacket pkt = { 0 };
                    av_init_packet(&pkt);
                    pkt.stream_index = vi;
                    pkt.flags = 0;           
                    pkt.pts = videoFrameNumber;
                    pkt.dts = videoFrameNumber;           
                    if (videoFrameType == 0x65)
                    {
                        pkt.flags |= AV_PKT_FLAG_KEY;

                        unsigned char* videoFrame = new unsigned char[spsFrameLength+ppsFrameLength+dataLen];
                        memcpy(videoFrame, spsFrame, spsFrameLength);
                        memcpy(&videoFrame[spsFrameLength], ppsFrame, ppsFrameLength);
                        memcpy(&videoFrame[spsFrameLength+ppsFrameLength], data, dataLen);

                        pkt.size = spsFrameLength + ppsFrameLength + dataLen;
                        pkt.data = videoFrame;
                        av_interleaved_write_frame(oc, &pkt);
                        delete[] videoFrame; videoFrame = NULL;
                    }
                    else if (videoFrameType != 0x67 && videoFrameType != 0x68)
                    {                       
                        pkt.size = dataLen;
                        pkt.data = data;
                        av_interleaved_write_frame(oc, &pkt);
                    }                       
                }
                break;
            }

        case TCP_PACKET_AAC:

            if (firstAudioTimeStamp == 0)
            {
                firstAudioTimeStamp = block->getTimeStamp();
                uint64_t millseconds_difference = firstAudioTimeStamp - firstVideoTimeStamp;
                audioStartOffset = millseconds_difference * 16000 / 1000;
                cout << "audio offset: " << audioStartOffset << endl;
            }

            if (audioStream != NULL)
            {
                AVPacket pkt = { 0 };
                av_init_packet(&pkt);
                pkt.stream_index = ai;
                pkt.flags |= AV_PKT_FLAG_KEY;
                pkt.pts = audioFrameNumber*1024;
                pkt.dts = audioFrameNumber*1024;
                pkt.data = block->getData();
                pkt.size = block->getDataLen();
                pkt.duration = 1024;

                AVPacket newpacket = pkt;                       
                int rc = av_bitstream_filter_filter(bsfc, audioContext,
                    NULL,
                    &newpacket.data, &newpacket.size,
                    pkt.data, pkt.size,
                    pkt.flags & AV_PKT_FLAG_KEY);

                if (rc >= 0)
                {
                    //cout << "Write audio frame" << endl;
                    newpacket.pts = audioFrameNumber*1024;
                    newpacket.dts = audioFrameNumber*1024;
                    audioFrameNumber++;
                    newpacket.duration = 1024;                   

                    av_interleaved_write_frame(oc, &newpacket);
                    av_free_packet(&newpacket);
                }   
                else
                {
                    cout << "Error filtering aac packet" << endl;

                }
            }
            break;

        case TCP_PACKET_START:
            break;

        case TCP_PACKET_END:
            break;
        }
        delete block;
    }
    inFile.close();

    av_write_trailer(oc);
    int i = 0;
    for (i = 0; i < oc->nb_streams; i++)
    {
        av_freep(&oc->streams[i]->codec);
        av_freep(&oc->streams[i]);       
    }

    if (!(oc->oformat->flags & AVFMT_NOFILE))
    {
        avio_close(oc->pb);
    }

    av_free(oc);

    delete[] spsFrame; spsFrame = NULL;
    delete[] ppsFrame; ppsFrame = NULL;

    cout << "Wrote " << videoFrameNumber << " video frames." << endl;

    return 0;
}

The streams/codecs are added and the header is created in a function called addVideoAndAudioStream(). This function is called from decodeIFrame(), so there are a few assumptions (which aren't necessarily good): 1. a video packet comes first; 2. AAC is present.

decodeIFrame was a kind of separate prototype where I was creating a thumbnail for each I-frame. The code to generate the thumbnails came from: https://gnunet.org/svn/Extractor/src/plugins/thumbnailffmpeg_extractor.c

The decodeIFrame function passes an AVCodecContext into addVideoAndAudioStream:

void addVideoAndAudioStream(AVCodecContext* decoder = NULL)
{
    videoStream = av_new_stream(oc, 0);
    if (!videoStream)
    {
        cout << "ERROR creating video stream" << endl;
        return;       
    }
    vi = videoStream->index;   
    videoContext = videoStream->codec;       
    videoContext->codec_type = AVMEDIA_TYPE_VIDEO;
    videoContext->codec_id = decoder->codec_id;
    videoContext->bit_rate = 512000;
    videoContext->width = decoder->width;
    videoContext->height = decoder->height;
    videoContext->time_base.den = 25;
    videoContext->time_base.num = 1;
    videoContext->gop_size = decoder->gop_size;
    videoContext->pix_fmt = decoder->pix_fmt;       

    audioStream = av_new_stream(oc, 1);
    if (!audioStream)
    {
        cout << "ERROR creating audio stream" << endl;
        return;
    }
    ai = audioStream->index;
    audioContext = audioStream->codec;
    audioContext->codec_type = AVMEDIA_TYPE_AUDIO;
    audioContext->codec_id = CODEC_ID_AAC;
    audioContext->bit_rate = 64000;
    audioContext->sample_rate = 16000;
    audioContext->channels = 1;

    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
    {
        videoContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
        audioContext->flags |= CODEC_FLAG_GLOBAL_HEADER;
    }

    av_dump_format(oc, 0, filename, 1);

    if (!(oc->oformat->flags & AVFMT_NOFILE))
    {
        if (avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0) {
            cout << "Error opening file" << endl;
        }
    }

    avformat_write_header(oc, NULL);
}

As you can see, quite a few of the assumptions don't seem to matter, e.g.: 1. bit rate — the actual video bit rate was ~262k, whereas I specified 512 kbit; 2. AAC channels — I specified mono, although the actual output was stereo from memory.

You do still need to know what the frame rate (time base) is for the video and the audio.

Contrary to a lot of other examples, when setting the PTS and DTS on the video packets, it would not play back. I needed to know the time base (25 fps) and then set the PTS and DTS according to that time base, i.e. first frame = 0 (PPS, SPS, I), second frame = 1 (intermediate frame, or whatever it's called).
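In other words, with a time base of 1/25 the PTS is simply the frame index, and each tick corresponds to 40 ms. A tiny sketch of that relationship (the helper name is mine, not from the original code):

```cpp
#include <cstdint>

// With time_base = num/den (here 1/25), pts is the frame index and one
// tick lasts num*1000/den milliseconds of presentation time.
std::int64_t videoPtsToMs(std::int64_t frameIndex, int timeBaseNum, int timeBaseDen)
{
    return frameIndex * 1000 * timeBaseNum / timeBaseDen;
}
```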

For the AAC I also had to make the assumption that it was 16000 Hz, with 1024 samples per AAC packet (you can also have AAC at 960 samples, I think), to determine the audio "offset". I added this to the PTS and DTS, so the PTS/DTS is the sample number at which it is to be played back. You also need to make sure that the duration of 1024 is set in the packet before writing.
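The arithmetic described above can be sketched as follows (function names are mine; the posted code computes the offset inline):

```cpp
#include <cstdint>

// Convert the wall-clock gap between the first audio and first video packet
// into audio samples; with an audio time base of 1/16000 the pts unit is
// the sample, as in the posted code (millseconds_difference * 16000 / 1000).
std::int64_t audioOffsetSamples(std::uint64_t firstAudioMs, std::uint64_t firstVideoMs,
                                int sampleRate)
{
    return (std::int64_t)(firstAudioMs - firstVideoMs) * sampleRate / 1000;
}

// Each AAC frame carries 1024 samples, so frame n plays at offset + n * 1024.
std::int64_t aacFramePts(std::int64_t offsetSamples, int frameNumber)
{
    return offsetSamples + (std::int64_t)frameNumber * 1024;
}
```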

-

I also found out today that Annex B isn't really compatible with any other player, so the AVCC format should really be used.

These URLs helped: Problem to Decode H264 video over RTP with ffmpeg (libavcodec), and http://aviadr1.blogspot.com.au/2010/05/h264-extradata-partially-explained-for.html

When constructing the video stream, I filled out the extradata and extradata_size:

// Extradata contains PPS & SPS for AVCC format
int extradata_len = 8 + spsFrameLen-4 + 1 + 2 + ppsFrameLen-4;
videoContext->extradata = (uint8_t*)av_mallocz(extradata_len);
videoContext->extradata_size = extradata_len;
videoContext->extradata[0] = 0x01;
videoContext->extradata[1] = spsFrame[4+1];
videoContext->extradata[2] = spsFrame[4+2];
videoContext->extradata[3] = spsFrame[4+3];
videoContext->extradata[4] = 0xFC | 3;
videoContext->extradata[5] = 0xE0 | 1;
int tmp = spsFrameLen - 4;
videoContext->extradata[6] = (tmp >> 8) & 0x00ff;
videoContext->extradata[7] = tmp & 0x00ff;
int i = 0;
for (i=0;i<tmp;i++)
    videoContext->extradata[8+i] = spsFrame[4+i];
videoContext->extradata[8+tmp] = 0x01;
int tmp2 = ppsFrameLen-4;   
videoContext->extradata[8+tmp+1] = (tmp2 >> 8) & 0x00ff;
videoContext->extradata[8+tmp+2] = tmp2 & 0x00ff;
for (i=0;i<tmp2;i++)
    videoContext->extradata[8+tmp+3+i] = ppsFrame[4+i];
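As a sanity check, the avcC extradata layout built above can be reproduced as a standalone function (this is my restatement, not code from the post). It takes Annex B SPS/PPS NAL units, each including its 4-byte start code, and produces the same byte layout:

```cpp
#include <cstdint>
#include <vector>

// Build an AVCDecoderConfigurationRecord (avcC) with one SPS and one PPS,
// mirroring the byte-by-byte construction above: configuration version,
// profile/compatibility/level copied from the SPS, 4-byte NAL lengths,
// then each parameter set prefixed with a 16-bit big-endian size.
std::vector<unsigned char> buildAvcC(const unsigned char* sps, int spsLen,
                                     const unsigned char* pps, int ppsLen)
{
    std::vector<unsigned char> ed;
    ed.push_back(0x01);      // configurationVersion
    ed.push_back(sps[5]);    // AVCProfileIndication (byte after NAL header)
    ed.push_back(sps[6]);    // profile_compatibility
    ed.push_back(sps[7]);    // AVCLevelIndication
    ed.push_back(0xFC | 3);  // lengthSizeMinusOne = 3 -> 4-byte NAL lengths
    ed.push_back(0xE0 | 1);  // numOfSequenceParameterSets = 1
    int spsBody = spsLen - 4;                    // strip the start code
    ed.push_back((spsBody >> 8) & 0xFF);
    ed.push_back(spsBody & 0xFF);
    ed.insert(ed.end(), sps + 4, sps + spsLen);  // includes the 0x67 header
    ed.push_back(0x01);      // numOfPictureParameterSets = 1
    int ppsBody = ppsLen - 4;
    ed.push_back((ppsBody >> 8) & 0xFF);
    ed.push_back(ppsBody & 0xFF);
    ed.insert(ed.end(), pps + 4, pps + ppsLen);  // includes the 0x68 header
    return ed;
}
```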

When writing out the frames, don't prepend the SPS and PPS frames; just write out the I-frames and P-frames. In addition, replace the Annex B start code (0x00 0x00 0x00 0x01) contained in the first 4 bytes with the size of the I/P frame.

Answer 1:

Let me summarize: the problem with your (original) code is that the input to av_interleaved_write_frame() should not start with the packet length. The file may still be playable if you don't strip the 00 00 00 01 start codes, but IMHO that is resilient behaviour of the player and I wouldn't count on it.
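To make the two framings easy to tell apart when debugging a buffer, a trivial check (mine, not from the answer) is whether the first four bytes are the Annex B start code or a length prefix:

```cpp
#include <cstring>

// Returns true if the buffer begins with the 4-byte Annex B start code
// (00 00 00 01) rather than an AVCC big-endian length prefix.
bool startsWithStartCode(const unsigned char* buf, int len)
{
    static const unsigned char startCode[4] = { 0x00, 0x00, 0x00, 0x01 };
    return len >= 4 && std::memcmp(buf, startCode, 4) == 0;
}
```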



Source: H.264 muxed to MP4 using libavformat not playing back