I have been using FFmpeg to decode every single frame that I receive from my IP cam. The brief code looks like this:
- (void)decodeFrame:(unsigned char *)frameData frameSize:(int)frameSize {
    // context is an AVCodecContext * that was opened elsewhere (e.g. in init)
    AVFrame frame;
    AVPicture picture;
    AVPacket pkt;
    int got_picture = 0;

    av_init_packet(&pkt);
    pkt.data = frameData;
    pkt.size = frameSize;

    avcodec_get_frame_defaults(&frame);
    avpicture_alloc(&picture, PIX_FMT_RGB24, targetWidth, targetHeight);
    avcodec_decode_video2(context, &frame, &got_picture, &pkt);
}
The code works fine, but it is software decoding. I want to improve decoding performance with hardware decoding. After a lot of research, I learned it may be achievable with the AVFoundation framework.
The AVAssetReader class may help, but I can't figure out what comes next. Could anyone point out the following steps for me? Any help would be appreciated.
iOS does not provide any direct public access to the hardware decode engine, because hardware is always used to decode H.264 video on iOS.
Therefore, session 513 gives you all the information you need to do frame-by-frame decoding on iOS. In short, per that session:
- Generate individual network abstraction layer units (NALUs) from your H.264 elementary stream. There is plenty of information online on how this is done. VCL NALUs (IDR and non-IDR) contain your video data and are to be fed into the decoder.
- Re-package those NALUs according to the "AVCC" format, removing NALU start codes and replacing them with a 4-byte NALU length header.
- Create a CMVideoFormatDescriptionRef from your SPS and PPS NALUs via CMVideoFormatDescriptionCreateFromH264ParameterSets().
- Package NALU frames as CMSampleBuffers per session 513.
- Create a VTDecompressionSessionRef, and feed VTDecompressionSessionDecodeFrame() with the sample buffers.
- Alternatively, use AVSampleBufferDisplayLayer, whose -enqueueSampleBuffer: method obviates the need to create your own decoder.
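The "remove start codes, prepend a 4-byte length" repackaging step above can be sketched in plain C. This is a minimal illustration under simplifying assumptions, not production code: annexb_nalu_to_avcc is a hypothetical helper name, and it assumes the input buffer holds exactly one Annex-B NALU.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: rewrite one Annex-B NALU (start code + payload)
 * into AVCC format (4-byte big-endian length header + payload).
 * Returns the number of bytes written to `out`, or 0 if no start code
 * is found. `out` must be able to hold at least inSize + 1 bytes. */
static size_t annexb_nalu_to_avcc(const uint8_t *in, size_t inSize,
                                  uint8_t *out) {
    size_t start;
    /* Accept either a 4-byte (00 00 00 01) or 3-byte (00 00 01) start code. */
    if (inSize >= 4 && in[0] == 0 && in[1] == 0 && in[2] == 0 && in[3] == 1) {
        start = 4;
    } else if (inSize >= 3 && in[0] == 0 && in[1] == 0 && in[2] == 1) {
        start = 3;
    } else {
        return 0;
    }

    uint32_t payloadLen = (uint32_t)(inSize - start);
    /* 4-byte NALU length header, big-endian, as AVCC expects. */
    out[0] = (uint8_t)(payloadLen >> 24);
    out[1] = (uint8_t)(payloadLen >> 16);
    out[2] = (uint8_t)(payloadLen >> 8);
    out[3] = (uint8_t)(payloadLen);
    memcpy(out + 4, in + start, payloadLen);
    return 4 + payloadLen;
}
```

A real stream may contain several NALUs back to back, so in practice you would scan for each start code and repackage every NALU before wrapping the result in a CMBlockBuffer/CMSampleBuffer.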
Edit:
This link provides a more detailed, step-by-step explanation of how to decode H.264: stackoverflow.com/a/29525001/3156169
Original answer:
I watched session 513, "Direct Access to Video Encoding and Decoding," from WWDC 2014 yesterday, and found the answer to my own question.
The speaker says:
We have Video Toolbox (in iOS 8). Video Toolbox has been there on
OS X for a while, but now it's finally populated with headers on
iOS. This provides direct access to encoders and decoders.
So, there is no way to do hardware decoding frame by frame in iOS 7, but it can be done in iOS 8.
Has anyone figured out how to directly access video encoding and decoding frame by frame in iOS 8?