I am looking for a fast way to add my own alpha channel to a decoded ffmpeg frame.
I have an AVI file with RGB information, and I have a synchronized video stream describing the transparency alpha channel (grayscale). While decoding the AVI file using ffmpeg, I want to convert the output frames to RGBA and insert my own alpha information, so that in the end I obtain a semi-transparent video stream.
Is there any optimized function, maybe in libswscale or libswresample, to do such a thing better than just iterating through pixels?
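For context, the scaling context stored in my handle is created more or less like this (just a sketch; I am assuming AV_PIX_FMT_RGBA as the destination format so that every pixel gets an alpha byte):

// Sketch of the assumed SwsContext setup: same size in and out,
// destination format AV_PIX_FMT_RGBA so each pixel carries an alpha byte.
handle->sws_ctx = sws_getContext(
    handle->pCodecCtx->width, handle->pCodecCtx->height, handle->pCodecCtx->pix_fmt,
    handle->pCodecCtx->width, handle->pCodecCtx->height, AV_PIX_FMT_RGBA,
    SWS_BILINEAR, NULL, NULL, NULL);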
Basically, I would like to be able to write a function like the one below, if only something like sws_scale_and_add_alpha existed:
void* FFmpegLib_nextFrame_withAlpha(void* _handle, uint8_t* my_alpha_channel)
{
    FFmpegLibHandle* handle = (FFmpegLibHandle*)_handle;
    AVPacket packet;
    int frameFinished;

    while (av_read_frame(handle->pFormatCtx, &packet) >= 0) {
        // Is this a packet from the video stream?
        if (packet.stream_index == handle->videoStream) {
            // Decode video frame
            avcodec_decode_video2(handle->pCodecCtx, handle->pFrame, &frameFinished, &packet);

            // Did we get a complete video frame?
            if (frameFinished) {
                sws_scale_and_add_alpha(
                    handle->sws_ctx,
                    (uint8_t const * const *)handle->pFrame->data,
                    handle->pFrame->linesize,
                    0,
                    handle->pCodecCtx->height,
                    handle->pFrameARGB->data,
                    handle->pFrameARGB->linesize,
                    my_alpha_channel
                );
                av_packet_unref(&packet); // done with this packet
                return handle->pFrameARGB->data;
            }
        }
        av_packet_unref(&packet); // release packets we are not returning from
    }
    return NULL;
}
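If no such combined call exists, I imagine it could be emulated by converting to RGBA with a plain sws_scale and then merging the alpha plane in a second pass, something like the sketch below (the helper name and the extra width/height parameters are mine; I am assuming the SwsContext was created with AV_PIX_FMT_RGBA as the destination, so the alpha byte is every 4th byte; with AV_PIX_FMT_ARGB it would be at offset 0 instead):

#include <stdint.h>
#include <libswscale/swscale.h>

// Sketch only: convert with sws_scale, then write the supplied alpha plane
// into the A byte of every pixel of the packed RGBA destination.
static void sws_scale_and_add_alpha(struct SwsContext *sws_ctx,
                                    uint8_t const * const *src_data,
                                    const int *src_linesize,
                                    int src_slice_y, int src_height,
                                    uint8_t * const *dst_data,
                                    const int *dst_linesize,
                                    const uint8_t *alpha_plane, // width * height bytes, one per pixel
                                    int width, int height)
{
    // First pass: normal colorspace conversion; the alpha bytes come out fully opaque.
    sws_scale(sws_ctx, src_data, src_linesize, src_slice_y, src_height,
              dst_data, dst_linesize);

    // Second pass: overwrite the A component (byte 3 of each RGBA pixel).
    for (int y = 0; y < height; y++) {
        uint8_t *dst_row = dst_data[0] + y * dst_linesize[0];
        const uint8_t *alpha_row = alpha_plane + y * width;
        for (int x = 0; x < width; x++)
            dst_row[4 * x + 3] = alpha_row[x];
    }
}

That second loop is exactly the per-pixel iteration I was hoping to avoid, hence the question.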