This question isn't about whether ffmpeg code can be used on Android. I know it can. I'm asking whether anybody has made real performance progress with it. I'm asking after several weeks of experiments with this stuff, and I've had enough. I don't want to write to threads where people don't even say what kind of video they decode (resolution, codec) and only talk about some mystical FPS; I just don't understand what they are trying to achieve. I'm also not going to develop an application only for my own phone, or only for Android 2.2+ phones that have some extended OpenGL features. I have a fairly popular phone, the HTC Desire, so if the application doesn't work on it, what's the point?
Well, what do I have?
1. FFmpeg source from the latest HEAD. I actually could not build it with NDK5, so I decided to use a stolen one.
2. Bambuser's build script (bash) with a matching ffmpeg source (http://bambuser.com/r/opensource/ffmpeg-4f7d2fe-android-2011-03-07.tar.gz). It builds fine with NDK5 after some corrections.
3. RockPlayer's gelded ffmpeg source code with a huge Android.mk serving as the build script (http://www.rockplayer.com/download/rockplayer_ffmpeg_git_20100418.zip). It builds with both NDK3 and NDK5 after some corrections. RockPlayer is probably the coolest media player for Android, and I expected some perks from using its build.
I had a suitable video for the project (neither too big nor too small): 600x360, H.264.
Both libraries from items 2 and 3 let you pull frames from a video (frame by frame, seeking, etc.). I didn't try to extract the audio track because the project didn't need one. I'm not publishing my source here because it's the traditional approach and easy to find.
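For reference, the decoding path in both builds is the standard libavformat/libavcodec loop. A minimal sketch using the API names of that ffmpeg generation (avcodec_decode_video2() and friends are deprecated in today's ffmpeg), with all error handling omitted:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    /* Open a file and decode video frames one by one (sketch, no error checks). */
    static void decode_all(const char *path)
    {
        AVFormatContext *fmt = NULL;
        av_register_all();
        avformat_open_input(&fmt, path, NULL, NULL);
        avformat_find_stream_info(fmt, NULL);

        /* find the first video stream */
        int vstream = -1;
        for (unsigned i = 0; i < fmt->nb_streams; i++)
            if (fmt->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
                vstream = i;
                break;
            }

        AVCodecContext *dec = fmt->streams[vstream]->codec;
        avcodec_open2(dec, avcodec_find_decoder(dec->codec_id), NULL);

        AVFrame *frame = avcodec_alloc_frame();  /* av_frame_alloc() in modern ffmpeg */
        AVPacket pkt;
        int got_frame = 0;
        while (av_read_frame(fmt, &pkt) >= 0) {
            if (pkt.stream_index == vstream) {
                avcodec_decode_video2(dec, frame, &got_frame, &pkt);
                if (got_frame) {
                    /* frame->data[0..2] hold the decoded YUV planes here */
                }
            }
            av_free_packet(&pkt);
        }
    }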
Well, what are the results with video? HTC Desire, Android 2.2, 600x360 H.264; decoding and rendering run in separate threads:
- Bambuser (NDK5 build for armv5te, RGBA8888): 33 ms/frame average.
- RockPlayer (NDK3 build for NEON, RGB565): 27 ms/frame average.
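The threading split mentioned above is nothing exotic. A minimal single-slot handoff between the decoder and renderer threads could look like this (pthreads; all names are hypothetical, and it assumes exactly one thread on each side):

    #include <pthread.h>
    #include <stdint.h>

    /* Hypothetical single-slot handoff between decoder and renderer threads. */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static uint8_t *pending_frame = NULL;   /* decoded pixels awaiting render */

    void submit_frame(uint8_t *pixels)      /* called from the decoder thread */
    {
        pthread_mutex_lock(&lock);
        while (pending_frame)               /* wait until renderer took the last one */
            pthread_cond_wait(&cond, &lock);
        pending_frame = pixels;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
    }

    uint8_t *take_frame(void)               /* called from the render thread */
    {
        pthread_mutex_lock(&lock);
        while (!pending_frame)
            pthread_cond_wait(&cond, &lock);
        uint8_t *f = pending_frame;
        pending_frame = NULL;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
        return f;
    }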
Not bad at first glance, but keep in mind that these numbers cover decoding only. If somebody has much better decoding times, let me know.
The hardest part of video playback is rendering. A 600x360 bitmap has to be scaled somehow before painting, because different phones have different screen sizes and we cannot expect the video to match the screen exactly.
What options do we have to rescale a frame to fit the screen? With the same phone and video source, I was able to check these cases:
- sws_scale() in Bambuser's build: 70 ms/frame. Unacceptable. (A sketch of this path follows the list.)
- Naive bitmap rescaling in Android (Bitmap.createScaledBitmap): 65 ms/frame. Unacceptable.
- OpenGL rendering in an orthographic projection on a textured quad. In this case I didn't need to scale the frame at all: I just had to prepare a 1024x512 texture (RGBA8888 in my case) containing the frame's pixels and then load it into the GPU (gl.glTexImage2D). Result: ~220 ms/frame to render. Unacceptable. I did not expect glTexImage2D to be that slow on a Snapdragon.
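For the first case, this is roughly what the measured sws_scale() path boils down to. A sketch assuming YUV420P input, an RGB565 output buffer, and the pre-AV_PIX_FMT_ constant names of that ffmpeg generation:

    #include <libavcodec/avcodec.h>
    #include <libswscale/swscale.h>

    /* Scale/convert one decoded YUV420P frame into a screen-sized RGB565 buffer. */
    static struct SwsContext *sws = NULL;

    static void scale_frame(const AVFrame *src, int src_w, int src_h,
                            uint8_t *dst_pixels, int dst_w, int dst_h)
    {
        sws = sws_getCachedContext(sws, src_w, src_h, PIX_FMT_YUV420P,
                                   dst_w, dst_h, PIX_FMT_RGB565,
                                   SWS_FAST_BILINEAR, NULL, NULL, NULL);

        uint8_t *dst_data[4]   = { dst_pixels, NULL, NULL, NULL };
        int      dst_stride[4] = { dst_w * 2, 0, 0, 0 };  /* 2 bytes per RGB565 pixel */

        sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
                  0, src_h, dst_data, dst_stride);
    }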
That's all for now. I know there is a way to use a fragment shader to convert YUV pixels on the GPU, but we would still have the same glTexImage2D and its ~200 ms just for the texture upload.
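For completeness, the upload step boils down to something like the sketch below. The one cheap thing to try is allocating the power-of-two texture once and pushing only the 600x360 region per frame with glTexSubImage2D(); whether that actually helps depends on the driver:

    #include <GLES/gl.h>

    /* Allocate the 1024x512 power-of-two texture once, with no payload. */
    void create_texture(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 512, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    }

    /* Per frame, upload only the 600x360 frame region instead of
       re-specifying the whole texture with glTexImage2D. */
    void upload_frame(GLuint tex, const void *rgba8888)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 600, 360,
                        GL_RGBA, GL_UNSIGNED_BYTE, rgba8888);
    }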
But this is not the end. ...my only friend, the end... :) It's not a hopeless situation.
If you try RockPlayer, you will definitely wonder how they do that damn frame scaling so fast. I suppose they have really good experience with the ARM architecture. Most probably they use avcodec_decode_video2() and then img_convert() (as I did in the RP version), but then they apply some tricks (depending on the ARM version) for scaling. Maybe they also have some "magic" build configuration for ffmpeg that cuts decoding time, but the Android.mk they published is not THE Android.mk they use. Dunno...
So, for now, it looks like you cannot just build some easy JNI bridge for ffmpeg and get a real media player for the Android platform. You can only do this if your video happens to need no scaling.
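To be clear about what I mean by an "easy JNI bridge": it is just a handful of exported C functions like this hypothetical one (class and method names are made up), matching a Java class com.example.Decoder that declares "native int open(String path)":

    #include <jni.h>
    #include <libavformat/avformat.h>

    /* Hypothetical JNI entry point; the whole "bridge" is just exported
       functions like this. */
    JNIEXPORT jint JNICALL
    Java_com_example_Decoder_open(JNIEnv *env, jobject thiz, jstring jpath)
    {
        const char *path = (*env)->GetStringUTFChars(env, jpath, NULL);

        AVFormatContext *fmt = NULL;
        int err = avformat_open_input(&fmt, path, NULL, NULL);

        (*env)->ReleaseStringUTFChars(env, jpath, path);
        /* a real bridge would stash fmt somewhere (e.g. a long field on the
           Java object) for later decode/seek calls */
        return err;
    }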
Any ideas? I'm counting on you ;)
I use http://writingminds.github.io/ffmpeg-android-java/ for my project. Complex commands need some workarounds, but for simple commands the wrapper works very well for me.
In my experience, YUV to RGB conversion has always been a bottleneck. Therefore, using an OpenGL shader for this proved to give a significant boost.
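A minimal sketch of such a shader (GLES2, BT.601 coefficients, assuming the three YUV planes are uploaded as separate single-channel luminance textures):

    /* Fragment shader source for YUV420P -> RGB conversion on the GPU. */
    static const char *yuv2rgb_frag =
        "precision mediump float;                           \n"
        "varying vec2 v_tex;                                \n"
        "uniform sampler2D tex_y, tex_u, tex_v;             \n"
        "void main() {                                      \n"
        "    float y = texture2D(tex_y, v_tex).r;           \n"
        "    float u = texture2D(tex_u, v_tex).r - 0.5;     \n"
        "    float v = texture2D(tex_v, v_tex).r - 0.5;     \n"
        "    gl_FragColor = vec4(y + 1.402 * v,             \n"
        "                        y - 0.344 * u - 0.714 * v, \n"
        "                        y + 1.772 * u,             \n"
        "                        1.0);                      \n"
        "}                                                  \n";

This moves the per-pixel conversion math off the CPU entirely; only the raw YUV planes are uploaded, which is also less data than an RGBA8888 frame.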
I did compile ffmpeg on Android. From that point on, playing video is purely implementation-dependent, so there is no point in measuring latencies on things which can be highly optimized where needed instead of using the standard swscale. And yes, you can build an easy JNI bridge and use it in the NDK to perform ffmpeg calls, but that would already be player code.