I'm reading this tutorial on getting pixel data from the iPhone camera.
While I have no issue running and using this code, I need to take the camera output (which comes in BGRA) and convert it to ARGB so that I can use it with an external library. How do I do this?
If you're on iOS 5.0, you can use vImage within the Accelerate framework to do a NEON-optimized color component swap. Apple's WebCore source does this with the `vImagePermuteChannels_ARGB8888()` function; a sketch of that call looks like the following:
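```c
#include <Accelerate/Accelerate.h>

// width, height, and srcBytesPerRow come from the CVPixelBufferGet*()
// calls described below; srcRows points at the pixel buffer's base
// address, and destRows is memory you allocated for the converted frame.
vImage_Buffer src;
src.width = width;
src.height = height;
src.rowBytes = srcBytesPerRow;
src.data = srcRows;

vImage_Buffer dest;
dest.width = width;
dest.height = height;
dest.rowBytes = width * 4; // assumes destRows is tightly packed
dest.data = destRows;

// Destination channel i is filled from source channel map[i]. The source
// is BGRA, so {3, 2, 1, 0} yields ARGB ({2, 1, 0, 3} would yield RGBA).
const uint8_t map[4] = { 3, 2, 1, 0 };
vImagePermuteChannels_ARGB8888(&src, &dest, map, kvImageNoFlags);
```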
where `width`, `height`, and `srcBytesPerRow` are obtained from your pixel buffer via `CVPixelBufferGetWidth()`, `CVPixelBufferGetHeight()`, and `CVPixelBufferGetBytesPerRow()`. `srcRows` would be the pointer to the base address of the bytes in the pixel buffer, and `destRows` would be memory you allocated to hold the output ARGB image.

This should be much faster than simply iterating over the bytes and swapping the color components.
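For reference, here is a minimal sketch of pulling those values out of the camera's pixel buffer. It assumes a `pixelBuffer` obtained in your capture callback via `CMSampleBufferGetImageBuffer()`, and the base address must be locked before you touch the bytes:

```c
#include <CoreVideo/CoreVideo.h>
#include <stdlib.h>

// pixelBuffer is assumed to come from CMSampleBufferGetImageBuffer()
// inside your AVCaptureVideoDataOutput sample buffer callback.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
void *srcRows = CVPixelBufferGetBaseAddress(pixelBuffer);

// Destination buffer you own, tightly packed at 4 bytes per pixel.
void *destRows = malloc(width * height * 4);

// ... run the vImagePermuteChannels_ARGB8888() call shown above ...

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
```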
Depending on the image size, an even faster solution would be to upload the frame to OpenGL ES, render a simple rectangle with this as a texture, and use glReadPixels() to pull down the RGBA values. Even better would be to use iOS 5.0's texture caches for both upload and download, where this process only takes 1-3 ms for a 720p frame on an iPhone 4. Of course, using OpenGL ES means a lot more supporting code to pull this off.
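If you do take the OpenGL ES route, the readback step itself is small. Here's a sketch, assuming an ES 2.0 context is current, the frame has already been drawn into the bound framebuffer, and `width`/`height` are as above (the texture and varying names in the comment are placeholders):

```c
#include <OpenGLES/ES2/gl.h>
#include <stdlib.h>

// Assumes the camera frame was rendered as a textured quad into the
// currently bound framebuffer.
GLubyte *rawPixels = (GLubyte *)malloc((size_t)width * (size_t)height * 4);

// GL_RGBA with GL_UNSIGNED_BYTE is the one format/type pair that
// OpenGL ES guarantees glReadPixels() will accept.
glReadPixels(0, 0, (GLsizei)width, (GLsizei)height,
             GL_RGBA, GL_UNSIGNED_BYTE, rawPixels);

// rawPixels now holds the frame, one byte per channel. The channel
// reordering itself can be done for free in the fragment shader via a
// swizzle, e.g. gl_FragColor = texture2D(inputTexture, texCoord).argb;
free(rawPixels);
```

With that swizzle in the shader, the bytes that come back from glReadPixels() are already in ARGB order, so no CPU-side swap is needed.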