Question: Does anyone know how to get the Tango's color camera image buffer using the Tango Java (Jacobi) API onFrameAvailable() callback?
Background:
I have an augmented reality application that displays video in the background on the Tango. I've successfully created the video overlay using the Java API (Jacobi), following this example. My application works fine, and the video is rendered in the background properly.
As part of the application, I'd like to store a copy of the video backbuffer when the user presses a button. Therefore, I need access to the camera's RGB data.
According to the Jacobi release notes, any class desiring access to the camera RGB data should implement the new onFrameAvailable() method in the OnTangoUpdateListener. I did this, but I don't see any handle or arguments to actually get the pixels:
Java API
@Override
public void onFrameAvailable(int cameraId) {
    //Log.w(TAG, "Frame available!");
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        tangoCameraPreview.onFrameAvailable();
    }
}
As shown, onFrameAvailable only has one argument: an integer designating the ID of the camera generating the view. Contrast this with the C-library callback, which provides access to the image buffer:
C API
TangoErrorType TangoService_connectOnFrameAvailable(
    TangoCameraId id, void* context,
    void (*onFrameAvailable)(void* context, TangoCameraId id,
                             const TangoImageBuffer* buffer));
I was expecting the Java method to have something similar to the buffer object in the C API call.
What I've Tried
I tried extending the TangoCameraPreview class and saving the image there, but I only get a black background.
public class CameraSurfaceView extends TangoCameraPreview {

    private boolean takeSnapShot = false;

    public void takeSnapShot() {
        takeSnapShot = true;
    }

    /**
     * Grabs a copy of the surface (which is rendering the Tango color camera)
     * https://stackoverflow.com/questions/14620055/how-to-take-a-screenshot-of-androids-surface-view
     */
    public void screenGrab2() {
        int width = this.getWidth();
        int height = this.getHeight();
        long fileprefix = System.currentTimeMillis();

        View v = getRootView();
        v.setDrawingCacheEnabled(true);
        // This is the important code :)
        // Without it the view will have a dimension of 0,0 and the bitmap will be null
        v.measure(MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED),
                MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED));
        v.layout(0, 0, width, height);
        v.buildDrawingCache(true);
        Bitmap image = v.getDrawingCache();

        // TODO: make separate subdirectories for each exploitation session
        String targetPath = Environment.getExternalStorageDirectory() + "/RavenEye/Photos/";
        String imageFileName = fileprefix + ".jpg";

        if (!(new File(targetPath)).exists()) {
            new File(targetPath).mkdirs();
        }

        try {
            File targetDirectory = new File(targetPath);
            File photo = new File(targetDirectory, imageFileName);
            FileOutputStream fos = new FileOutputStream(photo.getPath());
            image.compress(CompressFormat.JPEG, 100, fos);
            fos.flush();
            fos.close();
            Log.i(this.getClass().getCanonicalName(), "Grabbed an image in target path:" + targetPath);
        } catch (FileNotFoundException e) {
            Log.e(CameraPreview.class.getName(), "Exception " + e);
            e.printStackTrace();
        } catch (IOException e) {
            Log.e(CameraPreview.class.getName(), "Exception " + e);
            e.printStackTrace();
        }
    }

    /**
     * Grabs a copy of the surface (which is rendering the Tango color camera)
     */
    public void screenGrab() {
        int width = this.getWidth();
        int height = this.getHeight();
        long fileprefix = System.currentTimeMillis();

        Bitmap image = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(image);
        canvas.drawBitmap(image, 0, 0, null);

        // TODO: make separate subdirectories for each exploitation session
        String targetPath = Environment.getExternalStorageDirectory() + "/RavenEye/Photos/";
        String imageFileName = fileprefix + ".jpg";

        if (!(new File(targetPath)).exists()) {
            new File(targetPath).mkdirs();
        }

        try {
            File targetDirectory = new File(targetPath);
            File photo = new File(targetDirectory, imageFileName);
            FileOutputStream fos = new FileOutputStream(photo.getPath());
            image.compress(CompressFormat.JPEG, 100, fos);
            fos.flush();
            fos.close();
            Log.i(this.getClass().getCanonicalName(), "Grabbed an image in target path:" + targetPath);
        } catch (FileNotFoundException e) {
            Log.e(CameraPreview.class.getName(), "Exception " + e);
            e.printStackTrace();
        } catch (IOException e) {
            Log.e(CameraPreview.class.getName(), "Exception " + e);
            e.printStackTrace();
        }
    }

    @Override
    public void onFrameAvailable() {
        super.onFrameAvailable();
        if (takeSnapShot) {
            screenGrab();
            takeSnapShot = false;
        }
    }

    public CameraSurfaceView(Context context) {
        super(context);
    }
}
Where I'm Heading
I'm preparing to root the device and then use the onFrameAvailable method to cue an external root process such as one of these:
I'm hoping I can find a way to avoid the root hack.
Thank you in advance!
Answer: I haven't tried the latest release, but it was the absence of this functionality that drove me to the C API, where I could get image data. A recent post, I think on the G+ page, seemed to indicate that the Unity API now returns image data as well. For a company that keeps scolding us when we don't use Java, it certainly is an odd lag :-)
Answer: OK, I figured out a way to make it work.
Update: My working solution is here:
https://github.com/stevehenderson/GoogleTango_AR_VideoCapture
I essentially set up a "man (renderer) in the middle" attack on the rendering pipeline. This approach intercepts the setRenderer call from the TangoCameraPreview base class, which gives access to the base renderer's onDrawFrame() method and the GL context. I then add additional methods to this extended renderer that allow reading of the GL buffer.
General approach
1) Extend the TangoCameraPreview class (e.g. in my example, ReadableTangoCameraPreview). Override setRenderer(GLSurfaceView.Renderer renderer), keeping a reference to the base renderer and replacing it with your own "wrapped" GLSurfaceView.Renderer that adds methods to render the backbuffer to an image on the device.
2) Create your own GLSurfaceView.Renderer implementation (e.g. my ScreenGrabRenderer class) that implements all the GLSurfaceView.Renderer methods, passing them on to the base renderer captured in step 1. Also, add a few new methods to "cue" when you want to grab the image.
3) Implement the ScreenGrabRenderer described in step 2 above.
4) Use a callback interface (my TangoCameraScreengrabCallback) to communicate when an image has been copied.
It works pretty well, and allows one to grab the camera bits in an image without rooting the device.
Note: I haven't needed to closely synchronize my captured images with the point cloud, so I haven't checked the latency. For best results, you may need to invoke the C methods proposed by Mark.
Here's what each of my classes looks like.
ReadableTangoCameraPreview Class
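(The full class is in the repo above; what follows is a simplified sketch of the idea, with helper names trimmed down to essentials rather than copied verbatim.)

import android.content.Context;
import android.opengl.GLSurfaceView;

import com.google.atap.tangoservice.TangoCameraPreview;

// Simplified sketch of the "renderer in the middle" wrapper described above.
public class ReadableTangoCameraPreview extends TangoCameraPreview {

    private ScreenGrabRenderer screenGrabRenderer;

    public ReadableTangoCameraPreview(Context context) {
        super(context);
    }

    @Override
    public void setRenderer(GLSurfaceView.Renderer renderer) {
        // TangoCameraPreview installs its own renderer internally; intercept that
        // call, keep the base renderer, and install a wrapper that delegates to it
        // but can also read back the GL framebuffer after a frame has been drawn.
        screenGrabRenderer = new ScreenGrabRenderer(renderer);
        super.setRenderer(screenGrabRenderer);
    }

    // Cue the wrapped renderer to copy the next rendered frame (e.g. from a button press).
    public void takeSnapShot(TangoCameraScreengrabCallback callback) {
        screenGrabRenderer.grabNextFrame(callback);
    }
}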
ScreenGrabRenderer Class (wraps the default TangoCameraPreview renderer)
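(Again a simplified sketch rather than the exact repo code: the grab itself is just a glReadPixels of the backbuffer after the base renderer has drawn the frame, converted to a Bitmap. Method names such as grabNextFrame and onScreenGrabbed are illustrative.)

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;

// Forwards every Renderer call to the renderer that TangoCameraPreview installed,
// and copies the GL backbuffer when a grab has been requested.
public class ScreenGrabRenderer implements GLSurfaceView.Renderer {

    private final GLSurfaceView.Renderer baseRenderer;
    private volatile TangoCameraScreengrabCallback pendingCallback;
    private int width;
    private int height;

    public ScreenGrabRenderer(GLSurfaceView.Renderer baseRenderer) {
        this.baseRenderer = baseRenderer;
    }

    // "Cue" a grab; the copy happens on the GL thread after the next frame is drawn.
    public void grabNextFrame(TangoCameraScreengrabCallback callback) {
        pendingCallback = callback;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        baseRenderer.onSurfaceCreated(gl, config);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int w, int h) {
        width = w;
        height = h;
        baseRenderer.onSurfaceChanged(gl, w, h);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        baseRenderer.onDrawFrame(gl);
        TangoCameraScreengrabCallback callback = pendingCallback;
        if (callback != null) {
            pendingCallback = null;
            callback.onScreenGrabbed(readBackbuffer());
        }
    }

    // Copies the current framebuffer into a Bitmap. glReadPixels returns rows
    // bottom-up, so the result is flipped vertically before being returned.
    private Bitmap readBackbuffer() {
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        pixels.rewind();
        bitmap.copyPixelsFromBuffer(pixels);
        Matrix flip = new Matrix();
        flip.postScale(1f, -1f);
        return Bitmap.createBitmap(bitmap, 0, 0, width, height, flip, true);
    }
}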
TangoCameraScreengrabCallback Interface (not required unless you want to pass info back from the screen grab renderer)
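(A minimal version of the idea; any way of handing the result back works.)

import android.graphics.Bitmap;

// The renderer invokes this on the GL thread, so post to the UI thread before
// touching views, or simply write the bitmap straight to disk here.
public interface TangoCameraScreengrabCallback {
    void onScreenGrabbed(Bitmap screenGrab);
}

With these pieces in place, the activity's button handler just calls takeSnapShot() on the ReadableTangoCameraPreview, and the resulting bitmap can be compressed to a JPEG the same way as in the screenGrab() method from the question.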