I want to detect faces on camera previews. I saw this example in OpenCV samples:
@Override
protected Bitmap processFrame(VideoCapture capture) {
    capture.retrieve(mRgba, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
    capture.retrieve(mGray, Highgui.CV_CAP_ANDROID_GREY_FRAME);
    if (mCascade != null) {
        int height = mGray.rows();
        int faceSize = Math.round(height * FdActivity.minFaceSize);
        List<Rect> faces = new LinkedList<Rect>();
        mCascade.detectMultiScale(mGray, faces, 1.1, 2, 2 // TODO: objdetect.CV_HAAR_SCALE_IMAGE
                , new Size(faceSize, faceSize));
        for (Rect r : faces)
            Core.rectangle(mRgba, r.tl(), r.br(), new Scalar(0, 255, 0, 255), 3);
    }
    Bitmap bmp = Bitmap.createBitmap(mRgba.cols(), mRgba.rows(), Bitmap.Config.ARGB_8888);
    if (Utils.matToBitmap(mRgba, bmp))
        return bmp;
    bmp.recycle();
    return null;
}
I rewrote this code for my project (the input is the byte[] data delivered to onPreviewFrame() of PreviewCallback):
public Highlighting[] get(byte[] data) {
    matYuv = new Mat(480, 320, CvType.CV_8UC1);
    matYuv.put(0, 0, data);
    Imgproc.cvtColor(matYuv, matRgb, Imgproc.COLOR_YUV420sp2RGB, 4);
    Highlighting[] hl = null;
    Imgproc.cvtColor(matRgb, matGray, Imgproc.COLOR_RGB2GRAY, 0);
    if (cascade != null) {
        int faceSize = 50;
        List<Rect> faces = new LinkedList<Rect>();
        cascade.detectMultiScale(matGray, faces, 1.1, 2, 2, new Size(
                faceSize, faceSize));
        hl = new Highlighting[faces.size()];
        int i = 0;
        for (Rect r : faces) {
            hl[i] = new Highlighting((int) r.tl().x, (int) r.tl().y,
                    (int) r.br().x, (int) r.br().y, "");
            i++;
        }
        Log.i("FACES", String.valueOf(faces.size()));
    }
    return hl;
}
But I have a problem: my code doesn't work like the original - it doesn't detect any faces. Could the problem be in how I convert the byte array?
The 480 + 240 (as the height) comes from the YUV 420 format: the frame has a Y plane of 480x320, plus U and V data that together add another half of the Y plane's size (look up the YUV formats for details). Since all three planes of one frame are stored in a single buffer, you have to allocate enough space for all of them, so the Mat needs 480 + 240 = 720 rows rather than 480.
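To make that concrete, here is a minimal sketch of the corrected allocation, assuming the 320 (width) x 480 (height) NV21/YUV420sp preview size implied by the question; the class name YuvConversion and the helper toGray() are just placeholders, and the cvtColor calls mirror the ones already in your code.

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class YuvConversion {
    // Assumed preview size; replace with the values from your Camera.Parameters.
    private static final int WIDTH = 320;
    private static final int HEIGHT = 480;

    public static Mat toGray(byte[] data) {
        // Y plane: HEIGHT rows; interleaved chroma adds HEIGHT / 2 more rows -> 720 rows total.
        Mat matYuv = new Mat(HEIGHT + HEIGHT / 2, WIDTH, CvType.CV_8UC1);
        matYuv.put(0, 0, data);

        // Same conversions as in the question, now with the full buffer available.
        Mat matRgb = new Mat();
        Imgproc.cvtColor(matYuv, matRgb, Imgproc.COLOR_YUV420sp2RGB, 4);

        Mat matGray = new Mat();
        Imgproc.cvtColor(matRgb, matGray, Imgproc.COLOR_RGB2GRAY, 0);
        return matGray;
    }
}

With the extra HEIGHT / 2 rows allocated, matYuv.put(0, 0, data) can store the full width * height * 3 / 2 bytes that onPreviewFrame() delivers, and the YUV-to-RGB conversion sees the chroma data it expects instead of working on a truncated buffer.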