Currently I'm working on an app for Android phones. We want to detect features of a face. The program should be able to detect the positions of the eyes, the nose, the mouth, and the edge of the face.
Accuracy should be good, but it doesn't need to be perfect; it's okay to lose some accuracy to speed things up. All the faces will be frontal, and we will know the approximate positions of the features beforehand. We don't need live detection — the features should be extracted from saved images. The detection time only needs to be short enough not to disturb the user experience, so even 2 or 3 seconds would be okay.
Given these assumptions, it shouldn't be too hard to find a library that enables us to achieve this. But my question is: what is the best approach? What's your suggestion? This is my first time developing for Android and I don't want to run in the wrong direction. Is it a good idea to use a library, or is it better (faster / higher accuracy) to implement an existing algorithm on my own?
I googled a lot and found many interesting things. There is also face detection in the Android API, but the returned face class (http://developer.android.com/reference/android/media/FaceDetector.Face.html) only contains the position of the eyes, which is too little for our application. Then there is also OpenCV for Android, or JavaCV. Which do you think is a good one to work with? For which libraries is there good documentation and are there good tutorials?
OpenCV has a tutorial for this purpose; unfortunately it is C++ only, so you would have to port it to Android.
You can also try the FaceDetection API in Android; this is a simple example if you are detecting faces in images from a drawable or the SD card. Or there is the more recent Camera.Face API, which works with the camera image.
If you want to take an image from the camera at runtime, first read How to take picture from camera. But I would recommend you check out the official OpenCV Android samples and use them.
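For the saved-images case, a minimal sketch of the android.media.FaceDetector route might look like the following (the image path is just a placeholder; note that this API requires the bitmap to be decoded as RGB_565):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.PointF;
import android.media.FaceDetector;

public class SavedImageFaceFinder {
    public static void findFace(String path) {
        // FaceDetector only works on RGB_565 bitmaps
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inPreferredConfig = Bitmap.Config.RGB_565;
        Bitmap bmp = BitmapFactory.decodeFile(path, opts);  // path is a placeholder

        int maxFaces = 1;
        FaceDetector detector =
                new FaceDetector(bmp.getWidth(), bmp.getHeight(), maxFaces);
        FaceDetector.Face[] faces = new FaceDetector.Face[maxFaces];
        int found = detector.findFaces(bmp, faces);

        if (found > 0) {
            PointF mid = new PointF();
            faces[0].getMidPoint(mid);              // point between the eyes
            float eyeDistance = faces[0].eyesDistance();
            // Only eye position/distance is available, as noted in the question
        }
    }
}
```

As the question already notes, this API only gives you the eye midpoint and eye distance, so it may only be useful as a starting point.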
Updated:
The Mad Hatter example uses the approach of Camera with a SurfaceView. It's promisingly fast. Have a look at Mad Hatter.
The relevant code, in case the link goes down, is this:
import android.hardware.Camera;
import android.hardware.Camera.Face;

public class FaceDetectionListener implements Camera.FaceDetectionListener {
    @Override
    public final void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length > 0) {
            for (Face face : faces) {
                if (face != null) {
                    // face.rect holds the detected face's bounds in the
                    // camera preview coordinate system
                }
            }
        }
    }
}
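For completeness, the listener above still has to be attached to the camera. A rough sketch, assuming you already hold an opened android.hardware.Camera with a preview surface set:

```java
import android.hardware.Camera;

public class FaceDetectionSetup {
    // 'camera' is assumed to be an opened Camera with its preview surface set
    public static void enableFaceDetection(Camera camera,
                                           Camera.FaceDetectionListener listener) {
        camera.setFaceDetectionListener(listener);
        camera.startPreview();
        // startFaceDetection() must be called after startPreview(), and only
        // on devices that actually support it
        if (camera.getParameters().getMaxNumDetectedFaces() > 0) {
            camera.startFaceDetection();
        }
    }
}
```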
I'm working on a similar project. I did some testing with the FaceDetection API and can tell you that it is not going to help you if you want to detect the eyes, nose, mouth, and face edge. This API only allows you to detect the eyes. It is useless if you want to implement face recognition, because you need more features than just the eyes during the face detection step.
A comment on your first reply: you actually do need face detection. Finding features is part of face detection, and getting these features is the first step in a face recognition app. With OpenCV you can use Haar-like features to get these features (eyes, nose, mouth, etc.).
However, I've found it somewhat complicated to use the OpenCV functions from a separate .cpp file. The JNIEXPORT macro lets you expose native functions to Java, so you can, for example, edit an Android gallery image with OpenCV functions inside a .cpp file. OpenCV ships a sample Haar-like feature detection .cpp file which can be used for face detection (and recognition as a second step with another algorithm).
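If you'd rather avoid the .cpp/JNI route, the OpenCV4Android Java bindings expose the same Haar cascades. A rough sketch — the cascade and image paths are placeholders, and note that in OpenCV 3.x+ Highgui's image loading moved to Imgcodecs:

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.highgui.Highgui;
import org.opencv.objdetect.CascadeClassifier;

public class HaarFeatureFinder {
    public static void detect() {
        // Paths are placeholders; the cascade XML files ship with OpenCV
        CascadeClassifier faceCascade =
                new CascadeClassifier("/sdcard/haarcascade_frontalface_alt.xml");
        CascadeClassifier eyeCascade =
                new CascadeClassifier("/sdcard/haarcascade_eye.xml");

        Mat img = Highgui.imread("/sdcard/face.jpg",
                Highgui.CV_LOAD_IMAGE_GRAYSCALE);

        MatOfRect faces = new MatOfRect();
        faceCascade.detectMultiScale(img, faces);

        for (Rect face : faces.toArray()) {
            // Restrict the eye search to the detected face region for speed
            MatOfRect eyes = new MatOfRect();
            eyeCascade.detectMultiScale(img.submat(face), eyes);
        }
    }
}
```

The same pattern works with the nose and mouth cascades that OpenCV also ships.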
Are you developing on Windows or Linux? I'm using Windows and haven't managed to use the tutorial you linked to set up OpenCV with it. However, I do have a working Windows OpenCV environment in Eclipse and got all the samples from OpenCV 2.3.1 working. Maybe we can help each other out and share some information/results? Please let me know.
I have found a good solution for face emotion detection in this Microsoft API. The API returns a JSON response including an emotion breakdown. You can try it for good results.
Emotion API
Emotion Recognition: recognizes the emotions expressed by one or more people in an image, and also returns a bounding box for each face. The emotions detected are happiness, sadness, surprise, anger, fear, contempt, disgust, and neutral.
- The supported input image formats include JPEG, PNG, GIF (the first frame), and BMP. The image file size should be no larger than 4MB.
- If a user has already called the Face API, they can submit the face rectangles as an optional input. Otherwise, the Emotion API will first compute the rectangles.
- The detectable face size range is 36x36 to 4096x4096 pixels. Faces outside this range will not be detected.
- For each image, the maximum number of faces detected is 64, and the faces are ranked by face rectangle size in descending order. If no face is detected, an empty array will be returned.
- Some faces may not be detected due to technical challenges, e.g. very large face angles (head pose) or large occlusion. Frontal and near-frontal faces give the best results.
- The emotions contempt and disgust are experimental.
https://www.microsoft.com/cognitive-services/en-us/emotion-api
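A minimal sketch of calling the Emotion API over plain HTTP — the endpoint shown is the one documented at the time of writing, and the subscription key and image path are placeholders you must replace (on Android this must run off the main thread):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Scanner;

public class EmotionClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://api.projectoxford.ai/emotion/v1.0/recognize");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key",
                "YOUR_SUBSCRIPTION_KEY");  // placeholder
        conn.setDoOutput(true);

        // Send the raw image bytes as the request body (path is a placeholder)
        byte[] image = Files.readAllBytes(Paths.get("face.jpg"));
        try (OutputStream out = conn.getOutputStream()) {
            out.write(image);
        }

        // The response is a JSON array: one entry per face, each with a
        // faceRectangle and per-emotion scores
        try (InputStream in = conn.getInputStream();
             Scanner s = new Scanner(in).useDelimiter("\\A")) {
            System.out.println(s.hasNext() ? s.next() : "");
        }
    }
}
```

Note this is a cloud service, so it needs network access and may be slower than the 2-3 second budget mentioned in the question.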
It is a nice question. I guess if you get the feature points for the eyes, then you can also calculate the other points, by knowing the estimated distance of the other features from the eyes.
See this paper to know more about what I am trying to say: http://klucv2.googlecode.com/svn/trunk/docs/detection%20of%20facial%20feature%20points%20using%20anthropometric%20face%20model.pdf
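To illustrate the idea with a small sketch — the ratios below are rough, made-up numbers for demonstration, not values taken from the paper — given both eye centers, you can place other features at fixed multiples of the inter-eye distance, measured along the perpendicular to the eye line:

```java
public class FeatureEstimator {
    // Illustrative ratios only (not from the paper): distance below the eye
    // midpoint as a multiple of the inter-eye distance
    public static final double NOSE_RATIO = 0.6;
    public static final double MOUTH_RATIO = 1.1;

    // Returns {x, y} of the estimated feature, given left/right eye centers
    public static double[] estimate(double lx, double ly,
                                    double rx, double ry, double ratio) {
        double midX = (lx + rx) / 2.0;
        double midY = (ly + ry) / 2.0;
        double d = Math.hypot(rx - lx, ry - ly);      // inter-eye distance
        // Unit vector perpendicular to the eye line, pointing "down" the face
        // (image y grows downward)
        double px = -(ry - ly) / d;
        double py = (rx - lx) / d;
        return new double[] { midX + px * d * ratio, midY + py * d * ratio };
    }

    public static void main(String[] args) {
        // Upright face: eyes at (40, 50) and (80, 50), i.e. 40 px apart
        double[] nose = estimate(40, 50, 80, 50, NOSE_RATIO);
        double[] mouth = estimate(40, 50, 80, 50, MOUTH_RATIO);
        System.out.println(nose[0] + "," + nose[1]);    // 60.0,74.0
        System.out.println(mouth[0] + "," + mouth[1]);  // 60.0,94.0
    }
}
```

Because the estimate is measured along the perpendicular to the eye line, it also works for slightly rotated faces; for real use, the ratios should come from an anthropometric model such as the one in the linked paper.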
I hope this helps.
Take a look at the new Android face API (part of Google Play Services' Mobile Vision), which includes facial landmark detection. There is a tutorial here:
https://developers.google.com/vision/detect-faces-tutorial
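A rough sketch of extracting landmarks with that API (classes are from com.google.android.gms.vision; the `context` and `bitmap` arguments are assumed to come from your app):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.util.SparseArray;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;
import com.google.android.gms.vision.face.Landmark;

public class LandmarkExtractor {
    // 'context' and 'bitmap' are assumed to be supplied by the caller
    public static void extract(Context context, Bitmap bitmap) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setTrackingEnabled(false)                  // still images, not video
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                .build();

        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<Face> faces = detector.detect(frame);

        for (int i = 0; i < faces.size(); i++) {
            for (Landmark landmark : faces.valueAt(i).getLandmarks()) {
                // Landmark types include LEFT_EYE, RIGHT_EYE, NOSE_BASE,
                // LEFT_MOUTH, RIGHT_MOUTH, BOTTOM_MOUTH, ...
                PointF position = landmark.getPosition();
            }
        }
        detector.release();
    }
}
```

This covers the eyes, nose, and mouth asked about in the question; the face bounding box from Face#getPosition()/getWidth()/getHeight() approximates the face edge.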