How to adaptively add and use face images collected during authentication

Published 2020-02-10 10:17

Question:

My current project is to build a face authentication system. The constraint I have is that, during enrollment, the user provides only a single image for training. However, I can add and use images captured from the user during authentication.

The reason I want to add more images to the training set is that the user's environment is not controlled: different lighting conditions, different distances from the camera, and cameras of different resolutions (megapixel counts). The only relief is that the pose is almost always frontal.

I think this problem is similar to what the widely available face-tagging apps solve. Can anyone suggest a method to use the available images adaptively and intelligently?

--Thanks

Answer 1:

To make your classifier robust, you need to use condition-independent features. For example, you cannot use face color, since it depends on lighting conditions and on the state of the person. However, you can use the distance between the eyes, since it is largely independent of such changes.
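For illustration, here is a minimal sketch of extracting such geometric features, assuming dlib's 68-point landmark predictor (the shape_predictor_68_face_landmarks.dat model file must be downloaded separately; dlib and the specific ratios are my choices, not something this answer prescribes). Each distance is divided by the inter-eye distance, so the features remain independent of image resolution and distance from the camera:

```python
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def geometric_features(gray):
    """Return scale-normalized landmark distance ratios for the first
    face found in an 8-bit grayscale image, or None if no face is found."""
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=float)

    eye_l = pts[36:42].mean(axis=0)   # image-left eye (landmarks 36-41)
    eye_r = pts[42:48].mean(axis=0)   # image-right eye (landmarks 42-47)
    nose, chin = pts[30], pts[8]      # nose tip, chin

    # Normalizing by the inter-eye distance makes the ratios independent
    # of camera distance and resolution (the conditions the question
    # worries about); lighting barely affects landmark geometry.
    eye_dist = np.linalg.norm(eye_l - eye_r)
    return np.array([
        np.linalg.norm(nose - chin) / eye_dist,                 # face length
        np.linalg.norm((eye_l + eye_r) / 2 - nose) / eye_dist,  # eyes-to-nose
        np.linalg.norm(pts[48] - pts[54]) / eye_dist,           # mouth width
    ])
```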

I would suggest building a model of such independent features and retraining the classifier each time a person starts an authentication session. The best model I can think of is the Active Appearance Model (several implementations exist).
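Short of a full AAM, the "retrain on every session" idea can be prototyped as a per-user template gallery that grows whenever an authentication succeeds. The sketch below is my own illustration of that strategy; the Euclidean metric and the acceptance threshold are placeholder assumptions to be tuned on real data:

```python
import numpy as np

class AdaptiveGallery:
    """Per-user model: starts from the single enrollment vector and
    absorbs the feature vector of every successful authentication."""

    def __init__(self, enrollment_vector, threshold=0.25):
        self.templates = [np.asarray(enrollment_vector, dtype=float)]
        self.threshold = threshold  # hypothetical acceptance threshold

    def authenticate(self, feature_vector):
        probe = np.asarray(feature_vector, dtype=float)
        # Accept if the probe is close enough to any stored template.
        dist = min(np.linalg.norm(probe - t) for t in self.templates)
        accepted = dist < self.threshold
        if accepted:
            # "Retraining" here is simply growing the template set, so
            # each session adapts the model to new lighting and camera
            # conditions without touching other users' models.
            self.templates.append(probe)
        return accepted
```

One caveat: only add probes on confident matches, since a single falsely accepted impostor image would otherwise poison the gallery for all later sessions.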



Answer 2:

I would recommend that you take a close look at SOMs (self-organizing maps). I think they address all the problems and constraints you have mentioned.

You can employ a SOM for the single-image-per-person problem, and with the multiple-SOM-face strategy you can adapt it to cases where additional training images become available. What's pretty neat about the whole concept is that when a new face is encountered, only the new face, rather than the whole original database, needs to be re-learned.
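As a concrete illustration of the SOM-face representation from the paper linked below, here is a sketch using the third-party MiniSom library (pip install minisom); the library choice, block size, grid size, and iteration count are my assumptions, not details from this answer:

```python
import numpy as np
from minisom import MiniSom

BLOCK = 4  # 4x4-pixel sub-blocks (illustrative choice)

def to_blocks(face):
    """Split a grayscale face image into flattened BLOCK x BLOCK patches."""
    h, w = face.shape
    return np.array([face[r:r + BLOCK, c:c + BLOCK].ravel()
                     for r in range(0, h - BLOCK + 1, BLOCK)
                     for c in range(0, w - BLOCK + 1, BLOCK)], dtype=float)

def som_face(face, grid=10, iters=2000):
    """Train a per-person SOM on one image and return the SOM-face code:
    the grid coordinates of each block's best-matching unit (BMU)."""
    blocks = to_blocks(face)
    som = MiniSom(grid, grid, blocks.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(blocks, iters)
    return som, np.array([som.winner(b) for b in blocks])
```

Because each person gets their own SOM, enrolling or updating one user means retraining only that user's map; at verification time, the probe's blocks are quantized by the claimed user's SOM and the resulting BMU code is compared against the stored one.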

A few links which you might find helpful along the way:

http://en.wikipedia.org/wiki/Self-organizing_map (wiki)

http://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/tnn05.pdf (a research paper demonstrating the above-mentioned technique)

Good Luck