I'm trying to compare fingerprints. Here's what I have so far.
Get the raw image from a DigitalPersona sensor. (image)
Binarize it. (image)
Skeletonize it. I used Hall's algorithm because it is the only one I got working more or less properly; you can still see some flaws. (image)
Strip the convex hull (inverted Jarvis algorithm) and collect all the ridge endings as an array of (i, j) coordinates (skeleton points with exactly one neighbour). I also have a script that extracts the bifurcations (points with three neighbours). (image: ridge endings only)
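To make the detection step concrete, here is a minimal sketch of the neighbour-counting idea (simplified: `skel` is a 0/1 skeleton as nested lists, and the function name is just for illustration, not my actual script):

```python
def minutiae(skel):
    """Return (ridge_endings, bifurcations) as lists of (i, j)."""
    endings, bifurcations = [], []
    h, w = len(skel), len(skel[0])
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if not skel[i][j]:
                continue
            # count the 8-connected neighbours of this ridge pixel
            n = sum(skel[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            if n == 1:
                endings.append((i, j))     # ridge ending
            elif n == 3:
                bifurcations.append((i, j))  # bifurcation
    return endings, bifurcations
```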
What I need: given that I can extract these so-called minutiae (ridge endings and bifurcations, the latter not shown in the image), each with a direction (e.g. the ridge direction at a ridge ending), how do I match two sets of minutiae and compute a similarity score?
Tools and languages used: DigitalPersona U.are.U 4500 scanner, the libfprint library on Linux, C for image acquisition, Python 3 with PIL for image processing.
My thoughts so far:
- I can neglect finger rotation for now, but I probably need to normalize the sets to compensate for the (x, y) shift between two images of the same finger.
- Maybe I could translate both sets so that they share the same barycenter, but I don't know how well that would work.
- I could build a matrix of intensities for each set (e.g. 5 at a minutia, 4 immediately around it, and so on), giving something like a 2D surface with a spike at each minutia. If A minus B gives me a pretty flat surface (mathematically, a low sum of squares over the values of the resulting matrix), I will know the images look alike.
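The barycenter idea from the list above would look roughly like this (a sketch only; the function name is made up, and points are plain (i, j) tuples):

```python
def centred(points):
    """Translate a list of (i, j) points so their barycenter is (0, 0)."""
    n = len(points)
    ci = sum(i for i, _ in points) / n
    cj = sum(j for _, j in points) / n
    return [(i - ci, j - cj) for i, j in points]
```

After this, two scans of the same finger that differ only by a translation should land on (nearly) the same coordinates.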
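And the intensity-matrix comparison could be sketched like this (the 5/4 blob weights follow my description above; grid size, names, and the 3x3 blob shape are assumptions for illustration):

```python
def intensity_map(points, h, w):
    """Splat each minutia as a small blob onto an h-by-w grid."""
    grid = [[0.0] * w for _ in range(h)]
    for i, j in points:
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                y, x = i + di, j + dj
                if 0 <= y < h and 0 <= x < w:
                    # 5 at the minutia itself, 4 in the ring around it
                    grid[y][x] += 5.0 if (di, dj) == (0, 0) else 4.0
    return grid

def dissimilarity(pts_a, pts_b, h, w):
    """Sum of squared differences between the two intensity maps."""
    a, b = intensity_map(pts_a, h, w), intensity_map(pts_b, h, w)
    return sum((a[y][x] - b[y][x]) ** 2
               for y in range(h) for x in range(w))
```

Identical sets give a score of 0; the score grows as the two sets drift apart.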
How do you think I could compare two such sets?
I tried to show that I really did put some effort into this and am not trying to get you to do the work for me ;)
If you have any questions I'll be happy to answer them. Thanks for your attention!