
I processed fingerprints. How do I compare the results?

Published 2019-07-21 15:07

Question:

I'm trying to compare fingerprints. Here's what I have so far.

  1. Get the raw image from a DigitalPersona sensor. [image]

  2. Binarize it. [image]

  3. Skeletonize it. I used Hall's algorithm because it is the only one I got to work more or less properly; you can still see some flaws. [image]

  4. Strip the convex hull (inverted Jarvis algorithm) and get all the ridge endings as an array of (i, j) coordinates (points with exactly one neighbour). I also have a script to get the bifurcations (points with three neighbours). [image: ridge endings only]
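The neighbour-counting rule from step 4 can be sketched directly on the binary skeleton. This is a minimal sketch, assuming the skeleton is a 0/1 NumPy array (1 = ridge pixel); note that plain neighbour counting can misclassify pixels sitting right next to a junction, so results near crossings need care:

```python
import numpy as np

def find_minutiae(skel):
    """Classify skeleton pixels by their number of 8-neighbours.

    skel: 2-D numpy array of 0/1 values (1 = ridge pixel).
    Returns (endings, bifurcations) as lists of (i, j) coordinates.
    """
    endings, bifurcations = [], []
    h, w = skel.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if skel[i, j] != 1:
                continue
            # ridge pixels in the 3x3 window, minus the centre pixel itself
            n = skel[i - 1:i + 2, j - 1:j + 2].sum() - 1
            if n == 1:
                endings.append((i, j))       # ridge ending
            elif n == 3:
                bifurcations.append((i, j))  # bifurcation
    return endings, bifurcations
```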

What I need: knowing that I can extract these so-called minutiae (ridge endings and bifurcations, the latter not shown in the image), each with a direction (e.g. the ridge direction at a ridge ending), how do I match two sets of minutiae and compute a similarity score?

Tools and languages used: DigitalPersona U.are.U 4500 scanner, the fprint library on Linux, C for image acquisition, Python 3 with PIL for image processing.

My thoughts so far:

  • I can neglect finger rotation for now, but I probably need to normalize the sets to compensate for the (x, y) shift between two images of the same finger.
  • Maybe I could translate the sets so that they share the same barycenter, but I don't know how well that will work.
  • I could build a matrix of intensities for each set (e.g. 5 on a point, 4 around the point, etc.) to get a spiky surface, and if A minus B gives a fairly flat result (mathematically, a low sum of squares over the difference matrix), I will know the images look alike.
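The barycenter and intensity-matrix ideas above could be sketched like this (hypothetical helper names; the Gaussian spread `sigma` is a made-up tuning parameter standing in for the "5 on a point, 4 around it" weighting):

```python
import numpy as np

def center_on_barycenter(points):
    """Translate a set of (i, j) minutiae so its barycenter sits at the origin."""
    pts = np.asarray(points, dtype=float)
    return pts - pts.mean(axis=0)

def intensity_grid(points, shape, sigma=3.0):
    """'Matrix of intensities': one blurred spike per minutia, so nearby
    points still overlap after small shifts between the two images."""
    grid = np.zeros(shape)
    ii, jj = np.indices(shape)
    for (i, j) in points:
        grid += np.exp(-((ii - i) ** 2 + (jj - j) ** 2) / (2 * sigma ** 2))
    return grid

def dissimilarity(grid_a, grid_b):
    """Low sum of squared differences => the two sets look alike."""
    return ((grid_a - grid_b) ** 2).sum()
```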

How do you think I could compare two such sets?

I tried to show that I really did put some effort into this and am not trying to get you to do the work for me ;)

If you have any questions I'll be happy to answer them. Thanks for your attention!

Answer 1:

The critical thing to realize is that your first steps are anything but exact. You will have errors in finding ridge endings and bifurcations.

However, not all errors are equally likely. A quick glance told me that a ridge ending too close to another ridge might be misclassified as a bifurcation, which is a qualitative error. A second expected error is misplacing the ridge ending along the ridge; the error orthogonal to the ridge, however, is likely small. You might also miss a few minutiae entirely.

So, ignoring the orientation and offset problems, each minutia in one image should be paired with a minutia in the other image based on approximate location. To do so, define a likelihood function on a pair of minutiae, based on their distance, local ridge direction, and type (ending/bifurcation). Iteratively form pairs between the two images, matching the most likely points first, until the remaining candidates become too unlikely.
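A minimal sketch of that greedy pairing, with a hypothetical Gaussian likelihood over distance and direction (the sigmas, the type penalty, and the stopping threshold are all made-up tuning parameters; each minutia is assumed to be an (x, y, direction_in_radians, kind) tuple):

```python
import math

def pair_likelihood(m1, m2, sigma_d=10.0, sigma_theta=0.5):
    """Hypothetical likelihood that m1 and m2 are the same minutia."""
    dx, dy = m1[0] - m2[0], m1[1] - m2[1]
    dist2 = dx * dx + dy * dy
    # smallest signed angle difference
    dtheta = (m1[2] - m2[2] + math.pi) % (2 * math.pi) - math.pi
    like = (math.exp(-dist2 / (2 * sigma_d ** 2))
            * math.exp(-dtheta ** 2 / (2 * sigma_theta ** 2)))
    if m1[3] != m2[3]:     # ending vs bifurcation: penalize, don't forbid
        like *= 0.5
    return like

def greedy_match(set_a, set_b, threshold=0.05):
    """Iteratively pair the most likely minutiae until pairs get too unlikely."""
    pairs = []
    a_left, b_left = list(set_a), list(set_b)
    while a_left and b_left:
        best = max(((pair_likelihood(a, b), a, b)
                    for a in a_left for b in b_left), key=lambda t: t[0])
        if best[0] < threshold:
            break
        pairs.append(best)
        a_left.remove(best[1])
        b_left.remove(best[2])
    return pairs
```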

The first few pairs will give you a good estimate of the typical distances; this can also be used to refine the image offset estimate. Local ridge direction estimates near minutiae are probably not reliable enough to estimate the overall orientation.

When all likely pairs are matched up, their associated likelihoods can be combined to come up with a combined likelihood that the two sets of minutiae match.
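One possible way to combine the per-pair likelihoods into a single score (a sketch, not the answer's specific formula: a geometric mean of the pair likelihoods, scaled by how many minutiae found a partner at all):

```python
import math

def match_score(pair_likelihoods, n_a, n_b):
    """Combine per-pair likelihoods into one similarity score in [0, 1].

    pair_likelihoods: likelihood of each matched pair.
    n_a, n_b: total minutiae counts in the two images.
    """
    if not pair_likelihoods:
        return 0.0
    # geometric mean of the pair likelihoods
    log_mean = sum(math.log(p) for p in pair_likelihoods) / len(pair_likelihoods)
    # fraction of the smaller set that found a pair
    coverage = len(pair_likelihoods) / min(n_a, n_b)
    return math.exp(log_mean) * coverage
```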

On a side note, the whole notion of minutiae appears to me to be a legacy of older manual fingerprint-matching techniques. Personally, the first step I'd take would be to compute the gradients of the greyscale image and look for ridges and valleys in the gradient field. And instead of looking for minutiae, I'd look for points in the image where the direction of the gradient (modulo 180 degrees) varies quickly.
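A rough NumPy sketch of that idea (my reading of the suggestion, not a tested pipeline): double the gradient angle so directions modulo 180 degrees compare cleanly, then measure how much the direction field changes between neighbouring pixels:

```python
import numpy as np

def direction_change(gray):
    """Map of how quickly the gradient direction (mod 180 deg) varies.

    gray: 2-D greyscale image as a float array.
    Returns an array that is near 0 where the direction field is smooth
    and grows (up to 2) where the direction flips between neighbours.
    """
    gy, gx = np.gradient(np.asarray(gray, dtype=float))
    # doubled angle: a direction and its 180-degree opposite map to
    # the same unit vector, so mod-180 directions compare directly
    ang2 = 2.0 * np.arctan2(gy, gx)
    vx, vy = np.cos(ang2), np.sin(ang2)
    # 1 - dot product of direction vectors of horizontal neighbours
    return 1.0 - (vx[:, 1:] * vx[:, :-1] + vy[:, 1:] * vy[:, :-1])
```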