How to efficiently find similar strings in a unique names array?

Posted 2020-07-13 07:49

Background: I have a list of 13,000 records of human names. Some of them are duplicates, and I want to find the similar ones so I can deduplicate them manually.

For an array like:

["jeff","Jeff","mandy","king","queen"] 

What would be an efficient way to get:

[["jeff","Jeff"]]

Explanation: ["jeff","Jeff"], since their Levenshtein distance is 1 (the threshold could be variable, e.g. 3).

/* 
Working but a slow solution
*/
function extractSimilarNames(uniqueNames) {
  let similarNamesGroup = [];

  for (let i = 0; i < uniqueNames.length; i++) {
    //compare with the rest of the array
    const currentName = uniqueNames[i];

    let suspiciousNames = [];

    for (let j = i + 1; j < uniqueNames.length; j++) {
      const matchingName = uniqueNames[j];
      if (isInLevenshteinRange(currentName, matchingName, 1)) {
        suspiciousNames.push(matchingName);
        removeElementFromArray(uniqueNames, matchingName);
        removeElementFromArray(uniqueNames, currentName);
        i--;
        j--;
      }
    }
    if (suspiciousNames.length > 0) {
      suspiciousNames.push(currentName);
      similarNamesGroup.push(suspiciousNames); // without this push the function always returned []
    }
  }
  return similarNamesGroup;
}

I want to find similarity via the Levenshtein distance, not only lower/uppercase similarity.

I have already found one of the fastest Levenshtein implementations, but it still takes 35 minutes to get the result for the 13,000-item list.
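
The two helpers referenced above (isInLevenshteinRange and removeElementFromArray) are not shown in the question; here is a minimal sketch of what they might look like, assuming a standard single-row dynamic-programming Levenshtein check and an in-place removal:

// True if the Levenshtein distance between a and b is <= max
function isInLevenshteinRange(a, b, max) {
  if (Math.abs(a.length - b.length) > max) return false; // cheap early exit
  const row = Array.from({ length: b.length + 1 }, (_, j) => j);
  for (let i = 1; i <= a.length; i++) {
    let prev = row[0]; // holds the value of row[i-1][j-1]
    row[0] = i;
    for (let j = 1; j <= b.length; j++) {
      const tmp = row[j];
      row[j] = Math.min(
        row[j] + 1,                            // deletion
        row[j - 1] + 1,                        // insertion
        prev + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
      prev = tmp;
    }
  }
  return row[b.length] <= max;
}

// Removes the first occurrence of el from arr, in place
function removeElementFromArray(arr, el) {
  const idx = arr.indexOf(el);
  if (idx !== -1) arr.splice(idx, 1);
}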

5 Answers

Explosion°爆炸 · 2020-07-13 08:09

Approaches to remove similar names:

  1. Use a phonetic representation of the words, e.g. cmudict, which works with Python's nltk. You can find which names are close to each other phonetically.
  2. Try different forms of stemming or simplification. I would try aggressive stemmers like the Porter stemmer.
  3. Levenshtein trie. You can create a trie data structure that helps find the word with minimum distance to the searched item; this is used for full-text search in some search engines. As far as I know it's already implemented in Java. In your case you need to search for one item and then add it to the structure at every step, making sure that the item you search for is not in the structure yet.

  4. Manual naive approach (sketched below). Find all suitable representations of every word/name, put all representations into a map, and find representations that have more than one word. If you have around 15 different representations per word, you only need about 200K iterations (13K names × ~15 representations) to generate this map, which is much faster than comparing each word to every other word, requiring around 80M comparisons with 13K names.
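
A minimal sketch of approach 4 (groupByRepresentation is a hypothetical name; the "representations" chosen here are simply the lowercased name plus all of its one-character deletions, and phonetic codes or stems could be added the same way):

function groupByRepresentation(names) {
  const byRep = new Map(); // representation -> Set of original names
  for (const name of names) {
    const lower = name.toLowerCase();
    const reps = [lower];
    for (let i = 0; i < lower.length; i++)
      reps.push(lower.slice(0, i) + lower.slice(i + 1));
    for (const rep of reps) {
      if (!byRep.has(rep)) byRep.set(rep, new Set());
      byRep.get(rep).add(name);
    }
  }
  // Representations shared by more than one name mark the suspicious groups
  return [...byRep.values()].filter((s) => s.size > 1).map((s) => [...s]);
}

console.log(groupByRepresentation(["jeff", "Jeff", "mandy", "king", "queen"]));
// -> the group ["jeff", "Jeff"], repeated once per shared representation

Each name contributes length+1 map entries, so building the map is roughly n·m work instead of n² comparisons.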

-- Edit --

If there were a choice, I would use something like Python or Java instead of JS for this. That's only my opinion, based on: I don't know all the requirements, it's common to use Java/Python for natural language processing, and the task looks more like heavy data processing than front-end work.

冷血范 · 2020-07-13 08:14

If we remove one character from "Jeff" at each possible position, we end up with "eff", "Jff", "Jef" and "Jef". If we do the same with "jeff", we get "eff", "jff", "jef" and "jef". Now if you look closely, you'll see that both strings produce "eff" as a result. That means we can create a Map from those combinations to their original versions, then for each string generate all combinations and look them up in the Map. Through the lookup you'll get results that are similar, e.g. "abc" and "cab", but they do not necessarily have a Levenshtein distance of 1, so we have to check that afterwards.

Now why is that better?

Well, iterating all names is O(n) (n being the number of words), creating all combinations is O(m) (m being the average number of characters in a word), and looking up in a Map is O(1). Therefore this runs in O(n * m), whereas your algorithm is O(n * n * m), which means for 10,000 words mine is 10,000 times faster (or my calculation is wrong :))

  // A "OneToMany" Map
  class MultiMap extends Map {
    set(k, v) {
      if(super.has(k)) {
        super.get(k).push(v);
       } else super.set(k, [v]);
     }
     get(k) {
        return super.get(k) || [];
     }
  }

  function* oneShorter(word) {
    for(let pos = 0; pos < word.length; pos++)
       yield word.substr(0, pos) + word.substr(pos + 1);
  }

  function findDuplicates(names) {
    const combos = new MultiMap();
    const duplicates = [];

    const check = (name, combo) => {
      const dupes = combos.get(combo);
      for(const dupe of dupes) {
         if((isInLevenshteinRange(name, combo, 1))
         duplicates.push([name, dupe]);
      }
      combos.set(combo, name);
    };

    for(const name of names) {
      check(name, name);

      for(const combo of oneShorter(name)) {
         check(name, combo);
      }
    }

     return duplicates;
 }
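
A quick usage sketch with the question's sample data (this assumes some isInLevenshteinRange helper is in scope, e.g. the one sketched under the question):

console.log(findDuplicates(["jeff", "Jeff", "mandy", "king", "queen"]));
// -> [["Jeff", "jeff"]], found via the shared one-deletion form "eff"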
chillily · 2020-07-13 08:16

Your problem is not the speed of the Levenshtein distance implementation. Your problem is that you have to compare each word with every other word. This means you make on the order of 13,000² (about 169 million) comparisons, and each time calculate the Levenshtein distance.

So my approach would be to try to reduce the number of comparisons.

Here are some ideas:

  • words are only similar if their lengths differ by less than 20% (just my estimate)
    → we can group by length and only compare words with other words of length ±20%

  • words are only similar if they share a lot of letters
    → we can create a list of e.g. 3-grams (all lower case) that refer to the words they are part of
    → only compare (e.g. with Levenshtein distance) a word with other words that have several 3-grams in common with it (a sketch of this idea follows below)
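
A minimal sketch of the 3-gram idea (trigramCandidates is a hypothetical helper name; the pairs it returns would then be checked with the Levenshtein distance, and the length filter from the first bullet could be applied on top):

// Build an inverted index: 3-gram -> indices of the words containing it
function trigramCandidates(words, minShared = 2) {
  const index = new Map();
  words.forEach((w, i) => {
    const s = w.toLowerCase();
    for (let k = 0; k + 3 <= s.length; k++) {
      const g = s.slice(k, k + 3);
      if (!index.has(g)) index.set(g, new Set());
      index.get(g).add(i);
    }
  });

  // Count shared 3-grams per pair (i < j), keep pairs above the threshold
  const shared = new Map(); // "i,j" -> number of shared 3-grams
  for (const ids of index.values()) {
    const arr = [...ids];
    for (let a = 0; a < arr.length; a++)
      for (let b = a + 1; b < arr.length; b++) {
        const key = arr[a] + "," + arr[b];
        shared.set(key, (shared.get(key) || 0) + 1);
      }
  }
  return [...shared.entries()]
    .filter(([, n]) => n >= minShared)
    .map(([key]) => key.split(",").map(Number));
}

// Only these candidate pairs still need a real Levenshtein check:
console.log(trigramCandidates(["jeffrey", "jeffery", "mandy"])); // [[0, 1]]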

趁早两清 · 2020-07-13 08:21

Since your working code uses a Levenshtein distance of 1 only, I will assume no other distances need to be found.

I will propose a solution similar to the one Jonas Wilms posted, with these differences:

  • No need to call an isLevenshtein-style function at all
  • Produces only unique pairs
  • Each pair is lexically ordered

// Sample data with lots of similar names
const names = ["Adela","Adelaida","Adelaide","Adele","Adelia","AdeLina","Adeline",
               "Adell","AdellA","Adelle","Ardelia","Ardell","Ardella","Ardelle",
               "Ardis","Madeline","Odelia","ODELL","Odessa","Odette"];

const map = {};
const pairs = new Set;
for (const name of names) {
    for (const i in name+"_") { // Additional iteration to NOT delete a character
        const key = (name.slice(0, i) + name.slice(+i + 1, name.length)).toLowerCase();
        // Group words together where the removal from the same index leads to the same key
        if (!map[key]) map[key] = Array.from({length: key.length+1}, () => new Set);
        // If NO character was removed, put the word in EACH group
        for (const set of (+i < name.length ? [map[key][i]] : map[key])) {
            if (set.has(name)) continue;
            for (let similar of set) pairs.add(JSON.stringify([similar, name].sort()));
            set.add(name);
        }
    }
}
const result = [...pairs].sort().map(JSON.parse); // sort is optional
console.log(result);

I tested this on a set of 13,000 names, including at least 4,000 different names, and it produced 8,000 pairs in about 0.3 seconds.

贪生不怕死 · 2020-07-13 08:22

I have a completely different way of approaching this problem, which I believe is pretty fast (though debatable as to how correct it is). My approach is to map the strings to numeric values, sort those values once, and then run through that list once, comparing neighboring values to each other. Like this:

// Test strings (provided by OP) with some additions
var strs = ["Jeff","mandy","jeff","king","queen","joff", "Queen", "jff", "tim", "Timmo", "Tom", "Rob", "Bob"] 

// Function to convert a string into a numeric representation
// to aid with string similarity comparison
function atoi(str, maxLen){
  var i = 0;
  for( var j = 0; j < maxLen; j++ ){
    if( str[j] != null ){
      i += str.toLowerCase().charCodeAt(j)*Math.pow(64,maxLen-j) - 'a'.charCodeAt(0)*Math.pow(64,maxLen-j)
    } else {
      // Normalize the string with a pad char
      // up to the maxLen (update the value, but don't actually
      // update the string...)
      i += '-'.charCodeAt(0)*Math.pow(64,maxLen-j) - 'a'.charCodeAt(0)*Math.pow(64,maxLen-j)
    }
  }
  valMap.push({
     str,
     i 
  })
  return i;
}

Number.prototype.inRange = function(min, max){ return(this >= min && this <= max) }

var valMap = []; // Array of string-value pairs

var maxLen = strs.map((s) => s.length).sort((a, b) => a - b).pop() // max length of all strings (numeric sort; the default sort is lexicographic)
console.log('maxLen', maxLen)
strs.forEach((s) => atoi(s, maxLen)) // Map strings to values

var similars = [];
var subArr = []
var margin = 0.05;
valMap.sort((a,b) => a.i > b.i ? 1 : -1) // Sort the map...
valMap.forEach((entry, idx) => {  
  if( idx > 0 ){
      var closeness = Math.abs(entry.i / valMap[idx-1].i);
      if( closeness.inRange( 1 - margin, 1 + margin ) ){
        if( subArr.length == 0 ) subArr.push(valMap[idx-1].str)
        subArr.push(entry.str)
        if( idx == valMap.length - 1){
          similars.push(subArr)
        }
      } else {
        if( subArr.length > 0 ) similars.push(subArr)
        subArr = []
      }
  }
})
console.log('similars', similars)

I'm treating each string as if it were a base-64 number, where each "digit" can take on alphanumeric values, with 'a' representing 0. Then I sort the list of values once. Then, whenever a value is close to the previous one (i.e., the ratio of the two is near 1), I deduce that I have similar strings.

The other thing I do is find the maximum string length and normalize all strings to that length when calculating the base-64 value.
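
For intuition, here is a small worked example of the encoding (code is a hypothetical standalone helper that mirrors the value computation in atoi, with maxLen fixed at 4 and the '-' pad char):

const code = (s) => [...s.toLowerCase().padEnd(4, "-")].slice(0, 4)
  .reduce((acc, ch, j) => acc + (ch.charCodeAt(0) - 97) * Math.pow(64, 4 - j), 0);

console.log(code("jeff")); // 152064320
console.log(code("Jeff")); // 152064320, case differences vanish entirely
console.log(code("jeft")); // 152065216, ratio ~1.000006, well within the 5% margin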

--- EDIT: even more stress testing ---

Here is some additional testing, which pulls in a large list of names and processes it rather quickly (~50 ms on 20k+ names, with lots of false positives). Regardless, this snippet should make it easier to troubleshoot:

var valMap = []; // Array of string-value pairs

/* Extensions */
Number.prototype.inRange = function(min, max){ return(this >= min && this <= max) }

/* Methods */
// Function to convert a string into a numeric representation
// to aid with string similarity comparison
function atoi(str, maxLen){
  var i = 0;
  for( var j = 0; j < maxLen; j++ ){
    if( str[j] != null ){
      i += str.toLowerCase().charCodeAt(j)*Math.pow(64,maxLen-j) - 'a'.charCodeAt(0)*Math.pow(64,maxLen-j)
    } else {
      // Normalize the string with a pad char
      // up to the maxLen (update the value, but don't actually
      // update the string...)
      i += '-'.charCodeAt(0)*Math.pow(64,maxLen-j) - 'a'.charCodeAt(0)*Math.pow(64,maxLen-j)
    }
  }
  valMap.push({ str, i })
  return i;
}

function findSimilars(strs){
  var maxLen = strs.map((s) => s.length).sort((a, b) => a - b).pop() // max length of all strings (numeric sort)
  console.log('maxLen', maxLen)
  strs.forEach((s) => atoi(s, maxLen)) // Map strings to values

  var similars = [];
  var subArr = []
  var margin = 0.05;
  valMap.sort((a,b) => a.i > b.i ? 1 : -1) // Sort the map...
  valMap.forEach((entry, idx) => {  
    if( idx > 0 ){
        var closeness = Math.abs(entry.i / valMap[idx-1].i);
        if( closeness.inRange( 1 - margin, 1 + margin ) ){
          if( subArr.length == 0 ) subArr.push(valMap[idx-1].str)
          subArr.push(entry.str)
          if( idx == valMap.length - 1){
            similars.push(subArr)
          }
        } else {
          if( subArr.length > 0 ) similars.push(subArr)
          subArr = []
        }
    }
  })
  console.log('similars', similars)
}

// Stress test with 20k+ names 
$.get('https://raw.githubusercontent.com/dominictarr/random-name/master/names.json')
.then((resp) => {
  var strs = JSON.parse(resp);
  console.time('processing')
  findSimilars(strs)
  console.timeEnd('processing')
})
.catch((err) => { console.error('Err retrieving JSON'); })
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>

(For some reason, when I run this in JSFiddle, it runs in ~50 ms, but in the Stack Overflow snippet it's closer to 1000 ms.)
