How to split text without spaces into a list of words

Posted 2019-01-02 16:50

Input: "tableapplechairtablecupboard..." many words

What would be an efficient algorithm to split such text into a list of words and get:

Output: ["table", "apple", "chair", "table", ["cupboard", ["cup", "board"]], ...]

The first thing that comes to mind is to go through all possible words (starting with the first letter), find the longest word possible, and continue from position=word_position+len(word) (a short sketch of this idea follows the P.S. below).

P.S.
We have a list of all possible words.
Word "cupboard" can be "cup" and "board", select longest.
Language: python, but main thing is the algorithm itself.
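
For reference, here is a minimal sketch of the greedy longest-match idea described above (assuming words is a set of valid words). Note that greedy matching can get stuck, which is why the answers below use backtracking, dynamic programming, or a trie instead:

def greedy_longest_match(text, words):
    "Greedy longest-match split; returns None if it gets stuck."
    max_len = max(map(len, words))
    result, pos = [], 0
    while pos < len(text):
        # try the longest candidate starting at pos first
        for end in range(min(len(text), pos + max_len), pos, -1):
            if text[pos:end] in words:
                result.append(text[pos:end])
                pos = end
                break
        else:
            return None  # no dictionary word starts at this position
    return result

print(greedy_longest_match('tableapplechairtablecupboard',
                           {'table', 'apple', 'chair', 'cup', 'board', 'cupboard'}))
# -> ['table', 'apple', 'chair', 'table', 'cupboard']

On the second example used further down ('tableprechaun' with {'tab', 'table', 'leprechaun'}), this greedy version picks 'table' first, gets stuck, and returns None, which is why backtracking matters.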

13 Answers
深知你不懂我心 · #2 · 2019-01-02 17:03

Here is a solution using recursive search:

def find_words(instring, prefix = '', words = None):
    if not instring:
        return []
    if words is None:
        words = set()
        with open('/usr/share/dict/words') as f:
            for line in f:
                words.add(line.strip())
    if (not prefix) and (instring in words):
        return [instring]
    prefix, suffix = prefix + instring[0], instring[1:]
    solutions = []
    # Case 1: prefix in solution
    if prefix in words:
        try:
            solutions.append([prefix] + find_words(suffix, '', words))
        except ValueError:
            pass
    # Case 2: prefix not in solution
    try:
        solutions.append(find_words(suffix, prefix, words))
    except ValueError:
        pass
    if solutions:
        return sorted(solutions,
                      key = lambda solution: [len(word) for word in solution],
                      reverse = True)[0]
    else:
        raise ValueError('no solution')

print(find_words('tableapplechairtablecupboard'))
print(find_words('tableprechaun', words = set(['tab', 'table', 'leprechaun'])))

yields

['table', 'apple', 'chair', 'table', 'cupboard']
['tab', 'leprechaun']
君临天下 · #3 · 2019-01-02 17:05

Based on the excellent work in the top answer, I've created a pip package for easy use.

>>> import wordninja
>>> wordninja.split('derekanderson')
['derek', 'anderson']

To install, run pip install wordninja.

The differences are minor: it returns a list rather than a str, works in Python 3, includes the word list, and splits properly even when non-alpha characters (underscores, dashes, etc.) are present.

Thanks again to Generic Human!

https://github.com/keredson/wordninja

琉璃瓶的回忆 · #4 · 2019-01-02 17:05

Unutbu's solution was quite close, but I find the code difficult to read, and it didn't yield the expected result. Generic Human's solution has the drawback that it needs word frequencies, which is not appropriate for every use case.

Here's a simple solution using a Divide and Conquer algorithm.

  1. It tries to minimize the number of words. E.g., find_words('cupboard') will return ['cupboard'] rather than ['cup', 'board'] (assuming that cupboard, cup and board are in the dictionary).
  2. The optimal solution is not unique; the implementation below returns one solution. find_words('charactersin') could return ['characters', 'in'] or it might return ['character', 'sin'] (as seen below). You could quite easily modify the algorithm to return all optimal solutions.
  3. In this implementation, solutions are memoized so that it runs in a reasonable time.

The code:

words = set()
with open('/usr/share/dict/words') as f:
    for line in f:
        words.add(line.strip())

solutions = {}
def find_words(instring):
    # First check if instring is in the dictionary
    if instring in words:
        return [instring]
    # No... But maybe it's a result we already computed
    if instring in solutions:
        return solutions[instring]
    # Nope. Try to split the string at every position and recursively search for results
    best_solution = None
    for i in range(1, len(instring)):
        part1 = find_words(instring[:i])
        part2 = find_words(instring[i:])
        # Both parts MUST have a solution
        if part1 is None or part2 is None:
            continue
        solution = part1 + part2
        # Is the solution found "better" than the previous one?
        if best_solution is None or len(solution) < len(best_solution):
            best_solution = solution
    # Remember (memoize) this solution to avoid having to recompute it
    solutions[instring] = best_solution
    return best_solution

This takes about 5 seconds on my 3GHz machine:

result = find_words("thereismassesoftextinformationofpeoplescommentswhichisparsedfromhtmlbuttherearenodelimitedcharactersinthemforexamplethumbgreenappleactiveassignmentweeklymetaphorapparentlytherearethumbgreenappleetcinthestringialsohavealargedictionarytoquerywhetherthewordisreasonablesowhatsthefastestwayofextractionthxalot")
assert(result is not None)
print(' '.join(result))

the reis masses of text information of peoples comments which is parsed from h t m l but there are no delimited character sin them for example thumb green apple active assignment weekly metaphor apparently there are thumb green apple e t c in the string i also have a large dictionary to query whether the word is reasonable so whats the fastest way of extraction t h x a lot

长期被迫恋爱 · #5 · 2019-01-02 17:10

Expanding on @miku's suggestion to use a Trie, an append-only Trie is relatively straightforward to implement in Python:

class Node:
    def __init__(self, is_word=False):
        self.children = {}
        self.is_word = is_word

class TrieDictionary:
    def __init__(self, words=tuple()):
        self.root = Node()
        for word in words:
            self.add(word)

    def add(self, word):
        node = self.root
        for c in word:
            node = node.children.setdefault(c, Node())
        node.is_word = True

    def lookup(self, word, from_node=None):
        node = self.root if from_node is None else from_node
        for c in word:
            try:
                node = node.children[c]
            except KeyError:
                return None

        return node

We can then build a Trie-based dictionary from a set of words:

dictionary = {"a", "pea", "nut", "peanut", "but", "butt", "butte", "butter"}
trie_dictionary = TrieDictionary(words=dictionary)

Which will produce a tree that looks like this (* indicates beginning or end of a word):

* -> a*
 \\\ 
  \\\-> p -> e -> a*
   \\              \-> n -> u -> t*
    \\
     \\-> b -> u -> t*
      \\             \-> t*
       \\                 \-> e*
        \\                     \-> r*
         \
          \-> n -> u -> t*

We can incorporate this into a solution by combining it with a heuristic about how to choose words. For example, we can prefer longer words over shorter words:

def using_trie_longest_word_heuristic(s):
    node = None
    possible_indexes = []

    # O(1) short-circuit if whole string is a word, doesn't go against longest-word wins
    if s in dictionary:
        return [ s ]

    for i in range(len(s)):
        # traverse the trie, char-wise to determine intermediate words
        node = trie_dictionary.lookup(s[i], from_node=node)

        # no more words start this way
        if node is None:
            # iterate words we have encountered from biggest to smallest
            for possible in possible_indexes[::-1]:
                # recurse to attempt to solve the remaining sub-string
                end_of_phrase = using_trie_longest_word_heuristic(s[possible+1:])

                # if we have a solution, return this word + our solution
                if end_of_phrase:
                    return [ s[:possible+1] ] + end_of_phrase

            # unsolvable
            break

        # if this node marks the end of a word, record the index as a possible split point
        elif node.is_word:
            possible_indexes.append(i)

    # empty string OR unsolvable case 
    return []

We can use this function like this:

>>> using_trie_longest_word_heuristic("peanutbutter")
[ "peanut", "butter" ]

Because we maintain our position in the Trie as we search for longer and longer words, we traverse the trie at most once per possible solution (rather than twice for peanut: pea, peanut). The final short-circuit saves us from walking char-wise through the string in the worst case.

The final result is only a handful of inspections:

'peanutbutter' - not a word, go charwise
'p' - in trie, use this node
'e' - in trie, use this node
'a' - in trie and ends a word here, store potential word and use this node
'n' - in trie, use this node
'u' - in trie, use this node
't' - in trie and ends a word here, store potential word and use this node
'b' - not in trie from `peanut` vector
'butter' - remainder of longest is a word

A benefit of this solution is that you know very quickly whether longer words exist with a given prefix, which spares the need to exhaustively test sequence combinations against a dictionary. It also makes reaching an unsolvable answer comparatively cheap compared to other implementations.
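
For instance, with the trie_dictionary built above, checking whether any word starts with a given prefix is a single lookup:

>>> trie_dictionary.lookup("pe") is not None   # some word starts with "pe"
True
>>> trie_dictionary.lookup("px") is None       # no word starts with "px"
True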

The downsides of this solution are a large memory footprint for the trie and the cost of building the trie up-front.

看淡一切 · #6 · 2019-01-02 17:13

For German, there is CharSplit, which uses machine learning and works pretty well for strings of a few words.

https://github.com/dtuggener/CharSplit

零度萤火 · #7 · 2019-01-02 17:20

The answer by https://stackoverflow.com/users/1515832/generic-human is great. But the best implementation of this I've ever seen was written by Peter Norvig himself in his book 'Beautiful Data'.

Before I paste his code, let me expand on why Norvig's method is more accurate (although a little slower and longer in terms of code).

1) The data is a bit better, both in terms of size and in terms of precision (he uses a word count rather than a simple ranking).

2) More importantly, it's the logic behind n-grams that really makes the approach so accurate.

The example he provides in his book is the problem of splitting the string 'sitdown'. A non-bigram method of string splitting would consider p('sit') * p('down'), and if this is less than p('sitdown') - which will quite often be the case - it will NOT split it, but we'd want it to (most of the time).

However, when you have the bigram model you can weigh p('sit down') as a bigram against p('sitdown'), and the former wins. Basically, if you don't use bigrams, the probabilities of the words you're splitting are treated as independent, which is not the case; some words are more likely to appear one after another. Unfortunately, those are also the words that are often stuck together in a lot of instances, and that confuses the splitter.
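
A toy comparison makes the point concrete (the numbers below are purely illustrative and not taken from Norvig's corpus):

# Purely illustrative probabilities, not real corpus values
P = {'sit': 1e-4, 'down': 2e-4, 'sitdown': 5e-8}
P_bigram = {'sit down': 3e-5}

# Unigram (independence) model: the split loses, so 'sitdown' stays glued together
print(P['sit'] * P['down'] < P['sitdown'])    # True
# Bigram model: 'sit down' as a pair beats 'sitdown', so the split wins
print(P_bigram['sit down'] > P['sitdown'])    # True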

Here's the link to the data (it's data for 3 separate problems and segmentation is only one. Please read the chapter for details): http://norvig.com/ngrams/

and here's the link to the code: http://norvig.com/ngrams/ngrams.py

These links have been up a while, but I'll copy-paste the segmentation part of the code here anyway.

import re, string, random, glob, operator, heapq
from collections import defaultdict
from math import log10

def memo(f):
    "Memoize function f."
    table = {}
    def fmemo(*args):
        if args not in table:
            table[args] = f(*args)
        return table[args]
    fmemo.memo = table
    return fmemo

def test(verbose=None):
    """Run some tests, taken from the chapter.
    Since the hillclimbing algorithm is randomized, some tests may fail."""
    import doctest
    print 'Running tests...'
    doctest.testfile('ngrams-test.txt', verbose=verbose)

################ Word Segmentation (p. 223)

@memo
def segment(text):
    "Return a list of words that is the best segmentation of text."
    if not text: return []
    candidates = ([first]+segment(rem) for first,rem in splits(text))
    return max(candidates, key=Pwords)

def splits(text, L=20):
    "Return a list of all possible (first, rem) pairs, len(first)<=L."
    return [(text[:i+1], text[i+1:]) 
            for i in range(min(len(text), L))]

def Pwords(words): 
    "The Naive Bayes probability of a sequence of words."
    return product(Pw(w) for w in words)

#### Support functions (p. 224)

def product(nums):
    "Return the product of a sequence of numbers."
    return reduce(operator.mul, nums, 1)

class Pdist(dict):
    "A probability distribution estimated from counts in datafile."
    def __init__(self, data=[], N=None, missingfn=None):
        for key,count in data:
            self[key] = self.get(key, 0) + int(count)
        self.N = float(N or sum(self.itervalues()))
        self.missingfn = missingfn or (lambda k, N: 1./N)
    def __call__(self, key): 
        if key in self: return self[key]/self.N  
        else: return self.missingfn(key, self.N)

def datafile(name, sep='\t'):
    "Read key,value pairs from file."
    for line in file(name):
        yield line.split(sep)

def avoid_long_words(key, N):
    "Estimate the probability of an unknown word."
    return 10./(N * 10**len(key))

N = 1024908267229 ## Number of tokens

Pw  = Pdist(datafile('count_1w.txt'), N, avoid_long_words)

#### segment2: second version, with bigram counts, (p. 226-227)

def cPw(word, prev):
    "Conditional probability of word, given previous word."
    try:
        return P2w[prev + ' ' + word]/float(Pw[prev])
    except KeyError:
        return Pw(word)

P2w = Pdist(datafile('count_2w.txt'), N)

@memo 
def segment2(text, prev='<S>'): 
    "Return (log P(words), words), where words is the best segmentation." 
    if not text: return 0.0, [] 
    candidates = [combine(log10(cPw(first, prev)), first, segment2(rem, first)) 
                  for first,rem in splits(text)] 
    return max(candidates) 

def combine(Pfirst, first, (Prem, rem)): 
    "Combine first and rem results into one (probability, words) pair." 
    return Pfirst+Prem, [first]+rem 
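
For anyone on Python 3 (the code above is Python 2: print statements, itervalues, file(), and tuple unpacking in combine), here is a minimal, self-contained sketch of the unigram segment idea, using a tiny made-up count table in place of count_1w.txt:

from functools import lru_cache
from math import log10

# Tiny illustrative unigram counts (a stand-in for count_1w.txt, not real data)
COUNTS = {'sit': 5000, 'down': 8000, 'sitdown': 1, 'peanut': 700, 'butter': 900}
N = sum(COUNTS.values())

def Pw(word):
    "Unigram probability with a length-based penalty for unknown words."
    if word in COUNTS:
        return COUNTS[word] / N
    return 10.0 / (N * 10 ** len(word))   # mirrors avoid_long_words()

@lru_cache(maxsize=None)
def segment(text):
    "Best segmentation of text under the naive unigram model."
    if not text:
        return ()
    candidates = ((text[:i],) + segment(text[i:]) for i in range(1, len(text) + 1))
    return max(candidates, key=lambda ws: sum(log10(Pw(w)) for w in ws))

print(segment('sitdown'))        # ('sit', 'down') with these illustrative counts
print(segment('peanutbutter'))   # ('peanut', 'butter')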