Scrabble word finder with wildcards

Posted 2020-02-08 09:30

I’ve got a problem, and it seems others before me have had similar ones, but I haven’t been able to find a solution that works for me.

I’m currently building a mobile web application using C#, MySQL, HTML5 and JavaScript. The application will help users find possible words to play in games like Scrabble.

The problem I’ve got: how do I get the right words out of a MySQL database containing a dictionary, based on the user’s letter input?

More details:

- Users can input any number of letters and can also use wildcards (each representing any letter).
- If the user inputs “TEST”, the results can’t contain words with more than one E, more than one S, or more than two Ts; a result containing “TESTER” would be wrong.
- The results can’t contain words with more letters than were input.

UPDATE: It seems a trie is the solution to my problem, as suggested by Eric Lippert here.
The problem is that I'm a beginner with both C# and MySQL, so here are some follow-up questions:

  1. How do I create a Trie from my MySQL dictionary? (400k+ words)
  2. How do I store the Trie for quick and future access?
  3. How do I access the Trie and extract words from it with C#?

Thank you very much for the help!

Tags: c# mysql regex
3 Answers
男人必须洒脱
#2 · 2020-02-08 10:04

How do I get the right words out of a MySQL database containing a dictionary, based on the user's letter input?

You don't. A relational database table is not a suitable data structure for solving this problem as efficiently as you need to.

What you do instead is you build a trie data structure out of the dictionary (or, if you're really buff, you build a dawg -- a directed acyclic word graph -- which is a sort of compressed trie.)

Once you have a trie/dawg it becomes very inexpensive to test every word in the dictionary against a given rack, because you can "prune out" whole huge branches of the dictionary that the rack cannot possibly match.

Let's look at a small example. Suppose you have the dictionary "OP, OPS, OPT, OPTS, POT, POTS, SOP, SOPS, STOP, STOPS". From that you build this trie (nodes marked with a $ are those where a word can end):

           ^root^
           /  |  \
         O    P    S
         |    |   / \
         P$   O  O   T   
        / \   |  |   |
       T$  S$ T$ P$  O
       |      |  |   |
       S$     S$ S$  P$
                     |
                     S$

Now say you have the rack "OPS" -- what do you do?

First you say "can I go down the O branch?" Yes, you can. So now the problem is matching "PS" against the O branch. Can you go down the P subbranch? Yes. Does it have an end-of-word marker? Yes, so OP is a match. Now the problem is matching "S" against the OP branch. Can you go down the T branch? No. Can you go down the S branch? Yes. Now you have the empty rack and you have to match it against the OPS branch. Does it have an end-of-word marker? Yes! So OPS matches also. Now backtrack up to the root.

Can you go down the P branch? Yes. Now the problem is to match OS against the P branch. Go down the PO branch and match S -- that fails. Backtrack to the root.

And again, you see how this goes. Eventually we go down the SOP branch and find an end-of-word on SOP, so "SOP" matches this rack. We don't go down the ST branch because we don't have a T.

We've tried every possible word in the dictionary and discovered that OP, OPS and SOP all match. But we never had to investigate OPTS, POTS, STOP or STOPS because we didn't have a T.

You see how this data structure makes it very efficient? Once you have determined that you do not have the letters on the rack to make the beginning of a word, you don't have to investigate any dictionary words that start with that beginning. If you have PO but no T, you don't have to investigate POTSHERD or POTATO or POTASH or POTLATCH or POTABLE; all those expensive and fruitless searches go away very quickly.
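
To make that concrete, here is a minimal sketch of the trie and the pruning search in C#. It is only an illustration of the idea described above, the class and method names are mine, and it assumes the dictionary words and the rack are upper-case A-Z.

    using System.Collections.Generic;

    // One node per letter; IsWord corresponds to the $ markers in the diagram above.
    class TrieNode
    {
        public bool IsWord;
        public Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
    }

    class Trie
    {
        public TrieNode Root = new TrieNode();

        // Insert one dictionary word.
        public void Add(string word)
        {
            var node = Root;
            foreach (char c in word)
            {
                if (!node.Children.TryGetValue(c, out var child))
                    node.Children[c] = child = new TrieNode();
                node = child;
            }
            node.IsWord = true;
        }

        // Return every dictionary word that can be built from the rack.
        public List<string> FindWords(string rack)
        {
            var counts = new int[26];                 // how many of each tile we hold
            foreach (char c in rack) counts[c - 'A']++;

            var results = new List<string>();
            Search(Root, counts, "", results);
            return results;
        }

        private static void Search(TrieNode node, int[] counts, string prefix, List<string> results)
        {
            if (node.IsWord) results.Add(prefix);     // an end-of-word ($) node: emit the word

            foreach (var kv in node.Children)
            {
                int i = kv.Key - 'A';
                if (counts[i] == 0) continue;         // no such tile left: prune this whole branch
                counts[i]--;                          // play the tile
                Search(kv.Value, counts, prefix + kv.Key, results);
                counts[i]++;                          // take it back (backtrack)
            }
        }
    }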

Adapting the system to deal with "wild" tiles is pretty straightforward; if you have OPS?, then just run the search algorithm 26 times, on OPSA, OPSB, OPSC, and so on. The search should be fast enough that doing it 26 times is cheap (or 26 x 26 times if you have two blanks).
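
A hypothetical helper along those lines, built on the sketch above: substitute each of the 26 letters for a blank ('?') and merge the distinct results; a second blank is handled by the recursion.

    using System.Collections.Generic;

    static class BlankHelper
    {
        // Expand each blank ('?') into every letter A-Z and merge the matches.
        public static HashSet<string> FindWithBlanks(Trie trie, string rack)
        {
            int blank = rack.IndexOf('?');
            if (blank < 0)
                return new HashSet<string>(trie.FindWords(rack));

            var results = new HashSet<string>();
            for (char c = 'A'; c <= 'Z'; c++)
                results.UnionWith(FindWithBlanks(trie, rack.Remove(blank, 1) + c));
            return results;
        }
    }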

This is the basic algorithm that professional Scrabble AI programs use, though of course they also have to deal with things like board position, rack management and so on, which complicate the algorithms somewhat. This simple version of the algorithm will be plenty fast enough to generate all the possible words on a rack.

Don't forget that of course you only have to compute the trie/dawg once if the dictionary is not changing over time. It can be time-consuming to build the trie out of the dictionary, so you might want to do so once and then figure out some way to store the trie on disk in a form that is amenable to rebuilding it quickly from disk.
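
One possible way to do that, building on the TrieNode/Trie sketch above: load the word list out of MySQL once, then write the trie depth-first to a small binary file and read it back at startup. The MySql.Data client and the words/word table and column names are assumptions here; adjust them to your schema.

    using System.IO;
    using MySql.Data.MySqlClient;   // assumption: the Connector/NET client is referenced

    static class TrieStorage
    {
        // Build the trie once from the database (assumed table `words`, column `word`).
        public static Trie BuildFromDatabase(string connectionString)
        {
            var trie = new Trie();
            using (var conn = new MySqlConnection(connectionString))
            using (var cmd = new MySqlCommand("SELECT word FROM words", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        trie.Add(reader.GetString(0).ToUpperInvariant());
                }
            }
            return trie;
        }

        // Save each node depth-first: end-of-word flag, child count, then each child letter and subtree.
        public static void Save(TrieNode node, BinaryWriter w)
        {
            w.Write(node.IsWord);
            w.Write(node.Children.Count);
            foreach (var kv in node.Children)
            {
                w.Write(kv.Key);
                Save(kv.Value, w);
            }
        }

        // Load mirrors Save exactly, rebuilding the same structure.
        public static TrieNode Load(BinaryReader r)
        {
            var node = new TrieNode { IsWord = r.ReadBoolean() };
            int childCount = r.ReadInt32();
            for (int i = 0; i < childCount; i++)
            {
                char letter = r.ReadChar();
                node.Children[letter] = Load(r);
            }
            return node;
        }
    }

    // Usage sketch:
    //   var trie = TrieStorage.BuildFromDatabase(connStr);
    //   using (var w = new BinaryWriter(File.Create("words.trie"))) TrieStorage.Save(trie.Root, w);
    //   ...later, at startup...
    //   using (var r = new BinaryReader(File.OpenRead("words.trie")))
    //       trie = new Trie { Root = TrieStorage.Load(r) };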

You can optimize the memory usage by building a DAWG out of the trie. Notice how there is a lot of repetition because in English, lots of words end the same, just as lots of words begin the same. The trie does a great job of sharing nodes at the beginning but a lousy job of sharing them at the end. You can notice for example that the "S$ with no children" pattern is extremely common, and turn the trie into:

           ^root^
          / |  \
        O   P    S
        |   |   / \
        P$  O  O   T   
       /  \ |  |   |
      T$  | T$ P$  O
      |    \ | |   |
       \    \| /   P$
        \    |/    |
         \   |    /
          \  |   /  
           \ |  /
            \| /  
             |/
             |       
             S$

Saving a whole pile of nodes. And then you might notice that two words now end in O-P$-S$, and two words end in T$-S$, so you can compress it further to:

           ^root^
           / | \
          O  P  S
          |  | / \
          P$ O \  T   
         /  \|  \ |
         |   |   \|
         |   |    O
         |   T$   |
          \  |    P$
           \ |   /
            \|  /  
             | /
             |/   
             S$

And now we have the minimal DAWG for this dictionary.

Further reading:

http://dl.acm.org/citation.cfm?id=42420

http://archive.msdn.microsoft.com/dawg1

http://www.gtoal.com/wordgames/scrabble.html

Ridiculous、
#3 · 2020-02-08 10:07

Here is how I would solve the problem (assuming of course that you have control of the DB, and can modify tables/add tables, or even control the original load of the DB).

My solution would use two tables. The first would just be a list of every possible letter combination from your dictionary, with the component letters sorted alphabetically (i.e. TEST would be ESTT, TESTER would be EERSTT, DAD would be ADD).

The second table would have every word and a reference to the key for table one.

Table One - LetterInWord

Index Letters
1     ESTT
2     EERSTT
3     EST
4     ADD
5     APST

In table one, you insert each word's letters in alphabetical order - TEST becomes ESTT.

Table Two - Words

Index LetterInWordIndex  Word
1     1                  TEST
2     2                  TESTER
3     3                  SET
4     4                  ADD
5     4                  DAD
6     5                  SPAT
7     5                  PAST               

In table two, you insert each word along with a reference to its letter-key index in table one.

This will be a one-to-many relationship: one entry in the LetterInWord table can map to multiple entries in the Words table.
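
For reference, computing that key is a few lines of C# (the helper name is mine); the same function is applied to every dictionary word at load time and to the user's rack at lookup time.

    using System;

    static class LetterKey
    {
        // "TEST" -> "ESTT", "DAD" -> "ADD": sort the letters alphabetically.
        public static string Alphabetize(string letters)
        {
            char[] chars = letters.ToUpperInvariant().ToCharArray();
            Array.Sort(chars);
            return new string(chars);
        }
    }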

Non-wildcard lookup: say my input letters are SETT. Sort them alphabetically (you get ESTT).

Then, in the lookup, you select all rows from LetterInWord where Letters equals that value, join on the Words table, and your output in one query is a list of all the words built from exactly those letters.
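
A rough version of that lookup from C#, assuming the MySql.Data client, the Alphabetize helper above, and the table/column names shown in this answer (adjust to your real schema):

    using System.Collections.Generic;
    using MySql.Data.MySqlClient;

    static class ExactLookup
    {
        public static List<string> Find(string connectionString, string rack)
        {
            const string sql =
                @"SELECT w.Word
                  FROM LetterInWord l
                  JOIN Words w ON w.LetterInWordIndex = l.`Index`
                  WHERE l.Letters = @key";

            var words = new List<string>();
            using (var conn = new MySqlConnection(connectionString))
            using (var cmd = new MySqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@key", LetterKey.Alphabetize(rack));
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        words.Add(reader.GetString(0));
                }
            }
            return words;
        }
    }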

Now for wildcards: say my input letters are EST*. Remember the length, 4. Strip out the wildcards and you get EST (make sure you sort this alphabetically too). Now look for all rows where Letters contains EST and the length of Letters is <= 4, joined on the Words table.

That would return TEST, REST, SET, etc.
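
One detail worth spelling out: for the example to return REST (stored key ERST), "contains" has to mean that the stripped rack letters appear in order somewhere inside the stored key, not that they form a contiguous substring, since a plain LIKE '%EST%' would miss ERST. A sketch of that check, which you could run in C# after narrowing the candidates by length in SQL:

    static class WildcardCheck
    {
        // True if every letter of rackKey appears, in order, inside candidateKey.
        // Both strings are assumed to be alphabetised ("EST" fits "ERST" and "ESTT").
        public static bool IsSubsequence(string rackKey, string candidateKey)
        {
            int i = 0;
            foreach (char c in candidateKey)
            {
                if (i < rackKey.Length && rackKey[i] == c)
                    i++;
            }
            return i == rackKey.Length;
        }
    }

Combined with the length bound (the stored key has at most as many letters as the whole rack, blanks included), the letters of a surviving candidate that the subsequence check skips over are exactly the ones the blanks have to supply.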

I'm not sure that this is the most efficient method, but it works. I have used it in the past to do word lookups from dictionaries, and it has reasonable performance with minimal complexity.

地球回转人心会变
#4 · 2020-02-08 10:11

This will be very difficult to do if all you have is the dictionary. If you have the ability to make a new table or add new columns, I would:

- Create a table with a column for the word, plus 26 columns (one for each letter).
- Run a stored proc/back-end process that counts the occurrences of each letter in each word and puts them into the appropriate columns.
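
If you would rather fill those columns from C# than from a stored proc, the counting step is tiny; the acount..zcount column naming is an assumption, not something from the question.

    static class LetterCounter
    {
        // Count how many times each letter A-Z occurs in a word;
        // index 0 is the value for acount, index 1 for bcount, and so on.
        public static int[] Counts(string word)
        {
            var counts = new int[26];
            foreach (char c in word.ToUpperInvariant())
            {
                if (c >= 'A' && c <= 'Z')
                    counts[c - 'A']++;
            }
            return counts;
        }
    }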

Then (ignoring wildcards) you can do

Select word from dictionary where tcount <= 2 and ecount <= 1 and scount <= 1

(plus <letter>count = 0 for every letter that is not in the rack, otherwise words that use other letters will slip through)

For wildcards you could also add: and length <= number_of_letters

Actually always use the length clause, because you will then be able to index on it to improve performance.
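
A sketch of how the application side might assemble that query for an arbitrary rack. The column names (acount through zcount, plus length) are assumptions, and the blank handling, raising every per-letter limit by the number of blanks, is only a coarse filter in the spirit of this answer, so an exact re-check of the returned candidates in C# may still be worth doing.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class RackQuery
    {
        // Build the WHERE clause for a rack such as "TEST" or "EST?".
        public static string BuildWhereClause(string rack)
        {
            int blanks = rack.Count(c => c == '?');
            var counts = new int[26];
            foreach (char c in rack.Where(char.IsLetter))
                counts[char.ToUpperInvariant(c) - 'A']++;

            // Letters not on the rack get "<= 0" (or "<= blanks" when blanks exist),
            // which is the part the hand-written example above leaves implicit.
            var conditions = new List<string> { $"length <= {rack.Length}" };
            for (int i = 0; i < 26; i++)
                conditions.Add($"{(char)('a' + i)}count <= {counts[i] + blanks}");

            return "WHERE " + string.Join(" AND ", conditions);
        }
    }

For "TEST" this produces length <= 4, ecount <= 1, scount <= 1, tcount <= 2, and <letter>count <= 0 for every other letter, which is the same query as the hand-written example with the missing zero bounds filled in.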

Anything else is going to be exceptionally slow at query time.
