Background
I am looking to automate the creation of Domains in JasperServer. A Domain is a "view" of data used for building ad hoc reports, and its column names must be presented to the user in a human-readable fashion.
Problem
There are over 2,000 possible pieces of data that the organization could theoretically want to include on a report. The data are sourced from columns with non-human-friendly names such as:
payperiodmatchcode labordistributioncodedesc dependentrelationship actionendoption actionendoptiondesc addresstype addresstypedesc historytype psaddresstype rolename bankaccountstatus bankaccountstatusdesc bankaccounttype bankaccounttypedesc beneficiaryamount beneficiaryclass beneficiarypercent benefitsubclass beneficiaryclass beneficiaryclassdesc benefitactioncode benefitactioncodedesc benefitagecontrol benefitagecontroldesc ageconrolagelimit ageconrolnoticeperiod
Question
How would you automatically change such names to:
- pay period match code
- labor distribution code desc
- dependent relationship
Ideas
Use Google's "Did you mean" engine, though I think scraping it violates their TOS:
lynx -dump «url» | grep "Did you mean" | awk ...
Languages
Any language is fine, but text-processing languages such as Perl would probably be well suited. (The column names are English-only.)
Unnecessary Perfection
The goal is not 100% perfection in breaking words apart; the following outcome is acceptable:
- enrollmenteffectivedate -> Enrollment Effective Date
- enrollmentenddate -> Enroll Men Tend Date
- enrollmentrequirementset -> Enrollment Requirement Set
No matter what, a human will need to double-check the results and correct many of them. Whittling a set of 2,000 results down to 600 edits would be a dramatic time savings. Fixating on the cases that have multiple valid splits (e.g., therapistname) misses the point altogether.
Answers
Sometimes, brute-forcing is acceptable.
See also A Spellchecker Used to Be a Major Feat of Software Engineering.
I reduced your list to 32 atomic terms that I was concerned about and put them, longest first, into a regex alternation.
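The 32 terms themselves aren't listed, so the sketch below uses a small assumed sample. The longest-first ordering matters because a regex engine tries alternatives left to right: "beneficiary" must be tried before "benefit", or the match stops short.

```python
import re

# Sample atomic terms (an assumed stand-in for the answer's 32-term list).
terms = ["beneficiary", "benefit", "account", "address", "action", "status",
         "class", "type", "code", "desc", "bank", "sub", "age"]

# Longest first, so longer terms win over their own prefixes.
pattern = re.compile("|".join(sorted(terms, key=len, reverse=True)))

def split_name(name):
    # Insert a space before every recognized term, then trim the leading one.
    return pattern.sub(lambda m: " " + m.group(0), name).strip()

print(split_name("bankaccountstatusdesc"))  # bank account status desc
print(split_name("benefitsubclass"))        # benefit sub class
```

Anything not covered by the term list simply passes through unchanged, which leaves visible residue for the human clean-up pass.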
Peter Norvig has a great Python script with a word-segmentation function built on unigram/bigram statistics; look at the logic of the segment2 function in ngrams.py. Details are in the chapter "Natural Language Corpus Data" from the book Beautiful Data (Segaran and Hammerbacher, 2009). http://norvig.com/ngrams/
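A stripped-down unigram version of that segmenter can be sketched in a few lines. The counts below are a toy stand-in for Norvig's count_1w.txt corpus data, just to make the sketch runnable; his segment2 additionally uses bigram statistics and log-probabilities.

```python
import math
from functools import lru_cache

# Toy unigram counts (assumed); the real script loads corpus frequencies.
COUNTS = {"pay": 50, "period": 30, "match": 20, "code": 40, "labor": 10,
          "distribution": 8, "desc": 25, "enrollment": 12, "end": 60,
          "date": 70, "men": 15, "tend": 5, "effective": 9}
TOTAL = sum(COUNTS.values())

def pword(w):
    # Smoothed unigram probability; unseen words are penalized by length,
    # in the spirit of Norvig's avoid_long_words.
    return COUNTS.get(w, 0) / TOTAL or 10.0 / (TOTAL * 10 ** len(w))

@lru_cache(maxsize=None)
def segment(text):
    # Most probable split of text, maximizing the product of word probs.
    if not text:
        return ()
    splits = [(text[:i], text[i:]) for i in range(1, len(text) + 1)]
    return max(((first,) + segment(rest) for first, rest in splits),
               key=lambda words: math.prod(map(pword, words)))

print(" ".join(segment("payperiodmatchcode")))  # pay period match code
```

Memoizing on the suffix keeps this polynomial instead of exponential; with real corpus counts it handles names the dictionary-only approaches miss.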
One answer gives a Lua program that tries longest matches from a dictionary.
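A minimal Python sketch of that greedy longest-match idea (the word set is a small assumed sample; a real run would load a full dictionary such as /usr/share/dict/words):

```python
# Assumed sample dictionary.
WORDS = {"bank", "account", "status", "desc", "pay", "period", "match",
         "code", "type", "address", "history"}
MAXLEN = max(map(len, WORDS))

def greedy_split(name):
    out, i = [], 0
    while i < len(name):
        # Try the longest dictionary word starting at position i.
        for j in range(min(len(name), i + MAXLEN), i, -1):
            if name[i:j] in WORDS:
                out.append(name[i:j])
                i = j
                break
        else:
            # No dictionary word here: emit one character and move on,
            # leaving the residue for the human pass to fix.
            out.append(name[i])
            i += 1
    return " ".join(out)

print(greedy_split("bankaccountstatusdesc"))  # bank account status desc
```

Greedy matching is fast and simple, but it can commit to a wrong prefix when a long word's beginning overlaps another word; those mis-splits fall under the imperfection the question already budgets for.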
Two things occur to me: one is anag, which finds anagrams. After all, "time piece" is an anagram of "timepiece" ... now you just have to weed out the false positives.

Given that some words could be substrings of others, especially with multiple words smashed together, I think simple solutions like regexes are out. I'd go with a full-on parser; my experience is with ANTLR. If you want to stick with Perl, I've had good luck using ANTLR parsers generated as Java through Inline::Java.
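The anagram suggestion can be sketched by checking which multi-word combinations from a dictionary use exactly the letters of the column name (tiny assumed word list; a real run would use a system dictionary and more than two-word combinations):

```python
from itertools import permutations

# Tiny assumed word list for illustration.
WORDS = ["time", "piece", "item", "epic", "pay", "period"]

def letters(s):
    return "".join(sorted(s))

def anagram_splits(name, words=WORDS, n=2):
    # Every ordered n-word combination whose letters exactly match the name.
    target = letters(name)
    return [" ".join(p) for p in permutations(words, n)
            if letters("".join(p)) == target]

print(anagram_splits("timepiece"))
# ['time piece', 'piece time', 'piece item', 'item piece']
```

The output illustrates the weeding-out problem: "piece time", "piece item", and "item piece" are all letter-perfect anagrams of "timepiece", so anagram matching alone produces far more false positives than the substring-based approaches.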