What is a simple way to generate keywords from a text?

Published 2019-03-08 11:47

Question:

I suppose I could take a text and remove high-frequency English words from it. By keywords, I mean the words that are most characteristic of the content of the text (tags). It doesn't have to be perfect; a good approximation is fine for my needs.

Has anyone done anything like that? Do you know of a Perl or Python library that does it?

Lingua::EN::Tagger is exactly what I asked for; however, I need a library that works for French text too.

Answer 1:

You could try using the Perl module Lingua::EN::Tagger for a quick and easy solution.

A more complicated module, Lingua::EN::Semtags::Engine, uses Lingua::EN::Tagger together with a WordNet database to produce more structured output. Both are pretty easy to use; just check the documentation on CPAN or use perldoc after you install the module.



Answer 2:

The name for those "high frequency English words" is stop words, and there are many lists available. I'm not aware of any Python or Perl libraries, but you could encode your stop-word list in a binary tree or hash (or use Python's frozenset), then, as you read each word from the input text, check whether it is in your stop list and filter it out.

Note that after you remove the stop words, you'll need to do some stemming to normalize the resulting text (removing plurals, -ings, -eds), and then remove all the duplicate "keywords".
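As an illustrative Python sketch of that pipeline (the stop list and the crude suffix-stripping "stemmer" below are placeholders; in practice you would use a full stop-word list and a real stemmer such as the Porter stemmer):

import re

# Tiny illustrative stop list; a real one has a few hundred entries.
STOP_WORDS = frozenset(
    "a an and are as at be by for from has he in is it its of on that the to was were will with".split()
)

def crude_stem(word):
    # Very rough suffix stripping; a real stemmer (e.g. Porter) does much better.
    for suffix in ("ings", "ing", "eds", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[:-len(suffix)]
    return word

def keywords(text):
    words = re.findall(r"[a-z']+", text.lower())
    kept = [w for w in words if w not in STOP_WORDS and len(w) > 2]
    # Stem, then drop duplicates while keeping first-seen order.
    seen, result = set(), []
    for word in kept:
        stem = crude_stem(word)
        if stem not in seen:
            seen.add(stem)
            result.append(stem)
    return result

print(keywords("The dogs were running to the coffeehouse, and the dog kept running."))
# -> ['dog', 'runn', 'coffeehouse', 'kept']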



Answer 3:

To find the most frequently used words in a text, do something like this:

#!/usr/bin/perl -w

use strict;
use warnings 'all';

# Read the text:
open my $ifh, '<', 'text.txt'
  or die "Cannot open file: $!";
local $/;
my $text = <$ifh>;

# Find all the words, and count how many times they appear:
my %words;
$words{$_}++
  for grep { length > 1 && m/^[\@a-z'-]+$/i }   # keep only word-like tokens
    map  { s/[",.]//g; $_ }                     # strip some punctuation
      split /\s+/, $text;

print "Words, sorted by frequency:\n";
my (@data_line);
format FMT = 
@<<<<<<<<<<<<<<<<<<<<<<...     @########
@data_line
.
local $~ = 'FMT';

# Sort by frequency and print the words that appear more than twice:
for my $word ( sort { $words{$b} <=> $words{$a} }
               grep { $words{$_} > 2 }
               keys %words ) {
    @data_line = ($word, $words{$word});
    write();
}

Example output looks like this:

john@ubuntu-pc1:~/Desktop$ perl frequency.pl 
Words, sorted by frequency:
for                                   32
Jan                                   27
am                                    26
of                                    21
your                                  21
to                                    18
in                                    17
the                                   17
Get                                   13
you                                   13
OTRS                                  11
today                                 11
PSM                                   10
Card                                  10
me                                     9
on                                     9
and                                    9
Offline                                9
with                                   9
Invited                                9
Black                                  8
get                                    8
Web                                    7
Starred                                7
All                                    7
View                                   7
Obama                                  7


Answer 4:

In Perl there's Lingua::EN::Keywords.



Answer 5:

The simplest way to do what you want is this...

>>> text = "this is some of the sample text"
>>> words = [word for word in set(text.split(" ")) if len(word) > 3]
>>> words
['this', 'some', 'sample', 'text']

I don't know of any standard module that does this, but it wouldn't be hard to replace the three-letter length limit with a lookup into a set of common English words.
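For example, a rough sketch of that substitution, using a tiny hand-made stop set (illustrative only; a real list would contain hundreds of words):

>>> common_words = frozenset("this is some of the a an to".split())
>>> text = "this is some of the sample text"
>>> sorted(word for word in set(text.split(" ")) if word not in common_words)
['sample', 'text']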



Answer 6:

One-liner solution (words longer than two characters that occur more than twice; the }{ trick closes the implicit -n loop, so the report is printed once after all input has been read):

perl -ne'$h{$1}++while m/\b(\w{3,})\b/g}{printf"%-20s %5d\n",$_,$h{$_}for sort{$h{$b}<=>$h{$a}}grep{$h{$_}>2}keys%h'

EDIT: If you want words with the same frequency sorted alphabetically, use this enhanced version:

perl -ne'$h{$1}++while m/\b(\w{3,})\b/g}{printf"%-20s %5d\n",$_,$h{$_}for sort{$h{$b}<=>$h{$a}or$a cmp$b}grep{$h{$_}>2}keys%h'


Answer 7:

TF-IDF (Term Frequency - Inverse Document Frequency) is designed for this.

Basically, it asks: which words are frequent in this document compared to all documents?

It gives a lower score to words that appear in all documents, and a higher score to words that appear frequently in a given document.
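For reference, a minimal Python sketch of that calculation (whitespace tokenization and the classic unsmoothed formula; real implementations, such as the library linked below, handle tokenization and smoothing more carefully):

import math
from collections import Counter

def tf_idf(documents):
    # Score each word in each document: term frequency times inverse document frequency.
    doc_counts = [Counter(doc.lower().split()) for doc in documents]
    n_docs = len(doc_counts)
    doc_freq = Counter()                 # number of documents containing each word
    for counts in doc_counts:
        doc_freq.update(counts.keys())
    scores = []
    for counts in doc_counts:
        total = sum(counts.values())
        scores.append({word: (count / total) * math.log(n_docs / doc_freq[word])
                       for word, count in counts.items()})
    return scores

docs = ["the dog chased the cat",
        "the dog ate my homework",
        "the quantum chromodynamics lecture was hard"]
for doc_scores in tf_idf(docs):
    print(sorted(doc_scores.items(), key=lambda kv: -kv[1])[:2])

Note that "the", which appears in every document, gets an IDF of log(1) = 0 and so scores zero everywhere, which is exactly the behaviour described above.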

You can see a worksheet of the calculations here:

https://docs.google.com/spreadsheet/ccc?key=0AreO9JhY28gcdFMtUFJrc0dRdkpiUWlhNHVGS1h5Y2c&usp=sharing

(switch to TFIDF tab at bottom)

Here is a python library:

https://github.com/hrs/python-tf-idf



Answer 8:

I think the most accurate way that still maintains a semblance of simplicity would be to count the word frequencies in your source, then weight them according to their frequencies in common English (or whatever other language) usage.

Words that appear less frequently in common use, like "coffeehouse", are more likely to be keywords than words that appear more often, like "dog". Still, if your source mentions "dog" 500 times and "coffeehouse" twice, it's more likely that "dog" is a keyword even though it's a common word.

Deciding on the weighting scheme would be the difficult part.
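For illustration, here is one possible (entirely hypothetical) scheme in Python: weight each source-text count by the inverse of the word's frequency in general usage. The usage figures below are made up; a real table would come from a large reference corpus or a published word-frequency list.

from collections import Counter

# Made-up words-per-million figures for general English; real numbers would come
# from a reference corpus or a published word-frequency list.
USAGE_PER_MILLION = {"the": 60000.0, "and": 28000.0, "dog": 120.0, "coffeehouse": 0.5}

def weighted_keywords(text, top_n=10):
    counts = Counter(text.lower().split())
    # Rarer words in general usage get a bigger boost; words missing from the
    # table get a neutral weight of 1.0 (tuning this is the hard part).
    def score(word):
        return counts[word] / USAGE_PER_MILLION.get(word, 1.0)
    return sorted(counts, key=score, reverse=True)[:top_n]

print(weighted_keywords("the dog and the dog and the coffeehouse"))
# -> ['coffeehouse', 'dog', 'and', 'the']

Here "coffeehouse" outranks "dog" despite appearing less often, because it is much rarer in general usage; how strongly to apply that boost is the weighting decision mentioned above.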