What is a simple way to generate keywords from a text?

Posted 2019-03-08 11:34

I suppose I could take a text and remove high-frequency English words from it. By keywords, I mean that I want to extract the words that are most characteristic of the content of the text (tags). It doesn't have to be perfect; a good approximation is fine for my needs.

Has anyone done anything like that? Do you know of a Perl or Python library that does that?

Edit: Lingua::EN::Tagger is exactly what I asked for. However, I need a library that can work for French text too.

8 Answers
看我几分像从前 · 2019-03-08 11:57

To find the most frequently used words in a text, do something like this:

#!/usr/bin/perl

use strict;
use warnings;

# Slurp the whole text file:
open my $ifh, '<', 'text.txt'
  or die "Cannot open file: $!";
my $text = do { local $/; <$ifh> };
close $ifh;

# Find all the words, and count how many times each appears:
my %words;
for (split ' ', $text) {
    s/[",.]//g;                                # strip simple punctuation
    next unless length > 1 && m/^[\@a-z'-]+$/i;
    $words{$_}++;
}

print "Words, sorted by frequency:\n";
my @data_line;
format FMT =
@<<<<<<<<<<<<<<<<<<<<<<...     @########
@data_line
.
$~ = 'FMT';

# Show words that appear more than twice, most frequent first:
for my $word (sort { $words{$b} <=> $words{$a} }
              grep { $words{$_} > 2 } keys %words) {
    @data_line = ($word, $words{$word});
    write;
}

Example output looks like this:

john@ubuntu-pc1:~/Desktop$ perl frequency.pl 
Words, sorted by frequency:
for                                   32
Jan                                   27
am                                    26
of                                    21
your                                  21
to                                    18
in                                    17
the                                   17
Get                                   13
you                                   13
OTRS                                  11
today                                 11
PSM                                   10
Card                                  10
me                                     9
on                                     9
and                                    9
Offline                                9
with                                   9
Invited                                9
Black                                  8
get                                    8
Web                                    7
Starred                                7
All                                    7
View                                   7
Obama                                  7
戒情不戒烟 · 2019-03-08 12:01

The simplest way to do what you want is this...

>>> text = "this is some of the sample text"
>>> words = [word for word in set(text.split(" ")) if len(word) > 3]
>>> words
['this', 'some', 'sample', 'text']

I don't know of any standard module that does this, but it wouldn't be hard to replace the three-letter cutoff with a lookup into a set of common English words. (Note that the order of the result may vary from run to run, since sets are unordered.)
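
As a rough Perl sketch of that replacement, with a tiny illustrative stop list (real lists run to hundreds of words):

use strict;
use warnings;

# Tiny illustrative stop list; real lists run to hundreds of words.
my %stop = map { $_ => 1 } qw(a an and is it of the this to);

my $text = "this is some of the sample text";
my (%seen, @keywords);
for my $word (split ' ', lc $text) {
    push @keywords, $word unless $stop{$word} || $seen{$word}++;
}
print "@keywords\n";    # some sample text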

\"骚年 ilove
5楼-- · 2019-03-08 12:03

I think the most accurate way that still maintains a semblance of simplicity would be to count the word frequencies in your source, then weight them according to their frequencies in common English (or whatever other language) usage.

Words that appear less frequently in common use, like "coffeehouse", are more likely to be keywords than words that appear more often, like "dog". Still, if your source mentions "dog" 500 times and "coffeehouse" twice, it is more likely that "dog" is a keyword, even though it is a common word.

Deciding on the weighting scheme would be the difficult part.
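
For illustration, here is a minimal Perl sketch of that idea. The background table of per-million-word rates is made up; a real one would come from a large reference corpus.

use strict;
use warnings;

# Made-up background rates (occurrences per million words), for illustration;
# a real table would come from a large reference corpus.
my %background   = ( dog => 150.0, the => 50_000.0, coffeehouse => 0.5 );
my $default_rate = 1.0;    # assumed rate for words missing from the table

my $text = "the dog visited the coffeehouse the dog liked the coffeehouse";

my %count;
$count{$_}++ for split ' ', lc $text;

# Score each word by its local count divided by how common it is in general use:
my %score = map { $_ => $count{$_} / ($background{$_} // $default_rate) }
            keys %count;

for my $word (sort { $score{$b} <=> $score{$a} } keys %score) {
    printf "%-12s %8.4f\n", $word, $score{$word};
}
# "coffeehouse" outranks "dog" despite equal counts in the text.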

够拽才男人 · 2019-03-08 12:06

The name for those "high frequency English words" is stop words, and there are many lists available. I'm not aware of any Python or Perl libraries for this, but you could encode your stop word list in a binary tree or hash (or use Python's frozenset); then, as you read each word from the input text, check whether it is in your stop list and filter it out.

Note that after you remove the stop words, you'll need to do some stemming to normalize the resulting text (removing plurals and -ing/-ed suffixes), then remove all the duplicate "keywords".
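
A rough Perl sketch of that pipeline. The stop list is a tiny sample, and the stemmer is deliberately crude; in practice you would reach for a real stemmer such as Lingua::Stem from CPAN.

use strict;
use warnings;

my %stop = map { $_ => 1 }
    qw(a an and are as at be by for from in is it of on that the this to was were with);

# Deliberately crude suffix stripping, for illustration only;
# use a real stemmer (e.g. Lingua::Stem) in practice.
sub naive_stem {
    my ($word) = @_;
    for my $suffix (qw(ings ing eds ed es s)) {
        return substr($word, 0, -length $suffix)
            if length($word) > length($suffix) + 2 && $word =~ /\Q$suffix\E$/;
    }
    return $word;
}

my $text = "The dogs were running and the dog ran in the parks";
my %seen;
my @keywords = grep { !$seen{$_}++ }          # drop duplicate keywords
               map  { naive_stem($_) }        # normalize plurals, -ing, -ed
               grep { !$stop{$_} }            # drop the stop words
               split ' ', lc $text;

print "@keywords\n";    # dog runn ran park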

贼婆χ · 2019-03-08 12:10

You could try using the Perl module Lingua::EN::Tagger for a quick and easy solution.

A more complicated module, Lingua::EN::Semtags::Engine, uses Lingua::EN::Tagger with a WordNet database to get more structured output. Both are pretty easy to use; just check the documentation on CPAN or use perldoc after you install the module.
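
From memory of the module's CPAN synopsis, basic usage looks roughly like this; treat the method names as unverified and check perldoc Lingua::EN::Tagger before relying on them:

use strict;
use warnings;
use Lingua::EN::Tagger;

my $tagger = Lingua::EN::Tagger->new;
my $text   = "The quick brown fox jumped over the lazy dog near the old coffeehouse.";

# Tag the text, then pull out the nouns as candidate keywords
# (method names recalled from the CPAN synopsis; verify with perldoc):
my $tagged = $tagger->add_tags($text);
my %nouns  = $tagger->get_nouns($tagged);

for my $noun (sort { $nouns{$b} <=> $nouns{$a} } keys %nouns) {
    print "$noun: $nouns{$noun}\n";
}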

查看更多