Find the most frequently occurring words in a text

Posted 2020-02-23 04:25

Question:

Can someone help me find the most frequently used two-word and three-word phrases in a text using R?

My text is...

text <- c("There is a difference between the common use of the term phrase and its technical use in linguistics. In common usage, a phrase is usually a group of words with some special idiomatic meaning or other significance, such as \"all rights reserved\", \"economical with the truth\", \"kick the bucket\", and the like. It may be a euphemism, a saying or proverb, a fixed expression, a figure of speech, etc. In grammatical analysis, particularly in theories of syntax, a phrase is any group of words, or sometimes a single word, which plays a particular role within the grammatical structure of a sentence. It does not have to have any special meaning or significance, or even exist anywhere outside of the sentence being analyzed, but it must function there as a complete grammatical unit. For example, in the sentence Yesterday I saw an orange bird with a white neck, the words an orange bird with a white neck form what is called a noun phrase, or a determiner phrase in some theories, which functions as the object of the sentence. Theorists of syntax differ in exactly what they regard as a phrase; however, it is usually required to be a constituent of a sentence, in that it must include all the dependents of the units that it contains. This means that some expressions that may be called phrases in everyday language are not phrases in the technical sense. For example, in the sentence I can't put up with Alex, the words put up with (meaning \'tolerate\') may be referred to in common language as a phrase (English expressions like this are frequently called phrasal verbs\ but technically they do not form a complete phrase, since they do not include Alex, which is the complement of the preposition with.")

Answer 1:

Your text is the "text" vector defined in the question above.

In Natural Language Processing, two-word phrases are called "bi-grams" and three-word phrases "tri-grams"; in general, a sequence of n words is called an "n-gram".
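
To make the idea concrete, here is a tiny base R sketch (no packages, purely for illustration) that forms the bi-grams of a short sentence by pasting each word to its successor:

# Toy example: bi-grams are just adjacent word pairs
words <- strsplit("the quick brown fox", " ")[[1]]
paste(head(words, -1), tail(words, -1))
# [1] "the quick"   "quick brown" "brown fox"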

First, we install the ngram package (available on CRAN):

# Install package "ngram"
install.packages("ngram")

Then, we find the most frequent two-word and three-word phrases:

library(ngram)

# To find all two-word phrases in the text "text":
ng2 <- ngram(text, n = 2)

# To find all three-word phrases in the text "text":
ng3 <- ngram(text, n = 3)

Finally, we can inspect the resulting n-gram objects using various methods:

# Print a truncated or full view of the n-grams
print(ng2, output = "truncated")
print(ng3, output = "full")

# Tabulate all phrases with their counts and proportions
get.phrasetable(ng2)

# Or list every 2- and 3-word phrase in one pass, WEKA-style
ngram::ngram_asweka(text, min = 2, max = 3)
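
Since the question asks for the most frequent phrases, note that get.phrasetable() returns a data frame of phrases with their counts and proportions, sorted (as far as I can tell) by frequency, so head() gives the top ones directly:

# Top 5 most frequent bi-grams and tri-grams
head(get.phrasetable(ng2), 5)
head(get.phrasetable(ng3), 5)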

We can also use Markov chains to babble (randomly generate) new word sequences:

# Generate a random 2-word sequence from the bi-gram model
lnth <- 2
babble(ng = ng2, genlen = lnth)

# Generate a random 3-word sequence from the tri-gram model
lnth <- 3
babble(ng = ng3, genlen = lnth)
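
babble() samples at random, so each call returns a different sequence; the function also takes a seed argument for reproducible output:

# Fix the random seed so the babbled sequence is reproducible
babble(ng = ng2, genlen = 10, seed = 123)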


Answer 2:

Simplest?

require(quanteda)

# bi-grams
topfeatures(dfm(text, ngrams = 2, verbose = FALSE))
##      of_the     a_phrase the_sentence       may_be         as_a       in_the    in_common    phrase_is 
##           5            4            4            3            3            3            2            2 
##  is_usually     group_of 
##           2            2 

# for tri-grams
topfeatures(dfm(text, ngrams = 3, verbose = FALSE))
##     a_phrase_is   group_of_words    of_a_sentence  of_the_sentence   for_example_in   example_in_the 
##               2                2                2                2                2                2 
## in_the_sentence   an_orange_bird orange_bird_with      bird_with_a 
##               2                2                2                2 
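
Note that dfm(x, ngrams = ...) is the older quanteda interface; in quanteda version 3 and later, n-grams are built at the tokens stage instead. A minimal equivalent sketch under the newer API (exact counts may differ slightly with tokenizer defaults):

library(quanteda)

# quanteda >= 3: tokenize first, form n-grams, then build the document-feature matrix
toks <- tokens(text, remove_punct = TRUE)
topfeatures(dfm(tokens_ngrams(toks, n = 2)))    # bi-grams
topfeatures(dfm(tokens_ngrams(toks, n = 3)))    # tri-grams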


Answer 3:

The tidytext package makes this sort of thing pretty simple:

library(tidytext)
library(dplyr)

data_frame(text = text) %>% 
    unnest_tokens(word, text) %>%    # split words
    anti_join(stop_words) %>%    # take out "a", "an", "the", etc.
    count(word, sort = TRUE)    # count occurrences

# Source: local data frame [73 x 2]
# 
#           word     n
#          (chr) (int)
# 1       phrase     8
# 2     sentence     6
# 3        words     4
# 4       called     3
# 5       common     3
# 6  grammatical     3
# 7      meaning     3
# 8         alex     2
# 9         bird     2
# 10    complete     2
# ..         ...   ...
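
tidytext can also produce the bi-gram counts itself: passing token = "ngrams" to unnest_tokens() delegates the tokenizing to the tokenizers package. A minimal sketch for bi-grams:

data_frame(text = text) %>% 
    unnest_tokens(bigram, text, token = "ngrams", n = 2) %>%    # split into bi-grams
    count(bigram, sort = TRUE)    # count occurrences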

To count bigrams and trigrams together in one pass, tokenizers::tokenize_ngrams is useful:

library(tokenizers)

tokenize_ngrams(text, n = 3L, n_min = 2L, simplify = TRUE) %>%    # tokenize bigrams and trigrams
    as_data_frame() %>%    # structure
    count(value, sort = TRUE)    # count

# Source: local data frame [531 x 2]
# 
#           value     n
#          (fctr) (int)
# 1        of the     5
# 2      a phrase     4
# 3  the sentence     4
# 4          as a     3
# 5        in the     3
# 6        may be     3
# 7    a complete     2
# 8   a phrase is     2
# 9    a sentence     2
# 10      a white     2
# ..          ...   ...


Answer 4:

Here's a simple base R approach for the 5 most frequent words:

head(sort(table(strsplit(gsub("[[:punct:]]", "", text), " ")), decreasing = TRUE), 5)

#     a    the     of     in phrase 
#    21     18     12     10      8 

It returns a named integer vector: the values are the frequency counts and the names are the words that were counted (the sketch after the list below shows how to pull the two apart).

  • gsub("[[:punct:]]", "", text) to remove punctuation, since you presumably don't want to count it
  • strsplit(gsub("[[:punct:]]", "", text), " ") to split the string on spaces
  • table() to count unique elements' frequency
  • sort(..., decreasing = TRUE) to sort them in decreasing order
  • head(..., 5) to select only the top 5 most frequent words
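
For example, to pull the two apart, names() gives the words and as.vector() drops the names to leave the counts (top5 is just an illustrative name):

# Store the result, then extract words and counts separately
top5 <- head(sort(table(strsplit(gsub("[[:punct:]]", "", text), " ")), decreasing = TRUE), 5)
names(top5)         # the words
as.vector(top5)     # the counts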


Answer 5:

We can split the text into words and use table() to summarize the frequencies:

# Split on spaces, commas, periods, parentheses, and double quotes
words <- strsplit(text, "[ ,.\\(\\)\"]")

# Tabulate and sort, excluding the empty strings left by consecutive delimiters
sort(table(words, exclude = ""), decreasing = TRUE)
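
As in the previous answer, head() trims the sorted table to just the most frequent entries:

# Top 5 most frequent words
head(sort(table(words, exclude = ""), decreasing = TRUE), 5)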


Tags: r n-gram