My file has over 4M rows and I need a more efficient way of converting my data to a corpus and document term matrix such that I can pass it to a Bayesian classifier.
Consider the following code:
library(tm)
GetCorpus <- function(textVector)
{
    doc.corpus <- Corpus(VectorSource(textVector))
    doc.corpus <- tm_map(doc.corpus, tolower)
    doc.corpus <- tm_map(doc.corpus, removeNumbers)
    doc.corpus <- tm_map(doc.corpus, removePunctuation)
    doc.corpus <- tm_map(doc.corpus, removeWords, stopwords("english"))
    doc.corpus <- tm_map(doc.corpus, stemDocument, "english")
    doc.corpus <- tm_map(doc.corpus, stripWhitespace)
    doc.corpus <- tm_map(doc.corpus, PlainTextDocument)
    return(doc.corpus)
}
data <- data.frame(
c("Let the big dogs hunt","No holds barred","My child is an honor student"), stringsAsFactors = F)
corp <- GetCorpus(data[,1])
inspect(corp)
dtm <- DocumentTermMatrix(corp)
inspect(dtm)
The output:
> inspect(corp)
<<VCorpus (documents: 3, metadata (corpus/indexed): 0/0)>>
[[1]]
<<PlainTextDocument (metadata: 7)>>
let big dogs hunt
[[2]]
<<PlainTextDocument (metadata: 7)>>
holds bar
[[3]]
<<PlainTextDocument (metadata: 7)>>
child honor stud
> inspect(dtm)
<<DocumentTermMatrix (documents: 3, terms: 9)>>
Non-/sparse entries: 9/18
Sparsity : 67%
Maximal term length: 5
Weighting : term frequency (tf)
Terms
Docs bar big child dogs holds honor hunt let stud
character(0) 0 1 0 1 0 0 1 1 0
character(0) 1 0 0 0 1 0 0 0 0
character(0) 0 0 1 0 0 1 0 0 1
My question is, what can I use to create a corpus and DTM faster? It seems to be extremely slow if I use over 300k rows.

I have heard that I could use data.table, but I am not sure how. I have also looked at the qdap package, but it gives me an error when trying to load it, plus I don't even know if it will work.
You have a few choices. @TylerRinker commented about qdap, which is certainly a way to go.

Alternatively (or additionally) you could also benefit from a healthy dose of parallelism. There's a nice CRAN page detailing HPC resources in R. It's a bit dated, though, and the multicore package's functionality is now contained within parallel.

You can scale up your text mining using the multicore apply functions of the parallel package or with cluster computing (also supported by that package, as well as by snowfall and biopara).
Another way to go is to employ a MapReduce approach. A nice presentation on combining tm and MapReduce for big data is available here. While that presentation is a few years old, all of the information is still current, valid and relevant. The same authors have a newer academic article on the topic, which focuses on the tm.plugin.dc plugin. To get around having a VectorSource instead of a DirSource, you can use coercion.

If none of those solutions fit your taste, or if you're just feeling adventurous, you might also see how well your GPU can tackle the problem. There's a lot of variation in how well GPUs perform relative to CPUs, and this may be a use case. If you'd like to give it a try, you can use gputools or the other GPU packages mentioned on the CRAN HPC Task View.

Example:
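As a minimal sketch of the multicore-apply route (the chunking scheme, core count and cleaning steps below are illustrative assumptions, and mclapply() relies on forking, so on Windows you would use parLapply() with a cluster instead):

library(tm)
library(parallel)

texts <- rep(c("Let the big dogs hunt", "No holds barred",
               "My child is an honor student"), 100000)

# split the text vector into one chunk per core
ncores <- max(1L, detectCores() - 1L, na.rm = TRUE)
chunks <- split(texts, cut(seq_along(texts), ncores, labels = FALSE))

clean_chunk <- function(x) {
    # tm's transformations also have methods for plain character vectors
    x <- tolower(x)
    x <- removeNumbers(x)
    x <- removePunctuation(x)
    x <- removeWords(x, stopwords("english"))
    stripWhitespace(x)
}

# run the expensive cleaning in parallel, then reassemble and build one DTM
cleaned <- unlist(mclapply(chunks, clean_chunk, mc.cores = ncores),
                  use.names = FALSE)
dtm <- DocumentTermMatrix(Corpus(VectorSource(cleaned)))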
Which approach?
data.table is definitely the right way to go. Regex operations are slow, although the ones in stringi are much faster (in addition to being much better).

I went through many iterations of solving this problem in creating quanteda::dfm() for my quanteda package (see the GitHub repo here). The fastest solution, by far, involves using the data.table and Matrix packages to index the documents and tokenised features, counting the features within documents, and plugging the result straight into a sparse matrix.

In the code below, I've taken as an example the texts that ship with the quanteda package, which you can (and should!) install from CRAN, or get the development version from GitHub.
I'd be very interested to see how it works on your 4M documents. Based on my experience working with corpuses of that size, it will work pretty well (if you have enough memory).
Note that in all my profiling, I could not improve the speed of the data.table operations through any sort of parallelisation, because of the way they are written in C++.
Core of the quanteda dfm() function

Here is the bare bones of the data.table-based source code, in case anyone wants to have a go at improving it. It takes as input a list of character vectors representing the tokenized texts. In the quanteda package, the full-featured dfm() works directly on character vectors of documents or corpus objects, and implements lowercasing, removal of numbers, and removal of spacing by default (but these can all be modified if wished).
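Here is a simplified sketch of that indexing approach, in case it is useful as a starting point; the function name dfm_sketch and the variable names are illustrative rather than quanteda's actual internals:

library(data.table)
library(Matrix)

dfm_sketch <- function(tokens) {
    # 'tokens' is a list of character vectors, one vector per document
    ndoc <- length(tokens)
    docnames <- if (is.null(names(tokens))) paste0("text", seq_len(ndoc)) else names(tokens)

    # one row per token occurrence: which document, which feature
    dt <- data.table(doc     = rep(seq_len(ndoc), lengths(tokens)),
                     feature = unlist(tokens, use.names = FALSE))
    dt <- dt[feature != ""]                 # drop any empty tokens

    # count each feature within each document
    counts <- dt[, .N, by = .(doc, feature)]

    # map features to column indices and plug the triplets into a sparse matrix
    feats <- sort(unique(counts$feature))
    counts[, fidx := match(feature, feats)]
    sparseMatrix(i = counts$doc, j = counts$fidx, x = counts$N,
                 dims = c(ndoc, length(feats)),
                 dimnames = list(docs = docnames, features = feats))
}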
That's just a snippet of course, but the full source code is easily found on the GitHub repo (dfm-main.R).

quanteda on your example
How's this for simplicity?
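A minimal sketch with the current quanteda API (tokens_remove(), tokens_wordstem() and friends are today's function names and may differ from the release this answer originally targeted):

library(quanteda)

txt <- c("Let the big dogs hunt", "No holds barred", "My child is an honor student")

toks <- tokens(txt, remove_punct = TRUE, remove_numbers = TRUE)
toks <- tokens_remove(toks, stopwords("en"))
toks <- tokens_wordstem(toks, language = "english")

myDfm <- dfm(toks)   # a sparse document-feature matrix, ready for modelling
myDfm

If the end goal is a Bayesian classifier, a dfm like this can be passed, for example, to textmodel_nb() in the quanteda.textmodels package.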
I think you may want to consider a more regex-focused solution. These are some of the problems/thinking I'm wrestling with as a developer. I'm currently looking at the stringi package heavily for development as it has some consistently named functions that are wicked fast for string manipulation.

In this response I'm attempting to use any tool I know of that is faster than the more convenient methods tm may give us (and certainly much faster than qdap). Here I haven't even explored parallel processing or data.table/dplyr, and instead focus on string manipulation with stringi, keeping the data in a matrix and manipulating it with specific packages meant to handle that format. I take your example and multiply it 100000x. Even with stemming, this takes 17 seconds on my machine.
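As a rough, hedged sketch of that kind of stringi-based cleaning pass (the regexes and the helper name below are illustrative, not the code behind the 17-second timing):

library(stringi)
library(tm)          # only used here for stopwords()
library(SnowballC)   # for wordStem()

sw <- stopwords("english")

clean_tokens <- function(x) {
    x <- stri_trans_tolower(x)
    x <- stri_replace_all_regex(x, "[[:punct:][:digit:]]+", " ")  # drop punctuation and numbers
    x <- stri_replace_all_regex(x, "\\s+", " ")                   # collapse whitespace
    toks <- stri_split_fixed(stri_trim_both(x), " ")
    lapply(toks, function(w) wordStem(w[w != "" & !w %in% sw], language = "english"))
}

# the question's three sentences, multiplied 100000x
texts <- rep(c("Let the big dogs hunt", "No holds barred",
               "My child is an honor student"), 100000)

system.time(toks <- clean_tokens(texts))

From here the token lists can be counted into a sparse document-term matrix (for example with Matrix::sparseMatrix(), as in the data.table sketch above) without ever building a tm corpus; the timing on your machine will of course differ.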