I just started writing a script that trains LDA models on large corpora (a minimum of 30M sentences) using the gensim library. Here is the code I am currently using:
import logging

from gensim import corpora, models

def train_model(fname):
    logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
    # build the vocabulary from the corpus file
    dictionary = corpora.Dictionary(line.lower().split() for line in open(fname))
    print "DOC2BOW"
    # convert every sentence to its bag-of-words representation, all held in memory
    corpus = [dictionary.doc2bow(line.lower().split()) for line in open(fname)]
    print "running LDA"
    lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=100,
                          update_every=1, chunksize=10000, passes=1)
    return lda
Running this script on a small corpus (2M sentences), I realized that it needs about 7 GB of RAM, and when I try to run it on the larger corpora it fails with a memory error. The problem is obviously that I load the whole corpus into memory with this line:
corpus = [dictionary.doc2bow(line.lower().split()) for line in open(fname)]
But I am not sure there is another way, because I need to pass the corpus to the LdaModel() call:
lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=100, update_every=1, chunksize=10000, passes=1)
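The only alternative I can think of is some kind of streaming wrapper that generates the bag-of-words vectors on the fly instead of materializing the whole list. A rough sketch of what I mean (this assumes LdaModel can consume any iterable that yields bag-of-words vectors, which I have not verified, and StreamedCorpus is just a name I made up):

class StreamedCorpus(object):
    # hypothetical wrapper: re-reads the file and yields one
    # bag-of-words vector per sentence, so the full corpus is
    # never held in memory at once
    def __init__(self, fname, dictionary):
        self.fname = fname
        self.dictionary = dictionary

    def __iter__(self):
        for line in open(self.fname):
            yield self.dictionary.doc2bow(line.lower().split())

corpus = StreamedCorpus(fname, dictionary)  # nothing loaded into memory yet
lda = models.LdaModel(corpus=corpus, id2word=dictionary, num_topics=100,
                      update_every=1, chunksize=10000, passes=1)

But I do not know whether this is the intended way to do it, or whether LdaModel would still end up pulling everything into memory anyway.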
I searched for a solution to this problem but could not find anything helpful. I would imagine this is a common problem, since such models are mostly trained on very large corpora (usually Wikipedia documents), so there should already be a solution for it.
Any ideas about this issue and how to solve it?