The LDA topic modeling in the text2vec package is amazing; it is indeed much faster than topicmodels.
However, I don't know how to get the probability that each document belongs to each topic, as in the example below:
        V1          V2          V3          V4
1  0.001025237 7.89E-05    7.89E-05    7.89E-05
2  0.002906977 0.002906977 0.014534884 0.002906977
3  0.003164557 0.003164557 0.003164557 0.003164557
4  7.21E-05    7.21E-05    0.000360334 7.21E-05
5  0.000804433 8.94E-05    8.94E-05    8.94E-05
6  5.63E-05    5.63E-05    5.63E-05    5.63E-05
7  0.001984127 0.001984127 0.001984127 0.001984127
8  0.003515625 0.000390625 0.000390625 0.000390625
9  0.000748503 0.000748503 0.003742515 0.003742515
10 0.000141723 0.00297619  0.000141723 0.000708617
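For what it's worth, the matrix above is the kind of document-topic distribution I can get from the topicmodels package; a minimal sketch, assuming lda_fit is an LDA model already fitted with topicmodels (lda_fit and doc_topic_prob are just names I made up):

library(topicmodels)
# posterior() returns per-document topic probabilities; each row sums to 1
doc_topic_prob <- posterior(lda_fit)$topics
head(doc_topic_prob[, 1:4])   # first four topics, as in the table above

I would like the equivalent matrix from text2vec.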
Here is my code for the text2vec LDA:
library(text2vec)
library(rmmseg4j)   # provides mmseg4j() for Chinese word segmentation

ss2  <- as.character(stressor5$weibo)
seg2 <- mmseg4j(ss2)

# Create vocabulary. Terms will be unigrams (simple words).
it_test <- itoken(seg2, progressbar = FALSE)
vocab2  <- create_vocabulary(it_test)
pruned_vocab2 <- prune_vocabulary(vocab2,
                                  term_count_min = 10,
                                  doc_proportion_max = 0.5,
                                  doc_proportion_min = 0.001)

# Build the document-term matrix from the pruned vocabulary
vectorizer2 <- vocab_vectorizer(pruned_vocab2)
dtm_test <- create_dtm(it_test, vectorizer2)

# Fit the LDA model; pass the same pruned vocabulary that the
# vectorizer was built from (I had mistakenly passed vocab2 here)
lda_model <- LDA$new(n_topics = 1000, vocabulary = pruned_vocab2,
                     doc_topic_prior = 0.1, topic_word_prior = 0.01)
doc_topic_distr <- lda_model$fit_transform(dtm_test, n_iter = 1000,
                                           convergence_tol = 0.01,
                                           check_convergence_every_n = 10)
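My guess is that doc_topic_distr holds per-document topic counts rather than probabilities, so I assume a row-wise normalization would give me what I want, but I am not sure this is correct (doc_topic_prob is just a name I made up):

# check whether the rows already sum to 1 (i.e. are already probabilities)
summary(rowSums(doc_topic_distr))

# if they are counts, normalize each row to get per-document topic proportions;
# possibly doc_topic_prior should be added before normalizing (smoothed estimate)
doc_topic_prob <- doc_topic_distr / rowSums(doc_topic_distr)

Is this the right way to recover the document-topic probabilities from text2vec?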