Multinomial Naive Bayes classifier in R

Posted 2019-07-29 06:56

I am re-asking the question (with the same name) Multinomial Naive Bayes Classifier. That question has an accepted answer that I think is either wrong or, at the least, needs more explanation, because I still don't understand it.

So far, every Naive Bayes classifier that I've seen in R (including bnlearn and klaR) assumes that the features have Gaussian likelihoods.

Is there an implementation of a Naive Bayes classifier in R that uses multinomial likelihoods (akin to scikit-learn's MultinomialNB)?

In particular, if it turns out there is some way of calling naive.bayes in either of these packages so that the likelihoods are estimated with a multinomial distribution, I would really appreciate an example of how that's done. I've searched for examples and haven't found any. For instance, is this what the usekernel argument is for in klaR::NaiveBayes?
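
A minimal sketch of the call I'm asking about, using the built-in iris data just to show the argument (I can't tell from the documentation whether this actually makes the likelihood multinomial):

    library(klaR)

    # usekernel = TRUE is documented as replacing the gaussian density for each
    # numeric feature with a kernel density estimate, which does not sound like
    # a multinomial likelihood, but I may be misreading it
    fit_gauss <- NaiveBayes(Species ~ ., data = iris, usekernel = FALSE)
    fit_kde   <- NaiveBayes(Species ~ ., data = iris, usekernel = TRUE)

    head(predict(fit_kde, iris)$posterior, 2)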

1 Answer

一纸荒年 Trace · 2019-07-29 07:20

I don't know what algorithm the predict method uses on naive.bayes models, but you can calculate the predictions yourself from the conditional probability tables (the MLE estimates).

    # You may need to get dependencies of gRain from here
    #   source("http://bioconductor.org/biocLite.R")
    #   biocLite("RBGL")

    library(bnlearn)
    library(gRain)

Using the first example from the naive.bayes help page:

    data(learning.test)

    # build the naive Bayes network structure
    bn <- naive.bayes(learning.test, "A")

    # fit the model, i.e. estimate the cpt's
    fit <- bn.fit(bn, learning.test)

    # check that the cpt's (proportions) are the mle of the multinomial dist.
    # Node A:
    all.equal(prop.table(table(learning.test$A)), fit$A$prob)
    # Node B:
    all.equal(prop.table(table(learning.test$B, learning.test$A),2), fit$B$prob)


    # look at predictions - include probabilities 
    pred <- predict(bn, learning.test, prob=TRUE)
    pr <- data.frame(t(attributes(pred)$prob))
    pr <- cbind(pred, pr)

    head(pr, 2)

#   preds          a          b          c
# 1     c 0.29990442 0.33609392 0.36400165
# 2     a 0.80321241 0.17406706 0.02272053
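
As a sanity check, you can also apply Bayes' rule directly to the fitted CPTs. This is only a sketch for the first observation, but it should give roughly the same numbers as the first row above:

    # unnormalised posterior for observation 1:
    #   P(A = a) * prod_i P(X_i = x_i | A = a)
    obs <- learning.test[1, c("B", "C", "D", "E", "F")]

    unnorm <- fit$A$prob
    for (node in names(obs)) {
      # each cpt has the node's own states in rows and the class A in columns
      unnorm <- unnorm * fit[[node]]$prob[as.character(obs[[node]]), ]
    }

    # normalise to get P(A | evidence)
    unnorm / sum(unnorm)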

Calculate the prediction probabilities from the CPTs by running queries with gRain:

    # query using the junction-tree algorithm
    jj <- compile(as.grain(fit))

    # Get predicted probs for the first observation
    net1 <- setEvidence(jj, nodes = c("B", "C", "D", "E", "F"),
                        states = c("c", "b", "a", "b", "b"))

    querygrain(net1, nodes="A", type="marginal")

# $A
# A
#        a         b         c 
# 0.3001765 0.3368022 0.3630213 

    # Get predicted probs for the second observation
    net2 <- setEvidence(jj, nodes = c("B", "C", "D", "E", "F"),
                        states = c("a", "c", "a", "b", "b"))

    querygrain(net2, nodes="A", type="marginal")

# $A
# A
#         a          b          c 
# 0.80311043 0.17425364 0.02263593 

So these probabilities are pretty close to what you get from bnlearn, and they are calculated using the MLEs.
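
If you want to check more than the two observations above, something along these lines (just a sketch) runs the same gRain query for the first 10 rows and compares the result with the probabilities bnlearn attached to pred:

    evid_nodes <- c("B", "C", "D", "E", "F")

    # gRain posteriors for the first 10 observations
    grain_probs <- t(sapply(1:10, function(i) {
      states <- vapply(learning.test[i, evid_nodes], as.character, character(1))
      ev <- setEvidence(jj, nodes = evid_nodes, states = states)
      querygrain(ev, nodes = "A", type = "marginal")$A
    }))

    # probabilities bnlearn attached to pred, one row per observation
    bn_probs <- t(attributes(pred)$prob)[1:10, ]

    # largest discrepancy over these observations
    max(abs(grain_probs - bn_probs))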
