Confidence interval of random effects with lmer

Posted 2019-06-14 04:00

I am using lmer from the lme4 package to calculate confidence intervals for the variance components.

When I fit the model, there are warning messages:

fit <- lmer(Y~X+Z+X:Z+(X|group),data=sim_data)
Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  unable to evaluate scaled gradient
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge: degenerate  Hessian with 1 negative eigenvalues

I searched a lot to understand why this occurs and eventually concluded that, in the R world, there is a difference between an error and a warning.

I want to compute confidence intervals for the model parameters, but running the code throws an error:

 confint.merMod(fit,oldNames=FALSE)
 Computing profile confidence intervals ...
 Error in if (all(x[iu <- upper.tri(x)] == 0)) t(x[!iu]) else t(x)[!iu] : 
 missing value where TRUE/FALSE needed

Is there another way to obtain CIs for the random effects with lmer?
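
For completeness, confint.merMod itself accepts non-profile methods; a minimal sketch against the fit above (Wald intervals cover only the fixed effects and return NA for the variance components, while the parametric bootstrap is slower but does cover them):

## Wald intervals: fixed effects only (variance components come back NA)
confint(fit, method = "Wald")

## parametric bootstrap: slower, but gives intervals for the variance components too
confint(fit, method = "boot", nsim = 500)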

EDIT:

library(lme4)      # lmer
library(mvtnorm)   # rmvnorm

simfun <- function(J,n_j,g00,g10,g01,g11,sig2_0,sig01,sig2_1){
    N <- J*n_j                    # total number of observations
    x <- rnorm(N)                 # observation-level covariate
    z <- rnorm(J)                 # group-level covariate

    ## group-level random effects: correlated intercept and slope deviations
    mu  <- c(0,0)
    sig <- matrix(c(sig2_0,sig01,sig01,sig2_1),ncol=2)
    u   <- rmvnorm(J,mean=mu,sigma=sig)

    b_0j <- g00 + g01*z + u[,1]   # group-specific intercepts
    b_1j <- g10 + g11*z + u[,2]   # group-specific slopes

    y <- rep(b_0j,each=n_j) + rep(b_1j,each=n_j)*x + rnorm(N,0,0.5)
    data.frame(Y=y,X=x,Z=rep(z,each=n_j),group=rep(1:J,each=n_j))
}

noncoverage <- function(J,n_j,g00,g10,g01,g11,sig2_0,sig01,sig2_1){
    dat <- simfun(J,n_j,g00,g10,g01,g11,sig2_0,sig01,sig2_1)
    ## fit the random-intercept / random-slope model to the simulated data
    fit <- lmer(Y~X+Z+X:Z+(X|group),data=dat)
    fit
}

comb1 = replicate(1000,noncoverage(10,5,1,.3,.3,.3,(1/18),0,(1/18)))
comb26 = replicate(1000,noncoverage(100,50,1,.3,.3,.3,(1/8),0,(1/8)))
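
To turn this into an actual non-coverage check, one possibility is sketched below. The function name noncoverage_sd1, the comparison against sqrt(sig2_1), and the bootstrap fallback when profiling fails are illustrative assumptions, not code from the original post, and the row label "sd_X|group" should be verified against rownames(ci):

## Sketch only: flag whether the CI for the random-slope SD misses its true value;
## assumes simfun() and the libraries loaded above, and that the CI row for the
## slope SD is labelled "sd_X|group" (check rownames(ci) to be sure)
noncoverage_sd1 <- function(J,n_j,g00,g10,g01,g11,sig2_0,sig01,sig2_1){
    dat <- simfun(J,n_j,g00,g10,g01,g11,sig2_0,sig01,sig2_1)
    fit <- lmer(Y~X+Z+X:Z+(X|group),data=dat)
    ci  <- tryCatch(
        confint(fit,oldNames=FALSE),
        error = function(e) confint(fit,method="boot",nsim=200,oldNames=FALSE)
    )
    lims <- ci["sd_X|group",]     # profile/boot CIs are reported on the SD scale
    as.numeric(sqrt(sig2_1) < lims[1] | sqrt(sig2_1) > lims[2])
}

## estimated non-coverage rate for one design cell
mean(replicate(100,noncoverage_sd1(10,5,1,.3,.3,.3,(1/18),0,(1/18))))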

2 Answers
Luminary・发光体
#2 · 2019-06-14 04:35

An lmer model stores its fixed-effect results in a coefficient matrix. You can pull the estimates and standard errors from that matrix to calculate the confidence intervals.

The first row is the intercept of the model, so the second row holds the estimate of your manipulated variable. The first column is the estimate of the effect and the second column is its standard error.

So, replacing MyModel with the name of your model and rounding to two decimals, round(coef(summary(MyModel))[2,1],2) - round(coef(summary(MyModel))[2,2],2)*2 gives you the lower end of the confidence interval, while swapping the subtraction for an addition gives you the upper end: round(coef(summary(MyModel))[2,1],2) + round(coef(summary(MyModel))[2,2],2)*2.
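
As a small sketch of that calculation (MyModel stands in for your fitted model, and 2 is the rough Wald multiplier used above):

est <- coef(summary(MyModel))[2,1]   # estimate of the second fixed effect
se  <- coef(summary(MyModel))[2,2]   # its standard error
round(est - 2*se, 2)                 # approximate lower bound of the 95% CI
round(est + 2*se, 2)                 # approximate upper bound of the 95% CI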

狗以群分
#3 · 2019-06-14 04:36

It depends on exactly what you want from the confidence intervals, but the sim function in the arm package provides a great way to draw repeated samples from the posterior of an lmer or glmer object, giving a sense of the variability in the coefficients of both the fixed and random terms.

In the merTools package, we've written a wrapper that simplifies the process of extracting these values and interacting with them:

library(merTools)
randomSims <- REsim(fit, n.sims = 500)
# and to plot it
plotREsim(randomSims)

There are a number of other tools for exploring these in merTools. If you want the actual resulting simulations, though, you're better off using arm::sim.
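
A minimal sketch of that route, assuming the fit object from the question and the fixef/ranef slot layout of arm's sim.merMod class (worth confirming with str(sims)):

library(arm)
sims <- sim(fit, n.sims = 1000)

## 95% intervals for the fixed effects (fixef slot: n.sims x n.coefficients)
t(apply(sims@fixef, 2, quantile, probs = c(0.025, 0.975)))

## 95% intervals for the group-level random intercepts
## (ranef slot: a list of arrays, n.sims x levels x terms)
re_group <- sims@ranef$group
t(apply(re_group[, , 1], 2, quantile, probs = c(0.025, 0.975)))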
