MLE error in R: non-finite finite-difference value

Published 2019-09-07 08:26

Question:

I am working on a loss aversion model in R (I'm a beginner) and want to estimate some parameters from a dataset with three columns: loss and gain values (both continuous) and a column with decisions coded as 0 or 1 (binary): dropbox.com/s/fpw3obrqcx8ld1q/GrandAverage.RData?dl=0 The part of the code I am using for this is given below:

library(stats4)  # mle() lives in the stats4 package

set <- GrandAverage[, 5:7]

# Negative log-likelihood of the gamble-acceptance model
Beh.Parameters <- function(lambda, alpha, temp) {
  u <- 0.5 * set$Gain^alpha + 0.5 * lambda * set$Loss^alpha
  GambleProbability <- 1 / (1 + exp(-temp * u))
  loglike <- set$Decision * log(GambleProbability) +
    (1 - set$Decision) * log(1 - GambleProbability)
  return(-sum(loglike))
}

temp_s <- 0.1  # runif(1, 0.1, 1)

ML.estim1 <- mle(Beh.Parameters, start = list(lambda = 1, alpha = 1, temp = temp_s),
                 nobs = length(set$Decision))
ML.estim2 <- mle(Beh.Parameters, start = list(lambda = 0.1, alpha = 0.1, temp = temp_s),
                 nobs = length(set$Decision))

I use the mle function to estimate the three parameters (lambda, alpha and temp). Without the alpha parameter I receive this output, for example:

ML.estim1

Call:
mle(minuslogl = Beh.Parameters, start = list(lambda = 1, temp = temp_s),
    nobs = length(set$Decision))

Coefficients:
  lambda     temp
1.298023 1.041057

So when I run it without the alpha parameter it works fine, but when I include alpha I receive these two errors:

Error in optim(start, f, method = method, hessian = TRUE, ...) :
  non-finite finite-difference value [2]
(for the first MLE)

Error in optim(start, f, method = method, hessian = TRUE, ...) :
  initial value in 'vmmin' is not finite
(for the second MLE)

I have tried recoding the matrix, singular value decomposition, BFGS, etc. Any help is welcome. Thanks in advance.

Answer 1:

Your Loss variable is negative. In R, raising a negative value to a fractional power (i.e. set$Loss^alpha where alpha is non-integer) returns NaN. (The only general alternative is to return a complex-valued answer, which you probably don't want.) Did you mean to code Loss as positive rather than negative? Or maybe you want -abs(set$Loss)^alpha?
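To illustrate (a minimal sketch; whether the sign fix matches your intended model is an assumption on my part):

# A negative base with a fractional exponent has no real-valued result,
# so R returns NaN:
(-2)^0.5   # NaN
(-2)^2     # 4  (integer exponents are fine)

# One possible fix, using the substitution suggested above
# (assumes Loss stores negative amounts and lambda stays positive):
u <- 0.5 * set$Gain^alpha + 0.5 * lambda * (-abs(set$Loss)^alpha)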

As a general purpose debugging tip, it helps to add

cat(lambda,alpha,temp,-sum(loglike),"\n")

as the second-to-last line of your objective function so you can see more clearly what's going on.
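For concreteness, the objective function with that trace line inserted would look like this (a sketch; the model logic is unchanged from the question):

Beh.Parameters <- function(lambda, alpha, temp) {
  u <- 0.5 * set$Gain^alpha + 0.5 * lambda * set$Loss^alpha
  GambleProbability <- 1 / (1 + exp(-temp * u))
  loglike <- set$Decision * log(GambleProbability) +
    (1 - set$Decision) * log(1 - GambleProbability)
  cat(lambda, alpha, temp, -sum(loglike), "\n")  # trace each evaluation
  return(-sum(loglike))
}

Each call made by optim() then prints the current parameter values and the negative log-likelihood, so you can see exactly which parameter combination first produces a non-finite value.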