I am trying to find the local minimum of a function, and the parameters have a fixed sum. For example,
Fx = 10 - 5x1 + 2x2 - x3
and the conditions are as follows,
x1 + x2 + x3 = 15
(x1,x2,x3) >= 0
Here the sum of x1, x2, and x3 has a known value, and each of them is nonnegative. In R, it would look something like this,
Fx = function(x) {10 - 5*x[1] + 2*x[2] - x[3]}
opt = optim(c(1,1,1), Fx, method = "L-BFGS-B", lower=c(0,0,0), upper=c(15,15,15))
I also tried to force the sum to be fixed by using inequalities with constrOptim. I still think this may be a plausible workaround, but I was unable to make it work. This is a simplified version of the real problem, but any help would be much appreciated.
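One way to sidestep the equality constraint entirely (a sketch, not something from the question, and in Python/SciPy rather than R) is to substitute x3 = 15 - x1 - x2 so the sum constraint holds by construction, leaving only box bounds that a bound-constrained optimizer can handle. Note that the substitution drops the explicit x3 >= 0 bound; for this particular objective the optimum happens to satisfy it anyway, but in general you would need to add it back as an inequality.

```python
from scipy.optimize import minimize

# The question's objective: F(x) = 10 - 5*x1 + 2*x2 - x3
def fx(x):
    return 10 - 5*x[0] + 2*x[1] - x[2]

# Eliminate the equality constraint by substituting x3 = 15 - x1 - x2.
# Caveat: x3 >= 0 is no longer enforced explicitly; it happens to hold
# at the optimum of this particular objective.
def fx_reduced(y):
    x1, x2 = y
    return fx([x1, x2, 15 - x1 - x2])

res = minimize(fx_reduced, [5.0, 5.0], method="L-BFGS-B",
               bounds=[(0, 15), (0, 15)])
x3 = 15 - sum(res.x)  # recover the eliminated variable
```

This is essentially the same "workaround" idea, just done by variable elimination instead of converting the equality into two inequalities.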
On this occasion optim will not work, obviously, because you have equality constraints. constrOptim will not work either, for the same reason (I tried converting the equality into two inequalities, i.e. greater than and less than 15, but this didn't work with constrOptim).
However, there is a package dedicated to this kind of problem, and that is Rsolnp.
You use it the following way:
#load the package that provides solnp
library(Rsolnp)

#specify your function
opt_func <- function(x) {
  10 - 5*x[1] + 2*x[2] - x[3]
}

#specify the equality function. The number 15 (to which the function is equal)
#is specified as an additional argument (eqB) in the call below
equal <- function(x) {
  x[1] + x[2] + x[3]
}

#the optimiser - minimises by default
solnp(c(5,5,5),          #starting values (chosen to be positive and sum to 15)
      opt_func,          #function to optimise
      eqfun=equal,       #equality function
      eqB=15,            #the equality constraint
      LB=c(0,0,0),       #lower bound for parameters i.e. greater than zero
      UB=c(100,100,100)) #upper bound for parameters (I just chose 100 randomly)
Output:
> solnp(c(5,5,5),
+ opt_func,
+ eqfun=equal,
+ eqB=15,
+ LB=c(0,0,0),
+ UB=c(100,100,100))
Iter: 1 fn: -65.0000 Pars: 14.99999993134 0.00000002235 0.00000004632
Iter: 2 fn: -65.0000 Pars: 14.999999973563 0.000000005745 0.000000020692
solnp--> Completed in 2 iterations
$pars
[1] 1.500000e+01 5.745236e-09 2.069192e-08
$convergence
[1] 0
$values
[1] -10 -65 -65
$lagrange
[,1]
[1,] -5
$hessian
[,1] [,2] [,3]
[1,] 121313076 121313076 121313076
[2,] 121313076 121313076 121313076
[3,] 121313076 121313076 121313076
$ineqx0
NULL
$nfuneval
[1] 126
$outer.iter
[1] 2
$elapsed
Time difference of 0.1770101 secs
$vscale
[1] 6.5e+01 1.0e-08 1.0e+00 1.0e+00 1.0e+00
So the resulting optimal values are:
$pars
[1] 1.500000e+01 5.745236e-09 2.069192e-08
which means that the first parameter is 15 and the other two are zero. This is indeed the global minimum of your function: x2 has a positive coefficient (+2), so any weight on it increases the objective, while x1's coefficient (-5) reduces the objective more per unit than x3's (-1). The whole budget of 15 therefore goes to x1, making (15, 0, 0) the global minimum subject to the constraints.
The function worked great!
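For comparison, here is a rough SciPy analogue of the solnp call (a sketch in Python, not part of the answer itself): SLSQP handles the same combination of an equality constraint and bounds.

```python
from scipy.optimize import minimize

def opt_func(x):
    return 10 - 5*x[0] + 2*x[1] - x[2]

# equality constraint: x1 + x2 + x3 - 15 == 0
cons = [{"type": "eq", "fun": lambda x: x[0] + x[1] + x[2] - 15}]

res = minimize(opt_func, [5.0, 5.0, 5.0], method="SLSQP",
               bounds=[(0, None)] * 3, constraints=cons)
```

It converges to the same solution, x = (15, 0, 0) with objective -65.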
This is actually a linear programming problem, so a natural approach would be to use a linear programming solver such as the lpSolve package. You need to provide an objective function and a constraint matrix, and the solver will do the rest:
library(lpSolve)
mod <- lp("min", c(-5, 2, -1), matrix(c(1, 1, 1), nrow=1), "=", 15)
Then you can access the optimal solution and the objective value (adding back the constant term 10, which is not provided to the solver). Note that lp treats all decision variables as nonnegative by default, so the x >= 0 constraints are handled automatically:
mod$solution
# [1] 15 0 0
mod$objval + 10
# [1] -65
A linear programming solver should be a good deal quicker than a general nonlinear optimization solver and shouldn't have trouble returning the exact optimal solution (instead of a nearby point that may be subject to rounding errors).
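The same LP can be expressed with SciPy's linprog (a Python sketch for comparison; the answer above uses lpSolve). As with lp, the default bounds are x >= 0, and the constant 10 is added back afterwards.

```python
from scipy.optimize import linprog

# Objective coefficients for the variable part of F(x) = 10 - 5*x1 + 2*x2 - x3
c = [-5, 2, -1]

# One equality constraint: x1 + x2 + x3 = 15; bounds default to x >= 0
res = linprog(c, A_eq=[[1, 1, 1]], b_eq=[15], bounds=[(0, None)] * 3)

# res.x is the optimal solution; res.fun + 10 restores the constant term
```

As expected, the solver returns x = (15, 0, 0) and res.fun + 10 = -65.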