NLopt with univariate optimization

Posted 2019-07-01 12:18

Question:

Does anyone know if NLopt works with univariate optimization? I tried to run the following code:

using NLopt

function myfunc(x, grad)
    x.^2
end

opt = Opt(:LD_MMA, 1)
min_objective!(opt, myfunc)
(minf,minx,ret) = optimize(opt, [1.234])
println("got $minf at $minx (returned $ret)")

But I get the following error message:

> Error evaluating untitled
LoadError: BoundsError: attempt to access 1-element Array{Float64,1}:
1.234
at index [2]
in myfunc at untitled:8
in nlopt_callback_wrapper at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:415
in optimize! at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:514
in optimize at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:520
in include_string at loading.jl:282
in include_string at /Users/davidzentlermunro/.julia/v0.4/CodeTools/src/eval.jl:32
in anonymous at /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:84
in withpath at /Users/davidzentlermunro/.julia/v0.4/Requires/src/require.jl:37
in withpath at /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:53
[inlined code] from /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:83
in anonymous at task.jl:58
while loading untitled, in expression starting on line 13

If this isn't possible, does anyone know of a univariate optimizer where I can specify bounds and an initial condition?

Answer 1:

There are a couple of things that you're missing here.

  1. You need to specify the gradient (i.e. first derivative) of your function within the function itself. See the tutorial and examples on the GitHub page for NLopt. Not all optimization algorithms require a gradient, but the one you are using, LD_MMA, does (the "D" in "LD" stands for derivative-based). See here for a listing of the various algorithms and which ones require a gradient; a derivative-free sketch follows this list.
  2. You should specify the stopping tolerance(s) you need before you "declare victory" ¹ (i.e. decide that the function is sufficiently optimized). This is the xtol_rel!(opt, 1e-4) in the example below. See also ftol_rel! for a different kind of tolerance condition. According to the documentation, xtol_rel will "stop when an optimization step (or an estimate of the optimum) changes every parameter by less than tol multiplied by the absolute value of the parameter," while ftol_rel will "stop when an optimization step (or an estimate of the optimum) changes the objective function value by less than tol multiplied by the absolute value of the function value." See here under the "Stopping criteria" section for more information on the various options.
  3. The function you are optimizing should return a scalar. In your example, the output is a vector (albeit of length 1): x.^2 is a vectorized operation and produces a vector. If your objective function doesn't ultimately return a single number, it isn't clear what your optimization objective is (what does it mean to minimize a vector? You could minimize its norm, for instance, but minimizing a whole vector is not well defined). The working example below therefore returns x[1]^2 instead.
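
To illustrate point 1: if you would rather not supply a gradient at all, you can switch to a derivative-free algorithm instead. A minimal sketch using LN_BOBYQA, where the grad argument can simply be ignored:

using NLopt

# LN_BOBYQA is a derivative-free local algorithm (the "N" in "LN" means
# no derivatives), so grad is never filled in.
function myfunc_nograd(x::Vector, grad::Vector)
    x[1]^2 # scalar objective; grad is left untouched
end

opt = Opt(:LN_BOBYQA, 1)
xtol_rel!(opt, 1e-4)
min_objective!(opt, myfunc_nograd)
(minf, minx, ret) = optimize(opt, [1.234])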

Below is a working example based on your code. Note that I included the printing output from the example on the GitHub page, which can be helpful in diagnosing problems.

using NLopt

count = 0 # keep track of the number of function evaluations

function myfunc(x::Vector, grad::Vector)
    # LD_MMA is gradient-based, so the gradient must be filled in in place
    if length(grad) > 0
        grad[1] = 2 * x[1]
    end

    global count
    count::Int += 1
    println("f_$count($x)")

    x[1]^2 # return a scalar, not a length-1 vector
end

opt = Opt(:LD_MMA, 1)

xtol_rel!(opt, 1e-4) # stopping tolerance, as in point 2 above

min_objective!(opt, myfunc)
(minf, minx, ret) = optimize(opt, [1.234])

println("got $minf at $minx (returned $ret)")

¹ (In the words of optimization great Yinyu Ye.)