Unexpected behaviour of ftol_abs and ftol_rel in NLopt

Posted 2019-08-08 15:58

UPDATE: For anyone else who visits this page, it is worth having a look at this SO question and answer as I suspect the solution there is relevant to the problem I was having here.

This question duplicates one I have asked on the julia-users mailing list, but I haven't gotten a response there (admittedly it has only been 4 days), so I thought I would ask here.

I am calling the NLopt API from Julia, although I think my question is independent of the Julia language.

I am trying to solve an optimisation problem using COBYLA, but in many cases I fail to trigger the stopping criteria. My actual problem is reasonably complicated, but I can reproduce the problematic behaviour with a much simpler example.

Specifically, I try to minimize x1^2 + x2^2 + 1 using COBYLA, and I set both ftol_rel and ftol_abs to 0.5. My objective function includes a statement to print the current value to the console, so I can watch the convergence. The final five values printed to the console during convergence are:

1.161
1.074
1.004
1.017
1.038

My understanding is that any of these steps should have triggered the stopping criteria. Each step changes the objective by less than 0.5, which should trigger ftol_abs. Further, each value is approximately 1, so 0.5 * |f| ≈ 0.5, and every step should therefore also have triggered ftol_rel. In fact, the same is true of the last 8 steps of the convergence routine.
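To make this concrete, here is a quick check of those five values against the criteria as I understand them from the NLopt documentation (ftol_abs: stop when a step changes the function value by less than tol; ftol_rel: stop when a step changes the function value by less than tol * |f|):

# Check the last five printed objective values against the two criteria
f_vals = [1.161, 1.074, 1.004, 1.017, 1.038]
ftol = 0.5  # both ftol_abs and ftol_rel were set to 0.5

for i in 2:length(f_vals)
    Δf = abs(f_vals[i] - f_vals[i-1])     # change in f caused by this step
    abs_met = Δf < ftol                   # ftol_abs criterion
    rel_met = Δf < ftol * abs(f_vals[i])  # ftol_rel criterion
    println("step $(i-1): |Δf| = $(Δf), ftol_abs met: $(abs_met), ftol_rel met: $(rel_met)")
end

Every step satisfies both criteria, yet the optimiser keeps going.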

NLopt has been around for a while, so I'm guessing the problem is with my understanding of how ftol_abs and ftol_rel work, rather than being a bug.

Can anyone shed any light on why the stopping criteria are not being triggered much earlier?

If it is of any use, the following Julia code snippet can be used to reproduce everything I have just stated:

using NLopt

function objective_function(param::Vector{Float64}, grad::Vector{Float64})
    # COBYLA is derivative-free, so `grad` is never written to
    obj_func_value = param[1]^2 + param[2]^2 + 1.0
    println("Objective func value = ", obj_func_value)
    println("Parameter value = ", param)
    return obj_func_value
end

opt1 = Opt(:LN_COBYLA, 2)
lower_bounds!(opt1, [-10.0, -10.0])
upper_bounds!(opt1, [10.0, 10.0])
ftol_rel!(opt1, 0.5)   # relative tolerance on the objective value
ftol_abs!(opt1, 0.5)   # absolute tolerance on the objective value
min_objective!(opt1, objective_function)
(fObjOpt, paramOpt, flag) = optimize(opt1, [9.0, 9.0])
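If it helps with diagnosis: assuming the usual NLopt.jl return convention, the third return value is a Symbol reporting which stopping criterion fired, so it can confirm whether the tolerance check was ever the reason for termination:

# flag should be :FTOL_REACHED if ftol_abs/ftol_rel triggered the stop
println("Return code = ", flag)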

Tags: julia nlopt
1 Answer

一纸荒年 Trace · 2019-08-08 16:26

Presumably, ftol_rel and ftol_abs are meant to provide numerically guaranteed error bounds. The earlier values are close enough, but the algorithm may not yet be able to guarantee that: for example, its internal estimate of the gradient or Hessian at the evaluation point may not yet support such a guarantee, so it continues a bit further.

To be sure, it's best to look at the optimization algorithm source. If I manage this, I will add it to this answer.

Update: The COBYLA algorithm approximates the gradient (vector derivative) numerically, by building a linear model from several evaluation points; a sketch of this idea follows below. As mentioned above, this model is what the algorithm uses to estimate how large the remaining error might be. Such error estimates can only really be mathematically guaranteed for functions restricted to some nice family (e.g. polynomials with some bound on degree).
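To illustrate the idea (a hypothetical sketch, not NLopt's actual implementation): in two dimensions, interpolating a plane m(x) = c + g'x through three evaluation points gives a slope g that serves as the gradient estimate. The points and function here are made up, using the objective from the question:

# Hypothetical illustration of COBYLA-style linear modelling; not NLopt code.
f(x) = x[1]^2 + x[2]^2 + 1.0           # the objective from the question

# Three evaluation points (a simplex) around the current iterate [1.0, 1.0]
pts = ([1.0, 1.0], [1.1, 1.0], [1.0, 1.1])
fvals = [f(p) for p in pts]

# Interpolate the plane m(x) = c + g[1]*x[1] + g[2]*x[2] through the points
A = [1.0 pts[1][1] pts[1][2];
     1.0 pts[2][1] pts[2][2];
     1.0 pts[3][1] pts[3][2]]
coeffs = A \ fvals                     # coeffs = [c, g1, g2]
g = coeffs[2:3]
println("model gradient ≈ ", g, " (true gradient at [1,1] is [2.0, 2.0])")

The quality of this estimate depends entirely on the geometry of the evaluation points, which is why the algorithm may need extra evaluations before it trusts its own error model.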

Take-home message: it's OK. This is not a bug, just the best the algorithm can do; let it have those extra iterations.
