WORST-CASE EVALUATION COMPLEXITY AND OPTIMALITY OF SECOND-ORDER METHODS FOR NONCONVEX SMOOTH OPTIMIZATION
We establish or refute the optimality of inexact second-order methods for unconstrained nonconvex optimization from the point of view of worst-case evaluation complexity, improving and generalizing our previous results. To this aim, we consider a new general class of inexact second-order algorithms for unconstrained optimization that includes regularization and trust-region variations of Newton's method as well as of their linesearch variants. For each method in this class and arbitrary accuracy threshold ε ∈ (0, 1), we exhibit a smooth objective function with bounded range, whose gradient is globally Lipschitz continuous and whose Hessian is α-Hölder continuous (for given α ∈ [0, 1]), for which the method in question takes at least ⌊ε^{−(2+α)/(1+α)}⌋ function evaluations to generate a first iterate whose gradient is smaller than ε in norm. Moreover, we construct another function on which Newton's method takes ⌊ε^{−2}⌋ evaluations, but whose Hessian is Lipschitz continuous on the path of iterates. These examples provide lower bounds on the worst-case evaluation complexity of methods in our class when applied to smooth problems satisfying the relevant assumptions. Furthermore, for α = 1, this lower bound is of the same order in ε as the upper bound on the worst-case evaluation complexity of the cubic regularization method and of other algorithms in a class of methods recently proposed by Curtis, Robinson and Samadi or by Royer and Wright, thus implying that these methods have optimal worst-case evaluation complexity within a wider class of second-order methods, and that Newton's method is suboptimal.
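To make the claimed matching of orders concrete, the following display is a worked evaluation of the exponent in the lower bound at the two endpoints of the Hölder range (an illustration derived from the statements above, not an excerpt from the paper):

\[
  \frac{2+\alpha}{1+\alpha} \;=\;
  \begin{cases}
    \dfrac{3}{2}, & \alpha = 1, \quad\text{so the lower bound } \lfloor \varepsilon^{-3/2}\rfloor
      \text{ matches the } O(\varepsilon^{-3/2}) \text{ upper bound of cubic regularization},\\[6pt]
    2, & \alpha = 0, \quad\text{so the lower bound } \lfloor \varepsilon^{-2}\rfloor
      \text{ is of the order attained by Newton's method}.
  \end{cases}
\]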