A necessary condition for the guarantee of the superiorization method

We study a method that principally performs convex feasibility-seeking and makes secondary efforts at objective function value reduction. This is the well-known superiorization method (SM), in which the iterates of an asymptotically convergent iterative feasibility-seeking algorithm are perturbed by objective function nonascent steps. We investigate under what conditions a sequence generated by an SM … Read more
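
The basic interleaving is easy to sketch: a feasibility-seeking operator (here, alternating projections onto two hypothetical halfspaces) alternates with summable nonascent perturbations of the objective. A minimal illustration in Python, with all constraint data and the objective made up for the example:

```python
import numpy as np

# Minimal superiorization sketch (illustrative only): the feasibility-seeking
# operator is alternating projection onto two hypothetical halfspaces, and the
# iterates are perturbed by summable nonascent steps beta_k * v_k for f.

def project_halfspace(x, a, b):
    """Project x onto the halfspace {z : a.z <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def superiorize(x, f_grad, n_iter=100, beta=1.0, decay=0.5):
    a1, b1 = np.array([1.0, 2.0]), 1.0    # hypothetical halfspace data
    a2, b2 = np.array([-1.0, 1.0]), 2.0
    for _ in range(n_iter):
        g = f_grad(x)
        v = -g / (np.linalg.norm(g) + 1e-12)  # nonascent direction for f
        x = x + beta * v                      # secondary perturbation
        beta *= decay                         # geometric, hence summable, steps
        x = project_halfspace(x, a1, b1)      # primary feasibility-seeking
        x = project_halfspace(x, a2, b2)
    return x

x_star = superiorize(np.array([3.0, 3.0]), lambda x: 2 * x)  # f(x) = ||x||^2
```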

Newtonian Methods with Wolfe Linesearch in Nonsmooth Optimization and Machine Learning

This paper introduces and develops coderivative-based Newton methods with Wolfe linesearch conditions to solve various classes of problems in nonsmooth optimization and machine learning. We first propose a generalized regularized Newton method with Wolfe linesearch (GRNM-W) for unconstrained $C^{1,1}$ minimization problems (which are second-order nonsmooth) and establish global as well as local superlinear convergence of … Read more
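
The coderivative machinery is beyond a short sketch, but the interplay of a regularized Newton direction with a weak Wolfe linesearch can be illustrated on a smooth test problem. The bisection linesearch and the gradient-norm regularization weight below are generic textbook choices, not the paper's GRNM-W:

```python
import numpy as np

def wolfe_linesearch(f, grad, x, d, c1=1e-4, c2=0.9, t=1.0, max_iter=50):
    """Bisection-style weak Wolfe linesearch (generic sketch)."""
    lo, hi = 0.0, np.inf
    fx, slope = f(x), grad(x) @ d
    for _ in range(max_iter):
        if f(x + t * d) > fx + c1 * t * slope:      # Armijo fails: shrink
            hi = t
        elif grad(x + t * d) @ d < c2 * slope:      # curvature fails: grow
            lo = t
        else:
            return t
        t = (lo + hi) / 2 if np.isfinite(hi) else 2 * lo
    return t

def regularized_newton(f, grad, hess, x, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        mu = np.sqrt(np.linalg.norm(g))             # regularization weight
        d = np.linalg.solve(hess(x) + mu * np.eye(len(x)), -g)
        x = x + wolfe_linesearch(f, grad, x, d) * d
    return x

f = lambda x: (x @ x) ** 2                          # smooth test objective
g = lambda x: 4 * (x @ x) * x
H = lambda x: 4 * (x @ x) * np.eye(2) + 8 * np.outer(x, x)
print(regularized_newton(f, g, H, np.array([3.0, -2.0])))
```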

A Decomposition Framework for Nonlinear Nonconvex Two-Stage Optimization

We propose a new decomposition framework for continuous nonlinear constrained two-stage optimization, where both first- and second-stage problems can be nonconvex. A smoothing technique based on an interior-point formulation renders the optimal solution of the second-stage problem differentiable with respect to the first-stage parameters. As a consequence, efficient off-the-shelf optimization packages can be utilized. We … Read more
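
As a rough illustration of the smoothing idea, one can replace an inequality-constrained second stage by its log-barrier version, which makes the second-stage value smooth in the first-stage variable. A toy sketch with hypothetical one-dimensional stages and a fixed barrier parameter (the actual framework drives it to zero):

```python
import numpy as np
from scipy.optimize import minimize

# Barrier-smoothing sketch with hypothetical one-dimensional stages:
#   min_x f1(x) + v(x),   v(x) = min_{y>0} (y - x)^2,
# where the second stage is replaced by its log-barrier version so that
# the smoothed value v_mu(x) is differentiable in x.

mu = 1e-2

def second_stage_barrier(x):
    # min_{y>0} (y - x)^2 - mu*log(y); substitute y = exp(t) so the
    # subproblem is unconstrained and stays in the barrier's domain.
    obj = lambda t: (np.exp(t[0]) - x[0]) ** 2 - mu * t[0]
    return minimize(obj, x0=[0.0], method="BFGS").fun

outer = lambda x: x[0] ** 2 + second_stage_barrier(x)   # f1(x) = x^2
print(minimize(outer, x0=[2.0], method="Nelder-Mead").x)
```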

Fully Adaptive Zeroth-Order Method for Minimizing Functions with Compressible Gradients

We propose an adaptive zeroth-order method for minimizing differentiable functions with L-Lipschitz continuous gradients. The method is designed to take advantage of the potential compressibility of the gradient of the objective function, but it does not require knowledge of the approximate sparsity level s or the Lipschitz constant L of the gradient. We show that … Read more
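
One way to picture the role of compressibility is the toy step below: estimate the gradient by forward differences, keep only its s largest entries (the "compressed" gradient), and backtrack until sufficient decrease. Unlike the paper's fully adaptive method, this sketch takes the sparsity level s as an explicit input:

```python
import numpy as np

def zo_sparse_step(f, x, s, h=1e-6, alpha=1.0):
    n, fx = len(x), f(x)
    g = np.zeros(n)
    for i in range(n):                       # n extra evaluations per step
        e = np.zeros(n); e[i] = h
        g[i] = (f(x + e) - fx) / h           # forward-difference estimate
    keep = np.argsort(np.abs(g))[-s:]        # top-s entries only
    g_s = np.zeros(n); g_s[keep] = g[keep]
    while alpha > 1e-12 and f(x - alpha * g_s) > fx - 0.5 * alpha * g_s @ g_s:
        alpha *= 0.5                         # backtrack on the step size
    return x - alpha * g_s

x = np.array([2.0, -1.0, 0.5, 0.0])
for _ in range(50):
    x = zo_sparse_step(lambda z: z @ z, x, s=2)
```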

Descent Scheme for a Class of Bilevel Programming Problems

In this paper, a class of bilevel programming problems is studied in which the lower level is a quadratic programming problem and the upper level consists of a nonlinear objective function with coupling constraints. An iterative process is developed that generates a sequence of points converging to the solution of this problem. In … Read more
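
The structure suggests an alternating scheme: solve the lower-level QP for fixed upper-level variables, then take a descent step on the composed upper objective. The sketch below uses placeholder data and a finite-difference gradient through the lower-level solution, so it illustrates the structure only, not the paper's descent scheme:

```python
import numpy as np
from scipy.optimize import minimize

Q = np.diag([2.0, 4.0])                      # lower-level QP Hessian (PD)

def lower_qp(x):
    obj = lambda y: 0.5 * y @ Q @ y + x @ y  # coupling enters linearly
    return minimize(obj, x0=np.zeros(2), bounds=[(0, None)] * 2).x

def total_obj(x):
    y = lower_qp(x)
    return (x - 1.0) @ (x - 1.0) + y @ y     # nonlinear upper objective

x, h = np.array([2.0, 2.0]), 1e-5
for _ in range(20):
    g = np.array([(total_obj(x + h * e) - total_obj(x - h * e)) / (2 * h)
                  for e in np.eye(2)])       # gradient through y(x), by differences
    x = x - 0.1 * g                          # simple descent step
print(x, lower_qp(x))
```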

A stochastic Lagrangian-based method for nonconvex optimization with nonlinear constraints

The augmented Lagrangian method (ALM) is one of the most common approaches for solving linearly and nonlinearly constrained problems. However, for nonconvex objectives, handling nonlinear inequality constraints remains challenging. In this paper, we propose a stochastic ALM with backtracking line search that operates on a randomly selected subset (mini-batch) of points to solve … Read more
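
A minimal picture of the ingredients: a mini-batch augmented Lagrangian for a least-squares objective with one nonlinear inequality constraint, minimized with Armijo backtracking. All data are synthetic, and the multiplier and penalty handling are the standard textbook updates, not the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
c = lambda x: x @ x - 1.0                           # constraint c(x) <= 0

def g_batch(x, idx):                                # mini-batch gradient of f
    return A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)

def aug_lag(x, idx, lam, rho):                      # AL for inequality c(x) <= 0
    r = A[idx] @ x - b[idx]
    return 0.5 * r @ r / len(idx) + (max(0.0, lam + rho * c(x)) ** 2 - lam ** 2) / (2 * rho)

def aug_lag_grad(x, idx, lam, rho):
    return g_batch(x, idx) + max(0.0, lam + rho * c(x)) * 2 * x

x, lam, rho = np.zeros(5), 0.0, 10.0
for _ in range(200):
    idx = rng.choice(100, size=16, replace=False)   # random mini-batch
    g = aug_lag_grad(x, idx, lam, rho)
    t, Lx = 1.0, aug_lag(x, idx, lam, rho)
    while t > 1e-12 and aug_lag(x - t * g, idx, lam, rho) > Lx - 1e-4 * t * g @ g:
        t *= 0.5                                    # backtracking line search
    x = x - t * g
    lam = max(0.0, lam + rho * c(x))                # multiplier update
```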

Restarting nonlinear conjugate gradient methods

In unconstrained optimization, due to the nonlinearity of the objective function or rounding errors in finite precision arithmetic, it can happen that NaN or infinite step sizes appear in the nonlinear conjugate gradient (NCG) method, or that the step violates the sufficient descent condition (SDC). In this case the conjugate gradient (CG) direction must often … Read more
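
The safeguard itself is simple to state in code: run the NCG recurrence, and restart with the steepest-descent direction whenever the direction is non-finite or fails the sufficient descent condition. A sketch with a Polak-Ribiere+ coefficient and a fixed step size for brevity (a real implementation would use a line search):

```python
import numpy as np

def ncg_with_restart(grad, x, n_iter=200, eta=1e-4, alpha=1e-2):
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        x = x + alpha * d                               # fixed step for brevity
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ coefficient
        d = -g_new + beta * d
        # Restart if d is non-finite or the SDC  d'g <= -eta ||g||^2  fails.
        if not np.all(np.isfinite(d)) or d @ g_new > -eta * g_new @ g_new:
            d = -g_new                                  # restart: steepest descent
        g = g_new
    return x

print(ncg_with_restart(lambda x: 2 * x, np.array([3.0, -1.0])))  # f(x) = ||x||^2
```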

A class of diagonal quasi-Newton penalty decomposition algorithms for sparse bound-constrained nonconvex optimization

This paper discusses an improved quasi-Newton penalty decomposition algorithm for cardinality-constrained optimization problems whose simple bounds on the variables are assumed to be finite. Until an approximate stationary point is found, the algorithm approximates the solutions of a sequence of penalty subproblems by a two-block decomposition scheme. This scheme finds an approximate solution … Read more
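
The two-block idea can be sketched as alternating a diagonally scaled (diagonal quasi-Newton-like) step on the penalized smooth block with an exact projection of the other block onto the cardinality-plus-box set, which reduces to hard thresholding. The data and step rule below are illustrative only:

```python
import numpy as np

def project_card_box(x, s, l, u):
    """Project onto {y : ||y||_0 <= s, l <= y <= u} by clip + hard threshold."""
    y = np.clip(x, l, u)
    out = np.zeros_like(y)
    keep = np.argsort(np.abs(y))[-s:]        # keep the s largest magnitudes
    out[keep] = y[keep]
    return out

def penalty_decomposition(grad, diag_h, x0, s, l, u, rho=10.0, n_iter=100):
    x = x0.copy()
    y = project_card_box(x, s, l, u)
    for _ in range(n_iter):
        g = grad(x) + rho * (x - y)          # gradient of f(x) + rho/2 ||x-y||^2
        x = x - g / (diag_h(x) + rho)        # diagonally scaled x-block step
        y = project_card_box(x, s, l, u)     # exact y-block projection
    return y

c = np.array([3.0, -0.1, 2.0, 0.05, -1.0])   # f(x) = 0.5 ||x - c||^2
y = penalty_decomposition(lambda x: x - c, lambda x: np.ones_like(x),
                          np.zeros(5), s=2, l=-2.0, u=2.0)
print(y)   # the two largest components of c, clipped to the box
```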

Extended Triangle Inequalities for Nonconvex Box-Constrained Quadratic Programming

Let $\mathrm{Box}_n = \{x \in \mathbb{R}^n : 0 \le x \le e\}$, and let $\mathrm{QPB}_n$ denote the convex hull of $\{(1, x')'(1, x') : x \in \mathrm{Box}_n\}$. The quadratic programming problem $\min\{x'Qx + q'x : x \in \mathrm{Box}_n\}$, where $Q$ is not positive semidefinite (PSD), is equivalent to a linear optimization problem over $\mathrm{QPB}_n$ … Read more
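
For orientation, the classical RLT and triangle inequalities valid for $\mathrm{QPB}_n$ from the BoxQP literature (the paper's extended inequalities generalize these; the extensions themselves are in the full text) read, with $X_{ij}$ standing for the relaxation of the product $x_i x_j$:

$$\max\{0,\; x_i + x_j - 1\} \le X_{ij} \le \min\{x_i,\; x_j\},$$

$$x_i + x_j + x_k \le 1 + X_{ij} + X_{ik} + X_{jk}, \qquad X_{ij} + X_{ik} \le x_i + X_{jk}.$$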

SMOP: Stochastic trust region method for multi-objective problems

The problem considered is a multi-objective optimization problem in which the goal is to find an optimal value of a vector function representing various criteria. The aim of this work is to develop an algorithm that utilizes the trust-region framework with probabilistic model functions and is able to cope with noisy problems, using inaccurate functions and … Read more
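
The trust-region mechanics with inexact information can be sketched on a scalarized surrogate: build a model gradient from noisy samples, take a Cauchy-like step, and accept or shrink based on the (noisy) reduction ratio. Everything below (the max-scalarization, noise model, and update constants) is a made-up illustration, not the SMOP algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_phi(x):                            # max of two noisy objectives
    f1 = (x - 1.0) @ (x - 1.0) + 0.001 * rng.standard_normal()
    f2 = (x + 1.0) @ (x + 1.0) + 0.001 * rng.standard_normal()
    return max(f1, f2)

def tr_step(x, delta, h=0.1):
    g = np.array([(noisy_phi(x + h * e) - noisy_phi(x - h * e)) / (2 * h)
                  for e in np.eye(len(x))])  # model gradient from noisy samples
    step = -delta * g / (np.linalg.norm(g) + 1e-12)   # Cauchy-like step
    pred = -g @ step                         # predicted model decrease
    ared = noisy_phi(x) - noisy_phi(x + step)         # actual (noisy) decrease
    if ared / (pred + 1e-12) > 0.1:
        return x + step, min(2 * delta, 1.0) # accept step, expand radius
    return x, 0.5 * delta                    # reject step, shrink radius

x, delta = np.array([3.0, 3.0]), 0.5
for _ in range(100):
    x, delta = tr_step(x, delta)
print(x)
```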