Volume 89, Issue 3, Dec 2024 (Current Issue)
Publisher:
  • Kluwer Academic Publishers
  • 101 Philip Drive, Assinippi Park, Norwell, MA
  • United States
ISSN: 0926-6003
Reflects downloads up to 06 Jan 2025
research-article
An inexact regularized proximal Newton method without line search
Abstract

In this paper, we introduce an inexact regularized proximal Newton method (IRPNM) that does not require any line search. The method is designed to minimize the sum of a twice continuously differentiable function f and a convex (possibly non-smooth ...
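The core update is easiest to see in the smooth special case: with the nonsmooth term dropped, the proximal map is the identity and the iteration reduces to a regularized Newton step. A minimal numpy sketch (the regularization rule mu ~ sqrt(||g||) is one common choice, not necessarily the paper's):

```python
import numpy as np

def grad_hess(x):
    # f(x) = 0.25*sum(x**4) + 0.5*||x||^2: smooth and strictly convex
    return x**3 + x, np.diag(3.0 * x**2 + 1.0)

def regularized_newton(x, tol=1e-10, max_iter=50):
    """x+ = x - (H + mu*I)^{-1} g with mu tied to ||g|| (a common rule).
    The nonsmooth term of the paper's IRPNM is dropped here, so the
    proximal map is the identity and no line search is needed."""
    for _ in range(max_iter):
        g, H = grad_hess(x)
        if np.linalg.norm(g) < tol:
            break
        mu = np.sqrt(np.linalg.norm(g))   # regularization vanishes at the solution
        x = x - np.linalg.solve(H + mu * np.eye(x.size), g)
    return x

x_star = regularized_newton(np.array([2.0, -1.5]))  # minimizer of f is 0
```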

research-article
Inexact log-domain interior-point methods for quadratic programming
Abstract

This paper introduces a framework for implementing log-domain interior-point methods (LDIPMs) using inexact Newton steps. A generalized inexact iteration scheme is established that is globally convergent and locally quadratically convergent ...
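For orientation, a minimal plain-domain log-barrier interior-point loop for a nonnegativity-constrained QP, with exact Newton centering steps; the paper's log-domain parametrization and inexact Newton solves are omitted, so this is only the textbook baseline:

```python
import numpy as np

def barrier_qp(Q, c, t0=1.0, mu=10.0, eps=1e-8):
    """Log-barrier interior-point method for min 0.5 x'Qx + c'x, x >= 0.
    Plain-domain barrier with exact Newton centering -- a simplified
    stand-in for the log-domain, inexact-Newton LDIPMs of the paper."""
    n = c.size
    x = np.ones(n)                        # strictly feasible start
    t = t0
    while n / t > eps:                    # barrier duality-gap bound
        for _ in range(50):               # centering for the current t
            g = t * (Q @ x + c) - 1.0 / x
            H = t * Q + np.diag(1.0 / x**2)
            dx = np.linalg.solve(H, -g)
            a = 1.0                       # backtrack to stay in x > 0
            while np.any(x + a * dx <= 0.0):
                a *= 0.5
            x = x + a * dx
            if np.linalg.norm(g) < 1e-10:
                break
        t *= mu
    return x

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, 1.0])   # unconstrained min (1, -0.5); constrained (1, 0)
x_opt = barrier_qp(Q, c)
```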

research-article
A stochastic moving ball approximation method for smooth convex constrained minimization
Abstract

In this paper, we consider constrained optimization problems with convex objective and smooth convex functional constraints. We propose a new stochastic gradient algorithm, called the Stochastic Moving Ball Approximation (SMBA) method, to solve ...
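The "moving ball" in MBA-type methods is the set obtained by adding a quadratic term to a linearized constraint; completing the square shows it is a Euclidean ball with closed-form projection. A deterministic, single-constraint sketch of that building block (the stochastic sampling of SMBA is omitted):

```python
import numpy as np

def moving_ball_step(x, grad_f, g_val, grad_g, M, step):
    """Gradient step on f, then projection onto the ball
    { y : g(x) + grad_g'(y - x) + (M/2)||y - x||^2 <= 0 },
    an inner approximation of the feasible set when M bounds the
    curvature of g.  Completing the square gives center x - grad_g/M
    and radius sqrt(||grad_g||^2/M^2 - 2 g(x)/M)."""
    center = x - grad_g / M
    r = np.sqrt(max(grad_g @ grad_g / M**2 - 2.0 * g_val / M, 0.0))
    y = x - step * grad_f                 # gradient step on the objective
    dist = np.linalg.norm(y - center)
    if dist > r:                          # closed-form ball projection
        y = center + r * (y - center) / dist
    return y

# minimize ||x - (2,0)||^2 s.t. ||x||^2 - 1 <= 0; with M = 2 the moving
# ball is exactly the unit ball, and the solution is (1, 0)
target = np.array([2.0, 0.0])
x = np.array([0.5, 0.5])
for _ in range(200):
    x = moving_ball_step(x, 2.0 * (x - target), x @ x - 1.0, 2.0 * x,
                         M=2.0, step=0.1)
```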

research-article
Stochastic zeroth order descent with structured directions
Abstract

We introduce and analyze Structured Stochastic Zeroth order Descent (S-SZD), a finite difference approach that approximates a stochastic gradient on a set of l ≤ d orthogonal directions, where d is the dimension of the ambient space. These ...
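The gradient surrogate described here can be sketched directly: forward differences along l ≤ d orthonormal directions. The QR-of-a-Gaussian-matrix direction choice below is one convenient option, not necessarily the paper's:

```python
import numpy as np

def szd_gradient(f, x, l, h=1e-5, seed=None):
    """Forward-difference gradient surrogate along l <= d orthonormal
    directions (columns of Q from a QR factorization of a Gaussian
    matrix).  With l = d this approximates the full gradient."""
    rng = np.random.default_rng(seed)
    d = x.size
    P, _ = np.linalg.qr(rng.standard_normal((d, l)))  # orthonormal columns
    fx = f(x)
    g = np.zeros(d)
    for i in range(l):
        p = P[:, i]
        g += (f(x + h * p) - fx) / h * p   # directional finite difference
    return g

f = lambda x: 0.5 * np.dot(x, x)           # true gradient is x itself
x0 = np.array([1.0, -2.0, 3.0])
g_full = szd_gradient(f, x0, l=3, seed=0)  # l = d: recovers grad f(x0)
```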

research-article
MultiSQP-GS: a sequential quadratic programming algorithm via gradient sampling for nonsmooth constrained multiobjective optimization
Abstract

In this paper, we propose a method for solving constrained nonsmooth multiobjective optimization problems which is based on a Sequential Quadratic Programming (SQP) type approach and the Gradient Sampling (GS) technique. We consider the ...
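The gradient-sampling kernel is the minimum-norm element of the convex hull of gradients sampled near the current point. A numpy sketch that finds it by projected gradient over the simplex; the sampling loop and the paper's SQP constraint handling are omitted:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def min_norm_direction(G, iters=500):
    """Minimum-norm element of the convex hull of the columns of G,
    via projected gradient on  min_l ||G l||^2  over the simplex.
    Its negative is the gradient-sampling search direction."""
    m = G.shape[1]
    lam = np.full(m, 1.0 / m)
    H = G.T @ G
    L = 2.0 * np.linalg.norm(H, 2) + 1e-12   # Lipschitz constant of the gradient
    for _ in range(iters):
        lam = project_simplex(lam - (2.0 / L) * (H @ lam))
    return G @ lam

# sampled "gradients" 2 and -1 of a 1-D nonsmooth function: the min-norm
# element of their hull is 0, certifying approximate stationarity
G = np.array([[2.0, -1.0]])
d = min_norm_direction(G)
```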

research-article
Scaled-PAKKT sequential optimality condition for multiobjective problems and its application to an Augmented Lagrangian method
Abstract

Based on the recently introduced Scaled Positive Approximate Karush–Kuhn–Tucker condition for single objective problems, we derive a sequential necessary optimality condition for multiobjective problems with equality and inequality constraints as ...

research-article
A family of conjugate gradient methods with guaranteed positiveness and descent for vector optimization
Abstract

In this paper, we seek a new modification way to ensure the positiveness of the conjugate parameter and, based on the Dai-Yuan (DY) method in the vector setting, propose an associated family of conjugate gradient (CG) methods with guaranteed ...
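The positiveness in question is easiest to see in the scalar case: with exact line search on a quadratic, the Dai-Yuan denominator d'(g+ - g) reduces to -d'g > 0. A sketch of that classical recursion (the vector-optimization extension is the paper's contribution and is not reproduced here):

```python
import numpy as np

def dai_yuan_cg_quadratic(A, b, x, iters=50, tol=1e-10):
    """Dai-Yuan CG on f(x) = 0.5 x'Ax - b'x with exact line search.
    beta = ||g+||^2 / (d'(g+ - g)); exact line search gives d'g+ = 0,
    so the denominator equals -d'g > 0 and beta stays positive."""
    g = A @ x - b
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t = -(d @ g) / (d @ (A @ d))          # exact line search
        x = x + t * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (d @ (g_new - g))   # Dai-Yuan parameter
        d = -g_new + beta * d
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_sol = dai_yuan_cg_quadratic(A, b, np.zeros(2))   # solves A x = b
```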

research-article
A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization
Abstract

In this paper we consider finding an approximate second-order stationary point (SOSP) of general nonconvex conic optimization that minimizes a twice differentiable function subject to nonlinear equality constraints and also a convex conic ...
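The outer augmented Lagrangian skeleton for an equality-constrained toy problem looks as follows; the subproblem here is quadratic, so one Newton step solves it exactly. The paper's barrier term for the conic constraint and its Newton-CG inner solver are omitted:

```python
import numpy as np

def augmented_lagrangian(rho=10.0, outer=20):
    """Augmented Lagrangian for  min 0.5||x||^2  s.t.  x1 + x2 = 1
    (solution (0.5, 0.5), multiplier -0.5).  Each inner subproblem is
    quadratic, so a single Newton step minimizes L_rho exactly."""
    a = np.array([1.0, 1.0])
    x = np.zeros(2)
    lam = 0.0
    for _ in range(outer):
        c = a @ x - 1.0                       # constraint violation
        grad = x + (lam + rho * c) * a        # grad of L_rho
        H = np.eye(2) + rho * np.outer(a, a)  # Hessian of L_rho
        x = x - np.linalg.solve(H, grad)      # exact inner solve
        lam = lam + rho * (a @ x - 1.0)       # multiplier update
    return x, lam

x_al, lam_al = augmented_lagrangian()
```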

research-article
A power-like method for finding the spectral radius of a weakly irreducible nonnegative symmetric tensor
Abstract

The Perron–Frobenius theorem says that the spectral radius of a weakly irreducible nonnegative tensor is the unique positive eigenvalue corresponding to a positive eigenvector. With this fact in mind, the purpose of this paper is to find the ...
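A classical baseline for this problem is the NQZ-type power iteration, which the Perron-Frobenius fact above justifies. A numpy sketch for a 3rd-order symmetric tensor (the paper proposes a different power-like scheme; this is only the standard starting point):

```python
import numpy as np

def tensor_power_method(T, iters=200, tol=1e-12):
    """NQZ-style power iteration for the spectral radius of a nonnegative
    symmetric 3rd-order tensor (an H-eigenvalue problem):
        y = T x^2,   x <- sqrt(y) / ||sqrt(y)||,
    with the eigenvalue bracketed by min_i and max_i of y_i / x_i^2.
    Weak irreducibility keeps the iterates strictly positive."""
    n = T.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)   # (T x^2)_i
        lam_lo = np.min(y / x**2)              # lower bound on rho(T)
        lam_hi = np.max(y / x**2)              # upper bound on rho(T)
        x = np.sqrt(y)
        x /= np.linalg.norm(x)
        if lam_hi - lam_lo < tol:
            break
    return 0.5 * (lam_lo + lam_hi)

# rank-one example T_ijk = v_i v_j v_k: its spectral radius works out
# to (sum_i v_i^{3/2})^2, which lets us check the iteration
v = np.array([1.0, 2.0])
T = np.einsum('i,j,k->ijk', v, v, v)
rho_est = tensor_power_method(T)
```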

research-article
Nonsmooth projection-free optimization with functional constraints
Abstract

This paper presents a subgradient-based algorithm for constrained nonsmooth convex optimization that does not require projections onto the feasible set. While the well-established Frank–Wolfe algorithm and its variants already avoid projections, ...
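For comparison, the classic Frank-Wolfe loop over an l1 ball shows how a linear minimization oracle replaces projection entirely. This is a smooth-objective baseline sketch; the paper's subgradient-based, functionally constrained variant differs:

```python
import numpy as np

def frank_wolfe_l1(grad, radius, d, iters=5000):
    """Frank-Wolfe over the l1 ball: the linear minimization oracle
    returns the vertex -radius * sign(g_i) * e_i for the coordinate i
    with largest |g_i|, so no projection is ever computed."""
    x = np.zeros(d)
    for k in range(iters):
        g = grad(x)
        i = int(np.argmax(np.abs(g)))
        s = np.zeros(d)
        s[i] = -radius * np.sign(g[i])        # LMO vertex
        x += (2.0 / (k + 2.0)) * (s - x)      # classic O(1/k) step size
    return x

# minimize ||x - (1, 0.5)||^2 over ||x||_1 <= 1; the answer is (0.75, 0.25)
t = np.array([1.0, 0.5])
x_fw = frank_wolfe_l1(lambda x: 2.0 * (x - t), radius=1.0, d=2)
```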

research-article
Robust approximation of chance constrained optimization with polynomial perturbation
Abstract

This paper proposes a robust approximation method for solving chance constrained optimization (CCO) of polynomials. Assume the CCO is defined with an individual chance constraint that is affine in the decision variables. We construct a robust ...
