Generalized Conditional Gradient with Augmented Lagrangian for Composite Minimization
In this paper we propose a splitting scheme, which we call the CGALP algorithm, that hybridizes the generalized conditional gradient with a proximal step for minimizing the sum of three proper convex and lower-semicontinuous functions in real Hilbert ...
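The full CGALP iteration is not reproduced in this listing; as context, here is a minimal sketch of the classical (generalized) conditional gradient step it builds on, with a linear minimization oracle and an open-loop step size. The function names and the $\ell_1$-ball example are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def conditional_gradient(grad_f, lmo, x0, steps=200):
    """Minimal classical conditional gradient (Frank-Wolfe) sketch.

    grad_f : callable returning the gradient of the smooth part at x
    lmo    : linear minimization oracle, lmo(g) = argmin_{s in C} <g, s>

    This is only the basic building block; the paper's CGALP scheme
    additionally interleaves proximal and augmented Lagrangian steps.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        s = lmo(grad_f(x))          # linear minimization over the feasible set
        gamma = 2.0 / (k + 2.0)     # classical open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Example: minimize ||x - b||^2 over the unit l1 ball
# (the LMO returns a signed vertex of the ball).
b = np.array([0.3, -0.8, 0.5])
def lmo_l1(g):
    i = np.argmax(np.abs(g))
    return -np.sign(g[i]) * np.eye(len(g))[i]
x_star = conditional_gradient(lambda x: 2 * (x - b), lmo_l1, np.zeros(3))
```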
Stochastic Three Points Method for Unconstrained Smooth Minimization
In this paper we consider the unconstrained minimization problem of a smooth function in $\mathbb{R}^n$ in a setting where only function evaluations are possible. We design a novel randomized derivative-free algorithm---the stochastic three points (STP) ...
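From the description above, the STP iteration compares the objective value at the current point and at two trial points along a random direction, keeping the best of the three. Below is a minimal sketch under assumed choices (a fixed step size and Gaussian directions normalized to unit length); the paper's actual sampling distributions and step-size rules may differ.

```python
import numpy as np

def stp(f, x0, alpha=0.1, iters=1000, rng=np.random.default_rng(0)):
    """Minimal stochastic three points (STP) sketch: at each iteration,
    evaluate f at the current point and at two trial points along a
    random direction, and keep whichever of the three points is best.
    Only function evaluations are used (derivative-free)."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        s = rng.standard_normal(x.shape)
        s /= np.linalg.norm(s)                  # random unit direction
        for cand in (x + alpha * s, x - alpha * s):
            fc = f(cand)
            if fc < fx:                         # keep the best of the three
                x, fx = cand, fc
    return x, fx

x, fx = stp(lambda v: np.sum((v - 1.0) ** 2), np.zeros(5))
```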
Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives
In this paper, we study $p$-order methods for unconstrained minimization of convex functions that are $p$-times differentiable ($p\geq 2$) with $\nu$-Hölder continuous $p$th derivatives. We propose tensor schemes with and without acceleration. For the ...
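A common template for such $p$-order schemes (stated here as a hedged sketch, since the abstract is truncated) is to minimize the $p$th-order Taylor model of $f$ regularized by a $(p+\nu)$-power of the distance:
$$ x_{k+1} \in \operatorname*{argmin}_{y \in \mathbb{R}^n} \left\{ \sum_{i=0}^{p} \frac{1}{i!}\, D^i f(x_k)[y - x_k]^i \;+\; \frac{H}{p+\nu}\, \| y - x_k \|^{p+\nu} \right\}, $$
where $H > 0$ is a regularization parameter tied (up to normalization constants) to the Hölder constant of the $p$th derivative.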
Randomized Gradient Boosting Machine
The Gradient Boosting Machine (GBM) introduced by Friedman [J. H. Friedman, Ann. Statist., 29 (2001), pp. 1189--1232] is a powerful supervised learning algorithm that is very widely used in practice---it routinely features as a leading algorithm in ...
Convex Analysis in $\mathbb{Z}^n$ and Applications to Integer Linear Programming
In this paper, we compare the definitions of convex sets and convex functions in finite dimensional integer spaces introduced by Adivar and Fang, Borwein, and Giladi, respectively. We show that their definitions of convex sets and convex functions are ...
Distributionally Robust Stochastic Dual Dynamic Programming
We consider a multistage stochastic linear program that lends itself to solution by stochastic dual dynamic programming (SDDP). In this context, we consider a distributionally robust variant of the model with a finite number of realizations at each stage. ...
Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates
In this paper, we propose two novel non-stationary first-order primal-dual algorithms to solve non-smooth composite convex optimization problems. Unlike existing primal-dual schemes where the parameters are often fixed, our methods use predefined and dynamic ...
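For reference, the stationary baseline that such methods generalize is the classical first-order primal-dual (Chambolle--Pock) iteration for $\min_x g(x) + f(Kx)$, with fixed parameters $\sigma$, $\tau$, $\theta$:
$$ y^{k+1} = \operatorname{prox}_{\sigma f^{*}}\big(y^{k} + \sigma K \bar{x}^{k}\big), \qquad x^{k+1} = \operatorname{prox}_{\tau g}\big(x^{k} - \tau K^{\top} y^{k+1}\big), \qquad \bar{x}^{k+1} = x^{k+1} + \theta\,(x^{k+1} - x^{k}). $$
The non-stationary variants proposed in the paper replace these fixed parameters with iteration-dependent sequences.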
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
In this paper, we propose a unified two-phase scheme to accelerate any high-order regularized tensor approximation approach on the smooth part of a composite convex optimization model. The proposed scheme has the advantage of not needing to assume any ...
An Equivalence between Critical Points for Rank Constraints Versus Low-Rank Factorizations
Two common approaches in low-rank optimization problems are either working directly with a rank constraint on the matrix variable or optimizing over a low-rank factorization so that the rank constraint is implicitly ensured. In this paper, we study the ...
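The equivalence of feasible sets underlying the two formulations is the standard factorization identity
$$ \{X \in \mathbb{R}^{m \times n} : \operatorname{rank}(X) \le r\} = \{ UV^{\top} : U \in \mathbb{R}^{m \times r},\; V \in \mathbb{R}^{n \times r} \}; $$
the paper's contribution concerns how critical points of the two formulations correspond, which this identity alone does not settle.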
The Convex Hull of a Quadratic Constraint over a Polytope
A quadratically constrained quadratic program (QCQP) is an optimization problem in which the objective function is a quadratic function and the feasible region is defined by quadratic constraints. Solving nonconvex QCQP to global optimality is a well-...
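In standard form, a QCQP reads
$$ \min_{x \in \mathbb{R}^n} \; x^{\top} Q_0 x + c_0^{\top} x \quad \text{s.t.} \quad x^{\top} Q_i x + c_i^{\top} x \le b_i, \quad i = 1, \dots, m, $$
where the matrices $Q_i$ need not be positive semidefinite; the nonconvex case is what makes global optimality hard.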
Approximate Matrix and Tensor Diagonalization by Unitary Transformations: Convergence of Jacobi-Type Algorithms
We propose a gradient-based Jacobi algorithm for a class of maximization problems on the unitary group, with a focus on approximate diagonalization of complex matrices and tensors by unitary transformations. We provide weak convergence results, and prove ...
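The paper's setting is complex matrices and tensors under unitary transformations; as the simplest instance of a Jacobi-type rotation scheme, here is a sketch of the classical cyclic Jacobi method for a real symmetric matrix (all names, sweep counts, and tolerances are illustrative).

```python
import numpy as np

def jacobi_eig(A, sweeps=30, tol=1e-12):
    """Classical cyclic Jacobi method for a real symmetric matrix:
    repeatedly apply plane rotations, each chosen to zero one
    off-diagonal entry, accumulating the orthogonal transformation."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q = np.eye(n)
    for _ in range(sweeps):
        off = np.sum(A**2) - np.sum(np.diag(A)**2)   # off-diagonal mass
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                # rotation angle that annihilates A[p, q]
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
                Q = Q @ J
    return np.diag(A), Q   # approximate eigenvalues and orthogonal Q

evals, Q = jacobi_eig([[4.0, 1.0], [1.0, 3.0]])
```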
Second-Order Guarantees of Distributed Gradient Algorithms
We consider distributed smooth nonconvex unconstrained optimization over networks, modeled as a connected graph. We examine the behavior of distributed gradient-based algorithms near strict saddle points. Specifically, we establish that (i) the renowned ...
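As one representative member of this class (an assumption on our part, since the abstract is truncated), the classical distributed gradient descent (DGD) iteration over a graph with mixing matrix $W = (w_{ij})$ reads
$$ x_i^{k+1} = \sum_{j=1}^{N} w_{ij}\, x_j^{k} \;-\; \alpha\, \nabla f_i\big(x_i^{k}\big), $$
where $f_i$ is agent $i$'s local objective and $\alpha$ a step size.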
Convergence of Inexact Forward--Backward Algorithms Using the Forward--Backward Envelope
This paper deals with a general framework for inexact forward--backward algorithms aimed at minimizing the sum of an analytic function and a lower semicontinuous, subanalytic, convex term. Such a framework relies on an implementable inexactness condition for ...
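The exact (error-free) forward-backward iteration that such inexact frameworks relax alternates a gradient step on the smooth part with a proximal step on the convex term. A minimal sketch, using the $\ell_1$ norm as an example convex term (the paper's implementable inexactness condition is not reproduced here):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (an example convex term)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(grad_f, prox_g, x0, step, iters=500):
    """Exact forward-backward (proximal gradient) iteration: a gradient
    (forward) step on the smooth part, then a proximal (backward) step
    on the convex term."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Example: LASSO, min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the smooth part
x = forward_backward(lambda x: A.T @ (A @ x - b),
                     lambda v, t: soft_threshold(v, lam * t),
                     np.zeros(10), step)
```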
Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
This paper studies noisy low-rank matrix completion: given partial and noisy entries of a large low-rank matrix, the goal is to estimate the underlying matrix faithfully and efficiently. Arguably one of the most popular paradigms to tackle this problem is ...
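The convex-relaxation paradigm referred to here is typically the nuclear-norm-penalized least-squares estimator (stated as a representative formulation, since the abstract is truncated):
$$ \widehat{M} \in \operatorname*{argmin}_{M}\; \tfrac{1}{2}\,\big\| \mathcal{P}_{\Omega}(M - Y) \big\|_{F}^{2} \;+\; \lambda \, \| M \|_{*}, $$
where $\mathcal{P}_{\Omega}$ keeps the observed entries, $Y$ collects the noisy observations, and $\|\cdot\|_{*}$ is the nuclear norm.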
Solving Multiobjective Mixed Integer Convex Optimization Problems
Multiobjective mixed integer convex optimization refers to mathematical programming problems where more than one convex objective function needs to be optimized simultaneously and some of the variables are constrained to take integer values. We present a ...
Contracting Proximal Methods for Smooth Convex Optimization
In this paper, we propose new accelerated methods for smooth convex optimization, called contracting proximal methods. At every step of these methods, we need to minimize a contracted version of the objective function augmented by a regularization term in the ...
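For orientation, the classical proximal-point step that these methods modify is
$$ x_{k+1} = \operatorname*{argmin}_{x} \left\{ f(x) + \tfrac{1}{2\lambda_k}\,\|x - x_k\|^2 \right\}; $$
contracting proximal methods instead apply such a step to a contracted version of the objective (the precise contraction is truncated in the abstract above).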
Globally Convergent Type-I Anderson Acceleration for Nonsmooth Fixed-Point Iterations
We consider the application of type-I Anderson acceleration to solving general nonsmooth fixed-point problems. By interleaving with safeguarding steps and employing a Powell-type regularization and a restart check for strong linear independence of ...
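For context, the plain (unregularized, type-II) Anderson acceleration scheme for a fixed-point map $g$ is sketched below; the paper's method differs in using the type-I update together with safeguarding, Powell-type regularization, and restarts, none of which appear in this sketch.

```python
import numpy as np

def anderson(g, x0, m=5, iters=100, tol=1e-10):
    """Plain Anderson acceleration for the fixed-point problem x = g(x):
    extrapolate using a least-squares fit over a sliding window of the
    most recent residuals."""
    x = np.asarray(x0, dtype=float)
    xs, fs = [], []                       # history of iterates and residuals
    for _ in range(iters):
        gx = g(x)
        f = gx - x                        # fixed-point residual g(x) - x
        if np.linalg.norm(f) < tol:
            break
        xs.append(x.copy()); fs.append(f.copy())
        xs, fs = xs[-(m + 1):], fs[-(m + 1):]   # sliding memory window
        if len(xs) == 1:
            x = gx                        # plain fixed-point step at the start
        else:
            dF = np.column_stack([fs[i + 1] - fs[i] for i in range(len(fs) - 1)])
            dX = np.column_stack([xs[i + 1] - xs[i] for i in range(len(xs) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            x = gx - (dX + dF) @ gamma    # Anderson mixing step
    return x

# Example: accelerate the iteration for solving x = cos(x) componentwise.
x_fix = anderson(np.cos, np.full(3, 0.5))
```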
Robust Spectral Risk Optimization When Information on Risk Spectrum Is Incomplete
A spectral risk measure (SRM) is a weighted average of value at risk where the weighting function (also known as the risk spectrum or distortion function) characterizes a decision maker's risk attitude. In this paper, we consider the case where the decision ...
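In one standard convention, the SRM induced by a risk spectrum $\phi$ is
$$ M_{\phi}(X) = \int_{0}^{1} \phi(p)\, \mathrm{VaR}_{p}(X)\, \mathrm{d}p, \qquad \phi \ge 0 \text{ nondecreasing}, \quad \int_{0}^{1} \phi(p)\,\mathrm{d}p = 1 $$
(sign and orientation conventions vary across the literature).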
Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
We study the iteration complexity of the optimistic gradient descent-ascent (OGDA) method and the extragradient (EG) method for finding a saddle point of a convex-concave unconstrained min-max problem. To do so, we first show that both OGDA and EG can be ...
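As a concrete instance, the EG update for the unconstrained bilinear problem $\min_x \max_y x^\top A y$ takes a prediction step to a midpoint and then re-evaluates the gradients there. A minimal sketch (the matrix and step size are illustrative):

```python
import numpy as np

def extragradient(A, x, y, eta=0.1, iters=2000):
    """Extragradient (EG) sketch for min_x max_y x^T A y: step to a
    midpoint using current gradients, then update from the original
    point using the gradients evaluated at that midpoint."""
    for _ in range(iters):
        xm = x - eta * (A @ y)            # prediction (midpoint) step
        ym = y + eta * (A.T @ x)
        x = x - eta * (A @ ym)            # corrected step at the midpoint
        y = y + eta * (A.T @ xm)
    return x, y

A = np.array([[0.0, 1.0], [-1.0, 0.5]])
x, y = extragradient(A, np.ones(2), np.ones(2))  # converges toward (0, 0)
```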
Newton-like Inertial Dynamics and Proximal Algorithms Governed by Maximally Monotone Operators
The introduction of the Hessian damping in the continuous version of Nesterov's accelerated gradient method provides, by temporal discretization, fast proximal gradient algorithms where the oscillations are significantly attenuated. We will extend ...
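In the smooth convex case, the continuous-time system in question is the inertial dynamic with Hessian-driven damping,
$$ \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) + \beta\,\nabla^2 f(x(t))\,\dot{x}(t) + \nabla f(x(t)) = 0, \qquad \beta > 0; $$
per the title, the paper extends this setting to maximally monotone operators.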
Fair Packing and Covering on a Relative Scale
Fair resource allocation is a fundamental optimization problem with applications in operations research, networking, and economic and game theory. Research in these areas has led to the general acceptance of a class of $\alpha$-fair utility functions ...
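The $\alpha$-fair utility functions referred to above are, in their standard form,
$$ u_{\alpha}(x) = \begin{cases} \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1,\\[1ex] \log x, & \alpha = 1, \end{cases} $$
so that $\alpha = 0$ recovers total throughput, $\alpha = 1$ proportional fairness, and $\alpha \to \infty$ max-min fairness.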
Stochastic Conditional Gradient++: (Non)Convex Minimization and Continuous Submodular Maximization
In this paper, we consider general nonoblivious stochastic optimization, where the underlying stochasticity may change during the optimization procedure and depends on the point at which the function is evaluated. We develop Stochastic Frank--Wolfe++ (...
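One way to write the nonoblivious setting described above (a notational sketch, since the abstract is truncated) is
$$ \min_{x \in \mathcal{X}} \; F(x) := \mathbb{E}_{z \sim p(\,\cdot\,;\, x)}\big[ \widetilde{F}(x, z) \big], $$
where the sampling distribution $p(\,\cdot\,;\, x)$ itself depends on the decision variable $x$, in contrast to the oblivious case where it is fixed.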
Scalable Algorithms for the Sparse Ridge Regression
Sparse regression and variable selection for large-scale data have developed rapidly over the past decades. This work focuses on sparse ridge regression, which enforces sparsity via the $L_{0}$ norm. We first prove that the continuous relaxation ...
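In a standard formulation (the paper's exact scaling may differ), sparse ridge regression is
$$ \min_{\beta \in \mathbb{R}^{p}} \; \| y - X\beta \|_{2}^{2} + \lambda \|\beta\|_{2}^{2} \quad \text{s.t.} \quad \|\beta\|_{0} \le k, $$
where $\|\beta\|_{0}$ counts the nonzero entries of $\beta$.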
Generalized Subdifferentials of Spectral Functions over Euclidean Jordan Algebras
This paper is devoted to the study of generalized subdifferentials of spectral functions over Euclidean Jordan algebras. Spectral functions appear often in optimization problems, playing the role of “regularizer,” “barrier,” “penalty function,” and many ...
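Here a spectral function is one of the form
$$ F(x) = (f \circ \lambda)(x), $$
where $\lambda(x)$ is the eigenvalue map given by the spectral decomposition in the Euclidean Jordan algebra and $f$ is a symmetric (permutation-invariant) function; over the algebra of real symmetric matrices this recovers the usual functions of matrix eigenvalues.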