Abstract
In the conic formulation of a convex optimization problem the constraints are expressed as linear inequalities with respect to a possibly non-polyhedral convex cone. This makes it possible to formulate elegant extensions of interior-point methods for linear programming to general nonlinear convex optimization. Recent research on cone programming algorithms has focused in particular on three convex cones, for which symmetric primal-dual methods have been developed: the nonnegative orthant, the second-order cone, and the positive semidefinite matrix cone. Although not all convex constraints can be expressed in terms of the three standard cones, cone programs associated with these cones are sufficiently general to serve as the basis of convex modeling packages. They are also widely used in machine learning. The main difficulty in the implementation of interior-point methods for cone programming is the complexity of the linear equations that need to be solved at each iteration. These equations are usually dense, unlike the equations that arise in linear programming, and it is therefore difficult to develop general-purpose strategies for exploiting problem structure based solely on sparse matrix methods. In this chapter we give an overview of ad hoc techniques that can be used to exploit non-sparse structure in specific classes of applications. We illustrate the methods with examples from machine learning and present numerical results with CVXOPT, a software package that supports the rapid development of customized interior-point methods.
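To make the conic formulation concrete, the sketch below solves a small second-order cone program with CVXOPT's `solvers.conelp`, whose `dims` argument describes a product of the three standard cones mentioned above. The problem data `c`, `G`, `h` are illustrative and not taken from the chapter.

```python
# Minimal sketch: minimize c'x subject to h - G*x in a second-order cone,
# using CVXOPT's cone LP solver. Problem data are illustrative.
from cvxopt import matrix, solvers

# minimize  -x1 - x2
# subject to (1, x1, x2) in the second-order cone Q^3, i.e. ||(x1, x2)||_2 <= 1
c = matrix([-1.0, -1.0])
G = matrix([[0.0, -1.0, 0.0],    # first column of G
            [0.0, 0.0, -1.0]])   # second column of G
h = matrix([1.0, 0.0, 0.0])

# dims gives the cone sizes: 'l' for the nonnegative orthant, 'q' for
# second-order cones, 's' for positive semidefinite cones.
dims = {'l': 0, 'q': [3], 's': []}

sol = solvers.conelp(c, G, h, dims)
print(sol['x'])   # approximately (1/sqrt(2), 1/sqrt(2))
```

The customized interior-point methods discussed in the chapter plug into this same interface: `conelp` accepts an optional `kktsolver` argument through which a user-supplied routine for the per-iteration linear equations can replace the default factorization.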
Original language | English
---|---
Title of host publication | Optimization for Machine Learning
Number of pages | 26
Publisher | MIT Press
Publication date | 2011
ISBN (Print) | 9780262016469
Publication status | Published - 2011
Externally published | Yes