We introduce a novel primal-dual flow for affine constrained convex optimization problems. As a modification of the standard saddle-point system, our flow is proved to possess an exponential decay property in terms of a tailored Lyapunov function. Two primal-dual methods are then obtained from numerical discretizations of the continuous flow, and a global nonergodic linear convergence rate is established via a discrete Lyapunov function. Instead of solving the subproblem for the primal variable, we apply a semi-smooth Newton iteration to the inner problem with respect to the multiplier, provided that additional properties such as semi-smoothness and sparsity are available. Finally, numerical tests on the linearly constrained $\ell_1$-$\ell_2$ minimization problem and the total-variation based image denoising model are provided.
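For orientation, a generic form of the saddle-point system referred to in the abstract can be sketched as follows. This is the standard (unmodified) primal-dual gradient flow for an affine constrained problem, written for a smooth objective $f$; it is only an illustrative sketch and does not reproduce the paper's modified flow or its tailored Lyapunov function.

\begin{aligned}
&\min_{x}\; f(x) \quad \text{subject to}\quad Ax = b,
\qquad L(x,\lambda) = f(x) + \langle \lambda,\, Ax - b\rangle,\\
&\dot{x}(t) = -\nabla_x L\bigl(x(t),\lambda(t)\bigr) = -\nabla f\bigl(x(t)\bigr) - A^{\top}\lambda(t),
\qquad
\dot{\lambda}(t) = \nabla_\lambda L\bigl(x(t),\lambda(t)\bigr) = A\,x(t) - b.
\end{aligned}

For nonsmooth objectives such as the $\ell_1$-$\ell_2$ and total-variation models treated in the numerical tests, the gradient of $f$ would be replaced by a subgradient, or handled through a proximal step after discretization.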
Keywords: Convex optimization, linear constraint, dynamical system, Lyapunov function, exponential decay, discretization, nonergodic linear rate, primal-dual algorithm, semi-smooth Newton method, $\ell_1$-$\ell_2$ minimization, total-variation model
@article{COCV_2022__28_1_A33_0,
author = {Luo, Hao},
title = {A primal-dual flow for affine constrained convex optimization},
journal = {ESAIM: Control, Optimisation and Calculus of Variations},
year = {2022},
publisher = {EDP-Sciences},
volume = {28},
doi = {10.1051/cocv/2022032},
mrnumber = {4431917},
zbl = {1500.90048},
language = {en},
url = {https://www.numdam.org/articles/10.1051/cocv/2022032/}
}
TY - JOUR
AU - Luo, Hao
TI - A primal-dual flow for affine constrained convex optimization
JO - ESAIM: Control, Optimisation and Calculus of Variations
PY - 2022
VL - 28
PB - EDP-Sciences
UR - https://www.numdam.org/articles/10.1051/cocv/2022032/
DO - 10.1051/cocv/2022032
LA - en
ID - COCV_2022__28_1_A33_0
ER -
Luo, Hao. A primal-dual flow for affine constrained convex optimization. ESAIM: Control, Optimisation and Calculus of Variations, Tome 28 (2022), article no. 33. doi: 10.1051/cocv/2022032