A primal-dual flow for affine constrained convex optimization
ESAIM: Control, Optimisation and Calculus of Variations, Tome 28 (2022), article no. 33

We introduce a novel primal-dual flow for affine constrained convex optimization problems. As a modification of the standard saddle-point system, our flow model is proved to possess an exponential decay property in terms of a tailored Lyapunov function. Two primal-dual methods are then obtained from numerical discretizations of the continuous flow, and a global nonergodic linear convergence rate is established via a discrete Lyapunov function. Instead of solving the subproblem of the primal variable, we apply the semi-smooth Newton iteration to the inner problem with respect to the multiplier, provided that some additional properties such as semi-smoothness and sparsity are available. Finally, numerical tests on the linearly constrained ℓ1–ℓ2 minimization problem and the total-variation based image denoising model are provided.
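For orientation, the "standard saddle-point system" referred to above is the classical primal-dual gradient (Arrow–Hurwicz) flow; the paper's flow is a modification of it, and the precise modified form is given in the article. As a minimal sketch of the baseline only: for $\min_x f(x)$ subject to $Ax = b$, with Lagrangian $\mathcal{L}(x,\lambda) = f(x) + \langle \lambda, Ax - b \rangle$, the standard flow reads

\[
\begin{aligned}
x'(t) &= -\nabla_x \mathcal{L}(x(t),\lambda(t)) = -\nabla f(x(t)) - A^{\top}\lambda(t),\\
\lambda'(t) &= \nabla_{\lambda} \mathcal{L}(x(t),\lambda(t)) = A\,x(t) - b,
\end{aligned}
\]

whose equilibria are exactly the saddle points of $\mathcal{L}$, i.e., the KKT points of the constrained problem. A forward Euler discretization with step sizes $\tau, \sigma > 0$ gives the generic primal-dual gradient iteration $x_{k+1} = x_k - \tau\,(\nabla f(x_k) + A^{\top}\lambda_k)$ and $\lambda_{k+1} = \lambda_k + \sigma\,(A x_k - b)$; the two methods analyzed in the paper arise instead from discretizations of the author's modified flow.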

DOI : 10.1051/cocv/2022032
Classification : 37M99, 37N40, 65K05, 90C25
Keywords: Convex optimization, linear constraint, dynamical system, Lyapunov function, exponential decay, discretization, nonergodic linear rate, primal-dual algorithm, semi-smooth Newton method, ℓ1–ℓ2 minimization, total-variation model
@article{COCV_2022__28_1_A33_0,
     author = {Luo, Hao},
     title = {A primal-dual flow for affine constrained convex optimization},
     journal = {ESAIM: Control, Optimisation and Calculus of Variations},
     year = {2022},
     publisher = {EDP-Sciences},
     volume = {28},
     doi = {10.1051/cocv/2022032},
     mrnumber = {4431917},
     zbl = {1500.90048},
     language = {en},
     url = {https://www.numdam.org/articles/10.1051/cocv/2022032/}
}
TY  - JOUR
AU  - Luo, Hao
TI  - A primal-dual flow for affine constrained convex optimization
JO  - ESAIM: Control, Optimisation and Calculus of Variations
PY  - 2022
VL  - 28
PB  - EDP-Sciences
UR  - https://www.numdam.org/articles/10.1051/cocv/2022032/
DO  - 10.1051/cocv/2022032
LA  - en
ID  - COCV_2022__28_1_A33_0
ER  - 
%0 Journal Article
%A Luo, Hao
%T A primal-dual flow for affine constrained convex optimization
%J ESAIM: Control, Optimisation and Calculus of Variations
%D 2022
%V 28
%I EDP-Sciences
%U https://www.numdam.org/articles/10.1051/cocv/2022032/
%R 10.1051/cocv/2022032
%G en
%F COCV_2022__28_1_A33_0
Luo, Hao. A primal-dual flow for affine constrained convex optimization. ESAIM: Control, Optimisation and Calculus of Variations, Tome 28 (2022), article no. 33. doi: 10.1051/cocv/2022032

[1] T. Aspelmeier, C. Charitha and D. R. Luke, Local linear convergence of the ADMM/Douglas-Rachford algorithms without strong convexity and application to statistical imaging. SIAM J. Imaging Sci. 9 (2016) 842–868. | MR | Zbl | DOI

[2] H. Attouch, Z. Chbani, J. Peypouquet and P. Redont, Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity. Math. Program. Ser. B 168 (2018) 123–175. | MR | Zbl | DOI

[3] H. Attouch, X. Goudou and P. Redont, The heavy ball with friction method, I. The continuous dynamical system: Global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system. Commun. Contemp. Math. 2 (2000) 1–34. | MR | Zbl | DOI

[4] H. Attouch, J. Peypouquet and P. Redont, Fast convex optimization via inertial dynamics with Hessian driven damping. J. Differ. Equ. 261 (2016) 5734–5783. | MR | Zbl | DOI

[5] A. Beck, First-Order Methods in Optimization, Vol. 25 of MOS-SIAM Series on Optimization. Society for Industrial and Applied Mathematics and the Mathematical Optimization Society, Philadelphia (2017). | MR | Zbl

[6] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2 (2009) 183–202. | MR | Zbl | DOI

[7] A. Beck and M. Teboulle, Gradient-based algorithms with applications to signal-recovery problems, in D. Palomar and Y. Eldar, editors, Convex Optimization in Signal Processing and Communications. Cambridge University Press, Cambridge (2009) 42–88. | MR | Zbl | DOI

[8] D. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York (2014). | MR

[9] S. Bonettini and V. Ruggiero, On the convergence of primal-dual hybrid gradient algorithms for total variation image restoration. J. Math. Imaging Vis. 44 (2012) 236–253. | MR | Zbl | DOI

[10] S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3 (2010) 1–122. | Zbl | DOI

[11] H. Brezis, Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, North-Holland Mathematics Studies, No. 5, Notas de Matemática (50). North-Holland Publishing Co., Amsterdam (1973). | MR | Zbl

[12] J.-F. Cai, S. Osher and Z. Shen, Linearized Bregman iterations for compressed sensing. Math. Comput. 78 (2009) 1515–1536. | MR | Zbl | DOI

[13] C. Clason and T. Valkonen, Nonsmooth Analysis and Optimization. Preprint (2020). | arXiv

[14] E. Candes and M. Wakin, An introduction to compressive sampling. IEEE Signal Process. Mag. 25 (2008) 21–30. | DOI

[15] A. Chambolle, An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20 (2004) 89–97. | MR | Zbl | DOI

[16] A. Chambolle and T. Pock, A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40 (2011) 120–145. | MR | Zbl | DOI

[17] A. Chambolle and T. Pock, An introduction to continuous optimization for imaging. Acta Numer. 25 (2016) 161–319. | MR | Zbl | DOI

[18] A. Chambolle and T. Pock, On the ergodic convergence rates of a first-order primal-dual algorithm. Math. Program. 159 (2016) 253–287. | MR | Zbl | DOI

[19] L. Chen and H. Luo, A unified convergence analysis of first order convex optimization methods via strong Lyapunov functions. Preprint (2021). | arXiv | Zbl

[20] L. Chen and H. Luo, First order optimization methods based on Hessian-driven Nesterov accelerated gradient flow. Preprint (2019). | arXiv | Zbl

[21] S. Chen, D. Donoho and M. Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20 (1999) 33–61. | MR | Zbl | DOI

[22] A. Cherukuri, B. Gharesifard and J. Cortés, Saddle-point dynamics: conditions for asymptotic stability of saddle points. SIAM J. Control Optim. 55 (2017) 486–511. | MR | Zbl | DOI

[23] A. Cherukuri, E. Mallada and J. Cortés, Asymptotic convergence of constrained primal-dual dynamics. Syst. Control Lett. 87 (2016) 10–15. | MR | Zbl | DOI

[24] F. Clarke, Optimization and Nonsmooth Analysis. Number 5 in Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (1987). | MR

[25] D. Davis and W. Yin, Faster convergence rates of relaxed Peaceman-Rachford and ADMM under regularity assumptions. Preprint (2015). | arXiv | MR

[26] W. Deng and W. Yin, On the global and linear convergence of the generalized alternating direction method of multipliers. J. Sci. Comput. 66 (2016) 889–916. | MR | Zbl | DOI

[27] J. Dennis and R. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Number 16 in Classics in Applied Mathematics. Society for Industrial and Applied Mathematics, Philadelphia (1996). | MR

[28] B. Djafari-Rouhani and H. Khatibzadeh, Nonlinear Evolution and Difference Equations of Monotone Type in Hilbert Spaces. CRC Press, Boca Raton (2019), 1st edition. | MR | Zbl | DOI

[29] J. Douglas and H. H. Rachford, On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82 (1956) 421–439. | MR | Zbl | DOI

[30] J. Eckstein, Augmented Lagrangian and alternating direction methods for convex optimization: a tutorial and some illustrative computational results, Technical report, Rutgers University (2012).

[31] E. Esser, X. Zhang and T. F. Chan, A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imaging Sci. 3 (2010) 1015–1046. | MR | Zbl | DOI

[32] F. Facchinei and J. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, vol. 2. Springer, New York (2006). | MR | Zbl

[33] D. Feijer and F. Paganini, Stability of primal-dual gradient dynamics and applications to network optimization. Automatica 46 (2010) 1974–1981. | MR | Zbl | DOI

[34] G. Franca, D. Robinson and R. Vidal, ADMM and accelerated ADMM as continuous dynamical systems. 35th Int. Conf. Mach. Learn. ICML 2018 4 (2018) 2528–2536. | Zbl

[35] M. Fortin and R. Glowinski, On decomposition-coordination methods using an augmented Lagrangian, in Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, Vol. 15 of Studies in Mathematics and Its Applications. North-Holland Publishing, Amsterdam (1983). | Zbl

[36] D. Gabay and B. Mercier, A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Comput. Math. Appl. 2 (1976) 17–40. | Zbl | DOI

[37] P. Giselsson and S. Boyd, Linear convergence and metric selection for Douglas-Rachford splitting and ADMM. IEEE Trans. Automat. Contr. 62 (2017) 532–544. | MR | Zbl | DOI

[38] J. Han and D. Sun, Newton and quasi-Newton methods for normal maps with polyhedral sets. J. Optim. Theory Appl. 94 (1997) 659–676. | MR | Zbl | DOI

[39] D. Han, D. Sun and L. Zhang, Linear rate convergence of the alternating direction method of multipliers for convex composite quadratic and semi-definite programming. Preprint (2015). | arXiv | Zbl

[40] A. Haraux, Systèmes Dynamiques Dissipatifs et Applications, Vol. 17 of Recherches en Mathématiques Appliquées [Research in Applied Mathematics]. Masson, Paris (1991). | MR | Zbl

[41] X. He, R. Hu and Y. Fang, Convergence rates of inertial primal-dual dynamical methods for separable convex optimization problems. Preprint (2020). | arXiv | MR | Zbl

[42] B. He, F. Ma and X. Yuan, An algorithmic framework of generalized primal-dual hybrid gradient methods for saddle point problems. J. Math. Imaging Vis. 58 (2017) 279–293. | MR | Zbl | DOI

[43] B. He, Y. You and X. Yuan, On the convergence of primal-dual hybrid gradient algorithm. SIAM J. Imaging Sci. 7 (2014) 2526–2537. | MR | Zbl | DOI

[44] B. He and X. Yuan, Convergence analysis of primal-dual algorithms for a saddle-point problem: from contraction perspective. SIAM J. Imaging Sci. 5 (2012) 119–149. | MR | Zbl | DOI

[45] B. Huang, S. Ma and D. Goldfarb, Accelerated linearized Bregman method. J. Sci. Comput. 54 (2013) 428–453. | MR | Zbl | DOI

[46] F. Jiang, X. Cai, Z. Wu and D. Han, Approximate first-order primal-dual algorithms for saddle point problems. Math. Comput. 90 (2021) 1227–1262. | MR | Zbl | DOI

[47] M. Kang, M. Kang and M. Jung, Inexact accelerated augmented Lagrangian methods. Comput. Optim. Appl. 62 (2015) 373–404. | MR | Zbl | DOI

[48] M. Kang, S. Yun, H. Woo and M. Kang, Accelerated Bregman method for linearly constrained ℓ1–ℓ2 minimization. J. Sci. Comput. 56 (2013) 515–534. | MR | Zbl | DOI

[49] G. Lan and R. Monteiro, Iteration-complexity of first-order penalty methods for convex programming. Math. Program. 138 (2013) 115–139. | MR | Zbl | DOI

[50] Y. J. Lee, J. Wu, J. Xu and L. Zikatanov, Robust subspace correction methods for nearly singular systems. Math. Models Methods Appl. Sci. 17 (2007) 1937–1963. | MR | Zbl | DOI

[51] D. Li, X. Sun and K. Toh, On the efficient computation of a generalized Jacobian of the projector over the Birkhoff polytope. Math. Program. 179 (2020) 419–446. | MR | Zbl | DOI

[52] H. Li, C. Fang and Z. Lin, Convergence rates analysis of the quadratic penalty method and its applications to decentralized distributed optimization. Preprint (2017). | arXiv | Zbl

[53] X. Li, D. Sun and K. Toh, An asymptotically superlinearly convergent semismooth Newton augmented Lagrangian method for linear programming. SIAM J. Optim. 30 (2020) 2410–2440. | MR | Zbl | DOI

[54] T. Lin and M. I. Jordan, A control-theoretic perspective on optimal high-order optimization. Preprint (2019). | arXiv | MR | Zbl

[55] Y. Liu, X. Yuan, S. Zeng and J. Zhang, Partial error bound conditions and the linear convergence rate of the alternating direction method of multipliers. SIAM J. Numer. Anal. 56 (2018) 2095–2123. | MR | Zbl | DOI

[56] H. Lu, An O(s^r)-resolution ODE framework for discrete-time optimization algorithms and applications to convex-concave saddle-point problems. Preprint (2020). | arXiv

[57] H. Luo, Accelerated differential inclusion for convex optimization. Optimization (2021). | DOI | MR

[58] H. Luo and L. Chen, From differential equation solvers to accelerated first-order methods for convex optimization. Math. Program. (2021). | DOI | MR

[59] Y. Nesterov, Gradient methods for minimizing composite functions. Math. Program. Ser. B 140 (2013) 125–161. | MR | Zbl | DOI

[60] D. Niu, C. Wang, P. Tang, Q. Wang and E. Song, A sparse semismooth Newton based augmented Lagrangian method for large-scale support vector machines. Preprint (2019). | arXiv

[61] D. O’Connor and L. Vandenberghe, On the equivalence of the primal-dual hybrid gradient method and Douglas–Rachford splitting. Math. Program. (2020). | DOI | MR | Zbl

[62] S. Osher, M. Burger, D. Goldfarb, J. Xu and W. Yin, An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4 (2005) 460–489. | MR | Zbl | DOI

[63] N. Parikh and S. Boyd, Proximal algorithms. Found. Trends Optim. 1 (2014) 127–239.

[64] T. Pock, D. Cremers, H. Bischof and A. Chambolle, An algorithm for minimizing the Mumford-Shah functional. In 2009 IEEE 12th International Conference on Computer Vision. IEEE, Kyoto (2009) 1133–1140.

[65] D. W. Peaceman and H. H. Rachford, The numerical solution of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. 3 (1955) 28–41. | MR | Zbl | DOI

[66] L. Qi, Convergence analysis of some algorithms for solving nonsmooth equations. Math. Oper. Res. 18 (1993) 227–244. | MR | Zbl | DOI

[67] L. Qi and J. Sun, A nonsmooth version of Newton’s method. Math. Program. 58 (1993) 353–367. | MR | Zbl | DOI

[68] L. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms. Phys. D 60 (1992) 259–268. | MR | Zbl | DOI

[69] Y. Saad, Iterative Methods for Sparse Linear Systems, 2nd edn. Society for Industrial and Applied Mathematics, Philadelphia (2003). | MR | Zbl

[70] S. Sabach and M. Teboulle, Faster Lagrangian-based methods in convex optimization. Preprint (2020). | arXiv | MR | Zbl

[71] W. Su, S. Boyd and E. Candès, A differential equation for modeling Nesterov’s accelerated gradient method: theory and insights. J. Mach. Learn. Res. 17 (2016) 1–43. | MR | Zbl

[72] M. Tao and X. Yuan, Accelerated Uzawa methods for convex optimization. Math. Comput. 86 (2017) 1821–1845. | MR | Zbl | DOI

[73] Q. Tran-Dinh, A unified convergence rate analysis of the accelerated smoothed gap reduction algorithm. Optim. Lett. (2021). | DOI | MR | Zbl

[74] Q. Tran-Dinh, Proximal alternating penalty algorithms for nonsmooth constrained convex optimization. Comput. Optim. Appl. 72 (2019) 1–43. | MR | Zbl | DOI

[75] Q. Tran-Dinh and V. Cevher, Constrained convex minimization via model-based excessive gap, in Proc. Neural Information Processing Systems (NIPS), Vol. 27, Montreal, Canada (2014) 721–729.

[76] Q. Tran-Dinh, O. Fercoq and V. Cevher, A smooth primal-dual optimization framework for nonsmooth composite convex minimization. SIAM J. Optim. 28 (2018) 96–134. | MR | Zbl | DOI

[77] Q. Tran-Dinh and Y. Zhu, Augmented Lagrangian-based decomposition methods with non-ergodic optimal rates. (2018). | arXiv

[78] Q. Tran-Dinh and Y. Zhu, Non-stationary first-order primal-dual algorithms with faster convergence rates. SIAM J. Optim. 30 (2020) 2866–2896. | MR | Zbl | DOI

[79] T. Valkonen, Inertial, corrected, primal-dual proximal splitting. SIAM J. Optim. 30 (2020) 1391–1420. | MR | Zbl | DOI

[80] A. Wibisono, A. Wilson and M. Jordan, A variational perspective on accelerated methods in optimization. Proc. Natl. Acad. Sci. 113 (2016) E7351–E7358. | MR | Zbl

[81] A. C. Wilson, B. Recht and M. I. Jordan, A Lyapunov analysis of accelerated methods in optimization. J. Mach. Learn. Res. 22 (2021) 1–34. | MR | Zbl

[82] Y. Xu, Accelerated first-order primal-dual proximal methods for linearly constrained composite convex programming. SIAM J. Optim. 27 (2017) 1459–1484. | MR | Zbl | DOI

[83] J. Xu and L. Zikatanov, Algebraic multigrid methods. Acta Numer. 26 (2017) 591–721. | MR | Zbl | DOI

[84] W. H. Yang and D. Han, Linear convergence of the alternating direction method of multipliers for a class of convex optimization problems. SIAM J. Numer. Anal. 54 (2016) 625–640. | MR | Zbl | DOI

[85] M. Yan and W. Yin, Self equivalence of the alternating direction method of multipliers. Preprint (2015). | arXiv | MR | Zbl

[86] W. Yin, Analysis and generalizations of the linearized Bregman method. SIAM J. Imaging Sci. 3 (2010) 856–877. | MR | Zbl | DOI

[87] X. Yuan, S. Zeng and J. Zhang, Discerning the linear convergence of ADMM for structured convex optimization through the lens of variational analysis. J. Mach. Learn. Res. 21 (2020) 1–74. | MR | Zbl

[88] X. Zeng, J. Lei and J. Chen, Dynamical primal-dual accelerated method with applications to network optimization. Preprint (2019), pp. 1–22. | arXiv | Zbl

[89] M. Zhu and T. Chan, An efficient primal-dual hybrid gradient algorithm for total variation image restoration. Technical report CAM Report 08-34, UCLA, Los Angeles, CA, USA (2008).
