We consider best approximation problems in a nonlinear subset ℳ of a Banach space of functions (𝒱,∥•∥). The norm is assumed to be a generalization of the L²-norm for which only a weighted Monte Carlo estimate ∥•∥ₙ can be computed. The objective is to obtain an approximation v ∈ ℳ of an unknown function u ∈ 𝒱 by minimizing the empirical norm ∥u − v∥ₙ. We consider this problem for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability and is independent of the specified nonlinear least squares setting. Several model classes are examined and the analytical statements about the RIP are compared to existing sample complexity bounds from the literature. We find that for well-studied model classes our general bound is weaker but exhibits many of the same properties as these specialized bounds. Notably, we demonstrate the advantage of an optimal sampling density (as known for linear spaces) for sets of functions with sparse representations.
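The weighted empirical norm described in the abstract can be illustrated with a minimal sketch. The following is not the paper's method but a hypothetical example: samples are drawn from a sampling density ρ, the weight w = 1/ρ corrects the Monte Carlo estimate of the L²-norm, and the empirical norm ∥u − v∥ₙ is minimized over a simple linear model space of polynomials (the paper treats general nonlinear subsets ℳ). All function names and parameters here are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hypothetical illustration of weighted empirical least squares on [0, 1]:
# approximate an "unknown" function u from n weighted Monte Carlo samples.
rng = np.random.default_rng(0)

u = lambda x: np.sin(2 * np.pi * x)   # stand-in for the unknown target u
n = 200

# Samples drawn from a sampling density rho (here: uniform on [0, 1]);
# the weight w = 1/rho makes the empirical norm an unbiased L2 estimate.
x = rng.uniform(0.0, 1.0, n)
rho = np.ones(n)
w = 1.0 / rho

# Model class: polynomials of degree < d (a linear space, for simplicity).
d = 8
A = np.vander(x, d, increasing=True)  # design matrix

# Minimize the weighted empirical norm
#   ||u - v||_n^2 = (1/n) * sum_i w_i * (u(x_i) - v(x_i))^2.
sqrt_w = np.sqrt(w / n)
coef, *_ = np.linalg.lstsq(sqrt_w[:, None] * A, sqrt_w * u(x), rcond=None)

# Empirical best approximation error at the minimizer
v = A @ coef
err_n = np.sqrt(np.mean(w * (u(x) - v) ** 2))
print(f"empirical error: {err_n:.2e}")
```

When the number of samples n is large enough relative to the model class (the regime the paper's RIP-based bounds quantify), the empirical norm is equivalent to the true norm with high probability, so the empirical minimizer is a near-best approximation.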
Keywords: Weighted nonlinear least squares, error analysis, convergence rates, weighted sparsity, tensor networks
@article{M2AN_2022__56_1_79_0,
author = {Eigel, Martin and Schneider, Reinhold and Trunschke, Philipp},
title = {Convergence bounds for empirical nonlinear least-squares},
journal = {ESAIM: Mathematical Modelling and Numerical Analysis},
pages = {79--104},
year = {2022},
publisher = {EDP-Sciences},
volume = {56},
number = {1},
doi = {10.1051/m2an/2021070},
mrnumber = {4376270},
language = {en},
url = {https://www.numdam.org/articles/10.1051/m2an/2021070/}
}