High-dimensional gaussian model selection on a gaussian design
Annales de l'I.H.P. Probabilités et statistiques, Volume 46 (2010) no. 2, p. 480-524

We consider the problem of estimating the conditional mean of a real Gaussian variable Y = ∑_{i=1}^{p} θ_i X_i + ε, where the vector of covariates (X_i)_{1≤i≤p} follows a joint Gaussian distribution. This issue often arises when one aims to estimate the graph or the distribution of a Gaussian graphical model. We introduce a general model selection procedure based on the minimization of a penalized least squares criterion. It handles a variety of problems, such as ordered and complete variable selection, allows one to incorporate prior knowledge on the model, and applies when the number of covariates p is larger than the number of observations n. Moreover, it is shown to achieve a non-asymptotic oracle inequality independently of the correlation structure of the covariates. We also exhibit various minimax rates of estimation in the considered framework and hence derive adaptivity properties of our procedure.
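To make the procedure concrete, the following is a minimal illustrative sketch of complete variable selection by penalized least squares: for each candidate subset of covariates, fit ordinary least squares and minimize a fitted-residual criterion plus a complexity penalty. The BIC-style penalty K·|m|·log(p) used here is an assumption for illustration only; it is not the paper's penalty, which is calibrated to yield non-asymptotic oracle inequalities.

```python
import itertools
import numpy as np

def penalized_ls_selection(X, y, max_size=3, K=2.0):
    """Complete variable selection over all subsets of size <= max_size.

    Minimizes n*log(RSS_m / n) + K*|m|*log(p), a BIC-like criterion
    (a stand-in for the paper's penalty, used here only as a sketch).
    Returns the selected subset (tuple of column indices) and its criterion.
    """
    n, p = X.shape
    best_crit, best_model = np.inf, ()
    for size in range(0, max_size + 1):
        for m in itertools.combinations(range(p), size):
            if size == 0:
                rss = float(y @ y)  # empty model: predict zero
            else:
                Xm = X[:, m]
                beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
                resid = y - Xm @ beta
                rss = float(resid @ resid)
            crit = n * np.log(rss / n) + K * size * np.log(p)
            if crit < best_crit:
                best_crit, best_model = crit, m
    return best_model, best_crit
```

With a strong signal on two covariates and many irrelevant ones, the selected subset typically contains exactly the true support; exhaustive search is feasible only for small p or small max_size, which is why ordered selection and structured model collections matter in practice.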

Nous nous intéressons à l'estimation de l'espérance conditionnelle d'une variable gaussienne. Ce problème est courant lorsque l'on veut estimer le graphe ou la distribution d'un modèle graphique gaussien. Dans cet article, nous introduisons une procédure de sélection de modèle basée sur la minimisation d'un critère des moindres carrés pénalisés. Cette méthode générale permet de traiter un grand nombre de problèmes comme la sélection ordonnée ou la sélection complête de variables. De plus, elle reste valable dans un cadre de « grande dimension »: lorsque le nombre de covariables est bien plus élevé que le nombre d'observations. L'estimateur obtenue vérifie une inégalité oracle non-asymptotique et ce quelque soit la corrélation entre les covariables. Nous calculons également des vitesses minimax d'estimation dans ce cadre et montrons que notre procédure vérifie diverses propriétés d'adaptation.

DOI: https://doi.org/10.1214/09-AIHP321
Classification: 62J05, 62G08
Keywords: model selection, linear regression, oracle inequalities, Gaussian graphical models, minimax rates of estimation
@article{AIHPB_2010__46_2_480_0,
     author = {Verzelen, Nicolas},
     title = {High-dimensional gaussian model selection on a gaussian design},
     journal = {Annales de l'I.H.P. Probabilit\'es et statistiques},
     publisher = {Gauthier-Villars},
     volume = {46},
     number = {2},
     year = {2010},
     pages = {480-524},
     doi = {10.1214/09-AIHP321},
     zbl = {1191.62076},
     mrnumber = {2667707},
     language = {en},
     url = {http://www.numdam.org/item/AIHPB_2010__46_2_480_0}
}
Verzelen, Nicolas. High-dimensional gaussian model selection on a gaussian design. Annales de l'I.H.P. Probabilités et statistiques, Volume 46 (2010) no. 2, pp. 480-524. doi: 10.1214/09-AIHP321. http://www.numdam.org/item/AIHPB_2010__46_2_480_0/

[1] H. Akaike. Statistical predictor identification. Ann. Inst. Statist. Math. 22 (1970) 203-217. | MR 286233 | Zbl 0259.62076

[2] H. Akaike. A new look at the statistical model identification. IEEE Trans. Automat. Control 19 (1974) 716-723. | MR 423716 | Zbl 0314.62039

[3] S. Arlot. Model selection by resampling penalization. Electron. J. Stat. 3 (2009) 557-624. | MR 2519533

[4] Y. Baraud, C. Giraud and S. Huet. Gaussian model selection with an unknown variance. Ann. Statist. 37 (2009) 630-672. | MR 2502646 | Zbl 1162.62051

[5] P. Bickel, Y. Ritov and A. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Ann. Statist. 37 (2009) 1705-1732. | MR 2533469 | Zbl 1173.62022

[6] L. Birgé. A new lower bound for multiple hypothesis testing. IEEE Trans. Inform. Theory 51 (2005) 1611-1615. | MR 2241522 | Zbl 1283.62030

[7] L. Birgé and P. Massart. Minimum contrast estimators on sieves: Exponential bounds and rates of convergence. Bernoulli 4 (1998) 329-375. | MR 1653272 | Zbl 0954.62033

[8] L. Birgé and P. Massart. Gaussian model selection. J. Eur. Math. Soc. (JEMS) 3 (2001) 203-268. | MR 1848946 | Zbl 1037.62001

[9] L. Birgé and P. Massart. Minimal penalties for Gaussian model selection. Probab. Theory Related Fields 138 (2007) 33-73. | MR 2288064 | Zbl 1112.62082

[10] F. Bunea, A. Tsybakov and M. Wegkamp. Aggregation for Gaussian regression. Ann. Statist. 35 (2007) 1674-1697. | MR 2351101 | Zbl 1209.62065

[11] F. Bunea, A. Tsybakov and M. Wegkamp. Sparsity oracle inequalities for the Lasso. Electron. J. Stat. 1 (2007) 169-194 (electronic). | MR 2312149 | Zbl 1146.62028

[12] E. J. Candes and T. Tao. Decoding by linear programming. IEEE Trans. Inform. Theory 51 (2005) 4203-4215. | MR 2243152

[13] E. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Statist. 35 (2007) 2313-2351. | MR 2382644 | Zbl 1139.62019

[14] E. Candès and Y. Plan. Near-ideal model selection by l1 minimization. Ann. Statist. To appear, 2009. | MR 2543688 | Zbl 1173.62053

[15] R. G. Cowell, A. P. Dawid, S. L. Lauritzen and D. J. Spiegelhalter. Probabilistic Networks and Expert Systems. Statistics for Engineering and Information Science. Springer, New York, 1999. | MR 1697175 | Zbl 1120.68444

[16] N. A. C. Cressie. Statistics for Spatial Data. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics. Wiley, New York, 1993. (Revised reprint of the 1991 edition, Wiley.) | MR 1239641 | Zbl 0799.62002

[17] K. R. Davidson and S. J. Szarek. Local operator theory, random matrices and Banach spaces. In Handbook of the Geometry of Banach Spaces, Vol. I 317-366. North-Holland, Amsterdam, 2001. | MR 1863696 | Zbl 1067.46008

[18] C. Giraud. Estimation of Gaussian graphs by model selection. Electron. J. Stat. 2 (2008) 542-563. | MR 2417393

[19] T. Gneiting. Power-law correlations, related models for long-range dependence and their simulation. J. Appl. Probab. 37 (2000) 1104-1109. | MR 1808873 | Zbl 0972.62079

[20] M. Kalisch and P. Bühlmann. Estimating high-dimensional directed acyclic graphs with the PC-algorithm. J. Mach. Learn. Res. 8 (2007) 613-636.

[21] B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. Ann. Statist. 28 (2000) 1302-1338. | MR 1805785 | Zbl 1105.62328

[22] S. L. Lauritzen. Graphical Models. Oxford Statistical Science Series 17. The Clarendon Press, Oxford University Press, New York, 1996. | MR 1419991 | Zbl 0907.62001

[23] C. L. Mallows. Some comments on Cp. Technometrics 15 (1973) 661-675. | Zbl 0269.62061

[24] P. Massart. Concentration Inequalities and Model Selection. Lecture Notes in Mathematics 1896. Springer, Berlin, 2007. (Lectures from the 33rd Summer School on Probability Theory held in Saint-Flour, July 6-23, 2003, with a foreword by Jean Picard.) | MR 2319879 | Zbl 1170.60006

[25] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Ann. Statist. 34 (2006) 1436-1462. | MR 2278363 | Zbl 1113.62082

[26] V. H. De La Peña and E. Giné. Decoupling. Probability and Its Applications. Springer, New York, 1999. (From dependence to independence, randomly stopped processes. U-statistics and processes. Martingales and beyond.) | MR 1666908 | Zbl 0918.60021

[27] D. Von Rosen. Moments for the inverted Wishart distribution. Scand. J. Statist. 15 (1988) 97-109. | MR 968156 | Zbl 0663.62063

[28] H. Rue and L. Held. Gaussian Markov Random Fields: Theory and Applications. Monographs on Statistics and Applied Probability 104. Chapman & Hall/CRC, London, 2005. | MR 2130347 | Zbl 1093.60003

[29] K. Sachs, O. Perez, D. Pe'er, D. A. Lauffenburger and G. P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science 308 (2005) 523-529.

[30] J. Schäfer and K. Strimmer. An empirical Bayes approach to inferring large-scale gene association network. Bioinformatics 21 (2005) 754-764.

[31] G. Schwarz. Estimating the dimension of a model. Ann. Statist. 6 (1978) 461-464. | MR 468014 | Zbl 0379.62005

[32] R. Shibata. An optimal selection of regression variables. Biometrika 68 (1981) 45-54. | MR 614940 | Zbl 0464.62054

[33] C. Stone. An asymptotically optimal histogram selection rule. In Proceedings of the Berkeley Conference in Honor of Jerzy Neyman and Jack Kiefer, Vol. II (Berkeley, Calif., 1983) 513-520. Wadsworth Statist./Probab. Ser. Wadsworth, Belmont, CA, 1985. | MR 822050

[34] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 (1996) 267-288. | MR 1379242 | Zbl 0850.62538

[35] A. Tsybakov. Optimal rates of aggregation. In Learning Theory and Kernel Machines (COLT 2003) 303-313. Lecture Notes in Comput. Sci. 2777. Springer, Heidelberg, 2003. | Zbl 1208.62073

[36] N. Verzelen and F. Villers. Goodness-of-fit tests for high-dimensional Gaussian linear models. Ann. Statist. To appear, 2009. | MR 2604699 | Zbl 1183.62074

[37] M. J. Wainwright. Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting. Technical Report 725, Department of Statistics, UC Berkeley, 2007.

[38] A. Wille, P. Zimmermann, E. Vranova, A. Fürholz, O. Laule, S. Bleuler, L. Hennig, A. Prelic, P. Von Rohr, L. Thiele, E. Zitzler, W. Gruissem and P. Bühlmann. Sparse graphical Gaussian modelling of the isoprenoid gene network in Arabidopsis thaliana. Genome Biology 5 (2004) R92.

[39] P. Zhao and B. Yu. On model selection consistency of Lasso. J. Mach. Learn. Res. 7 (2006) 2541-2563. | MR 2274449

[40] H. Zou. The adaptive Lasso and its oracle properties. J. Amer. Statist. Assoc. 101 (2006) 1418-1429. | MR 2279469 | Zbl 1171.62326