Dynamical systems/Probability theory
Asymptotic description of stochastic neural networks. II. Characterization of the limit law
[Description asymptotique de réseaux de neurones stochastiques. II. Caractérisation de la loi limite]
Comptes Rendus. Mathématique, Tome 352 (2014) no. 10, pp. 847-852.

Nous prolongeons le développement, commencé en [8], de la description asymptotique de certains réseaux de neurones stochastiques. Nous utilisons le principe de grandes déviations (PGD) et la bonne fonction de taux H que nous y annoncions pour démontrer l'existence d'un unique minimum, μe, de H, une mesure stationnaire sur l'ensemble T^Z des trajectoires. Nous caractérisons cette mesure par ses deux marginales, à l'instant 0, et du temps 1 au temps T. La seconde marginale est une mesure gaussienne stationnaire. Avec un œil sur les applications, nous montrons comment calculer de manière inductive sa moyenne et son opérateur de covariance. Nous montrons aussi comment utiliser le PGD pour établir des résultats de convergence en moyenne et presque sûrement.

We continue the development, started in [8], of the asymptotic description of certain stochastic neural networks. We use the Large Deviation Principle (LDP) and the good rate function H announced there to prove that H has a unique minimum μe, a stationary measure on the set T^Z of trajectories. We characterize this measure by its two marginals, at time 0, and from time 1 to time T. The second marginal is a stationary Gaussian measure. With an eye on applications, we show that its mean and covariance operator can be computed inductively. Finally, we use the LDP to establish various convergence results, both averaged and quenched.
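The convergence results mentioned in the abstract rest on a standard large-deviation argument: a good rate function with a unique zero forces exponential concentration of the empirical measures around that minimizer. A hedged sketch of this reasoning (the notation μ̂n for the empirical measure is an assumption, not taken verbatim from the paper):

```latex
% Sketch: H is a good rate function with unique minimizer \mu_e
% and H(\mu_e) = 0.  For any closed set F with \mu_e \notin F,
% goodness of H gives \inf_{\mu \in F} H(\mu) > 0, and the LDP
% upper bound yields
\limsup_{n\to\infty} \frac{1}{n}\,
  \log \mathbb{P}\bigl(\hat{\mu}_n \in F\bigr)
  \;\le\; -\inf_{\mu \in F} H(\mu) \;<\; 0,
% so \mathbb{P}(\hat{\mu}_n \in F) decays exponentially in n;
% by the Borel--Cantelli lemma, \hat{\mu}_n \to \mu_e almost surely.
```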

DOI : 10.1016/j.crma.2014.08.017
Faugeras, Olivier 1 ; Maclaurin, James 1

1 Inria Sophia-Antipolis Méditerranée, NeuroMathComp Group, France
@article{CRMATH_2014__352_10_847_0,
     author = {Faugeras, Olivier and Maclaurin, James},
     title = {Asymptotic description of stochastic neural networks. {II.} {Characterization} of the limit law},
     journal = {Comptes Rendus. Math\'ematique},
     pages = {847--852},
     publisher = {Elsevier},
     volume = {352},
     number = {10},
     year = {2014},
     doi = {10.1016/j.crma.2014.08.017},
     language = {en},
     url = {http://www.numdam.org/articles/10.1016/j.crma.2014.08.017/}
}

[1] Bressloff, P. Stochastic neural field theory and the system-size expansion, SIAM J. Appl. Math., Volume 70 (2009), pp. 1488-1521

[2] Buice, M.; Cowan, J. Field-theoretic approach to fluctuation effects in neural networks, Phys. Rev. E, Volume 75 (2007)

[3] Buice, M.; Cowan, J.; Chow, C. Systematic fluctuation expansion for neural network activity equations, Neural Comput., Volume 22 (2010), pp. 377-426

[4] Chiyonobu, T.; Kusuoka, S. The large deviation principle for hypermixing processes, Probab. Theory Relat. Fields, Volume 78 (1988), pp. 627-649

[5] ElBoustani, S.; Destexhe, A. A master equation formalism for macroscopic modeling of asynchronous irregular activity states, Neural Comput., Volume 21 (2009), pp. 46-100

[6] Faugeras, O.; MacLaurin, J. Large deviations of an ergodic synchronous neural network with learning, 2014 (arXiv preprint, INRIA Sophia Antipolis, France)

[7] Faugeras, O.; Maclaurin, J. Asymptotic description of neural networks with correlated synaptic weights, INRIA, March 2014 (Rapport de recherche RR-8495)

[8] Faugeras, O.; Maclaurin, J. Asymptotic description of stochastic neural networks. I. Existence of a large deviation principle, C. R. Acad. Sci. Paris, Ser. I, Volume 352 (2014) no. 10, pp. 841-846

[9] Ginzburg, I.; Sompolinsky, H. Theory of correlations in stochastic neural networks, Phys. Rev. E, Volume 50 (1994)

[10] Moynot, O. Étude mathématique de la dynamique des réseaux neuronaux aléatoires récurrents, Université Paul-Sabatier, Toulouse, 1999 (Ph.D. thesis)
