Approximation theory of artificial neural networks

supervisor: prof. RNDr. Jan Vybíral, Ph.D.
type of thesis: doctoral dissertation
specialization: MI_MM, MI_AMSM, MINF
keywords: approximation theory, artificial neural networks, decay rates, function spaces, Barron spaces, curse of dimensionality
description: In recent years, we have witnessed a great expansion of the use of ANNs in nearly every field of human activity. On the other hand, their use in critical applications such as transport, the financial sector, or security requires that we deepen our knowledge of their properties and understand their performance and its limits. This has motivated many authors to undertake the study of many aspects of ANNs, including their connection to approximation theory. Starting with Yarotsky [1], they were able to characterize the decay rates of the worst-case approximation error for functions from certain function spaces in dependence on the width and depth of the ANN used. Moreover, the role of adaptive selection of the ANN architecture, the limits of continuous dependence of the weights of the ANN on the approximated function f, as well as lower bounds on the possible performance of ANNs (using, for example, the notion of the Vapnik--Chervonenkis dimension) are now well understood. The introduction of Barron spaces in [2] made it possible to explain the success of ANNs in dealing with high-dimensional problems. Recently, a univariate system of ANNs forming a Riesz basis of L_2([0,1]) was introduced in [3] and later generalized to higher dimensions in [4]. Surprisingly, the properties of this system are independent of the dimension d, which makes it an appealing tool for the analysis of multivariate ANNs. The student will try to generalize the analysis from square-integrable functions to functions from Sobolev spaces with limited smoothness and to Barron and other function spaces.
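To give a concrete flavor of the constructions behind [1] and [3]: a shallow ReLU network can represent the piecewise-linear "hat" (tent) function on [0,1] exactly, and compositions of this hat generate the sawtooth functions underlying Yarotsky's approximation rates. The following is a minimal illustrative sketch (not part of the cited works' code; the function names are ours):

```python
import numpy as np

def relu(x):
    """The ReLU activation, max(x, 0)."""
    return np.maximum(x, 0.0)

def hat(x):
    """Tent function on [0,1] as a one-hidden-layer ReLU network:
    rises linearly from 0 at x=0 to 1 at x=1/2, then back to 0 at x=1."""
    return 2.0 * (relu(x) - 2.0 * relu(x - 0.5) + relu(x - 1.0))

x = np.linspace(0.0, 1.0, 101)
# On [0,1] the network coincides with the classical tent min(2x, 2-2x).
tent = np.minimum(2.0 * x, 2.0 - 2.0 * x)
assert np.allclose(hat(x), tent)

# Composing the hat with itself k times yields a sawtooth with 2^(k-1) teeth,
# realized by a ReLU network of depth k -- the building block of Yarotsky's
# fast approximation of x^2 and, from there, of smooth functions.
saw2 = hat(hat(x))
```

The key point is that depth buys exponentially many oscillations from linearly many parameters, which is exactly the mechanism behind the decay rates discussed above.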
references:
[1] D. Yarotsky, Error bounds for approximations with deep ReLU networks, Neural Networks 94 (2017), 103-114
[2] A. R. Barron, Universal approximation bounds for superpositions of a sigmoidal function, IEEE Trans. Inform. Theory 39 (1993), no. 3, 930-945
[3] I. Daubechies, R. DeVore, S. Foucart, B. Hanin, and G. Petrova, Nonlinear approximation and (deep) ReLU networks, Constr. Approx. 55 (2022), no. 1, 127-172
[4] C. Schneider and J. Vybiral, A multivariate Riesz basis of ReLU neural networks, Appl. Comput. Harmon. Anal. 68 (2024), 101605
last modified: 02.04.2025 22:08:35

responsible for the content of this page: Ľubomíra Dvořáková | last modified: 12.9.2011
Trojanova 13, 120 00 Praha 2, tel. +420 770 127 494
Czech Technical University in Prague | Faculty of Nuclear Sciences and Physical Engineering | Department of Mathematics