prof. RNDr. Jan Vybíral, Ph.D.

phone: +420 778 546 165
room: 33c
www: http://kmlinux.fjfi.cvut.cz/~vybirja2/
 
schedule

Structured matrices in compressed sensing

supervisor: doc. RNDr. Jan Vybíral, Ph.D.
type of thesis: bachelor's thesis, master's thesis
specialization: MI_AMSM, MINF
keywords: random matrices, compressed sensing, Johnson–Lindenstrauss lemma
description: The student will review some basic properties of random matrices in the area of compressed sensing. He or she will then discuss the role of structured random matrices: the speed-up of matrix-vector multiplication, the reduced number of random bits needed, theoretical guarantees for their performance, and their practical performance on some specific problems. (An illustrative sketch of such a structured transform follows below this entry.)
references: H. Boche, R. Calderbank, and G. Kutyniok, A Survey of Compressed Sensing, first chapter in Compressed Sensing and its Applications, Birkhäuser, Springer, 2015; F. Krahmer and R. Ward, New and improved Johnson–Lindenstrauss embeddings via the restricted isometry property, SIAM Journal on Mathematical Analysis, 2011
note: The preferred language of the thesis is English.
last modified: 14.07.2022 09:20:08
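
The speed-up mentioned in the description can be illustrated with a subsampled randomized Fourier transform, one common example of a structured random matrix. The following Python sketch (purely illustrative and not part of the topic assignment; the function name srft and all parameters are chosen here only for demonstration) applies such a transform to a vector in O(n log n) time via the FFT, instead of the O(mn) cost of a dense random matrix.

import numpy as np

def srft(x, rows, signs):
    # Subsampled randomized Fourier transform: a random sign flip,
    # an FFT, and a random selection of m rows; the 1/sqrt(m) scaling
    # preserves the Euclidean norm in expectation.
    return np.fft.fft(signs * x)[rows] / np.sqrt(len(rows))

rng = np.random.default_rng(0)
n, m = 1024, 64
signs = rng.choice([-1.0, 1.0], size=n)       # random diagonal of +/-1 signs
rows = rng.choice(n, size=m, replace=False)   # random subset of m rows

x = rng.standard_normal(n)
y = srft(x, rows, signs)                      # m-dimensional sketch of x
print(np.linalg.norm(x), np.linalg.norm(y))   # the two norms are comparable

The whole map costs one FFT per vector and stores only n random signs and m row indices, instead of an m-by-n matrix of independent Gaussian entries.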

Impact of delay on pattern formation and stability in the attraction-repulsion model

supervisors: Dr. Jan Haškovec, doc. Jan Vybíral
type of thesis: bachelor's thesis
specialization: MI_MM, MI_AMSM, MINF, APIN
keywords: attraction-repulsion model, pattern formation, delay differential equations
attached file: PDF
description: See the attached PDF file for the description.
last modified: 05.10.2022 20:58:57

Bases of ReLU Neural Networks

supervisor: doc. RNDr. Jan Vybíral, Ph.D.
type of thesis: bachelor's thesis, master's thesis
specialization: MI_MM, MI_AMSM, MINF
keywords: ReLU, neural networks, Riesz basis, frame
description: The astonishing performance of neural networks in multivariate problems still lacks a satisfactory mathematical explanation. In a recent work, I. Daubechies and her co-authors proposed a univariate system of piecewise linear functions which closely resembles the trigonometric system and forms a so-called Riesz basis. Moreover, these functions are easily realized by ReLU neural networks. In a follow-up work of C. Schneider and J. Vybiral, this was generalized to the multivariate setting. The task of this work will be to investigate further potential improvements of this recent research, on both the theoretical and the practical side. This includes a) optimization of the Riesz constants of the system, b) application of an orthonormalization procedure, and c) a numerical implementation of the proposed NN architecture and a study of its performance in the approximation of multivariate functions. (A minimal sketch of a piecewise linear function realized by a ReLU network follows below this entry.)
references: C. Schneider and J. Vybiral, Multivariate Riesz basis of ReLU neural networks, submitted; I. Daubechies, R. DeVore, S. Foucart, B. Hanin, and G. Petrova, Nonlinear Approximation and (Deep) ReLU Networks, Constr. Approx. 55 (2022), 127–172; P. Beneventano, P. Cheridito, R. Graeber, A. Jentzen, and B. Kuckuck, Deep neural network approximation theory for high-dimensional functions, available at arXiv:2112.14523
last modified: 10.03.2023 14:14:49
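
As a minimal illustration of the claim that piecewise linear functions are easily realized by ReLU networks (this is a generic sketch and not the basis of Daubechies et al. or of Schneider and Vybiral), the hat function on [0,1] can be written in Python as a one-hidden-layer ReLU network with three neurons:

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # Hat (tent) function on [0,1] with peak 1 at x = 1/2,
    # written as a linear combination of three ReLU neurons.
    return 2*relu(x) - 4*relu(x - 0.5) + 2*relu(x - 1.0)

x = np.linspace(-0.5, 1.5, 9)
print(hat(x))   # zero outside [0,1], a piecewise linear tent inside

Translates and dilates of such hat functions span all continuous piecewise linear functions on a fixed grid, which is the basic mechanism behind many ReLU approximation results.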

Irregularities of distributions: dispersion and discrepancy

supervisor: prof. RNDr. Jan Vybíral, Ph.D.
type of thesis: doctoral thesis
specialization: MI_MM, MI_AMSM, MINF
keywords: discrepancy, dispersion, randomized algorithms, numerical integration
description: In many applications (such as sampling, numerical integration, function recovery, and many others) it is necessary to construct a set of points which is uniformly distributed across a given domain. There are many ways to measure the quality of the uniform distribution of a point set; we will study two of them, dispersion and discrepancy. The dispersion of a point set is the volume of the largest axis-parallel box which avoids the given point set. The more classical notion of discrepancy measures the deviation of the number of points in any rectangle from its normalized volume. We shall review the existing lower and upper bounds on dispersion and discrepancy, using methods from combinatorics, harmonic analysis, functional analysis, and probability theory. The student will try to attack some of the old problems in the field as well as some of its modern modifications, such as the spherical discrepancy and dispersion, the line dispersion, or the star discrepancy. We will be interested both in upper bounds, which usually require the construction of a good point set, and in lower bounds, which often rely on more abstract tools. (A small computational sketch of the star discrepancy follows below this entry.)
references: [1] J. Matoušek, Geometric Discrepancy, Springer-Verlag, Berlin, 2010; [2] M. Ullrich and J. Vybiral, An upper bound on the minimal dispersion, J. Complexity 45 (2018), 120–126; [3] B. Bukh and T. Chao, Empty axis-parallel boxes, Int. Math. Res. Notices 18 (2022), 13811–13828; [4] D. Bilyk and M. T. Lacey, On the small ball inequality in three dimensions, Duke Math. J. 143 (2008), no. 1, 81–115.
last modified: 02.04.2025 21:59:23
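
To make the notion of star discrepancy concrete, the following Python sketch (purely illustrative and not part of the topic assignment) evaluates the local discrepancy of a planar point set over boxes anchored at the origin, restricting the upper corner to the grid spanned by the point coordinates; this yields a lower bound on the star discrepancy.

import numpy as np
from itertools import product

def star_discrepancy_lower_bound(points):
    # Maximizes |#(P in [0, x)) / n - vol([0, x))| over corners x taken from
    # the grid spanned by the point coordinates (plus 1); restricting the
    # corners to this grid bounds the supremum over all boxes from below.
    n, d = points.shape
    grids = [np.unique(np.append(points[:, j], 1.0)) for j in range(d)]
    best = 0.0
    for corner in product(*grids):
        corner = np.array(corner)
        fraction = np.all(points < corner, axis=1).sum() / n
        best = max(best, abs(fraction - np.prod(corner)))
    return best

rng = np.random.default_rng(1)
P = rng.random((32, 2))   # 32 uniformly random points in the unit square
print(star_discrepancy_lower_bound(P))

Independent random points typically have star discrepancy of order n^(-1/2), while carefully constructed point sets achieve much smaller values; quantifying this gap is one of the themes of the references above.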

Approximation theory of artificial neural networks

supervisor: prof. RNDr. Jan Vybíral, Ph.D.
type of thesis: doctoral thesis
specialization: MI_MM, MI_AMSM, MINF
keywords: approximation theory, artificial neural networks, decay rates, function spaces, Barron spaces, curse of dimensionality
description: In recent years, we have witnessed a great expansion of the use of artificial neural networks (ANNs) in nearly every field of human activity. On the other hand, their use in critical applications such as transport, the financial sector, or security requires that we deepen our knowledge of their properties and understand their performance and its limits. This has motivated many authors to study numerous aspects of ANNs, including their connection to approximation theory. Starting with Yarotsky, they were able to characterize the decay rates of the worst-case approximation of functions from certain function spaces in dependence on the width and depth of the ANN used. Moreover, the role of an adaptive selection of the architecture of the ANN, the limits of continuous dependence of the weights of the ANN on the approximated function f, as well as lower bounds on the possible performance of ANNs (using, for example, the notion of the Vapnik–Chervonenkis dimension) are now well understood. The introduction of Barron spaces in [2] made it possible to explain the success of ANNs when dealing with high-dimensional problems. Recently, a univariate system of ANNs forming a Riesz basis of L_2([0,1]) was introduced in [3] and later generalized to higher dimensions in [4]. Surprisingly, the properties of this system are independent of the dimension d, which makes it an appealing tool for the analysis of multivariate ANNs. The student will try to generalize the analysis from square-integrable functions to functions from Sobolev spaces with limited smoothness and to Barron and other function spaces. (A small sketch of a Yarotsky-style ReLU construction follows below this entry.)
references: [1] D. Yarotsky, Error bounds for approximations with deep ReLU networks, Neural Networks 94 (2017), 103–114; [2] A. R. Barron, Universal approximation bounds for superpositions of a sigmoidal function, IEEE Trans. Inform. Theory 39 (1993), no. 3, 930–945; [3] I. Daubechies, R. DeVore, S. Foucart, B. Hanin, and G. Petrova, Nonlinear approximation and (deep) ReLU networks, Constr. Approx. 55 (2022), no. 1, 127–172; [4] C. Schneider and J. Vybiral, A multivariate Riesz basis of ReLU neural networks, Appl. Comput. Harmon. Anal. 68 (2024), 101605.
last modified: 02.04.2025 22:08:35
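
The decay rates mentioned above can be made tangible with the classical construction, going back to Yarotsky, that approximates x^2 on [0,1] by composing the tent map with itself. The following Python sketch (an illustration only, not part of the thesis topic; all names are chosen here for demonstration) realizes the tent map as a three-neuron ReLU layer and checks numerically that the uniform error decays like 4^(-(m+1)) with the number m of composed layers.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # Tent map on [0,1] (peak 1 at x = 1/2) as a three-neuron ReLU layer.
    return 2*relu(x) - 4*relu(x - 0.5) + 2*relu(x - 1.0)

def approx_square(x, m):
    # Yarotsky's construction: x**2 is approximated on [0,1] by
    # x - sum_{s=1}^m g_s(x) / 4**s, where g_s is the s-fold composition
    # of the tent map; the result is the piecewise linear interpolant of
    # x**2 on the uniform grid with 2**m intervals.
    g, out = x.copy(), x.copy()
    for s in range(1, m + 1):
        g = tent(g)
        out -= g / 4**s
    return out

x = np.linspace(0.0, 1.0, 1001)
for m in (1, 3, 5):
    print(m, np.max(np.abs(approx_square(x, m) - x**2)))   # error is about 4**(-m-1)

The network realizing this approximation has depth proportional to m and achieves uniform error 4^(-(m+1)), which is the prototype of the approximation rates studied in [1].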

V3S database

The V3S application records research results and other activities of research staff in the scientific community. It is used for submitting results to RIV, for exports for statistical analyses, and for internal evaluations of research activities.

List of publications in V3S

Journal articles

2017

A. Kolleck and J. Vybiral, Non-asymptotic Analysis of l_1-norm Support Vector Machines, IEEE Transactions on Information Theory (2017)
BibTeX
@ARTICLE{kolleck2017n,
  title = {Non-asymptotic Analysis of l_1-norm Support Vector Machines},
  author = {A. Kolleck and J. Vybiral},
  journal = {IEEE Transactions on Information Theory},
  publisher = {IEEE},
  year = {2017}
}
T. Conrad, M. Genzel, N. Cvetkovic, N. Wulkow, A. Leichtle, J. Vybiral, G. Kutyniok, and Ch. Schütte, Sparse Proteomics Analysis – a compressed sensing-based approach for feature selection and classification of high-dimensional proteomics mass spectrometry data, BMC Bioinformatics 18 (2017), 160
BibTeX
@ARTICLE{conrad2017sp,
  title = {Sparse Proteomics Analysis--a compressed sensing-based approach for feature selection and classification of high-dimensional proteomics mass spectrometry data},
  author = {T. Conrad and M. Genzel and N. Cvetkovic and N. Wulkow and A. Leichtle and J. Vybiral and G. Kutyniok and Ch. Sch{\"u}tte},
  journal = {BMC bioinformatics},
  publisher = {BioMed Central},
  year = {2017},
  volume = {18},
  number = {1},
  pages = {160}
}
A. Hinrichs, J. Prochno, and J. Vybiral, Entropy numbers of embeddings of Schatten classes, Journal of Functional Analysis 273 (2017), 3241–3261
BibTeX
@ARTICLE{hinrichs2017,
  title = {Entropy numbers of embeddings of Schatten classes},
  author = {A. Hinrichs and J. Prochno and J. Vybiral},
  journal = {Journal of Functional Analysis},
  publisher = {Elsevier},
  year = {2017},
  volume = {273},
  number = {10},
  pages = {3241--3261}
}
L.M. Ghiringhelli, J. Vybiral, E. Ahmetcik, R. Ouyang, S.V. Levchenko, C. Draxl, and M. Scheffler, Learning physical descriptors for materials science by compressed sensing, New Journal of Physics 19 (2017), 023017
BibTeX
@ARTICLE{ghiringhelli,
  title = {Learning physical descriptors for materials science by compressed sensing},
  author = {L.M. Ghiringhelli and J. Vybiral and E. Ahmetcik and R. Ouyang and S.V. Levchenko and C. Draxl and M. Scheffler},
  journal = {New Journal of Physics},
  publisher = {IOP Publishing},
  year = {2017},
  volume = {19},
  number = {2},
  pages = {023017}
}
H. Kempka and J. Vybíral, Volumes of unit balls of mixed sequence spaces, Mathematische Nachrichten 290 (2017), 1317–1327
BibTeX
@ARTICLE{kempka2017vo,
  title = {Volumes of unit balls of mixed sequence spaces},
  author = {H. Kempka and J. Vyb{\'\i}ral},
  journal = {Mathematische Nachrichten},
  publisher = {Wiley Online Library},
  year = {2017},
  volume = {290},
  number = {8-9},
  pages = {1317--1327}
}
