Philipp Petersen (University of Vienna)
Thursday, January 20, 2022 - 17:00
Virtual event (Videobroadcast) - link for registration
Max-Planck-Institut für Mathematik in den Naturwissenschaften, 04103 Leipzig
Deep learning is nowadays used in a wide range of highly complex applications, from natural language processing to scientific computing. Its first major breakthrough, however, came from surpassing the state of the art in image classification. We revisit the problem of classification by deep neural networks and ask why deep networks are so remarkably effective in this regime. We interpret the learning of classifiers as the problem of finding piecewise constant functions from labelled samples. We then precisely link the hardness of the learning problem to the complexity of the decision regions. Concretely, we establish fundamental lower bounds on the learnability of certain regions. Finally, we show that in many cases these optimal bounds can be achieved by deep-neural-network-based learning. In quite realistic settings, we observe that deep neural networks can learn high-dimensional classifiers without a strong dependence of the learning rates on the dimension.
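To make the abstract's setting concrete, the following is a minimal sketch (not from the talk itself) of classification as learning a piecewise constant function from labelled samples: a tiny one-hidden-layer network, trained by plain gradient descent on binary cross-entropy, learns the indicator function of a disk. All hyperparameters (region radius, hidden width, learning rate, iteration count) are illustrative choices, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labelled samples: the target classifier is the piecewise constant
# indicator of a disk, a simple example of a decision region.
def make_data(n):
    x = rng.uniform(-1.0, 1.0, size=(n, 2))
    y = (np.linalg.norm(x, axis=1) < 0.7).astype(float)
    return x, y

X, y = make_data(500)

# Tiny one-hidden-layer network (2 -> 16 -> 1); sizes are arbitrary.
h = 16
W1 = rng.normal(0.0, 1.0, (2, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 1.0, (h, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(3000):
    # Forward pass.
    a1 = np.tanh(X @ W1 + b1)           # hidden activations
    p = sigmoid(a1 @ W2 + b2).ravel()   # predicted probabilities
    # Backward pass: gradient of mean binary cross-entropy.
    dz2 = (p - y)[:, None] / len(y)
    dW2 = a1.T @ dz2; db2 = dz2.sum(0)
    da1 = dz2 @ W2.T
    dz1 = da1 * (1.0 - a1 ** 2)         # tanh derivative
    dW1 = X.T @ dz1; db1 = dz1.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Accuracy on fresh samples drawn from the same distribution.
Xt, yt = make_data(1000)
pred = sigmoid(np.tanh(Xt @ W1 + b1) @ W2 + b2).ravel() > 0.5
acc = (pred == yt).mean()
```

The smooth network output approximates the discontinuous indicator function; the talk's results concern how the geometric complexity of such regions governs how hard this approximation-from-samples problem can be, especially in high dimension.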
submitted by Saskia Gutzschebauch (Saskia.Gutzschebauch@mis.mpg.de, 0341 9959 50)