Seminar abstracts: WiSe 2025
Realizing asymptotic couples in Hardy fields
Clemens Kinn, 06.11.2025
The set of germs of differentiable real-valued functions forms a ring with the induced addition and multiplication of functions. Hardy fields are subfields of this ring which are closed under taking derivatives and come with a natural ordering. Moreover, every element of a Hardy field has a limit in the extended real line. To compare different sizes of asymptotic growth of germs we equip Hardy fields with the standard valuation. One can associate to each Hardy field its asymptotic couple, which is a pair consisting of the value group of the Hardy field together with a map induced by the logarithmic derivative. These objects can also be defined purely algebraically, which raises the question of which asymptotic couples come from Hardy fields. Rosenlicht gave a partial explicit answer to the relative question: when does an extension of the asymptotic couple of a Hardy field also come from a corresponding Hardy field extension? He showed that this is possible under the assumption that the value groups are of finite rank. We give a self-contained treatment of this result and construct Hardy fields for asymptotic couples of Hardy type with small derivation which are of countable rank.
Adaptive Neural Networks for Time-Independent Linear PDEs
Moritz Maibaum, 06.11.2025
We present a greedy approach for solving time-independent linear PDEs by growing neural networks.
Fourier Analysis on the Group of Signatures
Davide Nobile, 13.11.2025
In this talk, we introduce a framework for Fourier analysis on the group of truncated signatures. After a brief introduction to signatures and their applications, we present some key concepts from representation theory, and show how they can be used to construct the Fourier transform of functions defined on the group of signatures. Further, we show that this transform has desirable properties, including an inversion formula and an analogue of the Plancherel theorem.
Dispersion of a point set
Matěj Trödler, 13.11.2025
Dispersion of a point set measures the volume of the largest axis-aligned empty box within the unit cube that avoids the given points. Closely related to discrepancy, dispersion quantifies the uniformity of point distributions and has applications in various fields such as numerical integration. We present an improved lower bound on dispersion for sufficiently large volumes, which is asymptotically optimal up to a logarithmic factor. A generalization of this method, combined with the superlinear property of dispersion, yields a series of further lower bounds valid in arbitrary dimensions and for arbitrary volumes, establishing new state-of-the-art results.
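The definition above can be made concrete with a small brute-force sketch (not from the talk, purely illustrative): in dimension two, a maximal empty open box extends until each side hits either a point or the boundary of the unit square, so it suffices to test boxes whose side coordinates come from the point coordinates together with 0 and 1.

```python
from itertools import combinations

def dispersion(points):
    """Brute-force dispersion of a finite point set in the unit square:
    the largest area of an open axis-aligned box (a, b) x (c, d) inside
    [0, 1]^2 that contains none of the given points."""
    # Candidate side coordinates: point coordinates plus the boundary.
    xs = sorted({0.0, 1.0} | {p[0] for p in points})
    ys = sorted({0.0, 1.0} | {p[1] for p in points})
    best = 0.0
    for a, b in combinations(xs, 2):
        for c, d in combinations(ys, 2):
            # The box counts only if no point lies strictly inside it.
            if all(not (a < x < b and c < y < d) for x, y in points):
                best = max(best, (b - a) * (d - c))
    return best
```

For the empty set the dispersion is 1 (the whole unit square is empty), and a single point at the center gives dispersion 1/2, realized e.g. by the box (0, 1/2) x (0, 1).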
Reconstruction of frequency-localized functions from pointwise samples
Andres Felipe Lerma Pineda, 20.11.2025
In this talk, we explore two complementary approaches for recovering frequency-localized functions from pointwise data. First, we introduce the Slepian basis and establish a recovery theorem for least-squares approximation using uniformly random samples in low dimensions. Second, we present a recovery guarantee for approximating frequency-localized functions using deep learning. This result, formulated as a practical existence theorem, identifies conditions on the network architecture, training procedure, and data acquisition that are sufficient to ensure accurate approximation. To conclude, we provide numerical examples to illustrate and compare the performance of least-squares methods and deep-learning-based approaches.
