Analysis of the Representational Capacity of Neural Ordinary Differential Equations Using Real Analytic Methods and Its Application to Real Data Analysis

Reference No. 2025a025
Type/Category Grant for Young Researchers and Students (Short-term Joint Research)
Title of Research Project Analysis of the Representational Capacity of Neural Ordinary Differential Equations Using Real Analytic Methods and Its Application to Real Data Analysis
Principal Investigator Naoya Hatano(Graduate School of Science and Engineering, Chuo University・Postdoc)
Research Period September 3, 2025 to September 5, 2025
Keyword(s) of Research Fields Neural network, Neural ODE, Universal approximation theorem, Function spaces
Abstract for Research Report In this study, we aim to rigorously formulate a universal approximation theorem for Neural Ordinary Differential Equations (NODEs) and their variants using function spaces. Furthermore, we will provide a proof of the theorem utilizing modern real analytic techniques and explore its various extensions.
The universal approximation theorem for conventional shallow neural networks (NNs) is well known. An NN is a mathematical model that mimics the structure of brain cells and the transmission of signals between them. The universal approximation theorem states that NNs can uniformly approximate continuous functions; it was first proven by Cybenko. Research has since expanded beyond continuous functions to the approximation of various other function classes, the most widely studied case being functions defined on compact subsets of Euclidean space.
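For reference, the classical statement for a shallow NN with a sigmoidal activation σ can be sketched as follows (Cybenko's formulation on a compact set K):

```latex
% For any f \in C(K), K \subset \mathbb{R}^n compact, and any \varepsilon > 0,
% there exist N \in \mathbb{N}, \alpha_j, b_j \in \mathbb{R}, w_j \in \mathbb{R}^n such that
\sup_{x \in K} \Bigl|\, f(x) - \sum_{j=1}^{N} \alpha_j \, \sigma(w_j \cdot x + b_j) \Bigr| < \varepsilon .
```

The finite sum is exactly the output of a one-hidden-layer network with N neurons, so density of such sums in C(K) is the precise meaning of "NNs can uniformly approximate continuous functions."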
In this research, we focus on the universal approximation theorem for NODEs, which have attracted significant attention in recent years. A NODE is a type of NN defined via the solution of an ordinary differential equation (ODE). NODEs can be viewed as a continuous generalization of deep NNs, in which the discrete intermediate layers are replaced by a continuous transformation. The universal approximation theorem for NODEs is known in the case where the base space is a compact domain and the function space is a Lebesgue space. We plan to extend this theorem by replacing the compact domain with a manifold or an unbounded domain, and by studying Lebesgue spaces over various base domains.
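The "continuous generalization" can be made concrete: a NODE maps an input x(0) to the solution x(T) of dx/dt = f(x, t), and a forward-Euler discretization of that flow recovers the familiar ResNet-style update x_{k+1} = x_k + Δt·f(x_k, t_k). A minimal sketch (the vector field f here is a hypothetical single-layer example, not a trained model):

```python
import numpy as np

def node_forward(x0, f, T=1.0, n_steps=100):
    """Forward-Euler integration of dx/dt = f(x, t) from t = 0 to t = T.

    Approximates the NODE output x(T) from the input x(0) = x0.
    Each Euler step x <- x + dt * f(x, t) is a ResNet-like residual update.
    """
    x = np.array(x0, dtype=float)
    dt = T / n_steps
    for k in range(n_steps):
        x = x + dt * f(x, k * dt)  # one Euler step
    return x

# Hypothetical vector field: one tanh layer with fixed weights.
W = np.array([[0.0, -1.0],
              [1.0,  0.0]])       # rotation-like dynamics
f = lambda x, t: np.tanh(W @ x)

out = node_forward([1.0, 0.0], f)
```

With n_steps = 1 the map reduces exactly to a single residual block x0 + f(x0, 0); increasing n_steps passes to the continuous-depth limit.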
The function spaces considered in this study primarily include Lebesgue spaces, which consist of integrable functions, and Sobolev spaces, which further impose differentiability constraints. The universal approximation theorem can be formulated in terms of density within these function spaces. By comparing the function spaces used in the approximation, we can evaluate the approximation accuracy of NODEs.
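The norms underlying these two scales of spaces, and the density formulation of approximation, can be written out as follows (Ω denotes the base domain, 1 ≤ p < ∞):

```latex
% Lebesgue norm: integrability only.
\|f\|_{L^p(\Omega)} = \Bigl( \int_{\Omega} |f(x)|^p \, dx \Bigr)^{1/p} ,
% First-order Sobolev norm: additionally controls the (weak) gradient.
\|f\|_{W^{1,p}(\Omega)} = \|f\|_{L^p(\Omega)} + \|\nabla f\|_{L^p(\Omega)} .
% Density formulation: if \mathcal{N} denotes the class of maps realizable
% by NODEs, universal approximation in L^p(\Omega) means
\overline{\mathcal{N}}^{\,\|\cdot\|_{L^p(\Omega)}} = L^p(\Omega) .
```

Approximation in the Sobolev norm is strictly stronger than in the Lebesgue norm, which is why comparing the two function spaces gives a quantitative handle on approximation accuracy.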

The expected outcomes of this research include a mathematical understanding of the approximation accuracy and computational efficiency of NODEs compared to conventional NNs. This deeper understanding of NODE architectures will contribute to solving large-scale data problems and advancing neural network theory.
Organizing Committee Members (Workshop)
Participants (Short-term Joint Usage)
Masahiro Ikeda(Graduate School of Information Science and Technology, Osaka University・Associate Professor)
Ryota Kawasumi(Center for Mathematics and Data Science, Gunma University・Assistant Professor)