ICERM, Providence, Rhode Island
The fundamental problem of approximation theory is to resolve a possibly complicated function, called the target function, by simpler, easier-to-compute functions called approximants. Increasing the resolution of the target function can generally only be achieved by increasing the complexity of the approximants. Understanding this trade-off between resolution and complexity is the main goal of approximation theory, a classical subject that goes back to the early results on the Taylor and Fourier expansions of a function.
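The trade-off described above can be illustrated with the classical example of a truncated Taylor series: the degree of the partial sum measures the complexity of the approximant, and the error shrinks as the degree grows. A minimal sketch (the choice of target function e^x and of evaluation point is ours, purely for illustration):

```python
import math

def taylor_exp(x, degree):
    """Partial sum of the Taylor series of e^x about 0, up to the given degree."""
    return sum(x**k / math.factorial(k) for k in range(degree + 1))

# Approximation error at x = 1 for approximants of increasing complexity.
x = 1.0
errors = [abs(math.exp(x) - taylor_exp(x, n)) for n in (2, 4, 8)]

# Higher-complexity approximants resolve the target more accurately.
assert errors[0] > errors[1] > errors[2]
```

Here each extra term buys a better resolution of the target, at the cost of a more complex approximant.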
Modern problems in approximation, driven by applications in biology, medicine, and engineering, are being formulated in very high dimensions, which brings new phenomena to the fore. One aspect of the high-dimensional regime is a focus on sparse signals, motivated by the fact that many real-world signals can be well approximated by sparse ones. The goal of compressed sensing is to reconstruct such signals from incomplete linear information about them. Another aspect of this regime is the "curse of dimensionality" for standard smoothness classes: the complexity of approximation grows exponentially with dimension. An important step in solving multivariate problems in high dimension has been made in the last 20 years: sparse representations are used as a way to model the corresponding function classes. This approach automatically entails a need for nonlinear approximation, and greedy approximation in particular.
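One standard greedy method in this setting is orthogonal matching pursuit, which recovers a sparse signal from a small number of linear measurements by repeatedly selecting the dictionary element most correlated with the current residual. A toy sketch (the measurement matrix, sparsity level, and signal below are hypothetical choices of ours, not tied to any particular result of the program):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a 2-sparse signal in R^32 observed through
# 12 random linear measurements (incomplete linear information).
m, n, k = 12, 32, 2
A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
x_true = np.zeros(n)
x_true[[3, 17]] = [1.5, -2.0]                  # sparse target signal
y = A @ x_true                                 # observed measurements

# Orthogonal matching pursuit: greedily pick the column of A most
# correlated with the residual, then refit by least squares on the
# support selected so far.
support = []
residual = y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ residual)))
    support.append(j)
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

x_hat = np.zeros(n)
x_hat[support] = coeffs
```

Each greedy step orthogonally projects the measurements onto the span of the selected columns, so the residual norm can only decrease; under suitable conditions on the measurement matrix, a few steps suffice to recover the sparse signal exactly.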
This program addresses a broad spectrum of approximation problems, from the approximation of functions in norm, to numerical integration, to computing minima, with a focus on sharp error estimates. It will explore the rich connections to the theory of distributions of point sets, both in Euclidean settings and on manifolds, and to the computational complexity of continuous problems. It will address the design of algorithms and of numerical experiments. The program will attract researchers in approximation theory, compressed sensing, optimization theory, discrepancy theory, and information-based complexity theory.