January 22, 2014 - 17:39 — Anonymous

Date:

Mar 26, 2014 - Mar 28, 2014

Venue:

Eurandom, Eindhoven, the Netherlands

Short description of the event:

Stochastic optimization embodies a collection of methodologies and theory aimed at devising optimal solutions to countless real-world inference problems, particularly when these involve uncertain or missing data. At the heart of stochastic optimization is the idea that many deterministic optimization problems can be addressed in a more powerful and convenient way by introducing intrinsic randomness into the optimization algorithms. Furthermore, this generalization gives rise to a set of techniques that are well suited for settings involving uncertain, incomplete, or missing data.

Online (machine) learning is concerned with learning and prediction in a sequential, online fashion. In many settings the goal of online learning is the optimal prediction of a sequence of instances, possibly given a sequence of side information. For example, the instances might correspond to the daily value of a financial asset or the daily meteorological conditions, and one wants to predict tomorrow’s value of the asset or weather conditions. Interestingly, it is possible to devise very powerful online learning algorithms able to cope with adversarial settings, in which a powerful adversary generates a sequence of instances that attempts to “break” the algorithm’s strategy. However, one can show that these algorithms must necessarily incorporate randomization in their predictions, and they can often be cast as stochastic optimization algorithms. One of the goals of this workshop is to make such connections between online learning and stochastic optimization more transparent.
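As a concrete illustration of the randomized predictions mentioned above, here is a minimal sketch of the exponentially weighted forecaster (Hedge), a classic online learning algorithm with adversarial regret guarantees. The function name and parameters (`hedge`, `eta`, `seed`) are illustrative choices, not taken from the workshop materials.

```python
import math
import random

def hedge(losses, eta=0.5, seed=0):
    """Sketch of the Hedge (multiplicative weights) forecaster.

    losses[t][i] is the loss of expert i at round t, revealed only
    after the learner commits to a (random) choice of expert.
    Returns the learner's cumulative loss and the final distribution
    over experts.
    """
    rng = random.Random(seed)
    n = len(losses[0])
    weights = [1.0] * n          # start with uniform weights
    total_loss = 0.0
    for round_losses in losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        # Randomized prediction: sample an expert from the current
        # weight distribution. The randomization is essential against
        # an adversary who knows the algorithm's strategy.
        choice = rng.choices(range(n), weights=probs)[0]
        total_loss += round_losses[choice]
        # Exponential update: downweight experts that incurred loss.
        weights = [w * math.exp(-eta * l)
                   for w, l in zip(weights, round_losses)]
    return total_loss, probs
```

Even when the loss sequence is chosen adversarially, the expected cumulative loss of this forecaster exceeds that of the best single expert in hindsight by only a sublinear regret term.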

Particularly important in the present workshop is the quantification of the performance of a given stochastic optimization or online learning procedure. Answering this challenging question adequately requires several ingredients; in particular, one needs to develop a proper optimality framework. Here parallels with modern statistical theory emerge: notions such as consistency, convergence rates, and minimax bounds, common in statistical theory, all have counterparts in stochastic optimization and online learning. There is therefore plenty of room for cross-fertilization between these fields, which is the main motivation for this workshop.

The aim of the workshop “Stochastic Optimization and Online Learning” is to introduce these broad fields to young researchers, in particular Ph.D. students, postdoctoral fellows, and junior researchers, who are eager to tackle new challenges in stochastic optimization and online learning.