Information trajectory of optimal learning.
Belavkin, Roman V. (2010) Information trajectory of optimal learning. In: Dynamics of information systems: theory and applications. Hirsch, Michael J. and Pardalos, Panos M. and Murphey, Robert, eds. Springer Optimization and Its Applications (40). Springer, pp. 29-44. ISBN 9781441956880
Full text is not in this repository.
The paper outlines some basic principles of a geometric and non-asymptotic theory of learning systems. The evolution of such a system is represented by points on a statistical manifold, and a topology related to information dynamics is introduced to define trajectories that are continuous in information. It is shown that optimizing learning with respect to a given utility function leads to an evolution described by a continuous trajectory. Path integrals along the trajectory define the optimal utility and information bounds. Closed-form expressions are derived for two important types of utility function. The presented approach generalizes the use of Orlicz spaces in information geometry, and it gives a new, geometric interpretation of classical information value theory and statistical mechanics. In addition, the theoretical predictions are evaluated experimentally by comparing the performance of agents learning in a non-stationary stochastic environment.
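The abstract's reference to information value theory corresponds to a standard result: maximizing expected utility subject to a Kullback-Leibler divergence constraint from a prior yields an exponential-family (Gibbs) distribution, and sweeping the inverse-temperature parameter traces a trajectory of (information, utility) pairs. The sketch below is a minimal numerical illustration of that general result, not code from the paper; all names and the example utilities are illustrative assumptions.

```python
import numpy as np

def optimal_posterior(q, u, beta):
    """Gibbs solution of KL-constrained expected-utility maximization:
    p*(x) proportional to q(x) * exp(beta * u(x))."""
    w = q * np.exp(beta * u)
    return w / w.sum()

def kl(p, q):
    """KL divergence D(p || q) for discrete distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Illustrative setup: uniform prior over four actions and made-up utilities.
q = np.full(4, 0.25)
u = np.array([1.0, 0.5, 0.0, -0.5])

# Sweep beta to trace an information trajectory: each point pairs the
# information cost D(p* || q) with the expected utility of p*.
betas = np.linspace(0.0, 5.0, 6)
trajectory = [
    (kl(optimal_posterior(q, u, b), q),
     float(optimal_posterior(q, u, b) @ u))
    for b in betas
]
```

As beta grows, both the information cost and the expected utility increase monotonically, which is the sense in which the optimal evolution forms a continuous trajectory on the manifold of distributions.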
Item Type: Book Section
Research Areas: School of Science and Technology > Computer Science
School of Science and Technology > Computer Science > Artificial Intelligence group
Deposited On: 24 Mar 2010 13:40
Last Modified: 08 Oct 2014 11:05