What's a Good Prediction? Challenges in evaluating an agent's knowledge

Abstract

Constructing general knowledge by learning task-independent models of the world can help agents solve challenging problems. However, both constructing and evaluating such models remain open challenges. One of the most common approaches to evaluating models is to assess their accuracy with respect to observable values. However, relying on an estimator's accuracy as a proxy for the usefulness of its knowledge can lead us astray. We demonstrate this conflict between accuracy and usefulness through a series of illustrative examples, including both a thought experiment and an empirical example in Minecraft, using the General Value Function (GVF) framework. Having identified challenges in assessing an agent's knowledge, we propose an evaluation approach that arises naturally from the online continual learning setting: examining internal learning processes, specifically the relevance of a GVF's features to the prediction task at hand. This paper contributes a first look at evaluating predictions through their use, an integral but as yet unexplored component of predictive knowledge.
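For readers unfamiliar with the GVF framework, the following is a minimal sketch of a GVF learned online with linear TD(lambda). It is illustrative only and not the paper's implementation: the class name, the fixed linear feature representation, and the constants alpha, gamma, and lam are all assumptions chosen for clarity.

import numpy as np

class LinearGVF:
    """A General Value Function: predicts the discounted sum of a
    cumulant signal, learned with linear TD(lambda). Illustrative sketch."""

    def __init__(self, n_features, alpha=0.1, gamma=0.9, lam=0.9):
        self.w = np.zeros(n_features)   # learned prediction weights
        self.z = np.zeros(n_features)   # eligibility trace
        self.alpha = alpha              # step size
        self.gamma = gamma              # continuation (discount) factor
        self.lam = lam                  # trace-decay parameter

    def predict(self, x):
        # Predicted discounted return of the cumulant from feature vector x.
        return self.w @ x

    def update(self, x, cumulant, x_next):
        # One online TD(lambda) step on the transition (x, cumulant, x_next).
        delta = cumulant + self.gamma * (self.w @ x_next) - (self.w @ x)
        self.z = self.gamma * self.lam * self.z + x
        self.w += self.alpha * delta * self.z
        return delta  # the TD error, one window into the learning process

Under the paper's proposed lens, evaluation would look beyond the accuracy of predict() to internal signals of learning, for example which components of w (and thus which features) carry weight for the prediction being learned.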

Publications