Statistical discrimination in learning agents

Abstract

Undesired bias afflicts both human and algorithmic decision making. A primary example is statistical discrimination: selecting individuals not on the basis of their underlying attributes, but on readily perceptible characteristics that covary with their suitability for the task at hand. Statistical discrimination poses a substantial challenge for cooperation and partner choice, since the costs of partner evaluation may incentivize heuristics that trade the benefits of accuracy against the burden of evaluation. We present a theoretical model of how evaluation costs influence statistical discrimination. We then test the model's predictions using multi-agent reinforcement learning in a partner-choice-based social dilemma. As predicted, statistical discrimination emerges in agent policies as a function of both covariate bias in the training population and agent architecture.
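
To make the cost-accuracy trade-off concrete, the sketch below works through a minimal toy version of the idea; it is an illustrative assumption, not the paper's actual model. Partners have a hidden quality whose base rate (`p_high`) differs by visible group label, inspecting a partner's true quality costs `cost`, and all names and numbers (`payoff_inspect`, `payoff_heuristic`, `benefit`, `loss`) are hypothetical:

```python
# Toy model (illustrative assumptions, not the paper's): compare paying to
# evaluate a partner's hidden quality against a free heuristic that relies
# only on the visible group's base rate of high quality.

def payoff_inspect(p_high, benefit, cost):
    """Pay `cost` to observe true quality, then accept only high-quality partners."""
    return p_high * benefit - cost  # rejected partners yield 0

def payoff_heuristic(p_high, benefit, loss):
    """Skip evaluation; accept or reject the whole group on its base rate alone."""
    return max(p_high * benefit - (1 - p_high) * loss, 0.0)

benefit, loss = 1.0, 1.0
for group, p_high in [("high-base-rate group", 0.8), ("low-base-rate group", 0.3)]:
    for cost in (0.0, 0.1, 0.3, 0.5):
        inspect = payoff_inspect(p_high, benefit, cost)
        heuristic = payoff_heuristic(p_high, benefit, loss)
        best = "inspect" if inspect > heuristic else "use group label"
        print(f"{group}: cost={cost:.1f} -> {best} "
              f"(inspect={inspect:.2f}, heuristic={heuristic:.2f})")
```

In this toy setting, inspecting beats the heuristic only while its cost stays below the expected loss it avoids (here, (1 - p_high) * loss when the heuristic accepts, or p_high * benefit when it rejects); above that threshold the payoff-maximizing policy ignores individual attributes and conditions solely on the group label, which is the statistical discrimination described above.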
