Meta-in-context learning in large language models

Abstract

Large language models have shown tremendous performance across a variety of tasks. In-context learning is seen as one of the main contributors to their success. Previous work has demonstrated that in-context learning can even solve non-trivial tasks such as supervised and reinforcement learning. We expand on this line of work by investigating the effect of in-context learning when the context is presented in a sequential manner. We find that the sequential presentation of related tasks leads to better in-context learning performance, thereby revealing that in-context learning can be used to create better in-context learning algorithms. We coin this phenomenon "meta-in-context learning". Multiple experiments reveal that meta-in-context learning adaptively modifies a large language model's priors over latent variables and adjusts its learning strategies. Finally, we extend our approach to a benchmark of real-world regression problems, where we observe performance competitive with traditional learning algorithms. Taken together, our work improves our understanding of in-context learning and paves the way toward adapting large language models to the environments in which they are applied purely through meta-in-context learning rather than traditional fine-tuning.
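The core idea, presenting related tasks sequentially within a single context, can be sketched with a small prompt-construction helper. This is a minimal illustration only: the task names, the `x = …, y = …` text format, and the function names are hypothetical, not the paper's actual prompt format.

```python
# Hypothetical sketch: build a "meta-in-context" prompt by concatenating
# several related tasks one after another in a single context, so the model
# can adapt its in-context learning across tasks.

def format_task(name, examples, query):
    """Render one task's observations as text; the format is illustrative."""
    lines = [f"Task {name}:"]
    lines += [f"x = {x}, y = {y}" for x, y in examples]
    lines.append(f"x = {query}, y = ?")
    return "\n".join(lines)

def build_meta_prompt(tasks):
    """Concatenate related tasks sequentially into one prompt."""
    return "\n\n".join(format_task(name, ex, q) for name, ex, q in tasks)

# Two related linear-regression tasks presented sequentially:
tasks = [
    ("A", [(1, 3), (2, 5)], 3),  # generated from y = 2x + 1
    ("B", [(1, 4), (2, 7)], 3),  # generated from y = 3x + 1
]
prompt = build_meta_prompt(tasks)
print(prompt)
```

Because both tasks share structure (noiseless linear functions with the same intercept), a model completing the second task's query can, in principle, exploit what the first task revealed, which is the effect the abstract describes.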

Authors

Julian Coda-Forno, Marcel Binz, Zeynep Akata, Matthew Botvinick, Jane X. Wang, Eric Schulz

Venue

NeurIPS 2023