Chain-of-Table: Evolves Tables in the LLM Reasoning Chain for Table Understanding

Abstract

Table-based reasoning with large language models (LLMs) is a promising direction for solving many table understanding tasks, such as table-based question answering and table-based fact verification. Chain-of-Thought and similar approaches have demonstrated impressive performance by incorporating the reasoning chain in the form of textual context. However, how to properly involve tabular data in the reasoning chain remains an open question for table-based tasks. Inspired by nested SQL queries, where temporary tables store intermediate results, we propose Chain-of-Table, in which we update the table iteratively to represent a complex reasoning chain. In our approach, the tabular context evolves through a sequence of operations generated by an LLM, where each operation is generated conditioned on the results of the previous ones. The continuously evolving table thus forms a chain that shows the reasoning process for the given problem. The resulting table carries rich information about the intermediate results, so the LLM can skip complex reasoning over latent clues, directly aggregate relevant information from the resulting table, and reach the final prediction more easily. Extensive experiments with two large language models show that Chain-of-Table surpasses competitive baselines and achieves state-of-the-art performance on three well-known benchmark datasets (WikiTableQuestions, FeTaQA, and TabFact).
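The evolving-table loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes pandas tables, a hypothetical `call_llm(prompt) -> str` wrapper around any LLM, and a simplified operation pool with a toy text protocol for the model's replies. The operation names echo a subset of the paper's atomic operations (which include selecting rows and columns, adding columns, grouping, and sorting).

```python
import pandas as pd

# Simplified operation pool (illustrative; the paper's pool is richer).
def select_row(table: pd.DataFrame, arg: str) -> pd.DataFrame:
    """Keep only the comma-separated row indices named in `arg`."""
    return table.iloc[[int(i) for i in arg.split(",")]].reset_index(drop=True)

def select_column(table: pd.DataFrame, arg: str) -> pd.DataFrame:
    """Keep only the comma-separated column names in `arg`."""
    return table[[c.strip() for c in arg.split(",")]]

def sort_by(table: pd.DataFrame, arg: str) -> pd.DataFrame:
    """Sort rows by the column named in `arg`."""
    return table.sort_values(arg.strip()).reset_index(drop=True)

OPERATIONS = {"select_row": select_row,
              "select_column": select_column,
              "sort_by": sort_by}

def chain_of_table(table: pd.DataFrame, question: str,
                   call_llm, max_steps: int = 5) -> str:
    """Evolve the table step by step: each operation the LLM picks is
    conditioned on the table produced by the previous operations."""
    history = []
    for _ in range(max_steps):
        prompt = (f"Table:\n{table.to_string(index=False)}\n\n"
                  f"Question: {question}\n"
                  f"Operations so far: {history or 'none'}\n"
                  f"Reply with `<operation> <arguments>` or `END`:")
        step = call_llm(prompt).strip()
        if step == "END":
            break
        name, _, arg = step.partition(" ")
        table = OPERATIONS[name](table, arg)  # the table evolves here
        history.append(step)
    # The evolved table carries the intermediate results, so the final
    # answer can be read off it directly.
    return call_llm(f"Table:\n{table.to_string(index=False)}\n\n"
                    f"Question: {question}\nAnswer:")
```

Note the design point this sketch makes concrete: the prompt at every step contains the *current* table, so the model plans its next operation from intermediate results rather than re-deriving them from the original table in text.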

Authors

Zilong Wang*, Chen-Yu Lee, Chun-Liang Li, Hao Zhang, Julian Eisenschlos, Lesly Miculicich, Tomas Pfister, Vincent Perot, Yasuhisa Fujii, Zifeng Wang, Jingbo Shang*

Venue

ICLR 2024