PAL: Program-aided Language Models

1Language Technologies Institute, School of CS, Carnegie Mellon University
2Inspired Cognition, *Equal contribution

PAL generates a Python program to solve a task from its natural language description.


We present Program-Aided Language models (PAL): a new method that uses the LLM to read natural language problems and generate programs as the intermediate reasoning steps, but offloads the solution step to a programmatic runtime such as a Python interpreter.

With PAL, decomposing the natural language problem into runnable steps remains the only learning task for the LLM, while solving is delegated to the interpreter. We demonstrate this synergy between a neural LLM and a symbolic interpreter across 12 reasoning tasks from BIG-Bench Hard and other benchmarks, spanning mathematical reasoning, symbolic reasoning, and algorithmic problems. In all these natural language reasoning tasks, generating code with an LLM and executing it with a Python interpreter yields more accurate results than much larger models, and we set new state-of-the-art results on all 12 benchmarks. For example, PAL using Codex achieves state-of-the-art few-shot accuracy on the GSM benchmark of math word problems when the model is allowed only a single decoding, surpassing PaLM-540B with chain-of-thought prompting by an absolute 8%. On three reasoning tasks from the BIG-Bench Hard benchmark, PAL outperforms chain-of-thought by 11%. On GSM-hard, a more challenging version of GSM that we create, PAL outperforms chain-of-thought by an absolute 40%.
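To make the division of labor concrete, here is an illustrative sketch of a PAL-style generation (not an excerpt from the paper's released prompts): the LLM emits a Python program whose comments carry the natural-language reasoning steps, and the Python interpreter, not the LLM, computes the final answer.

```python
# Illustrative PAL-style output for a GSM-style word problem.
# Q: Roger has 5 tennis balls. He buys 2 cans of tennis balls.
#    Each can has 3 tennis balls. How many tennis balls does he have now?

def solution():
    # Roger starts with 5 tennis balls.
    tennis_balls = 5
    # He buys 2 cans, each containing 3 balls.
    bought_balls = 2 * 3
    # The interpreter performs the arithmetic; the LLM only wrote the steps.
    answer = tennis_balls + bought_balls
    return answer

print(solution())  # → 11
```

Because the arithmetic is offloaded to the runtime, the model cannot make a calculation error at the final step; it can only err in how it decomposes the problem into code.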

Results Summary

PAL outperforms chain-of-thought on 12 benchmarks, including 3 from BIG-Bench Hard.
Symbolic Reasoning

Mathematical Reasoning

Algorithmic Reasoning

Sample Outputs

Sample outputs for all the datasets are shown below. Clicking on any example pops up the question, the textual reasoning generated with chain-of-thought prompting, and the Python-based reasoning generated with PAL.



Colored Objects

Date Understanding

Repeat Copy
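As a flavor of what these symbolic tasks look like under PAL, here is a hypothetical sketch for a Repeat Copy-style instruction (the question and program are illustrative, not taken from the paper's sample outputs): the instruction's counting and ordering constraints become an explicit loop that the interpreter executes.

```python
# Illustrative PAL-style program for a Repeat Copy-style instruction.
# Q: Say "duck" four times, but after the second time also say "quack".

def solution():
    words = []
    for i in range(1, 5):
        words.append("duck")
        # After the second repetition, insert the extra word.
        if i == 2:
            words.append("quack")
    return " ".join(words)

print(solution())  # → duck duck quack duck duck
```

Tasks like this are trivial for a loop but error-prone for free-form text generation, which is where delegating execution to the interpreter pays off.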


Citation

@article{gao2022pal,
    title={PAL: Program-aided Language Models},
    author={Gao, Luyu and Madaan, Aman and Zhou, Shuyan and Alon, Uri and Liu, Pengfei and Yang, Yiming and Callan, Jamie and Neubig, Graham},
    journal={arXiv preprint arXiv:2211.10435},
    year={2022}
}