
BIBLIOGRAPHY

Publications, working papers, and other research using data resources from IPUMS.

Full Citation

Title: Do LLMs Work on Charts? Designing Few-Shot Prompts for Chart Question Answering and Summarization

Citation Type: Miscellaneous

Publication Year: 2023

Abstract: A variety of tasks have been proposed recently to facilitate exploration and analysis of charts, such as chart question answering (QA) and summarization. The dominant paradigm for solving these tasks has been to fine-tune a pretrained model on the task data. However, this approach is not only expensive but also not generalizable to unseen tasks. On the other hand, large language models (LLMs) have shown impressive generalization capabilities to unseen tasks with zero- or few-shot prompting. However, their application to chart-related tasks is not trivial, as these tasks typically involve considering not only the underlying data but also the visual features of the chart image. We propose PROMPTCHART, a multimodal few-shot prompting framework with LLMs for chart-related applications. By analyzing the tasks carefully, we have come up with a set of prompting guidelines for each task to elicit the best few-shot performance from LLMs. We further propose a strategy to inject visual information into the prompts. Our experiments on three different chart-related information consumption tasks show that with properly designed prompts, LLMs can excel on the benchmarks, achieving state-of-the-art results.

Url: https://arxiv.org/pdf/2312.10610.pdf

User Submitted?: No

Authors: Do, Xuan Long; Hassanpour, Mohammad; Masry, Ahmed; Kavehzadeh, Parsa; Hoque, Enamul; Joty, Shafiq

Publisher: arXiv

Data Collections: IPUMS CPS

Topics: Methodology and Data Collection

Countries:
