Numerical reasoning in machine reading comprehension tasks: are we there yet?

Abstract

Numerical-reasoning-based machine reading comprehension is a task that combines reading comprehension with numerical operations such as addition, subtraction, sorting, and counting. The DROP benchmark (Dua et al., 2019) is a recent dataset that has inspired the design of NLP models aimed at solving this task. The current standings of these models on the DROP leaderboard, measured by the standard metrics, suggest that the models have achieved near-human performance. However, does this mean that these models have learned to reason? In this paper, we present a controlled study of some of the top-performing model architectures for the task of numerical reasoning. Our observations strongly suggest that the standard metrics are incapable of measuring progress on such tasks.
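To make the task concrete, the following minimal Python sketch (illustrative only, not taken from the paper or from DROP itself) shows a DROP-style instance where answering requires extracting numbers from the passage and applying an arithmetic operation, together with the exact-match check that leaderboard metrics of this kind typically reduce to. The passage, question, and naive extraction pipeline are invented for illustration.

```python
# Illustrative sketch (hypothetical example, not from the paper): a DROP-style
# instance where the answer requires subtracting two numbers mentioned in the
# passage, plus the string-level exact-match check used by standard metrics.

import re

passage = ("The Broncos took an early lead when Matt Prater kicked a 33-yard "
           "field goal; the Panthers answered with a 45-yard field goal.")
question = "How many yards longer was the Panthers' field goal than the Broncos'?"

# Naive pipeline: pull out the numbers, then apply the operation the
# question asks for (subtraction here).
numbers = [int(n) for n in re.findall(r"\d+", passage)]  # [33, 45]
predicted = max(numbers) - min(numbers)                  # 45 - 33 = 12

gold = "12"
exact_match = str(predicted) == gold  # EM reduces to string equality
print(predicted, exact_match)        # 12 True
```

Note that a model can produce the gold string without actually performing the underlying computation, which is the paper's central concern: string-level metrics such as exact match cannot, on their own, certify that reasoning took place.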

Publication
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)
