Oliver Dürr, Jan Segessenmann, Jan Juhani Steinmann

Meaning, Form and the Limits of Natural Language Processing

Volume 10 / Issue 1, pp. 42–72
Published 17.10.2023

This article engages the anthropological assumptions underlying the apprehensions and promises associated with language in artificial intelligence (AI). First, we present the contours of two rivalling paradigms for assessing artificial language generation: a holistic-enactivist theory of language and an informational theory of language. We then introduce two language generation models, one presently in use and one more speculative: first, the transformer architecture used in current large language models (LLMs) such as the GPT series; and second, a model for 'autonomous machine intelligence' recently proposed by Yann LeCun, which involves not only language but also sensory-motor interaction with the world. We assess the language capacity of these models from the perspectives of the two rivalling paradigms. Taking a holistic-enactivist stance, we argue that there is currently no reason to assume a human-comparable language capacity in LLMs and, further, that LeCun's proposed model does not represent a significant step toward artificially generating human language, because it still lacks essential features underlying the linguistic capacity of humans. Finally, we suggest that proponents of these rivalling interpretations of LLMs should enter into a constructive dialogue, one that continuously draws on further empirical, conceptual, and theoretical research.

Oliver Dürr University of Zurich (Zurich, CH) / University of Fribourg (Fribourg, CH)

Jan Segessenmann University of Fribourg (Fribourg, CH)

Jan Juhani Steinmann Catholic University of Paris (Paris, FR) / University of Vienna (Vienna, AT)