[Seminar] Sapience without Sentience: An Inferentialist Approach to LLMs
Department of Philosophy, National Taiwan University
Seminar Announcement
Speaker: Prof. Ryan Simonelli
Postdoctoral Researcher, School of Philosophy, Wuhan University
Title: Sapience without Sentience: An Inferentialist Approach to LLMs
Date: Monday, March 17, 2025
Time: 15:30 – 17:30
Venue: Room 302, 3rd Floor, Department of Philosophy Building, Shuiyuan Campus, National Taiwan University (No. 18, Siyuan Street, Taipei)
All are welcome to join the discussion. Thank you!
Seminar
Speaker: Prof. Ryan Simonelli
International Postdoctoral Research Fellow, School of Philosophy, Wuhan University
Title: Sapience without Sentience: An Inferentialist Approach to LLMs
Date: Monday, March 17, 2025, 15:30 – 17:30
Venue: Conference Room 302, Department of Philosophy, Shuiyuan Campus, National Taiwan University (No. 18, Siyuan Street, Taipei)
Abstract:
How should we approach the question of whether large language models (LLMs) such as ChatGPT possess concepts, such that they can be counted as genuinely understanding what they’re saying? In this talk, I approach this question through an inferentialist account of concept possession, according to which to possess a concept is to master the inferential role of a linguistic expression. I suggest that training on linguistic data is in principle sufficient for mastery of inferential role, so that LLMs trained on nothing but linguistic data could in principle possess all concepts and thus genuinely understand what they’re saying, no matter what it is they’re speaking about. This doesn’t mean, however, that they are conscious. Following Robert Brandom, I draw a distinction between sapience (conceptual understanding) and sentience (conscious awareness) and argue that, while all familiar cases of sapience inextricably involve sentience, we might think of (at least future) LLMs as genuinely possessing the former without even a shred of the latter.