2025/03/10

[Seminar Announcement] Sapience without Sentience: An Inferentialist Approach to LLMs

Seminar

Speaker: Prof. Ryan Simonelli

International Postdoctoral Research Fellow, School of Philosophy, Wuhan University

Title: Sapience without Sentience: An Inferentialist Approach to LLMs

Date: Monday, March 17, 2025, 15:30–17:30

Venue: Conference Room 302, Department of Philosophy, ShuiYuan Campus, National Taiwan University (18, SiYuan Street, Taipei)


Abstract: 

How should we approach the question of whether large language models (LLMs) such as ChatGPT possess concepts, such that they can be counted as genuinely understanding what they’re saying? In this talk, I approach this question through an inferentialist account of concept possession, according to which to possess a concept is to master the inferential role of a linguistic expression. I suggest that training on linguistic data is in principle sufficient for mastery of inferential role, and thus that LLMs trained on nothing but linguistic data could in principle possess all concepts and genuinely understand what they’re saying, no matter what it is they’re speaking about. This doesn’t mean, however, that they are conscious. Following Robert Brandom, I draw a distinction between sapience (conceptual understanding) and sentience (conscious awareness) and argue that, while all familiar cases of sapience inextricably involve sentience, we might think of (at least future) LLMs as genuinely possessing the former without even a shred of the latter.