2025/05/20

[Academic Seminar] Deepfakes, Griefbots, and the Ethics of Consent (5/26)

Department of Philosophy, National Taiwan University

Academic Seminar Announcement

Speaker: Prof. Alexandre Erler (艾力)

Associate Professor, Institute of Philosophy of Mind and Cognition, National Yang Ming Chiao Tung University

Title: Deepfakes, Griefbots, and the Ethics of Consent

Date: Monday, May 26, 2025

Time: 15:30 – 17:30

Venue: Room 302, 3rd Floor, Department of Philosophy Building, Shuiyuan Campus, National Taiwan University (No. 18, Siyuan Street, Taipei)

All are welcome to join the discussion. Thank you!

Seminar

Speaker: Prof. Alexandre Erler

Associate Professor, Institute of Philosophy of Mind and Cognition, National Yang Ming Chiao Tung University

Title: Deepfakes, Griefbots, and the Ethics of Consent

Date: 15:30 – 17:30, Monday, May 26, 2025

Venue: Conference Room 302, Department of Philosophy, Shuiyuan Campus, National Taiwan University (No. 18, Siyuan Street, Taipei)

Abstract: 

Recent advances in AI have given rise to applications like deepfakes, which produce audiovisual representations of individuals saying or doing things they never did, and “griefbots,” digital replicas of deceased individuals typically designed to assist relatives with the grieving process. While distinct in purpose, these applications of AI share common ethical challenges: an important one relates to the consent of the individuals whose data are used to create them. Many people feel that it is problematic to digitally simulate someone without their consent (and especially against their expressed wishes). In support of that intuition, Adrienne de Ruiter has defended a “right to digital self-representation” (RDSR) that prohibits the use of a person’s digital likeness in ways they would find objectionable, while Daniel Story and Ryan Jenkins have put forward a related “Non-Veridical Representation Principle” (NVRP).

Despite the appeal of such consent-oriented principles, their scope may have limits. For instance, Story and Jenkins concede that nonconsensual deepfakes created for satirical purposes constitute exceptions to the NVRP. Yet if that is correct, it raises the question of whether other nonconsensual uses of such technology, such as griefbots developed to aid grieving relatives (a more “serious” use than satire, after all), might similarly be exempt from those constraints. If such exceptions are accepted, however, the requirement for a deceased person’s consent is at risk of being rendered irrelevant in most cases of griefbot creation.

This paper addresses that challenge by examining the normative foundations of the need to protect consent in such contexts. I will argue that, from a liberal perspective, it is a mistake to emphasise people’s subjective sense of offence or dissatisfaction at a given use of their digital likeness, as the RDSR does. Rather, we should ground the significance of consent in other, more legitimate interests, such as the interest in avoiding reputational harm. I will contend that different interests should be invoked in different cases, and that when it comes to griefbots (as well as some deepfakes) created against the person’s wishes, two such interests stand out: the interest in avoiding the misrepresentation of one’s views and attitudes, and that in reaching closure in one’s life narrative. My analysis will provide grounds for maintaining a strong emphasis on the consent of the subject in the creation of digital clones like griefbots, while still allowing exceptions for practices like satirical deepfakes.