Abstract
Most scholars think that using deepfakes to cause epistemic harm (i.e., to mislead, conceal truth, etc.) may cause us to stop trusting digital testimonies as a whole. This, in turn, will have serious consequences for our interpersonal relations, since most of our communication is conducted in the digital world. I argue that deepfakes created for non-malevolent purposes (e.g., for fun) are epistemically far more dangerous: they undermine digital communication inadvertently and to a much greater extent than malevolent deepfakes. Specifically, they allow malevolent deepfakes to camouflage themselves as non-malevolent (i.e., to blend in with their environment), thus creating a context of epistemic uncertainty. However, we may have reason to believe that rational self-interested agents will agree to substantially limit or ban the use of deepfakes. Rather than educating people on the nature of deepfakes or misinformation, we should educate them on the potential harm they could cause to others and to themselves by creating and disseminating deepfakes.
| Original language | English |
|---|---|
| Title of host publication | Artificial Intelligence and the Future of Human Relations |
| Subtitle of host publication | Eastern and Western Perspectives |
| Publisher | Springer Science+Business Media |
| Pages | 133-148 |
| Number of pages | 16 |
| ISBN (Electronic) | 9789819671854 |
| ISBN (Print) | 9789819671847 |
| DOIs | |
| Publication status | Published - Jan 1 2025 |
Keywords
- Deepfake technology
- Freedom of expression
- Misinformation
- Testimonies
- Trust
ASJC Scopus subject areas
- General Arts and Humanities
- General Engineering
- General Computer Science