Abstract
What Deep Neural Networks (DNNs) can do is impressive, yet they are notoriously opaque. Responding to the worries associated with this opacity, the field of explainable AI (XAI) has produced a plethora of methods purporting to explain the workings of DNNs. Unsurprisingly, a whole host of questions revolves around the notion of explanation central to this field. This note provides a roadmap of recent work that tackles these questions from the perspective of philosophical ideas on explanations and models in science.
| Original language | English |
| --- | --- |
| Pages (from-to) | 101-106 |
| Number of pages | 6 |
| Journal | CEUR Workshop Proceedings |
| Volume | 3319 |
| Publication status | Published - 2022 |
| Externally published | Yes |
| Event | 1st Workshop on Bias, Ethical AI, Explainability and the Role of Logic and Logic Programming, BEWARE 2022 - Udine, Italy; Duration: Dec 2, 2022 → … |
Keywords
- Black Box Problem
- Deep Neural Networks
- Explainable Artificial Intelligence
- explanation
- scientific models
- understanding
ASJC Scopus subject areas
- General Computer Science