My journey into Explainable AI (XAI) began when Professor Sabina Leonelli asked me to prepare a lecture for her course AI in Diverse Societies. I had never had the chance to dive deep into the topic - though I was already familiar with the basics and with some XAI techniques used for neural networks.
Naively, I expected a straight, well-marked road. Instead, I soon found myself in a labyrinth - a place of countless options, each branching into others, some looping back to where I began, others ending abruptly. Navigating it meant not only choosing which turns to take, but also deciding which doors to leave unopened.
After a while, I realized that this endless exploration had brought me no closer to the heart of the topic than when I began. I started to wonder about XAI's very premises - whether they themselves might be responsible for the labyrinth in which I found myself, and what the red thread leading out of it might look like.
If you want to know more…
This reflection first became a lecture, then a blog post for The Ethical Data Initiative, and later a post - this time in Italian - on Magazine Intelligenza Artificiale.