How can we explain the decisions taken by artificial intelligence?

Decisions that rely on artificial intelligence (AI) have a particular feature: we do not know the chain of reasoning that led to the recommended solution. This is compounded by the use of statistical learning methods: "deep learning" rests on correlations between millions, or even billions, of parameters, which cannot be translated into explicit causal links.

And yet, in order to grant our trust, we require explanations. Isabelle Bloch, a professor at Sorbonne University, stresses the essential role of human beings in this regard: the task is to identify, case by case, how best to compensate for the algorithm's opacity. The challenge is above all to choose the type of explanation to provide, depending on the needs and on the people we are addressing. Is the issue primarily one of trust, of ethics, of responsibility? How well do our interlocutors understand AI? For example, we might choose to explain which data were used, the operating principles of the AI in question, the precautions to take when using its results, and so on. Thus, the more AI develops, the more we will need to develop our ability to communicate about and discuss its results. A new skill set to be explored.


Source: Il faut justifier les décisions prises par un algorithme [Decisions taken by an algorithm need to be justified], interview with Isabelle Bloch by Sophy Caulier, Polytechnique Insights, December 2021.
