June 11, 2020

Explainable artificial intelligence and its importance in healthcare





Imagine that you are a medical doctor and a COVID-19 patient comes to you with severe symptoms. You think it is best to send him to intensive care; however, beds are limited and your machine-learning-based decision support system suggests otherwise. This system normally works very well and gives accurate predictions, but now you feel unsure. Why did it decide this way? What explanation could you give to the patient? Should you trust the system or should you go with your own decision?

Explainable artificial intelligence (XAI)1 is here to help! It is an emerging field of artificial intelligence (AI) that aims to shed light on the reasons behind the decisions of AI models. As the importance of this aspect for the responsible development and application of AI has become widely recognised, research on the topic has skyrocketed in recent years, as Figure 1 below shows.

Figure 1: The number of publications per year whose title, abstract and/or keywords refer to the field of XAI, based on a query of the Scopus® database on December 10th, 2019.2

Different approaches

There are two main approaches to achieving model interpretability. The first is to use only prediction algorithms that are intuitive and simple enough for humans to understand, such as linear regression and decision trees. The second is to use post hoc methods to analyse, approximate, or otherwise make sense of a complex model and its predictions. Popular examples in this group are partial dependence plots3 and the LIME4 technique5; see Figure 2 for an illustration of how the latter works.

Figure 2: An illustration of how the LIME technique explains an individual prediction in a post hoc manner. By highlighting the main factors for and against the prediction, it enables the doctor to decide whether or not to trust the model in this instance. (Figure source: the LIME paper5.)
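
To make the two approaches more concrete, below is a minimal Python sketch using scikit-learn and the lime package. The synthetic data, the triage-style feature names, the class names and the choice of models are illustrative assumptions, not part of any real clinical system.

# A minimal sketch of the two interpretability approaches (assumes
# scikit-learn and the lime package are installed).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical triage-style data: the feature and class names are made up.
feature_names = ["age", "oxygen_saturation", "temperature", "heart_rate"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Approach 1: an intrinsically interpretable model. The fitted tree can be
# printed as human-readable if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=feature_names))

# Approach 2: a complex "black-box" model, explained post hoc with LIME,
# which fits a simple local surrogate around one patient's prediction.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                 class_names=["ward", "intensive_care"],
                                 mode="classification")
explanation = explainer.explain_instance(X_test[0], forest.predict_proba,
                                         num_features=4)
print(explanation.as_list())  # weighted factors for and against the prediction

The printed tree rules are readable on their own, while the LIME output lists, for a single patient, which features pushed the prediction towards or away from intensive care, much like the highlighted factors in Figure 2.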

Potential

Understanding the reasons behind the decisions of an AI system in a healthcare setting helps in four main areas:

  1. Satisfying people’s “right to explanation”. By and large, the EU’s General Data Protection Regulation (GDPR)6 has established that individuals have a right to an explanation for the outputs of algorithmic decision-making systems that affect their lives.7 In this regard, ensuring some level of transparency and explainability is a legal requirement.
  2. Identifying problems. Seeing only the final output of an algorithm is often not enough to tell if there is a problem with the model, what that problem is, and how the system could be improved.
  3. Discovering new knowledge. In some cases, algorithms pick up on unexpected but meaningful patterns in the data. Well-explained decisions can therefore lead to the discovery of new knowledge and insight.
  4. Building trust. Unless the end-users understand why the system made a certain decision, it is hard for them to rely on it in a safety-critical situation, even if the decision is correct.

Drawbacks

In general, complex models have higher accuracy but lower comprehensibility than simpler models. Therefore, when simple models are used to ensure interpretability, a trade-off between model performance and understandability has to be made. On the other hand, post hoc explanations of complex models are less trustworthy, as they usually cannot be fully faithful to the original model. Another difficulty is that explanations are subjective and hard to compare, so finding the best ones for different users can be tricky; insights from the social sciences can help guide us here8.
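
As a rough illustration of both drawbacks, the sketch below compares a shallow decision tree with a random forest on synthetic data, and then trains a global surrogate tree to mimic the forest. The dataset and model choices are assumptions for illustration only, and the exact numbers will vary.

# A minimal sketch of the accuracy/comprehensibility trade-off and of the
# limited fidelity of a post hoc global surrogate (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple, interpretable model versus a more accurate complex one.
simple = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
complex_model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("simple model accuracy: ", accuracy_score(y_test, simple.predict(X_test)))
print("complex model accuracy:", accuracy_score(y_test, complex_model.predict(X_test)))

# Post hoc global surrogate: a shallow tree trained to mimic the forest's
# predictions. Its fidelity (agreement with the forest) is rarely 100%,
# which is why such explanations should be read with care.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, complex_model.predict(X_train))
fidelity = accuracy_score(complex_model.predict(X_test), surrogate.predict(X_test))
print("surrogate fidelity to the complex model:", fidelity)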

Conclusion

Although the quest to make AI explainable has its challenges, it is a necessity for ensuring transparency, and it can also help to identify problems with the algorithm, discover new knowledge, and build trust with the user, all of which are of great importance in a healthcare setting.


Author information:

Emese Thamo (XAI Researcher @perfeXia) is a Cambridge maths graduate, pursuing a PhD at the University of Liverpool, where her research topic concerns the safety and interpretability of deep learning. She is passionate about innovative technologies and has work experience in the personalised healthcare and aerospace industries.

Contact her at: emese.thamo@perfexia.health