Opting for the technical solution

In general, AI developers should prefer more understandable algorithms over less understandable ones (see the 'Transparency' section in the General Exposition on AI). Trade-offs between explainability/transparency and the performance of the system must be appropriately balanced based on the context of use. For instance, in healthcare the accuracy and performance of the system may be more important than its explainability, whereas in policing explainability is much more crucial in order to justify the behaviors and outcomes of law enforcement. In other areas, such as recruitment, accuracy and explainability are valued similarly.[1] If a service can be offered both by an easy-to-understand algorithm and by an opaque one, that is, when there is no trade-off between explainability and performance, the controller should opt for the more interpretable one (see the 'Lawfulness, fairness and transparency' section in the 'Principles' chapter).
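To make this concrete, the following minimal sketch (in Python with scikit-learn) shows one way a "no trade-off" rule could be operationalized. The model classes, synthetic data, and tolerance threshold are illustrative assumptions of ours, not something prescribed by the guidelines: when the opaque model's advantage falls within the tolerance, the more interpretable model is chosen.

# Hypothetical sketch: prefer the interpretable model when performance
# is comparable. Data, models, and TOLERANCE are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

interpretable = LogisticRegression(max_iter=1000)    # coefficients can be inspected
opaque = GradientBoostingClassifier(random_state=0)  # harder to explain

score_interpretable = cross_val_score(interpretable, X, y, cv=5).mean()
score_opaque = cross_val_score(opaque, X, y, cv=5).mean()

TOLERANCE = 0.02  # illustrative: what counts as "no real trade-off"
if score_opaque - score_interpretable <= TOLERANCE:
    chosen = interpretable  # no meaningful performance gap: prefer interpretability
else:
    chosen = opaque         # otherwise, document and justify the trade-off
print(type(chosen).__name__, score_interpretable, score_opaque)

In practice the tolerance would have to be set, and justified, in light of the context of use discussed above.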

Box 13: Interpreting interpretability

Even though interpretability appears to be the recommended choice, it must be acknowledged that it is not a clear-cut concept. The academic literature shows different motivations for interpretability and, more importantly, offers myriad notions of what attributes render models interpretable. It remains unclear what "interpretation" actually encompasses. At first sight it seems reasonable to suppose that simple, linear algorithms are easier to understand. However, "for some kinds of post-hoc interpretation, deep neural networks exhibit a clear advantage. They learn rich representations that can be visualized, verbalized, or used for clustering. Considering the desiderata for interpretability, linear models appear to have a better track record for studying the natural world but we do not know of a theoretical reason why this must be so. Conceivably, post-hoc interpretations could prove useful in similar scenarios." Therefore, it is hard to arrive at specific recommendations on which type of model should be preferred on the basis of its "interpretability".[2]
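To illustrate the distinction the quotation draws, the sketch below (again in Python with scikit-learn, on synthetic data; none of these specific choices come from the source) contrasts a linear model, whose coefficients can be read directly, with a post-hoc explanation of an opaque model obtained via permutation importance.

# Hypothetical sketch contrasting intrinsic and post-hoc interpretability.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Intrinsically interpretable: each coefficient maps to one input feature.
linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("linear coefficients:", linear.coef_[0])

# Opaque model explained post hoc: permutation importance estimates how
# much held-out accuracy drops when each feature is shuffled.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test,
                                n_repeats=10, random_state=0)
print("post-hoc importances:", result.importances_mean)

Neither output is "the" interpretation of the model; as the quotation notes, whether such post-hoc summaries are adequate depends on the purpose the explanation is meant to serve.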

References


[1] SHERPA project (2019) Guidelines for the ethical development of AI and big data systems: an ethics by design approach. SHERPA, p. 26. Available at: www.project-sherpa.eu/wp-content/uploads/2019/12/development-final.pdf (accessed 15 May 2020).

[2] Lipton, Z.C. (2017) 'The mythos of model interpretability', 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY. Available at: https://arxiv.org/pdf/1606.03490.pdf (accessed 15 May 2020).
