Explainable AI

With ever more powerful algorithms and increasing algorithmic decision-making, it is vital that we understand why recommendations, predictions, classifications or generated content turn out the way they do. Explainability is the answer to this challenge, and helps ensure that decisions made by or with algorithms and AI are beneficial ones.

The power of algorithms and the importance of explainability

Explainability as a means to ensure desirable outcomes

As algorithms become more powerful and consequently more ubiquitous in machine-supported or fully automated decision making, explainability and interpretability are becoming ever more important: the ability for those using, or on the receiving end of, algorithmic output to understand why a certain output was derived, under what conditions and assumptions, and what its constraints and caveats are.

Not only is it a regulatory requirement, but it also ensures desirable outcomes and reduces risks to individuals and society at large.

Consequently, explainability and interpretability must be at the forefront for everyone designing, building and deploying algorithmic decision-making systems.

Decoding the algorithm – conference talk

Why explainability and transparency matter when building AI-driven systems.

AI and algorithmic decision making

Algorithmic decision making is the process of making decisions based on or assisted by outputs from algorithms, with or without human involvement.

‘AI’ in this context is best seen as just a class of very powerful algorithms, which can add value through classification, prediction, recommendation or solution generation.

The need for explainability

As algorithms – AI or not – become more powerful, they also become more impactful. Given the far-reaching scope and effect of these algorithms, we need to ensure that those using them and those on the receiving end of their outputs (fully automated or with human involvement) have sufficient insight into how inputs led to outputs, why outputs were derived, under which assumptions, with which caveats, and with what level of confidence.

This is where the careful design of explainability as part of algorithmic decision-making systems comes in.

Explainability helps ensure that decisions made by or following algorithmic outputs are ‘good’ ones.

Benefits of explainability

Explainability is valuable for a number of reasons: the transparency it provides acts as ‘quality assurance’ for those using such systems, builds trust, ensures due process, and supports regulatory compliance.

While AI-specific regulations such as the EU AI Act are being drawn up at great pace, more general regulations like the GDPR already enshrine rights to explanation and intervention in most cases of ‘serious’ algorithmic decision making.

Why it matters

The impact of algorithmic decision making is easy to overlook: even the simplest case of content recommendation can become nefarious (e.g. election interference via social media targeting).

Loan and insurance applications can become a source of extreme social discrimination and economic hardship if not transparent and controlled, as can automated decisions as part of job application procedures or during educational examinations. Here the impact can be life-changing.

Furthermore, consider algorithms for emergency call triage, to determine prison sentence length or probation requirements, or algorithms used in medical imaging. Here impacts are certainly life-changing, possibly life-ending.

And this is without going near algorithms with even higher societal impact, such as biometric profiling and social scoring, some of which are outright illegal to employ in the EU.

This is why we need to understand why our algorithms do what they do and how they do it, and also provide means of human intervention.

What does ‘good’ explainability look like?

A good explanation allows users and recipients of algorithmic decision-making to understand an outcome and to optimise for a positive one.

Good explanations reflect information and presentation needs in terms of use case, domain, expectations and capabilities.

They are

  • user centric
  • contextual
  • meaningful & understandable

Obviously, context matters: consider vision systems at a border checkpoint vs. an OCR system scanning a customer feedback form. Very different ‘beasts’ with very different ‘needs’ for explainability and oversight.

We achieve this by designing for explainability at every step of the SDLC, with a focus on the needs of each stakeholder group and their use cases, as sketched below.
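
To make this concrete, here is a minimal sketch of what a user-facing explanation for a simple loan-approval score could look like. The feature names and weights are hypothetical and hand-picked purely for illustration, not a learned, validated model. For a linear model each feature’s contribution is simply weight × value; more complex models would need dedicated techniques such as SHAP or LIME.

import math

# Hypothetical weights, chosen for illustration only; a real system would
# learn these from data and validate them for accuracy and fairness.
WEIGHTS = {
    "income_to_debt_ratio": 1.8,
    "years_employed": 0.4,
    "missed_payments": -1.2,
}
BIAS = -1.0

def score(applicant: dict) -> float:
    """Logistic score in [0, 1], read here as confidence of approval."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant: dict) -> list:
    """Per-feature contributions (weight * value), largest effect first."""
    contributions = [(f, WEIGHTS[f] * applicant[f]) for f in WEIGHTS]
    return sorted(contributions, key=lambda c: abs(c[1]), reverse=True)

applicant = {"income_to_debt_ratio": 0.9, "years_employed": 3, "missed_payments": 2}
print(f"Approval confidence: {score(applicant):.0%}")
for feature, contribution in explain(applicant):
    direction = "raised" if contribution > 0 else "lowered"
    print(f"- {feature} {direction} the score by {abs(contribution):.2f}")

The same contributions could then be rendered differently for each audience: an underwriter might want the raw numbers and model confidence, while an applicant is better served by a plain-language summary of the top factors and what they could change – exactly the user-centric, contextual framing described above.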

Explaining decisions made with AI — a statutory requirement | Blogpost

This article is a TL;DR summary of “Explaining decisions made with AI” by the ICO and The Alan Turing Institute.

Product Management Dark Patterns

A presentation Neha Datt and I gave at Agile on the Beach 2024 on ethical product management and why we need to follow product best practices mindfully, and avoid them turning into dark patterns with nasty, unintended consequences…

How civilisations end

5 minutes, 20 slides, on existential and global risk.

Ethical Product Management

A presentation I gave at the 2023 Product World on ethical product management and what ‘ethics’ mean for product teams on a day to day basis. In this talk I provide a ‘north star’ of what ‘ethical’ could mean, and a framework on how to make difficult ethical choices when designing, developing and operating products and services.

Bringing product management to DevOps

A presentation I gave at DevOps Days on how DevOps can add value to organisations beyond the narrow definition of infrastructure: I demonstrate why we need to expand the view of the DevOps pipeline to the entire value chain and how this can turn our ‘pipeline’ into a source of competitive advantage (and why, consequently, we need to manage it as a product).
As part of this presentation I present a case study from the medical device industry where collaboration between DevOps, compliance and other disciplines gave one of my clients a major competitive edge. 

The Chinese Cyber Security Regime

A series of podcast episodes during which Chinese Cyber Security expert Dr. Michael D. Frick and I discuss the nature and implications of the Chinese Cyber Security Regime.
Contact us

Get in touch

We’d love to hear from you if you want to chat about an opportunity or challenge we might be able to help you with, if you want to work with or for us, or if you fancy a chat to exchange thoughts.