The power of algorithms and the importance of explainability
Explainability as a means to ensure desirable outcomes
As algorithms become more powerful and consequently more ubiquitous in machine-supported or fully automated decision making, explainability and interpretability are becoming ever more important. By these we mean the ability of those using, or on the receiving end of, algorithmic output to understand why a certain output was derived, under what conditions and assumptions, and what its constraints and caveats are.
Explainability is not only a regulatory requirement; it also helps ensure desirable outcomes and reduces risks to individuals and society at large.
Consequently, explainability and interpretability must be at the forefront for everyone designing, building and employing algorithmic decision-making systems.
Decoding the algorithm – conference talk
AI and algorithmic decision making
Algorithmic decision making is the process of making decisions based on or assisted by outputs from algorithms, with or without human involvement.
‘AI’ in this context is best seen as just a class of very powerful algorithms, which can help add value through classification, prediction, recommendation or solution generation.
The need for explainability
As algorithms – AI or not – become more powerful, they also become more impactful. Given this far-reaching scope and effect, we need to ensure that those using algorithms and those on the receiving end of algorithmic outputs (fully automated or with human involvement) have sufficient insight into how inputs led to outputs, why outputs were derived, under which assumptions, with which caveats and with what level of confidence.
This is where the careful design of explainability as part of algorithmic decision-making systems comes in.
Explainability helps ensure that decisions made by, or following from, algorithmic outputs are ‘good’ ones.
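To make this concrete, below is a minimal, hypothetical sketch (in Python, using scikit-learn) of what surfacing ‘why’, ‘which inputs mattered’ and ‘with what confidence’ could look like for a simple loan-style classifier. The feature names, data and logic are illustrative assumptions, not a description of any real system.

```python
# Minimal sketch (illustrative assumptions throughout): a linear model
# whose output is reported together with its confidence and the
# per-feature contributions (coefficient * feature value) behind it.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]

# Hypothetical training data: six applicants, approved (1) or declined (0).
X = np.array([[55, 0.2, 8], [23, 0.6, 1], [48, 0.3, 5],
              [19, 0.7, 0], [62, 0.1, 12], [30, 0.5, 2]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(applicant):
    """Return the decision, the model's confidence in it, and the
    features that pushed the score up or down, strongest first."""
    proba = model.predict_proba([applicant])[0, 1]
    contributions = model.coef_[0] * np.asarray(applicant)
    return {
        "decision": "approve" if proba >= 0.5 else "decline",
        "confidence": round(float(max(proba, 1 - proba)), 2),
        "drivers": sorted(zip(feature_names, contributions.round(2)),
                          key=lambda fc: abs(fc[1]), reverse=True),
    }

# Prints the decision together with its confidence and ranked drivers.
print(explain([40.0, 0.4, 3.0]))
```

A real system would typically use richer attribution methods (such as SHAP values) and tailor the presentation per audience, but the principle stands: an output should never travel without its explanation.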
Benefits of explainability
Explainability is valuable for a number of reasons: the transparency it provides acts as ‘quality assurance’ for those using such systems, it builds trust, it supports due process, and it helps ensure regulatory compliance.
While AI-specific regulations such as the EU AI Act are being drawn up at great pace, more general regulations like the GDPR already enshrine rights to explainability and intervention in most cases of ‘serious’ algorithmic decision making.
Why it matters
The impact of algorithmic decision making is easy to overlook: even the simplest case of content recommendation can turn nefarious (e.g. election interference via social media targeting).
Loan and insurance applications can become a source of extreme social discrimination and economic hardship if not transparent and controlled, as can automated decisions in job application procedures or educational examinations. Here the impact can be life-changing.
Furthermore, consider algorithms for emergency call triage, for determining prison sentence length or probation requirements, or for medical imaging. Here the impact is certainly life-changing, possibly life-ending.
And this is without going near algorithms with even higher societal impact, such as biometric profiling and social scoring, some of which are outright illegal to employ in the EU.
This is why, on the one hand, we need to understand why our algorithms do what they do and how they do it, and, on the other, provide means of human intervention.
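One common way to provide such a means of intervention (sketched below, with an assumed threshold and queue) is to route low-confidence outputs to a human reviewer instead of acting on them automatically.

```python
# Minimal sketch of a human-in-the-loop gate: automated decisions are
# only enacted when the model is confident enough; everything else is
# escalated to a human reviewer. Threshold and queue are assumptions.
REVIEW_THRESHOLD = 0.9  # hypothetical; set per use case and impact level

human_review_queue = []

def decide(case_id: str, decision: str, confidence: float) -> str:
    """Enact the algorithmic decision or escalate it for human review."""
    if confidence >= REVIEW_THRESHOLD:
        return f"{case_id}: auto-{decision} (confidence {confidence:.0%})"
    human_review_queue.append((case_id, decision, confidence))
    return f"{case_id}: escalated to human review"

print(decide("A-17", "approve", 0.97))  # confident: acted on automatically
print(decide("A-18", "decline", 0.61))  # uncertain: a human decides
```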
What does ‘good’ explainability look like?
A good explanation allows users and recipients of algorithmic decision making to understand outcomes and to optimise for positive ones.
Good explanations reflect information and presentation needs in terms of use case, domain, expectations and capabilities.
They are
- user centric
- contextual
- meaningful & understandable
Obviously, context matters: consider vision systems at a border checkpoint vs. an OCR system scanning a customer feedback form. Very different ‘beasts’ with very different ‘needs’ for explainability and oversight.
We achieve this by designing for explainability at every step of the SDLC, focusing on the needs of each stakeholder group and their use cases.
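As an illustration of what user-centric, contextual explanations can mean in practice, the sketch below renders one and the same model output differently for different stakeholder groups. The audiences, fields and wording are hypothetical assumptions, not a prescription.

```python
# Minimal sketch of "user-centric, contextual" explanations: the same
# underlying model output is rendered differently per stakeholder group.
# Audiences, fields and wording are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Explanation:
    decision: str          # e.g. "decline"
    confidence: float      # model confidence in the decision
    top_driver: str        # most influential input feature
    caveat: str            # known limitation of the model

def render(exp: Explanation, audience: str) -> str:
    if audience == "applicant":
        # Meaningful and actionable, no internal jargon.
        return (f"Your application was {exp.decision}d, mainly due to "
                f"{exp.top_driver}. You may ask for a human review.")
    if audience == "case_worker":
        # Enough detail to support due process and intervention.
        return (f"Decision: {exp.decision} (confidence {exp.confidence:.0%}). "
                f"Primary driver: {exp.top_driver}. Caveat: {exp.caveat}.")
    if audience == "auditor":
        # Full context for oversight and compliance.
        return (f"{exp.decision} | conf={exp.confidence} | "
                f"driver={exp.top_driver} | caveat={exp.caveat}")
    raise ValueError(f"no explanation profile for audience {audience!r}")

exp = Explanation("decline", 0.72, "high debt-to-income ratio",
                  "trained on pre-2020 data")
for audience in ("applicant", "case_worker", "auditor"):
    print(render(exp, audience))
```

The design point is simple: the explanation is a first-class part of the output, and each audience gets the form it can act on. An applicant needs actionable meaning, a case worker needs grounds for intervention, and an auditor needs the full record.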
Explaining decisions made with AI — a statutory requirement | Blogpost
Want to know more?
Here are a number of additional resources that you might find interesting…
- Product Management Dark Patterns
- Ethical Product Management
- Bringing product management to DevOps
- The Chinese Cyber Security Regime