Explaining artificial intelligence in human-centred terms

Martin Schüßler 24th June 2020

This series is a partnership with the Weizenbaum Institute for the Networked Society and the Friedrich Ebert Stiftung

Since AI involves interactions between machines and humans—rather than just the former replacing the latter—‘explainable AI’ is a new challenge.


Intelligent systems, based on machine learning, are penetrating many aspects of our society. They span a large variety of applications—from the seemingly harmless automation of micro-tasks, such as the suggestion of synonymous phrases in text editors, to more contestable uses, such as in jail-or-release decisions, anticipating child-services interventions, predictive policing and many others.

Researchers have shown that for some tasks, such as lung-cancer screening, intelligent systems are capable of outperforming humans. In many other cases, however, they have not lived up to exaggerated expectations. Indeed, in some cases severe harm has resulted—well-known examples are the COMPAS system used in some US states to predict reoffending, held to be racially biased (although that study was itself methodologically criticised), and several fatalities involving Tesla’s autopilot.

Black boxes

Ensuring that intelligent systems adhere to human values is often hindered by the fact that many are perceived as black boxes—they elude human understanding, which can be a significant barrier to their adoption and safe deployment. Over recent years there has been increasing public pressure for intelligent systems ‘to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made’. It has even been debated whether explanations of automated systems might be legally required.

Explainable artificial intelligence (XAI) is an umbrella term for research methods and techniques that try to achieve this goal. An explanation can be seen as a process as well as a product: it describes the cognitive process of identifying the causes of an event. At the same time, it is often a social process between an explainer (the sender of an explanation) and an explainee (its receiver), with the goal of transferring knowledge.

Much work on XAI is centred on what it is technically possible to explain, and explanations usually cater for AI experts. This has been aptly characterised as ‘the inmates running the asylum’, because many stakeholders are left out of the loop. It is important that researchers and data scientists are able to investigate their models, so that they can verify that these generalise and behave as intended—a goal far from being achieved—but many other situations may require explanations of intelligent systems, and many other audiences may need to receive them.

Many intelligent systems will not replace human occupations entirely—the fear of full automation and eradication of jobs is as old as the idea of AI itself. Instead, they will automate specific tasks previously undertaken (semi-)manually. Consequently, the interaction of humans with intelligent systems will be much more commonplace. Human input and human understanding are prerequisites for the creation of intelligent systems and the unfolding of their full potential.

Human-centred questions

So we must take a step back and ask more values- and human-centred questions. What explanations do we need as a society? Who needs those explanations? In what context is interpretability a requirement? What are the legal grounds to demand an explanation?

We also need to consider the actors and stakeholders in XAI. A loan applicant requires a different explanation than a doctor in an intensive-care unit. A politician introducing a decision-support system for a public-policy problem should receive different explanations than a police officer planning a patrol with a predictive-policing tool. Yet what incentive does a model provider have to provide a convincing, trust-enhancing justification, rather than a merely accurate account?


As these open questions show, there are countless opportunities for non-technical disciplines to contribute to XAI. There is however little such collaboration, though much potential. For example, participatory design is well equipped to create intelligent systems in a way that takes the needs of various stakeholders into account, without requiring them to be data-literate. And the methods of social science are well suited to develop a deeper understanding of the context, actors and stakeholders involved in providing and perceiving explanations.

Evaluating explanations

A specific instance where disciplines need to collaborate, to arrive at practically applicable scientific findings, is the evaluation of explanation techniques themselves. Many techniques have not been evaluated at all, and most of the evaluations conducted so far have been functional or technical. That is problematic, because most scholars agree that ‘there is no formal definition of a correct or best explanation’.

At the same time, the conduct of human-grounded evaluations is challenging because no best practices yet exist. The few existing studies have often found surprising results, which emphasises their importance.

One study discovered that explanations led to a decrease in perceived system performance—perhaps because they disillusioned users, who came to understand that the system was not making its predictions in an ‘intelligent’ manner, even though those predictions were accurate. In the same vein, a study conducted by the author indicated that salience maps—a popular and heavily marketed technique for explaining image classification—provided participants with very limited help in anticipating the system’s classification decisions.
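Salience (or saliency) maps of this kind are typically derived from the gradient of a classifier’s output with respect to the input pixels. The sketch below illustrates that basic idea only; the pretrained ResNet-18 model and the image path "example.jpg" are assumptions chosen for illustration, not the materials used in the study mentioned above.

```python
# Minimal gradient-based saliency map (illustrative sketch, not the study's setup).
# Assumptions: PyTorch and torchvision are installed; "example.jpg" is a placeholder.
import torch
from torchvision import models, transforms
from PIL import Image

# A pretrained classifier, chosen purely for illustration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Forward pass, then back-propagate the predicted class score to the input pixels.
scores = model(image)
predicted_class = scores.argmax(dim=1).item()
scores[0, predicted_class].backward()

# The saliency map is the largest absolute gradient across the colour channels:
# pixels with high values had the strongest influence on the prediction.
saliency = image.grad.abs().max(dim=1).values.squeeze()
print(predicted_class, saliency.shape)  # e.g. 207 torch.Size([224, 224])
```

In practice such a map is rendered as a heat-map overlaid on the image, which is what participants in studies of this kind are usually shown.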

Many more studies will be necessary to assess the practical effectiveness of explanation techniques. Yet it is very challenging to conduct such studies, as they need to be informed by real-world uses and the needs of actual stakeholders. These human-centred dimensions remain underexplored. The need for such scientific insight is yet another reason why we should not leave XAI research to technical scholars alone.

Martin Schüßler

Martin Schüßler is a PhD candidate at TU Berlin, working at the interdisciplinary Weizenbaum Institute.
