AI Ethics in government: understanding its past, present and reimagining its future

This post was published on the Centre for Public Impact Medium Page.

CPI’s Thea Snow has joined forces with Lorenn Ruster, a Master in Applied Cybernetics student from the 3A Institute at the Australian National University, to explore the nature of AI ethics instruments (principles, frameworks, guidelines etc.) in government and how they might be reimagined. This piece is the first of several posts we will share along the way, outlining the context for this research and our work-in-progress questions and findings, and extending an invitation to collaborate.

Over the past 5 years, we have seen a range of Artificial Intelligence (AI) ethics instruments — frameworks, principles, guidelines — developed by governments and organisations to steward how AI should be built and used. Although these instruments are generally not enforceable, they can play an important role in shaping the types of conversations governments have and the types of regulatory and policy decisions they may make about the future of AI-enabled products and services.

Underlying any AI ethics instrument is a value set, and it is this value set that we believe needs further exploration. Our hypothesis is that the value set underpinning government AI Ethics instruments today is focused on prevention, risk minimisation and reducing downside. Undoubtedly, these things are important.

But is that all? Is risk minimisation the ultimate value we are striving to achieve through AI Ethics instruments? Should it be?

We believe that the role of government is not just about prevention or protection, but also about ‘enablement’: creating the conditions for communities to thrive. Both of these things are important. Our current hypothesis is that AI Ethics instruments focus on the first half of this equation, and that taking a dignity-centred approach to AI Ethics could help move them towards enablement.

To explore this, we have devised a research project in two phases:

Phase 1: Understanding existing AI Ethics instruments

The Ethics Centre talks about ethics as “the process of questioning, discovering and defending our values, principles and purpose”. It defines values as “things we strive for, desire and seek to protect”, principles as “how we may or may not achieve our values” and purpose as what gives life to values and principles. When it comes to AI Ethics in government, generally (and at least initially) a statement of principles emerges (see, for example, the Australian Government AI Ethics Principles and the Canadian Government AI Guiding Principles). AI Ethics principles offer clues as to what governments have decided is important.

Unpacking what’s in AI Ethics instruments, and how they came to be, will form the first phase of our research. We will do this through discourse analysis of current AI Ethics documents from the Australian, UK and Canadian governments to understand what they currently encapsulate. We will then use semi-structured interviews to better understand how these instruments came into being and how they are currently used.

We go into this first phase with an open mind, fuelled by a hunch that we could begin to reimagine AI Ethics instruments by putting dignity at the centre and, in doing so, enable the conditions for thriving communities. We’ll see whether this hypothesis still resonates at the end of Phase 1.

Phase 2: Reimagining AI Ethics in government

Several waves of AI Ethics have emerged over the past 5 years:

  • First wave: focused on principles and led by philosophers
  • Second wave: focused on technical fixes and led by computer scientists
  • Third wave: focused on practical fixes and fuelled by notions of justice

We are interested in pushing the thinking around what comes next. What might a fourth wave look like, and could the notion of dignity have a role to play?

Figure 1: Waves of Ethical AI (adapted from Kind). Graphical template from Slidesgo, including icons by Flaticon and infographics & images by Freepik.

There are many ways to describe dignity. After looking at a range of dignity frameworks across philosophy, law, nursing clinical care, theology and crisis negotiation, at this stage Donna Hicks’ framework of the essential elements of dignity and dignity violations resonates most. See Figure 2 for an outline of the framework. We would seek to apply this framework to AI Ethics in government and understand whether it gets us closer to enablement. As with the rest of this research, this is a work in progress and we are open to your suggestions and feedback.

Figure 2: Donna Hicks’ Dignity Framework

Overall, we intend to:

  • understand the values underpinning AI ethics instruments currently used in the federal governments (or equivalent) of Australia, Canada and the UK. We hypothesise they are based on risk minimisation and we will explore this hypothesis through discourse analysis and semi-structured interviews.
  • reimagine what government AI ethics instruments could look like. We hypothesise that a dignity lens could allow for ‘enablement’ in addition to the anticipated risk focus.

An overview of our research can be found in Figure 3.

Figure 3: A WIP overview of our research. Graphical template from Slidesgo, including icons by Flaticon and infographics & images by Freepik.

Our ask

This is the very beginning of our research and we firmly believe in the power of collective intelligence to help us ask better questions and interrogate our own biases with greater rigour. With this in mind, we invite:

  • feedback on what you’ve read above — does this resonate? What do you find interesting? What is unclear? What’s missing so far?
  • ideas — does this spark a new idea for you that we could potentially incorporate or explore together? Or trigger an idea you have already explored that may be relevant?
  • potential contacts — are you someone we should be speaking with on this? Is there someone in your network that we should be contacting?

In January and February 2021 we’ll be consolidating our thoughts and speaking with selected stakeholders, with a view to publishing an initial foundational piece in the first quarter of 2021. We will also continue to post our progress as we go.

To reach out, post in the comments below and/or contact us directly.

Thea Snow leads CPI’s work in Australia and New Zealand. Thea’s experience spans the private, public and not-for-profit sectors; she has worked as a lawyer, a civil servant and, most recently, as part of Nesta’s Government Innovation Team. Thea recently returned to Melbourne after spending a few years in London where, in addition to working at Nesta, she completed an MSc PPA at the London School of Economics and Political Science.

Lorenn Ruster is a social justice-driven strategy consultant, intrapreneur and Master in Applied Cybernetics student at the Australian National University’s 3A Institute. Previously, Lorenn was a Director at PwC’s Indigenous Consulting and a Director of Marketing & Innovation at a Ugandan solar energy company whilst an Acumen Global Fellow. She is interested in the intersection of technology, cross-sector collaboration, impact, systems change and human compassion.
