How to build the AI-enabled society that different members of the public want – findings from the Public Voices in AI project

Background to panel 

Public Voices in AI (2024-2025, funded by RAi UK) aimed to ensure that public voices are attended to in AI development, use, policy and research. A consortium of expert researchers, from the ESRC Digital Good Network at the University of Sheffield, the Alan Turing Institute, the Ada Lovelace Institute, Elgon Social Research and UCL’s Centre for Responsible Innovation, reviewed what is already known about public views on AI and about engaging diverse publics in AI. We then addressed gaps in that knowledge, highlighting the distinct views of different people and groups, and producing resources to support the inclusion of public voice in AI. We worked in consultation with members of the public, including those most negatively affected by AI and those underrepresented, to guide our decisions. 


AI brings enormous opportunities. As the UK Government’s AI Opportunities Action Plan and Digital Inclusion Action Plan highlight, these include economic growth and national service efficiencies, as well as the easing of daily tasks at home and work. Yet AI also comes with significant risks to people’s livelihoods and welfare. We need to understand what people think about AI if we are to empower policy-makers to produce good AI policy, and AI developers to build the AI-enabled society that people want. Indeed, there is widespread acknowledgement that a failure of public confidence could be one of the biggest roadblocks to AI’s transformative potential. Despite this recognition, public voice is frequently missing from conversations about AI. Addressing this gap is essential to enable ‘good AI’ – AI which maximises benefits, prevents harm and works for people. 


‘Public voice’ is not easy to define, as there is no one ‘public’. Different people benefit from and are affected by AI differently, and their hopes, concerns and experiences also vary. Centuries of structural inequities and overlapping systems of oppression (e.g. racism, sexism, ableism, colonialism, transphobia, classism) mean that some of us have more resources and access to power to shape AI technologies than others. There is a related participation gap between those with the social capital to participate in shaping AI and those without. AI public voice and public participation initiatives therefore need to centre those most impacted and underrepresented. Public Voices in AI did that. 


Our endeavours to understand public attitudes to AI included a national survey of over 3,500 people and work with community researchers in Belfast, Brixton and Southampton. We also reviewed over 300 studies from around the world to identify what existing evidence tells us about public thoughts and feelings about AI, who is included in and excluded from that evidence, and how this shapes knowledge. We surveyed over 4,000 international AI researchers about their perceptions of AI benefits and harms, of including publics in their work, and of public attitudes. We funded participatory projects with people from underrepresented and negatively affected communities, led by The People Speak, UNJUST, Migrants’ Rights Network, and The Workers’ Observatory. We worked with the People’s Panel on AI, convened by Connected By Data, as an example of meaningful public inclusion in action. The panel guided our research development and dissemination, engaging most fully with our innovative Let’s Talk About AI webtoon, which informed a wider campaign for critical AI literacy. The panel also advised on resources we produced for AI researchers, developers and policy-makers to support the inclusion of public voices in AI, including a framework and self-assessment workbook that organisations, practitioners and technologists can use to critically evaluate and enhance their approaches to participation. We will present this work and our findings on the proposed panel. 


Panel description 

Public Voices in AI proposes a panel comprising researchers from the project alongside both expert and lay public discussants, to reflect on how to build and regulate for the AI-enabled society that diverse members of the public want, and how to involve the public in doing so from the outset. Our research found enormous appetite for this, especially amongst those most negatively impacted by AI. Through the dissemination of our findings (March – June 2025), we have also identified a similar appetite amongst AI researchers, developers and decision-makers, alongside a keenness for cross-community discussion and shared learning from our project. 

In the UK, there is great expertise in meaningfully involving the public in AI decision-making, which can be harnessed to build the AI-enabled society that people want. At present, however, this expertise and knowledge, and the diverse groups that produce them, are not joined up, and they are excluded from important processes such as the AI Opportunities Action Plan. We will reflect on our efforts to convene this work to shape AI for public good, and in the interests of the AI community. 


Speakers 

  • Margaret Colling, retired librarian and member of the People’s Panel on AI, on the importance of public deliberation and inclusion


For further information about Public Voices in AI, and to access our outputs, visit our project website.