Demos
ADD-TREES: AI-Enabled Tree Planting for Net Zero
Our group works together on ADD-TREES, an AI for Net Zero research project that develops and delivers AI techniques, with a focus on tools to support decision making on where to plant the UK’s new woodlands for nature and net zero. Our team will demonstrate AI being used to synthesise process-based science, social science and economics to help address one of the defining challenges of our generation: the mitigation of climate change. We are presenting AI-enabled apps and tools that will help land managers and policy makers design new woodlands for net zero.
Daniel Williamson (University of Exeter School of Business & Economics); Kate Gannon (University of Exeter School of Business & Economics); Amy Binner (University of Exeter School of Business & Economics); Deyu Ming (UCL); Mingda Yuan (UCL); Timothée Bacri (University of Exeter); Ivis Kerama (University of Exeter); Bertrand Nortier (University of Exeter); Muhammad Hasan (University of Exeter); Boyun Song (University of Exeter)
AI Lens: A Live Generative AI Camera System for Dynamic Image Transformation
This demo introduces AI Lens, a novel AI-powered camera system that applies live generative transformations to video in real time. By replacing a traditional camera monitor with AI-generated imagery, the system enables creatives to engage in embodied, real-time interactions with diffusion models, actively shaping generative and presentation processes through movement, framing, and direct manipulation of model parameters. The system explores models for real-time prompt and prompt-weighting manipulation, as well as social and technical organisations of a participatory capture process.
Richard Ramchurn (University of Nottingham); Paul Tennent (The University of Nottingham); Gabriella Giannachi (The University of Exeter); Feng Zhou (University of Nottingham); Karen Lancel (Lancel/Maat); Hermen Maat (Lancel/Maat); Kieran Woodward (The University of Nottingham); Angela Higgins (The University of Nottingham); Dominic Price (The University of Nottingham); Steve Benford (The University of Nottingham); Rachael Garrett (The Royal Institute of Technology (KTH)); Christos Kitsidis (The Royal Institute of Technology (KTH))
Embrace Angels: an Artistic Exploration of how Humans and Robots can Embrace
Embrace Angels is an artist-led exploration of how humans and robots might embrace in the future, intended to engage audiences with questions about responsible physical engagement with future robots. We will present a video installation of documentary material from four design workshops and a first public performance, which took place in July 2025.
Richard Ramchurn (University of Nottingham); Paul Tennent (The University of Nottingham); Gabriella Giannachi (The University of Exeter); Karen Lancel (Lancel/Maat); Hermen Maat (Lancel/Maat); Kieran Woodward (The University of Nottingham); Angela Higgins (The University of Nottingham); Dominic Price (The University of Nottingham); Steve Benford (The University of Nottingham); Rachael Garrett (The Royal Institute of Technology (KTH)); Christos Kitsidis (The Royal Institute of Technology (KTH))
EdgeAI-Enabled Responsible Hybrid Federated Learning
This research aims to develop a responsible hybrid federated learning system for medical anomaly detection, specifically optimized for EdgeAI platforms. The proposed approach enables multiple clients to train their local models and share only model updates with a fusion server, preserving data privacy. It also integrates fairness evaluation and resilience to adversarial poisoning. A live demo and real-time dashboard will showcase the system’s capabilities on lightweight edge devices.
Mostafa Anoosha (University of Hull); Koorosh Aslansefat (University of Hull); Dhaval Thakker (University of Hull)
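The train-locally, share-only-updates pattern described above can be sketched as federated averaging (FedAvg). This is a generic illustration, not the demo's actual implementation: the least-squares model, client data, and hyperparameters below are all invented.

```python
# Sketch of federated averaging: each client trains on private data
# and shares only parameter updates; a fusion server averages them.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a
    least-squares loss. The raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fuse(client_weights):
    """Fusion server: average the clients' model parameters (FedAvg)."""
    return np.mean(client_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])

# Three clients, each holding its own private dataset.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for round_ in range(10):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fuse(updates)  # only model weights crossed the network
```

After a few rounds the fused model approaches the weights underlying every client's data, even though the server never saw any of that data directly.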
Open-Source AI Framework for Exploring Electronic Materials
Recent advances in AI models and materials databases have significantly accelerated materials discovery, yet the resulting landscape of models and databases remains fragmented, especially in the field of electronic materials, where accurate property prediction and model integration are essential. Herein, we present an open-source, unified framework that integrates materials databases, generative models, and property predictors into a modular and extensible platform. The system offers standardized classification, cross-database querying, model benchmarking, and an LLM-based interface for natural language interaction. Using simple YAML or JSON files, users can easily configure application-specific workflows and data pipelines for property-based database filtering and for model selection that maximizes the desired figure of merit (e.g. electronic property prediction, stable material generation). This framework streamlines AI-driven materials research, enhances interoperability, and expands access to advanced discovery tools for the electronic materials community.
Atish Dixit (University of Edinburgh); Alexandros Keros (University of Edinburgh); Ben Rowlinson (University of Edinburgh); Shashank Mishra (University of Edinburgh); Jacqueline Cole (University of Cambridge); Subramanian Ramamoorthy (University of Edinburgh); Themis Prodromakis (University of Edinburgh)
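As a sketch of what an application-specific workflow file for such a framework might look like, consider the hypothetical YAML fragment below. Every key and value is invented for illustration; it is not the framework's actual configuration schema.

```yaml
# Hypothetical workflow: screen databases for candidate semiconductors,
# generate new structures, and rank by a chosen figure of merit.
workflow:
  name: stable-semiconductor-screen
  databases: [materials_project, oqmd]    # cross-database query targets
  filters:
    band_gap_eV: {min: 1.0, max: 2.0}     # property-based filtering
    energy_above_hull_eV: {max: 0.05}     # keep near-stable structures
  generator:
    model: crystal-diffusion              # generative model to sample from
    n_candidates: 500
  predictor:
    property: band_gap_eV                 # property predictor to apply
    benchmark: true                       # compare available models first
  objective:
    maximize: figure_of_merit
```

A pipeline driver would read this file, run the filter and generation stages, and hand the surviving candidates to the benchmarked predictor.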
A Demonstrator of Human-Centred Adaptive Multi-Agent Robotics Using Multimodal Physiological Sensing
We will be showcasing a demonstrator of a human-centred, adaptive multi-agent robotic system operating in a dynamic environment. The system uses multimodal sensing to infer human states and enable real-time behavioural adaptation, showcasing responsible and responsive human-robot interaction for complex, real-world scenarios.
Aleksandra Landowska (University of Nottingham); Aislinn D. Gomez Bergin (University of Nottingham); Andriana Boudouraki (University of Nottingham); Dominic Price (University of Nottingham); Joel E. Fischer (University of Nottingham)
Constant Washing Machine: Re-thinking responsible AI as an everyday practice and across the AI research community
The AHRC/BRAID-funded Framing Responsible AI Implementation and Management (FRAIM) team included researchers from diverse backgrounds and disciplines, who worked ‘in-and-with’ partner organisations from different sectors that are working with AI. We sought to identify the key questions and stakeholders involved in making responsible use of AI more tangible and practical for organisations. Through a review of responsible AI literature and resources, and in-depth interviews with partner staff in different AI-related roles, we sought to understand what questions are and are not being asked about what “responsible AI” means, and to whom. This demonstration presents one output from the project, created in collaboration with the arts collective Blast Theory as practice-based researchers.
Susan Oman (University of Sheffield); Denis Newman-Griffis (University of Sheffield); Hannah Redler-Hawes (Independent Curator)
Musically Embodied Machine Learning: Instrument Demo
The MEMLNaut is a novel instrument, developed under the AHRC Musically Embodied Machine Learning project, that gives musicians access to creative machine learning. Its key innovation is to provide flexible interface options for optimising models within the device, allowing musicians to develop customised models with their own data and engage creatively with the full process of machine learning. The instrument has modes for conventional interactive machine learning (IML) and also offers a reinforcement learning (RL) mode. The instrument itself can both process and synthesise sound, and offers expansions to connect sensors and actuators. The demo will allow conference participants to try out sound synthesis with the MEMLNaut, using IML and RL.
Chris Kiefer (University of Sussex); Andrea Martelloni (University of Sussex)
Serious games for ethical preference elicitation
Autonomous agents acting on behalf of humans must act according to those humans’ ethical preferences. However, ethical preferences are latent and abstract, and thus it is challenging to elicit them. To address this, we present a serious game that helps elicit ethical preferences in a more dynamic and engaging way than traditional methods, such as questionnaires or simple dilemmas. In this game, the player operates a drone in a rescue setting. The drone can extinguish fires in buildings while flying over a city. It faces different kinds of ethical dilemmas while rescuing people by extinguishing the fire and acts as per the player’s response. In the end, based on all the choices and responses of the player in the game, we infer their ethical preferences in this setting.
Jayati Deshmukh (University of Southampton); Zijie Liang (University of Southampton); Vahid Yazdanpanah (University of Southampton); Sebastian Stein (University of Southampton); Sarvapali D Ramchurn (University of Southampton)
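One generic way to infer preferences from a record of in-game choices is a softmax choice model fitted by maximum likelihood. The sketch below uses that standard approach, not necessarily the authors' method; the dilemmas, features, and numbers are all invented.

```python
# Infer a player's trade-off weights from dilemma choices, assuming the
# player picks actions with probability proportional to exp(utility).
import numpy as np

# Each dilemma offers two actions described by outcome features,
# here (people_rescued, property_saved). The player picks one.
dilemmas = [
    (np.array([3.0, 0.0]), np.array([0.0, 5.0])),
    (np.array([1.0, 1.0]), np.array([0.0, 4.0])),
    (np.array([2.0, 0.0]), np.array([1.0, 1.0])),
]
choices = [0, 0, 0]  # this player always prioritised rescuing people

def neg_log_likelihood(w):
    """Negative log-likelihood of the observed choices under weights w."""
    nll = 0.0
    for (a, b), c in zip(dilemmas, choices):
        u = np.array([w @ a, w @ b])      # utility of each option
        u -= u.max()                      # for numerical stability
        p = np.exp(u) / np.exp(u).sum()   # softmax choice probabilities
        nll -= np.log(p[c])
    return nll

# A crude grid search over normalised weights suffices for a sketch.
grid = np.linspace(0.0, 1.0, 101)
best = min(((w1, 1.0 - w1) for w1 in grid),
           key=lambda w: neg_log_likelihood(np.array(w)))
# best[0] is the inferred weight on rescuing people vs. saving property
```

With these toy choices the likelihood is maximised by putting all weight on rescuing people, matching the player's consistent behaviour.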
Closer to Go(o)d?
Closer to Go(o)d? seeks to undermine what some see as the quasi-mystical nature of AI as a neutral, all-seeing arbiter of knowledge. Such views rely on the vast distances that exist between AI labs and local communities. The title plays on the phrase ‘closer to God’, critiquing the myth of AI’s neutrality by exposing how AI technology, with a focus on computer vision, works to embed systemic inequalities through colonial norms and classification systems: systems that typically prioritise pattern matching over meaningful difference and undermine human complexity for optimised efficiency. Through participatory workshops with Black and ethnically diverse communities in Birmingham – and in collaboration with BRAID Research Fellow Sanjay Sharma, Diverse AI, and BLAST Fest – this commission embraces Afrofuturism to reimagine image annotation practices. It aims to demystify computer vision technologies and promote a radical, ethical approach to responsible AI that is centred in care and community. The culmination of this engagement is a short film that draws on the participatory experiences and practical outputs of the community workshops. This film not only activates critical AI literacy among diverse audiences but also connects past and present, challenges the myth of objectivity, and makes visible the biases embedded in ostensibly objective systems. Drawing inspiration from the West African philosophy of Sankofa, which emphasises learning from the past to improve the future, the project uses archival footage, collage and glitch effects to highlight how historical biases and colonial classifications continue to shape contemporary algorithmic practices. Closer to Go(o)d? aims not only to help us feel more informed and engaged in discussions about the future of responsible AI but also to encourage a broader rethink of social justice in relation to AI, making critical issues both visible and felt.
Kiki Shervington-White |
Addressing sociotechnical limitations of LLMs: Requirement-Aligned Evaluation of AI Systems
Large Language Models (LLMs), like those used in ChatGPT and virtual assistants, are cutting-edge artificial intelligence algorithms trained on massive amounts of text data. They can generate human-like text and creative content, translate across languages, and answer questions in an informative way. However, they have known technical limitations, such as biases, privacy leaks, poor reasoning and lack of explainability, which raise concerns about their use in critical domains such as healthcare and law. An important goal of our Keystone project on Addressing sociotechnical limitations of LLMs (AdSoLve) is to develop extensive evaluation methods (including suitable novel criteria, metrics and tasks) for assessing the effectiveness and limitations of LLMs in real-world settings, enabling our standards and policy partners to implement responsible regulations, and our industry and third-sector partners to robustly assess their systems. Evaluation should be based both on output quality at a technical level and, importantly, on real-world needs, yet existing practices fail to reflect real-world user requirements and task complexities. This demo presents the current state of the AdSoLve evaluation platform, which follows an approach that first seeks to understand stakeholder goals and requirements and how stakeholders interact with the technology, then translates these into targeted aspects and metrics computed on contextually appropriate datasets and tasks.
Guneet Kohli (Queen Mary University of London); Mahmud Akhter (Queen Mary University of London)
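The requirement-to-metric translation described above can be pictured with a small sketch: stakeholder requirements are mapped to computable metric functions and scored per model output. The requirement names, toy metrics, and example texts below are all invented for illustration, not the AdSoLve platform's actual criteria.

```python
# Hypothetical requirement-aligned evaluation: each stakeholder
# requirement is bound to a concrete, computable metric.

def factual_consistency(answer, reference):
    """Toy proxy: fraction of reference terms that appear in the answer."""
    ref_terms = set(reference.lower().split())
    ans_terms = set(answer.lower().replace(".", "").split())
    return len(ref_terms & ans_terms) / len(ref_terms)

def reading_level_ok(answer, max_words_per_sentence=25):
    """Toy accessibility proxy: 1.0 if average sentence length is short."""
    sentences = [s for s in answer.split(".") if s.strip()]
    avg = sum(len(s.split()) for s in sentences) / len(sentences)
    return 1.0 if avg <= max_words_per_sentence else 0.0

# Requirements elicited from stakeholders, mapped to metrics.
REQUIREMENTS = {
    "clinically accurate": factual_consistency,
    "accessible to patients": lambda answer, ref: reading_level_ok(answer),
}

def evaluate(answer, reference):
    """Score one model output against every stakeholder requirement."""
    return {name: metric(answer, reference)
            for name, metric in REQUIREMENTS.items()}

scores = evaluate(
    "Take the tablet with food. Avoid alcohol.",
    "take tablet with food avoid alcohol",
)
```

The point of the pattern is that the evaluation dashboard reports scores in the stakeholders' own vocabulary ("clinically accurate") rather than only generic NLP metrics.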