Katie Winkle is a social robotics researcher at Uppsala University in Sweden who studies the risks and opportunities that AI-powered robots bring to the classroom. She is developing tools to ensure that robots contribute to inclusive classrooms. Annie Brookman-Byrne talks with Katie about untangling the impacts of social robots on children, and about the evolution of robotic technology.
Annie Brookman-Byrne: In your research, what questions are you asking about robots in the classroom?
Katie Winkle: AI-powered social robots, that is, characterful robots that interact with us in social, human-like, or animal-like ways, have the potential to deliver more education to more children. But how might interacting with these robots influence children’s behaviours, as well as their relationships with teachers and peers? Might they make classrooms more inclusive, allowing all students to be seen and heard?
“Pretrained AI systems like chatbots and facial recognition systems frequently harbour gender and ethnicity biases.”
Pretrained AI systems like chatbots and facial recognition systems frequently harbour gender and ethnicity biases. I am developing computational approaches that allow students and teachers to “teach” robots how to behave – this democratises the application of AI and makes it possible to adapt the robots to diverse learning settings. Fine-tuning robots in this way should reduce biases rooted in the original data.
I also want to understand the risks and opportunities that social robots bring to the classroom. Might they encourage more girls to explore STEM subjects? Might they help mediate difficult interactions between students and their teachers and peers? Or might they actually undermine those relationships, which are crucial to social and emotional wellbeing?
These technologies are evolving rapidly, and social robots are already appearing in some classrooms. However, if we are not careful, our efforts to improve access and inclusion risk perpetuating existing educational inequities.
ABB: Have social robots for children’s learning changed over time?
KW: A lot of the early work in this area focused on autism, as researchers explored whether social robots might be used to promote social interaction skills. There was also typically a focus on one-on-one interactions between the child and the robot; the goal was to understand if and how a robot could autonomously deliver an intervention to a child, as a patient, predictable social partner for practicing emotion recognition and perspective-taking.
Recently, researchers have been more interested in learning how social robots might influence interactions between groups of students, and how one-on-one interactions between a child and a robot might influence children’s later interactions with their peers, teachers, and parents. It appears that interactions with robots and artificial social agents really can influence how children (and adults!) interact with others, so there is growing interest in understanding these effects.
However, social interaction is complex! It’s really hard to understand why children respond (or don’t respond) to a particular robot behaviour, and to untangle the ways robots influence children’s later behaviour. This complexity makes it difficult to determine just how to prevent the robot from causing problems, let alone how it can actually achieve something positive! But there’s something quite special about robots for both children and adults. They are clearly compelling, though exactly why remains a bit of a mystery.
ABB: What do you hope your research, and the field of social robotics, will achieve?
KW: I hope to help new digital tools deliver on their promise to provide better education for all students, not only those who are already privileged. By involving children in the design and automation of social robots, I also hope to improve children’s digital literacy. Children need to understand how AI works, what data are being collected about them, and how those data are being used. Given the rapid pace at which these technologies are evolving, children need to understand how to engage with these technologies safely – and when to disengage.
“My vision is for social robots to work alongside teachers to support children’s individual and group work.”
I am very aware of the risks associated with AI and robots in education, but despite those risks, I hope to see AI-powered robots put to good use. My vision is for social robots to work alongside teachers to support children’s individual and group work. Ultimately, I want them to be used as a tool to create more inclusive and welcoming classrooms.
Footnotes
Katie Winkle is an Assistant Professor in the Department of Information Technology at Uppsala University. She is a member of the Human Machine Interaction unit at the Division of Vi3, working in the social robotics lab. Before joining Uppsala University, she completed a Digital Futures postdoctoral research fellowship at the KTH Royal Institute of Technology in Stockholm and a PhD in robotics at the Bristol Robotics Lab in the UK. Her work draws on design as well as the computer, cognitive, and social sciences to tackle technical and societal challenges relating to human-machine interaction. Katie is a Jacobs Foundation Research Fellow 2023-2025.
This interview has been edited for clarity.