If you’re waiting for clear scientific answers about how screens and AI affect children, you may be waiting a very long time. In the meantime, families’ daily decisions can feel both urgent and uncertain.

As a researcher who studies how digital technology affects children and adolescents, I try not to mention my job at social gatherings where I would rather disconnect from work. When it does come up, the conversation quickly follows a familiar pattern: Parents share worries, frustrations, and sometimes guilt. Others offer opinions or anecdotes.

What strikes me most is not just the intensity of these conversations, but the uncertainty underneath them. Many parents feel they are making high-stakes decisions about their children’s development with very little clear guidance. They are navigating a fast-changing digital world that rarely pauses long enough for them to feel confident in their choices.

And they are not wrong.

Few topics in child development today provoke as much debate as digital technology. One reason is that “technology” is not a single, simple exposure. It includes fast-paced short-form videos, video calls with grandma halfway across the globe, educational apps of varying quality, and AI-powered toys.

These experiences are not the same. A video call with a relative may support connection and language development, while highly stimulating, rapidly changing content may place very different demands on attention. The effects of these experiences are also unlikely to be the same. They vary between children, and even within the same child depending on their age, previous experiences, and context.

This complexity makes the science difficult. When researchers try to pin down “the effect of screen time”, the answer is often mixed or inconclusive. That does not mean nothing is happening. It means we are trying to study something highly varied using methods that are not always well suited to capturing that variation.

At the same time, concerning trends, lived experiences from families, and a broader societal unease cannot simply be dismissed because the evidence is not definitive.

In an ideal world, we would wait for clear, high-quality evidence before making recommendations about children’s technology use. But digital technologies evolve far faster than the research that studies them. By the time robust evidence accumulates, the technology in question may already have changed or been replaced. Meanwhile, children will have grown, and any impacts will already have crystallised.

This creates a difficult situation. By relying only on traditional standards of evidence, we might be perpetually behind. This is why we need to think not only about evidence, but also risk.

My colleagues and I argue that when technologies are developing at speed, we may never reach the ideal level of scientific certainty. That does not mean we should do nothing. It means we should be transparent about uncertainty and willing to act earlier, while continuing to gather better evidence over time. We call this “minimum viable evidence”.

This is particularly important because many digital products are released in early or “minimum viable” forms. At the same time, the data needed to properly evaluate their safety is often not publicly available, as companies keep it under wraps. Parents and policymakers therefore have to make decisions without the full picture.

Thinking about risk also means asking what happens if we get things wrong. Decisions are not just about whether evidence meets a certain threshold, but also the consequences of acting too soon or too late.

Consider AI-powered toys. Research into their benefits and harms is still emerging, and there is not yet a clear consensus. So what should we do?

If we restrict them and they turn out to be beneficial for some children, we may have limited a potentially helpful tool. Some children, for example, might benefit from new forms of interaction or support for language development.

But if we wait for stronger evidence and they turn out to be harmful, the consequences could be serious. Children might hear inappropriate content, develop misunderstandings about social interaction, or engage with systems that blur the boundaries between human and machine in ways we do not yet fully understand.

There is no purely scientific answer here. It is a judgement about acceptable risk.

Childhood is a period of rapid and sensitive development that cannot be repeated. It lays the foundation for later learning, behaviour, and wellbeing. For that reason, some researchers, including me, take a more precautionary stance, especially for very young children. Where potential risks are significant and benefits are unproven, it may be better to limit exposure until safety is clearer.

This kind of risk-based thinking informed the development of the UK’s recent screen time guidance for young children, which I contributed to as part of an advisory group.

The guidance suggests that children under two should have no screen time, except for shared activities that support bonding and interaction. For children aged two to five, it recommends limiting screen time to around one hour per day, and less where possible.

However, a key shift in the guidance is moving beyond time alone. What children do on screens matters just as much as how long they spend on them.

Parents and caregivers are encouraged to choose age-appropriate, slower-paced, and safe content, and to avoid exposing young children to social media and emerging technologies such as AI. Mealtimes, bedrooms, and bedtime routines should be kept screen-free as far as possible, creating space for interaction, rest, and routines.

Importantly, the guidance also recognises that children develop within relationships. It is not only children’s screen use that matters, but also that of the adults around them. Minimising distraction and being present with children supports their development.

All of these recommendations were informed by a rapid review of the available evidence, but also by weighing up risks.

This kind of decision-making inevitably involves values. How much risk is acceptable in early childhood? How should potential benefits be weighed against uncertain harms? What responsibilities do technology companies have when their products are used by children?

Increasingly, I have come to see that being a researcher in this space, and in the intersection with policy, is not just about representing the evidence. It is also about helping those in positions of responsibility to think carefully about risk, uncertainty, and the consequences of different choices.

These decisions are not purely technical. They involve judgement, ethics, and societal values. In democratic societies, they are therefore shaped by the people we choose to represent us and the priorities we set collectively through elections.

Working on national guidance has been both challenging and rewarding. It is one thing to study children’s digital lives in research settings, and another to help shape recommendations that may influence millions of families.

And this is only the beginning. Questions about technology use do not stop at age five. Parents, schools, and young people themselves are all grappling with how to manage screen use, and increasingly AI, in their daily lives.

To support children’s learning and development in a rapidly changing world, we need better evidence, but also better ways of thinking about uncertainty and risk. That means being honest about what we do not yet know, while still making thoughtful, informed decisions in the present.

Waiting for perfect answers is not an option.