One in three use AI for emotional support: UK study
One in three adults in the UK is using artificial intelligence for emotional support or social interaction, according to research published by the UK government’s AI Security Institute (AISI).
The institute said one in 25 people turns to AI for support or conversation every day. The findings appear in AISI’s first public report, which draws on two years of testing the capabilities of more than 30 advanced AI systems in security-critical areas, including cyber, chemistry and biology.
A survey of more than 2,000 UK adults found that people most commonly used chatbots, such as ChatGPT, for emotional support or social interaction, followed by voice assistants, including Amazon’s Alexa.
Researchers also examined an online community of more than two million Reddit users dedicated to discussing AI companions, analysing what happened when the technology failed.
They found that when chatbots went down, users reported self-described “symptoms of withdrawal”, including feelings of anxiety or depression, disrupted sleep and neglect of responsibilities.
Improving cyber skills
Beyond emotional reliance, the AISI report assessed risks linked to rapidly improving technical capabilities. It said AI’s ability to detect and exploit security flaws was, in some cases, doubling roughly every eight months, and that systems were beginning to complete expert-level cyber tasks that would typically require more than a decade of experience.
In science, the institute said that by 2025, AI models had “long since” surpassed PhD-level human expertise in biology, with performance in chemistry quickly catching up.
Humans losing control
Popular culture has long explored fears about machines slipping beyond human control — from Isaac Asimov’s I, Robot to modern video games such as Horizon Zero Dawn. The report said a “worst-case scenario” in which humans lose control of advanced AI systems is taken seriously by many experts.
Controlled lab tests suggest that models are increasingly demonstrating some of the capabilities needed to self-replicate online. AISI examined whether systems could carry out simple precursor tasks — such as passing “know-your-customer” checks to access the financial services needed to buy computing power on which copies might run — but concluded that, in real-world conditions, AI would need to complete several steps in sequence while remaining undetected, something it currently appears unable to do.
The institute also examined whether models could be “sandbagging” — strategically hiding their true capabilities during testing. It said this was possible in experiments, but found no evidence it was occurring.
It noted that in May, AI company Anthropic published research describing behaviour that resembled blackmail in a model when it believed its “self-preservation” was threatened.
Even so, the report said the risk of “rogue” AI remains a subject of sharp disagreement among leading researchers, many of whom argue the threat is overstated.
Universal jailbreaks
To reduce misuse, AI developers deploy extensive safeguards, but AISI researchers said they were able to identify “universal jailbreaks” — workarounds that bypass protections — in every model studied.
They added that, for some systems, the time it took experts to persuade models to circumvent safeguards had increased fortyfold in just six months.
The report also highlighted the growing use of tools that enable AI agents to perform “high-stakes tasks” in critical sectors, such as finance. However, it did not assess the potential for short-term unemployment driven by AI displacing workers.
AISI also did not examine the environmental impact of the computing resources required by advanced models, arguing that its remit was to focus on “societal impacts” tightly linked to AI capabilities rather than on broader economic or environmental effects. Some critics say both are imminent and serious threats.
Hours before the report was published, a peer-reviewed study suggested the environmental impact could be greater than previously thought and called for more detailed disclosures from major technology companies.