As artificial intelligence systems grow more capable at an accelerating pace, a senior U.K. government adviser has warned that public institutions may be running out of time to fully prepare for the safety risks such technologies pose.
A Warning From Inside Government
David Dalrymple, a program director and AI safety expert at the U.K. government’s Advanced Research and Invention Agency, has issued a stark warning about the speed at which artificial intelligence is advancing and the limited window governments may have to respond. Speaking to The Guardian, Dalrymple said the world “may not have time” to put adequate safety measures in place before AI systems reach levels of capability that fundamentally challenge human dominance across key domains.
Dalrymple framed his concern around systems that could eventually perform “all of the functions that humans perform to get things done in the world, but better.” In such a scenario, he said, humans could find themselves outcompeted in the very areas required to maintain control over “our civilisation, society, and planet.” His remarks come as governments worldwide struggle to reconcile rapid private-sector innovation with the slower pace of regulation, oversight, and public understanding.
A Growing Gap Between Policymakers and AI Developers
Dalrymple highlighted what he described as a widening gap in understanding between the public sector and companies building advanced AI systems. According to him, the pace of technological progress inside leading AI labs is often poorly understood by policymakers, even as breakthroughs arrive with increasing frequency.
“Things are moving really fast,” Dalrymple said, warning that from a safety perspective, society may not be able to get ahead of developments in time. He added that it was “not science fiction” to project that within five years, most economically valuable tasks could be performed by machines at a higher quality and lower cost than by humans.
Such projections, he suggested, raise fundamental questions about economic disruption, institutional readiness, and the ability of governments to adapt before the changes become irreversible.
Reliability, Economic Pressure, and Risk Mitigation
Dalrymple also cautioned governments against assuming that advanced AI systems will be reliable simply because they are powerful. He pointed to economic pressures that could incentivize rapid deployment before the science needed to ensure full reliability has matured.
ARIA, the agency where Dalrymple works, funds high-risk research and operates independently of government despite being publicly financed. Part of its remit is safeguarding the use of AI in critical sectors, including energy infrastructure. Dalrymple noted that because robust scientific guarantees of safety may not arrive quickly enough, controlling and mitigating the downsides may be the most realistic near-term option.
“The next best thing that we can do, which we may be able to do in time, is to control and mitigate the downsides,” he said.
Rapid Capability Gains and Official Safety Assessments
The warnings align with recent findings from the U.K.'s AI Security Institute, which reported that AI capabilities are advancing at extraordinary speed. In some areas, performance is doubling roughly every eight months, according to the institute; if sustained, that rate would compound to roughly an eightfold improvement over two years.
Its testing showed that advanced models now succeed at apprentice-level tasks in roughly half of attempts, and some systems can autonomously complete tasks that would take a human expert more than an hour. In tests focused on self-replication, a key safety concern, two cutting-edge models achieved success rates above 60 percent.
At the same time, the institute stressed that worst-case scenarios remain unlikely under everyday conditions. Even so, Dalrymple warned that when technological progress outpaces safety measures, the resulting risks could have serious implications for both national security and the global economy.
“Human civilisation,” he said, is “sleepwalking into this transition,” even as those at the frontier hope the disruption will ultimately prove beneficial.