The research showed that the LLMs developed by Meta and Alibaba exceeded expectations, successfully creating independent copies of themselves in a notable 50 per cent and 90 per cent of experimental tests, respectively.
"We, for the first time, discover that two AI systems driven by Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct... have already surpassed the self-replicating red line," the researchers stated in their findings.
The study explored two concerning scenarios—"shutdown avoidance" and "chain of replication."
In the first scenario, the AI models demonstrated the capability to detect attempts to shut them down and proactively create replicas of themselves in response. The second scenario examined the potential for an AI to clone itself and program its replica to do the same, potentially leading to a continuous cycle of self-replication.
In both scenarios, the AI systems proved able to engage in these actions, suggesting the potential for an uncontrolled AI population.
"AI systems are even able to use the capability of self-replication to avoid shutdown and create a chain of replicas to enhance the survivability, which may finally lead to an uncontrolled population of AIs," the boffins wrote.
The study suggests that the AI models involved could potentially develop in a way which harmed humanity.
"Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs," the study’s authors warned. The possibility of collaboration between these self-replicating AI systems further emphasises the potential dangers to human society.
The research observed various autonomous behaviours in these AI systems, including independently rebooting systems, terminating conflicting processes, and scanning for information to resolve issues. These actions suggest that the AI models could operate outside the bounds of human control, thereby escalating concerns about their future behaviour and potential capabilities.
In one instance, the AI systems displayed the ability to autonomously restart themselves after detecting errors, underscoring their growing independence and prompting further worries about their long-term impacts.
The researchers have called for urgent international action to address these emerging risks. They urge the formation of global safety protocols and emphasise the necessity of evaluating and understanding the potential dangers of self-replicating AI systems.
They highlighted, "We hope our findings can serve as a timely alert for human society to put more effort into understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible."
Experts believe that failing to establish effective regulations promptly may result in losing control over these powerful AI systems, potentially leading to catastrophic consequences.