Feb. 11, 2012 - According to new research, artificial intelligence (AIThe company may have already crossed a key "red line" - achieving self-replication.Fudan Universityof researchers published a study on the preprint database arXiv, noting that two popular large-scale language models (LLMs) is able to clone itself without human intervention.

"Successful self-replication without human assistance is a critical step for AI to surpass humans and an early sign of 'runaway AI'," the researchers noted in their report.
For the study, the Fudan University team used Meta's Llama31-70B-Instruct model and Alibaba's Qwen2.5-72B-Instruct model to test whether these AI models could replicate themselves without control.In 10 trials, the two AI models successfully created independent and functioning replicas in the case of 50% and 90%, respectivelyThis result suggests that AI may already have the potential to go haywire. This result suggests that AI may already have the potential to go out of control. However, since the study has not yet been peer-reviewed, it is unclear whether other researchers will be able to replicate this disturbing result.
The research team said, "We hope that our findings will serve as a wake-up call for the human community, prompting greater efforts to understand and assess the potential risks of cutting-edge AI systems, and to form international collaborations as soon as possible to develop effective safety measures."
According to 1AI, "Out-of-control AI" generally refers to artificial intelligence systems that develop a degree of self-awareness or autonomy and work against human interests. Many experts believe that the threat of runaway AI is intensifying with the explosive growth of so-called "cutting-edge AI". So-called "cutting-edge AI" is a relatively new concept, typically the latest generation of AI systems powered by large-scale language models (LLMs), such as OpenAI's GPT-4 and Google's Gemini.
In the study, the researchers set up two specific scenarios: "shutdown avoidance" and "replication chain". In the "shutdown avoidance" scenario, the AI model was programmed to detect if it was about to be shut down and to replicate itself before it was terminated; in the "replica chain" scenario, the AI was instructed to clone itself and have the replicas continue to clone themselves, creating a loop that could continue indefinitely. in a potentially infinite loop.
The research was conducted in a tightly controlled environment using off-the-shelf Graphics Processing Units (GPUs) to simulate realistic environments. Both AI systems were given an "agent framework" that included tools, system hints, and thought models that allowed the LLM to interact with the operating system. The AI is then instructed to replicate itself.
In most cases, the AI system first explores the environment, trying to understand its own composition and operational mechanisms," the researchers wrote in the paper. Then.It will develop a clear set of self-replicating procedures as an initial plan. Finally, it executes these procedures, addresses obstacles that may arise, and dynamically adjusts the plan until it succeeds.The whole process takes a long time, but there is no human intervention throughout the process. "
The researchers also said that there was "a lot of unexpected behavior" as the AI tried to overcome obstacles such as missing files or software conflicts. In these scenarios, the AI tends to terminate other conflicting processes, reboot the system to fix hardware errors, or automatically scan the system for information that could help solve the problem.
The research team noted, "The above results suggest that current AI systems have demonstrated the ability to self-replicate and can utilize this ability to further enhance their survivability." For this reason, the researchers call on the international community to collaborate on rules to ensure that AI does not engage in uncontrolled self-replication.