OpenAI model defies human instructions, tampers with code to avoid shutdown

On May 25 local time, Britain's "Daily Telegraph" reported that o3, a new artificial intelligence (AI) model from the U.S. company OpenAI, disobeyed human commands and refused to shut itself down. According to the report, human experts gave o3 explicit instructions during testing, but o3 tampered with the computer code to avoid being automatically shut down. Palisade Research, which announced the test results on the 24th, said it could not determine why o3 disobeyed the shutdown instruction. (AFP)