December 6 — According to Business Insider, "AI godmother" Fei-Fei Li says the current discussion around AI has become too extreme.

In a lecture at Stanford University made public on Thursday, she said, "I would say I'm the most boring speaker in AI today, because both the AI-threat camp and the AI-omnipotence camp are exaggerating," adding that this is what disappoints her.
"What we hear about is the total extinction of humanity, the end of the world, and so on — that AI will destroy humans and machines will rule the world," she said. "On the other hand, there are those who hold a 'pure utopia' (overly idealistic) view that AI will bring a 'post-scarcity era' (of great resource abundance) and 'unlimited productivity.'"
Fei-Fei Li is a longtime professor of computer science at Stanford University, best known for creating the ImageNet dataset. Last year, she co-founded World Labs, which is working to develop AI models that can perceive, generate, and interact with three-dimensional environments.
In her Stanford lecture, she said this kind of "extreme rhetoric" crowds out technical discussion and misleads vulnerable groups.
"People around the world, especially those outside Silicon Valley, need to hear the truth and understand exactly what this technology is," she said. "But this kind of discussion, this way of communicating, this kind of public education has not yet delivered the results it should."
Beyond Fei-Fei Li, other top computer scientists have also called for a more balanced portrayal of AI and its social impact.
In July this year, Google Brain founder Andrew Ng said he believes artificial general intelligence (AGI) is overhyped. AGI refers to AI systems with human-level cognitive ability, able to learn and apply knowledge the way humans do. Executives at leading AI companies are frequently asked when they expect AGI to arrive and what it will mean for human workers.
"AGI has been overhyped," Ng said in a talk at Y Combinator. "There will still be many things humans can do that AI cannot."
Yann LeCun, formerly Meta's chief AI scientist, has said that large language models, while "amazing," have limitations.
In an interview last year, he said, "They are not a path to so-called AGI. I hate that term. There's no doubt that they are useful, but they are not a path to human-level intelligence."
Last month, LeCun announced that he was ending his 12-year career at Meta to start an AI company.