17 days on hunger strike: an American man outside Anthropic's headquarters calls for an end to the AGI race

As of September 18th, Guido Reichstadter had been on hunger strike for 17 days. He said he remained in good health, only feeling a little slowed down, but otherwise unharmed.


According to The Verge, every day since September 2nd, Reichstadter has stood outside the San Francisco headquarters of the AI startup Anthropic from around 11 a.m. until 5 p.m. The sign he carries reads "Hunger strike: day 15," though he had in fact stopped eating on August 31st. His demand to Anthropic: stop the race to artificial general intelligence (AGI). AGI refers to an AI system whose cognitive capabilities match or even exceed those of humans.

The report notes that AGI has become a popular rallying cry among tech company CEOs, with leaders of both large companies and startups racing toward this subjective milestone. In Reichstadter's view, however, these companies are not genuinely reckoning with the existential risks of AGI. "The goal of these frontier companies is to develop an AGI system at human level, or even beyond human level, that is, superintelligence," he said in an interview. "I think it's crazy, reckless, and extremely dangerous. I think it has to stop now." In his view, a hunger strike is the most direct way to get the attention of leaders in the AI field. And today he is not the only one who holds these views and is acting on them.

Reichstadter pointed to a 2023 interview with Anthropic CEO Dario Amodei, which he considers a full expression of the AI industry's recklessness. Amodei said at the time: "I believe the probability that the development of artificial intelligence leads to catastrophic consequences for human civilization may be somewhere between 10% and 25%." Amodei and others argue that the development of AGI is inevitable and say their goal is merely to be "the most responsible stewards possible." But in Reichstadter's eyes, this is an outright lie and an excuse for selfishness.

In Reichstadter's view, companies have a responsibility not to develop technologies that could harm people on a massive scale, and anyone who understands the risk bears some responsibility as well.

"What I'm doing now is essentially fulfilling my responsibility. I'm just an ordinary citizen, but I respect the lives and well-being of my fellow citizens, and I care about the people of my country," he said. "More importantly, I have two children."

To date, Anthropic has not responded to requests for comment.

Reichstadter says that each day, as he sets up his protest outside Anthropic's headquarters, he waves to the company's security guards; when Anthropic employees walk past, most deliberately avoid his gaze. He mentions, however, that at least one employee has expressed similar concerns about catastrophic risk to him. He hopes to inspire employees of AI companies "to have the courage to act like human beings, not tools of the company," because they bear a greater responsibility; after all, "they are building the most dangerous technology on Earth."

Many people in the AI safety field share his concerns, even though the term "AI safety" is itself contested: the field is relatively fragmented, with many disagreements over which long-term risks AI actually poses and how to guard against them effectively. But most can agree that the current trajectory of AI does not bode well for humanity.

Reichstadter says he first learned about the possibility of "human-level artificial intelligence" about 25 years ago, when he was in college, though it seemed far off at the time. After ChatGPT was released in 2022, he began paying close attention to the field. He is particularly concerned that, in his view, artificial intelligence is exacerbating authoritarian tendencies in the United States.

"I care about our society," he said. "I care about my family and about their future. I'm afraid of how the development of artificial intelligence will affect them. I'm worried about the lack of ethical discipline in how artificial intelligence is being deployed. At the same time, I fear there is good reason to believe that artificial intelligence not only poses catastrophic risks but may even threaten the very survival of humanity."

In recent months, in an effort to focus the attention of tech industry leaders on an issue he considers vitally important, Reichstadter has taken a series of increasingly public actions. He previously worked with an organization called Stop AI, which is committed to "permanently banning superintelligent AI systems to prevent human extinction, mass unemployment, and many other problems." In February of this year, together with other members of the organization, he locked the doors of OpenAI's San Francisco office; several people, including him, were arrested by police on public-order charges.

On September 2nd, Reichstadter delivered a handwritten letter to Amodei via the security desk at Anthropic; a few days later, he made it public. The letter asked Amodei to stop developing "technology beyond his control" and to do everything in his power to halt the global AI race; if Amodei was unwilling, the letter asked him to explain why. Reichstadter wrote: "For my children, and because of my deep awareness of the urgency and gravity of the situation, I have begun a hunger strike outside the Anthropic offices... awaiting your response."

"I hope he has the basic courtesy to respond to my request," Reichstadter says. "I don't think these people (the leaders of AI companies) have ever really been challenged on a personal level. Thinking anonymously and abstractly that 'my work may lead to the deaths of a great many people' is one thing; facing a person who may be harmed by your work, and explaining your reasons to them as one human being to another, is quite another."

Shortly after Reichstadter began his peaceful protest, two people inspired by him launched a similar effort in London, protesting outside Google DeepMind; another joined from India, livestreaming his hunger strike.

Michael Trazzi went on hunger strike in London for seven days; after fainting twice and consulting a doctor, he chose to stop, but he continues to support another protester, Dennis Sheremet, who has been on hunger strike for 10 days. Like Reichstadter, Trazzi is worried about humanity's future as AI development continues, but neither is willing to identify himself as a member of any particular group or organization.

Trazzi says he has been thinking about the risks of artificial intelligence since 2017. He wrote a letter to DeepMind CEO Demis Hassabis, delivering it both publicly and through intermediaries.

In the letter, Trazzi asked Hassabis "to take the first step toward a coordinated future pause in superintelligence research and development, starting today, by publicly stating that DeepMind would agree to a moratorium if all other major AI companies in the West and China agreed to pause the development of frontier AI models. Once all major companies have agreed to a moratorium, governments can push for an international agreement to enforce this decision."

In an interview, Trazzi said: "If it weren't for the enormous danger of artificial intelligence, I wouldn't be so supportive of regulation. But I think this is something that, by default, is heading in the wrong direction. In the case of artificial intelligence, regulation really is needed."

In a statement, Amanda Carl Pratt, Google DeepMind's director of communications, said: "The field of artificial intelligence is developing rapidly, and people inevitably disagree about this technology. We firmly believe that artificial intelligence has the potential to advance science and improve the lives of billions of people. Safety, security, and responsible governance remain our top priorities as we build a future in which humanity benefits from this technology while being protected from its risks."

Trazzi posted on X (formerly Twitter) that the hunger strike had sparked conversations with many tech employees. He mentioned that one Meta employee asked him, "Why only Google? We're also doing important work, and we're in the same race."

He also said in his post that one DeepMind employee told him the company probably would not release models capable of catastrophic harm, because the opportunity cost would be too high, while another DeepMind employee "acknowledged that he believes artificial intelligence is likely to cause human extinction, but still chooses to work at DeepMind, since it remains one of the safest companies."

So far, Reichstadter has received no response from Amodei, nor Trazzi from Hassabis (Google also declined to answer The Verge's question about why Hassabis had not replied). But both still hope their actions will lead to an acknowledgment, a meeting, and, ideally, a commitment from these companies' CEOs to change course.

"We are in an out-of-control global race that could lead to disaster. To find a way out, people need to be willing to tell the truth, to admit that 'we can't control this,' and to actively seek help."
