On the 17th day of his hunger strike, Guido Reichstadter reported he was feeling generally okay, albeit moving somewhat slower than usual.
Since September 2nd, Reichstadter has been demonstrating outside the San Francisco headquarters of AI startup Anthropic, standing from approximately 11 AM to 5 PM daily. His chalkboard sign reads “Hunger Strike: Day 15,” though he says he actually stopped eating on August 31st. The sign calls on Anthropic to abandon the pursuit of artificial general intelligence (AGI): AI systems that could match or exceed human cognitive capabilities.
AGI has become a rallying point for tech executives, with both established corporations and startups striving to achieve this ambitious milestone. Reichstadter regards this pursuit as a significant existential threat that is not being adequately acknowledged. “Building AGI—systems that approach or surpass human intelligence—is a common aspiration among these leading companies,” he remarked to Technology News. “And I think it’s reckless. It is exceedingly dangerous and should come to a halt immediately.” He believes that a hunger strike is the most effective way to capture the attention of AI leaders.
Reichstadter cited a 2023 interview with Anthropic CEO Dario Amodei, which he considers to exemplify the industry’s reckless attitudes. Amodei stated that there is a “10 to 25 percent chance” that AGI could lead to catastrophic outcomes for human civilization. While Amodei and other leaders believe that AGI is an inevitable advancement, they take pride in being responsible stewards of its development—something Reichstadter labels a “myth” and “self-serving.”
From Reichstadter’s perspective, technology developers have a moral obligation to refrain from creating innovations that could harm large populations. He asserts that individuals aware of these risks also share some accountability.
“What I’m attempting is to meet my duty as an ordinary citizen who cares about the lives and wellbeing of others, including my fellow citizens,” he expressed. “I have two children as well.”
Anthropic has not yet replied to inquiries for comment.
Reichstadter said he greets Anthropic’s security personnel as he sets up his display each day, and often watches employees avert their eyes as they pass. At least one employee, he said, has confided similar fears of disaster. Because AI industry workers are involved in building potentially dangerous technology, he hopes to motivate them to embrace their humanity rather than act merely as instruments of their employers.
Concerns such as Reichstadter’s resonate with many in the AI safety community, which remains divided over the specifics of AI-related threats and the appropriate measures to mitigate them. Despite differences, most participants share the belief that the current trajectory is detrimental to humanity.
Reichstadter first encountered the idea of “human-level” AI as a college student roughly 25 years ago, when the prospect seemed distant. The arrival of technologies like ChatGPT in 2022 prompted him to reassess, and he grew concerned that AI could fuel the rise of authoritarianism in the U.S.
“My concern lies with society and the future of my family,” he stated. “I worry that AI is not being utilized ethically and that it presents severe and even existential risks.”
In recent months, Reichstadter has escalated his efforts to press tech leaders on what he sees as an urgent issue. He has previously worked with Stop AI, an advocacy group that calls for a permanent ban on superintelligent AI systems to avert human extinction and widespread job loss. Earlier this year, he took part in an action that shut down OpenAI’s San Francisco office, leading to the arrest of several protesters, including Reichstadter himself.
On September 2nd, he delivered a handwritten letter addressed to Amodei at Anthropic’s security desk. A few days later, he shared the letter online; in it, he urges Amodei to abandon the pursuit of uncontrollable technology and to work to halt the global race toward advanced AI. The letter expresses his fear for his children’s future and closes by announcing the start of his hunger strike.
“I trust he will respond to that appeal with the respect it deserves,” Reichstadter added. “It’s different to consider the impact of your work abstractly than to confront a potential victim face-to-face.”
Since Reichstadter began his protest, others have been inspired to launch similar demonstrations, including one outside Google DeepMind’s offices in London and another in India; the participants are broadcasting their fasts online.
Michael Trazzi, who joined the London protest, fasted for seven days before stopping after near-fainting spells. He continues to support Denys Sheremet, who is still fasting. Both Trazzi and Reichstadter share fears about humanity’s future as AI advances, yet neither chooses to align with any specific group.
Trazzi has been contemplating AI’s risks since 2017, culminating in a letter he sent to DeepMind CEO Demis Hassabis.
In his correspondence, Trazzi urged Hassabis to coordinate a halt on superintelligent AI development, proposing that DeepMind should announce it would stop if other leading AI companies in the West and China agreed to pause as well.
Trazzi commented to Technology News, “My advocacy for regulation stems from the inherent dangers associated with AI; if this area weren’t fraught with risks, I wouldn’t feel this way.”
Google DeepMind’s communications director, Amanda Carl Pratt, released a statement affirming the ongoing focus on safety, security, and responsible governance while underscoring the potential benefits of AI.
In a recent post on X, Trazzi said the hunger strike has sparked substantive conversations among tech workers. One Meta employee, he noted, asked why the protests singled out Google personnel.
He also shared that a DeepMind employee expressed skepticism regarding their company’s willingness to release models that might cause catastrophic harm, citing opportunity costs, while another conceded a belief that AI could lead to extinction.
To date, neither Reichstadter nor Trazzi has received a response to the letters they sent Amodei and Hassabis; Google declined to comment on the lack of a response. Both activists remain hopeful that the CEOs will engage with them or commit to responsible practices in AI development.
Reichstadter concluded, stating, “We are in an uncontrolled, global race toward disaster. If a solution exists, it hinges on individuals being willing to acknowledge the truth and ask for assistance.”