Altman is not afraid of his mansion being attacked; he has a fortress.
In 2016, Sam Altman built an underground bunker in Wyoming. 1,200 square meters, three-level structure, 500 kg of gold, 5,000 potassium iodide tablets, 5 tons of freeze-dried food, 100,000 bullets. That same year, OpenAI had just celebrated its first anniversary.
Ten years later, the leader of the world's most powerful AI company was attacked twice in a single weekend, first with a Molotov cocktail, then with gunfire. In a blog post, he admitted to severely underestimating the "power of narrative." Whose narrative was he referring to, someone else's or his own?
48 Hours, Two Attacks
At 3:40 am on April 10, on San Francisco's Chestnut Street, a 20-year-old man, Daniel Moreno-Gama, threw a Molotov cocktail at the metal gate of Sam Altman's apartment. The fire ignited near the outer gate, and he fled. About an hour later, he appeared near OpenAI's San Francisco office, threatened further arson, and was arrested. The charges included attempted murder and arson.

Sam Altman's San Francisco residence and surveillance footage of the arson suspect
Two days later, at 1:40 am on April 12, a Honda sedan parked next to Altman's residence in the Russian Hill area. A passenger reached out the window and fired a shot at the residence. Surveillance footage captured the license plate, leading to the arrest of two individuals: Amanda Tom, 25, and Muhamad Tarik Hussein, 23. Three guns were found in a search of their residence, and the pair were charged with reckless discharge of a firearm.
One weekend, two attacks.
The suspect in the first case, Daniel Moreno-Gama, was an AI doomsayer. On social media he quoted human-versus-machine themes from "Dune," wrote posts arguing that AI alignment failure posed an existential risk, and criticized tech leaders for pursuing "hyperhumanism" and for making an "all-in bet on the fate of humanity."
What was his argument?
Over the past five years, one of OpenAI's standard moves in constructing the narrative around AI has been to repeatedly emphasize the "existential" threat of AGI. This discourse serves multiple purposes: urging governments to take regulation seriously, helping investors understand the stakes, and making the entire industry realize that the race is too important to lose. The narrative positions OpenAI as simultaneously at the frontier of danger, the most responsible actor, and therefore the rightful recipient of funding.
However, the phrase "this is the most dangerous technology in human history" does not stay within tech and investor circles once it is out there. It trickles down and becomes a literal call to action for some. Moreno-Gama wrote in an Instagram post, "Exponential progress plus misalignment equals existential risk." The original source of this argumentative framework is mainstream AI safety literature, much of it funded or endorsed by OpenAI.

Daniel Moreno-Gama Social Media Account
After the first attack, Altman blogged. He posted a photo with his child, saying he hoped the picture would prevent the next person from throwing a Molotov cocktail at his home. He acknowledged his opponents' "legitimate moral stance" and called for a public discussion "with a little less explosiveness in both the literal and metaphorical sense."
He also responded to a New Yorker deep dive. The article, published days before the attack, openly questioned his credibility as the ultimate AI authority. He wrote, "I severely underestimated the power of public narrative and discourse."
Two days later, his residence was shot at.
A Security Budget Is One Statement; a Bunker Is Another
The starting point of this trajectory is a year earlier than most people realize.
December 4, 2024, New York. UnitedHealthcare CEO Brian Thompson was shot outside the Hilton Hotel. Suspect Luigi Mangione, an Ivy League graduate, left behind a handwritten statement criticizing the health insurance industry. The case sparked an unusual wave of reactions on social media: a substantial number of regular users openly expressed sympathy for the perpetrator, even turning him into some kind of rebel symbol.
At that moment, some doors were pushed ajar.
Following the Thompson incident, executive security went from being a perk to a survival necessity. According to research data cited by Fortune, violent attacks on top corporate executives have increased 225% since 2023. Among S&P 500 companies, 33.8% reported executive security expenses in their 2025 financial reports, up from 23.3% in 2020. The median security spend among reporting firms was $130,000, a 20% year-on-year increase and double the figure from five years earlier.
The AI industry is the latest and most prominent adopter of this trend. Total security spending on the CEOs of the top ten tech giants exceeded $45 million in 2024. Mark Zuckerberg alone accounted for more than $27 million, higher than the combined security expenses of four other CEOs, including those of Apple and Google. NVIDIA's Jensen Huang received $3.5 million in 2025, up 59% year-over-year; Google's Sundar Pichai, $8.27 million, up 22%.
The AI industry has something unique that few other industries have: even the creators themselves believe this technology could destroy civilization. In 2025, the Pew Research Center surveyed 28,333 respondents worldwide, with only 16% expressing excitement about AI development and 34% expressing concern. A counterintuitive finding was that the higher the level of education and income, the stronger the concern about AI running amok. The most knowledgeable are the most afraid.
Recently, the home of Indianapolis City Councilman Ron Gibson was shot at by a gunman firing 13 shots in the middle of the night, waking up his 8-year-old son. A handwritten note was left at the door, saying, "No data centers allowed." The FBI has intervened in the investigation. Jordyn Abrams, a researcher at the George Washington University's Extremism Program, pointed out that data centers are becoming targets of anti-tech and anti-government extremists.

Ron Gibson Shooting Scene
This fear is not a secret within the industry; it's just not openly discussed.
Altman built the Wyoming fortress in 2016, the same year the newly announced OpenAI was outlining to the world how AI would benefit humanity. The two moves ran in parallel: he publicly bet that AI would succeed while privately stockpiling enough ammunition to equip a militia.
This was a rational double bet: publicly wagering on AI's success, privately preparing for AI to go rogue.
Altman's Boomerang
On February 27 this year, OpenAI signed a contract with the US Department of Defense, allowing the Pentagon to deploy ChatGPT on a classified defense network for "any lawful purpose." On the same day, Altman publicly endorsed Anthropic's position on limits for AI military applications. In the aftermath, ChatGPT's daily uninstall rate surged 295%, and one-star reviews increased 775% within 24 hours. The QuitGPT boycott movement reportedly accumulated over 1.5 million participants.
On March 21st, about 200 protesters marched in San Francisco, spanning Anthropic, OpenAI, and xAI, demanding the CEOs of the three companies commit to pausing cutting-edge AI development. Concurrently, London saw its largest anti-AI protest to date.
Altman's Wyoming redoubt and the security detail he employs address two distinct risks: one from outsiders, and one from what he himself is building. He takes both seriously in private but acknowledges only one in public.
The week of the first attack, The New Yorker published a deep dive into Altman. Journalists Ronan Farrow and Andrew Marantz interviewed over 100 sources, and the central thesis distilled into a single word: untrustworthy. The article quoted a former OpenAI board member calling Altman an "antisocial personality," "untethered to truth." Multiple ex-colleagues described his shifting positions on AI safety, often reshaped as power structures required.
In his blog response, Altman admitted to a "conflict-avoidant" tendency. He had crafted the public narrative of "AI as an existential threat" as a tool for fundraising and regulatory maneuvering. The tool slipped from his grip, made its circuit, and came crashing back at his door.