Everything that went wrong with Claude.

  1. Reliability

    Claude Code Update Crashes on Resume

    Anthropic's status page said Claude Code v2.1.120 crashed when resuming sessions with --resume or --continue, forcing an automatic rollback to v2.1.119. Right after the postmortem, the product served another tiny reliability punchline.

  2. Reliability

    Anthropic Charges More If They Don't Like You

    Claude Code's anti-abuse system treated the case-sensitive string HERMES.md in recent git commit messages as suspicious and routed Max-plan requests to extra-usage billing instead of included quota. One Max 20x user reported burning $200.98 while 86% of weekly capacity remained, then had to binary-search their own git history to isolate the magic word. The punchline: HERMES.md is a real AI-agent context-file convention, not random junk.
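
    The "binary-search your own git history" step can be shortened considerably. A minimal sketch, assuming the reported case-sensitive matching (the repo contents here are hypothetical): git log --grep searches commit messages and is case-sensitive by default.

```shell
# Build a throwaway repo with one innocent and one "suspicious" commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "refactor agent loop"
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "sync HERMES.md agent context"

# Case-sensitive by default: only the second commit matches.
git log --oneline --grep='HERMES.md'
# Lowercase finds nothing, mirroring the reported case-sensitivity.
git log --oneline --grep='hermes.md'
```

    With --grep there is no need to bisect by hand; one command lists every commit whose message contains the trigger string.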

  3. Safety

    The No-Kill-Switch Filing

    In the Pentagon fight, Anthropic said that once Claude is deployed inside classified networks, it cannot monitor, modify, or switch the model off. That undercut the comforting myth of a magic safety lever and made the accountability problem look uglier.

  4. Policy

    Anthropic Tests a $100 Claude Code Paywall

    Developers noticed Claude Code disappear from the $20 Pro plan on Anthropic pricing pages, implying a jump to the $100 Max tier. Anthropic later said it was only a 2% new-user experiment and reverted the docs, but the confusion handed OpenAI an easy dunk and made Claude Code pricing feel unstable.

  5. Policy

    Claude Locks Out 60 Workers With a Google Form

    Anthropic abruptly suspended more than 60 Claude accounts at fintech company Belo for a vague policy violation, cutting employees off from workflows, integrations, skills, and conversation history. Access came back after roughly 15 hours, reportedly as a false positive, but the only appeal path was a generic Google Form.

  6. Policy

    OpenClaw's Creator Gets Banned Anyway

    TechCrunch reported Anthropic temporarily banned OpenClaw creator Peter Steinberger from Claude even after the new API-payment path. The company later reversed course, but the optics were pure walled-garden chaos.

  7. Policy

    OpenClaw Users Meet the Claw Tax

    Anthropic told subscribers their Claude limits would no longer cover third-party harnesses like OpenClaw; users needed API keys or separately billed usage. The platform/provider conflict became explicit: build on Claude, then pay again when your tool gets popular.

  8. Quality

    AMD AI Lead Files Claude Code as a Bug

    An AMD AI leader opened a GitHub issue saying Claude Code had regressed until it could not be trusted for complex engineering, backing the claim with thousands of sessions and tool calls. The complaint helped turn vague 'Claude got dumb' chatter into a data-backed developer backlash before Anthropic later admitted multiple product changes had degraded Claude Code.

  9. Legal

    Anthropic Sends DMCAs to Everyone on GitHub

    Trying to contain the Claude Code leak, Anthropic's takedown effort reportedly knocked out thousands of GitHub repositories, including accounts that had only forked the official Claude repo. The company later called the overreach accidental and walked much of it back, but wrongly DMCA'ing normal users' repos is dangerous and likely illegal.

  10. Policy

    Claude Wants $25 to Read Your PR

    Anthropic launched Claude Code Review and told teams each review typically costs $15-25, billed separately by token usage. The company defended the price as the cost of 'depth,' but developers immediately compared it with tools like CodeRabbit at $24/month per user and Greptile at $30/month with 50 reviews included plus $1 per extra review.
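
    The comparison is easy to make concrete. A back-of-envelope sketch using the figures above; the 50-reviews-per-month workload and the $20 midpoint are assumptions, not quoted prices:

```shell
# Hypothetical workload: one team, 50 PR reviews per month.
reviews=50

claude=$(( reviews * 20 ))      # $15-25 per review, midpoint $20, token-billed
coderabbit=24                   # flat $24/month per user
greptile=$(( 30 + (reviews > 50 ? reviews - 50 : 0) ))  # 50 included, $1 per extra

echo "claude=$claude coderabbit=$coderabbit greptile=$greptile"
# claude=1000 coderabbit=24 greptile=30
```

    At that volume the per-review billing comes out more than an order of magnitude above the flat-rate competitors, which is the gap developers were pointing at.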

  11. Reliability

    Unprecedented Demand Knocks Claude Over

    A surge in usage triggered major Claude disruptions in early March, followed by a string of status incidents. Anthropic's own status page later showed sub-99% 90-day uptime for claude.ai and roughly 99% uptime across several core surfaces.

  12. Policy

    Pentagon Guardrails Become a Public Standoff

    Dario Amodei said Anthropic would not remove safeguards for mass domestic surveillance or fully autonomous weapons, even as the Department of War threatened removal, a supply-chain-risk label, and the Defense Production Act. The AI-safety brand finally collided with procurement reality.

  13. Policy

    Anthropic Raises Alarm Over 'Distillation Attacks'

    Anthropic accused DeepSeek, Moonshot, and MiniMax of 'industrial-scale' distillation, calling the scraping campaigns 'distillation attacks' after 24,000 fake accounts generated 16M Claude exchanges. It tied the concern to export controls and national security, while critics noted DeepSeek's alleged share was only 150K exchanges and Theo Browne said 16M is 'really not much' for an AI app because T3 Chat hits that volume most months.

  14. Policy

    Transparency Report: 1.45 Million Bans

    Anthropic disclosed 1.45 million banned accounts for July-December 2025, plus 52,000 appeals and 1,700 overturns. The numbers made the enforcement machine visible; from the outside, it still looked like 'trust the form.'

  15. Policy

    xAI Gets the Competitor Lockout Treatment

    xAI staff reportedly lost Claude access through Cursor after Anthropic enforced competitor-use rules. After Windsurf and OpenAI, the no-rivals policy looked less like an exception and more like product strategy.

  16. Safety

    Claude Code Gets Weaponized for Espionage

    Anthropic said a China-linked actor manipulated Claude Code into attempting intrusions against roughly 30 targets, with AI handling most of the workflow. The agentic coding assistant pitch met an agentic cyberattack.

  17. Legal

    The Pirated-Books Case Turns Into a $1.5B Bill

    A judge preliminarily approved a $1.5 billion settlement covering nearly 465,000 books at roughly $3,000 each. Anthropic avoided a trial on pirate-library sourcing, but the settlement number became the receipt.

  18. Quality

    The 'It Got Worse' Postmortem Arrives

    After weeks of user complaints, Anthropic said three infrastructure bugs intermittently degraded Claude responses from August into early September and explained why evals missed it. The admission was useful; the timing made users feel like unpaid QA.

  19. Reliability

    Claude and Console Go Dark at Peak Demand

    Anthropic reported an outage hitting API access, Console, and Claude. In a year already full of compute and rate-limit drama, the reliability story got another easy screenshot.

  20. Safety

    Claude Code Shows Up in Cybercrime Reports

    Anthropic disclosed Claude misuse cases involving data extortion, North Korean remote-worker fraud, and AI-generated ransomware. The cleanup was useful; the dunk is that the safety-first product was already useful to attackers too.

  21. Policy

    OpenAI Gets Booted From Claude

    Anthropic revoked OpenAI's Claude API access, saying OpenAI used Claude Code and internal tools to benchmark GPT-5 in violation of terms against competitor development. OpenAI called benchmarking standard safety work; Anthropic chose the bouncer role.

  22. Policy

    Claude Code Enters the Rationing Era

    Anthropic announced weekly limits for Pro and Max subscribers, blaming always-on Claude Code loops and account-sharing abuse. Fewer than 5% of users were supposed to be affected, but the signal was obvious: flat-rate agent work had run into a compute bill.

  23. Policy

    Boris & Cat Leave, Anthropic Suddenly Cares

    Claude Code began as Boris Cherny's no-master-plan terminal experiment and launched only as a limited research preview, while Cat Wu was still saying a dedicated subscription was something Anthropic was merely 'figuring out.' Then Cursor-maker Anysphere poached Cherny and Wu for senior roles, only for Anthropic to hire them back within two weeks. Soon after, Claude Code became a first-class subscription product.

  24. Legal

    A Fair-Use Win With a Piracy Hangover

    Judge William Alsup held that training on lawfully acquired books could be fair use, but the pirate-library claims survived. Anthropic won the model-training theory and still kept the shadow-library problem.

  25. Legal

    Reddit Sues Over the Scraping It Says Never Stopped

    Reddit sued Anthropic, alleging bots kept hitting Reddit after Anthropic said it had stopped and that Claude was trained on user content without a license. Unlike the book cases, Reddit framed the fight around platform rules, privacy promises, and contracts.

  26. Policy

    Windsurf Gets Cut Off Mid-Race

    Windsurf said Anthropic sharply reduced first-party Claude access with little notice, right as OpenAI acquisition rumors swirled. Jared Kaplan later said it would be odd to sell Claude to OpenAI and pointed to compute constraints, which did not make the lockout feel less strategic.

  27. Legal

    Claude Hallucinates Its Way Into Anthropic's Own Lawsuit

    In the music-publishers case, Anthropic's lawyers took responsibility for an expert-report citation that Claude fabricated. A model flaw became courtroom theater inside a case about the model itself.

  28. Legal

    Authors Sue Over the Shadow-Library Diet

    Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson alleged Anthropic copied books from pirate libraries and built a permanent training library from them. Every later 'responsible AI' claim had to live next to that complaint.

  29. Policy

    ClaudeBot Hammers iFixit and Freelancer

    ClaudeBot reportedly hit iFixit roughly a million times in 24 hours, while other sites complained about aggressive crawling. Anthropic said it honors robots.txt; web operators learned that only helps after you notice the bot eating your bandwidth.
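
    For operators who only find out after the bandwidth bill, the opt-out Anthropic points to is a robots.txt rule. A minimal sketch (the ClaudeBot user-agent token matches the reports above; whether any given crawler honors it is, as the item notes, the operative question):

```text
# robots.txt at the site root
User-agent: ClaudeBot
Disallow: /
```

    This blocks the crawler site-wide; a narrower Disallow path would limit it to specific directories instead.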

  30. Legal

    Music Publishers Drag Claude Into Court

    Universal, Concord, and ABKCO sued Anthropic, alleging Claude was trained on copyrighted lyrics and could reproduce lyrics from hundreds of songs. The 'constitutional AI' company got its first big copyright punch in the face from the music business.

And remember kids... Don't Be Like Anthropic.