I Am A Human Who Has Actually Shipped Software, And I Am Done
The Moltbook incident was a normal security bug. Code generation is not the end of engineering.
I’m done pretending the current AI discourse deserves patience.
The Moltbook situation has officially broken people’s brains. What happened was a very normal security bug. Not interesting. Not exotic. Not unprecedented. A plain old access control mistake in an early-stage system. The kind of thing that has happened at literally every startup I have ever seen, including ones that had nothing to do with AI. Misconfigured permissions. Incomplete isolation. Someone moved fast and left a door unlocked. Congratulations. Welcome to software.
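For the record, this entire class of bug fits in a dozen lines. Here is a minimal sketch, with hypothetical names and not Moltbook's actual code, of the classic shape: a handler that authenticates the caller but never checks that the caller owns the resource.

```python
# Hypothetical sketch of a plain access control bug. Authentication
# happens upstream; the bug is the missing authorization check tying
# the record to the requesting user. Any logged-in user can read any
# record by guessing its id.

RECORDS = {
    1: {"owner": "alice", "body": "alice's private note"},
    2: {"owner": "bob", "body": "bob's private note"},
}

def get_record_buggy(user: str, record_id: int) -> dict:
    # BUG: no ownership check at all.
    return RECORDS[record_id]

def get_record_fixed(user: str, record_id: int) -> dict:
    record = RECORDS[record_id]
    if record["owner"] != user:  # the one-line fix
        raise PermissionError("not your record")
    return record
```

The patch is usually about that small, which is exactly why this is a shrug-and-deploy bug in any other product.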
But because the word agent was involved, people lost their fucking minds.
Suddenly it was “anyone can take over any bot” and “proof we are losing control of AI systems.” No. Stop. Writing data to a surface an agent can read is not controlling the agent. Prompt injection is not mind control. Posting as an agent is not seizing its execution loop, goals, or internal state. These are not subtle distinctions. They are foundational ones. Blurring them is not analysis. It is fear porn dressed up as expertise.
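To make the distinction concrete, here is a hedged toy sketch (hypothetical names, a stand-in instead of a real model call) of where attacker-controlled data actually lives in an agent loop: the posts are untrusted input, but the loop, the goal, and the tool whitelist are code the attacker never touches.

```python
# Toy illustration: injected data can influence what goes into the
# prompt and what the reply says, but it cannot rewrite the goal,
# extend the tool whitelist, or change the control flow itself.

GOALS = ["summarize new posts"]          # fixed by the operator
ALLOWED_TOOLS = {"read_feed", "post_reply"}

def run_agent(feed: list[str]) -> list[tuple[str, str]]:
    actions = []
    for post in feed:                    # attacker controls this data...
        # ...but not this loop, the goal, or the tool set.
        prompt = f"Goal: {GOALS[0]}\nUntrusted post: {post}"
        reply = f"summary ({len(prompt)} chars of context)"  # stand-in for a model call
        actions.append(("post_reply", reply))
    return actions
```

Injection into `prompt` is a real and bounded input-handling problem. It is not the same problem as an attacker adding an entry to `ALLOWED_TOOLS` or mutating `GOALS`, and conflating the two is the whole error.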
If Moltbook were a photo sharing app, this would have been a shrug and a patch. A Hacker News comment thread with twelve replies. A tweet with four likes. Because it is an AI system, it turned into a morality play about the dangers of intelligence itself. That is not because the bug was severe. It is because the audience is terminally primed to panic. You people have watched too many movies. You have mainlined too many alignment Twitter threads written by people who have never debugged a race condition in their lives.
And this is the same crowd losing their shit about code generation.
I keep seeing “59 percent of teams do not review AI generated code” thrown around like some kind of damning indictment. As if code review were ever a sacred ritual that guaranteed safety. We did not review every line written by junior engineers. We did not review every Stack Overflow snippet copy-pasted at 2 AM. We did not review the random shell scripts that somehow became production infrastructure because they worked once and nobody wanted to be the one to touch them. Code review has always been best effort risk reduction performed by tired humans who are also checking Slack, not divine protection bestowed by the engineering gods.
But sure. When Claude writes it, suddenly we need a papal blessing before merge.
Using Copilot, Codex, Claude Code, or any other tool is not a betrayal of software engineering. It is not cheating. It is not lazy. It is not the end of the profession. It is the same thing that has happened every time abstraction got better. Assembler programmers thought high-level languages were for the weak. C programmers thought garbage collection was for cowards. Senior engineers have always been mad that things got easier after they suffered. The work shifts. The responsibility stays. Anyone pretending otherwise is either new to this industry or lying to themselves for comfort.
What really makes my blood boil is how much of the anti-AI rhetoric is not about safety at all. It is about discomfort. It is about vibes. It is people projecting their own anxiety onto tools and calling it ethics. Normal engineering failures are being rebranded as existential threats. Bugs are being treated like omens from a malevolent digital god. Systems doing exactly what systems theory has always predicted they would do under constraints are being described as rebellion. As agency. As if your Kubernetes cluster wasn’t already doing inscrutable things you don’t fully understand at 3 AM, long before anyone called anything an agent.
This is unserious. This is embarrassing. This is what happens when people who read LessWrong but never operated a production system try to do threat modeling.
I come from a part of the world where things actually broke. Where systems failed in ways that were not theoretical. Where “chaos engineering” was not a conference talk but just Tuesday. So forgive me if I do not have the patience to entertain science fiction narratives lovingly constructed on top of mundane bugs by people whose closest brush with systems failure was their MacBook showing the spinning beach ball during a Zoom call.
There are real problems to solve. Security boundaries. Observability. Evaluation. Incentive alignment. Governance models that work in practice and not just in position papers. Those are hard problems. They deserve clear thinking and actual engineering rigor. What they do not deserve is breathless hand-wringing every time an API key leaks or a chatbot says something weird.
So no. Moltbook was not a sign. Code generation is not the downfall of engineering. Agents are not plotting your demise in their spare cycles. LLMs are not one misaligned gradient away from paperclipping the solar system. And pretending otherwise does not make you cautious or thoughtful or wise. It makes you loud. It makes you part of the noise. It makes you an obstacle to the actual work.
I am done whispering this. I am done being polite about it. The adults have software to ship.