The layoffs will continue till we learn to use AI:
Code is an input.
Features are an output.
Users spending money on your product is an outcome.
But the truth is that these layoffs, even if they are not because AI is replacing you, and even if they are some form of AI-washing, are still because of AI. And these layoffs will continue till we learn to use AI. Till we learn to convert AI-tokens into outcomes and not just input. Till we learn to re-align the speed of "alignment" with the new speed of coding. And till we figure out, beyond our 2 good and 8 stupid ideas, 10 more ideas that we can chase with our increased productivity.
Till we figure out how the GDP of the world actually grows because of AI, we have to offset the $70 B (combined OAI/Ant enterprise revenue) of annual token spend by cutting some salaries. And till we figure out how to unblock each other faster, we can always be removed from the org chart itself.
The Case for Strategic Illegibility
But, and there is always a but, there's nuance to this that I can't stop thinking about. As companies race to become legible to AI, they are not just making their own businesses easier for agents and AI tools to navigate. They are also translating proprietary knowledge into a format AI tools can ingest, learn from, train on, and improve on, making those tools smarter.
And once those tools get smarter, they do not only serve you. They serve every other customer using the same vendor. The MCP integration that lets your agents act faster and go deeper also lets your playbook be reverse-engineered.
Populist backlash towards AI?:
Americans have been negative on social media for 10 years, and there has been no meaningful political action. And that's despite all the other hallmarks of backlash that people point to with AI---violent extremists (people forget there was a shooting at YouTube HQ), protests, etc.
"being able to create something useful for a specific person’s needs, without any fluff, in a single sitting, is just unreal"
Behind the Scenes: Hardening Firefox with Claude Mythos Preview (via):
Just a few months ago, AI-generated security bug reports to open source projects were mostly known for being unwanted slop. Dealing with reports that look plausibly correct but are wrong imposes an asymmetric cost on project maintainers: it’s cheap and easy to prompt an LLM to find a “problem” in code, but slow and expensive to respond to it.
It is difficult to overstate how much this dynamic changed for us over a few short months. This was due to a combination of two main factors. First, the models got a lot more capable. Second, we dramatically improved our techniques for harnessing these models — steering them, scaling them, and stacking them to generate large amounts of signal and filter out the noise.
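The "scaling and stacking" idea in that excerpt — run many cheap generator passes, then filter their output down to high-confidence signal — can be sketched as a consensus filter. This is not Mozilla's actual harness; the model calls below are stubbed with canned findings, and the function names and thresholds are hypothetical, purely to illustrate the generate-then-filter shape.

```python
# A minimal sketch of "scale and stack": several independent generator
# passes propose candidate findings over the same code, then a consensus
# filter keeps only findings that enough passes agree on. In a real
# harness each pass would be an LLM call; here they are stubbed.

from collections import Counter


def generate_candidates(code: str, n_passes: int = 5) -> list[str]:
    """Stub for n independent generator passes over the same code."""
    # Canned per-pass output standing in for model responses. Note the
    # noise: some findings appear in only one or two passes.
    findings = {
        0: ["off-by-one in parser", "use-after-free in cache"],
        1: ["use-after-free in cache"],
        2: ["use-after-free in cache", "unchecked return value"],
        3: ["off-by-one in parser", "use-after-free in cache"],
        4: ["use-after-free in cache"],
    }
    out: list[str] = []
    for i in range(n_passes):
        out.extend(findings.get(i, []))
    return out


def filter_by_consensus(candidates: list[str], min_votes: int = 3) -> list[str]:
    """Keep only findings reported by at least min_votes passes."""
    votes = Counter(candidates)
    return [finding for finding, count in votes.items() if count >= min_votes]


raw = generate_candidates("fn parse(...) { ... }")
print(filter_by_consensus(raw))  # only the finding most passes agree on survives
```

The design point is the asymmetry the excerpt describes: generating candidates is cheap, so you run many passes and let agreement (or a separate verifier pass) absorb the cost of telling plausible-but-wrong reports from real ones.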
Looks like the Mythos hype was real.
A few tips from Mr. Claude Code himself.