Thoughts
The big news in markets this week was this report from Citrini Research & Alap Shah that apparently crashed the markets and led to a lot of debate in our office. It lays out a "fast take-off" scenario for AI, which causes mass layoffs of white-collar employees as AI replaces intelligence work and sets off an economic downward spiral as demand collapses.
It should have been clear all along that a single GPU cluster in North Dakota generating the output previously attributed to 10,000 white-collar workers in midtown Manhattan is more economic pandemic than economic panacea. The velocity of money flatlined. The human-centric consumer economy, 70% of GDP at the time, withered. We probably could have figured this out sooner if we just asked how much money machines spend on discretionary goods. (Hint: it’s zero.)
AI capabilities improved, companies needed fewer workers, white collar layoffs increased, displaced workers spent less, margin pressure pushed firms to invest more in AI, AI capabilities improved…
It was a self-reinforcing feedback loop with no natural brake. The human intelligence displacement spiral. White-collar workers saw their earnings power (and, rationally, their spending) structurally impaired. Their incomes were the bedrock of the $13 trillion mortgage market - forcing underwriters to reassess whether prime mortgages are still money good.
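The spiral described above can be sketched as a toy simulation. All parameters here are invented for illustration; this renders the report's mechanism, not an endorsement of it: layoffs scale with AI capability, and reinvestment compounds capability each round.

```python
# Toy sketch of the report's displacement spiral (all numbers invented).
# Each round: layoffs scale with AI capability; lower payrolls mean lower
# spending; margin pressure drives reinvestment, which raises capability.

def displacement_spiral(rounds=6, workers=100.0, capability=1.0,
                        reinvest=0.3):
    """Return total spending (proxied by remaining payrolls) per round."""
    spending_path = []
    for _ in range(rounds):
        displaced = min(workers, capability)  # layoffs this round
        workers -= displaced
        spending_path.append(workers)         # spending tracks payrolls
        capability *= (1 + reinvest)          # reinvestment compounds
    return spending_path

path = displacement_spiral()
# Spending shrinks at an accelerating rate: the loop has no internal brake.
```

Under these made-up parameters, spending falls every round and the declines accelerate as capability compounds, which is the "no natural brake" claim in miniature.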
The report found many believers in markets, but I find myself on the skeptical side, much more persuaded by the many pushback articles grounded in conventional economic theory. And they came from many sources.
Here's Tyler Cowen in his cryptic style. Here's Zvi with the inverse. And finally here's Citadel.
And lastly, here's Claude summarizing it all and adding its own perspective:
Why Citrini's Scenario Doesn't Add Up
The piece is an excellent thought experiment and a useful sector-level vulnerability map. The macro conclusion — that AI abundance causes a demand collapse and systemic crisis — is built on a fundamental accounting error.
The Core Contradiction: Every Loss Is Someone Else's Gain
The entire scenario rests on a demand collapse: AI replaces workers, workers stop spending, the economy spirals. But the same force destroying jobs is also destroying prices. If a Claude agent does the work of a $180K PM for $200/month, then everything that PM helped produce also gets dramatically cheaper. The piece catalogs agents slashing insurance premiums, SaaS costs, delivery fees, real estate commissions, and interchange — then claims displaced workers can't afford things. Which things? The things that just got 80% cheaper?
Every corporate revenue loss in the piece is a gain on the other side. ServiceNow loses $500K in licenses — that's $500K freed for the client. DoorDash loses its 30% take rate — drivers earn more, consumers pay less. Real estate commissions drop from 6% to 1% — that's a 5% stimulus to every home purchase. SaaS fees are a tax on business. That tax went down.
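The transfer accounting in the paragraph above can be made explicit with a toy ledger. The ServiceNow $500K figure is from the piece; the other amounts are invented for illustration:

```python
# Toy ledger for "every loss is someone else's gain".
# ServiceNow's $500K comes from the piece; other figures are invented.
transfers = [
    # (incumbent revenue lost, dollars; same dollars gained by counterparty)
    ("ServiceNow licenses",  500_000),  # freed up for the client
    ("DoorDash take rate",   300_000),  # split between drivers and consumers
    ("Realtor commissions",  250_000),  # 6% -> 1% on home purchases
]

incumbent_losses = sum(amount for _, amount in transfers)
counterparty_gains = incumbent_losses  # the same dollars, other side

# First-order, aggregate spending power is unchanged:
# the money moves through different channels, it doesn't vanish.
assert incumbent_losses == counterparty_gains
```

The point of the sketch is only the accounting identity: tracking one side of each transaction shows a collapse, while tracking both sides shows a transfer from incumbents to their customers.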
Meanwhile, the piece describes NVIDIA posting records, hyperscalers spending $150-200B/quarter, AI companies thriving. Someone is paying for all of that. You cannot have booming AI revenues and an economy where nobody is spending. The money doesn't vanish — it circulates through different channels. The piece tracks one side of every transaction and ignores the other.
The Timeline Is Physically Impossible
The scenario requires mass deployment across every major industry in ~30 months. Enterprise procurement cycles, organizational restructuring, regulatory review, and infrastructure buildout don't move at this speed. More critically: there isn't enough compute on Earth to run always-on agents for every American consumer while simultaneously replacing most white-collar work by 2028. The compute constraint alone paces everything and makes this timeline a non-starter. Even clearly superior technologies (electricity, the internet) took decades to restructure economies.
Zero Policy Response for Two Years Is Ahistorical
The US deployed trillions within weeks during COVID. Automatic stabilizers activate immediately when unemployment rises. The piece treats political difficulty in designing optimal long-term policy as equivalent to no response at all.
More importantly: in a world with booming real output and deflation, aggressive fiscal stimulus is straightforwardly correct — you have too much output and not enough demand, so you print money without inflationary risk. During COVID, stimulus risked inflation because real production was down. Here, real production is exploding. This is the easiest macro policy environment imaginable.
What's Actually Useful in the Piece
The sector-level analysis of which business models are vulnerable is genuinely good. SaaS pricing power, intermediation platforms, payments, insurance built on inertia, real estate brokerage — anything dependent on friction or information asymmetry is legitimately threatened. The private credit / insurance / reinsurance daisy chain is a real and underappreciated fragility.
The transition will be uneven. Specific sectors, geographies, and demographics will get hit hard, and the speed of disruption concentrated in high-earning white-collar cohorts could cause real dislocations even if the macro aggregates are fine. That's worth positioning around.
But the mechanism for how sectoral disruption becomes a macro demand collapse is where the analysis fails — because it treats every efficiency gain as pure destruction rather than a transfer from incumbents to consumers. Read it as a sector vulnerability map, not a macro forecast.
Going back to the title of my last post from Singapore, there's clearly been a vibe-shift when it comes to what the latest models can do. In early January only those focused on AI were feeling it; now it is going mainstream. But that's no reason to fall prey to half-baked thinking. Let Claude taste it and cook it for you.
Note: I haven't posted anything other than links here for the most part, so I didn't notice the bug in the code that messes up the formatting of quoted portions that are not in a bulleted list. The portion above wasn't formatting correctly without the bullets. A project for next weekend: get Claude to fix this.
Links
Mathematics in the Library of Babel: Speaking of vibe-shifts... Daniel Litt is a professor of mathematics at the University of Toronto.
I think I have been underrating the pace of model improvements. In March 2025 I made a bet with Tamay Besiroglu, cofounder of RL environment company Mechanize, that AI tools would not be able to autonomously produce papers I judge to be at a level comparable to that of the best few papers published in 2025, at comparable cost to human experts, by 2030. I gave him 3:1 odds at the time; I now expect to lose this bet.
-
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn’t work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.
It’s not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
Best Practices for Claude Code: TBF this looks like an influencer account but I am collecting these guides and this is one.
The Claude-Native Law Firm: Another one of the above.
Writing about Agentic Engineering Patterns: A more serious work...
I think of vibe coding using its original definition of coding where you pay no attention to the code at all, which today is often associated with non-programmers using LLMs to write code.
Agentic Engineering represents the other end of the scale: professional software engineers using coding agents to improve and accelerate their work by amplifying their existing expertise.
How will OpenAI compete?: Great read.
OpenAI has some big questions. It doesn’t have unique tech. It has a big user base, but with limited engagement and stickiness and no network effect. The incumbents have matched the tech and are leveraging their product and distribution. And a lot of the value and leverage will come from new experiences that haven’t been invented yet, and it can’t invent all of those itself. What’s the plan?
These AI Prompts Exposed My Biggest Blind Spots: More influencer content, but an interesting direction.