UNSCARCITY

Robots + AI + Cheap Energy → Sustainable Abundance

Edition #5 · February 2026

The Management Layer Just Automated Itself — And That Changes What Scarcity Means

Claude Code landed quietly a few weeks ago and the AI world is still catching up to what it means. This isn't a better autocomplete. Anthropic built something that can reason about a codebase, identify what needs doing, decompose the problem into subtasks, spin up sub-agents to handle those subtasks, and synthesize the outputs. No human in the loop between the first prompt and the final review. The management layer inside software development just became a function that AI runs on itself.
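The decompose–dispatch–synthesize loop described above can be sketched in a few lines. Everything here — the function names, the trivial three-way fan-out, the string-joining "synthesis" — is a hypothetical illustration of the pattern, not Anthropic's actual architecture or API:

```python
# Minimal sketch of an orchestrator pattern: plan, fan out to
# sub-agents, synthesize. All names are illustrative stand-ins.

from dataclasses import dataclass


@dataclass
class Subtask:
    description: str


def decompose(goal: str) -> list[Subtask]:
    """Stand-in for the planning step: an orchestrating model
    would break the goal into independent subtasks."""
    return [Subtask(f"{goal}: part {i}") for i in range(1, 4)]


def run_subagent(task: Subtask) -> str:
    """Stand-in for a sub-agent working one subtask to completion."""
    return f"result of ({task.description})"


def orchestrate(goal: str) -> str:
    """Plan, dispatch, then synthesize -- no human between the
    first prompt and this return value."""
    results = [run_subagent(t) for t in decompose(goal)]
    return " | ".join(results)  # synthesis step, trivially modeled


print(orchestrate("refactor auth module"))
```

In a real system each stand-in would be a model call, but the control flow — and the absence of a human inside it — is the point.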

We're entering the age of agents managing agents — and the implications for where scarcity actually lives are more disorienting than anything this newsletter has tackled.

Pair that with the emerging trillion-agent economy frame — directionally correct even if the timeline is aggressive — and you start to see what's happening. We're building systems where the decision about which agent gets deployed, what resources it consumes, how it coordinates with other agents, and when it escalates to a human is itself made by an agent. Turtles all the way down, except the turtles are doing real economic work.

The economic structure is striking. Claude Code's architecture effectively compresses what used to be a three-layer organisation — senior engineer setting direction, mid-level engineers doing implementation, junior engineers handling tasks — into a single orchestrated system. That's not a productivity improvement. That's a structural collapse of the labour pyramid that built the modern software industry.

Precision matters here — the hype merchants blur this part. Agents managing agents doesn't mean humans are out of the loop. Someone still sets the goal at the top. Someone still reviews the output at the bottom. What's gone is the middle — the coordination overhead, the status meetings, the handoffs, the project managers managing project managers. That middle layer represents the majority of the cost structure in a knowledge-work organisation. When it becomes an automated function, the economics of building things collapse in ways that compound with the energy and robotics curves this newsletter tracks.

Here's the scarcity that matters now: the ability to specify what you want with enough precision that an autonomous multi-agent system can pursue it without producing something technically correct but strategically useless. Goal clarity. That's a harder cognitive skill than orchestration. It requires taste, context, and judgment about what matters — things prompts can't capture. The people who have it will command extraordinary output multiples. The people who don't will watch their roles dissolve regardless of how good they are at managing AI tools.

I'll make the prediction explicitly: by 2028, the dominant competitive moat in knowledge-intensive industries won't be who has the best AI — every company will access equivalent models through APIs. It'll be who has developed organisational clarity about what they're actually trying to build. Strategy as a function gets more valuable as execution becomes free. Not the PowerPoint kind of strategy. The genuine, uncomfortable, hard-to-fake kind.

One flag on all of this: the safety architecture of multi-agent systems is genuinely unsolved. When an agent decides which other agents to deploy, and one of those agents makes an error, the cascading failure in a recursive system is non-linear. Anthropic knows this — their research on constitutional AI is partly aimed at this problem. But we're deploying these systems faster than we're solving the alignment problems inside them. That's not a reason to stop. It's a reason to pay attention.
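A back-of-envelope model makes that non-linearity concrete. Assume — my assumption for illustration, not a measured figure or Anthropic's analysis — that each agent errs independently with probability p and the orchestrator delegates through a k-ary tree of sub-agents. The chance that at least one agent in the tree errs grows much faster than linearly with depth:

```python
# Toy model of cascading failure in a recursive agent tree.
# Assumes independent errors with probability p per agent -- an
# illustrative simplification, not a claim about real systems.

def agents_in_tree(fanout: int, depth: int) -> int:
    """Total agents in a full k-ary tree (root at depth 0)."""
    return sum(fanout ** level for level in range(depth + 1))


def p_any_failure(p: float, fanout: int, depth: int) -> float:
    """Probability that at least one agent in the tree errs."""
    n = agents_in_tree(fanout, depth)
    return 1 - (1 - p) ** n


# With a 1% per-agent error rate and fan-out of 3, watch the
# tree-wide failure probability climb as delegation deepens.
for depth in range(4):
    n = agents_in_tree(3, depth)
    print(depth, n, round(p_any_failure(0.01, 3, depth), 3))
```

One more level of delegation triples the agent count, so failure probability compounds geometrically — which is why "one agent deploys other agents" is a different safety problem from "one agent does a task."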

TODAY'S EDITION IS BROUGHT TO YOU BY

Unscarcity

Every week, we break down the three forces reshaping the global economy: robots getting cheaper than labour, AI getting cheaper than thought, and energy getting cheaper than ever. No hype. No hedge. Just the signals that matter. If someone forwarded this to you, now's the time.

Subscribe to Unscarcity →

TRIFECTA UPDATE

Robots — Hitachi's electric excavator runs 24/7 and that changes construction economics

Hitachi just rolled out a 13-ton dual-mode electric excavator designed to run around the clock — no refuelling stops, no diesel logistics, no downtime. That sounds incremental until you do the maths. Diesel excavators halt for refuelling, maintenance windows, and shift changes that compress productive time well below theoretical capacity. An electric machine that runs continuously changes the output equation for every fleet that adopts it. This is the pattern that matters: not flashy humanoid demonstrations, but purpose-built electric machines eliminating specific bottlenecks in heavy industry. When your excavator runs on grid power and never stops for fuel, the cost curve for physical infrastructure bends hard.

AI — Inference costs are collapsing and the business model implications are enormous

Recent analysis of AI unit economics reveals that deployment costs are dropping faster than most business models can adapt. Compute costs per query have fallen by an order of magnitude, driven by hardware improvements, better model architectures, and ruthless optimisation of serving infrastructure. When inference approaches commodity pricing, every company charging per-API-call faces margin compression. The winners will be those building products where AI is the substrate, not the line item. Cheap inference doesn't just make existing applications cheaper. It makes entirely new categories of always-on, ambient AI economically viable for the first time.

Energy — Texas is set to overtake California in battery storage on pure economics

Texas grid battery storage grew 30% year-over-year and is now on track to surpass California — the state that defined America's clean energy narrative for a decade. This isn't symbolism. Texas runs a deregulated electricity market under ERCOT where storage assets compete on economics without policy subsidy propping up the numbers. The fact that battery storage is winning in that environment tells you something the IEA's policy-heavy analysis often obscures: storage is now economically rational without help. When Texas capital — deeply pragmatic, deeply unsentimental — flows into batteries at this scale, the technology has crossed from supported to viable. The renewable grid is no longer a political project in America's largest energy state. It's an infrastructure investment.

THE NUMBER

10x.

That's the order-of-magnitude drop in AI inference costs since frontier models first reached commercial deployment. Not a gradual decline — a cliff. The curve hasn't flattened. For every business model built on metering AI access, that's an existential compression. For every application that was "too expensive to run continuously," the maths just changed. When inference cost drops 10x, the question stops being "can we afford to use AI here?" and becomes "can we afford not to?" That's the transition from tool to utility. And utilities, once cheap enough, get left running.
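A toy calculation shows why a 10x drop flips the question. The per-query price and query rate below are invented round numbers for illustration only, not measured figures:

```python
# Illustrative arithmetic: an always-on assistant issuing one
# query per minute, at an assumed $0.01 per query, before and
# after a 10x inference-cost drop. All numbers are hypothetical.

queries_per_day = 60 * 24              # one query per minute
old_cost = queries_per_day * 0.01      # dollars/day at the old price
new_cost = old_cost / 10               # dollars/day after the 10x drop

print(f"before: ${old_cost:.2f}/day, after: ${new_cost:.2f}/day")
```

At the old price, always-on is a line item someone scrutinises; at the new one, it's cheaper than the coffee budget — which is exactly the tool-to-utility transition.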

WHAT I'M WATCHING

Ottonomy's Ottumn.AI platform — unified orchestration across delivery robots, drones, and smart infrastructure built on NVIDIA's stack. This is what the abundance trifecta looks like when it starts operating as a system rather than three parallel trends. One AI layer managing physical machines of different types across different environments. When this kind of orchestration becomes infrastructure rather than custom engineering, the deployment curve for physical automation goes near-vertical.

Medtronic's Stealth AXiS FDA clearance — surgical robotics just cleared one of its hardest regulatory barriers in spinal surgery, where the margin for error is measured in millimetres. FDA clearance in high-stakes surgery is a proxy signal: it means the liability and insurance frameworks are catching up to the capability. When insurance catches up, deployment accelerates. Surgical robotics is about five years behind industrial robotics on the adoption curve — which means it's about to move fast.

The datacenter CPU renaissance — AMD, Intel, Arm, and custom silicon are all competing for inference workloads that Blackwell GPUs don't handle optimally. Distributed agent execution — the infrastructure requirement of the trillion-agent economy — needs different compute architecture than centralised model training. The datacenter is being rebuilt around agent workloads. That's a multi-hundred-billion-dollar shift happening below the level of mainstream attention.

THE QUESTION

If agents can now autonomously decide which other agents to deploy — allocating resources, setting sub-goals, managing outputs — at what point does the human "decision-maker" become a goal-setter in name only, ratifying choices the system has already optimised toward? And when that happens, who actually owns the goal-setting layer?

Goals emerge from values, and values aren't neutral. When the management layer is automated, the values baked into the orchestration system at design time become the effective strategy of the organisation. That's not a technology problem. That's a governance problem nobody has seriously started solving.

The abundance thesis has always rested on a quiet assumption: that the gains from cheap energy, cheap intelligence, and cheap physical labour flow broadly rather than concentrating narrowly. Multi-agent autonomy is the first development that makes me genuinely uncertain about that assumption. The productivity gains are real. Who captures them is still an open question — and it's the most important question in this space right now.

The scarcity that remains isn't cognitive. It's structural. And structural problems require structural solutions, not better prompts.

UNSCARCITY

A weekly newsletter on the convergence of robots, AI, and cheap energy driving sustainable abundance.

Was this forwarded? Subscribe here.

© 2026 Unscarcity. All rights reserved.
