YWR GP: The Mispriced Moat
Another deep analysis from Pancras. Two things:
I would extend the NVDA/TSMC/ASML call to memory (SK Hynix, Samsung).
What Pancras Beekenkamp is saying about the internal fabric of how companies manage their AIs - the prompts, the tasks, the datasets, the ‘Master Prompt’ - is 100% right on the money, and firms are only starting to realise this.
Now over to Pancras.
*Strictly my personal views only.*
Those who have followed the Sovereign Index series will recognise the central thread of this note before I announce it. Beginning in August 2025 with AI Sovereignty - the question of whose anthropology you buy when you procure a mind - and running through Capital in the Age of Enterprise AI, IT Spaghetti, the Marlboro Liability, the Agentic Dilemma, and most recently the Innovator’s Dilemma in January 2026, the underlying inquiry has been consistent: who owns the reasoning capacity of the firm, and under what conditions does that ownership become sovereign rather than merely licensed? That series examined the problem from the inside - the institutional will, the legacy debt, the legal exposure, the governance architecture. This note examines the same territory from the outside: through the lens of a market that briefly, and rather noisily, decided to price the answer.
The fortnight at the start of February 2026 will occupy at least a footnote in the history of enterprise software investing. In roughly two trading sessions, approximately USD285bn in market capitalisation was wiped from the global software sector, triggered by Anthropic’s release of Claude Cowork and a cascade of earnings calls that failed to inspire confidence. Piper Sandler downgraded Adobe, Freshworks, and Vertex in a single morning note, its analyst Billy Fitzsimmons warning that ‘seat-compression and vibe coding narratives could set a ceiling on multiples.’ Jefferies described software as potentially ‘the next print media.’ The iShares Expanded Tech-Software ETF fell into a technical bear market, and price-to-sales ratios compressed from roughly 9x to 6x - levels unseen, as Market Minute noted, since the mid-2010s.
There is a useful frame for interpreting this behaviour, one I have been developing through the Sovereign Index series and which I want to make explicit here. For most of the past forty years, the operational logic of global finance was governed by Information Arbitrage - the ability to acquire, transmit, and act upon data marginally faster or more comprehensively than a competitor. The Golden Copy and transmission speed were the primary sources of edge. Today, as Satya Nadella, CEO of Microsoft, observed in his fireside conversation with Larry Fink, CEO of BlackRock, at Davos in January, data and speed have been arbitraged into abundance. The new scarcity is synthesis. Cognitive Arbitrage - the capacity to metabolise vast quantities of unstructured information into actionable judgement - is the premium activity. The February panic was a market performing Information Arbitrage: it acted swiftly on a surface-level narrative. The question this note poses is what a more demanding synthesis would reveal instead.
The intellectual ground had been prepared by two essays of unusual candour. In Machines of Loving Grace, published in October 2024, Dario Amodei, CEO of Anthropic, sketched a civilisation that had navigated the transition to powerful AI successfully - compressing, in his phrase, a century of biological and scientific progress into a decade. His January 2026 essay, The Adolescence of Technology, turned to the passage itself: the rite of passage in which humanity acquires unimaginable power before it has developed the wisdom to govern it. Amodei’s central concern - that the genuine bottleneck is not chips but interpretability, the ability to interrogate why a model reached its conclusion - maps directly onto the governance anxiety at the heart of the Sovereign Index: without interpretability, the sovereign firm delegates its cognitive arbitrage to a mind it cannot interrogate. The market, in its February haste, did not pause on that distinction.
What follows proceeds in five movements. First, I examine why the ‘vibe coding kills SaaS’ thesis is, for all its surface plausibility, analytically thin - not because the disruption is unreal, but because it systematically misidentifies where software value resides. Second, I explore the revised hierarchy of competitive moats, asking which have genuinely weakened and which have paradoxically strengthened in the agentic era, including where Palantir’s orchestration model and Anthropic’s own Model Context Protocol fit within that map. Third, I turn to where value is accumulating in the new stack, including the art of prompting toward synthesis rather than mere retrieval. Fourth - the dimension the market has most conspicuously missed - I examine the depreciation illusion embedded in the AI infrastructure boom: Nvidia’s accelerating hardware cadence, its consequences for hyperscaler earnings quality, and the unexpected beneficiaries further up the supply chain. Fifth, I consider what the private credit industry, which has committed hundreds of billions to data centre assets, makes of a hardware obsolescence schedule it did not fully price. The conclusion attempts a topology of where cognitive advantage actually resides.
The Poverty of the Surface-Level Thesis
Markets process information imperfectly at the best of times, and with particular imprecision during moments of rapid technological change. By early 2025, 92% of U.S. developers were reportedly using AI coding tools daily; by 2026, an estimated 41% of all global code is AI-generated. The natural inference - that the cost of software production has collapsed, and therefore the premium one pays for SaaS businesses must collapse with it - is the kind of crisp, legible story that moves institutional capital quickly. It is also, in important respects, wrong.
The phenomenon driving the panic is vibe coding: the shift from deductive programming, in which developers manually construct syntax and logic, to abductive programming, in which large language models are steered toward likely solutions through natural language. The efficiency gains are genuine. Tasks that once consumed 1,000 hours of manual engineering can now be completed in roughly 200 hours. Tools like Windsurf and Cursor can refactor fifty-file systems through a single prompt. Replit Agent 3’s Max Autonomy Mode handles database migrations unsupervised for over 200 minutes. None of this is fiction.
What is fiction is the inference drawn from it. The case for the SaaS collapse rests on a conflation of two activities that bear only a superficial resemblance: building a version-one prototype and maintaining a production-grade enterprise system. As Jason Lemkin observed at Saastr, nobody is building a homegrown CRM in Replit to replace their Salesforce instance. The initial coding of a software product represents roughly 2% of the actual work involved over its operational life - the remainder being scaling, security audits, regulatory compliance, and the patient accumulation of institutional integration. The market was pricing as though the 2% had become free, and the 98% had therefore become optional.
Vibe coding produces what practitioners have taken to calling ‘backend chaos’: functional at demo, brittle under load, vulnerable to SQL injection and littered with hard-coded API keys. Industry benchmarks place maintenance at between 50% and 80% of a software system’s total cost of ownership over its lifecycle, a proportion that rises when the initial codebase was generated probabilistically rather than architected deliberately. AI-generated code inflates those maintenance costs further still - engineers spend considerable time debugging hallucinations they did not introduce and cannot easily locate, producing ongoing upkeep that can run to three times the original development cost. The panic narrative never engaged with this arithmetic. It stopped at the headline and moved on.
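The arithmetic the panic skipped can be made explicit. A minimal sketch, using the figures cited above as assumptions (maintenance at roughly 70% of lifetime cost for a hand-built system, a 5x reduction in build hours for vibe coding, and upkeep on AI-generated code running at roughly 3x the original development cost):

```python
# Illustrative total-cost-of-ownership arithmetic. All figures are
# assumptions drawn from the note, not measured data.

def lifecycle_cost(initial_build: float, maint_share: float) -> float:
    """TCO when maintenance is `maint_share` of the total.
    If maintenance is 70% of TCO, the build is the other 30%,
    so TCO = initial_build / (1 - maint_share)."""
    return initial_build / (1.0 - maint_share)

# Hand-built system: build costs 1.0 unit; maintenance ~70% of lifetime cost.
tco_manual = lifecycle_cost(1.0, 0.70)        # ~3.33 units

# Vibe-coded system: build is 5x cheaper (1,000h -> 200h), but upkeep runs
# at roughly 3x the *manual* development cost.
build_ai = 0.2
maint_ai = 3.0 * 1.0
tco_ai = build_ai + maint_ai                  # 3.2 units

# The headline 80% build saving shrinks to a few percent of lifetime cost.
saving = 1 - tco_ai / tco_manual
```

Under these assumptions the 80% reduction in build cost translates into a saving of only a few percent over the system’s life, which is the gap between the headline and the economics.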
There is something almost instructive about the speed of the February repricing. A market that spent the previous decade rewarding recurring revenue and switching costs suddenly decided, in the course of a fortnight, that neither of those properties mattered. The question that cognitive synthesis would ask - and that information arbitrage does not pause to consider - is whether the disruption is uniform across the software landscape, or whether it is, in fact, highly targeted. That question leads directly to the moat.
The Moat Hierarchy, Revisited
The hierarchy of competitive advantage in enterprise software has not been destroyed by agentic AI. It has been reorganised, and the reorganisation is less flattering to some categories than others - which is a very different proposition from the uniform crisis the February selloff priced.
At the most exposed position sits what might be called the UI moat: businesses whose primary differentiation was a well-designed interface layered atop a relatively undifferentiated data structure. Point solutions - invoicing tools, project management dashboards, basic CRMs that enterprises adopted during the best-of-breed decade - are genuinely at risk. AI agents do not require a conventional interface; they read from and write to the data layer directly. Vibe coding allows a developer to build a custom internal replacement for a USD100k-per-year SaaS subscription in a matter of weeks for USD10k to USD20k. The market’s instinct to reprice these businesses was defensible. Its failure was applying that instinct indiscriminately.
What the selloff failed to distinguish is the durable position of the System of Record. Platforms like Salesforce and SAP have accumulated, over decades, something that cannot be replicated overnight: a dense web of integrations with hundreds of tools, tens of thousands of features refined through iterative enterprise feedback, and an institutional memory that is simultaneously data and process. Agentic AI has, if anything, deepened the centrality of these platforms. Agents require a foundation to function; they must read from an authoritative source of truth and write their results back to it. As the interface layer thins and autonomous workflows multiply, the question of which system owns the canonical record of enterprise reality becomes more consequential, not less. The System of Record is not the victim of the agentic era - it is the substrate on which the agentic era runs.
Above the UI moat and the SOR sits the regulatory and compliance moat, which has strengthened considerably. In healthcare, finance, and legal services, software is not merely a productivity instrument - it is an accountability infrastructure. Platforms like Abridge in clinical note-taking and Harvey in legal document drafting draw their defensibility from domain-specific training on proprietary datasets, hard-coded clinical safety guardrails, and the ability to produce outputs that are auditable and traceable to verifiable sources. General-purpose AI cannot replicate these properties by definition: the proprietary data does not exist on the public internet, and the accountability framework is not a feature that can be added later.
At the apex of the revised hierarchy sits the orchestration and governance layer - the infrastructure that connects disparate systems of record into a unified intelligence mesh. Palantir’s performance during the February selloff is worth dwelling on; while the sector was being hammered, the company’s stock rose on a strong fourth-quarter report, because it has built precisely this kind of infrastructure - integrating and operationalising proprietary government and commercial data through what its management calls architecting the mesh. The CIO transformation I described in the Innovator’s Dilemma note is the institutional version of the same insight: technology leadership has shifted from buying licences to designing the reasoning architecture.
There is a technical dimension to this that connects directly to the Agentic Dilemma of December 2025. Anthropic’s Model Context Protocol - the emerging standard governing how AI agents read from and write to external systems - is now the plumbing of the agentic economy. The dilemma I described then remains unresolved: to allow agentic access via MCP is to risk becoming a dumb pipe; to disallow it is to court irrelevance. Palantir’s answer is to embed itself as the ontological layer - governing how agents interpret data rather than merely transmitting it. The Walled Garden, with its Master Prompt as institutional constitution, is another. What both share is the recognition that the premium in the agentic era accrues not to those who build the fastest pipe, but to those who own the meaning attached to what flows through it. That is the competitive logic the February panic never priced.
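To make the ‘dumb pipe versus meaning layer’ distinction concrete: MCP tools are declared with a name, a description, and a JSON Schema for their inputs. The sketch below is a simplified illustration of that shape, not a working MCP server; the governance point is that the host deciding whether a call is permitted, and what the data means, is where the premium sits.

```python
# Illustrative shape of an MCP-style tool declaration (simplified sketch;
# field names follow the protocol's tool schema, but this is not a server).
read_record_tool = {
    "name": "crm_read_account",
    "description": "Read the canonical account record from the system of record.",
    "inputSchema": {
        "type": "object",
        "properties": {"account_id": {"type": "string"}},
        "required": ["account_id"],
    },
}

def validate_call(tool: dict, args: dict) -> bool:
    """Toy check that an agent's proposed call supplies the required fields.
    A real host performs full JSON Schema validation and - crucially for the
    governance argument above - decides whether the call is permitted at all."""
    schema = tool["inputSchema"]
    return all(key in args for key in schema.get("required", []))
```

The pipe transmits the call; the ontological layer owns the schema, the permissions, and the interpretation of what comes back.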
The Intelligence Layer and the Art of the Prompt
The transition under way is not, at its core, about software being replaced by AI. It is about where value migrates within a technology stack that is being comprehensively reorganised. The most visible migration runs from the application layer toward infrastructure: the Big Five hyperscalers are forecast to spend between USD660bn and USD690bn on AI infrastructure in 2026, a 36% increase on 2025, with roughly 75% targeting AI specifically. Overall IT spending grew by 8% year-over-year in early 2026; AI-specific budgets surged by over 100%. The arithmetic is not subtle: the money is coming from somewhere, and the somewhere is largely the SaaS seat-count economy.
The collapse of seat-based pricing is real and structural rather than cyclical. When a single AI agent performs the work of multiple human analysts, a customer may reduce its licence count from fifty to ten while maintaining output. Gartner forecasts that 40% of enterprise SaaS will carry outcome-based elements by 2026, up from 15% in 2024. Yet outcome-based pricing carries its own contradictions. Critics at Parloa have made the case in Forbes that it constitutes ‘efficiency theft’: if AI makes a process ten times faster, the enterprise continues paying the same outcome price while the vendor’s compute costs fall, widening the vendor’s margin at the customer’s expense. CFOs, unsurprisingly, often prefer the predictability of per-token consumption pricing. By 2026, 46% of SaaS companies have adopted hybrid models that blend base subscriptions with variable charges - a pragmatic truce between the economics of the old model and the logic of the new.
The deeper value story, however, is not about pricing mechanics. It concerns the emergence of what might be called the System of Intelligence: the analytical layer that sits above the data and beneath the agent, synthesising records to answer not merely what happened but why, and what should be done next. This is the layer the Sovereign Index series has been circling from the beginning - the Corporate Reasoning Engine, the digitisation of institutional wisdom described by Greg Jensen at Bridgewater, the Walled Garden where the firm’s proprietary data meets its reasoning architecture. The question the February panic never posed was not whether AI threatens software. It was who owns the intelligence layer that agents must ultimately consult.
That question has a practical corollary that has received almost no attention in market commentary: the quality of reasoning an AI system produces is not simply a function of the model’s training. It is equally a function of how the model is questioned. Cognitive Arbitrage, as a practice, is inseparable from the art of the prompt. A model asked to retrieve and summarise produces, at best, Information Arbitrage - a faster version of what was already possible. A model engaged through a structured dialectic produces something qualitatively different.
The architecture of a synthesis-enabling prompt is identifiable. It opens not with a request but with a framing: here is the proposition I am testing, and here is the specific assumption I wish to interrogate. It asks the model to argue the opposing position with equal rigour before attempting any resolution. It maps the relevant domains of evidence - financial, regulatory, technical, institutional - and asks for reasoning about their interactions rather than isolated treatment of each. It ends not with a verdict but with a map of residual uncertainty: where does the evidence point in genuinely different directions, and what would resolve the ambiguity? Thesis, antithesis, synthesis, open question - this is the cognitive structure that transforms a language model from a retrieval engine into a reasoning partner. It is also, not incidentally, the architecture of the Sovereign Index series itself, each note a station in a Socratic journey rather than an arrival at a fixed conclusion. Greg Jensen’s insight at Bridgewater was precisely this: the Master Prompt is not a set of answers. It is an institutional constitution for asking the right questions.
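The dialectic structure described above can be sketched as a reusable template. The wording is illustrative, not a canonical ‘Master Prompt’; the point is the four-part cognitive scaffold, not the specific phrasing:

```python
# A minimal template for the synthesis-enabling prompt structure:
# framing, antithesis, cross-domain interaction, residual uncertainty.
SYNTHESIS_PROMPT = """\
Proposition under test: {proposition}
Assumption I want interrogated: {assumption}

1. Steelman the proposition using evidence from these domains: {domains}.
2. Argue the opposing position with equal rigour before any resolution.
3. Reason about interactions between the domains, not each in isolation.
4. Close with a map of residual uncertainty: where does the evidence point
   in genuinely different directions, and what would resolve the ambiguity?
Do not end with a verdict; end with the open questions.
"""

prompt = SYNTHESIS_PROMPT.format(
    proposition="vibe coding collapses SaaS value",
    assumption="initial build cost is a proxy for total cost of ownership",
    domains="financial, regulatory, technical, institutional",
)
```

A retrieval prompt asks the model what is known; this structure forces it to argue with itself before answering, which is the difference between Information Arbitrage and Cognitive Arbitrage in miniature.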
The human expertise that commands a 40% wage premium in the 2026 labour market reflects this shift. The capacity to write a for-loop is no longer the differentiating skill; architectural intent is - the ‘Taste’ to recognise when a technical feature serves the business purpose and the ‘Architectural Literacy’ to guide an ensemble of agents toward a coherent outcome. The Vibe Coding Cleanup Specialist who transforms AI-generated prototypes into production-grade systems is, at bottom, a practitioner of Cognitive Arbitrage: supplying the judgement, the ethical weighting, and the interpretability that no autonomous system provides for itself. Amodei identified this as the Adolescence of Technology’s central constraint - not chips, but interpretability, the ability to look inside a model’s reasoning and understand why it arrived where it did.
The organisational consequence of this distinction is only beginning to register in management literature, but it may be the most consequential divergence of the current decade. Firms that have invested systematically in Cognitive Arbitrage - clearing the IT Spaghetti, enclosing their data within sovereign architectures, building the institutional prompting discipline that converts a generic language model into a Reasoning Engine - are not merely more productive than those that have not. They are accumulating an advantage that compounds in a manner structurally unlike any previous technology adoption curve.
The logic of compounding is this. Better prompts produce better outputs; better outputs, when fed back into the institutional knowledge base, generate richer context for the next cycle of prompting; richer context produces more precise synthesis; and more precise synthesis, over time, becomes the proprietary model weight that Nadella described at Davos as the true meaning of AI sovereignty - the ability to embed the tacit knowledge of a corporation into model weights that the firm actually controls. The firm that achieves this is not merely automating existing work - it is converting accumulated human judgement into a durable, scalable asset. The firm that has not begun that process is not standing still. It is falling behind at a rate determined by the compounding cadence of the firms that have.
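The compounding logic admits a toy model. The uplift rate below is an assumption chosen purely for illustration, not an estimate; what matters is the shape of the curve, not its slope:

```python
# Toy model of the compounding gap: each prompting cycle enriches the
# institutional context by a small factor for the adopter, while the
# laggard's capability stays flat. The 5% per-cycle uplift is assumed.
def capability_after(cycles: int, uplift_per_cycle: float) -> float:
    level = 1.0
    for _ in range(cycles):
        level *= (1.0 + uplift_per_cycle)  # better output -> richer context
    return level

adopter = capability_after(12, 0.05)   # e.g. monthly cycles for a year
laggard = capability_after(12, 0.0)
gap = adopter / laggard                # roughly 1.8x after a year, under these assumptions
```

The laggard is not losing ground linearly; the gap widens at the adopter’s compounding cadence, which is the structural point.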
The adoption data suggests the gap is already material. NBIM saved 213,000 staff-hours annually by making AI proficiency mandatory across its lean organisation - not by deploying a novel technology, but by subjecting an existing one to institutional discipline. The NHS, confronted with the same tools, saw barriers. That divergence is not primarily technical; it is a question of institutional will, of the courage to dismantle the familiar before the siege is complete. What distinguishes the leaders from the laggards in 2026 is not access to models - those are, by now, effectively a commodity - but the quality of the questions being put to them, the cleanliness of the data being fed into them, and the governance architecture ensuring that the outputs remain interrogable and sovereign. The gap between those who have built that infrastructure and those who have not is widening at a pace that will make the cloud adoption curve of the 2010s look leisurely by comparison.
The Clock Inside the Machine
There is a dimension of the AI infrastructure buildout that the Saasmageddon narrative has almost entirely neglected, yet which may ultimately bear more on the long-run economics of the technology sector than whether any individual SaaS company survives the transition. It concerns the relationship between the pace of Nvidia’s hardware releases and the depreciation assumptions embedded in the financial statements of the hyperscalers committing trillions to build AI factories. The question is disarmingly simple: what is the genuine economic life of a GPU when Nvidia ships a new architecture every twelve to eighteen months, each delivering three to four times the compute performance of its predecessor?
The hardware cadence under Jensen Huang has become extraordinary by any historical standard. Blackwell B200 arrived in 2024. Blackwell Ultra (B300) followed in the second half of 2025, delivering roughly 50% performance uplift and 288GB of 12-Hi HBM3E memory per GPU. Vera Rubin - already taped out, with Huang reportedly asking TSMC to increase 3nm production by 50% to meet Rubin demand - is slated for enterprise deployment in the second half of 2026, delivering 3.3 times the dense FP4 compute of the B300, a transition from HBM3e to HBM4, and doubled NVLink interconnect bandwidth. Rubin Ultra follows in 2027 at a projected 100 petaflops of dense FP4 - another tripling. At Computex in 2024, Huang confirmed the philosophy: ‘Build the entire data centre scale, disaggregate and sell to you parts on a one-year rhythm, and push everything to technology limits.’
The financial consequences of that rhythm have been managed, on the hyperscalers’ income statements, by a quiet accounting adjustment that preceded the AI boom but now shapes it decisively. By 2023 and 2024, AWS, Google Cloud, and Microsoft Azure had, in a remarkably coordinated fashion, extended the assumed useful life of their data centre hardware from three or four years to six - a change that Cerno Capital estimates reduced collective depreciation expenses by roughly half. With the Big Five committing USD443bn to AI infrastructure in 2025 and a projected USD602bn in 2026, the gap between accounting life and economic life is not a rounding error. Cerno Capital’s arithmetic suggests that extending useful lives from three to six years reduced collective data-centre depreciation from approximately USD39bn to USD21bn in 2024 alone; at 2025 capex levels, the estimated saving rises to USD23bn per year. This suppression of the depreciation charge flows directly to operating income - and, in turn, to the valuations on which the AI narrative has been trading.
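The mechanics of the adjustment are straight-line depreciation. The depreciable base below is a hypothetical figure chosen so the outputs echo the scale of the Cerno estimates in the text; the halving effect is exact arithmetic, not an estimate:

```python
# Straight-line depreciation arithmetic behind the useful-life debate.
# The fleet cost is illustrative; the halving is mechanical.
def annual_depreciation(cost_bn: float, useful_life_years: float) -> float:
    """Straight-line charge: cost spread evenly over the assumed life."""
    return cost_bn / useful_life_years

fleet_cost = 126.0                                 # hypothetical base, USDbn
short_life = annual_depreciation(fleet_cost, 3)    # 42.0 bn/yr
long_life = annual_depreciation(fleet_cost, 6)     # 21.0 bn/yr

# Doubling the assumed life halves the charge; the difference flows
# straight to operating income for as long as the assumption holds.
earnings_uplift = short_life - long_life           # 21.0 bn/yr
```

Nothing about the cash leaving the building changes; only the timing of its recognition does, which is why the gap between accounting life and economic life is the entire debate.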
Michael Burry entered this debate publicly in November 2025, arguing across social media and Substack that the hyperscalers were systematically inflating earnings by depreciating Nvidia-powered hardware over five or six years while Nvidia shipped new architectures annually. His estimate, widely cited across the financial press, was that the divergence between accounting life and economic life would produce approximately USD176bn of understated depreciation and overstated profits between 2026 and 2028 - leaving reported operating income at companies like Oracle and Meta more than 20% above what he considers economic reality. The comparison he drew was not to Enron but to Cisco at the peak of the dot-com bubble: exuberant capital spending and optimistic accounting assumptions converging toward a reckoning that nobody wants to date precisely.
The counterargument rests on a three-stage lifecycle: years one and two in primary service supporting frontier model training; years three and four repurposed for high-value real-time inference; years five and six relegated to batch analytics. This cascade logic has worked before - ageing server infrastructure has historically found second lives in lower-value workloads, and the analogy is not frivolous. Its vulnerability is supply dynamics. A100 GPUs retained pricing during the H100 shortage precisely because Blackwell was still queued. Once supply normalised, A100 values eroded sharply. Cerno Capital observed that once H100s became widely available, A100s saw steep value erosion. With Rubin arriving in the second half of 2026 at 3.3 times Blackwell Ultra’s compute performance, the economic incentive to retire older silicon rather than pay identical electricity costs for a fraction of the throughput will be substantial. The cascade argument assumes demand outpacing supply. Nvidia’s own cadence may be the force that dissolves that assumption.
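The retire-or-run decision reduces to cost per unit of compute. A sketch under assumed power and electricity figures (the 3.3x performance ratio is from the Rubin comparison above; the rest is illustrative):

```python
# Sketch of the cascade's vulnerability: if a new generation delivers 3.3x
# the compute at broadly similar power draw, old silicon is ~3.3x more
# expensive per unit of work. Power and tariff figures are assumptions.
def cost_per_compute(power_kw: float, price_per_kwh: float, perf_units: float) -> float:
    """Hourly electricity cost divided by delivered compute units."""
    return (power_kw * price_per_kwh) / perf_units

old = cost_per_compute(power_kw=10.0, price_per_kwh=0.08, perf_units=1.0)
new = cost_per_compute(power_kw=10.0, price_per_kwh=0.08, perf_units=3.3)

penalty = old / new   # old silicon pays ~3.3x more per unit of work
```

That penalty is bearable only while demand exceeds new supply; once Rubin capacity is abundant, paying identical electricity for a third of the throughput stops making sense, which is the A100 episode restated.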
Amazon has at least demonstrated some accounting integrity on this question. Its 3Q25 10-Q disclosed a shortening of the useful life of a subset of servers from six years to five, citing explicitly ‘the increased pace of technology development, particularly in the area of artificial intelligence and machine learning’ - a charge that reduced net income by USD677mn across nine months and which management absorbed without flinching. Nadella, separately, acknowledged the underlying tension: ‘I didn’t want to go get stuck with four or five years of depreciation on one generation.’ Meta moved in the opposite direction, extending its schedule to five and a half years in the same period it was projecting USD110bn in 2026 capex, with no direct AI revenue and a free cash flow that had fallen from USD54bn in 2024 to a projected USD20bn in 2025. When Meta raised its capex guidance in October, the stock dropped 11% in a session - one of those brief moments when the market performed the arithmetic it usually prefers to defer.
What the depreciation debate has largely overlooked, however, is who benefits structurally from Nvidia’s accelerating cadence. The answer lies upstream - and it reframes the narrative considerably. TSMC is the primary and most immediate beneficiary. Nvidia’s one-year rhythm translates directly into a perpetual, high-intensity demand signal for TSMC’s most advanced process nodes: both Blackwell Ultra and Vera Rubin are fabbed on TSMC’s N3P, and Huang has publicly asked TSMC to increase 3nm production by 50% for Rubin. TrendForce estimates that TSMC’s CoWoS advanced packaging capacity will scale to produce interposers at 5.5 times reticle size in 2026 and 9.5 times by 2027 - the latter designed specifically for Rubin Ultra’s four-die GPU packages. C.C. Wei, TSMC’s chief executive, told analysts after the company’s 4Q25 results that his conviction in ‘the multi-year AI megatrend remains strong.’ TSMC earns its revenue at the moment of manufacture, carrying none of the obsolescence risk that haunts the hyperscalers’ balance sheets. Each generation refresh that renders the previous generation economically uncompetitive is, for TSMC, simply another order.
ASML sits one cycle further removed, but no less structurally essential. As the sole manufacturer of High-NA EUV lithography systems - the equipment that enables TSMC to pattern silicon at 2nm and beyond - ASML’s growth profile lags Nvidia’s cadence by several years, because lithography machines are ordered and installed in anticipation of process nodes that will not ramp for another eighteen to thirty-six months. ASML’s 2025 revenue growth was projected at approximately 15%, modest against TSMC’s 36% and Nvidia’s triple-digit expansion, reflecting that the current AI boom runs largely on capacity built on prior-generation equipment. But TSMC’s N2 node began ramping in the second half of 2025, and A16 with Super Power Rail follows in 2026 - both requiring ASML’s latest tools. The AI demand wave reaches ASML through a lag; it reaches it nonetheless. Nvidia’s accelerating cadence is, in effect, ASML’s long-term order book, compounding at a remove.
The Credit Beneath the Clay
The depreciation question has a further dimension that the technology press has been slow to examine, but which the credit markets are beginning to price with mounting unease: the structural exposure of the private lending industry to data centre assets whose genuine economic lives may be materially shorter than the terms against which they were financed. The numbers involved are large enough to command attention even in a market accustomed to large numbers.
There was nearly USD200bn of debt raised for data centre development in 2025 alone, according to iCapital’s analysis - among the transactions a USD27bn joint venture between Meta and Blue Owl Capital. JPMorgan estimates that an additional USD5.3tn will be required to support AI infrastructure development through 2030, with roughly half expected from external capital. Morgan Stanley sees a USD1.5tn funding shortfall in the USD2.9tn of global data centre capex projected from 2025 to 2028 - a gap that credit markets are being asked to fill across investment-grade bonds, asset-backed securities, commercial mortgage-backed securities, and bespoke private credit facilities. In February 2026, S&P Global Ratings flagged this dynamic as among the principal drivers of credit market liquidity risk for the year, noting that global data centre securitisation volumes had topped USD30bn in 2025, nearly tripling in a single year.
Software companies - historically among the most favoured borrowers in private credit since 2020, representing roughly 17% of BDC investments by deal count - face a separate set of pressures from those examined in this note. UBS has warned that in an aggressive AI disruption scenario, default rates in U.S. private credit could climb to 13%, against roughly 8% in leveraged loans. Software and services companies account for the largest share of payment-in-kind loans, arrangements where borrowers defer cash interest payments - structures that become acutely dangerous when underlying revenues are under pressure from exactly the kind of structural repricing described above.
The data centre credit exposure is structurally distinct from the software credit exposure, but potentially more consequential in scale. Private credit lenders have committed capital against assets whose value rests on two assumptions: that the facilities will be occupied by creditworthy hyperscaler tenants on long-term contracts, and that the computing assets within them will remain economically competitive across the lending term.
Nvidia’s annual cadence challenges the second assumption directly. If Blackwell hardware carries a genuine economic life of three to four years rather than six, the collateral supporting a six-year lending facility begins to deteriorate meaningfully in years four through six - precisely when performance-per-watt comparisons with Rubin Ultra make continuing to run the older silicon economically irrational. Michelle Russell-Dowe, co-head of private debt at Schroders Capital, described ‘master trust structures where the assets can be rotated every few years’ as hard to underwrite, precisely because the residual value assumptions are sensitive to technological displacement. Schroders chose not to participate.
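The mismatch can be sketched as a toy comparison between a six-year amortising facility and collateral depreciating on a four-year economic life. The loan-to-value ratio, linear paydown, and straight-line value decline are all simplifying assumptions, not a model of any actual facility:

```python
# Toy collateral-vs-loan comparison. All schedules are assumptions:
# straight-line collateral decline, simple linear loan amortisation.
def straight_line_value(cost: float, life_years: float, t: float) -> float:
    return max(cost * (1 - t / life_years), 0.0)

def loan_balance(principal: float, term_years: float, t: float) -> float:
    return max(principal * (1 - t / term_years), 0.0)

cost, ltv, term = 100.0, 0.7, 6.0
shortfall_years = [
    t for t in range(0, 7)
    if straight_line_value(cost, 4.0, t) < loan_balance(cost * ltv, term, t)
]
# Under these assumptions the collateral dips below the outstanding balance
# in the back half of the term - the years-four-through-six window the
# depreciation argument flags.
```

Shorten the economic life by two years and the lender’s security evaporates exactly when the facility still has years to run; that is the underwriting problem Russell-Dowe was describing.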
The financing architecture of the AI buildout introduces a further complexity that has attracted attention from regulators if not yet from markets. Nvidia guarantees it will purchase excess capacity from CoreWeave through 2032; CoreWeave issues debt backed partly by that guarantee; hyperscalers depend on CoreWeave for overflow compute; and the whole structure requires sustained capital inflows to function - what one analyst described as a closed system of mutually reinforcing balance sheets where the loop holds only as long as capital keeps flowing inward. AXA signalled its discomfort in December 2025, announcing it would ‘avoid financing technological gambles.’ The Bank of England has begun reviewing lending exposures to data centres. Global data centre securitisation volumes are projected to approach USD100bn by 2027 from virtually nothing in 2023. As one analysis observed, this boom combines 1990s-scale infrastructure spending with 1920s-style unregulated lending and layers 2000s-era financial engineering on top. The hyperscalers are racing their own depreciation schedules. The private credit investors who financed the assets are, in a real sense, racing alongside them - with somewhat less visibility into the semiconductor roadmap they are implicitly underwriting.
Where the Dots Connect
The threads of this note converge on a picture considerably more complex than the one the February panic implied. The market performed Information Arbitrage - it acted swiftly on a legible narrative - and in doing so created, for at least a portion of the software landscape, a mispricing of the kind that a more patient synthesis is designed to exploit.
The disruption is real where it is real. Vibe coding has genuinely compressed the cost of version-one software production. The seat-based pricing model is under structural pressure that is not cyclical. AI infrastructure spending is, in a meaningful sense, cannibalising software budgets. Point solutions that never had data gravity beyond their interface are exposed in ways that the market was correct to recognise. These observations stand.
Where the market failed was in the extrapolation. It conflated initial build cost - which vibe coding has reduced - with total cost of ownership, which remains formidable and is rising for AI-generated codebases. It confused the thinning of the interface layer with the erosion of the data layer beneath, when the agent economy has in fact deepened the centrality of the System of Record. It missed the emergence of the orchestration and governance layer - the Palantir mesh, the MCP-governed ecosystem, the Bridgewater Master Prompt - as a premium category with no obvious incumbent challenger. And it failed entirely to examine the financial foundations of the infrastructure boom displacing software in its narrative, overlooking the possibility that hyperscaler earnings are being enhanced by depreciation assumptions that Nvidia’s own roadmap is quietly undermining - with consequences rippling upward to TSMC and ASML as structural beneficiaries, and outward to the private credit industry as a source of risk that is only beginning to be priced.
Bank of America’s observation - that the market was simultaneously pricing two mutually exclusive propositions - captures the logical incoherence at the heart of the selloff. If AI is powerful enough to displace established software workflows entirely, the infrastructure spending required to support it cannot simultaneously be deteriorating. If the infrastructure spending is predicated on accounting assumptions that Nvidia’s release cycle will eventually force into correction, then the confidence with which AI-driven disruption is being priced rests on shakier ground than the narrative admits. Amodei’s framing in The Adolescence of Technology is apt: a period of acquiring power before developing the wisdom to govern it. The market’s February behaviour illustrated that adolescence in miniature - powerful enough to move USD285bn in a fortnight, insufficiently measured to distinguish the genuinely disrupted from the paradoxically strengthened.
The historical parallel is instructive without being determinative. In February 2016, LinkedIn fell 44%, Tableau 50%, and Salesforce 13% in a near-identical panic about cloud software economics. The sector recovered within months, and Microsoft acquired LinkedIn four months later at a material premium.
Whether the same recovery follows in 2026 depends on the very distinctions this note has been drawing: the disruption is real for point solutions and seat-based models in a way it was not in 2016; it is overapplied to systems of record, regulated vertical AI, and the orchestration layer in ways that 2016 did not even need to consider.
Towards a Topology of Cognitive Advantage
If the analysis above is roughly right, the first practical imperative is to refuse to treat ‘software’ as a single analytical category. The moat hierarchy described here suggests a clear enough topology: at the base, accelerating obsolescence for UI-only point solutions that never owned the data; in the middle, durable resilience for systems of record with deep integration density, regulatory embeddedness, and institutional memory; at the apex, emerging premium for orchestration and governance infrastructure, for vertical AI platforms trained on proprietary domain data that generalist models cannot access, and for the supply chain positions - TSMC and ASML in particular - that are structurally indifferent to which generation of hardware is being installed, because every generation requires them.
The second imperative is to extend the total cost of ownership calculation in both directions. Downstream, vibe-coded replacements for SaaS tools carry hidden maintenance costs that the panic narrative has entirely ignored. Upstream, the ROI claimed for AI infrastructure carries depreciation risks that Nvidia’s own release schedule makes increasingly difficult to sustain. The private credit industry - which has committed capital against assets whose obsolescence timeline is being compressed by the very hardware manufacturer whose demand they are financing - faces a version of this problem that deserves considerably more analytical scrutiny than it has yet attracted.
The third imperative is to read the depreciation footnotes in forthcoming 10-K filings alongside Nvidia’s release announcements. Amazon has moved in the defensible direction, shortening useful lives and absorbing the earnings charge; Meta has moved in the opposite direction, extending schedules at the moment its capital intensity reached levels with no precedent in the technology sector. When Vera Rubin ships in the second half of 2026 - delivering 3.3 times the compute of Blackwell Ultra - the question of whether Blackwell assets are worth five or six years of economic value will cease to be theoretical. The secondary market for H100s after Blackwell’s arrival previewed the answer: steep erosion follows availability, not announcement.
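The earnings effect of those schedule choices is mechanical. A hedged sketch, using hypothetical round numbers rather than either company's actual figures: for a fixed capex base, extending the assumed useful life lowers the annual straight-line depreciation charge, and the difference flows directly into reported operating income.

```python
# Hypothetical illustration of how extending server useful lives lowers the
# annual depreciation charge and so flatters near-term reported earnings.
# The 60bn capex base and the 4- vs 6-year schedules are illustrative.

def annual_depreciation(capex, life_years):
    """Straight-line annual depreciation for a capex base, zero salvage."""
    return capex / life_years

CAPEX = 60e9  # hypothetical annual AI capex, USD

dep_short = annual_depreciation(CAPEX, 4)  # shorter, 'defensible' schedule
dep_long = annual_depreciation(CAPEX, 6)   # extended schedule

print(f"4-year schedule: {dep_short / 1e9:.1f}bn/yr")
print(f"6-year schedule: {dep_long / 1e9:.1f}bn/yr")
print(f"earnings uplift from extension: {(dep_short - dep_long) / 1e9:.1f}bn/yr")
```

On these assumed numbers the extension adds five billion a year to operating income without a single additional unit of compute being sold, which is why the depreciation footnote deserves reading alongside the release calendar.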
The fourth imperative is to invest in Cognitive Arbitrage as a practice - to treat the construction of synthesis-enabling prompts as a genuine intellectual discipline rather than a productivity shortcut. The Sovereign Index series has been, among other things, an attempt to demonstrate what that practice looks like at scale: beginning with a provocation, interrogating its assumptions through sustained Socratic inquiry, allowing each station in the journey to qualify rather than simply confirm the previous one, and arriving not at certainty but at a more precisely mapped uncertainty. The Walled Garden, the Master Prompt, the Reasoning Engine - these are the institutional architectures of Cognitive Arbitrage. Their value is only as high as the quality of the questions they are designed to ask.
There is a fifth imperative that the four above presuppose but do not state plainly: the adoption gap is itself a compounding asset. The enterprises that have made the early, unglamorous investments - resolving the IT Spaghetti, building sovereign data pipelines, training their analysts to prompt for synthesis rather than summary - are not simply ahead on a linear scale. They are operating in a different cognitive economy from those that have not. The outputs of their Reasoning Engines become inputs to the next cycle of institutional learning; the proprietary model weights accumulating in their Walled Gardens grow more differentiated with each passing quarter; the switching costs imposed on their clients deepen as those clients’ own data becomes embedded in a sovereign architecture they have come to depend upon.
This is the organisational analogue of data gravity at the software level - and it carries the same implication. Just as the System of Record becomes harder to displace precisely because more agents depend upon it, the AI-sovereign enterprise becomes harder to compete with precisely because its cognitive infrastructure compounds in ways that a late adopter cannot replicate by throwing capital at the problem. NBIM’s 213,000 hours of annual saving was not purchased overnight; it was the return on an institutional commitment to AI proficiency made before the commitment felt safe. The firms making equivalent commitments in 2026 will find, in 2028, that the distance between themselves and the laggards has widened to a degree that no amount of vibe-coded catch-up can meaningfully close.
The February selloff was, in this light, doubly ironic. The market punished the software vendors that enable Cognitive Arbitrage while rewarding the infrastructure spend that supports it - without pausing to consider that the enterprises best positioned to capture value from that infrastructure are precisely those that have already built the institutional discipline to use it. The mispriced moat was not only in the software sector. It was in the gap between organisations that understand what they are building and those that are still, in Nadella’s phrase, importing an Executive Centre with embedded philosophical assumptions they have not examined. That gap will prove rather more consequential than a fortnight’s price-to-sales compression.
The broader question that Saasmageddon has forced into the open is whether enterprise technology is undergoing a disruption analogous to the shift from on-premise to cloud - a significant restructuring of value within a landscape that remains essential - or something more fundamental. The view I have been building through this series is that it is the former. The system of record does not become dispensable when agents multiply; it becomes the substrate they cannot function without. The regulated industry does not abandon accountability infrastructure because vibe coding is fast; it invests more heavily in making AI auditable. And the AI infrastructure buildout - however confidently priced by the market - is running a race against its own depreciation clock, a race that Nvidia’s annual rhythm makes more consequential with each successive release. The gap between that layered reality and the surface-level narrative the market has been trading is precisely where cognitive advantage lives. The investigation, as ever, continues.
Pancras
— all views expressed are my own —