The Paradox at the Heart of AI Finance
When AI wins in the wrong sequence
There is a scenario worth walking through carefully.
AI disrupts enough companies fast enough to trigger a private credit crisis. That credit crisis then makes it harder to finance the AI infrastructure buildout that caused the disruption in the first place.
The technology that wins creates the conditions that starve its own rollout of capital.
Last week, UBS credit strategist Matthew Mish published a tail risk scenario that deserves attention. In a rapid, severe AI disruption, he estimates private credit default rates could reach 14–15 percent. High yield defaults at 3–6 percent. Leveraged loan defaults at 8–10 percent. These are dotcom-bust numbers. In this scenario, private credit and leveraged loan issuance could fall 50–75 percent year over year.
The Financial Times noted the irony: investors’ worries that AI hyperscalers will win too much might actually disrupt the financing of AI’s commercial rollout.
Might? I believe it will.
Why private credit is the transmission mechanism
Private credit and leveraged loans are not diversified across the economy. They are concentrated — heavily — in software and business services. These are precisely the sectors in the line of AI fire.
This is not an abstract concern. Sector-level default waves are how credit crises happen. Around a fifth of US high-yield energy companies defaulted in 2016 and again in 2020. More than half of high-yield finance companies defaulted in the Great Financial Crisis. Four-fifths of high-yield telecom companies defaulted in the dotcom bust. Diversification matters in credit because defaults do not come for individual borrowers — they come for entire sectors at once.
Software and business services are one sector.
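The diversification point is simple arithmetic. The sketch below compares expected credit losses for a software-concentrated book and a diversified one hit by the same sector default wave. Every figure (sector weights, default rates, loss-given-default) is hypothetical, chosen to illustrate the mechanism, not taken from the UBS estimates or the historical episodes above.

```python
# Illustrative only: hypothetical portfolio weights and default rates,
# not figures from the UBS analysis or the historical episodes cited.

def portfolio_loss(sector_weights, sector_default_rates, lgd=0.6):
    """Expected credit loss: sum over sectors of
    exposure weight * default rate * loss-given-default."""
    return sum(w * d * lgd for w, d in zip(sector_weights, sector_default_rates))

# A sector default wave: 15% defaults in software, 2% everywhere else.
wave = [0.15, 0.02, 0.02, 0.02]

concentrated = [0.70, 0.10, 0.10, 0.10]   # 70% software/business services
diversified  = [0.25, 0.25, 0.25, 0.25]   # spread evenly across four sectors

print(portfolio_loss(concentrated, wave))  # ≈ 6.7% of the book
print(portfolio_loss(diversified, wave))   # ≈ 3.2% of the book
```

Same shock, roughly double the loss, purely because of where the exposure sits. That is the sense in which software-heavy private credit books absorb an AI disruption together.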
When AI disrupts a company’s revenue model, it does not do so gradually. It does so suddenly, at scale, across an entire category of business. A workflow automation company, a business process outsourcer, a mid-market SaaS vendor — these businesses can go from viable to structurally impaired in the time it takes a new model to reach production deployment. Their lenders, concentrated in private credit, absorb that shock together.
The credit crunch that follows is not a GPU story. It is a software story. But its consequences land directly on the AI buildout.
How a software credit crisis becomes a compute financing crisis
When private credit defaults spike and issuance collapses, capital does not selectively retreat from the bad borrowers. It retreats from the asset class. Risk appetite falls across the board. Underwriting standards tighten. Lenders who were extending credit at 7–8x debt/EBITDA to software companies start asking harder questions about every deal on their book — including GPU-backed infrastructure loans.
The GPU-backed credit market is not reckless. But it is young, thinly standardized, and structurally exposed to confidence shocks.
Most transactions today share several features:
- Collateral is highly concentrated in a single hardware generation (e.g., H100-class GPUs)
- Depreciation assumptions are negotiated deal by deal, not derived from an agreed market curve
- Rental value underwriting is based on bilateral contracts or broker quotes, not a transparent reference index
- There is no deep secondary market with observable clearing prices in stress
- Covenants are often lighter than traditional equipment finance because demand has so far outstripped supply
- Many lenders are private credit funds with exposure to the very software sectors most vulnerable to AI disruption
None of these characteristics are fatal in isolation. Together, they mean valuation confidence is narrative-driven rather than market-anchored.
In a stress scenario, the issue is not that GPUs suddenly become worthless. It is that lenders lack standardized tools to determine what they are worth when counterparties fail, contracts are renegotiated, or rental rates fall 30–40 percent.
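How wide can "what they are worth" get? A toy income-approach valuation makes the point. Both lenders below underwrite the same GPU and differ only in assumptions about rental-rate decline, utilization, useful life, and discount rate, exactly the parameters negotiated deal by deal today. All numbers are hypothetical.

```python
# Illustrative sketch of how far GPU collateral valuations can diverge
# when depreciation and rental assumptions are set deal by deal.
# All inputs are hypothetical, not market data.

def collateral_value(hourly_rate, utilization, useful_life_years,
                     rate_decline, discount_rate):
    """Income-approach value of one GPU: discounted rental cash flows
    over its remaining useful life, with rental rates declining each year."""
    value = 0.0
    for year in range(1, useful_life_years + 1):
        annual_revenue = hourly_rate * 8760 * utilization * (1 - rate_decline) ** year
        value += annual_revenue / (1 + discount_rate) ** year
    return value

# Two lenders underwriting the same GPU rented at $2.00/hour today:
optimistic = collateral_value(2.00, 0.80, 6, 0.10, 0.12)
stressed   = collateral_value(2.00 * 0.65, 0.60, 4, 0.25, 0.18)  # rates down 35%, shorter life

print(f"optimistic: ${optimistic:,.0f}")  # roughly four times the stressed figure
print(f"stressed:   ${stressed:,.0f}")
```

Both parameter sets are defensible on paper. Without a market-anchored depreciation curve and rental reference rate, there is no way to say which one a risk committee should believe, and that is precisely the gap that narrative-driven valuation confidence papers over.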
When distress hits one part of a lender’s portfolio — software and business services — internal risk committees do not calmly re-underwrite novel collateral classes. They reduce exposure. They tighten liquidity. They prioritize balance sheet preservation.
In that environment, the absence of an independent compute rental reference rate, a broadly accepted depreciation curve by generation, standardized loan-to-value bands under stress scenarios, and a track record of recoveries through a full hardware cycle becomes decisive.
The financing window for AI infrastructure does not slam shut because AI failed.
It slams shut because AI won — in the wrong sequence.
The distribution consequence
There is a second effect that compounds the concern.
If private credit issuance falls 50–75 percent, the capital that continues to flow will flow to the strongest credits. Hyperscalers with investment-grade balance sheets. Tier-one neoclouds with hyperscaler backstops. The largest, best-capitalized players in the stack.
The mid-market builders — the vertical AI companies, the industrial automation platforms, the applied AI firms doing the work that actually generates economy-wide productivity gains — are the first casualties of a credit crunch. They were already paying 12–16 percent for capital. In a risk-off environment, they cannot access capital at any price.
The Economist published a careful analysis this month documenting what economists have been quietly noting: AI’s productivity gains are real at the task level but nearly invisible in aggregate data. Adoption is rising. Intensity of use remains low. The organizational rewiring that produces economy-wide productivity gains has barely begun.
The implication is uncomfortable. AI’s most disruptive effects on employment and business models may arrive before its most productive effects show up in GDP. Companies are disrupted before the economy captures the gains. Borrowers default before lenders benefit from AI-driven efficiency improvements in their own underwriting.
If capital retreats to the top of the stack during that gap, the technology diffuses to the places capital can reach — and nowhere else. The center of gravity does not shift from model training to applied deployment. The economic substrate does not form.
What you get instead is a more concentrated AI economy, financed at lower cost for a smaller number of players, with higher barriers to entry than the technology itself would otherwise require.
That is the real downside scenario. Not that AI fails. That it succeeds — expensively, narrowly, and for fewer people than it should.
The open question
UBS’s tail risk scenario is not the base case. Most analysts expect AI adoption to be gradual enough that sector-wide disruption never arrives at the speed required to trigger default waves of this magnitude.
But the scenario does not need to be the base case to matter.
The question for anyone building, lending, or investing in the AI infrastructure layer is not whether the tail risk materializes. It is whether the financial infrastructure exists to function through it.
Right now, it exists in fragments — not in standardized form.
Individual lenders have internal models. Brokers publish rate sheets. Operators have views on useful life by chip generation. But there is no independent, widely cited compute rental benchmark, no standardized methodology for marking GPU collateral in distress, no transparent depreciation index by generation, and no broadly adopted stress-case underwriting framework.
In their absence, every stress event becomes a full stop. Lenders do not reprice — they pause. The financing window narrows not because the technology is unsound but because no shared analytical framework exists to price uncertainty.
Financial infrastructure does not eliminate risk. It makes markets functional through risk.
On financial engineering and fragility
A common response to this analysis: given this fragility, perhaps it is better to let the market develop more slowly, with less financial engineering, and let the technology prove itself before building complex financial structures around it.
This confuses cause and design.
Financial infrastructure can amplify fragility if poorly constructed. But well-designed infrastructure distributes risk to those best positioned to bear it, creates price discovery mechanisms, and prevents liquidity shocks from becoming solvency crises.
The mortgage market did not make housing riskless. But standardized underwriting, securitization frameworks, insurance structures, and transparent benchmarks made housing finance scalable. Where those mechanisms were misaligned, fragility emerged. Where they were properly structured, markets functioned through stress.
AI infrastructure will be financed regardless of whether deep, standardized tools exist. The choice is not between finance and no finance. It is between concentrated, instinct-driven lending at high cost, or transparent, benchmarked markets capable of pricing and diversifying shocks.
Reference rates for compute rental. Transparent depreciation curves. Standardized collateral haircuts under stress. Deeper secondary markets.
These are not accelerants of speculation. They are shock absorbers.
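Of the four, a reference rate is mechanically the simplest to sketch; the hard part is data access and adoption, not math. A minimal illustration below, using a trimmed mean over entirely hypothetical rental quotes for a single hardware generation:

```python
# Minimal sketch of a reference-rate fix: a trimmed mean over daily
# rental quotes for one hardware generation. Quote values are hypothetical.

def reference_rate(quotes, trim=0.2):
    """Discard the top and bottom `trim` fraction of quotes, then average
    the rest. Trimming blunts the influence of outlier or strategically
    posted quotes, similar in spirit to survey-based benchmark fixings."""
    qs = sorted(quotes)
    k = int(len(qs) * trim)
    kept = qs[k:len(qs) - k] if k else qs
    return sum(kept) / len(kept)

# Hypothetical $/GPU-hour quotes from eight providers on one day:
h100_quotes = [1.85, 1.90, 1.95, 2.00, 2.05, 2.10, 2.40, 1.60]
print(reference_rate(h100_quotes))  # ≈ 1.975
```

A published daily series like this, maintained by an independent party across hardware generations, is the kind of anchor that would let lenders mark collateral and write stress covenants against something other than bilateral broker quotes.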
The time to build financial plumbing is before the pipes are needed.
The Penstock is an independent research publication on financial infrastructure for emerging technology asset classes. Starting with compute.

