The $7.6 Trillion Baseline

A recent Goldman Sachs report shifts the debate from whether artificial intelligence (AI) demand exists to which supply-side factors will determine the actual cost of the build-out. The report projects $7.6 trillion in AI capital expenditure as a baseline but emphasizes that this figure is highly sensitive to “swing variables,” including the useful life of AI silicon.

Chip longevity is seen as the most critical of these variables: rapid innovation could render standard chips, which typically last four to six years, obsolete within three, causing costs to skyrocket. Conversely, a "tiered model," in which older chips are reused for simpler tasks such as inference, could stabilize costs.
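The sensitivity to useful life can be made concrete with simple straight-line depreciation. A hypothetical sketch follows; the unit price is an assumed placeholder, not a figure from the report:

```python
# Hypothetical illustration of how chip useful life drives annualized capex.
# The unit price below is an assumption for illustration only.

def annualized_cost(unit_price: float, useful_life_years: float) -> float:
    """Straight-line depreciation: yearly capex per accelerator."""
    return unit_price / useful_life_years

price = 30_000  # assumed price per accelerator, USD

for life in (6, 4, 3):
    print(f"{life}-year life: ${annualized_cost(price, life):,.0f}/year")
```

Halving the useful life from six years to three doubles the annualized cost of the same hardware, which is why obsolescence risk dominates the report's baseline, and why tiered reuse, by stretching the effective life of older chips, pulls in the opposite direction.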

Data center complexity and the elasticity of compute demand are other variables likely to affect how much capital is expended on AI infrastructure in the next five years. Shortages in power grid capacity, specialized labor, and electrical equipment are also seen as factors elongating the build-out.

A separate report, meanwhile, frames this staggering infrastructure expenditure as the cornerstone of an emerging “machine economy.” In this paradigm, AI agents become the primary economic actors, executing high-frequency transactions and managing resource allocation independently. The report’s authors contend that legacy financial systems, characterized by slow settlement cycles and rigid know your customer (KYC) frameworks, are fundamentally ill-equipped for the velocity of agentic commerce.

Decentralized Infrastructure and the Latency Trade-off

Consequently, the report positions crypto and decentralized protocols as the essential, permissionless "economic rails" required to facilitate this shift. However, skeptics remain wary, questioning whether decentralized physical infrastructure networks (DePINs) can truly mitigate AI's ballooning capital requirements.

Vadim Taszycki, head of growth at StealthEX, notes that while decentralized networks can offer significant cost savings, they face physical limitations. While a decentralized provider like Akash might rent an H100 GPU for $1.48 an hour compared to $12.30 on Amazon Web Services, the trade-off is speed.
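Using the hourly rates cited above, the gap compounds quickly over a sustained workload. A minimal sketch, assuming a hypothetical 100-hour fine-tuning job:

```python
# Cost comparison using the hourly H100 rental rates cited in the article.
# The 100-hour job length is an assumption for illustration.

akash_hourly = 1.48   # H100 on Akash, USD/hr (from article)
aws_hourly = 12.30    # H100 on AWS, USD/hr (from article)

hours = 100  # hypothetical fine-tuning job

decentralized_cost = akash_hourly * hours
hyperscaler_cost = aws_hourly * hours
print(f"Decentralized job cost: ${decentralized_cost:,.2f}")
print(f"Hyperscaler job cost:   ${hyperscaler_cost:,.2f}")
print(f"Ratio: ~{aws_hourly / akash_hourly:.1f}x cheaper on the decentralized network")
```

At these rates the decentralized option is roughly 8x cheaper per GPU-hour, which is the savings Taszycki weighs against the speed penalty described next.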

“The big cloud providers can do [fast work] because their GPUs sit next to each other in one building, connected by special cables that move data in microseconds,” Taszycki said. He explained that decentralized networks, which stitch together GPUs across different countries via the public internet, add milliseconds of delay. This latency makes decentralized orchestration competitive for batch jobs and fine-tuning but unsuitable for serving high-scale, live chatbots where user experience depends on near-instant responses.
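The microseconds-versus-milliseconds distinction matters because a model split across GPUs pays one inter-GPU hop per pipeline stage for every generated token. A back-of-the-envelope sketch, with assumed (not measured) hop latencies and pipeline depth:

```python
# Back-of-the-envelope latency sketch. All figures are assumptions chosen to
# match the article's orders of magnitude (microseconds in-building vs.
# milliseconds over the public internet), not measurements.

stages = 8                   # assumed pipeline depth across GPUs
datacenter_hop_s = 5e-6      # ~microseconds over in-building interconnect
internet_hop_s = 50e-3       # ~tens of milliseconds over the public internet

per_token_dc = stages * datacenter_hop_s
per_token_net = stages * internet_hop_s
print(f"Per-token comms, co-located GPUs:  {per_token_dc * 1e3:.2f} ms")
print(f"Per-token comms, distributed GPUs: {per_token_net * 1e3:.0f} ms")
```

Under these assumptions the communication overhead jumps from a negligible fraction of a millisecond per token to hundreds of milliseconds, which is tolerable for batch jobs and fine-tuning but ruinous for a live chatbot streaming tokens to a waiting user.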

Leo Fan, founder of Cysic, echoed these sentiments, insisting that decentralized inference is unsuitable for low-latency workloads. Fan argued, however, that latency is the wrong benchmark for comparing decentralized platforms and hyperscalers like AWS.

“The hard problem isn’t distributed compute but discovery, scheduling, and attestation. The wedge isn’t price-per-token; it’s verifiability,” Fan said. He noted that trusted execution environments (TEEs) and zero-knowledge (ZK) attestations allow decentralized networks to compete in sectors where trust and verification matter more than “tail latency.”

Onchain Credit and the Funding Gap

Beyond compute, the focus is shifting to how these capital-intensive projects are funded. While traditional private credit has ample capital, it often overlooks smaller or non-standard deals. Onchain credit offers distinct advantages, such as allowing retail investors to participate in data center revenue that was previously restricted to institutional limited partners. Furthermore, platforms like Maple and Centrifuge can syndicate loans in the $5 million to $50 million range—a bracket often ignored by firms like Apollo due to high underwriting costs relative to fees.

Finally, onchain credit enables novel “pay-per-inference” models, where revenue fluctuates with GPU usage. Such models fit more naturally into tokenized revenue-share structures than rigid 20-year traditional leases.
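The contrast between a flat lease and a usage-linked revenue share can be sketched in a few lines. All rates and volumes below are invented for illustration; none come from the report:

```python
# Hypothetical sketch contrasting a fixed lease payment with a
# pay-per-inference revenue share. Every figure here is an assumption.

fixed_monthly_lease = 100_000.0   # USD, assumed flat lease payment
rate_per_1k_inferences = 0.50     # USD, assumed revenue-share rate

# Assumed fluctuating monthly inference volumes (GPU usage varies).
monthly_volumes = [150_000_000, 90_000_000, 220_000_000]

usage_revenues = []
for month, volume in enumerate(monthly_volumes, start=1):
    usage_revenue = (volume / 1_000) * rate_per_1k_inferences
    usage_revenues.append(usage_revenue)
    print(f"Month {month}: fixed lease ${fixed_monthly_lease:,.0f} "
          f"vs usage-based ${usage_revenue:,.0f}")
```

The usage-based cash flow swings with demand month to month, which is why it maps more naturally onto a tokenized revenue-share tranche than onto the fixed payment schedule a 20-year lease assumes.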

Despite this potential, experts identify four “gates” that remain closed to institutional adoption: legal enforceability in bankruptcy courts, the lack of tamper-evident oracle infrastructure for servicing covenants, regulatory uncertainty for billion-dollar tranches, and unstandardized tax and accounting products.

The consensus suggests a realistic timeline of 12 to 24 months for mid-sized syndicated deals to gain traction onchain, with majority-onchain mezzanine debt likely three to five years away. The first breakthroughs will likely come from Tier 2 operators rather than industry leaders like CoreWeave.


