OpenAI × NVIDIA × AMD and the “Circular Funding” Debate: How $100B–$500B Infrastructure Commitments Could Reshape AI Compute, Energy, and Capital Markets
Author: Zion Zhao Real Estate | 88844623 | 狮家社小赵
Author’s note: This article is strictly informational and educational; it does not provide investment, legal, or policy advice. All facts are attributed to credible sources and checked against primary disclosures where available. Full disclosure: I am a long-term shareholder of NVDA and AMD.
Executive overview
In the span of days, two announcements reframed the economics of artificial intelligence:
NVIDIA ↔ OpenAI unveiled what NVIDIA called “the biggest AI infrastructure project in history”: a plan to deploy ~10 gigawatts (GW) of AI systems over several years, paired with up to $100 billion of staged NVIDIA investment into OpenAI as each GW is stood up (NVIDIA, 2025).
AMD ↔ OpenAI followed with a 6 GW, multi-year GPU deployment agreement, including warrants for up to 160 million AMD shares issued to OpenAI (roughly up to ~10% of AMD on a fully issued basis if milestones are met), aligning chip purchases with equity upside (OpenAI, 2025; AMD, 2025).
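For intuition on the warrant’s scale, here is a quick dilution sketch. The 160 million share figure comes from the announcement; the AMD share count below is an approximate assumption for illustration, not a disclosed deal term:

```python
# Illustrative dilution math for the OpenAI warrant on AMD shares.
warrant_shares = 160_000_000        # up to 160M shares, per the announcement
shares_outstanding = 1_620_000_000  # assumed AMD basic share count (approximate)

vs_current = warrant_shares / shares_outstanding                        # ~9.9% of today's shares
vs_fully_issued = warrant_shares / (shares_outstanding + warrant_shares)  # ~9.0% after full issuance

print(f"~{vs_current:.1%} of current shares; ~{vs_fully_issued:.1%} fully issued")
```

Either way the stake lands in the ~10% ballpark cited in coverage of the deal, which is why the warrant is economically significant rather than symbolic.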
These designs provoked a fresh charge of “circular funding,” because capital flows, equity, and capacity commitments interlock across the same few firms. Yet the underlying demand for AI compute and grid-scale power appears to justify unprecedented build-outs that could dwarf prior tech cycles (Bloomberg, 2025).
What “circular” really means—and why critics care
Circular financing here refers to arrangements in which a supplier helps fund a customer’s capacity (or vice versa) while locking in long-term offtake—in this case, AI chips, racks, and data-center GW. For example, NVIDIA’s staged investment into OpenAI accompanies OpenAI’s multi-GW commitment to NVIDIA systems; AMD’s share warrants to OpenAI vest as OpenAI buys AMD capacity. Critics worry this can inflate apparent demand, concentrate market power, or mask unit economics if equity consideration substitutes for cash margins in the short run. Proponents counter that the capital intensity and supply-chain risk of AI make vendor-financed scale-up rational—similar in spirit to historic semiconductor pre-payments, foundry reservations, and long-term capacity contracts in other network industries (Bloomberg, 2025).
A fair reading is that both deals are additive to pre-existing cloud capacity (Azure, OCI/Oracle, CoreWeave, etc.), not substitutes. The 10 GW and 6 GW build-outs explicitly sit on top of earlier commitments, emphasizing that compute scarcity—not creative deal-making—remains the binding constraint.
Anatomy of the new mega-deals (facts, not hype)
NVIDIA ↔ OpenAI (10 GW; up to $100 B staged)
NVIDIA and OpenAI described a multi-year program to bring ~10 GW of NVIDIA AI infrastructure online, with NVIDIA investing up to $100 B in OpenAI progressively as each GW is deployed. NVIDIA’s CEO positioned this as the “largest computing project in history” and a catalyst for the “AI industrial revolution” (NVIDIA, 2025).
AMD ↔ OpenAI (6 GW; equity-linked)
AMD and OpenAI announced a 6 GW multi-generation agreement. The first 1 GW (based on Instinct MI450) is targeted for 2H 2026, with additional tranches to 6 GW. To align incentives, AMD issued OpenAI a warrant for up to 160 million AMD shares that vest with deployment milestones—a design that shares upside while tying equity to delivered compute (OpenAI, 2025; AMD, 2025).
Context: “Stargate” and multi-site build-outs
In parallel, OpenAI and partners expanded “Stargate”—a multi-site, multi-GW build program—with Oracle and SoftBank to $500 billion of planned infrastructure over time (public details emphasize scale rather than year-specific spend). The 10 GW NVIDIA plan is explicitly additive to prior cloud contracts, underscoring that aggregate demand continues to outstrip today’s infrastructure (OpenAI, 2025; Oracle/SoftBank/OpenAI, 2025).
The economics: why compute scarcity and power scarcity travel together
Two forces are colliding:
Compute demand is scaling faster than the supply chain can comfortably deliver.
The capex signals are unmistakable:
Alphabet guided ~$85 B of capex for 2025 (AI data centers, servers, and networking) (Reuters, 2025).
Microsoft spent over $30 B in capex in one recent quarter, with AI infrastructure a primary driver (Microsoft, 2025).
Meta raised its 2025 capex guidance to $66–$72 B to accelerate AI investments (Meta, 2025).
These figures are historic, outpacing the cloud boom of the late 2010s.
Electricity demand from data centers is re-rating power markets.
The IEA projects global data-center electricity use roughly doubling by 2026 (to a range of 620–1,050 TWh), with AI a major contributor. U.S. analyses by DOE/LBNL and the EIA similarly flag rapid growth and a commercial-sector mix shift toward computing loads (IEA, 2025; DOE/LBNL, 2024; EIA, 2025).
Implication: the 10–16 GW of incremental AI compute implied by the NVIDIA and AMD programs requires city-scale power build-outs, substation upgrades, and new generation (including nuclear and renewables) to avoid bottlenecks. Even optimistic edge-offloading and model-efficiency gains cannot fully offset the frontier-model appetite for dense, low-latency, high-availability power (Bloomberg, 2025).
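As a rough back-of-envelope check on that claim, the combined 16 GW can be converted into annual energy and compared against the IEA’s projected data-center range. The ~90% load factor below is an illustrative assumption, not a disclosed figure:

```python
# Back-of-envelope: annual energy for the combined NVIDIA (10 GW) + AMD (6 GW) programs.
HOURS_PER_YEAR = 24 * 365          # 8,760 hours
gw_committed = 10 + 6              # GW, per the two announcements
load_factor = 0.9                  # assumed average utilization (illustrative only)

twh_per_year = gw_committed * HOURS_PER_YEAR * load_factor / 1_000
share_low = twh_per_year / 1_050   # vs. the top of the IEA 2026 range (620–1,050 TWh)
share_high = twh_per_year / 620    # vs. the bottom of that range

print(f"~{twh_per_year:.0f} TWh/yr, i.e. {share_low:.0%}–{share_high:.0%} of the IEA range")
```

Under these assumptions the two programs alone would consume on the order of 126 TWh per year—roughly a tenth to a fifth of the IEA’s entire projected global data-center load, which is why power, not just silicon, is the binding constraint.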
Are we in a bubble—or just early in a long infrastructure super-cycle?
Bubble claims rest on three points: (i) a narrow leadership group (mega-cap tech plus a handful of AI platform firms), (ii) valuation expansion ahead of measured cash returns, and (iii) circular financing optics. The counter-case is that the productivity and revenue effects of AI are already measurable and spreading:
Peer-reviewed evidence: Generative AI at Work (Quarterly Journal of Economics) shows ~14–15% average productivity gains in a large-scale call-center deployment, with outsized benefits for less-experienced workers—consistent with diffusion of best practices via AI assistants (Brynjolfsson, Li, & Raymond, 2025).
Sector-level signals: Developers using AI coding tools (e.g., Copilot-style agents) complete tasks substantially faster in controlled experiments, a leading indicator for software productivity as a profit driver. (While not peer-reviewed in all cases, multiple studies and working papers converge on meaningful time savings for coding tasks.)
Index concentration context: The S&P 500 has indeed become more concentrated in its largest constituents; however, concentration has waxed and waned across tech cycles, and the current phase aligns with a capex-led diffusion of AI into broader sectors. Reuters and S&P DJI commentary highlight the trend and the market’s focus on firms with accelerating AI revenue and capex discipline.
Bottom line: Evidence suggests we’re in the investment phase of a long compute-and-power build-out, with measurable productivity gains already accruing in real workplaces. That doesn’t eliminate misallocation risk, but it weakens the claim that demand is purely circular or speculative (Brynjolfsson et al., 2025).
What the deal structures actually optimize
Supply-chain certainty
Long-dated offtake and staged equity pull forward capacity (chips, packaging, racks, networking) at scale. For NVIDIA, coupling capital with committed offtake protects utilization; for OpenAI, it reduces time-to-compute. For AMD, equity-linked warrants de-risk shareholder value creation if OpenAI’s deployments ramp on schedule (NVIDIA, 2025).
Technology diversification
OpenAI’s multi-vendor posture (NVIDIA for 10 GW; AMD for 6 GW) lowers single-supplier risk and exploits workload niches (memory- vs. compute-heavy inference, price/performance trade-offs, rack-scale architectures). The AMD agreement’s multi-generation scope (MI450, then successors) bakes roadmap feedback directly into future parts (OpenAI, 2025).
Capital efficiency under scarcity
When grid interconnects, permitting, and advanced-node packaging are scarce, the binding constraint is timely access, not headline capex alone. Equity-linked milestones, capacity pre-pays, and multi-site roll-outs reduce execution risk compared with ad-hoc, spot-market sourcing (OpenAI, 2025).
The power and policy angle: grids, generation, and location
As compute scales, electricity becomes the new oil—price, carbon intensity, and reliability matter. The EIA AEO 2025 projects computing loads rising from ~8% to ~20% of U.S. commercial electricity by 2050; DOE/LBNL warns data-center load could double or triple by 2028 without efficiency breakthroughs. Expect to see:
Co-siting with firm, low-carbon generation (e.g., nuclear uprates, SMRs where feasible) and high-capacity renewables.
Grid-enhancing technologies and transmission upgrades near hyperscale clusters.
Heat-reuse, water-saving cooling, and AI-aware demand response as standard design elements (EIA, 2025; DOE/LBNL, 2024).
Policy takeaway: permitting reform, interconnection queues, and standards for energy-efficient AI will shape where these 16 GW land and how fast they deliver useful compute (EIA, 2025).
Investor lens: returns, risks, and what to watch
Revenue realization vs. capex curves. Follow usage-based billing, inference monetization, and enterprise AI adoption. Firms showing accelerating AI revenue (not just capex) have been rewarded by markets; those slow to deploy have lagged (Reuters, 2025).
Gross margins and pricing power. NVIDIA’s data-center margins and product ASPs signal persistent scarcity; AMD’s design wins and software-stack maturity (ROCm, compilers, libraries) will determine share capture as the 6 GW rolls out.
Execution milestones. For AMD, warrant tranches tied to 1 GW in 2H 2026 and subsequent scale are observable checkpoints; for NVIDIA, per-GW staging converts infrastructure delivery into equity support at OpenAI over time (OpenAI, 2025; NVIDIA, 2025).
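The per-GW staging can be made concrete with a small sketch. The actual tranche schedule has not been disclosed; equal $10B-per-GW tranches are purely an assumption for illustration:

```python
# Hypothetical equal-tranche staging of NVIDIA's up-to-$100B investment across 10 GW.
total_usd_b, total_gw = 100, 10
per_gw_usd_b = total_usd_b / total_gw   # $10B per GW under the equal-split assumption

# Cumulative investment observable at each deployment checkpoint.
cumulative = [(gw, gw * per_gw_usd_b) for gw in range(1, total_gw + 1)]
for gw, invested in cumulative[:3]:
    print(f"GW {gw}: cumulative ${invested:.0f}B")
```

The point of the structure, whatever the real tranche sizes, is that capital only flows as compute is actually delivered, giving investors observable milestones rather than a single up-front wager.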
Conclusion: not a perpetual motion machine—an aggressive answer to genuine scarcity
The appearance of circularity conceals a more straightforward reality: AI systems need orders-of-magnitude more compute and reliable power than today’s grids and supply chains can furnish. Capital is being engineered—via staged investments, equity warrants, and long-term offtake—to accelerate delivery. Whether one celebrates or questions this design, the fact pattern is clear: 10 GW (NVIDIA) + 6 GW (AMD) for OpenAI, on top of hyperscale cloud programs, represents a multi-hundred-billion-dollar bet that AI productivity gains and new services will foot the bill. Early peer-reviewed evidence on productivity—and the sheer scale of capex and power planning—suggests this is less a bubble than the front end of a decade-long infrastructure super-cycle (NVIDIA, 2025; OpenAI, 2025; Bloomberg, 2025).
As I mentioned at the beginning, I am a long-term shareholder of NVDA and AMD, so my views are unavoidably biased and bullish. This is not financial advice; please do your own due diligence.
Future-Proof Your Wealth in the AI Infrastructure Super-Cycle — With a Singapore Property Strategist
When OpenAI × NVIDIA × AMD commit $100B–$500B to compute and power, capital, talent, and families move. In moments like this, you want a real estate advisor who can connect AI capex, energy constraints, rates, geopolitics, and markets—and translate them into clear, property-first decisions in Singapore.
I’m a Singapore-based Real Estate Salesperson with deep experience in macroeconomics, asset allocation, equities/crypto trading, and formal proficiency in Singapore Land & Business Law, Statutes and Legislation. As an SAF Officer (OC, Captain), I operate with discipline, discretion, and mission-level execution.
Every day, I dedicate hours to research and writing—including analyses on the OpenAI × NVIDIA × AMD “circular funding” debate—so your portfolio choices rest on independent due diligence, not noise.
Why engage me now
Cross-asset intelligence → property outcomes
I link AI data-centre capex, power availability, interest-rate paths, and policy into entry timing, sub-market & unit selection, financing mix, and risk controls—so your property sleeve strengthens your overall portfolio.
Stable core, compounding edge
Add Singapore real estate as your lower-volatility anchor—with rental yield, dividend-like cash flow, and long-run appreciation potential—to complement higher-beta assets across equities and digital markets.
Institutional-grade process
From PSF & breakeven to lease covenants, sensitivity tests (rates/vacancy/maintenance), and hold-vs-IRR scenarios—you see the numbers before we move.
Who I serve
International, Chinese, Southeast Asian & Singapore clients—UHNW individuals, family offices, and institutions—including immigration- and education-adjacent needs (accompanying-study parents, overseas study, family offices) with discreet, end-to-end service.
How we’ll work
Discovery – goals, constraints, risk budget, horizon
Strategy – districts/projects, unit typologies, yield & IRR targets
Numbers-first – scenario models and downside controls
Execution – shortlist → viewings → negotiation → documentation → asset management
Ongoing intelligence – concise updates on AI-driven demand, policy, supply, and rates
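The “Numbers-first” step above can be illustrated with a minimal yield-and-breakeven sketch. All figures below are hypothetical placeholders, not market data or a recommendation:

```python
# Hypothetical rental-yield and breakeven-occupancy sketch for a single unit.
price = 2_000_000        # purchase price, SGD (hypothetical)
monthly_rent = 6_000     # gross monthly rent, SGD (hypothetical)
annual_costs = 30_000    # property tax, maintenance, insurance, SGD (hypothetical)

annual_rent = monthly_rent * 12
gross_yield = annual_rent / price                  # headline yield before costs
net_yield = (annual_rent - annual_costs) / price   # yield after holding costs
breakeven_occupancy = annual_costs / annual_rent   # occupancy needed to cover costs

print(f"gross {gross_yield:.1%}, net {net_yield:.1%}, breakeven {breakeven_occupancy:.0%}")
```

The same three inputs can be stress-tested by varying rent, vacancy, and costs—which is the substance of the sensitivity tests mentioned above.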
For clients from China and Southeast Asia
Against the backdrop of the AI infrastructure super-cycle, global capital and talent are moving faster than ever. Singapore real estate can serve as the stable core of your portfolio: steady rental returns plus long-term appreciation potential. I devote time every day to rigorous due diligence and writing, providing tailored solutions for UHNW individuals, family offices, and institutions, and serving needs such as accompanying-study parents, overseas study, and family offices. You are welcome to book a one-on-one private consultation.
Ready to act—calmly and confidently?
Message me for a private consultation. Let’s add a stable, income-producing Singapore property to your cross-asset plan—so you capture the upside of the AI decade without unnecessary volatility.
References (APA)
Bloomberg. (2025, October 7). OpenAI, Nvidia fuel $1 trillion AI market with web of circular deals. Bloomberg News.
Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at work. The Quarterly Journal of Economics, 140(2), 889–941.
International Energy Agency. (2025). Electricity 2024–2026: Data centres and AI. IEA.
Meta Platforms, Inc. (2025). Q2 2025 earnings materials and capex outlook.
Microsoft Corporation. (2025). FY2025 quarterly results and capex discussion.
NVIDIA Corporation. (2025). NVIDIA and OpenAI announce the biggest AI infrastructure project in history (Newsroom release).
NVIDIA Corporation. (2025). NVIDIA and OpenAI announce ‘the biggest AI infrastructure project in history’ (Company blog).
OpenAI. (2025, October 6). AMD and OpenAI announce strategic partnership to deploy 6 gigawatts of AMD GPUs(Press release).
Oracle / SoftBank / OpenAI. (2025). OpenAI expands Stargate with Oracle and SoftBank; $500B multi-site plan.
Reuters. (2025). Alphabet to spend $85B on capex in 2025; Microsoft quarterly capex tops $30B; Meta lifts 2025 capex to $66–$72B.
U.S. Department of Energy (DOE). (2024). 2024 Report on U.S. Data Center Energy Use (LBNL).
U.S. Energy Information Administration (EIA). (2025). Annual Energy Outlook 2025; Short-Term Energy Outlook; Today in Energy: Computing loads in commercial electricity.
In-text citations used (examples)
NVIDIA’s 10 GW and up to $100 B: (NVIDIA, 2025).
AMD’s 6 GW and equity warrants: (OpenAI, 2025; AMD, 2025).
Circular-funding framing: (Bloomberg, 2025).
Energy outlooks and data-center load: (IEA, 2025; DOE/LBNL, 2024; EIA, 2025).
Peer-reviewed productivity evidence: (Brynjolfsson et al., 2025).
