
Chinese translation: 刘进龙; reviewed by 汪皓
Two months after Nvidia and OpenAI unveiled their eye-popping plan to deploy at least 10 gigawatts of Nvidia systems—and up to $100 billion in investments—the chipmaker now admits the deal isn’t actually final.
Speaking Tuesday at the UBS Global Technology and AI Conference in Scottsdale, Nvidia EVP and CFO Colette Kress told investors that the much-hyped OpenAI partnership is still at the letter-of-intent stage.
“We still haven’t completed a definitive agreement,” Kress said when asked how much of the 10-gigawatt commitment is actually locked in.
That’s a striking clarification for a deal that Nvidia CEO Jensen Huang once called “the biggest AI infrastructure project in history.” Analysts had estimated that the deal could generate as much as $500 billion in revenue for the AI chipmaker.
When the companies announced the partnership in September, they outlined a plan to deploy millions of Nvidia GPUs over several years, backed by up to 10 gigawatts of data center capacity. Nvidia pledged to invest up to $100 billion in OpenAI as each tranche comes online. The news helped fuel an AI-infrastructure rally, sending Nvidia shares up 4% and reinforcing the narrative that the two companies are joined at the hip.
Kress’s comments suggest something more tentative, even months after the framework was released.
A megadeal that isn’t in the numbers—yet
It’s unclear why the deal hasn’t been executed, but Nvidia’s latest 10-Q offers clues. The filing states plainly that “there is no assurance that any investment will be completed on expected terms, if at all,” referring not only to the OpenAI arrangement but also to Nvidia’s planned $10 billion investment in Anthropic and its $5 billion commitment to Intel.
In a lengthy “Risk Factors” section, Nvidia spells out the fragile architecture underpinning megadeals like this one. The company stresses that the story is only as real as the world’s ability to build and power the data centers required to run its systems. Nvidia must order GPUs, HBM memory, networking gear, and other components more than a year in advance, often via non-cancelable, prepaid contracts. If customers scale back, delay financing, or change direction, Nvidia warns it may end up with “excess inventory,” “cancellation penalties,” or “inventory provisions or impairments.” Past mismatches between supply and demand have “significantly harmed our financial results,” the filing notes.
The biggest swing factor seems to be the physical world: Nvidia says the availability of “data center capacity, energy, and capital” is critical for customers to deploy the AI systems they’ve verbally committed to. Power build-out is described as a “multiyear process” that faces “regulatory, technical, and construction challenges.” If customers can’t secure enough electricity or financing, Nvidia warns, it could “delay customer deployments or reduce the scale” of AI adoption.
Nvidia also admits that its own pace of innovation makes planning harder. It has moved to an annual cadence of new architectures—Hopper, Blackwell, Vera Rubin—while still supporting prior generations. It notes that a faster architecture pace “may magnify the challenges” of predicting demand and can lead to “reduced demand for current generation” products.
These admissions nod to the warnings of AI bears such as Michael Burry, the investor of Big Short fame, who has alleged that Nvidia and other chipmakers are overextending the useful lives of their chips and that the chips’ eventual depreciation will cause breakdowns in the investment cycle. Huang, however, has said that Nvidia chips from six years ago are still running at full pace.
The company also nodded explicitly to past boom-bust cycles tied to “trendy” use cases like crypto mining, warning that new AI workloads could create similar spikes and crashes that are hard to forecast and can flood the gray market with secondhand GPUs.
Despite the lack of a deal, Kress stressed that Nvidia’s relationship with OpenAI remains “a very strong partnership,” more than a decade old. OpenAI, she said, considers Nvidia its “preferred partner” for compute. But she added that Nvidia’s current sales outlook does not rely on the new megadeal.
The roughly $500 billion of Blackwell and Vera Rubin system demand Nvidia has guided for 2025–26 “doesn’t include any of the work we’re doing right now on the next part of the agreement with OpenAI,” she said. For now, OpenAI’s purchases flow indirectly through cloud partners like Microsoft and Oracle rather than through the new direct arrangement laid out in the letter of intent.
OpenAI “does want to go direct,” Kress said. “But again, we’re still working on a definitive agreement.”
Nvidia insists the moat is intact
On competitive dynamics, Kress was unequivocal. Markets lately have been cheering Google’s TPU—which serves a narrower range of use cases than the GPU but requires less power—as a potential competitor to Nvidia’s GPU. Asked whether those types of chips, called ASICs (application-specific integrated circuits), are narrowing Nvidia’s lead, she responded: “Absolutely not.”
“Our focus right now is helping all different model builders, but also helping so many enterprises with a full stack,” she said. Nvidia’s defensive moat, she argued, isn’t any individual chip but the entire platform: hardware, CUDA, and a constantly expanding library of industry-specific software. That stack, she said, is why older architectures remain heavily used even as Blackwell becomes the new standard.
“Everybody is on our platform,” Kress said. “All models are on our platform, both in the cloud as well as on-prem.”
