HPE Bets on AI with NVIDIA Factories and AMD Supercomputing
Company deepens NVIDIA partnership to build 'sovereign AI factories' while also launching a new AMD-powered open-architecture system, targeting the full spectrum of enterprise AI spending.
Hewlett Packard Enterprise (HPE) on Tuesday unveiled a sweeping expansion of its artificial intelligence portfolio, deepening its partnership with NVIDIA to build turnkey 'AI Factories' while simultaneously launching a new rack-scale supercomputing architecture with AMD. The dual-front strategy marks HPE's most aggressive move yet to capture a larger share of the burgeoning enterprise AI infrastructure market by offering customers a broad choice of technologies.
Despite the significance of the announcements, HPE shares were muted in Tuesday trading, closing down slightly at $21.92. The tepid reaction reflects a broader cooling toward the stock, which has fallen more than 10% over the past month, even as analysts maintain an average price target of $26.50.
The centerpiece of the expanded collaboration with NVIDIA is a joint initiative to build and deploy secure, sovereign AI solutions. These 'AI Factories' are designed as complete, pre-configured systems that allow governments and enterprises to develop their own AI capabilities while maintaining strict control over their data. To accelerate this, the companies announced the first AI Factory Lab in Grenoble, France, which will allow customers to test AI workloads in a sovereign, air-cooled environment within the European Union.
"Every nation and enterprise needs to own the production of its intelligence," said Jensen Huang, founder and CEO of NVIDIA, in a statement. "We're transforming the data center into an AI factory — a manufacturing plant for the new industrial revolution — and by deploying the full stack of NVIDIA accelerated computing and Spectrum-X Ethernet networking with HPE, we're creating the template for sovereign AI."
As part of the expanded offering, HPE announced it will integrate NVIDIA's latest RTX PRO 6000 Blackwell Server Edition GPUs into its HPE Private Cloud AI solution. The company will also offer the NVIDIA GB200 NVL4 by HPE, a compact system combining Grace CPUs and Blackwell GPUs aimed at high-performance large language model (LLM) inference.
"Together, HPE and NVIDIA are showcasing our unique strengths to deliver true full-stack AI infrastructures that provide enterprises with a greater range of performance for more diverse workloads," said Antonio Neri, president and CEO of HPE.
In a parallel move signaling its intent to support open-standard ecosystems, HPE also revealed its first AMD-powered AI architecture, codenamed 'Helios'. This rack-scale system is designed for cloud service providers and large enterprises building massive AI models, offering an alternative to more proprietary systems.
The Helios architecture integrates a full stack of AMD technologies, including its EPYC CPUs and Instinct MI455X GPUs. A key innovation is its use of an industry-first scale-up Ethernet fabric, developed with Broadcom and based on the open Ultra Accelerator Link over Ethernet (UALoE) standard. According to HPE, the fabric provides a high-bandwidth, low-latency network for AI workloads over standard Ethernet, reducing vendor lock-in.
Each Helios rack will connect 72 AMD Instinct GPUs, delivering up to 2.9 exaflops of AI performance. The system is built on specifications from the Open Compute Project (OCP), reinforcing HPE's appeal to customers who demand more flexibility and open standards in their data centers. HPE plans to make the AMD-powered 'Helios' solution available worldwide in 2026.
HPE's dual-track approach—deepening its lucrative partnership with the dominant AI chipmaker, NVIDIA, while simultaneously cultivating an open-standard alternative with AMD and Broadcom—positions it as a comprehensive technology provider in the AI arms race. While competitors have also forged alliances, HPE's strategy of offering fully integrated yet distinct ecosystems from both leading chip designers is a notable differentiator.
The move allows HPE to cater to the entire market, from enterprises seeking the performance and robust software stack of NVIDIA's platform to cloud providers and research institutions that may prefer the flexibility and open nature of the AMD-powered Helios system.
While the market's immediate reaction was subdued, HPE's strategic offensive aims to reverse its recent stock slide and capitalize on the next wave of AI investment, which is expected to shift from cloud hyperscalers to enterprise and sovereign government deployments. By providing the foundational infrastructure for both established and emerging AI ecosystems, HPE is betting it can become the go-to vendor for organizations building their own intelligence factories.