Nvidia gains from Samsung HBM4 milestone
Memory breakthrough validates 2026 Vera Rubin AI platform launch
Samsung Electronics has begun commercial shipments of its fourth-generation high-bandwidth memory, providing a critical component for NVIDIA's next-generation Vera Rubin AI platform that is set to launch in the second half of 2026.
The South Korean memory manufacturer achieved an industry first by shipping HBM4 chips that run at 11.7 gigabits per second per pin—46% faster than the baseline standard specification—while improving power efficiency by 40%. Each memory stack delivers up to 3.3 terabytes per second of bandwidth, with capacities ranging from 24GB to 36GB achieved through 12-layer die stacking.
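As a back-of-envelope check, the per-stack bandwidth follows from pin speed multiplied by interface width. This is a sketch, not figures from the article: the 2048-bit-per-stack interface width is an assumption drawn from the JEDEC HBM4 specification.

```python
# Rough sanity check of the quoted HBM4 bandwidth figures.
# Assumption: HBM4 uses a 2048-bit interface per stack (per the JEDEC
# HBM4 standard, which doubles HBM3's 1024-bit width) — not stated above.
HBM4_BUS_WIDTH_BITS = 2048

def stack_bandwidth_tbps(pin_speed_gbps: float) -> float:
    """Per-stack bandwidth in TB/s: pin speed (Gb/s) x bus width / 8 bits per byte."""
    return pin_speed_gbps * HBM4_BUS_WIDTH_BITS / 8 / 1000

print(f"{stack_bandwidth_tbps(11.7):.2f} TB/s at 11.7 Gbps")  # prints 3.00 TB/s
print(f"{stack_bandwidth_tbps(13.0):.2f} TB/s at 13.0 Gbps")  # prints 3.33 TB/s
```

Under that assumption, the "up to 3.3 TB/s" figure lines up with the 13 Gbps upper bound Samsung has cited, while the shipping 11.7 Gbps speed works out to roughly 3.0 TB/s per stack.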
Samsung's entry into HBM4 production positions it as a verified supplier for NVIDIA's Vera Rubin platform, alongside SK Hynix, which is expected to supply approximately 70% of the memory for NVIDIA's VR200 NVL72 systems. Samsung is projected to provide the remaining 30% of HBM4 for the platform.
The Vera Rubin architecture, unveiled at CES 2026, represents NVIDIA's most ambitious AI computing system to date. The platform will feature Rubin GPUs equipped with HBM4 memory offering 288GB capacity and 22TB/s of memory bandwidth, paired with Vera CPUs in a rack-scale design optimized for massive AI workloads.
NVIDIA shares advanced 0.8% to $190.05 on Wednesday, bringing the chipmaker's market capitalization to $4.59 trillion. The stock remains below its 52-week high of $212.18 but has rallied significantly from its yearly low of $86.60. Analysts maintain a consensus price target of $253.62, representing roughly 33% upside from current levels, with 60 analysts rating the stock a buy or strong buy compared to just one sell recommendation.
The commercialization of HBM4 comes as NVIDIA faces unprecedented demand for its AI processors. Chief Executive Jensen Huang has confirmed the company is "sold out until 2026", indicating a substantial multi-year backlog for its Blackwell, Rubin, and H200 series chips. This demand is being fueled by hyperscale cloud providers, whose capital expenditure is projected to increase by approximately $145 billion, or 25%, in 2026.
Samsung projects that HBM sales will more than triple in 2026 compared to 2025, underscoring the explosive growth trajectory of AI datacenter infrastructure. The company plans to begin HBM4E sampling in the second half of 2026, with custom HBM samples for customers expected in 2027.
The validation of Samsung's HBM4 technology removes a potential supply chain bottleneck for NVIDIA's Vera Rubin rollout. Earlier concerns had emerged that NVIDIA might need to lower required HBM4 speeds for Rubin chips to ensure sufficient memory supply, but Samsung's successful commercial launch with 11.7 Gbps speeds—capable of reaching up to 13 Gbps—appears to have alleviated those concerns.
NVIDIA is scheduled to report fiscal fourth-quarter 2026 earnings on February 25, with analysts projecting revenue of around $65.5 billion, a 67% year-over-year increase, and earnings growth of 71%. The company's datacenter business, which accounted for the majority of its $187 billion in trailing 12-month revenue, continues to benefit from the shift from AI model training to inference workloads, the rise of agentic AI, and government investment in "sovereign AI" clouds.
The dual-supplier strategy for Vera Rubin memory, pairing Samsung's HBM4 with SK Hynix's production, gives NVIDIA greater supply chain resilience in a market where demand for AI computing power shows no signs of abating. With full-scale HBM4 deliveries expected from June 2026, aligning with the Vera Rubin platform rollout, NVIDIA appears positioned to maintain its dominance in the AI accelerator market through at least the next technology cycle.