Nvidia gains as Samsung secures HBM4 approval for Rubin AI platform
Korean chipmaker set to begin production next month, diversifying critical memory supply for next-generation accelerators
Samsung Electronics has secured qualification for its High Bandwidth Memory 4 (HBM4) chips for integration into Nvidia's forthcoming Vera Rubin AI platform, with production scheduled to begin next month, according to multiple industry reports.
The development, which follows months of testing and validation, marks a significant breakthrough for Samsung as it seeks to reestablish itself in the high-performance memory market after lagging behind rival SK Hynix in earlier HBM generations. Samsung's HBM4 modules deliver per-pin transfer speeds above 11 Gbps, surpassing the 8 Gbps baseline set by JEDEC, the semiconductor industry standards body.
The Korean chipmaker plans to use an internally sourced 4nm logic base die for HBM4 production, a strategic advantage that allows for more timely deliveries than competitors SK Hynix and Micron, both of which plan to source their logic dies from Taiwan Semiconductor Manufacturing Company. Production is set to commence in February 2026, with customer shipments for the Rubin platform expected to begin in August.
Nvidia officially unveiled its Rubin platform in early January 2026, positioning it as the successor to the current Blackwell architecture. The Rubin GPUs will feature up to 288 GB of HBM4 memory per accelerator, delivering an aggregate bandwidth of 22 TB/s, nearly three times the memory bandwidth of Blackwell. This represents a critical performance leap for AI training and inference workloads, which increasingly depend on memory bandwidth rather than raw compute power.
For Nvidia, which has faced supply chain constraints in recent quarters due to limited availability of high-bandwidth memory, Samsung's qualification serves to diversify a critical supplier base. The move reduces potential bottlenecks for the Rubin launch, which analysts anticipate will drive the next wave of AI infrastructure spending by major cloud providers and technology companies.
The news arrives as Nvidia maintains its position as the world's most valuable chipmaker with a market capitalization of $4.56 trillion, having gained 29.4% over the past year. The company's shares closed at $187.68 on Monday, up 1.6%, reflecting continued investor optimism despite broader market volatility.
Analysts remain overwhelmingly bullish on Nvidia's prospects. The stock carries a "Strong Buy" consensus from 60 analysts, with an average 12-month price target of $253.19, approximately 35% above current levels, according to data compiled as of January 24. Mizuho and Goldman Sachs analysts have recently raised their price targets, citing strong industry sentiment driven by memory controller advancements and robust demand for AI accelerators.
"The diversification of HBM supply is crucial for Nvidia as it scales the Rubin platform," analysts at Morgan Stanley noted in a recent report, projecting substantial profit growth for Samsung's memory business in 2026. The firm views the expanded supplier ecosystem as a positive development for the entire AI semiconductor supply chain.
The qualification of Samsung's HBM4 follows months of speculation about whether the Korean manufacturer could meet Nvidia's stringent quality and performance requirements. Earlier reports in August 2025 suggested Samsung had passed initial quality evaluations, but final approval for mass production was not confirmed until late January.
Samsung's entry into the HBM4 market intensifies competition among memory manufacturers, with SK Hynix, Samsung, and Micron all vying to supply components for next-generation AI accelerators from Nvidia, AMD, and other chipmakers. Analysts estimate that the HBM market will exceed $30 billion in 2026, driven primarily by demand for AI training and inference chips.
The Rubin platform is expected to feature prominently at Nvidia's GPU Technology Conference (GTC) in 2026, where both the Rubin AI chips and Samsung's HBM4 modules are expected to be showcased. Partner products based on the platform are anticipated to become available in the second half of 2026, with initial customer shipments projected around August.
Nvidia's continued dominance in AI accelerators has fueled extraordinary revenue growth, with trailing twelve-month revenue reaching $187.1 billion and quarterly revenue growth of 62.5% year-over-year. The company's profit margin stands at 53%, reflecting strong pricing power in a market where demand continues to outpace supply.
As the AI infrastructure buildout accelerates across major technology companies, securing reliable sources of high-bandwidth memory has become a strategic imperative. Samsung's qualification for Rubin marks a critical milestone in that effort, positioning both companies to capture what analysts expect will be a record year for AI semiconductor sales in 2026.