After Samsung Electronics chairman Jay Y. Lee met Nvidia boss Jensen Huang, the two announced a deal that will see the Korean outfit embed Nvidia’s chips across its entire production chain. The partnership signals Samsung’s intention to move deep into AI-powered manufacturing.
The GPUs will be spread throughout Samsung’s operations, from smartphone and semiconductor development to robotics. Everything will be connected in one vast network where AI constantly monitors, analyses, and optimises production in real time.
Samsung says it has achieved a more than 20-fold speed-up in computational lithography by running Nvidia's cuLitho and CUDA-X libraries in its optical proximity correction process. Both companies are also cooking up next-generation GPU-accelerated design tools to sharpen chipmaking further.
The partnership started when Samsung’s DRAM chips powered Nvidia’s early graphics cards, and Samsung’s foundry division still manufactures many of Nvidia’s current GPUs.
The Korean outfit said it is also developing HBM4 chips for Nvidia’s next wave of AI accelerators. These new parts, built with Samsung’s sixth-generation 10nm-class DRAM and 4nm logic base die, promise data speeds up to 11Gbps, comfortably ahead of JEDEC’s 8Gbps benchmark.
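Those pin speeds translate into per-stack bandwidth once multiplied across the memory interface. A rough back-of-envelope sketch, assuming HBM4's 2048-bit-per-stack interface width (a figure from the JEDEC HBM4 standard, not stated in the article):

```python
# Hedged back-of-envelope: peak per-stack HBM4 bandwidth at a given pin speed.
# Assumes a 2048-bit interface per stack (JEDEC HBM4); the 11 Gbps and 8 Gbps
# figures come from the article.

def hbm_bandwidth_gbytes(pin_speed_gbps: float, bus_width_bits: int = 2048) -> float:
    """Peak bandwidth in GB/s: pin speed (Gb/s) x bus width (bits) / 8 bits per byte."""
    return pin_speed_gbps * bus_width_bits / 8

samsung_claim = hbm_bandwidth_gbytes(11)  # Samsung's claimed 11 Gbps pin speed
jedec_base = hbm_bandwidth_gbytes(8)      # JEDEC's 8 Gbps baseline

print(f"Samsung 11 Gbps -> {samsung_claim:.0f} GB/s per stack")  # 2816 GB/s
print(f"JEDEC 8 Gbps    -> {jedec_base:.0f} GB/s per stack")     # 2048 GB/s
```

On those assumptions, the 3 Gbps pin-speed margin over the JEDEC baseline works out to roughly 770 GB/s of extra headroom per stack.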
Samsung plans to supply every type of memory Nvidia’s AI servers could want, including GDDR, HBM, and SOCAMM chips. It will use Nvidia’s Omniverse platform to build digital twins of its factories, effectively simulating manufacturing lines before they even start rolling.
To boost manufacturing and robotics, Samsung will deploy Nvidia’s RTX PRO 6000 Blackwell Server Edition and Jetson Thor platforms. These systems will power real-time AI reasoning, improve safety controls, and enhance robotic task execution.
The partnership will reach into telecoms, where the pair will develop AI-RAN technology for South Korean service providers. The project combines Samsung’s software-defined networking expertise with Nvidia GPUs to give local telecoms a powerful AI edge.