Nvidia Opens NVLink Fusion to Custom CPUs and ASICs: More Flexibility in the AI Ecosystem | Turtles AI
At Computex 2025, Nvidia introduced NVLink Fusion, a high-speed interconnect that enables the integration of custom CPUs and ASICs with Nvidia GPUs. This strategic opening aims to expand the AI ecosystem, providing greater flexibility in designing accelerated systems.
Key Points:
- NVLink Fusion: New interconnect that enables the integration of custom CPUs and ASICs with Nvidia GPUs.
- Collaborations: Partnerships with companies such as Fujitsu, Qualcomm, MediaTek, and Marvell to support NVLink Fusion.
- DGX Cloud Lepton: Platform that enables access to Nvidia GPUs through a decentralized marketplace.
- Ecosystem Expansion: Nvidia opens its AI infrastructure to third-party components, while maintaining control over the interconnect.
At Computex 2025 in Taipei, Nvidia announced NVLink Fusion, an innovative interconnect technology that enables the integration of custom CPUs and ASICs with Nvidia GPUs. This strategic move represents a significant opening of the Nvidia ecosystem, providing greater flexibility in designing AI-accelerated systems.
NVLink Fusion enables high-speed interconnection between Nvidia GPUs and custom silicon, offering up to 14 times the bandwidth of PCIe 5.0. The technology supports two main configurations: the first connects custom CPUs to Nvidia GPUs; the second integrates Nvidia Grace CPUs with non-Nvidia accelerators. This flexibility is made possible either by licensing NVLink IP for integration into partner designs or through interconnect chiplets provided by Nvidia.
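The "14 times PCIe 5.0" claim can be sanity-checked with rough published figures. The numbers below are assumptions drawn from general specs (PCIe 5.0 x16 at roughly 128 GB/s bidirectional, per-GPU NVLink in the Blackwell generation at roughly 1.8 TB/s), not from the article itself:

```python
# Rough sanity check of the "14x PCIe 5.0" bandwidth claim.
# Assumed figures (not stated in the article):
#   PCIe 5.0 x16: ~64 GB/s per direction -> ~128 GB/s bidirectional
#   NVLink (Blackwell generation): ~1.8 TB/s total per GPU

PCIE5_X16_GBPS = 128   # assumed bidirectional PCIe 5.0 x16 bandwidth, GB/s
NVLINK_GBPS = 1800     # assumed per-GPU NVLink bandwidth, GB/s

ratio = NVLINK_GBPS / PCIE5_X16_GBPS
print(f"NVLink vs PCIe 5.0 x16: ~{ratio:.1f}x")  # ~14.1x
```

Under these assumptions the ratio works out to about 14, matching Nvidia's stated figure.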
Partners who have joined the NVLink Fusion initiative include Fujitsu, Qualcomm, MediaTek, Marvell, Alchip, Astera Labs, Synopsys, and Cadence. These collaborations mark an important step toward more modular and interoperable AI systems. However, Nvidia retains control over the interconnect: using NVLink Fusion still requires that an Nvidia CPU or GPU be part of the system.
In parallel, Nvidia launched DGX Cloud Lepton, a platform that provides access to Nvidia GPUs through a decentralized marketplace. Lepton functions like a ride-sharing app for computing power, connecting developers to GPU providers such as CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda, Nscale, SoftBank, and Yotta. The initiative aims to simplify access to AI computing resources, offering greater flexibility and scalability.
With NVLink Fusion and DGX Cloud Lepton, Nvidia is expanding its AI ecosystem, offering more flexible and modular solutions to meet the growing needs of the industry.