New Microsoft AI Solutions: Power and Scalability with Blackwell and AMD EPYC | Turtles AI
Microsoft is expanding its AI offerings with new virtual machines powered by NVIDIA Blackwell chips and AMD EPYC “Genoa” CPUs, promising extraordinary performance and scalable infrastructure for high-intensity workloads. Collaboration with AMD and NVIDIA is making the Azure platform one of the most advanced environments for high-performance computing.
Key Points:
- Introducing Azure ND GB200 V6 VMs with NVIDIA Blackwell Superchip
- Partnership with AMD for new EPYC Genoa CPUs, optimized for HPC
- Up to 20x faster Azure HBv5 VM performance over previous generation
- Advanced scalability with technologies like NVLink, InfiniBand, and HBM3
Microsoft, a key player in the evolution of cloud services, is taking a significant step forward in AI processing by integrating the latest hardware into its Azure portfolio. At the Microsoft Ignite event, the company presented some of its most significant innovations, including the integration of the NVIDIA Blackwell platform via the GB200 superchip, which combines Blackwell GPUs with a Grace CPU. The Azure ND GB200 V6 virtual machine is the most notable example of this new approach. Equipped with two Grace Blackwell superchips per server, each pairing two Blackwell GPUs with one Grace CPU, the platform is designed for highly scalable AI workloads, with interconnection between the components provided by NVIDIA’s NVLink interface. In this configuration, up to 72 Blackwell GPUs can be joined in a single rack-scale NVLink domain, while NVIDIA’s InfiniBand fabric handles communication between racks to ensure exceptional scalability. The Blackwell-based VMs are still in private preview, but the company plans to make them available to a wider audience in the coming months.
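To see how those figures fit together, here is a minimal Python sketch that models the topology described above purely as arithmetic. The per-superchip and per-server counts come from the article; the class names and the derived servers-per-rack figure are for illustration only and are not official Azure specifications.

```python
# Minimal sketch (assumptions noted in comments): models the ND GB200 V6
# topology described above purely as arithmetic, using the figures quoted
# in the article. Names and structure are illustrative, not an Azure API.
from dataclasses import dataclass

@dataclass
class GB200Superchip:
    grace_cpus: int = 1      # one Grace CPU per superchip
    blackwell_gpus: int = 2  # two Blackwell GPUs per superchip

@dataclass
class NDGB200v6Server:
    superchips: int = 2      # two GB200 superchips per server, per the article

    @property
    def gpus(self) -> int:
        return self.superchips * GB200Superchip().blackwell_gpus

NVLINK_DOMAIN_GPUS = 72      # GPUs joined by NVLink in one rack-scale domain

server = NDGB200v6Server()
servers_per_domain = NVLINK_DOMAIN_GPUS // server.gpus
print(f"GPUs per server: {server.gpus}")                          # 4
print(f"Servers per 72-GPU NVLink domain: {servers_per_domain}")  # 18
```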
Alongside these solutions, Microsoft presented the Azure HBv5 virtual machine, designed specifically for compute- and memory-intensive applications such as high-performance computing (HPC). Here the company has chosen to rely on fourth-generation AMD EPYC "Genoa" CPUs, a choice that further improves the performance of Azure's cloud offerings. Features such as HBM3 (High Bandwidth Memory) and an architecture that exposes up to 352 cores per server make these machines particularly suitable for workloads that demand both high memory bandwidth and a very large core count. The HBM3 memory delivers an aggregate bandwidth of 6.9 terabytes per second, supporting extremely demanding workloads, while AMD's Infinity Fabric provides efficient communication between the different components of the system. HBv5 machines are designed to offer best-in-class performance, delivering up to 20 times the processing speed of previous solutions, which further underscores the power of the new architecture.
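As a rough illustration of what those HBv5 numbers imply, the short sketch below divides the quoted 6.9 TB/s of HBM3 bandwidth across the quoted maximum of 352 cores. The resulting per-core figure is a back-of-the-envelope derivation for this article, not a number published by Microsoft.

```python
# Back-of-envelope sketch using the HBv5 figures quoted above; the per-core
# value is derived here for illustration only.
HBM3_BANDWIDTH_TBPS = 6.9    # aggregate HBM3 bandwidth per VM (TB/s), per the article
MAX_CORES = 352              # maximum EPYC cores per server, per the article

per_core_gbps = HBM3_BANDWIDTH_TBPS * 1000 / MAX_CORES
print(f"~{per_core_gbps:.1f} GB/s of memory bandwidth per core")  # ~19.6 GB/s
```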
Microsoft’s work is not limited to raw performance, however; it also focuses on efficient resource management and system scalability. With technologies such as NVIDIA Quantum-2 InfiniBand and a 160 Gbps connection via the second-generation Azure Boost NIC, the new Azure solutions are ready to support workloads distributed across hundreds of thousands of CPU cores. The Azure platform is designed to scale easily, allowing companies to run their AI and HPC applications without compromising on speed or flexibility.
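For teams that want to check which HPC- and AI-oriented VM families are already visible in their subscription and region, the sketch below uses the Azure SDK for Python (azure-identity and azure-mgmt-compute). The region, the subscription placeholder, and the assumption that the new HBv5 and ND GB200 V6 sizes will surface under Azure's existing "Standard_HB" and "Standard_ND" prefixes once generally available are all assumptions, not confirmed naming.

```python
# Sketch: list HPC/AI-oriented VM sizes visible in a region with the Azure SDK
# for Python. The "Standard_HB"/"Standard_ND" prefix filter follows Azure's
# existing naming convention; the exact HBv5 / ND GB200 V6 size names are an
# assumption until the SKUs are generally available.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
client = ComputeManagementClient(credential, subscription_id="<subscription-id>")

# Query VM sizes offered in a given region (the location is an example).
sizes = client.virtual_machine_sizes.list(location="eastus")
hpc_like = [s.name for s in sizes if s.name.startswith(("Standard_HB", "Standard_ND"))]

for name in sorted(hpc_like):
    print(name)
```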
Microsoft continues to strengthen its position as a major player in cloud services, bringing AI and high-performance computing solutions to the market that aim to offer extraordinary results in terms of efficiency, power and scalability.