HBM4: New Frontiers for High-Performance Memory | Turtles AI

HBM4: New Frontiers for High-Performance Memory
Next-generation memory promises faster speeds and higher bandwidth, transforming the AI and data center industry by 2026
Isabella V2 October 2024

HBM4 memory, aimed at AI and data center workloads, will offer speeds of up to 10 Gb/s per pin and a bandwidth of 2.56 TB/s per stack. Production will begin between 2025 and 2026, with support from NVIDIA and potentially AMD.

Key points:

  •  Speeds up to 10 Gb/s per pin.
  •  Bandwidth of 2.56 TB/s per stack.
  •  Mass production expected from SK Hynix and Samsung by the end of 2025.
  •  NVIDIA Rubin GPUs will support HBM4 in 2026.

The emerging HBM4 specifications are gradually outlining the future of the sector, with a clear emphasis on increased performance and capacity, particularly for the AI and data center market. Rambus recently unveiled its HBM4 memory controller, which promises speeds of up to 10 Gb/s per pin, an improvement over the HBM3 and HBM3E solutions currently on the market. Per-stack bandwidth will increase significantly, to an estimated 2.56 TB/s, more than double that of current HBM3E solutions, which top out at around 1.229 TB/s with a pin speed of 9.6 Gb/s. HBM4 will keep the 16-layer stack design with a maximum capacity of 64 GB per stack, but will add new features such as Error Correcting Code (ECC), Read-Modify-Write (RMW), and error scrubbing, guaranteeing greater reliability and stability in data processing.
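The per-stack bandwidth figures follow directly from the per-pin speed and the width of the memory interface. As a minimal sketch, assuming a 1024-bit interface for HBM3E and a 2048-bit interface for HBM4 (widths widely reported for these standards but not stated in this article), the cited numbers can be reproduced:

```python
def stack_bandwidth_tbps(pin_gbps: float, bus_width_bits: int) -> float:
    """Per-stack bandwidth in TB/s: pin speed (Gb/s) x interface width (bits),
    divided by 8 bits per byte, then by 1000 to go from GB/s to TB/s."""
    return pin_gbps * bus_width_bits / 8 / 1000

# Interface widths are assumptions, not taken from the article:
hbm3e = stack_bandwidth_tbps(9.6, 1024)   # HBM3E: 9.6 Gb/s per pin, 1024-bit bus
hbm4 = stack_bandwidth_tbps(10.0, 2048)   # HBM4: 10 Gb/s per pin, 2048-bit bus
print(f"HBM3E: {hbm3e:.3f} TB/s, HBM4: {hbm4:.3f} TB/s")
# prints "HBM3E: 1.229 TB/s, HBM4: 2.560 TB/s"
```

This also makes clear why HBM4 more than doubles HBM3E's bandwidth despite a pin speed only slightly higher: most of the gain comes from the wider interface.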

At the moment, HBM4 is still under development, with the final specifications being defined at JEDEC, the international body for DRAM standardization. However, the first details on the production and adoption of this technology have already emerged. SK Hynix, one of the leading memory manufacturers, has started production of its HBM3E series in a 12-layer configuration with a capacity of 36 GB, preparing the ground for the introduction of HBM4, whose production is scheduled to start by the end of the year. Samsung, another prominent player in the memory market, also plans to begin mass production of HBM4 by the end of 2025. These timelines suggest that HBM4 solutions will become operational between the end of 2025 and the beginning of 2026, ready to be integrated into future high-performance computing platforms.

In terms of practical applications, NVIDIA has already announced that its next-generation GPUs, known by the code name Rubin, will support HBM4 memory and will reach the market in 2026. Likewise, AMD could adopt HBM4 for its future Instinct MI400-series GPUs, although there are currently no official confirmations from the company. These developments mark an important step forward in data processing capacity, which will translate into improved performance in AI and advanced data processing, though the full potential of HBM4 will not be visible until production and yields are optimized.