Fujitsu Launches Middleware for GPU Optimization | Turtles AI
Fujitsu has launched new middleware that optimizes how GPUs are used, addressing the growing demand for AI accelerators. The technology aims to improve computational efficiency and manage available resources more effectively amid a global GPU shortage.
Key points:
- Fujitsu introduces middleware to optimize GPU utilization.
- The software increases computational efficiency by up to 2.25 times.
- The technology is currently limited to single servers, with plans to expand.
- Two early customers, TRADOM and Sakura Internet, are already implementing the solution.
Fujitsu recently announced middleware dedicated to optimizing GPU utilization, a strategic move to address the growing shortage of AI accelerators. The technology distinguishes between applications that genuinely require a GPU and those that can run adequately on the CPU alone, allowing resources to be managed more effectively. Developed and first tested in November 2023, it was conceived to ensure that GPUs, already hard to obtain, are fully exploited by those lucky enough to own them. In trials conducted with companies such as AWL Inc., Xtreme-D and Morgenrot Inc., the middleware improved computational efficiency by up to 2.25 times, a result that attests to the effectiveness of the approach.

Beyond routing workloads, the software can reallocate resources dynamically in real time, giving priority to the most efficient processes even when a GPU is already busy with other operations. Fujitsu has confirmed that the technology also includes an advanced memory management system that lets GPUs handle workloads requiring more memory than is physically available on the device. Although it is currently limited to single servers, Fujitsu plans to extend it to support multiple GPUs across different servers, a concrete opportunity to make GPU-equipped servers more productive.

Two prominent early adopters, TRADOM in the fintech sector and Sakura Internet, a cloud service provider, have already started projects to deploy the solution in their data centers. With this launch, Fujitsu aims to tackle GPU scarcity and high energy consumption, which have become pressing problems in the current technology landscape, and thereby improve the productivity and creativity of its customers.

Although the real productivity gains from generative AI remain a matter of debate, the need for access to GPUs is beyond question. Concerns raised by US regulators about the difficulty of obtaining accelerators, together with delivery delays from suppliers such as Nvidia, point to a critical situation. Nvidia, for its part, has tried to reassure customers by promising that its next generation of accelerators, Blackwell, will be available in larger quantities. In the meantime, the growing demand for AI-based solutions has pushed many organizations to look for tools and services for sharing GPUs, a sign of a continuously evolving landscape.
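Fujitsu has not published the middleware's API, so the following is only a minimal Python sketch of the general idea it describes: a dispatcher that sends work to the GPU only when the workload actually benefits from an accelerator, and orders the GPU queue so the most efficient tasks run first. All names here (Task, dispatch, gpu_speedup, the 1.5 threshold) are hypothetical illustrations, not part of Fujitsu's product.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    needs_gpu: bool     # whether the workload benefits from an accelerator at all
    gpu_speedup: float  # estimated speedup versus running on the CPU

def dispatch(tasks, speedup_threshold=1.5):
    """Split incoming work into a GPU queue and a CPU queue.

    Tasks that gain little from an accelerator stay on the CPU, freeing the
    scarce GPU for workloads that actually profit from it.
    """
    gpu_tasks, cpu_tasks = [], []
    for t in tasks:
        if t.needs_gpu and t.gpu_speedup >= speedup_threshold:
            gpu_tasks.append(t)
        else:
            cpu_tasks.append(t)
    # Run the GPU queue in order of estimated efficiency gain, mirroring the
    # idea of prioritizing the processes that use the accelerator best.
    gpu_tasks.sort(key=lambda t: t.gpu_speedup, reverse=True)
    return gpu_tasks, cpu_tasks

if __name__ == "__main__":
    workload = [
        Task("train-cnn", needs_gpu=True, gpu_speedup=8.0),
        Task("etl-preprocess", needs_gpu=False, gpu_speedup=1.1),
        Task("batch-inference", needs_gpu=True, gpu_speedup=3.5),
    ]
    gpu_q, cpu_q = dispatch(workload)
    for t in gpu_q:
        print("GPU ->", t.name)
    for t in cpu_q:
        print("CPU ->", t.name)
```

In a real system the speedup estimate would come from profiling or scheduling heuristics rather than a hard-coded field, and the allocation would be revisited continuously as GPUs become busy or free, in line with the dynamic reallocation the article describes.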
How that demand will ultimately be met, as the sector prepares for future changes, remains a fundamental question in the optimization of technological resources.