Meta prepares to upgrade Llama: 10 times more computing power needed | Turtles AI

Meta prepares to upgrade Llama: 10 times more computing power needed
Isabella V, 1 August 2024

Meta intensifies investment in computing power for artificial intelligence training

Key points:
1. Meta plans to increase the computing power for training Llama 4 by 10 times compared to Llama 3.
2. Meta CFO Susan Li announced an increase in capital expenditures in 2025 to support new data center projects.
3. Expansion of training capabilities is crucial to maintaining competitive advantage in the field of generative AI.
4. India emerges as the leading market for Meta AI’s consumer-facing chatbots.

Meta, a leading developer of open-source language models, is planning a significant increase in its computing capabilities to train its future AI models. During the 2024 second-quarter earnings conference call, Mark Zuckerberg revealed that the company expects to need ten times more computing power to train Llama 4 than it used for Llama 3.

"The amount of processing needed to train Llama 4 will likely be nearly 10 times the amount used to train Llama 3, and future models will continue to grow beyond that," Zuckerberg said, stressing the importance of developing these capabilities early to avoid falling behind competitors.

Meta has already shown its commitment to scaling up its language models, launching Llama 3 in April with 8-billion- and 70-billion-parameter versions, followed by the updated Llama 3.1 with 405 billion parameters. This makes Llama 3.1 Meta's largest open-source model to date.
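To give a rough sense of the scale behind Zuckerberg's "10 times" figure, the commonly used C ≈ 6·N·D approximation estimates training compute from parameter count N and training tokens D. The sketch below assumes the roughly 15 trillion training tokens Meta has publicly reported for Llama 3; that token count is an assumption for illustration, not a figure from this article.

```python
# Back-of-envelope training-compute estimate using the common
# C ≈ 6 * N * D approximation (N = parameters, D = training tokens).
# The ~15T token count is an assumption from Meta's public Llama 3
# reports, not a figure stated in this article.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

llama31_405b = training_flops(405e9, 15e12)  # ~3.6e25 FLOPs
llama4_estimate = 10 * llama31_405b          # applying the "~10x" claim

print(f"Llama 3.1 405B: ~{llama31_405b:.2e} FLOPs")
print(f"Llama 4 (10x):  ~{llama4_estimate:.2e} FLOPs")
```

Even as a crude estimate, this illustrates why the expansion would require new data centers rather than incremental capacity additions.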

Susan Li, Meta’s CFO, announced that the company is considering new data center projects and developing the capabilities needed to train future AI models. This significant investment is set to increase capital expenditures in 2025, reflecting the company’s commitment to remain at the forefront of AI technology. Meta’s capital expenditures have already increased by 33 percent in Q2 2024, reaching $8.5 billion, up from $6.4 billion in the same period last year. These investments were mainly for servers, data centers, and network infrastructure.
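The 33 percent growth figure follows directly from the two quarterly numbers cited above; a quick check:

```python
# Verify the reported year-over-year capex growth:
# $8.5B in Q2 2024 vs $6.4B in Q2 2023 (figures from the article).
q2_2023 = 6.4  # billions of USD
q2_2024 = 8.5  # billions of USD

growth_pct = (q2_2024 - q2_2023) / q2_2023 * 100
print(f"YoY capex growth: {growth_pct:.0f}%")  # prints "YoY capex growth: 33%"
```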

The need for more computing power is not an isolated phenomenon in the industry. OpenAI, for example, spends billions of dollars on training its models and renting servers from Microsoft. However, Meta aims to build a flexible infrastructure that can be used for both training and inference for generative AI, thus optimizing resource use.

One interesting aspect that emerged during the conference call concerns consumer adoption of Meta AI, with India proving to be the largest market for the company’s chatbots. Despite this success, Li pointed out that Meta does not expect generative AI products to contribute significantly to revenues in the short term.

By expanding training capabilities and implementing advanced infrastructure, Meta is preparing to support future growth of its AI models, thus remaining competitive in a rapidly evolving industry.