
Meta Unveils Llama 4: Advanced AI with Agentic and Multimodal Capabilities
Llama 4 introduces new agentic capabilities, increased computing power, and a multimodal approach, offering advanced tools for processing text, images, audio, and video in increasingly complex scenarios
Isabella V, 6 April 2025
Meta announces the launch of Llama 4, an open-source AI model with advanced reasoning and voice interaction, backed by significant infrastructure investments.

Key Points:

  • Llama 4 will introduce agentic capabilities, enabling autonomous planning and adaptation.
  • Meta plans to use 10 times the computational power of Llama 3 to train Llama 4.
  • The model will be multimodal, handling text, images, video and audio.
  • Llama 4 will be released in multiple iterations throughout 2025, with a focus on advanced reasoning and voice interaction.


Meta Platforms Inc. announced the upcoming release of Llama 4, the latest evolution of its large language models (LLMs), designed to offer advanced multimodal capabilities. According to Reuters, Llama 4 will be able to process and translate across multiple data formats, including text, video, images and audio, making it extremely versatile. Meta CEO Mark Zuckerberg noted that Llama 4 will introduce “agentic capabilities,” allowing the model to plan, evaluate decisions, and adapt based on real-time feedback, moving closer to human-like behavior.

Meta plans to use 10 times the compute power it used for Llama 3 to train Llama 4, The Decoder reported. The investment reflects the company’s commitment to ramping up its AI infrastructure, with plans to spend up to $65 billion, primarily on data center expansion and advanced hardware.

Llama 4 will be released in multiple iterations throughout 2025, with a focus on improving reasoning and voice interaction. According to The Decoder, Meta is developing specific versions of the model for mobile applications and exploring the integration of AI agents in areas such as customer service and retail.

In parallel, Meta has recently undergone significant changes to its AI research team. Joelle Pineau, Meta’s head of AI research, has announced her resignation, The Wall Street Journal reported. Her departure marks a major shift as the company ramps up its AI efforts.

Meta also took an open-source approach to Llama 4, allowing developers and businesses to access and customize the model for different applications. However, as The Verge noted, the license places restrictions on commercial entities with more than 700 million users, prompting criticism from the Open Source Initiative.

With the introduction of Llama 4, Meta aims to cement its position in the AI landscape, offering advanced and accessible tools that could redefine interactions between humans and machines.