Mistral launches new AI models for Edge devices | Turtles AI

Mistral launches new AI models for Edge devices
The “Les Ministraux” models offer local, focused solutions for advanced applications
Isabella V, 17 October 2024


Mistral, a French startup active in the field of AI, has unveiled its “Les Ministraux” generative models designed to run on edge devices. These models address growing needs for privacy and local performance.

Key points:

  • Mistral launched two generative models, Ministral 3B and Ministral 8B.
  • Both models support a context window of 128,000 tokens.
  • The company focuses on local applications, emphasizing privacy and low latency.
  • The models can be used through Mistral’s cloud platform or downloaded for research purposes.

Mistral, a Paris-based startup co-founded by former members of Meta and DeepMind, recently introduced its first set of generative AI models designed to operate on edge devices such as laptops and smartphones, called “Les Ministraux.” The two models, Ministral 3B and Ministral 8B, feature significant context-management capabilities, processing up to 128,000 tokens, an amount equivalent to about 50 pages of text.

The decision to develop these models was driven by customer requests for local inference that prioritizes privacy, particularly in contexts such as real-time translation, offline virtual assistants, on-site data analysis, and autonomous robotics. Mistral points out that its models are designed for processing efficiency and low latency, qualities essential for today’s critical applications.

Currently, the Ministral 8B model is available for download for research purposes only, while interested companies can apply for a commercial license for self-managed versions of the models. Alternatively, developers can access the models via Mistral’s cloud platform and through other distribution partnerships planned for the coming weeks. Usage costs 10 cents per million tokens for Ministral 8B and 4 cents per million for Ministral 3B.

Against a backdrop of growing interest in smaller models that are cheaper and faster to train, Mistral claims superior performance over comparable models such as those in Meta’s Llama family and Google’s Gemma family. The company recently secured $640 million in funding and has begun generating revenue, despite the difficulties common to many startups in the generative AI sector. Aiming to create competitive models in the AI landscape, Mistral continues to expand its product portfolio, including services and developer tools, while maintaining a vision focused on quality and sustainability.
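To put the published prices in perspective, here is a minimal back-of-the-envelope sketch using only the per-million-token figures quoted above; the `estimate_cost` helper and model names are illustrative, not part of any Mistral API:

```python
# Per-million-token prices in USD, as quoted in the article.
PRICE_PER_MILLION = {
    "ministral-8b": 0.10,  # 10 cents per million tokens
    "ministral-3b": 0.04,  # 4 cents per million tokens
}

def estimate_cost(model: str, tokens: int) -> float:
    """Estimate the USD cost of processing `tokens` tokens with `model`."""
    return PRICE_PER_MILLION[model] * tokens / 1_000_000

# Cost of filling the full 128,000-token context window once:
print(round(estimate_cost("ministral-8b", 128_000), 6))  # 0.0128
print(round(estimate_cost("ministral-3b", 128_000), 6))  # 0.00512
```

In other words, a single pass over the entire 128,000-token context (roughly 50 pages) costs about a cent on the larger model, underlining why these models target cheap, local-scale workloads.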

Thus, Mistral positions itself as a significant player in the AI landscape, seeking to offer concrete solutions to increasingly specific market needs.