On Wednesday, Mistral unveiled Devstral, a new artificial intelligence (AI) model focused on coding. The open-source coding agent is designed to tackle a range of software development tasks. According to the Paris-based AI company, Devstral distinguishes itself from other open software engineering (SWE) agents by solving real-world software engineering problems and generating contextually relevant code within a codebase. The firm said the model achieved the highest score in its category on the SWE-Bench Verified benchmark during internal assessments. Devstral was developed in collaboration with All Hands AI.
Mistral’s Devstral Coding Agent Is Claimed to Offer Practical Coding Capabilities
In a recent newsroom update, Mistral provided insights into the functionality of Devstral. The release comes amid a surge of interest from leading AI companies in launching AI-driven coding agents. OpenAI introduced Codex, Microsoft presented GitHub Copilot, and Google released Jules in public beta. Mistral now enters this competitive landscape with Devstral.
The company pointed out that while existing open-source large language models (LLMs) can handle discrete coding tasks such as writing standalone functions or completing code, they often falter when asked to develop contextual code within expansive codebases. This limitation can prevent AI agents from discerning the relationships between different components and identifying subtle bugs.
Mistral asserts that Devstral addresses these challenges through its capacity to add context to coding tasks, drawing on the existing codebase and frameworks. In internal trials, the model achieved a score of 46.8 percent on the SWE-Bench Verified benchmark, placing it at the forefront of its category. It outperformed larger open-source models, including Qwen 3 and DeepSeek V3, as well as proprietary models such as OpenAI's GPT-4.1 mini and Anthropic's Claude 3.5 Haiku.
Devstral is fine-tuned from the Mistral-Small-3.1 AI model and features a context window capable of accommodating up to 128,000 tokens. Unlike its predecessor, this model is text-only, omitting the vision encoder found in the Small-3.1. Devstral can also utilize tools to navigate codebases, edit multiple files, and support other SWE agents.
Mistral emphasizes that Devstral is a lightweight model, operable on a single Nvidia RTX 4090 GPU or a Mac with 32GB of RAM. This capability allows for local deployment, enabling the model to function entirely on-device. Interested users can download Devstral through platforms such as Hugging Face, Ollama, Kaggle, Unsloth, and LM Studio, under a permissive Apache 2.0 license for both academic and commercial applications.
Moreover, Devstral can be accessed via an application programming interface (API). Mistral has published the model under the API name devstral-small-2505, with pricing set at $0.10 (approximately Rs. 8.6) per million input tokens and $0.30 (about Rs. 25) per million output tokens.
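As a rough illustration of what API access and the published per-token pricing imply in practice, the sketch below builds a chat-completion request for the devstral-small-2505 model and estimates request cost from the quoted rates. This is a minimal sketch, not official sample code: the endpoint URL and request shape assume Mistral's standard chat-completions API, and the MISTRAL_API_KEY environment variable is an assumed convention for supplying credentials.

```python
import json
import os
import urllib.request

# Assumed endpoint: Mistral's standard chat-completions API.
API_URL = "https://api.mistral.ai/v1/chat/completions"


def build_request(prompt: str, model: str = "devstral-small-2505") -> dict:
    """Build a chat-completion payload addressed to the Devstral model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate cost in USD from the published rates:
    $0.10 per million input tokens, $0.30 per million output tokens."""
    return input_tokens / 1e6 * 0.10 + output_tokens / 1e6 * 0.30


if __name__ == "__main__":
    payload = build_request("Write a unit test for a binary search function.")
    api_key = os.environ.get("MISTRAL_API_KEY")  # assumed env var for the key
    if api_key:  # only call the API when a key is configured
        req = urllib.request.Request(
            API_URL,
            data=json.dumps(payload).encode(),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {api_key}",
            },
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
    # A call consuming ~2,000 input and ~1,000 output tokens costs about $0.0005.
    print(f"Estimated cost: ${estimate_cost(2000, 1000):.4f}")
```

At these rates, even a sizeable agentic session spanning a few million tokens would cost well under a dollar, which is the practical point of the per-million-token pricing.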