LangMart: Mistral: Devstral Small 1.1
Model Overview
| Property | Value |
|---|---|
| Model ID | openrouter/mistralai/devstral-small |
| Name | Mistral: Devstral Small 1.1 |
| Provider | mistralai |
| Released | 2025-07-10 |
Description
Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI. Finetuned from Mistral Small 3.1 and released under the Apache 2.0 license, it features a 128k token context window and supports both Mistral-style function calling and XML output formats.
Designed for agentic coding workflows, Devstral Small 1.1 is optimized for tasks such as codebase exploration, multi-file edits, and integration into autonomous development agents like OpenHands and Cline. It achieves 53.6% on SWE-Bench Verified, surpassing all other open models on this benchmark, while remaining lightweight enough to run on a single 4090 GPU or Apple silicon machine. The model uses a Tekken tokenizer with a 131k vocabulary and is deployable via vLLM, Transformers, Ollama, LM Studio, and other OpenAI-compatible runtimes.
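Because the model is served through OpenAI-compatible runtimes such as vLLM, a basic chat completion can be issued with a standard client. The sketch below is a minimal example assuming a local vLLM server on localhost:8000 and a served model name; both are assumptions to adapt to your own deployment, not details taken from this page.

```python
# Minimal sketch: chat completion against a locally hosted, OpenAI-compatible
# runtime (e.g. vLLM). The base URL and served model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",    # assumed local vLLM endpoint
    api_key="not-needed-locally",           # placeholder; local servers often ignore it
)

response = client.chat.completions.create(
    model="mistralai/Devstral-Small-2507",  # assumed served model name; match your deployment
    messages=[
        {"role": "system", "content": "You are a software engineering assistant."},
        {"role": "user", "content": "Summarize what this project's Makefile does."},
    ],
)
print(response.choices[0].message.content)
```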
Specifications
| Spec | Value |
|---|---|
| Context Window | 128,000 tokens |
| Modalities | text->text |
| Input Modalities | text |
| Output Modalities | text |
Pricing
| Type | Price |
|---|---|
| Input | $0.07 per 1M tokens |
| Output | $0.28 per 1M tokens |
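For quick budgeting, per-request cost follows directly from the per-million-token prices above. A minimal sketch of that arithmetic:

```python
# Back-of-the-envelope cost estimate using the per-million-token prices above.
INPUT_PRICE_PER_M = 0.07    # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 0.28   # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 20k-token prompt with a 2k-token completion costs about $0.0020.
print(f"${estimate_cost(20_000, 2_000):.4f}")
```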
Capabilities
- Frequency penalty
- Max tokens
- Min p
- Presence penalty
- Repetition penalty
- Response format
- Seed
- Stop
- Structured outputs
- Temperature
- Tool choice
- Tools
- Top k
- Top p
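Most of the capabilities listed above map onto standard OpenAI-compatible request parameters. The sketch below exercises several of them in one request; the OpenRouter endpoint, environment variable name, and model slug are assumptions, and provider-specific knobs such as min_p, top_k, or repetition_penalty would typically be passed through the client's extra_body option rather than as first-class arguments.

```python
# Minimal sketch: using several of the listed sampling and output controls
# through an OpenAI-compatible request. Endpoint and model slug are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # assumed OpenRouter endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],  # hypothetical env var name
)

response = client.chat.completions.create(
    model="mistralai/devstral-small",          # slug assumed from the Model ID above
    messages=[{"role": "user",
               "content": "List three common causes of flaky tests as JSON."}],
    temperature=0.2,                           # Temperature
    top_p=0.9,                                 # Top p
    max_tokens=512,                            # Max tokens
    seed=42,                                   # Seed
    stop=["```"],                              # Stop
    frequency_penalty=0.1,                     # Frequency penalty
    presence_penalty=0.0,                      # Presence penalty
    response_format={"type": "json_object"},   # Response format / structured outputs
    extra_body={"min_p": 0.05, "top_k": 40},   # provider-specific knobs (assumed pass-through)
)
print(response.choices[0].message.content)
```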
Detailed Analysis
Devstral Small is a lightweight agentic coding model optimized for faster response times while maintaining multi-file codebase understanding. Built with fewer parameters than Devstral Medium, it delivers 2-3x faster inference on common development tasks while still supporting function calling and tool integration.

Devstral Small excels at focused development work: implementing a single feature across related files, investigating bugs within a module, updating dependencies, and writing tests. It scores 53.6% on SWE-Bench Verified (versus 61.6% for Devstral Medium), trading some capability for significantly lower latency.

The model is well suited to interactive development sessions where developers need quick AI assistance without the overhead of larger models. It supports the same tool ecosystem as Devstral Medium, with prompting optimized for faster task completion, and is a good fit for developers working in resource-constrained environments or needing real-time coding assistance.
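To illustrate the function-calling workflow described above, here is a minimal single-round tool-call sketch against an OpenAI-compatible endpoint. The endpoint, model slug, and read_file tool are illustrative assumptions; in practice an agent harness such as OpenHands or Cline manages this loop for you.

```python
# Minimal sketch of one function-calling round trip. The endpoint, model slug,
# and the read_file tool are assumptions for illustration only.
import json, os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",      # assumed endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],     # hypothetical env var name
)

tools = [{
    "type": "function",
    "function": {
        "name": "read_file",                      # hypothetical agent tool
        "description": "Read a file from the working repository.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

messages = [{"role": "user", "content": "What does setup.py declare as dependencies?"}]
resp = client.chat.completions.create(
    model="mistralai/devstral-small",             # slug assumed from the Model ID above
    messages=messages,
    tools=tools,                                  # Tools capability
    tool_choice="auto",                           # Tool choice capability
)

call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
# An agent harness would execute the tool here and feed the result back as a
# "tool" role message for the next model turn.
print(call.function.name, args)
```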