Mistral: Devstral 2 2512 (Free)
Description
Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding. It is a 123B-parameter dense transformer model supporting a 256K context window.
The model excels at exploring codebases and orchestrating changes across multiple files while maintaining architecture-level context. It tracks framework dependencies, detects failures, and retries with corrections, which makes it well suited to tasks such as bug fixing and modernizing legacy systems.
Devstral 2 can be fine-tuned to prioritize specific languages or optimize for large enterprise codebases. It is available under a modified MIT license.
Technical Specifications
| Specification | Value |
|---|---|
| Context Window | 262,144 tokens (256K) |
| Parameters | 123 billion (dense transformer) |
| Input Modalities | Text |
| Output Modalities | Text |
| Max Completion Tokens | 65,536 |
| Architecture | Dense Transformer |
Pricing
| Type | Price |
|---|---|
| Input Tokens | $0 per 1M tokens (FREE) |
| Output Tokens | $0 per 1M tokens (FREE) |
Paid Version Comparison
| Variant | Input Cost | Output Cost |
|---|---|---|
| mistralai/devstral-2512:free | $0/M | $0/M |
| mistralai/devstral-2512 (paid) | $0.05/M | $0.22/M |
Capabilities
| Capability | Status |
|---|---|
| Agentic Coding | Primary Specialty |
| Multi-File Orchestration | Supported |
| Codebase Exploration | Supported |
| Framework Dependency Tracking | Supported |
| Bug Detection & Fixing | Supported |
| Legacy System Modernization | Supported |
| Tool/Function Calling | Supported |
| Structured Outputs | Supported |
| JSON Mode | Supported |
| Long-Context Processing | Supported (256K) |
| Fine-Tuning | Available |
| Vision | Not Supported |
Supported Parameters
| Parameter | Supported | Description |
|---|---|---|
| max_tokens | Yes | Maximum number of tokens to generate |
| temperature | Yes | Controls randomness (default: 0.3) |
| top_p | Yes | Nucleus sampling parameter |
| stop | Yes | Stop sequences for generation |
| frequency_penalty | Yes | Reduces repetition of token sequences |
| presence_penalty | Yes | Encourages new topics |
| seed | Yes | For reproducible outputs |
| response_format | Yes | Specify output format (e.g., JSON) |
| structured_outputs | Yes | Schema-based structured responses |
| tools | Yes | Tool/function calling definitions |
| tool_choice | Yes | Control tool selection behavior |
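`response_format` and `structured_outputs` are listed above but are not shown in the curl examples further down the page. Below is a minimal Python sketch of a JSON-mode request, assuming the same OpenAI-compatible request schema as those examples; the `{"type": "json_object"}` value and the use of the `requests` library are assumptions, not documented behavior.

```python
# Minimal sketch: JSON-mode request against the OpenAI-compatible chat endpoint.
# Assumes response_format={"type": "json_object"} is accepted; verify against
# the provider's documentation before relying on it.
import os
import requests

resp = requests.post(
    "https://api.langmart.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['LANGMART_API_KEY']}"},
    json={
        "model": "mistralai/devstral-2512:free",
        "messages": [
            {"role": "user",
             "content": "Return a JSON object listing the public functions in this module: ..."},
        ],
        "response_format": {"type": "json_object"},
        "temperature": 0.3,  # documented default
        "max_tokens": 1024,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```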
Default Parameter Values
| Parameter | Default Value |
|---|---|
| temperature | 0.3 |
| top_p | null |
| frequency_penalty | null |
API Usage Example
Basic chat completion:

```bash
curl https://api.langmart.ai/v1/chat/completions \
  -H "Authorization: Bearer $LANGMART_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistralai/devstral-2512:free",
    "messages": [
      {
        "role": "user",
        "content": "Analyze this Python function and suggest improvements for better error handling."
      }
    ],
    "temperature": 0.3,
    "max_tokens": 4096
  }'
```
Tool calling:

```bash
curl https://api.langmart.ai/v1/chat/completions \
  -H "Authorization: Bearer $LANGMART_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistralai/devstral-2512:free",
    "messages": [
      {
        "role": "user",
        "content": "Read the config.py file and update the database connection settings."
      }
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "read_file",
          "description": "Read contents of a file",
          "parameters": {
            "type": "object",
            "properties": {
              "path": {"type": "string"}
            },
            "required": ["path"]
          }
        }
      },
      {
        "type": "function",
        "function": {
          "name": "write_file",
          "description": "Write contents to a file",
          "parameters": {
            "type": "object",
            "properties": {
              "path": {"type": "string"},
              "content": {"type": "string"}
            },
            "required": ["path", "content"]
          }
        }
      }
    ],
    "tool_choice": "auto"
  }'
```
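The tool-calling request above only supplies the tool definitions. In an agentic workflow the client also executes the tool calls the model returns and sends the results back as `tool` messages. The sketch below shows that loop in Python, assuming the OpenAI-compatible `tool_calls` response shape; the local `read_file`/`write_file` helpers are illustrative stand-ins, not part of the API.

```python
# Sketch of an agentic tool-calling loop, assuming the OpenAI-compatible
# response shape (choices[0].message.tool_calls). The read_file/write_file
# helpers are illustrative stand-ins for real workspace tooling.
import json
import os
import requests

URL = "https://api.langmart.ai/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['LANGMART_API_KEY']}"}

TOOLS = [
    {"type": "function", "function": {
        "name": "read_file",
        "description": "Read contents of a file",
        "parameters": {"type": "object",
                       "properties": {"path": {"type": "string"}},
                       "required": ["path"]}}},
    {"type": "function", "function": {
        "name": "write_file",
        "description": "Write contents to a file",
        "parameters": {"type": "object",
                       "properties": {"path": {"type": "string"},
                                      "content": {"type": "string"}},
                       "required": ["path", "content"]}}},
]

def read_file(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

def write_file(path: str, content: str) -> str:
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return "ok"

LOCAL_TOOLS = {"read_file": read_file, "write_file": write_file}

messages = [{"role": "user",
             "content": "Read the config.py file and update the database connection settings."}]

while True:
    body = {"model": "mistralai/devstral-2512:free", "messages": messages,
            "tools": TOOLS, "tool_choice": "auto"}
    reply = requests.post(URL, headers=HEADERS, json=body, timeout=120).json()
    message = reply["choices"][0]["message"]
    messages.append(message)
    tool_calls = message.get("tool_calls")
    if not tool_calls:            # model produced a final answer
        print(message["content"])
        break
    for call in tool_calls:       # run each requested tool and return its output
        args = json.loads(call["function"]["arguments"])
        result = LOCAL_TOOLS[call["function"]["name"]](**args)
        messages.append({"role": "tool",
                         "tool_call_id": call["id"],
                         "content": result})
```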
Mistral AI Coding Models
- mistralai/codestral-2508 - Specialized code generation model
- mistralai/devstral-small-2507 - Smaller development-focused model
- mistralai/devstral-2512 - Paid version with higher rate limits
Mistral AI General Models
- mistralai/mistral-large-2512 - Latest large general-purpose model
- mistralai/mistral-large-2411 - Previous large model version
- mistralai/mistral-large-2407 - Stable large model
- mistralai/mistral-medium-3.1 - Mid-tier model
- mistralai/mistral-small-3.2-24b-instruct-2506 - 24B instruction model
- mistralai/mistral-small-24b-instruct-2501 - Earlier 24B instruction model
- mistralai/mistral-nemo - High-traffic general model
- mistralai/ministral-3b - Lightweight 3B model
- mistralai/ministral-8b - Mid-size efficient model
- mistralai/ministral-14b-2512 - 14B parameter model
- mistralai/mixtral-8x7b-instruct - Mixture of Experts model
Comparable Coding Models (Other Providers)
- deepseek/deepseek-coder - DeepSeek's coding specialist
- qwen/qwen-2.5-coder-32b - Qwen's coding model
- anthropic/claude-3.5-sonnet - Strong coding capabilities
- openai/gpt-4-turbo - OpenAI's advanced coding support
Model Identification
| Field | Value |
|---|---|
| Model Name | Devstral 2 2512 (Free) |
| Model ID | mistralai/devstral-2512:free |
| Version | devstral-2-2512 |
| Author/Provider | Mistral AI |
| Created | December 9, 2025 |
| License | Modified MIT License |
| Headquarters | France |
Rate Limits
| Limit Type | Value |
|---|---|
| Requests Per Minute (RPM) | 600 |
| Requests Per Day (RPD) | Unlimited |
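With a 600 RPM ceiling, bursty agentic runs can still be throttled. A minimal sketch of retrying on HTTP 429 with exponential backoff follows; the assumption that the endpoint signals rate limiting with a 429 status code is not confirmed by this page.

```python
# Sketch: retry a chat completion on HTTP 429 with exponential backoff.
# Assumes the endpoint returns 429 when the 600 RPM limit is exceeded.
import os
import time
import requests

def complete_with_backoff(payload: dict, max_retries: int = 5) -> dict:
    delay = 1.0
    for attempt in range(max_retries):
        resp = requests.post(
            "https://api.langmart.ai/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['LANGMART_API_KEY']}"},
            json=payload,
            timeout=120,
        )
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        time.sleep(delay)   # back off before retrying
        delay *= 2
    raise RuntimeError(f"rate limited after {max_retries} retries")
```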
Mistral AI (Primary Provider)
| Field | Value |
|---|---|
| Base URL | https://api.mistral.ai/v1 |
| Status Page | https://status.mistral.ai/ |
| Data Policy | No training use; 30-day retention; no publishing rights |
| BYOK (Bring Your Own Key) | Enabled |
| Quantization | Full precision |
Alternative Provider: Chutes (Paid Version Only)
| Field | Value |
|---|---|
| Quantization | FP8 |
| Max Completion Tokens | 65,536 |
| Data Policy | Training allowed; retains prompts; non-publishable |
Key Use Cases
- Agentic Coding - Autonomous code generation and modification with self-correction
- Multi-File Refactoring - Orchestrating changes across entire codebases
- Bug Fixing - Detecting, diagnosing, and fixing bugs automatically
- Legacy System Modernization - Updating old codebases to modern standards
- Codebase Exploration - Understanding and navigating large codebases
- Framework Integration - Tracking and managing framework dependencies
- Enterprise Development - Optimizable for large enterprise codebases
Model Weights
The model weights are publicly available on Hugging Face.

Notes:
- Optimized for agentic coding workflows with autonomous error detection and correction
- The 256K context window enables processing of entire codebases in a single context (see the sketch below)
- Tool calling support makes it well suited to IDE integrations and coding assistants
- Fine-tunable for specific programming languages or enterprise requirements
- The free tier has the same capabilities as the paid tier, subject to the rate limits above
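As a rough illustration of the 256K note above, the sketch below concatenates source files from a local repository into a single prompt under a character budget; the 4-characters-per-token ratio is a crude heuristic, not a property of the model's tokenizer.

```python
# Sketch: pack a small repository into one prompt within the 256K-token window.
# The ~4 characters-per-token ratio is a rough heuristic, not the real tokenizer.
from pathlib import Path

MAX_TOKENS = 262_144
CHARS_PER_TOKEN = 4                                  # crude estimate
BUDGET = int(MAX_TOKENS * 0.8) * CHARS_PER_TOKEN     # leave headroom for the reply

def pack_repo(root: str, suffixes=(".py", ".md", ".toml")) -> str:
    parts, used = [], 0
    for path in sorted(Path(root).rglob("*")):
        if path.suffix not in suffixes or not path.is_file():
            continue
        text = path.read_text(encoding="utf-8", errors="replace")
        chunk = f"\n### {path}\n{text}\n"
        if used + len(chunk) > BUDGET:               # stop once the budget is spent
            break
        parts.append(chunk)
        used += len(chunk)
    return "".join(parts)

context = pack_repo(".")
prompt = context + "\n\nSummarize the architecture of this codebase."
```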
Data Retention & Privacy
| Policy | Status |
|---|---|
| Training Use | Disabled (data not used for training) |
| Prompt Retention | 30 days |
| Publishing Rights | Not granted |
Source: LangMart API - https://langmart.ai/model-docs:free
Last Updated: December 2025