Mistral Medium Model Documentation
Model Overview
Model Name: Mistral Medium
Provider: Mistral AI
Inference Model ID: mistralai/mistral-medium
Released: January 10, 2024
Model Type: Closed-source Language Model
Description
A closed-source, medium-sized model from Mistral AI that excels at reasoning, code generation, JSON handling, and multi-turn chat. It performs comparably to other providers' flagship models and represents Mistral's mid-tier offering, positioned between the smaller and larger language models in its lineup.
Technical Specifications
Context Window & Modalities
- Context Window: 32,000 tokens
- Input Modality: Text
- Output Modality: Text
- Image Input: Not supported
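As a rough pre-flight check against the 32,000-token window, input size can be estimated before sending a request. The 4-characters-per-token ratio below is a heuristic assumption, not Mistral's actual tokenizer:

```python
CONTEXT_WINDOW = 32_000  # tokens, per the specification above

def fits_in_context(prompt: str, max_output_tokens: int = 2_000,
                    chars_per_token: float = 4.0) -> bool:
    """Rough estimate of whether a prompt plus a reserved output budget
    fits in the context window. The chars-per-token ratio is a heuristic
    assumption, not the model's real tokenizer."""
    estimated_input_tokens = len(prompt) / chars_per_token
    return estimated_input_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("Explain quantum computing in simple terms."))  # True
```

For accurate counts, use the provider's tokenizer; this sketch only catches inputs that are obviously too large.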
Default Parameters
- Temperature: 0.3
- Architecture: Closed-source prototype (details not published)
- Training Focus: Text-based tasks
Configuration Details
- Instruction Type: Not explicitly documented
- System Prompts: Not documented
- Stop Sequences: No default stop sequences specified
- Deprecation Status: Deprecated (see Pricing note below)
Pricing
| Type | Price |
|---|---|
| Input | $2.70 per 1M tokens |
| Output | $8.10 per 1M tokens |
Note: Mistral Medium has been deprecated. Consider using Mistral Small or Mistral Large for new projects.
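At the listed rates, per-request cost can be estimated as follows. This is a simple sketch; actual billing is determined by the platform:

```python
# Rates from the pricing table above (USD per token)
INPUT_RATE = 2.70 / 1_000_000   # $2.70 per 1M input tokens
OUTPUT_RATE = 8.10 / 1_000_000  # $8.10 per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a single request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 10k-token prompt with a 2k-token completion
print(f"${request_cost(10_000, 2_000):.4f}")  # $0.0432
```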
Capabilities
The Mistral Medium model excels at:
- Reasoning Tasks - Complex logical reasoning and problem-solving
- Code Generation - Writing and understanding code across programming languages
- JSON Handling - Structured data generation and parsing
- Chat Applications - Multi-turn conversational interactions
- General Conversation - Natural language processing and responses
Limitations
- Image Processing: No image input or vision capabilities
- Real-time Data: No access to live information; knowledge is limited by the training-data cutoff
- Multimodality: Text-only input and output
- Activity Metrics: Usage statistics not yet tracked on LangMart platform
Related Models
From Mistral AI:
- Mistral 7B - Smaller, more efficient model
- Mistral Large - Larger, more capable model
- Mistral Next - Latest generation model
Availability
- Supported Providers: Available through LangMart
- Access: Public API via LangMart platform
- Usage Data: Activity and usage metrics not yet available on platform
Usage Examples
Basic Chat Completion
curl -X POST https://api.langmart.ai/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "mistralai/mistral-medium",
"messages": [
{
"role": "user",
"content": "Explain quantum computing in simple terms."
}
]
}'
Code Generation
curl -X POST https://api.langmart.ai/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "mistralai/mistral-medium",
"messages": [
{
"role": "user",
"content": "Write a Python function to sort a list of dictionaries by a specific key."
}
]
}'
JSON Processing
curl -X POST https://api.langmart.ai/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "mistralai/mistral-medium",
"messages": [
{
"role": "user",
"content": "Convert this data into JSON format: Name: John, Age: 30, City: New York"
}
],
"temperature": 0.3
}'
Multi-turn Conversation
curl -X POST https://api.langmart.ai/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "mistralai/mistral-medium",
"messages": [
{
"role": "user",
"content": "What is machine learning?"
},
{
"role": "assistant",
"content": "Machine learning is a subset of artificial intelligence..."
},
{
"role": "user",
"content": "Can you give me some practical examples?"
}
]
}'
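The multi-turn request above can also be built in Python with the standard library. The endpoint and OpenAI-style payload shape follow the curl examples; `YOUR_API_KEY` is a placeholder, as in those examples:

```python
import json
from urllib import request

API_URL = "https://api.langmart.ai/v1/chat/completions"
API_KEY = "YOUR_API_KEY"  # placeholder, as in the curl examples

def build_chat_request(messages, model="mistralai/mistral-medium", **params):
    """Build (but do not send) a POST request matching the curl calls above."""
    payload = {"model": model, "messages": messages, **params}
    return request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    [
        {"role": "user", "content": "What is machine learning?"},
        {"role": "assistant",
         "content": "Machine learning is a subset of artificial intelligence..."},
        {"role": "user", "content": "Can you give me some practical examples?"},
    ],
    temperature=0.3,
)
# Sending would be: response = request.urlopen(req)
```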
Integration with LangMart Design
Connection Configuration
When adding Mistral Medium to LangMart:
- Provider: Mistral AI
- Connection Type: API Key Authentication
- Model ID: mistralai/mistral-medium
- Endpoint: LangMart API (https://api.langmart.ai/v1)
Recommended Settings
{
"model": "mistralai/mistral-medium",
"temperature": 0.3,
"max_tokens": 2000,
"top_p": 0.9,
"provider": "mistralai"
}
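One way to apply these settings is to merge per-request overrides onto the recommended defaults. This is a sketch; the temperature bounds check is an illustrative assumption, not a documented API limit:

```python
# Recommended defaults from the settings block above
RECOMMENDED = {
    "model": "mistralai/mistral-medium",
    "temperature": 0.3,
    "max_tokens": 2000,
    "top_p": 0.9,
    "provider": "mistralai",
}

def make_config(**overrides):
    """Merge per-request overrides onto the recommended defaults."""
    cfg = {**RECOMMENDED, **overrides}
    if not 0.0 <= cfg["temperature"] <= 1.0:  # assumed sampling range
        raise ValueError("temperature out of range")
    return cfg

cfg = make_config(temperature=0.7)  # other defaults are kept
```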
Performance Characteristics
- Comparable Performance: Performs at levels similar to other flagship models
- Latency: Standard LangMart API latency
- Throughput: Support for batch processing via LangMart
- Reliability: Consistent performance for mid-tier inference tasks
References
- Documentation: https://langmart.ai/model-docs
- Provider: Mistral AI (https://mistral.ai)
- Documentation Date: January 10, 2024
Last Updated: December 23, 2025
Source: LangMart Model Registry