LangMart: Mistral Large
Model Overview
| Property | Value |
|---|---|
| Model ID | openrouter/mistralai/mistral-large |
| Name | Mistral Large |
| Provider | mistralai |
| Released | 2024-02-26 |
Description
This is Mistral AI's flagship model, Mistral Large 2 (version mistral-large-2407). It's a proprietary weights-available model and excels at reasoning, code, JSON, chat, and more.
It supports dozens of languages including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, along with 80+ coding languages including Python, Java, C, C++, JavaScript, and Bash. Its long context window allows precise information recall from large documents.
Specifications
| Spec | Value |
|---|---|
| Context Window | 128,000 tokens |
| Modalities | text->text |
| Input Modalities | text |
| Output Modalities | text |
Pricing
| Type | Price |
|---|---|
| Input | $2.00 per 1M tokens |
| Output | $6.00 per 1M tokens |
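To make these rates concrete, the sketch below converts token counts into an estimated per-request cost using the listed prices; the token counts are illustrative values, not measurements.

```python
# Rough per-request cost estimate from the listed rates:
# $2.00 per 1M input tokens, $6.00 per 1M output tokens.
INPUT_PRICE_PER_TOKEN = 2.00 / 1_000_000
OUTPUT_PRICE_PER_TOKEN = 6.00 / 1_000_000

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one request."""
    return (input_tokens * INPUT_PRICE_PER_TOKEN
            + output_tokens * OUTPUT_PRICE_PER_TOKEN)

# Example: a 10,000-token prompt with a 2,000-token completion
# costs roughly $0.02 + $0.012 = $0.032.
print(f"${estimate_cost(10_000, 2_000):.4f}")
```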
Capabilities
- Frequency penalty
- Max tokens
- Presence penalty
- Response format
- Seed
- Stop
- Structured outputs
- Temperature
- Tool choice
- Tools
- Top p
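The sketch below shows several of these parameters on a single request, assuming access through OpenRouter's OpenAI-compatible endpoint with the official openai Python client; the API key variable, prompt, and parameter values are illustrative, and the model slug is taken from the Model ID listed above.

```python
# Minimal chat-completion sketch, assuming OpenRouter's OpenAI-compatible
# API and the official openai Python client.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # placeholder environment variable
)

response = client.chat.completions.create(
    model="mistralai/mistral-large",        # slug from the Model ID above
    messages=[{"role": "user",
               "content": "Return a JSON object with a 'summary' field."}],
    temperature=0.3,                        # sampling temperature
    top_p=0.9,                              # nucleus sampling
    max_tokens=512,                         # completion length cap
    seed=42,                                # best-effort reproducibility
    frequency_penalty=0.0,
    presence_penalty=0.0,
    response_format={"type": "json_object"},  # structured / JSON output
)
print(response.choices[0].message.content)
```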
Detailed Analysis
Mistral Large represents the latest stable version of Mistral's flagship model line, currently pointing to 2411 (Large 2.1) or newer releases. This endpoint automatically provides access to the most advanced Mistral model without version pinning, so applications benefit from ongoing improvements in reasoning, function calling, long-context processing, and safety features.

Using the base endpoint is recommended for applications that prioritize cutting-edge capabilities over strict version control. The model inherits all 2411 enhancements (improved function calling, optimized system prompts, superior long-context accuracy) plus any subsequent refinements. As Mistral releases updates, this endpoint automatically incorporates improvements in code generation quality, mathematical reasoning, multilingual performance, and instruction following.

It is a good fit for production applications that require maximum intelligence and automatic access to improvements, for enterprise deployments with CI/CD pipelines that can absorb prompt-format updates, and for use cases where staying current with the latest capabilities justifies occasional integration adjustments. For version stability, use the explicit 2407 or 2411 endpoints.
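One way to make the pinning decision explicit is to keep the model slug in application configuration. The sketch below contrasts the floating alias with the dated 2407/2411 variants mentioned above; the exact dated slugs are assumptions about the catalog's naming and should be verified.

```python
# Illustrative configuration for floating vs. pinned model selection.
# The dated slugs follow the 2407/2411 naming mentioned above and are
# assumptions about the exact identifiers; check the catalog before use.
MODEL_ALIASES = {
    "latest": "mistralai/mistral-large",             # floating, auto-upgraded
    "pinned-2407": "mistralai/mistral-large-2407",   # fixed Large 2 release
    "pinned-2411": "mistralai/mistral-large-2411",   # fixed Large 2.1 release
}

def resolve_model(prefer_pinned: bool) -> str:
    """Pick a pinned slug for reproducible behavior, or the floating
    alias to receive new releases automatically."""
    return MODEL_ALIASES["pinned-2411"] if prefer_pinned else MODEL_ALIASES["latest"]
```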