LangMart: MiniMax: MiniMax M1
Model Overview
| Property | Value |
|---|---|
| Model ID | openrouter/minimax/minimax-m1 |
| Name | MiniMax: MiniMax M1 |
| Provider | minimax |
| Released | 2025-06-17 |
Description
MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "lightning attention" mechanism, allowing it to process long sequences—up to 1 million tokens—while maintaining competitive FLOP efficiency. With 456 billion total parameters and 45.9 billion active per token, this variant is optimized for complex, multi-step reasoning tasks.
Trained via a custom reinforcement learning pipeline (CISPO), M1 excels in long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models like DeepSeek R1 and Qwen3-235B.
Provider
minimax
Specifications
| Spec | Value |
|---|---|
| Context Window | 1,000,000 tokens |
| Modalities | text->text |
| Input Modalities | text |
| Output Modalities | text |
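The model is called like any other chat-completion model; the sketch below shows a minimal text-in, text-out request in Python. The endpoint URL and environment-variable name are assumptions based on a generic OpenAI-compatible gateway, and the model slug is copied from the overview table (some gateways expect it without the `openrouter/` prefix).

```python
import os
import requests

# Assumed OpenAI-compatible chat completions endpoint; substitute the
# gateway base URL you actually use.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ["OPENROUTER_API_KEY"]  # assumed environment variable name

payload = {
    # Model ID taken from the overview table above; some gateways use
    # "minimax/minimax-m1" without the routing prefix.
    "model": "openrouter/minimax/minimax-m1",
    "messages": [
        {"role": "user",
         "content": "Summarize the idea behind lightning attention in two sentences."}
    ],
    "max_tokens": 512,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
    json=payload,
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```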
Pricing
| Type | Price |
|---|---|
| Input | $0.40 per 1M tokens |
| Output | $2.20 per 1M tokens |
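As a quick sanity check on the prices above, the sketch below estimates per-request cost from input and output token counts. The function name and example token counts are illustrative only.

```python
# Back-of-the-envelope cost estimate using the per-token prices listed above.
INPUT_PRICE_PER_M = 0.40   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 2.20  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the approximate request cost in USD."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a long-context request with 200k input tokens and 4k output tokens.
print(f"${estimate_cost(200_000, 4_000):.4f}")  # -> $0.0888
```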
Capabilities
- Frequency penalty
- Include reasoning
- Max tokens
- Presence penalty
- Reasoning
- Repetition penalty
- Seed
- Stop
- Temperature
- Tool choice
- Tools
- Top k
- Top p
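The capabilities listed above correspond to request-body fields. The sketch below combines sampling controls, reasoning output, and tool calling in one payload; field names follow common OpenAI/OpenRouter conventions and are assumptions rather than an authoritative schema, and the `get_weather` tool is hypothetical.

```python
# Sketch of a request body exercising several of the supported parameters.
# Field names are assumed from common OpenAI/OpenRouter conventions; check
# your gateway's API reference for the exact schema.
payload = {
    "model": "openrouter/minimax/minimax-m1",
    "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
    # Sampling controls (Temperature, Top p, Top k, penalties, Seed, Stop, Max tokens)
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "frequency_penalty": 0.1,
    "presence_penalty": 0.0,
    "repetition_penalty": 1.05,
    "seed": 42,
    "stop": ["</answer>"],
    "max_tokens": 1024,
    # Reasoning output (the "Reasoning" / "Include reasoning" capabilities);
    # the exact shape of this field varies by gateway.
    "reasoning": {"effort": "medium"},
    # Tool calling (the "Tools" / "Tool choice" capabilities)
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",
}
```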