
LangMart: Qwen: Qwen3 235B A22B

OpenRouter
Context: 41K
Input: $0.18 / 1M tokens
Output: $0.54 / 1M tokens
Max Output: N/A

Model Overview

Model ID: openrouter/qwen/qwen3-235b-a22b
Name: Qwen: Qwen3 235B A22B
Provider: qwen
Released: 2025-04-28

Description

Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model developed by Qwen, activating 22B parameters per forward pass. It supports seamless switching between a "thinking" mode for complex reasoning, math, and code tasks, and a "non-thinking" mode for general conversational efficiency. The model demonstrates strong reasoning ability, multilingual support (100+ languages and dialects), advanced instruction-following, and agent tool-calling capabilities. It natively handles a 32K token context window and extends up to 131K tokens using YaRN-based scaling.
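
Mode selection happens at the prompt/template level. A minimal sketch, assuming the `enable_thinking` chat-template flag described in the Qwen3 model card (only the tokenizer is needed to build the prompt, so this runs without the 235B weights):

```python
# Minimal sketch: toggling Qwen3's thinking vs. non-thinking mode via the chat
# template. The enable_thinking flag follows the Qwen3 model card; verify it
# against the current template before relying on it.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-235B-A22B")

messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,   # set False for plain conversational output
)
print(prompt)
```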


Provider

qwen

Specifications

Context Window: 40,960 tokens
Modalities: text->text
Input Modalities: text
Output Modalities: text

Pricing

Input: $0.18 per 1M tokens
Output: $0.54 per 1M tokens
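
As a rough worked example at these rates, a request with a 10,000-token prompt and a 2,000-token completion costs roughly $0.003:

```python
# Rough cost estimate at the listed rates; token counts are illustrative only.
INPUT_PER_M = 0.18    # USD per 1M input tokens
OUTPUT_PER_M = 0.54   # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Approximate USD cost of a single request."""
    return (input_tokens / 1_000_000) * INPUT_PER_M + (output_tokens / 1_000_000) * OUTPUT_PER_M

print(f"${estimate_cost(10_000, 2_000):.5f}")  # $0.00288
```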

Capabilities

  • Frequency penalty
  • Include reasoning
  • Logit bias
  • Logprobs
  • Max tokens
  • Min p
  • Presence penalty
  • Reasoning
  • Repetition penalty
  • Response format
  • Seed
  • Stop
  • Structured outputs
  • Temperature
  • Tool choice
  • Tools
  • Top k
  • Top logprobs
  • Top p
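
A minimal sketch of calling the model through OpenRouter's OpenAI-compatible endpoint with a few of the parameters listed above; the model slug is assumed from this listing and should be checked against the live OpenRouter docs:

```python
# Minimal sketch: chat completion via OpenRouter's OpenAI-compatible API.
# The model slug below is assumed from this listing; verify before use.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",  # placeholder
)

response = client.chat.completions.create(
    model="qwen/qwen3-235b-a22b",
    messages=[{"role": "user", "content": "Summarize the trade-offs of MoE models."}],
    temperature=0.7,   # sampling temperature
    top_p=0.9,         # nucleus sampling
    max_tokens=512,    # cap on generated tokens
    seed=42,           # best-effort reproducibility
)
print(response.choices[0].message.content)
```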

Detailed Analysis

Qwen3-235B-A22B is the flagship mixture-of-experts (MoE) language model in the Qwen 3 series, released in April 2025, pairing frontier capability with efficient sparse activation.

  • Architecture: 235B total parameters with ~22B activated per forward pass (the "A22B" suffix), meaning only about 9% of the parameters are active per token, a large per-token compute saving versus a hypothetical dense 235B model. Uses the Qwen 3 MoE design with global-batch load balancing and no shared experts, trained on 36T tokens.
  • Performance: competitive with or exceeding GPT-4, Claude 3 Opus, and Gemini 1.5 Pro on major benchmarks; excels at complex reasoning, advanced mathematics, sophisticated code generation, and long-context understanding.
  • Use cases: applications that need maximum language-model capability, complex research and analysis, advanced code generation with architectural design, sophisticated content creation, high-stakes reasoning, and enterprise AI requiring frontier performance.
  • Context window: 32K tokens natively, extendable to 131K with YaRN-based scaling (this listing exposes 40,960 tokens), supporting extensive document processing.
  • Pricing: reflects the ~22B activated parameters rather than the 235B total, offering frontier capability at a fraction of the cost of an equivalent dense model.
  • Trade-offs: the highest-capability model in the Qwen 3 lineup; the MoE architecture keeps its inference cost well below that of a comparably sized dense model.

Best suited to applications that demand maximum capability while keeping inference costs in check; it demonstrates that sparse activation can reach frontier performance at significantly reduced computational cost.
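
A quick back-of-the-envelope check of the activation figures quoted above, using only the numbers from this listing:

```python
# Back-of-the-envelope check of the sparse-activation arithmetic above.
total_params = 235e9   # total parameters
active_params = 22e9   # parameters activated per forward pass (A22B)

active_fraction = active_params / total_params
print(f"Active fraction per token: {active_fraction:.1%}")     # ~9.4%
print(f"Reduction vs. dense 235B:  {1 - active_fraction:.1%}")  # ~90.6%
```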