# LangMart: Cogito V2 Preview Llama 109B
## Model Overview
| Property | Value |
|---|---|
| Model ID | openrouter/deepcogito/cogito-v2-preview-llama-109b-moe |
| Name | Cogito V2 Preview Llama 109B |
| Provider | deepcogito |
| Released | 2025-09-02 |
## Description
An instruction-tuned, hybrid-reasoning Mixture-of-Experts model built on Llama-4-Scout-17B-16E. Cogito v2 can answer directly or engage an extended “thinking” phase, with alignment guided by Iterated Distillation & Amplification (IDA). It targets coding, STEM, instruction following, and general helpfulness, with stronger multilingual, tool-calling, and reasoning performance than size-equivalent baselines. The model supports long-context use (up to 10M tokens in the base architecture) and standard Transformers workflows. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. Learn more in our docs.
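The reasoning toggle described above can be exercised through a chat-completions request. The sketch below assumes an OpenRouter-style request body with a `reasoning` object containing the `enabled` boolean; the exact model slug and field names should be checked against the provider's docs.

```python
import json

# Sketch of a chat-completions request body that switches the extended
# "thinking" phase on or off via the reasoning.enabled boolean
# (assumed OpenRouter-style field; verify against the API reference).
def build_request(prompt: str, think: bool) -> dict:
    return {
        "model": "deepcogito/cogito-v2-preview-llama-109b-moe",
        "messages": [{"role": "user", "content": prompt}],
        # enabled=False -> the model answers directly;
        # enabled=True  -> it reasons through an extended thinking phase first.
        "reasoning": {"enabled": think},
    }

payload = build_request("Prove that sqrt(2) is irrational.", think=True)
print(json.dumps(payload, indent=2))
```

Sending this body (with an `Authorization: Bearer <key>` header) to the provider's chat-completions endpoint would then return either a direct answer or one preceded by a reasoning trace, depending on the flag.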
## Provider
deepcogito
## Specifications
| Spec | Value |
|---|---|
| Context Window | 32,767 tokens |
| Modalities | text + image → text |
| Input Modalities | image, text |
| Output Modalities | text |
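Because the model accepts image and text input but emits text only, a user message carries mixed content parts. A minimal sketch of one such message, assuming the OpenAI-style `image_url` content-part convention (the exact field names are an assumption, and the URL is a placeholder):

```python
# A single user message mixing an image part and a text part.
# The content-part schema shown here is the OpenAI-style convention,
# assumed to apply; the image URL is purely illustrative.
message = {
    "role": "user",
    "content": [
        {"type": "image_url",
         "image_url": {"url": "https://example.com/diagram.png"}},
        {"type": "text", "text": "Describe what this diagram shows."},
    ],
}

# The reply is text only, e.g. {"role": "assistant", "content": "..."}.
part_types = [part["type"] for part in message["content"]]
print(part_types)
```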
## Pricing
| Type | Price |
|---|---|
| Input | $0.18 per 1M tokens |
| Output | $0.59 per 1M tokens |
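The per-1M-token prices above translate into a simple per-request cost estimate; the helper below is a sketch using those two published rates (the function name is ours, not part of any API):

```python
# Prices from the table above, in USD per 1M tokens.
INPUT_PER_M = 0.18
OUTPUT_PER_M = 0.59

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# A 20k-token prompt with a 2k-token reply:
print(round(estimate_cost(20_000, 2_000), 6))  # 0.00478
```

Output tokens cost roughly 3.3× input tokens here, so long generations (including extended reasoning traces, if billed as output) dominate the bill.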
## Capabilities
- Frequency penalty
- Include reasoning
- Logit bias
- Max tokens
- Min p
- Presence penalty
- Reasoning
- Repetition penalty
- Stop
- Temperature
- Tool choice
- Tools
- Top k
- Top p
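Most of the capabilities listed above map onto request-body fields. The payload below sketches several of them together, assuming OpenAI/OpenRouter-style field names; `get_weather` is a hypothetical tool used purely for illustration.

```python
import json

# Sketch of a request exercising the supported sampling and tool parameters.
# Field names follow the OpenAI/OpenRouter-style convention (an assumption);
# get_weather is a made-up tool for illustration only.
payload = {
    "model": "deepcogito/cogito-v2-preview-llama-109b-moe",
    "messages": [{"role": "user", "content": "What's the weather in Lima?"}],
    "max_tokens": 512,
    "temperature": 0.7,        # softmax temperature
    "top_p": 0.9,              # nucleus sampling
    "top_k": 40,               # keep only the 40 most likely tokens
    "min_p": 0.05,             # drop tokens below 5% of the top probability
    "repetition_penalty": 1.1,
    "stop": ["\n\n"],          # stop sequence
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",     # let the model decide whether to call the tool
}
print(json.dumps(payload)[:40])
```

With `tool_choice` set to `"auto"`, the model may reply with a `tool_calls` entry naming `get_weather` instead of plain text, which the caller then executes and feeds back as a `tool` message.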
## Detailed Analysis
Cogito v2 Preview Llama 109B MoE is a Mixture-of-Experts model by DeepCogito with roughly 109 billion total parameters, of which only a subset of experts is active per token. Built on the Llama-4-Scout-17B-16E architecture (17B active parameters, 16 experts), it extends that base into an instruction-tuned, hybrid-reasoning system that can either answer directly or engage an extended thinking phase.