LangMart: NVIDIA: Nemotron Nano 12B 2 VL
Model Overview
| Property | Value |
|---|---|
| Model ID | openrouter/nvidia/nemotron-nano-12b-v2-vl |
| Name | NVIDIA: Nemotron Nano 12B 2 VL |
| Provider | nvidia |
| Released | 2025-10-28 |
Description
NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, combining transformer-level accuracy with Mamba’s memory-efficient sequence modeling for significantly higher throughput and lower latency.
The model supports inputs of text and multi-image documents, producing natural-language outputs. It is trained on high-quality NVIDIA-curated synthetic datasets optimized for optical-character recognition, chart reasoning, and multimodal comprehension.
Nemotron Nano 2 VL achieves leading results on OCRBench v2 and scores an average of ≈74 across MMMU, MathVista, AI2D, OCRBench, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, surpassing prior open VL baselines. With Efficient Video Sampling (EVS), it handles long-form videos while reducing inference cost.
Open weights, training data, and fine-tuning recipes are released under a permissive NVIDIA open license, with deployment supported across NeMo, NIM, and major inference runtimes.
Provider
nvidia
Specifications
| Spec | Value |
|---|---|
| Context Window | 131,072 tokens |
| Modalities | text+image+video->text |
| Input Modalities | image, text, video |
| Output Modalities | text |
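Since the model accepts image inputs alongside text, a request can be assembled in the OpenAI-compatible message format used by OpenRouter's `/api/v1/chat/completions` endpoint. The sketch below is illustrative, not an official client: the model ID is taken from this card (without the `openrouter/` gateway prefix, an assumption about how the route resolves), and the image URL is a placeholder.

```python
import json

# Model ID from this card; the "openrouter/" gateway prefix is dropped here,
# which is an assumption about how the route is addressed on OpenRouter itself.
MODEL_ID = "nvidia/nemotron-nano-12b-v2-vl"

def build_vision_request(prompt: str, image_url: str, max_tokens: int = 512) -> dict:
    """Assemble a multimodal chat payload mixing text and one image input."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

# Build (but do not send) a request against a placeholder image URL.
payload = build_vision_request("Summarize this chart.", "https://example.com/chart.png")
print(json.dumps(payload, indent=2))
```

The same message structure extends to multi-image documents by appending additional `image_url` entries to the `content` list, which matches the multi-image document support described above.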
Pricing
| Type | Price |
|---|---|
| Input | $0.20 per 1M tokens |
| Output | $0.60 per 1M tokens |
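The per-million-token prices above make per-request cost a simple linear calculation. A minimal sketch, using only the two rates listed in the pricing table:

```python
# Rates from the pricing table above (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.20
OUTPUT_PRICE_PER_M = 0.60

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 10k-token prompt producing a 2k-token completion
print(f"${estimate_cost(10_000, 2_000):.4f}")  # → $0.0032
```

Note that the 131,072-token context window bounds how large `input_tokens` plus the requested completion can be in a single call.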
Capabilities
- Frequency penalty
- Include reasoning
- Max tokens
- Min p
- Presence penalty
- Reasoning
- Repetition penalty
- Response format
- Seed
- Stop
- Temperature
- Top k
- Top p
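The capabilities listed above correspond to request-level parameters in OpenAI-style APIs. As a hedged sketch, the helper below validates a set of sampling and decoding controls against this card's capability list before attaching them to a payload; the snake_case parameter names follow common gateway conventions and are an assumption, not a documented contract.

```python
# Parameter names derived from the capability list above; the snake_case
# spellings are assumed to match the gateway's request schema.
SUPPORTED_PARAMS = {
    "temperature", "top_p", "top_k", "min_p", "seed", "stop",
    "max_tokens", "frequency_penalty", "presence_penalty",
    "repetition_penalty", "response_format",
}

def with_sampling(payload: dict, **params) -> dict:
    """Attach supported sampling/decoding controls to a request payload,
    rejecting anything outside the model's advertised capability list."""
    unknown = set(params) - SUPPORTED_PARAMS
    if unknown:
        raise ValueError(f"unsupported parameters: {sorted(unknown)}")
    payload.update(params)
    return payload

req = with_sampling(
    {"model": "nvidia/nemotron-nano-12b-v2-vl"},
    temperature=0.7, top_p=0.9, top_k=40, seed=42,
)
print(req)
```

Reasoning-related toggles ("Reasoning", "Include reasoning") are gateway-specific request fields rather than standard sampling parameters, so they are left out of this sketch.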