
LangMart: Arcee AI: Virtuoso Large

OpenRouter · 131K context · $0.75 / 1M input tokens · $1.20 / 1M output tokens · Max output: N/A


Model Overview

Property   Value
Model ID   openrouter/arcee-ai/virtuoso-large
Name       Arcee AI: Virtuoso Large
Provider   arcee-ai
Released   2025-05-05
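
The model ID above is what API requests reference. As a minimal sketch, assuming an OpenAI-compatible chat-completions gateway (OpenRouter exposes one at https://openrouter.ai/api/v1; the base URL, the unprefixed model slug, and the environment variable name used here are assumptions, and a LangMart gateway may expect the prefixed ID from the table):

```python
# Minimal sketch: calling Virtuoso Large through an OpenAI-compatible endpoint.
# Assumptions: the OpenRouter base URL, the "arcee-ai/virtuoso-large" slug, and the
# OPENROUTER_API_KEY environment variable name.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # assumed OpenAI-compatible gateway
    api_key=os.environ["OPENROUTER_API_KEY"],  # hypothetical env var holding the key
)

response = client.chat.completions.create(
    model="arcee-ai/virtuoso-large",
    messages=[{"role": "user", "content": "Summarize the main risks discussed in this filing: ..."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```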

Description

Virtuoso-Large is Arcee's top-tier general-purpose LLM at 72B parameters, tuned to tackle cross-domain reasoning, creative writing, and enterprise QA. Unlike many 70B-class peers, it retains the 128K context inherited from Qwen 2.5, letting it ingest books, codebases, or financial filings wholesale. Training blended DeepSeek R1 distillation, multi-epoch supervised fine-tuning, and a final DPO/RLHF alignment stage, yielding strong performance on BIG-Bench-Hard, GSM-8K, and long-context needle-in-a-haystack tests. Enterprises use Virtuoso-Large as the "fallback" brain in Conductor pipelines when the smaller models (SLMs) in the pipeline flag low confidence. Despite its size, aggressive KV-cache optimizations keep first-token latency in the low-second range on 8× H100 nodes, making it a practical production-grade powerhouse.
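
The "fallback brain" pattern mentioned above can be sketched as a confidence-gated router: a cheaper small model answers first, and the request escalates to Virtuoso-Large only when confidence is low. The helper names, the confidence field, and the 0.7 threshold below are illustrative assumptions, not Arcee Conductor's actual API.

```python
# Illustrative confidence-gated fallback router (not Arcee Conductor's real implementation).
from dataclasses import dataclass


@dataclass
class ModelResult:
    text: str
    confidence: float  # assumed 0.0-1.0 self-reported confidence from the small model


def run_slm(prompt: str) -> ModelResult:
    """Hypothetical call to a small, cheap model; stubbed with a fixed answer here."""
    return ModelResult(text="draft answer from the small model", confidence=0.55)


def run_virtuoso(prompt: str) -> str:
    """Hypothetical call to arcee-ai/virtuoso-large for hard cases; stubbed here."""
    return "thorough answer from Virtuoso-Large"


def answer(prompt: str, threshold: float = 0.7) -> str:
    first_pass = run_slm(prompt)
    if first_pass.confidence >= threshold:
        return first_pass.text
    # Low confidence from the small model: escalate to the larger fallback model.
    return run_virtuoso(prompt)


print(answer("Explain the tax implications of a reverse stock split."))
```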

Provider

arcee-ai

Specifications

Spec                Value
Context Window      131,072 tokens
Modalities          text -> text
Input Modalities    text
Output Modalities   text
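
The 131,072-token window covers the prompt plus the generated completion. A rough pre-flight check is sketched below; it uses tiktoken's cl100k_base encoding purely as an approximation, since Virtuoso-Large inherits Qwen 2.5's tokenizer, which counts tokens somewhat differently.

```python
# Rough pre-flight check that a prompt plus the requested completion fits the context window.
# tiktoken's cl100k_base is only an approximation of the model's actual (Qwen 2.5) tokenizer.
import tiktoken

CONTEXT_WINDOW = 131_072


def fits_in_context(prompt: str, max_output_tokens: int) -> bool:
    enc = tiktoken.get_encoding("cl100k_base")
    prompt_tokens = len(enc.encode(prompt))
    return prompt_tokens + max_output_tokens <= CONTEXT_WINDOW


print(fits_in_context("Analyze this 10-K filing: ...", max_output_tokens=2048))  # True
```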

Pricing

Type     Price
Input    $0.75 per 1M tokens
Output   $1.20 per 1M tokens
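
At these rates, per-request cost is linear in token counts. The sketch below works through an illustrative long-context request of 100,000 input tokens and 2,000 output tokens.

```python
# Cost estimate at the listed rates: $0.75 per 1M input tokens, $1.20 per 1M output tokens.
INPUT_PRICE_PER_M = 0.75
OUTPUT_PRICE_PER_M = 1.20


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + (
        output_tokens / 1_000_000
    ) * OUTPUT_PRICE_PER_M


# Example: 100K input tokens + 2K output tokens -> $0.075 + $0.0024 = $0.0774
print(f"${estimate_cost(100_000, 2_000):.4f}")
```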

Capabilities

  • Frequency penalty
  • Logit bias
  • Max tokens
  • Min p
  • Presence penalty
  • Repetition penalty
  • Stop
  • Temperature
  • Tool choice
  • Tools
  • Top k
  • Top p
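
These parameters map onto standard OpenAI-style request fields. Below is a hedged sketch exercising several of them, including tool calling, under the same assumed OpenRouter-compatible endpoint as above; the tool definition is purely hypothetical, and top_k, min_p, and repetition_penalty are passed via extra_body on the assumption that the gateway accepts them as extra request-body fields.

```python
# Sketch of a request using several of the listed parameters, including tool calling.
# Field names follow the OpenAI-compatible schema; endpoint, slug, and tool are assumptions.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

tools = [{
    "type": "function",
    "function": {
        "name": "lookup_filing",  # hypothetical tool, for illustration only
        "description": "Fetch a section of a financial filing by ticker and section name.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {"type": "string"},
                "section": {"type": "string"},
            },
            "required": ["ticker", "section"],
        },
    },
}]

response = client.chat.completions.create(
    model="arcee-ai/virtuoso-large",
    messages=[{"role": "user", "content": "What risk factors does NVDA list in its latest 10-K?"}],
    temperature=0.3,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.0,
    max_tokens=1024,
    stop=["</answer>"],
    tools=tools,
    tool_choice="auto",
    # Assumed gateway-specific pass-through fields for the remaining listed parameters.
    extra_body={"top_k": 40, "min_p": 0.05, "repetition_penalty": 1.05},
)
print(response.choices[0].message)
```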

Detailed Analysis

Arcee AI: Virtuoso Large