
Arcee AI: Virtuoso Large

by Arcee AI

Virtuoso-Large is Arcee's top-tier general-purpose LLM at 72B parameters, tuned for cross-domain reasoning, creative writing and enterprise QA. Unlike many 70B-class peers, it retains the 128K context window inherited from Qwen 2.5, letting it ingest books, codebases or financial filings wholesale. Training blended DeepSeek R1 distillation, multi-epoch supervised fine-tuning and a final DPO/RLHF alignment stage, yielding strong performance on BIG-Bench-Hard, GSM-8K and long-context needle-in-a-haystack tests. Enterprises use Virtuoso-Large as the "fallback" brain in Conductor pipelines when smaller SLMs flag low confidence. Despite its size, aggressive KV-cache optimizations keep first-token latency in the low-second range on 8× H100 nodes, making it a practical production-grade powerhouse.
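
The "fallback brain" pattern described above amounts to confidence-gated routing: a small model answers first, and the request escalates to the larger model only when confidence is low. The sketch below is illustrative only; the call_model helper, the model identifiers and the 0.7 threshold are assumptions for the example, not Arcee's Conductor API.

```python
# Illustrative low-confidence fallback routing (not a real Conductor API).
from dataclasses import dataclass


@dataclass
class ModelReply:
    text: str
    confidence: float  # 0.0-1.0, however your serving stack estimates it


def call_model(model_name: str, prompt: str) -> ModelReply:
    """Placeholder for an actual inference call (HTTP request, SDK, etc.)."""
    raise NotImplementedError("wire this to your own serving endpoint")


CONFIDENCE_THRESHOLD = 0.7  # tune per task; purely illustrative


def answer(prompt: str) -> str:
    # 1. Try the cheap, specialized small language model first.
    slm_reply = call_model("small-domain-slm", prompt)
    if slm_reply.confidence >= CONFIDENCE_THRESHOLD:
        return slm_reply.text
    # 2. Low confidence: escalate to the large general-purpose fallback model.
    fallback_reply = call_model("virtuoso-large", prompt)
    return fallback_reply.text
```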

Pricing

Input: $0.75 / 1M tokens
Output: $1.20 / 1M tokens
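
To make the per-token rates concrete, here is a minimal cost calculation. The token counts (100,000 input, 2,000 output) are hypothetical and chosen only to illustrate a long-context request.

```python
# Worked pricing example with illustrative token counts.
INPUT_PRICE_PER_M = 0.75   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.20  # USD per 1M output tokens

input_tokens = 100_000   # e.g. a long document fed into the 131K context
output_tokens = 2_000    # e.g. a summary or answer

cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
     + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
print(f"Estimated cost: ${cost:.4f}")  # -> Estimated cost: $0.0774
```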

Specifications

Context Window: 131K tokens
Max Output: 64K tokens
Modality: text
Input Types: text
Output Types: text

Strategic Analysis 🔒

Unlock vCAIO insights to make better model decisions:

  • Governance Risk Rating (Low / Medium / High)
  • Quality Tier Classification
  • Best Use Cases & Tags
  • Strategic Verdict from vCAIO
  • AI-Verified Fit Scoring

Not sure if this model fits your use case?

Describe your task and get AI-verified recommendations in seconds.

Try Model Advisor
