
NVIDIA: Nemotron Nano 12B 2 VL

by NVIDIA

NVIDIA Nemotron Nano 2 VL is a 12-billion-parameter open multimodal reasoning model designed for video understanding and document intelligence. It introduces a hybrid Transformer-Mamba architecture, combining transformer-level accuracy with Mamba's memory-efficient sequence modeling for significantly higher throughput and lower latency. The model accepts text, multi-image document, and video inputs and produces natural-language text outputs. It is trained on high-quality NVIDIA-curated synthetic datasets optimized for optical character recognition, chart reasoning, and multimodal comprehension. Nemotron Nano 2 VL achieves leading results on OCRBench v2 and an average score of roughly 74 across MMMU, MathVista, AI2D, OCRBench, OCR-Reasoning, ChartQA, DocVQA, and Video-MME, surpassing prior open vision-language baselines. With Efficient Video Sampling (EVS), it handles long-form videos while reducing inference cost. Open weights, training data, and fine-tuning recipes are released under a permissive NVIDIA open license, and deployment is supported across NeMo, NIM, and major inference runtimes.
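
For the document-intelligence use case, the sketch below shows one plausible way to send a text-plus-image request through an OpenAI-compatible chat endpoint, the style of interface NIM and most inference runtimes expose. The base URL, API key, and model identifier are placeholders, not values confirmed by this page.

```python
# Minimal sketch: text + image document request over an OpenAI-compatible API.
# base_url, api_key, and the model ID are placeholders for your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint (e.g. a NIM container)
    api_key="not-needed-for-local",       # placeholder credential
)

response = client.chat.completions.create(
    model="nvidia/nemotron-nano-12b-v2-vl",  # placeholder model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the key figures in this invoice."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/invoice-page-1.png"},
                },
            ],
        }
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```

The same message structure extends to multi-image documents by appending additional image_url entries to the content list.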

Pricing

Input: $0.20 / 1M tokens
Output: $0.60 / 1M tokens
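
As a quick sanity check on what these rates mean per request, here is a small calculation using the listed prices; the token counts in the example are purely illustrative.

```python
# Rough per-request cost from the listed rates ($0.20 / 1M input, $0.60 / 1M output).
INPUT_RATE = 0.20 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.60 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 20,000-token document prompt with a 1,000-token summary
# comes to about $0.0046.
print(f"${request_cost(20_000, 1_000):.4f}")
```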

Specifications

Context Window: 131K tokens
Max Output Tokens: not listed
Modality: multimodal
Input Types: image, text, video
Output Types: text
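
Because long documents and video transcripts can approach the 131K-token limit, a simple pre-flight length check can avoid failed requests. The sketch below uses a crude characters-per-token heuristic as a stand-in for the model's actual tokenizer, which this page does not specify.

```python
# Rough pre-flight check against the 131K-token context window. The ~4
# characters-per-token heuristic is an approximation; use the model's real
# tokenizer for exact counts.
CONTEXT_WINDOW = 131_000  # tokens, per the listed specification

def fits_in_context(prompt_text: str, reserved_for_output: int = 1_024) -> bool:
    """Return True if the prompt (plus room reserved for output) is likely to fit."""
    estimated_tokens = len(prompt_text) // 4  # crude chars-to-tokens estimate
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW

long_document = "..." * 100_000  # stand-in for concatenated OCR'd pages
print(fits_in_context(long_document))
```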


