MiniMax: MiniMax M1

by MiniMax

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It leverages a hybrid Mixture-of-Experts (MoE) architecture paired with a custom "lightning attention" mechanism, allowing it to process long sequences—up to 1 million tokens—while maintaining competitive FLOP efficiency. With 456 billion total parameters and 45.9B active per token, this variant is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement learning pipeline (CISPO), M1 excels in long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models like DeepSeek R1 and Qwen3-235B.

Pricing

Input: $0.40 / 1M tokens
Output: $2.20 / 1M tokens
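
As a quick sanity check on these rates, the short Python sketch below estimates the cost of a single request from its token counts. The per-token prices are the ones quoted above; the token counts in the example are illustrative placeholders, not measurements.

    # Rough cost estimate for one MiniMax M1 request at the listed rates.
    INPUT_PRICE_PER_M = 0.40   # USD per 1M input tokens
    OUTPUT_PRICE_PER_M = 2.20  # USD per 1M output tokens

    def request_cost(input_tokens: int, output_tokens: int) -> float:
        """Estimated USD cost of a single request at the listed prices."""
        return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
             + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

    # Example: a 200K-token long-context prompt with a 10K-token response.
    print(f"${request_cost(200_000, 10_000):.4f}")  # -> $0.1020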

Specifications

Context Window: 1.0M tokens
Max Output: 40K tokens
Modality: text
Input Types: text
Output Types: text
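
Since this listing only specifies limits (1.0M-token context, 40K-token max output) rather than an access method, the snippet below is a minimal sketch of calling the model through an OpenAI-compatible chat client. The base URL, API-key environment variable, and model identifier are assumptions for illustration, not values taken from this page.

    # Minimal sketch of a chat completion request to MiniMax M1 via an
    # OpenAI-compatible endpoint. base_url, the env var, and the model
    # name are illustrative assumptions; substitute your provider's values.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://example-provider.com/v1",  # hypothetical endpoint
        api_key=os.environ["PROVIDER_API_KEY"],      # hypothetical env var
    )

    response = client.chat.completions.create(
        model="minimax-m1",   # assumed model identifier
        max_tokens=4096,      # response budget; hard cap is the 40K max output
        messages=[{"role": "user", "content": "Outline the trade-offs of hybrid MoE attention designs."}],
    )
    print(response.choices[0].message.content)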

