Open‑source large‑scale reasoning model with 1 million token context and hybrid Mixture‑of‑Experts architecture.
MiniMax‑M1 is an open‑weight, hybrid Mixture‑of‑Experts large language model with a native 1‑million‑token context window. It combines lightning attention with an efficient RL training algorithm (CISPO) to excel at long‑context reasoning, tool use, coding, and mathematics. Released under the Apache 2.0 license, it supports structured function calling and can be deployed via vLLM, Transformers, or API.
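Since the model supports structured function calling through an OpenAI‑compatible interface (as served by vLLM), a minimal sketch of such a request looks like the following. The tool schema, the `get_weather` function, the model id, and the endpoint URL in the trailing comment are all illustrative assumptions, not values taken from this page:

```python
import json

# Illustrative tool schema (hypothetical "get_weather" function) in the
# OpenAI-compatible format accepted by vLLM's /v1/chat/completions endpoint.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Request payload; the model id is an assumption -- use the name your
# deployment actually registers.
payload = {
    "model": "MiniMaxAI/MiniMax-M1-80k",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",
}

# Serialize the payload; POST this body to a running vLLM server, e.g.
# http://localhost:8000/v1/chat/completions (hypothetical local endpoint).
print(json.dumps(payload, indent=2))
```

The response would then contain a `tool_calls` entry with the function name and JSON arguments, which the client executes before returning the result to the model.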