Open-source hybrid‑reasoning MoE foundation model optimized for intelligent agent tasks with 128K context and tool use.
GLM‑4.5 is an open‑source Mixture‑of‑Experts (MoE) large language model from Z.ai, designed for intelligent agent applications. It features 355B parameters (32B active) with an efficient "thinking/non‑thinking" hybrid reasoning mode and a 128K context length. A smaller variant, GLM‑4.5‑Air, offers 106B parameters (12B active) for more efficient deployments. GLM‑4.5 excels at reasoning, coding, and tool‑calling tasks and ranks among the top performers on benchmarks compared with models like o3, Grok‑4, Claude Sonnet, and GPT‑4.1.
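As a rough illustration of how the hybrid reasoning mode might be driven from client code, the sketch below builds a chat-completions-style request payload with a toggle for thinking mode. The `thinking` field name and its values are assumptions for illustration only, not confirmed API details; consult Z.ai's official API documentation for the actual parameters and endpoint.

```python
import json


def build_request(prompt: str, thinking: bool = True) -> dict:
    """Build a chat-completions-style request payload for GLM-4.5.

    The "thinking" field and its enabled/disabled values are
    hypothetical, shown only to illustrate toggling the hybrid
    'thinking/non-thinking' reasoning mode.
    """
    return {
        "model": "glm-4.5",
        "messages": [{"role": "user", "content": prompt}],
        # Hypothetical toggle for the hybrid reasoning mode.
        "thinking": {"type": "enabled" if thinking else "disabled"},
    }


# Build a request with the (assumed) thinking mode switched off.
payload = build_request("Summarize this pull request.", thinking=False)
print(json.dumps(payload, indent=2))
```

Disabling the thinking mode would trade reasoning depth for lower latency, which is the motivation behind the hybrid design described above.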