Beta version: information may not be fully accurate. Please report any discrepancies.
Latest Data: 2026-02-16
Context Window: 256k tokens
Input Cost: $1.00 per 1M tokens
Output Cost: $4.00 per 1M tokens
Parameters: Proprietary (model footprint not disclosed)
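As a worked example of the listed rates, the sketch below estimates the dollar cost of a single request. The per-million-token prices come from the card above; the token counts and the `request_cost` helper are illustrative assumptions, not part of any provider API.

```python
# Listed rates: $1.00 per 1M input tokens, $4.00 per 1M output tokens.
INPUT_RATE = 1.00 / 1_000_000   # dollars per input token
OUTPUT_RATE = 4.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request at the listed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 200k-token prompt with a 4k-token completion:
print(round(request_cost(200_000, 4_000), 4))  # 0.216
```

Note that output tokens are four times as expensive as input tokens here, so long completions dominate the bill even when the prompt fills most of the 256k context window.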
Performance Analysis // Verified Benchmarks
Massive Multitask Language Understanding covers 57 subjects across STEM, the humanities, social sciences, and more.
Functional correctness of synthesized programs from docstrings.
Multi-discipline Multimodal Understanding and Reasoning.
American Invitational Mathematics Examination. Competition-level math.
Chatbot Arena Elo score. Crowd-sourced human-preference ranking.
Artificial Analysis aggregate intelligence index.
A harder, more robust version of MMLU, focused on complex reasoning and STEM subjects.
Humanity's Last Exam: a hard reasoning benchmark, evaluated without tool use.
Artificial Analysis aggregate math capability index.
500-problem math benchmark for broad quantitative reasoning.
Contamination-free coding benchmark using recent problems.
Artificial Analysis aggregate coding capability index.
Graduate-Level Google-Proof Q&A Benchmark.
Artificial Analysis Long Context Reasoning benchmark: evaluates reasoning over long inputs.
Artificial Analysis IFBench. Evaluates precise instruction following with constraints.
American Invitational Mathematics Examination 2025 problems.
Hard split of Terminal-Bench focused on tougher terminal workflows.
Telecom-domain tool-use and workflow benchmark.
Scientific programming benchmark for code synthesis and correctness.
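Code benchmarks of the "functional correctness of synthesized programs" kind are commonly scored with the unbiased pass@k estimator introduced with HumanEval (Chen et al., 2021). A minimal sketch, assuming `n` samples are generated per problem and `c` of them pass the tests:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n generations, c of which
    are correct, passes the tests."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws without a success
    return 1.0 - comb(n - c, k) / comb(n, k)

# 3 correct out of 10 samples; for k=1 this reduces to c/n:
print(round(pass_at_k(10, 3, 1), 6))  # 0.3
```

For k=1 the estimator is just the fraction of correct samples, which is why pass@1 is the figure most leaderboards report.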
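The Chatbot Arena ranking above uses Elo-style ratings, where a rating gap maps to an expected head-to-head win rate. A minimal sketch of that mapping (the rating values below are made up for illustration):

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected win probability of model A over model B under the
    standard Elo logistic curve with a 400-point scale factor."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# A 100-point rating gap corresponds to roughly a 64% expected win rate:
print(round(elo_expected(1400, 1300), 2))  # 0.64
```

Equal ratings give an expected score of exactly 0.5, which is why small Elo differences near the top of the leaderboard translate to near coin-flip preference rates.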