
Beta version: information might not be fully accurate. Please report any discrepancies.

API Reference

API Documentation

REST API for accessing models, benchmarks, scores, and leaderboard data programmatically.

  • Models: 1573
  • Benchmarks: 206
  • Categories: 17
  • Sources: 30

Static API Architecture
Optimized for performance and reliability

This API uses static export for optimal performance:

  • All endpoints pre-generated at build time
  • Served from Cloudflare's edge network globally
  • Response time: <20ms worldwide
  • No server-side processing or cold starts
  • 100% uptime SLA

⚠️ Note: Query parameters for filtering are not supported server-side. Use client-side JavaScript to filter the returned data.

Client-Side Filtering

Since the API is static, filtering must be done client-side. Here are examples:

// Fetch all models
const response = await fetch('/api/v1/models');
const data = await response.json();

// Filter by provider
const openaiModels = data.models.filter(m => m.provider === 'OpenAI');

// Filter by family
const llamaModels = data.models.filter(m => m.family === 'llama');

// Filter by capability
const reasoningModels = data.models.filter(m => 
  m.apiSupport?.reasoning === true
);

// Filter by open source
const openSourceModels = data.models.filter(m => m.isOpenSource);

// Combine filters
const openaiReasoningModels = data.models.filter(m => 
  m.provider === 'OpenAI' && m.apiSupport?.reasoning
);
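The limit and offset parameters on /api/v1/models are likewise applied client-side. A minimal pagination sketch, where the inline array stands in for a fetched response (the sample IDs are illustrative):

```javascript
// Apply limit/offset pagination to an already-fetched models array.
// In the browser: const { models } = await (await fetch('/api/v1/models')).json();
function paginate(items, { limit = 100, offset = 0 } = {}) {
  return items.slice(offset, offset + limit);
}

// Inline sample standing in for the fetched response
const models = [{ id: 'a' }, { id: 'b' }, { id: 'c' }, { id: 'd' }];

const page = paginate(models, { limit: 2, offset: 1 });
console.log(page.map(m => m.id)); // → [ 'b', 'c' ]
```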

Base URL: https://llm-registry.com/api/v1

Format: JSON

Auth: None

Endpoints

GET /api/v1

API root with endpoint listing

Sample Response

{
  "apiVersion": "v1",
  "endpoints": { ... },
  "attribution": { ... }
}
GET /api/v1/meta

Registry metadata including counts, categories, and latest score date

Sample Response

{
  "apiVersion": "v1",
  "generatedAt": "2026-02-19T...",
  "latestScoreDate": "2026-02-16",
  "counts": {
    "models": <dynamic>,
    "benchmarks": <dynamic>,
    "categories": <dynamic>,
    "sources": <dynamic>
  },
  "categories": [...],
  "endpoints": [...]
}
GET /api/v1/models

List all models (static pre-generated response)

Query Parameters

limit: Results per page (1-500, default: 100) - Client-side only
offset: Pagination offset (default: 0) - Client-side only

Sample Response

{
  "total": 150,
  "offset": 0,
  "limit": 100,
  "models": [
    {
      "id": "claude-3-5-sonnet-20241022",
      "name": "Claude 3.5 Sonnet",
      "provider": "Anthropic",
      "family": "claude-sonnet",
      "status": "active",
      "releaseDate": "2024-10-22",
      "trainingCutoff": "2024-04",
      "capabilities": ["text", "vision", "tools"],
      "isOpenSource": false,
      "specs": {
        "contextWindow": 200000,
        "maxOutputTokens": 64000,
        "pricing": {
          "input": 3.0,
          "output": 15.0,
          "cacheInput": 0.3,
          "cacheOutput": 3.75
        }
      },
      "apiSupport": {
        "reasoning": false,
        "vision": true,
        "tools": true,
        "structuredOutput": true,
        "attachment": true
      },
      "coverage": 85.2
    }
  ]
}
GET /api/v1/models/[id]

Get a single model by ID with full details and scores

Sample Response

{
  "model": {
    "id": "claude-3-5-sonnet",
    "name": "Claude 3.5 Sonnet",
    "provider": "Anthropic",
    "scores": {
      "mmlu": { "score": 88.7, "sourceId": "anthropic", ... },
      ...
    }
  }
}
GET /api/v1/benchmarks

List all benchmarks with optional category filter

Query Parameters

category: Filter by category (e.g., 'Coding', 'Reasoning')

Sample Response

{
  "total": <dynamic>,
  "categories": ["Coding", "Math", "Reasoning", ...],
  "benchmarks": [
    {
      "id": "mmlu",
      "name": "MMLU",
      "category": "Knowledge",
      "maxScore": 100,
      "normalizeMethod": "max"
    }
  ]
}
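Since the category parameter is also applied client-side, one option is to group the benchmarks payload by category locally. A sketch (the benchmark IDs below are illustrative sample data, not the registry's full list):

```javascript
// Group a /api/v1/benchmarks payload by category, client-side.
// In the browser: const data = await (await fetch('/api/v1/benchmarks')).json();
const data = {
  benchmarks: [
    { id: 'mmlu', category: 'Knowledge' },
    { id: 'humaneval', category: 'Coding' },
    { id: 'gpqa', category: 'Knowledge' },
  ],
};

// Build { category -> [benchmark ids] }
const byCategory = {};
for (const b of data.benchmarks) {
  (byCategory[b.category] ??= []).push(b.id);
}
console.log(byCategory.Knowledge); // → [ 'mmlu', 'gpqa' ]
```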
GET /api/v1/scores

Query scores with flexible filtering

Query Parameters

modelId: Filter by model ID
benchmarkId: Filter by benchmark ID
category: Filter by benchmark category
sourceId: Filter by data source
limit: Results per page (1-5000, default: 500)
offset: Pagination offset (default: 0)

Sample Response

{
  "total": <dynamic>,
  "scores": [
    {
      "modelId": "gpt-4o",
      "modelName": "GPT-4o",
      "benchmarkId": "mmlu",
      "benchmarkName": "MMLU",
      "category": "Knowledge",
      "score": 88.7,
      "normalizedScore": 88.7,
      "verified": true,
      "verificationLevel": "third_party",
      "sourceId": "openai",
      "asOfDate": "2024-05-13"
    }
  ]
}
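Per the static-export note at the top, these filters can be combined client-side on the scores array. A sketch using field names from the sample response above (the inline data is illustrative):

```javascript
// Filter a /api/v1/scores payload client-side.
// In the browser: const data = await (await fetch('/api/v1/scores')).json();
const data = {
  scores: [
    { modelId: 'gpt-4o', category: 'Knowledge', verified: true },
    { modelId: 'gpt-4o', category: 'Coding', verified: false },
    { modelId: 'claude-3-5-sonnet', category: 'Knowledge', verified: true },
  ],
};

// Keep only verified Knowledge scores for a given model
const filtered = data.scores.filter(s =>
  s.modelId === 'gpt-4o' && s.category === 'Knowledge' && s.verified
);
console.log(filtered.length); // → 1
```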
GET /api/v1/leaderboards/[category]

Get ranked leaderboard for a category (use 'all' for global)

Query Parameters

limit: Max results (1-500, default: 100)

Sample Response

{
  "category": "Coding",
  "categorySlug": "coding",
  "benchmarkCount": 24,
  "leaderboard": [
    {
      "rank": 1,
      "modelId": "claude-3-5-sonnet",
      "modelName": "Claude 3.5 Sonnet",
      "provider": "Anthropic",
      "average": 89.3,
      "coverage": 95.8,
      "scoreCount": 23
    }
  ]
}
GET /api/v1/export

Export all scores in JSON or CSV format for research workflows

Query Parameters

format: Output format, 'json' (default) or 'csv'
modelId: Filter by model ID
benchmarkId: Filter by benchmark ID
category: Filter by benchmark category
sourceId: Filter by data source

Sample Response

{
  "total": <dynamic>,
  "exportedAt": "2026-02-19T...",
  "filters": { ... },
  "scores": [ ... ],
  "attribution": { ... }
}
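For the CSV format, a minimal client-side parse might look like the sketch below. It assumes simple comma-separated rows with a header line and no quoted fields; the column names shown are illustrative, not the export's documented schema, and values come back as strings:

```javascript
// Parse a CSV export payload into row objects.
// In the browser: const csv = await (await fetch('/api/v1/export?format=csv')).text();
// Inline sample with hypothetical columns standing in for the real export.
const csv = `modelId,benchmarkId,score
gpt-4o,mmlu,88.7
claude-3-5-sonnet,mmlu,88.3`;

function parseCsv(text) {
  const [header, ...rows] = text.trim().split('\n');
  const cols = header.split(',');
  // Naive split: assumes no quoted or comma-containing fields
  return rows.map(line => {
    const values = line.split(',');
    return Object.fromEntries(cols.map((c, i) => [c, values[i]]));
  });
}

const rows = parseCsv(csv);
console.log(rows[0].modelId); // → 'gpt-4o'
```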

Static Slicing (Per-Model Files)

For better performance, individual model metadata is available as pre-generated JSON files:

Endpoint: /api/v1/models/[model-id].json

Size: <1 KB per model (vs ~800 KB for full dataset)

# Get metadata for a specific model
curl https://llm-registry.com/api/v1/models/claude-3-5-sonnet.json

# Get metadata for GPT-4o
curl https://llm-registry.com/api/v1/models/openai-gpt-4o.json

Benefit: reduces the payload for a single-model lookup from ~800 KB (full dataset) to under 1 KB.
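A small helper for building these per-model URLs in JavaScript, using the Base URL stated above (the model IDs are taken from the curl examples):

```javascript
// Build the per-model static file URL from a model ID.
const BASE = 'https://llm-registry.com/api/v1';
const modelUrl = (id) => `${BASE}/models/${encodeURIComponent(id)}.json`;

console.log(modelUrl('claude-3-5-sonnet'));
// → 'https://llm-registry.com/api/v1/models/claude-3-5-sonnet.json'

// In the browser or Node 18+:
// const model = await (await fetch(modelUrl('claude-3-5-sonnet'))).json();
```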

Rate Limiting (Cloudflare WAF)

Rate limiting is handled by Cloudflare WAF at the edge:

Limit: 100 requests/min per IP

Enforcement: Cloudflare WAF (edge)

Response: HTTP 429 Too Many Requests

Note: Cloudflare returns HTTP 429 directly. No rate limit headers are sent by the application.
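Since no rate limit or Retry-After headers are sent, a client has to pick its own backoff. A retry sketch using a fixed delay matching the 1-minute block (the demo uses a stub fetch, so no real requests are made):

```javascript
// Retry on HTTP 429 with a fixed backoff. backoffMs defaults to the
// 1-minute mitigation window described in this section.
async function fetchWithRetry(url, { retries = 2, backoffMs = 60_000, fetchFn = fetch } = {}) {
  for (let attempt = 0; ; attempt++) {
    const res = await fetchFn(url);
    if (res.status !== 429 || attempt >= retries) return res;
    await new Promise(r => setTimeout(r, backoffMs));
  }
}

// Demo with a stub fetch that returns 429 once, then 200 (no network call)
let calls = 0;
const stubFetch = async () => ({ status: calls++ === 0 ? 429 : 200 });

fetchWithRetry('https://llm-registry.com/api/v1/meta', { fetchFn: stubFetch, backoffMs: 10 })
  .then(res => console.log(res.status)); // → 200
```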

Cloudflare Dashboard Configuration

  1. Go to Security → WAF → Rate limiting rules
  2. Create rule: "API Rate Limiting"
  3. Expression: (http.request.uri.path contains "/api/v1/")
  4. Characteristics: ip.src
  5. Request limit: 100 per minute
  6. Mitigation: Block for 1 minute

Response Headers

Cache-Control: public, max-age=300 (responses are cached for 5 minutes)

Last-Modified: Date of the most recent score update
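Clients that poll the API can honor the 5-minute max-age themselves. A tiny in-memory cache sketch (the demo uses a stub fetch, so no real requests are made; cachedGet and its options are illustrative names, not part of this API):

```javascript
// Tiny client-side cache honoring the 5-minute max-age described above.
const TTL_MS = 300 * 1000; // public, max-age=300
const cache = new Map();

async function cachedGet(url, { now = Date.now, fetchFn = fetch } = {}) {
  const hit = cache.get(url);
  if (hit && now() - hit.at < TTL_MS) return hit.body; // still fresh
  const body = await (await fetchFn(url)).json();
  cache.set(url, { body, at: now() });
  return body;
}

// Demo with a stub (no network): the second call within the TTL hits the cache
let hits = 0;
const stub = async () => ({ json: async () => ({ n: ++hits }) });
cachedGet('/api/v1/meta', { fetchFn: stub })
  .then(() => cachedGet('/api/v1/meta', { fetchFn: stub }))
  .then(body => console.log(body.n, hits)); // → 1 1
```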

Attribution

All API responses include an attribution object. If you use this data, please credit:

  • Artificial Analysis — Scores marked with sourceId "artificial-analysis" are from artificialanalysis.ai
  • LLM Registry — Link back to llm-registry.com when displaying data