Beta version: Information might not be fully accurate. Please report any discrepancies.
API Reference
REST API for accessing models, benchmarks, scores, and leaderboard data programmatically.
Models: 1573
Benchmarks: 206
Categories: 17
Sources: 30
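The counts above are served by the /api/v1/meta endpoint documented below. A minimal sketch of reading them client-side (the sample payload here is illustrative, not live data; `summarizeCounts` is a hypothetical helper):

```javascript
// Summarize registry counts from a /api/v1/meta response.
// summarizeCounts is pure; the fetch call is shown only for context.
function summarizeCounts(meta) {
  const { models, benchmarks, categories, sources } = meta.counts;
  return `${models} models, ${benchmarks} benchmarks, ` +
         `${categories} categories, ${sources} sources`;
}

// Live usage (requires network access):
// const meta = await (await fetch('https://llm-registry.com/api/v1/meta')).json();
// console.log(summarizeCounts(meta));

// Illustrative payload matching the documented response shape:
const sampleMeta = {
  counts: { models: 1573, benchmarks: 206, categories: 17, sources: 30 },
};
console.log(summarizeCounts(sampleMeta));
```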
This API uses static export for optimal performance.
⚠️ Note: Query parameters for filtering are not supported server-side. Use client-side JavaScript to filter the returned data.
Since the API is static, filtering must be done client-side. Here are examples:
// Fetch all models
const response = await fetch('/api/v1/models');
const data = await response.json();
// Filter by provider
const openaiModels = data.models.filter(m => m.provider === 'OpenAI');
// Filter by family
const llamaModels = data.models.filter(m => m.family === 'llama');
// Filter by capability
const reasoningModels = data.models.filter(m =>
m.apiSupport?.reasoning === true
);
// Filter by open source
const openSourceModels = data.models.filter(m => m.isOpenSource);
// Combine filters
const openaiReasoningModels = data.models.filter(m =>
m.provider === 'OpenAI' && m.apiSupport?.reasoning
);
Base URL: https://llm-registry.com/api/v1
Format: JSON
Auth: None
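Because the `limit` and `offset` parameters on /api/v1/models are client-side only, pagination also has to happen after fetching the full list. A minimal sketch, assuming the documented response shape (`paginate` is a hypothetical helper, not part of the API):

```javascript
// Client-side pagination over the full models list, since the static
// API does not apply limit/offset server-side.
function paginate(models, { limit = 100, offset = 0 } = {}) {
  return {
    total: models.length,
    offset,
    limit,
    models: models.slice(offset, offset + limit),
  };
}

// Live usage (requires network access):
// const data = await (await fetch('https://llm-registry.com/api/v1/models')).json();
// const page2 = paginate(data.models, { limit: 50, offset: 50 });

// Illustrative data:
const all = Array.from({ length: 7 }, (_, i) => ({ id: `model-${i}` }));
const page = paginate(all, { limit: 3, offset: 3 });
console.log(page.models.map(m => m.id)); // ids of the second page of three
```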
/api/v1
API root with endpoint listing
Sample Response
{
"apiVersion": "v1",
"endpoints": { ... },
"attribution": { ... }
}
/api/v1/meta
Registry metadata including counts, categories, and latest score date
Sample Response
{
"apiVersion": "v1",
"generatedAt": "2026-02-19T...",
"latestScoreDate": "2026-02-16",
"counts": {
"models": <dynamic>,
"benchmarks": <dynamic>,
"categories": <dynamic>,
"sources": <dynamic>
},
"categories": [...],
"endpoints": [...]
}
/api/v1/models
List all models (static pre-generated response)
Query Parameters
limit - Results per page (1-500, default: 100). Client-side only.
offset - Pagination offset (default: 0). Client-side only.
Sample Response
{
"total": 150,
"offset": 0,
"limit": 100,
"models": [
{
"id": "claude-3-5-sonnet-20241022",
"name": "Claude 3.5 Sonnet",
"provider": "Anthropic",
"family": "claude-sonnet",
"status": "active",
"releaseDate": "2024-10-22",
"trainingCutoff": "2024-04",
"capabilities": ["text", "vision", "tools"],
"isOpenSource": false,
"specs": {
"contextWindow": 200000,
"maxOutputTokens": 64000,
"pricing": {
"input": 3.0,
"output": 15.0,
"cacheInput": 0.3,
"cacheOutput": 3.75
}
},
"apiSupport": {
"reasoning": false,
"vision": true,
"tools": true,
"structuredOutput": true,
"attachment": true
},
"coverage": 85.2
}
]
}
/api/v1/models/[id]
Get a single model by ID with full details and scores
Sample Response
{
"model": {
"id": "claude-3-5-sonnet",
"name": "Claude 3.5 Sonnet",
"provider": "Anthropic",
"scores": {
"mmlu": { "score": 88.7, "sourceId": "anthropic", ... },
...
}
}
}
/api/v1/benchmarks
List all benchmarks with optional category filter
Query Parameters
category - Filter by category (e.g., 'Coding', 'Reasoning')
Sample Response
{
"total": <dynamic>,
"categories": ["Coding", "Math", "Reasoning", ...],
"benchmarks": [
{
"id": "mmlu",
"name": "MMLU",
"category": "Knowledge",
"maxScore": 100,
"normalizeMethod": "max"
}
]
}
/api/v1/scores
Query scores with flexible filtering
Query Parameters
modelId - Filter by model ID
benchmarkId - Filter by benchmark ID
category - Filter by benchmark category
sourceId - Filter by data source
limit - Results per page (1-5000, default: 500)
offset - Pagination offset (default: 0)
Sample Response
{
"total": <dynamic>,
"scores": [
{
"modelId": "gpt-4o",
"modelName": "GPT-4o",
"benchmarkId": "mmlu",
"benchmarkName": "MMLU",
"category": "Knowledge",
"score": 88.7,
"normalizedScore": 88.7,
"verified": true,
"verificationLevel": "third_party",
"sourceId": "openai",
"asOfDate": "2024-05-13"
}
]
}
/api/v1/leaderboards/[category]
Get ranked leaderboard for a category (use 'all' for global)
Query Parameters
limit - Max results (1-500, default: 100)
Sample Response
{
"category": "Coding",
"categorySlug": "coding",
"benchmarkCount": 24,
"leaderboard": [
{
"rank": 1,
"modelId": "claude-3-5-sonnet",
"modelName": "Claude 3.5 Sonnet",
"provider": "Anthropic",
"average": 89.3,
"coverage": 95.8,
"scoreCount": 23
}
]
}
/api/v1/export
Export all scores in JSON or CSV format for research workflows
Query Parameters
format - Output format: 'json' (default) or 'csv'
modelId - Filter by model ID
benchmarkId - Filter by benchmark ID
category - Filter by benchmark category
sourceId - Filter by data source
Sample Response
{
"total": <dynamic>,
"exportedAt": "2026-02-19T...",
"filters": { ... },
"scores": [ ... ],
"attribution": { ... }
}
For better performance, individual model metadata is available as pre-generated JSON files:
Endpoint: /api/v1/models/[model-id].json
Size: <1 KB per model (vs ~800 KB for full dataset)
# Get metadata for a specific model
curl https://llm-registry.dev/api/v1/models/claude-3-5-sonnet.json

# Get metadata for GPT-4o
curl https://llm-registry.dev/api/v1/models/openai-gpt-4o.json
✅ Benefit: Reduces API payload from ~800 KB to <1 KB per model request!
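A sketch of using the per-model files from JavaScript, falling back to the full /api/v1/models list when a per-model file is missing. The fallback behavior and base URL are assumptions for illustration (the docs show both llm-registry.com and llm-registry.dev); `modelUrl` and `getModel` are hypothetical helpers:

```javascript
// Base URL is an assumption taken from the curl examples above.
const BASE = 'https://llm-registry.dev/api/v1';

// Build the URL for a pre-generated per-model JSON file.
function modelUrl(id) {
  return `${BASE}/models/${encodeURIComponent(id)}.json`;
}

// Try the <1 KB per-model file first; fall back to scanning the
// full models list if the per-model file is not found.
async function getModel(id) {
  const res = await fetch(modelUrl(id));
  if (res.ok) return res.json();
  const all = await (await fetch(`${BASE}/models`)).json();
  return all.models.find(m => m.id === id) ?? null;
}

console.log(modelUrl('claude-3-5-sonnet'));
```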
Rate limiting is handled by Cloudflare WAF at the edge:
Limit
100 requests/min per IP
Enforcement
Cloudflare WAF (edge)
Response
HTTP 429 Too Many Requests
Note: Cloudflare returns HTTP 429 directly. No rate limit headers are sent by the application.
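Since no Retry-After or rate-limit headers are sent, clients can only retry on a fixed schedule. A minimal sketch (the delay and attempt count are illustrative choices, not documented guidance; the injectable `fetchFn` exists only to make the helper testable):

```javascript
// Retry on HTTP 429 with a fixed delay. A 60 s delay matches the
// documented 1-minute block window, but is an assumption, not a
// guarantee from the API.
async function fetchWithRetry(url, { attempts = 3, delayMs = 60_000, fetchFn = fetch } = {}) {
  for (let i = 0; i < attempts; i++) {
    const res = await fetchFn(url);
    if (res.status !== 429) return res; // success or a non-rate-limit error
    if (i < attempts - 1) await new Promise(r => setTimeout(r, delayMs));
  }
  throw new Error(`Still rate limited after ${attempts} attempts: ${url}`);
}
```

Because the block lasts a full minute, retrying faster than that only extends the block; batching requests under 100/min per IP avoids the problem entirely.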
Cloudflare Dashboard Configuration
Expression: (http.request.uri.path contains "/api/v1/")
Characteristics: ip.src
Rate: 100 per minute
Action: Block for 1 minute
Cache-Control
Responses are cached for 5 minutes (public, max-age=300)
Last-Modified
Date of the most recent score update
All API responses include an attribution object. If you use this data, please credit: