The AI Cap Score is a composite metric (0-100) that ranks AI models, tools, and infrastructure by combining quality benchmarks, developer adoption, social momentum, community health, and development activity into a single comparable number.
Think of it like a market cap for AI tools, but instead of dollars, it measures real usage and impact signals.
| Signal | Weight | Source |
|---|---|---|
| Quality (Benchmarks) | 30% | LM Arena, MMLU, SWE-bench, MTEB |
| Developer Adoption | 25% | GitHub stars, PyPI/npm/HF downloads |
| Social Momentum | 20% | Hacker News, Reddit mentions |
| Community Health | 15% | Contributors, forks, HF likes |
| Development Activity | 10% | Recent commits, release recency |
For proprietary models without public GitHub repos (GPT-5, Gemini, etc.), the community and development-activity signals are unavailable, so their weight is redistributed: Quality 45%, Social 30%, Adoption 25%.
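The weighted blend above can be sketched as a small function. This is a minimal illustration, not the production scoring code: the type and function names are hypothetical, and it assumes each signal has already been normalized to 0-100 within its category.

```typescript
// Per-signal scores, each assumed pre-normalized to 0-100 within its category.
type Signals = {
  quality: number;   // benchmarks (LM Arena, MMLU, SWE-bench, MTEB)
  adoption: number;  // GitHub stars, PyPI/npm/HF downloads
  social: number;    // Hacker News, Reddit mentions
  community: number; // contributors, forks, HF likes
  activity: number;  // recent commits, release recency
};

// Default weights from the table above.
const OPEN_WEIGHTS: Record<keyof Signals, number> = {
  quality: 0.30, adoption: 0.25, social: 0.20, community: 0.15, activity: 0.10,
};

// Adjusted weights for proprietary models with no public repo: the
// community and activity signals drop out, and their weight moves to
// quality, social, and adoption.
const PROPRIETARY_WEIGHTS: Record<keyof Signals, number> = {
  quality: 0.45, adoption: 0.25, social: 0.30, community: 0, activity: 0,
};

// Weighted sum of signals, rounded to one decimal place.
function capScore(signals: Signals, proprietary = false): number {
  const w = proprietary ? PROPRIETARY_WEIGHTS : OPEN_WEIGHTS;
  const total = (Object.keys(w) as (keyof Signals)[])
    .reduce((sum, k) => sum + w[k] * signals[k], 0);
  return Math.round(total * 10) / 10;
}
```

For example, `capScore({quality: 90, adoption: 80, social: 70, community: 60, activity: 50})` blends the five signals into a single 0-100 number under the default weights.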
All signals are normalized within their category using log-scaling. This means a model is ranked against peers in the same category (LLMs vs LLMs, Code Tools vs Code Tools), not against unrelated tools. A score of 85 in the LLM category means that model is in the top tier for LLMs specifically.
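The within-category log-scaling can be sketched as follows. This is an illustrative assumption about the normalization, not the exact production formula: raw counts like stars and downloads span orders of magnitude, so they are log-transformed and then min-max scaled to 0-100 against peers in the same category.

```typescript
// Normalize raw counts within one category to 0-100 using log scaling.
// The +1 guards against zero counts (log10(0) is -Infinity).
function normalizeLog(values: number[]): number[] {
  const logs = values.map((v) => Math.log10(v + 1));
  const min = Math.min(...logs);
  const max = Math.max(...logs);
  if (max === min) return values.map(() => 50); // all peers tied
  return logs.map((l) => ((l - min) / (max - min)) * 100);
}
```

Because min and max are taken over the category's own members, a tool is scored only against its direct peers, and a 10x gap in raw downloads becomes a fixed step on the log scale rather than dominating the ranking.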
All data comes from the free, public APIs listed in the sources column above.
We track 9 categories across the AI ecosystem: LLMs, Image Gen, Video Gen, Audio/TTS, Code Tools, Search/RAG, Agents/Frameworks, Infrastructure, and Embeddings. Each category has its own normalized scoring, so an 80 in Code Tools reflects the same relative standing as an 80 in LLMs.
Built with Next.js, SQLite, and free public APIs. Inspired by Balaji's call for an "AI Market Cap" tracker.