Key People in AI
The CEOs, researchers, and thought leaders shaping AI's future
~50 key decision makers (industry-shaping)
$1M+ top researcher pay (reported salaries)
2 Nobel Prizes in 2024 (Physics + Chemistry)
High executive turnover (2024-2025)
AI power is concentrated. A small group of CEOs, researchers, and investors make decisions that affect billions. Understanding who they are and what they believe is essential to understanding where AI is headed.
The Power Structure
AI's direction is shaped by a surprisingly small group: the CEOs of major labs, a handful of key researchers, and the executives at tech giants funding the revolution. Their beliefs about AI safety, timelines, and strategy drive trillions in investment.
The Revolving Door: 2024-2025 saw unprecedented movement. Ilya Sutskever left OpenAI to found SSI, Mira Murati departed, and key researchers moved between labs. This churn reflects both opportunity and tension in the field.
Lab CEOs & Founders
Sam Altman
OpenAI
CEO, OpenAI
Survived a board coup in 2023. Leading OpenAI's transition to a for-profit structure and driving the Stargate infrastructure project. The face of consumer AI.
Dario Amodei
Anthropic
CEO, Anthropic
Ex-OpenAI VP who left over safety concerns. Built Anthropic as the "safety-first" lab. Constitutional AI pioneer. Thoughtful voice on responsible scaling.
Demis Hassabis
DeepMind
CEO, Google DeepMind
Nobel Prize in Chemistry 2024 for AlphaFold. Leads Google's combined AI research. Chess prodigy turned AI pioneer. Focused on beneficial AGI.
Jensen Huang
NVIDIA
CEO, NVIDIA
Built the company that powers AI. Leather jacket icon. NVIDIA's market cap hit $3T+. Controls the GPU bottleneck that constrains the entire industry.
Key Researchers
Ilya Sutskever
SSI
Co-founder, Safe Superintelligence Inc.
OpenAI co-founder who left in 2024 after leading the Superalignment team. Now building SSI, focused purely on safe superintelligence. Widely regarded as one of the most influential researchers in AI.
Geoffrey Hinton
Independent
"Godfather of Deep Learning"
Nobel Prize in Physics 2024. Left Google in 2023 to speak freely about AI risks. Backpropagation pioneer. Now focuses on raising awareness of existential risk.
Yann LeCun
Meta
Chief AI Scientist, Meta
Turing Award winner. Vocal advocate for open source AI. Skeptic of existential risk claims. Leads Meta's AI research. Influential voice on X.
Andrej Karpathy
Independent
AI Educator & Researcher
Ex-Tesla AI Director and ex-OpenAI researcher. Now an independent educator whose YouTube tutorials reach millions. One of the most trusted technical voices among developers.
AI Voices & Perspectives
The AI community divides into camps with different views on risk, speed, and governance.
Safety/Risk Focused
Hinton, Bengio, Sutskever, and Connor Leahy emphasize existential and catastrophic risks and advocate for caution and coordination.
Accelerationists (e/acc)
Marc Andreessen and some tech VCs argue that AI development should accelerate, that the benefits outweigh the risks, and that regulation is harmful.
Pragmatists
Dario Amodei and Satya Nadella acknowledge the risks but believe AI can be developed responsibly with proper guardrails.
Open Source Advocates
Yann LeCun and Meta's AI team believe open source democratizes AI and prevents the concentration of power.
The Talent Market
AI talent remains the scarcest resource. Top researchers command pay packages that rival those of executives.
Compensation Ranges (Reported)
Top researchers: $1M-$10M+
Senior ML engineers: $500K-$1M
ML engineers: $300K-$600K
New grads with an ML PhD: $200K-$400K
Stock and signing bonuses add significantly, and poaching between labs is aggressive.
✓ Key Takeaways
✓ ~50 people make industry-shaping decisions
✓ The 2024 Nobel Prizes in Physics and Chemistry went to AI researchers
✓ High executive turnover in 2024-2025
✓ Community divided on risk and speed
✓ Top talent earns $1M+ annually
✓ Follow researchers on X for real-time insights