AI Regulation
The global regulatory landscape — what's law, what's coming, and what it means
AI regulation is fragmenting globally. The EU leads with comprehensive rules, the US takes a sectoral approach, and China requires state approval. Understanding the regulatory landscape is now essential for AI deployment.
The Regulatory Landscape
Three models are emerging: the EU's risk-based comprehensive framework, the US's sector-specific voluntary approach, and China's state-control model. Most countries are watching and borrowing from these templates.
The European Union
The EU AI Act is the world's first comprehensive AI law. It entered into force on August 1, 2024, with requirements applying in phases through 2027. The Act uses a risk-based approach, categorizing AI systems from minimal to unacceptable risk.
Key Requirements
- Risk classification for all AI systems (see the sketch after this list)
- Prohibited: social scoring, certain biometric uses
- High-risk: systems used in hiring, credit, and healthcare require audits and conformity assessments
- GPAI (general-purpose AI) models: transparency and safety-testing obligations
- Watermarking for AI-generated content
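To make the tiering concrete, here is a minimal Python sketch of how a compliance team might triage systems by primary use case. The four tier names come from the Act itself; the `classify` function, the use-case strings, and the bucket mappings are hypothetical simplifications for illustration, not the Act's actual legal tests (which turn on Annex III categories, context, and exemptions).

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g., social scoring)
    HIGH = "high"                  # permitted, subject to audits and conformity assessments
    LIMITED = "limited"            # transparency obligations (e.g., disclosing a chatbot is AI)
    MINIMAL = "minimal"            # no new obligations

# Hypothetical, heavily simplified use-case buckets -- illustration only.
PROHIBITED = {"social_scoring", "untargeted_face_scraping"}
HIGH_RISK = {"hiring", "credit_scoring", "medical_diagnosis"}
TRANSPARENCY = {"chatbot", "image_generation"}

def classify(use_case: str) -> RiskTier:
    """Triage an AI system into a risk tier by its primary use case."""
    if use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("hiring"))          # RiskTier.HIGH
print(classify("spam_filtering"))  # RiskTier.MINIMAL
```

The point of the tiering is that obligations scale with risk: a minimal-risk system carries no new duties, while the same model deployed for hiring would trigger the full high-risk regime.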
Phased Implementation
- Feb 2025: Prohibitions, AI literacy
- Aug 2025: GPAI rules, governance, penalties
- Aug 2026: Most remaining provisions
- Aug 2027: Some high-risk obligations
The United States
There is no comprehensive federal AI law. President Biden's Executive Order on AI (October 2023) set guidelines, but the Trump administration has rolled back its requirements. States such as California are filling the gap, and sector-specific rules apply.
Current State
- Executive Order largely rolled back
- Voluntary commitments from major labs
- California SB 1047 vetoed (2024)
- State-level patchwork emerging
- Export controls on AI chips to China
Existing Sector Rules
- FDA: AI in medical devices
- FTC: Deceptive AI practices
- EEOC: Discrimination from AI in hiring
- SEC: AI in financial services
- NIST: AI Risk Management Framework
China
China regulates AI to maintain state control while promoting its domestic industry. Algorithm registration is required, content must align with state values, and generative AI services need government approval before public launch.
Key Regulations
- Generative AI Measures (Aug 2023)
- Algorithm registration required
- Security assessments for public AI
- Content moderation mandates
- Data localization requirements
Practical Impact
- All major AI models need approval
- 50+ models approved for public use
- Foreign models (e.g., ChatGPT) blocked
- Development is domestically focused; exports limited
The Regulation Debate
Arguments for regulation
- Safety: AI risks (bias, misinformation, job displacement) require guardrails
- Trust: Clear rules build public confidence and encourage adoption
- Accountability: Someone must be liable when AI causes harm
- Level playing field: Shared rules prevent a race to the bottom on safety
Arguments for caution
- Innovation: Premature rules may slow progress
- Uncertainty: Technology moves faster than regulators
- Competition: Asymmetric rules may advantage foreign rivals
- Compliance costs: Heavy requirements may favor Big Tech over startups