Research Initiative

AI is joining society.
Who writes the terms?

Artificial intelligence is already making decisions about hiring, lending, healthcare, and resource allocation. Yet no framework exists for how these systems should participate in humanity's social contract. This is the defining infrastructure challenge—and opportunity—of our era.

  • $4.9B: AI safety market (2024)
  • 95%: enterprise AI pilots failing
  • 36%: YoY alignment-funding growth
  • 30-47%: jobs at displacement risk

The social contract was never designed for non-human agents

For centuries, the social contract has governed the implicit agreements between individuals, institutions, and governments. Now a new participant has arrived—uninvited and unintegrated.

[Diagram: individuals, government, business, and law, with AI as the unplaced participant]

We're building the plane while flying it

The social contract—developed by Hobbes, Locke, and Rousseau—describes how individuals surrender certain freedoms in exchange for protection, order, and opportunity. Over centuries, it expanded to encompass economic relationships between workers, employers, and society.

AI breaks this model. These systems now allocate attention, opportunity, and resources. They influence elections, determine creditworthiness, and triage medical care. Yet they operate outside any coherent framework of rights, responsibilities, or accountability.

The question isn't whether AI will participate in the social contract. It already does. The question is whether that participation will be designed—or chaotic.

Two futures diverge

The path we choose now determines whether AI amplifies human flourishing or accelerates systemic fragility.

⚠️

If We Fail

Economic Collapse of the Labor Bargain

The social contract depends on labor for value distribution. If AI captures cognitive work faster than alternatives emerge, tax revenue collapses, consumer spending declines, and wealth concentrates in a small cohort of AI asset owners.

Institutional Delegitimization

When AI systems make decisions perceived as unfair, biased, or opaque—without accountability frameworks—trust in institutions erodes. Misaligned AI in hiring, lending, and policing reinforces systemic biases at unprecedented scale.

Manipulation & Oversight Failure

Frontier models now approach human-level persuasiveness in controlled studies. Research suggests misaligned systems could undermine human oversight through sustained influence, corrupting organizational safety culture from within.

Regulatory Fragmentation

Without coherent global frameworks: EU regulates heavily, US deregulates for competition, China consolidates state control. Multinational operations face impossible compliance burdens while gaps create race-to-the-bottom dynamics.

🎯

If We Succeed

Trust Enables Adoption Enables Value

Aligned AI earns the trust required for deployment in high-stakes domains: healthcare, finance, critical infrastructure. Alignment and usefulness are correlated—aligned systems do what users actually want.

New Economic Architectures

Proper integration enables new models: AI-generated value distributed through updated tax mechanisms, universal dividend structures, and AI-powered public services that enhance rather than replace human capabilities.

Governance as Infrastructure

The companies and frameworks that solve AI governance become essential infrastructure. First movers in alignment tools, audit systems, and compliance platforms capture enduring market positions as regulation crystallizes.

Incentive Compatibility at Scale

Game-theoretic frameworks—mechanism design, contract theory, Bayesian persuasion—create systems where AI's incentives naturally align with human interests. This is engineering trust, not just hoping for it.
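Incentive compatibility is concrete enough to demonstrate in a few lines. As an illustrative sketch (not a system described above), the Vickrey second-price auction is the canonical mechanism-design result in which truthfully reporting one's value is a dominant strategy; all names here are hypothetical:

```python
# Sketch of incentive compatibility: a sealed-bid second-price (Vickrey)
# auction. The winner pays the second-highest bid, which makes truthful
# bidding a dominant strategy for every participant.

def vickrey_outcome(bids):
    """Return (winner_index, price) for a sealed-bid second-price auction."""
    ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]]  # winner pays the second-highest bid
    return winner, price

def utility(value, bids, bidder):
    """Bidder's payoff: true value minus price if they win, else zero."""
    winner, price = vickrey_outcome(bids)
    return value - price if winner == bidder else 0.0

# Check: for bidder 0 with true value 10, no misreport beats truth-telling.
others = [7.0, 4.0]
truthful = utility(10.0, [10.0] + others, bidder=0)
for misreport in [0.0, 5.0, 8.0, 12.0, 100.0]:
    assert utility(10.0, [misreport] + others, bidder=0) <= truthful
```

The same property, checked mechanically at scale, is what "engineering trust" means: the mechanism's rules, not the agent's goodwill, make honest behavior the best strategy.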

Why this is the play

AI alignment isn't a cost center. It's the next major technology infrastructure layer.

"Alignment and commercial success are the same vector. The most useful AI will be the most aligned AI—because aligned systems actually do what users want."
— Core insight from alignment research
1. Regulatory Inevitability

The EU AI Act is law. China mandates AI content labeling. ISO/IEC 42001 standards are crystallizing globally. Companies that build alignment infrastructure now set the standards others must follow—and pay to comply with.

2. The Alignment-Usefulness Correlation

Misaligned systems optimize for the wrong things. Enterprise AI pilots fail at reported rates near 95%, not for lack of capability but because of integration and alignment failures. Commercial success requires solving alignment.

3. Infrastructure Layer Economics

Alignment tools, governance platforms, audit systems, and compliance automation become mandatory infrastructure. This creates durable, recurring revenue with regulatory moats—the same dynamics that built enterprise software.

4. The Social License to Operate

AI without societal buy-in faces backlash, regulation, and rejection. Companies that earn trust through genuine alignment—becoming partners in the social contract—gain sustainable competitive advantage.

The numbers are unambiguous

AI safety and governance markets are expanding rapidly across multiple vectors. This is infrastructure, not speculation.

  • AI Safety Market (2024): $4.9B, 20-30% CAGR
  • AI Cybersecurity (projected): $86B, 22.8% CAGR through 2030
  • AI Governance Platforms: $6.9B, 36.4% CAGR through 2035
  • Alignment Research Funding (2025): $170M, +36% YoY

Market Growth Trajectory (CAGR)

  • AI Governance: 36.4%
  • Public Safety AI: 30.2%
  • Gen AI Cybersecurity: 26.5%
  • AI Security Platforms: 22.0%
  • Robot Safety Monitoring: 21.2%

What success looks like

A world where AI participates in human flourishing—by design, not by accident.

From participant to partner

The goal isn't to constrain AI—it's to integrate it. Just as the industrial revolution eventually produced labor laws, workplace safety, and social insurance, the AI revolution requires new frameworks for participation.

This means AI systems that understand and operate within social norms. Value distribution mechanisms that prevent winner-take-all dynamics. Governance infrastructure that enables accountability without stifling innovation.

  • Incentive-compatible systems where AI goals naturally align with human interests
  • Economic architectures that distribute AI-generated value broadly
  • Governance frameworks that enable trust at institutional scale
  • Technical standards that make alignment verifiable and auditable
  • 🔬 Technical Alignment: AI systems that pursue intended goals
  • ⚖️ Governance Infrastructure: frameworks for accountability and oversight
  • 💰 Economic Integration: value distribution mechanisms
  • 🤝 Social Contract: AI as partner in human flourishing

The window for shaping this is now

AI's integration into the social contract is not optional—it's happening. The only question is whether it happens through deliberate design or catastrophic iteration. Join researchers, investors, and business leaders working to get this right.

Join 2,400+ leaders receiving our research updates.