Who We Are
We are a non-profit research organization building a foundational series of sociotechnical reports on key AI scenarios and governance recommendations, and conducting AI awareness efforts to inform the general public.
Why We Exist
The development of AI technology presents both great opportunities and grave perils for humanity. We're at a watershed moment in history: plausibly, human-level AI will arrive within 10 years. Shortly after, we'll be in a world powered by intelligence beyond human capabilities.
Hence, we need to answer some critical questions: What will the emergence of advanced AI look like? How will it impact human society? What constitutes wise decision-making regarding advanced AI? How can we help society make better decisions?
By studying these questions, we hope to bring these risks and their solutions to the forefront of societal awareness.

Our Initiatives For Mitigating AI Risk
Scenario Research
Scenario planning is a research method for modeling future outcomes of AI development. It involves mapping possible future pathways, identifying the key parameters that shape their trajectories, and developing strategies to mitigate their negative consequences.
We’re building a foundational set of technical reports modeling plausible and highly consequential AI scenarios.
Selected projects:

AI Clarity: An Initial Research Agenda
Our research method centers on scenario planning, an analytical tool for exploring and preparing for the landscape of possible outcomes in domains defined by uncertainty.

Timelines to Transformative AI: an investigation
The timeline for the arrival of advanced AI is a key consideration for AI safety. We investigate a range of notable recent predictions of the timeline to transformative AI.
Governance Research
Given a set of harmful scenarios, which governance policies most effectively reduce the likelihood that they occur? What are the feasibility, effectiveness, and negative externalities of enacting such policies?
We're conducting research into key governance strategies and recommendations aimed at mitigating risks detailed by our AI scenario research.
Selected projects:

Evaluating an AI Chip Registration Policy
This report evaluates the feasibility and potential impacts of a US policy requiring the registration and tracking of high-end AI chips.

2024 State of the AI Regulatory Landscape
A primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current state of AI regulation.
AI Awareness
It's essential to effectively disseminate up-to-date research on the potential dangers posed by AI technologies and the governance strategies that can help us manage those dangers responsibly.
We're informing the general public, policymakers, and the AI safety community about existential risks through initiatives that widely distribute experts' findings.
Selected projects:

Oxford Handbook of AI Governance
Scholars from a wide variety of fields and cultural backgrounds come together in this handbook to offer a global perspective on AI governance.

Building a God
Christopher DiCarlo’s upcoming book “Building a God” explores the consequences of humanity’s progress toward developing an agentic, superintelligent being via machine learning.

All Thinks Considered
Christopher DiCarlo and his guests navigate the complexities of critical thinking, foster open dialogue, and explore diverse perspectives on the pressing issues of our time.
Collaborate With Us
We're currently building partnerships with external researchers, providing strategic advice to policymakers and private organizations, and raising funding to accelerate our research initiatives.