Who We Are
We are a non-profit research organization building a foundational series of sociotechnical reports on key AI scenarios and governance recommendations, and conducting AI awareness efforts to inform the general public.
Our Team Members Have Worked With:
Our Initiatives For Mitigating AI Risk
Scenario Research
Scenario planning is a research method aimed at modeling future outcomes of AI development. It involves mapping possible future pathways, identifying the key parameters that shape their trajectories, and developing strategies to mitigate negative consequences.
We’re building a foundational set of technical reports modeling plausible and highly consequential AI scenarios.
Example questions we're answering
Which negative societal outcomes are most likely to result from the acceleration of AI capabilities?
What are the possible paths of development for AI technology, and how likely is each? Which factors will determine the path that ultimately unfolds?
What types of strategies might best prevent these global risks or mitigate their likelihood?
Governance Research
Given a set of harmful scenarios, what are the most effective governance policies to reduce the likelihood that they occur? What are the feasibility, effectiveness, and negative externalities of enacting such policies?
We're conducting research into key governance strategies and recommendations aimed at mitigating risks detailed by our AI scenario research.
Example questions we're answering
What does the political landscape look like for the U.S. to require the registration and transfer reporting of key AI chips?
What incident reporting mechanisms are necessary to allow regulatory bodies to monitor the impact of AI systems in real-world settings?
What domain-specific safety assessments are critical to identify dangerous capabilities of AI models?
AI Awareness
It's essential to effectively disseminate up-to-date research on the potential dangers from AI technologies and the governance strategies that can allow us to manage these dangers responsibly.
We're informing the general public, policymakers, and AI safety community about existential risks through initiatives that widely distribute the findings of experts.
Selected projects:
Oxford Handbook of AI Governance
Scholars from a wide range of fields and cultural backgrounds come together in this handbook to offer a global perspective on AI governance.
Building a God
Christopher DiCarlo’s upcoming book “Building a God” explores the consequences of humanity’s progress toward developing an agentic, superintelligent being via machine learning.
All Thinks Considered
Christopher DiCarlo and his guests navigate the complexities of critical thinking, fostering open dialogue, and exploring diverse perspectives on the pressing issues of our time.
Collaborate With Us
We're currently partnering with external researchers, providing strategic advice to policymakers and private organizations, and raising funding to accelerate our research initiatives.