About us

Working towards a safe and flourishing future

Our mission, our history, and the projects we support in the AI safety community

How can we ensure that human civilization survives and thrives through the AI revolution?

The development of frontier artificial intelligence presents both great opportunities and perils for humanity.

Already, we’re seeing the massive benefits of these technologies: advancements in research and development, automation of daily tasks, powerful visual tools for creatives, and much more.

However, we have yet to confront the most treacherous downsides of this nascent technology: capabilities so dangerous that they could lead to catastrophic outcomes such as civilizational collapse, geopolitical warfare, or pandemics.

At Convergence, we believe the most critical task on Earth today is to steer the evolution of AI technology in a direction that ensures it continues to advance human productivity and well-being, while reducing the likelihood of triggering catastrophic outcomes.

Convergence exists to develop and promote insights on how to create a thriving future by minimizing the existential risk from AI.

We’re building a cutting-edge research institution focused on answering the following questions:

Which scenarios for the trajectory of AI are the most likely and the most neglected, and how should we model them?

Which governance strategies should humanity focus on to ensure we arrive at our desired scenario?

How can we best raise awareness of critical governance strategies and advocate for change, particularly among the AI safety community, policymakers, and the general public?

Our research is deeply interdisciplinary in nature and draws upon insights and methods from philosophy, computer science, mathematics, sociology, cognitive science, and psychology.

To learn about our philosophy around AI safety and how we intend to structure our work to answer these questions, you can take a look at our Theory of Change.

History

Convergence was formed as a vehicle for research on existential risk reduction. In 2016, Justin Shovelain crystallized his research project on x-risk strategy under the name “Convergence”. In 2017, David Kristoffersson pivoted towards strategy research and began collaborating with Justin, and together they quickly formulated the vision for Convergence as a world-class research organization.

Over the next four years, Convergence worked steadily on an internal corpus of research modeling technological, societal, and intervention mechanics. We published our foundational strategic research primarily via the EA Forum and LessWrong, covering topics such as:

In 2021, Convergence received substantial funding and began scaling into a research organization centered on reducing AI existential risk. Over the past two years, Convergence has brought on talented experts in ethics, governance policy, and hardware research to develop an interdisciplinary research agenda targeting critically neglected areas of AI safety research.

Advising

Convergence serves as an advisory organization to a number of affiliated organizations developing next-generation technologies.

For these organizations, we provide strategic and technical feedback on practical concerns around modern AI technologies, identifying potential risk factors affecting each organization and proposing effective mitigation strategies.

Organizations that we advise directly

Fiscal Sponsorship

We’ve taken on a fiscal sponsorship role for several independent organizations. This includes extending our non-profit status to these organizations, providing financial management assistance, and handling legal and administrative tasks so that these teams can conduct their research more effectively.

If you’re interested in receiving fiscal sponsorship from us, contact us for more information on how we might be able to support your team.

Organizations that we provide fiscal sponsorship to

Newsletter

Get research updates from Convergence

Leave us your contact info and we’ll share our latest research, partnerships, and projects as they're released in early 2024.

By subscribing to our list, you agree with our Terms and Conditions and Data and Privacy policies.
